Biofeedback is today a recognized treatment method for a number of physical and psychological problems. Experienced clinicians often achieve good results in these areas, and their success largely builds on many years of experience, often spanning thousands of treated patients. Unfortunately, many of the areas where biofeedback is used are very complex, e.g. the diagnosis and treatment of stress. Less experienced clinicians may even have difficulty classifying the patient correctly in the first place. Often only a few experts are available to assist less experienced clinicians. To reduce this problem we propose a computer-assisted biofeedback system that helps with classification, parameter setting and biofeedback training. By adopting a case-based approach in a computer-based biofeedback system, decision support can be offered to less experienced clinicians and a second opinion to experts. We explore how such a system may be designed and validate the approach in the area of stress, where the system assists in classification, parameter setting and, finally, training. In a case study we show that the case-based biofeedback system outperforms novice clinicians, using a case library of cases authorized by an expert.
The 500 MW Indian pool-type Prototype Fast Breeder Reactor (PFBR) is provided with two independent and diverse Decay Heat Removal (DHR) systems, viz. the Operating Grade Decay Heat Removal System (OGDHRS) and the Safety Grade Decay Heat Removal System (SGDHRS). OGDHRS utilizes the secondary sodium loops and the Steam-Water System with special decay heat removal condensers for the DHR function. The unreliability of this system is of the order of 0.1 to 0.01. The safety requirements of the present generation of fast reactors are very high; specifically, for the DHR function the failure frequency should be less than 1E-7/ry. Therefore, a passive SGDHR system using four completely independent thermosiphon loops operating in natural convection mode is provided to ensure adequate core cooling for all Design Basis Events. The very high reliability requirement for the DHR function is achieved mainly with the help of SGDHRS. This paper presents the reliability analysis of the SGDHR system. The analysis is performed by the fault tree method using the "CRAFT" software developed at the Indira Gandhi Centre for Atomic Research. This software has special features for compact representation and CCF analysis of the high-redundancy safety systems encountered in nuclear reactors. Common Cause Failures (CCF) are evaluated by the beta-factor method. The reliability target for SGDHRS, derived from the DHR reliability requirement and the ultimate number of demands per year on SGDHRS (7/y), is a failure frequency <= 1.4E-8/de. Since the analysis shows that the unreliability of SGDHRS with identical loops is 5.2E-6/de, dominated by the leak rates of components such as the AHX, DHX and the sodium dump and isolation valves, options with diversity measures in important components were studied. The failure probability of SGDHRS for a design consisting of two types of diverse loops (diverse AHX, DHX and sodium dump and isolation valves) is 2.1E-8/de, which practically meets the reliability requirement.
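The per-demand target quoted above follows from dividing the per-reactor-year requirement by the number of demands per year; a sketch of the arithmetic:

```latex
\lambda_{\mathrm{DHR}} \le 10^{-7}\ \text{per reactor-year},
\qquad n_{\mathrm{demands}} = 7\ \text{per year}
\;\Rightarrow\;
P_{\mathrm{fail}} \le \frac{10^{-7}}{7} \approx 1.4\times 10^{-8}\ \text{per demand}
```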
In full electric (EV) or hybrid electric (HEV) vehicles, onboard communication is a crucial issue that can benefit from reliable and robust interaction among the embedded units (ECUs) providing, among other functions, power and battery management, services, accessories and supervision; an updated global view of the system under control (the car) can improve the quality of control, reliability, safety and comfort. Power Line Communication (PLC) technology, which uses the DC power line as the physical medium to exchange messages according to standard protocols, is a candidate for replacing current wired solutions by substantially reducing the cabling burden. The present paper aims at i) reviewing PLC technology for DC buses in the automotive sector; ii) defining the requirements of a reliable communication system within a car environment, with special attention given to the data exchange needed by the more critical systems in EVs and HEVs, namely the control of electrical actuators for traction; and iii) describing a benchmark for testing the data transfer characteristics and performance, including details of the proposed protocols and architectures, and presenting experimental results on data channel characterization and transmission.
Today, everyday life for many people contains many situations that may trigger stress or result in an individual living with an elevated stress level for long periods. High levels of stress may cause serious health problems. It is known that respiratory rate is an important factor that can be used in diagnosis and biofeedback training, but available methods for measuring respiratory rate are not especially suitable for home and office use. The aim of this project is to develop a portable sensor system that can measure the stress level during everyday situations, e.g. at home and in the work environment, and can help the person to change behaviour and decrease the stress level. The sensor explored is a finger temperature sensor. Clinical studies show that finger temperature, in general, decreases with stress; however, this change pattern shows large individual variations. Diagnosing stress level from finger temperature is difficult even for clinical experts, so a computer-based stress diagnosis system is important. In this system, case-based reasoning and fuzzy logic have been applied to assist in stress diagnosis and biofeedback treatment utilizing the finger temperature sensor signal. An evaluation of the system with an expert in stress diagnosis shows promising results.
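A minimal sketch of how fuzzy logic can be applied to the finger-temperature signal: classify the slope of the temperature with triangular membership functions. The linguistic labels and all thresholds below are illustrative assumptions, not the clinically calibrated values of the system described in the paper:

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def stress_degrees(slope_deg_c_per_min):
    """Map a finger-temperature slope (deg C per minute) to fuzzy degrees
    of membership. A falling temperature suggests stress, a rising one
    relaxation. Thresholds are illustrative, not clinically calibrated."""
    return {
        "stressed": triangular(slope_deg_c_per_min, -1.0, -0.5, 0.0),
        "neutral":  triangular(slope_deg_c_per_min, -0.2, 0.0, 0.2),
        "relaxed":  triangular(slope_deg_c_per_min, 0.0, 0.5, 1.0),
    }
```

A case-based reasoner could then retrieve similar past cases by comparing these fuzzy degrees rather than raw temperatures, smoothing over the large individual variations mentioned above.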
The Hierarchical Scheduling Framework (HSF) has been introduced as a design-time framework enabling compositional schedulability analysis of embedded software systems with real-time properties. However, supporting resource sharing in HSF is a major challenge, since it increases the amount of CPU resources required to guarantee the schedulability of hard real-time tasks and decreases composability at the system level. In this paper, we discuss and identify the key parameters of the bounded-delay resource open environment (BROE) server, a compositional framework supporting global resource sharing, that have a great effect on how the framework utilizes CPU resources. Furthermore, we provide an algorithm of pseudo-polynomial complexity to evaluate the optimal setting for the BROE server. In addition, we provide a polynomial-time approximation algorithm for generating a near-optimal setting for the BROE server. The performance of the BROE server, as well as the efficiency of the approximation algorithm, is evaluated by means of simulation analysis.
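The BROE server builds on the bounded-delay resource model, in which a component receives bandwidth alpha with a maximum service delay delta. A minimal sketch of how a candidate (alpha, delta) setting can be checked against a component's demand; the implicit-deadline task model, the EDF demand bound and all parameter values are illustrative assumptions, not the paper's algorithm:

```python
import math

def sbf(t, alpha, delta):
    """Supply bound function of a bounded-delay resource (alpha, delta)."""
    return max(0.0, alpha * (t - delta))

def dbf(tasks, t):
    """EDF demand bound function for implicit-deadline sporadic tasks (C, T)."""
    return sum(math.floor(t / period) * wcet for (wcet, period) in tasks)

def setting_is_feasible(tasks, alpha, delta, horizon):
    """A setting is feasible if supply covers demand at every job deadline
    up to the chosen analysis horizon."""
    deadlines = sorted({k * period for (_, period) in tasks
                        for k in range(1, horizon // period + 1)})
    return all(dbf(tasks, t) <= sbf(t, alpha, delta) for t in deadlines)
```

An optimal-setting search can scan candidate (alpha, delta) pairs with a test of this kind, which is where the pseudo-polynomial cost comes from; an approximation trades precision in this scan for polynomial time.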
Supporting resource sharing in multiprocessor architectures is one of the problems that may limit the benefits achievable with this type of architecture. Many approaches and algorithms have been proposed to support resource sharing; however, most of them either impose high blocking times on tasks or require a large amount of memory. In this paper we investigate the possibility of combining lock-based approaches with wait-free approaches (using multiple buffers) in order to decrease both the blocking times that may affect the schedulability of tasks and the required memory. To achieve this, we propose a solution based on evaluating the maximum allowed blocking time of each task according to the schedulability analysis, and then finding the minimum memory requirement for each resource such that the blocking time of each task stays below its maximum allowed blocking time.
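A minimal sketch of the trade-off described above, under a deliberately simplified cost model: a lock-based resource needs one buffer but imposes its longest critical section as blocking, while a wait-free version removes the blocking at the cost of several buffers. The data model, names and per-resource decision rule are illustrative assumptions, far simpler than the paper's analysis:

```python
def choose_sharing(resources, allowed_blocking):
    """For each resource, keep the lock-based version (one buffer) when its
    worst-case blocking fits the tightest allowance among the tasks using it;
    otherwise switch to a wait-free version (multiple buffers, no blocking).
    resources: name -> (worst_case_blocking, buffer_size, wait_free_buffers)
    allowed_blocking: name -> tightest allowed blocking among affected tasks.
    Returns (plan, total_memory)."""
    plan, memory = {}, 0
    for name, (blocking, buf_size, wf_buffers) in resources.items():
        if blocking <= allowed_blocking[name]:
            plan[name] = "lock"
            memory += buf_size               # single shared buffer
        else:
            plan[name] = "wait-free"
            memory += buf_size * wf_buffers  # multiple buffers, zero blocking
    return plan, memory
```

The paper's contribution is the tighter version of this question: deriving the allowances from schedulability analysis and minimizing the total memory subject to them, rather than a simple per-resource toggle.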
The Hierarchical Scheduling Framework (HSF) has been introduced as a design-time framework enabling compositional schedulability analysis of embedded software systems with real-time properties. However, supporting resource sharing in HSF is a major challenge, since it increases the amount of CPU resources required to guarantee schedulability of hard real-time tasks, and it decreases composability at the system level. In this paper, we focus on a compositional framework called the bounded-delay resource open environment (BROE) server, and we identify key parameters of this framework that have a great effect on how the framework utilizes CPU resources. In addition, we show how to select optimal values for these parameters in order to reduce the required CPU resources.
Today, the drive in many organizations is to reduce production costs while increasing customer satisfaction. One key to succeeding with these goals is to develop and improve quality and maintenance both in production and in the early phases of the development processes. The purpose of this paper is to discuss how, and motivate why, research within quality and maintenance development may interact in order to help companies meet customer demand while at the same time increasing productivity. The paper is based on the ideas and research perspectives of the newly formed competence group on Quality and Maintenance Development at the School of Innovation, Design and Engineering at Malardalen University, Sweden. The paper elaborates on the concepts of quality and maintenance and their important integration, and provides some examples of ongoing research projects within the competence group.
Purpose – The purpose of this paper is to challenge the traditional image of the content of entrepreneurship, which is associated with creativity, identity and discovery recognition. Design/methodology/approach – A narrative approach is used in telling the story of the artist/entrepreneur Mikael Genberg. The story is based on interviews, newspaper material and observations. Taking this story as the point of departure, an alternative image of entrepreneurship is suggested. Findings – First, from a traditional Schumpeterian perspective Genberg could be portrayed as a good example of the hero entrepreneur, an archetype of the creative artist/entrepreneur. Instead, Genberg is described in this paper as a creative imitator. Second, the Schumpeterian “hero entrepreneur” is associated with a fixed and strong identity. This picture is challenged and replaced by a demonstration of how double or multiple identities are used in legitimizing work, which is argued to be more illustrative of the content of entrepreneurship than finding the true identity of the hero entrepreneur. Third, discovery recognition is traditionally attributed to the individual, while in this case opportunity creation signifies the process of making discoveries collectively shared. Research limitations/implications – This study is exploratory and based on a single case, so the results cannot be taken as generalizations. Instead, an alternative understanding of the content of entrepreneurship is illustrated. Originality/value – The value of this study is the demonstration of an alternative image of the content of entrepreneurship.
An Adaptive Cruise Control (ACC) is an automobile system whose purpose is to control the velocity of the vehicle with regard to the surrounding environment. This report gives a description of ACC models, a detailed description of the architecture of the ACC, and a survey of the ACC systems on the market today.
The requirements and specification of a protocol for low-level communication between the run-time systems in a distributed Ada environment are presented. This allows an Ada system to be separated into software resources and run-time controllers. Calls to the local run-time system of a node concerning task management are transformed into remote calls to the controller, which schedules all tasks in the application. The calls to the run-time system, together with all messages, requests and replies that are triggered as a consequence, are described. The controller will be implemented in hardware separate from the processors. Communication between processors and controllers is by means of high-speed (gigabit) networks. In the proposed system, partitioning and distribution of Ada programs can fully utilize the inherent and strong type checking in Ada.
Distribution of a single Ada program on a local area network is accomplished by partitioning the run-time system into two parts. A central scheduling module is responsible for task management. Distributed run-time executives handle context switches and remote entry calls; however, all activities are supervised by the scheduler. The scheduler can be implemented in hardware in order to achieve high efficiency. A network based on optical fibers is necessary due to the high speed required for system calls. Asynchronous Transfer Mode is suggested as the communication protocol. We describe an implementation of the divided run-time system on an Ethernet network, using MC68030-based microcomputers as targets and an Ada program executing on a Rational host as the scheduler.
In rail freight operation, freight cars need to be separated and re-formed into new trains at hump yards. The classification procedure is complex, and hump yards constitute bottlenecks in the rail freight network, often causing outbound trains to be delayed. One of the problems is that planning the allocation of tracks at hump yards is difficult, given that the planner has limited resources (tracks, shunting engines, etc.) and needs to foresee future capacity requirements when planning for the current inbound trains. In this paper, we consider the problem of allocating classification tracks in a rail freight hump yard for arriving and departing trains with predetermined arrival and departure times. The core problem can be formulated as a special list coloring problem. We focus on an extension where individual cars can temporarily be stored on a special subset of the tracks. We model the problem using mixed integer programming, and also propose several heuristics that can quickly give feasible track allocations. As a case study, we consider a real-world problem instance from the Hallsberg Rangerbangard hump yard in Sweden. Planning over horizons of two to four days, we obtain feasible solutions from both the exact and the heuristic approaches that allow all outgoing trains to leave on time.
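To make the flavor of such heuristics concrete, here is a minimal greedy track-allocation sketch: trains are processed in order of arrival and placed on the tightest-fitting free track. The data model and tie-breaking rule are illustrative assumptions, far simpler than the paper's MIP formulation:

```python
def allocate_tracks(trains, tracks):
    """Greedy track allocation. trains: list of (arrival, departure, length);
    tracks: {name: track_length}. A train may use a track whose previous
    occupant has already departed and which is long enough. Returns
    {train index in arrival order: track name}, or None when the heuristic
    finds no free track for some train."""
    free_at = {name: 0 for name in tracks}      # time each track frees up
    assignment = {}
    for i, (arr, dep, length) in enumerate(sorted(trains)):
        candidates = [n for n in tracks
                      if free_at[n] <= arr and tracks[n] >= length]
        if not candidates:
            return None                          # infeasible for this heuristic
        best = min(candidates, key=lambda n: tracks[n])  # tightest fit
        assignment[i] = best
        free_at[best] = dep
    return assignment
```

A MIP model, by contrast, decides all train-to-track assignments jointly and can therefore also handle the extension where cars are temporarily parked on the special storage tracks.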
This paper presents an efficient best-effort approach for simulation-based timing analysis of complex real-time systems. The method can in principle handle any software design that can be simulated, and is based on controlling simulation input using a simple yet novel hill-climbing algorithm. Unlike previous approaches, the new algorithm directly manipulates simulation parameters such as execution times, arrival jitter and input. An evaluation is presented using six different simulation models, with two other simulation methods as reference: Monte Carlo simulation and MABERA. The new method proposed in this paper was 4-11% more accurate and, on average, 42 times faster than the reference methods.
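The search strategy can be sketched as follows; the parameter encoding, the perturbation rule and the interface of `simulate` are illustrative assumptions rather than the paper's exact algorithm:

```python
import random

def hill_climb(simulate, params, bounds, iterations=1000, seed=0):
    """Best-effort search for a long response time: repeatedly perturb one
    simulation parameter (e.g. an execution time or a jitter value) and
    keep the change whenever the simulated response time increases.
    simulate(params) must return the observed response time."""
    rng = random.Random(seed)
    best, best_rt = list(params), simulate(params)
    for _ in range(iterations):
        cand = list(best)
        i = rng.randrange(len(cand))
        lo, hi = bounds[i]
        cand[i] = rng.uniform(lo, hi)   # perturb a single parameter
        rt = simulate(cand)
        if rt > best_rt:                # climb toward longer response times
            best, best_rt = cand, rt
    return best, best_rt
```

Because each candidate is a full simulation run, the result is a lower bound on the worst-case response time; the evaluation above measures how close such bounds get compared to Monte Carlo simulation and MABERA.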
This paper describes different aspects of teaching distributed software development with regard to the type of project customer: industry or academia. These approaches enable students to be more engaged in real-world situations, by having customers from industry, local or distributed customers at universities, distributed customers in software engineering contests, or involvement in an ongoing project, thus simulating a company merger. The methods we describe are used in a distributed project-oriented course jointly carried out by two universities from Sweden and Croatia. The paper presents our experiences of such projects carried out during the course, the differences between the approaches, the issues observed and ways to solve them, in order to create a more engaging education for better-prepared engineers of tomorrow.
In today's turbulent environment, manufacturing companies are forced to efficiently change or develop production systems that are robust and dynamic enough to handle changing production situations throughout their entire life-cycle. Achieving such a production system requires a structured development process that is carried out in parallel with the product development process and considers the company's product portfolio. Within a structured development process, the requirements specification of the proposed system is vital, since it guides the design process and the evaluation of the system on a conceptual as well as a detailed level. The aim of this paper is to address a requirements specification process that covers all aspects of the production system to be designed. The paper argues for the need for a holistic view in the requirements specification process for production systems. A holistic view of the overall process will make it easier to manage the various demands and categories that are important in the specification of requirements. Based on this holistic view it will be possible to identify the gates and stakeholders of the process itself, as well as the substantial content of this process map.
In this paper, we present on-going work on data collected by a questionnaire surveying process practices, preferences, and methods in industrial software engineering.
Heterogeneous multiprocessor systems are becoming more common, and scheduling real-time tasks on them is an extremely challenging research problem. While stringent functional and timing requirements must be met, the problem becomes even more difficult in dynamic environments, for example those caused by processor failures. Furthermore, in safety-critical applications with tasks of mixed criticality levels, guaranteeing adaptive fault tolerance to meet the reliability requirements adds another complex dimension. The key contribution of our research is a framework for task allocation and scheduling in the above context, comprising a generic task model enabling task-level redundancy, a range of reconfiguration/task migration options during processor failures, and a set of performance metrics. We address the issues of both timeliness and reliability under three different allocation strategies for a multiprocessor system, with the feasibility check performed using the well-known Rate Monotonic (RM) schedulability test. The algorithm presented in this paper ensures that all required deadlines are met with efficient processor utilization under normal conditions and guarantees essential operations even during processor failures. In real-time multiprocessor systems used in safety-critical applications, the proposed approach is expected to provide better utilization of resources and guarantees with respect to system reliability. We demonstrate and evaluate the performance of our approach through simulation studies on task scheduling in heterogeneous multiprocessor environments.
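The feasibility check mentioned above is the classical Liu-and-Layland utilization bound for Rate Monotonic scheduling; a minimal sketch, with a simple first-fit allocation as an illustrative stand-in for the paper's three strategies:

```python
def rm_schedulable(tasks):
    """Sufficient Rate Monotonic test: total utilization of the (C, T)
    tasks must not exceed n * (2^(1/n) - 1)."""
    n = len(tasks)
    if n == 0:
        return True
    utilization = sum(c / t for (c, t) in tasks)
    return utilization <= n * (2 ** (1 / n) - 1)

def first_fit_allocate(tasks, processors):
    """Assign each (C, T) task to the first processor that remains
    RM-schedulable; returns the per-processor task lists, or None when
    some task fits nowhere."""
    bins = [[] for _ in range(processors)]
    for task in tasks:
        for b in bins:
            if rm_schedulable(b + [task]):
                b.append(task)
                break
        else:
            return None   # no processor can accept the task
    return bins
```

Task-level redundancy fits naturally on top of such a check: a replica is simply a second copy of the task that must be placed on a different processor, so that one processor failure leaves the replica's deadlines intact.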
Building a secure and reliable network system, especially for safety-critical applications, is an extremely challenging task even when the scale of the application or the physical boundaries of the system are small and well-defined. The complex issues in network communications, security and data quality, apart from the high reliability requirements, pose difficult scientific problems. In the context of the international monitoring system, these challenges become much more daunting due to heterogeneous network topologies, the mixing of private networks and the Internet, and the enormity of the geographical coverage. This paper attempts to provide an overview of the various approaches followed internationally for reliable network communications. One of the methods highlighted in this paper for secure communication in the International Monitoring System is the use of Virtual Private Networks (VPNs) at the identified sensor locations to communicate data to the desired local access server locations through unsecured public networks. This setup could serve nearby local stations within a specified radius. The data is routed through a tunnel to local servers in the VPN using protocols such as IPSEC, PPTP, etc. Multi-homed networks that provide redundant links are cost-effective and are proposed as a means to ensure high reliability and end-to-end availability between the VPN servers and the centralized system located in Vienna. This paper also compares the various communication technologies and dependability strategies available and recommends suitable combinations that overcome challenges such as malicious attacks, various failure modes, and dynamic changes to the routing table to address dead links, in order to preserve data integrity and provide highly reliable information to end users.
In a model-driven engineering development process that focuses on guaranteeing that extra-functional concerns modeled at design level are preserved at platform execution level, the task of automated code generation must produce artifacts that enable back-annotation activities. In fact, once the target platform code has been generated, quality attributes of the system are evaluated by appropriate code execution monitoring/analysis tools, and their results are back-annotated to the source models for extensive evaluation. Only at this point can the preservation of the analysed extra-functional aspects be either asserted or achieved by re-applying the code generation chain to the source models, properly optimized according to the evaluation results. In this work we provide a solution to the problem of automatically generating target platform code from source models, focusing on producing code artifacts that facilitate analysis and enable back-annotation activities. The challenges that arose, and their solutions, are described together with the completed and planned implementation of the proposed approach.
We analyze here the topic of integration, in the area of process automation, from the sensor/actuator level to the plant management level. Communication at the fieldbus level is based on wireless technology, while management applications run in wired control systems but can also be distributed, communicating via the Internet. This work aims at building a real-life demonstrator at Boliden, a mining and smelting plant located in Boliden, Sweden. A small process control environment is to be deployed at the plant to supervise a tank level control system. The targeted results are an interface between the wireless and wired systems, the deployment of a wireless process control environment at Boliden, and the development of the enterprise business management facilities.
The scope of Model-Driven Engineering is not limited to Model-Based Development (MBD), i.e. the generation of code from system models, but also involves Model-Based Testing (MBT), the automatic generation of efficient test procedures from corresponding test models. Both MBD and MBT include activities such as model creation, model checking, and the use of model compilers for the generation of system/test code. By reusing these common activities, the efficiency of MBT can be significantly improved for organizations that have already adopted MBD, since one of the major efforts in MBT is the creation of test models. In this work, we propose to exploit existing modeling efforts by deriving test models from system models. In this respect, we present a case study in which Executable and Translatable UML system models are used to automatically generate test models in the QTronic Modeling Language using horizontal model transformations. In turn, the derived artefacts can be used to produce test cases appropriate for the system under development.
A multitude of component models exist today, characterized by slightly different conceptual architectural elements, focusing on a specific operational domain, covering different phases of the component life-cycle, or supporting analysis of different quality attributes. When dealing with different variants of products and with the evolution of systems, there is a need to transform system models from one component model to another. However, it is not obvious that different component models can accurately exchange models, due to their differences in concepts and semantics. This paper demonstrates an approach to achieve that. The paper proposes a generic framework for interchanging models among component models. The framework, named DUALLY, allows for tool and notation interoperability, easing the transformation among many different component models. It is automated inside the Eclipse framework and fully extensible. The DUALLY approach is applied to two different component models for real-time embedded systems, and observations are reported.
With the increasing influence of software systems on all aspects of everyday life, there is also a need to focus on their non-functional characteristics. Reliability is one important software quality characteristic, defined as continuity of correct service. Reasoning and modeling are necessary in order to achieve desired levels of reliability both during the design and during the usage of software systems. Different techniques exist for gathering data for software reliability estimation, and the aim of this paper is to provide an overview of them. As software testing is the largest and most widely applied technique, we also study the current state of the art in applying different testing methods to collect data for reliability estimation.
In his article Open Problems in the Philosophy of Information [1], Luciano Floridi presented a Philosophy of Information research program in the form of eighteen open problems, covering the following fundamental areas: information definition, information semantics, intelligence/cognition, the informational universe/nature, and values/ethics. We revisit Floridi's program, highlighting some of the major advances, commenting on unsolved problems, and rendering the new landscape of the Philosophy of Information (PI) emerging at present. As we analyze the progress of PI, we try to situate Floridi's program in the context of the scientific and technological developments of the last ten years. We emphasize that Philosophy of Information is a huge and vibrant research field, with origins dating from before Open Problems and domains extending even beyond its scope. In this paper we have been able only to sketch some of the developments of the past ten years. Our hope is that, even if fragmentary, this review may serve as a contribution to the effort of understanding the present state of the art and the paths of development of Philosophy of Information as seen through the lens of Open Problems.
Development of large and complex software intensive systems with continuous builds typically generates large volumes of information with complex patterns and relations. Systematic and automated approaches are needed for efficient handling of such large quantities of data in a comprehensible way. In this paper we present an approach and tool enabling autonomous behavior in an automated test management tool to gain efficiency in concurrent software development and test. By capturing the required quality criteria in the test specifications and automating the test execution, test management can potentially be performed to a great extent without manual intervention. This work contributes towards a more autonomous behavior within a distributed remote test strategy based on metrics for decision making in automated testing. These metrics optimize management of fault corrections and retest, giving consideration to the impact of the identified weaknesses, such as fault-prone areas in software.
Popular description of the Art Break Project. "Projekt Konstpaus" (The Art Break Project) is a development project partially financed by the European Union (EU). The vision of the project embodies equality, multiculturalism and sustainable community development. The municipality of Strangnas is the leading partner in the project and provides the necessary support for the project idea, local development, financing and infrastructure. The project team consists of several artists and academics, such as archaeologists, cultural geographers, biologists and geologists. The main objective of the project team is to provide the basis for the construction of a culturally inspired walking and bicycle path, with several rest spots/rest stops ("Konstpaus") designed with an artistic character influenced primarily by the municipality's extensive natural/cultural heritage. An initial task of the project team is the inventorying of the nature and culture artefacts within the project area, as a means to promote nature/culture preservation for the benefit of future generations through information sharing. The walking/bicycle path will be accessible to all, with special provision for physically challenged individuals. The intention is to provide an environment for both quietude and physical recreation.
Meeting customer demands requires manufacturing systems with a high degree of flexibility, at the same time as the use of automation is becoming critical for competitiveness. This is challenging, especially for SMEs with their more limited economic and competence conditions. This paper presents a new setup in which the Factory-in-a-Box concept has been realized for a small manufacturing company with a profile of craftsmanship and small volumes. The objective of this paper is to discuss the possibility for SMEs to use automation and the Factory-in-a-Box concept to stay competitive, and to discuss the Factory-in-a-Box concept as a means of realizing a Product-Service System.