The Black Pearl is a custom-made autonomous underwater vehicle developed at Mälardalen University, Sweden. It is built in a modular fashion, including its mechanics, electronics and software. After a successful participation in the RoboSub 2012 competition, where it won the prize for best craftsmanship, this year we made only minor improvements to the hardware, while the focus of the robot's evolution shifted to its software. In this paper we give an overview of how the Black Pearl is built, from both the hardware and the software point of view.
Settling the software architecture of an embedded system is a complex and time-consuming task. Specific concerns, which generally arise from implementation details, must be captured in the software architecture and assessed to ensure system correctness. The matter is further complicated by the inherent complexity and heterogeneity of the targeted systems, platforms and concerns. In addition, tools capable of jointly catering for the complete design-verification-deployment cycle, extra-functional properties and reuse are currently lacking. To address this, we have developed Pride, an integrated development environment for component-based development of embedded systems. Pride is based on an architecture relying on components with well-defined semantics that serve as the central development entities, and as a means to support and aggregate various analysis and verification techniques throughout the development - from early specification to synthesis and deployment. Pride also provides generic support for integrating extra-functional properties into architectural definitions.
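As a rough illustration of what integrating extra-functional properties into architectural definitions can mean in practice, the sketch below attaches named property values, together with their provenance, to component identifiers. All class and method names (EfpValue, EfpRegistry, and the example component) are invented for illustration and are not Pride's actual API.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of attaching extra-functional properties (EFPs) to
// architectural entities; class and method names are invented, not Pride's API.
final class EfpValue {
    final Object value;   // e.g. 120 (worst-case execution time in microseconds)
    final String source;  // provenance: "estimate", "measurement", "analysis", ...
    EfpValue(Object value, String source) { this.value = value; this.source = source; }
}

final class EfpRegistry {
    // component id -> (property name -> value with provenance)
    private final Map<String, Map<String, EfpValue>> efps = new HashMap<>();

    void set(String componentId, String property, EfpValue v) {
        efps.computeIfAbsent(componentId, k -> new HashMap<>()).put(property, v);
    }

    EfpValue get(String componentId, String property) {
        return efps.getOrDefault(componentId, Map.of()).get(property);
    }
}

// Example: record an estimated WCET, later refine it with a measured value.
// EfpRegistry r = new EfpRegistry();
// r.set("SensorReader", "wcet_us", new EfpValue(120, "estimate"));
// r.set("SensorReader", "wcet_us", new EfpValue(95, "measurement"));
```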
The growing uptake of Global Software Engineering (GSE) in industry has prompted educational institutions to recognize the importance of preparing students for distributed software development. During the last twelve years we have exposed our students to the advantages and pitfalls of GSE through our Distributed Software Development course. After running the projects according to an iterative process model for eleven years, we decided to shift to an agile development model, SCRUM. This decision was driven by the growing industrial adoption of agile methods, but more importantly by the aim to increase proactiveness and sense of responsibility, and to balance the workload among the project team members. In this paper we describe the process and outcomes of our first attempt at introducing SCRUM in our distributed course.
We present an approach to combining model-driven and component-based software engineering of distributed embedded systems. Specifically, we describe how deployment modelling is performed in two steps, and present an incremental synthesis of runnable representations of model entities at various abstraction levels. Our approach allows for flexible reuse, in that entities at different levels of granularity and abstraction can be reused. It also permits detailed analysis, e.g., with respect to timing, of units smaller than a whole physical node. Finally, we present the concept of virtual nodes, which preserves real-time properties across reuse and integration in different contexts.
Embedded systems are becoming more and more complex, thus demanding innovative means to tame their challenging development. Among others, early architecture optimization represents a crucial activity in the development of embedded systems to maximise the usage of their limited resources and to respect their real-time requirements. Typically, architecture optimization seeks good architecture candidates based on model-based analysis. Leveraging abstractions and estimates, this analysis usually produces approximations useful for comparing architecture candidates. Nonetheless, approximations do not provide enough accuracy in estimating crucial extra-functional properties. In this article, we provide an architecture optimization framework that profits from both the speed of model-based predictions and the accuracy of execution-based measurements. Model-based optimization rapidly finds a good architecture candidate, which is then refined through optimization based on monitored executions of automatically generated code. Moreover, the framework enables the developer to leverage her optimization experience. More specifically, the developer can use runtime monitoring of the generated code's execution to manually adjust the task allocation at the modelling level, and commit the changes without halting execution. In the article, our architecture optimization mechanism is first described from a general point of view and then exploited for optimizing the allocation of software tasks to the processing cores of a multicore embedded system; we target extra-functional properties that can be concretely represented and automatically compared for different architectural alternatives (such as memory consumption, energy consumption, or response time).
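The combination of fast-but-approximate and slow-but-accurate evaluation can be sketched as follows. The interfaces (Allocation, Evaluator) and the two-phase strategy shown here are illustrative assumptions, not the framework's actual implementation.

```java
import java.util.List;

// Hypothetical sketch: a fast model-based search ranks allocation candidates,
// then a small measurement budget (monitored runs of generated code) is spent
// on refining the best ones.
interface Allocation {}                                // a mapping of tasks to cores
interface Evaluator { double cost(Allocation a); }     // lower cost is better

final class TwoPhaseOptimizer {
    // Assumes a non-empty, mutable candidate list.
    Allocation optimize(List<Allocation> candidates,
                        Evaluator modelBased,          // fast, approximate (simulation)
                        Evaluator measurementBased,    // slow, accurate (monitored runs)
                        int refineBudget) {
        // Phase 1: rank all candidates with cheap model-based predictions.
        candidates.sort((a, b) -> Double.compare(modelBased.cost(a), modelBased.cost(b)));

        // Phase 2: spend the expensive measurement budget only on the top candidates.
        Allocation best = candidates.get(0);
        double bestCost = measurementBased.cost(best);
        int limit = Math.min(Math.max(refineBudget, 1), candidates.size());
        for (Allocation a : candidates.subList(1, limit)) {
            double c = measurementBased.cost(a);       // deploy, execute, monitor
            if (c < bestCost) { best = a; bestCost = c; }
        }
        return best;
    }
}
```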
This paper describes PRIDE, an integrated development environment for efficient component-based software development of embedded systems. PRIDE uses reusable software components as the central development units, and as a means to support and aggregate various analysis and verification techniques throughout the whole lifecycle - from early specification to deployment and synthesis. This paper focuses on support provided by PRIDE for the modeling and analysis aspects of the development of embedded systems based on reusable software components.
As component-based software engineering is growing and its usage expanding, more and more component models are being developed. In this paper we present a survey of software component models in which the models are described and classified according to the classification framework for component models proposed by Crnković et al. This framework specifies several groups of important principles and characteristics of component models: lifecycle, constructs, specification and management of extra-functional properties, and application domain. As examples, the paper classifies three component models using this framework.
When developing embedded systems, certain constraints regarding extra-functional properties have to be guaranteed. It is desirable to be able to perform early, design-time verification of embedded systems with respect to their extra-functional properties, because the earlier potential design flaws are caught, the easier and cheaper it is to correct them. Employing component-based software engineering and model-driven development for the development of embedded systems can facilitate this early verification. In this paper we present our planned research on early analysis of component-based embedded systems, which enables avoiding designs that are infeasible with respect to constraints on timing and resource consumption.
Modern embedded systems are becoming increasingly performance-intensive since, on the one hand, they include more complex functionality than before, and on the other hand, functionality that was typically realized in hardware is often moved to software. Multicore technology, previously successfully used for general-purpose systems, is penetrating into the domain of embedded systems. While it does increase the performance capacity, it also introduces the problem of how to allocate software tasks to the cores of the hardware platform, as different allocations exhibit different extra-functional properties. An intuitive example is allocating too many tasks to a core --- the core will be overloaded and tasks will miss their deadlines.
This thesis addresses the issue of task allocation in multicore embedded systems. The overall goal of the thesis is to advance the way soft real-time multicore systems are developed, by providing new methods and tools that enable deciding already at design time which task to run on which core, with respect to a number of timing-related extra-functional properties. To achieve this goal, we developed a model-based framework for task allocation optimization. The framework uses model simulation in order to obtain performance predictions for particular task allocations. This in turn enables testing a large number of allocation candidates in search of one that exhibits good timing-related performance. Apart from defining and implementing the framework, three additional contributions are provided, each tackling a particular aspect of the framework: the influence of task allocation on communication duration is studied and interpreted in the context of design-time model-based analysis; a novel heuristic for guiding task allocation optimization is defined; and finally, a novel optimization method combining performance prediction and performance measurement is defined.
In many domains of embedded systems, increasing performance demands are tackled by increasing the performance capacity through the use of multicore technology. However, adding more processing units also introduces the issue of task allocation --- decisions have to be made about which software task to run on which core in order to best utilize the hardware platform. In this paper, we present an optimization mechanism for allocating tasks to the cores of a soft real-time embedded system that aims to minimize end-to-end response times of task chains, while keeping the number of deadline misses below a desired limit. The optimization relies on a novel heuristic that proposes new allocation candidates based on information about how tasks delay each other. The heuristic was evaluated in a series of experiments, which showed that it both finds better allocations and does so in fewer iterations than the two heuristics we used for comparison.
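One way to read such a delay-based proposal step is sketched below: among tasks currently sharing a core, find the one causing the largest accumulated blocking and try moving it elsewhere. This is an illustrative interpretation with invented names, not the published heuristic itself.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative reading of a delay-based heuristic (hypothetical names):
// separate the pair of co-located tasks that delay each other the most.
final class DelayHeuristic {
    // delay.get(a).get(b) = time task a spent blocked by task b in a simulated run
    // allocation.get(t)   = core index currently assigned to task t
    Map<String, Integer> proposeCandidate(Map<String, Map<String, Long>> delay,
                                          Map<String, Integer> allocation,
                                          int coreCount) {
        String blocker = null;
        long worst = -1;
        for (Map.Entry<String, Map<String, Long>> delayed : delay.entrySet()) {
            for (Map.Entry<String, Long> by : delayed.getValue().entrySet()) {
                boolean sameCore = allocation.get(delayed.getKey())
                                             .equals(allocation.get(by.getKey()));
                if (sameCore && by.getValue() > worst) {
                    worst = by.getValue();
                    blocker = by.getKey();          // the task causing the delay
                }
            }
        }
        Map<String, Integer> candidate = new HashMap<>(allocation);
        if (blocker != null) {
            // Move the blocking task to the next core (round-robin), keep the rest.
            candidate.put(blocker, (allocation.get(blocker) + 1) % coreCount);
        }
        return candidate;
    }
}
```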
In order to get accurate performance predictions, design-time architectural analysis of multicore embedded systems has to consider communication overhead. When communicating tasks execute on the same core, the communication typically happens through the local cache. On the other hand, when they run on separate cores, the communication has to go through the shared memory. As the shared memory has a significantly larger latency than the local cache, we expect a significant difference between intra-core and inter-core task communication. In this paper, we present a series of experiments we ran to identify the size of this difference, and discuss its impact on architectural analysis of multicore embedded systems. In particular, we show that the impact of the difference is much lower than anticipated.
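A minimal sketch of how such a difference can enter an analysis model is given below: communication duration is parameterized by whether sender and receiver share a core. The class name and per-byte latencies are placeholders, not the measured values from the experiments.

```java
// Hypothetical sketch of an allocation-aware communication cost model:
// same-core communication goes through the local cache, cross-core
// communication through shared memory. Latency figures are placeholders.
final class CommunicationModel {
    private final double cacheLatencyPerByte;      // same core: local cache
    private final double sharedMemLatencyPerByte;  // different cores: shared memory

    CommunicationModel(double cacheLatencyPerByte, double sharedMemLatencyPerByte) {
        this.cacheLatencyPerByte = cacheLatencyPerByte;
        this.sharedMemLatencyPerByte = sharedMemLatencyPerByte;
    }

    // Estimated duration of sending 'bytes' from senderCore to receiverCore.
    double messageDuration(int senderCore, int receiverCore, int bytes) {
        double perByte = (senderCore == receiverCore)
                ? cacheLatencyPerByte
                : sharedMemLatencyPerByte;
        return bytes * perByte;
    }
}
```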
Multicore technology provides a way to improve the performance of embedded systems in response to the demand in many domains for more and more complex functionality. However, increasing the number of processing units also introduces the problem of deciding which task to execute on which core in order to best utilize the platform. In this paper we present a model-based approach for automatic allocation of software tasks to the cores of a soft real-time embedded system, based on design-time performance predictions. We describe a general iterative method for finding an allocation that maximizes key performance aspects while satisfying given allocation constraints, and present an instance of this method, focusing on the particular performance aspects of timeliness and balanced computational load over time and over the cores.
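As an illustration of what "timeliness and balanced computational load" can look like as an optimization objective, the sketch below combines a deadline-miss ratio with the standard deviation of per-core loads into a single cost. The weights and metric definitions are assumptions made for the example, not the exact criteria of the paper.

```java
// Hypothetical cost function combining timeliness with balanced load across cores.
final class AllocationFitness {
    // coreLoads: utilized fraction of each core's capacity, from a simulated run
    // deadlineMissRatio: missed deadlines / total jobs in the same run
    double cost(double[] coreLoads, double deadlineMissRatio) {
        double mean = 0;
        for (double l : coreLoads) mean += l;
        mean /= coreLoads.length;

        double imbalance = 0;                       // standard deviation of core load
        for (double l : coreLoads) imbalance += (l - mean) * (l - mean);
        imbalance = Math.sqrt(imbalance / coreLoads.length);

        // Lower is better: penalize deadline misses heavily, imbalance mildly.
        return 10.0 * deadlineMissRatio + 1.0 * imbalance;
    }
}
```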
SaveCCM is a domain-specific component model developed specifically for safety-critical hard real-time embedded systems. The goal of this paper is to extend the scope of SaveCCM to make it usable also outside this narrow domain, as the general concepts behind SaveCCM apply equally well to embedded systems that have soft or no real-time constraints. We describe the modifications made to SaveCCM in order to adjust it to the wider scope, focusing on defining a new realization mechanism. In its original form, a SaveCCM system is realized by allocating components to real-time tasks, which means that individual components are not observable in the run-time system. We propose realizing SaveCCM by a transformation to JavaBeans, making the advantages of component-based development present also at run time. This way we also make the executable system more general and portable.
SaveCCM is a domain-specific component model developed specifically for safety-critical hard real-time embedded systems in the vehicular domain. This paper expands the scope of SaveCCM to make it usable also outside this narrow domain, as the general concepts behind SaveCCM are applicable for a wider range of embedded systems. We describe the extensions made to SaveCCM in order to adjust it to a broader scope, focusing on a new realization mechanism. In its original form, SaveCCM systems are realized by components being grouped and transformed into real-time tasks. We propose an alternative realization of SaveCCM - by transformation to JavaBeans, which makes the executable system more general and portable, and maintains the structure of the component-based design.
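A minimal sketch of the JavaBeans realization idea follows, using an invented example component: input ports become setters, output ports become observable bean properties, so individual components stay visible and connectable in the run-time system. The component name and its behaviour are hypothetical, not taken from SaveCCM itself.

```java
import java.beans.PropertyChangeListener;
import java.beans.PropertyChangeSupport;

// Hypothetical component with one input and one output port, realized as a
// JavaBean so that the component remains observable at run time.
public class SpeedLimiterBean {
    private final PropertyChangeSupport pcs = new PropertyChangeSupport(this);
    private double speedIn;     // input port
    private double speedOut;    // output port

    public void setSpeedIn(double value) {   // writing the input port triggers execution
        this.speedIn = value;
        execute();
    }

    public double getSpeedOut() { return speedOut; }

    // Downstream components subscribe to the output port as bean listeners.
    public void addPropertyChangeListener(PropertyChangeListener l) {
        pcs.addPropertyChangeListener(l);
    }

    private void execute() {
        double old = speedOut;
        speedOut = Math.min(speedIn, 100.0);               // the component's behaviour
        pcs.firePropertyChange("speedOut", old, speedOut); // notify connected components
    }
}
```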
Typically, architecture optimization searches for good architecture candidates based on analyzing a model of the system. Model-based analysis inherently relies on abstractions and estimates, and as such produces approximations which are used to compare architecture candidates. However, approximations are often not sufficient, due to the difficulty of accurately estimating certain extra-functional properties. In this paper, we present an architecture optimization approach in which the speed of model-based optimization is combined with the accuracy of monitored system runs. Model-based optimization is used to quickly find a good architecture candidate, while optimization based on monitored system runs further refines this candidate. Using measurements ensures higher accuracy of the metrics used for optimization than performance predictions can provide. We demonstrate the feasibility of the approach by implementing it in our framework for optimizing the allocation of software tasks to the processing cores of a multicore embedded system.
Students and teachers do not necessarily have the same understanding of a course – of its purpose, its objectives, and in particular of the course elements – the way the course is performed, the examination procedure, and so on. In distributed-development courses, in which students and teachers are dispersed over different locations, this difference can be larger than in “ordinary” courses, but also less visible, due to limited communication. In this paper we discuss these different perspectives, their rationales, and their possible consequences for the course execution and results, as well as lessons learned from students’ feedback.
Internet of Things (IoT) applications transcend traditional telecom to include enterprise verticals such as transportation, healthcare, agriculture, energy and utilities. Given the vast number of devices and heterogeneity of the applications, both ICT infrastructure and IoT application providers face unprecedented complexity challenges in terms of volume, privacy, interoperability and intelligence. Cognitive automation will be crucial to overcoming the intelligence challenge.
As component-based software engineering is growing and its usage expanding, more and more component models are being developed. In this report we present a survey of software component models in which the models are described and classified according to the classification framework for component models proposed by Crnković et al. [1]. This framework specifies several groups of important principles and characteristics of component models: lifecycle, constructs, specification and management of extra-functional properties, and application domain. This report analyzes a considerable number of component models, including widely used industrial models as well as research models.
When a project has followed advice from the best practices, the question can be raised whether the success (or failure) of the project came from following (or not following) the best practices, or whether there were additional reasons that led to the positive (or negative) outcome. In this paper we analyze a case of a student project performed as part of our Distributed Software Development course. The project followed the advice from the "Ten Tips to Succeed in Global Software Engineering Education" publication. This paper analyzes the project work with respect to these tips. Focusing on the perspective of a student participating in the project, the paper tries to answer whether following the advice is sufficient for a positive project outcome.
Component-based development promises many improvements in developing software for embedded systems, e.g., greater reuse of already written software, a less error-prone development process, greater analyzability of systems, and shorter overall development time. One of the aspects commonly left out of component models is the communication of software components with hardware devices such as sensors and actuators. As one of the main characteristics of embedded systems is the interaction with their environment through hardware devices, the effects of this interaction should be fully included in component models for embedded systems. In this paper we present a framework that enables the inclusion of hardware devices in different phases of the component-based development process, including system design, deployment, analysis and code synthesis. Our framework provides a way for software components to explicitly state their dependencies on hardware devices, promotes reuse of software components with such dependencies, and provides a basis for including hardware devices in the analysis of component-based embedded systems. We evaluate the feasibility of our approach by applying it to the ProCom component model.
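The idea of explicitly stating a component's dependency on a hardware device can be sketched as follows: the component declares the device it needs through an interface, so deployment can bind it to a concrete driver and analysis can account for the device's access cost. The interfaces and names below are illustrative assumptions, not the actual ProCom extension.

```java
// Hypothetical sketch of an explicit hardware-device dependency in a component.
interface DeviceDriver {
    int read();                 // e.g. a raw sensor value
    long accessTimeMicros();    // device access cost, usable in timing analysis
}

final class DistanceSensorComponent {
    private final DeviceDriver sensor;   // dependency stated explicitly and
                                         // bound at deployment, not hard-coded
    DistanceSensorComponent(DeviceDriver sensor) {
        this.sensor = sensor;
    }

    int readDistanceMillimetres() {
        return sensor.read();
    }
}

// At deployment, the same component can be reused with different devices, e.g.:
// new DistanceSensorComponent(new UltrasonicDriver());
// new DistanceSensorComponent(new InfraredDriver());
```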
This article gives a short overview of the contributions of the DICES project. The goal of the project is to advance the theories and technologies used in the development of distributed embedded systems. Three examples of the contributions are presented: a) reverse engineering of web-based applications, design extraction and extraction of reusable user-interface controls, b) a framework for building systems that use UPnP devices and treat them as components in the same way as software components, and c) PRIDE, a development environment for designing, modeling and developing embedded systems, based on ProCom technology.