Resource Optimization in Multi-processor Real-time Systems
Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems. ORCID iD: 0000-0002-1384-5323
2017 (English). Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]

This thesis addresses the topic of resource efficiency in multi-processor systems in the presence of timing constraints.

Nowadays, almost wherever you look, you find a computing system, and most computing systems employ a multi-processor platform. Multi-processor systems can be found in a broad spectrum of computing systems, ranging from a tiny chip hosting multiple cores to large, geographically distributed cloud data centers connected by the Internet. In multi-processor systems, efficient use of computing resources is essential to achieving the desired performance of the running software applications.

Most industrial applications, e.g., automotive and avionics applications, are subject to a set of real-time constraints that must be met. Such applications, along with the underlying hardware and software components running them, constitute a real-time system. In real-time systems, the primary concern of the system designer is to provide a solution in which all timing constraints are met. Therefore, in multi-processor real-time systems, not only resource efficiency but also meeting all the timing requirements is a major concern.

Industrie 4.0 is the current trend in automation and manufacturing for creating the next generation of smart factories. Two categories of multi-processor systems play a significant role in the realization of such a smart factory: 1) multi-core processors, which are the key computing element of embedded systems, and 2) cloud computing data centers, which supply massive data storage and large computational power. Both categories are considered in the thesis, i.e., 1) the efficient use of embedded multi-core processors, where multiple processing cores are located on the same chip, to execute a real-time application, and 2) the efficient use of multi-processors within a cloud computing data center. We address these two categories of multi-processor systems separately.

For each of them, we identify the key challenges in achieving a resource-efficient design of the system. We then formulate the problem and propose optimization solutions that improve the efficiency of the system while satisfying all timing constraints. Introducing resource-efficient solutions for these two categories of multi-processor systems facilitates the deployment of Industrie 4.0 in smart manufacturing factories, where multi-core embedded processors and cloud computing data centers are two central cornerstones.

Place, publisher, year, edition, pages
Västerås: Mälardalen University, 2017.
Series
Mälardalen University Press Licentiate Theses, ISSN 1651-9256 ; 263
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:mdh:diva-35387
ISBN: 978-91-7485-336-0 (print)
OAI: oai:DiVA.org:mdh-35387
DiVA, id: diva2:1098606
Presentation
2017-10-05, Paros, Mälardalens högskola, Västerås, 13:30 (English)
Available from: 2017-09-14. Created: 2017-05-24. Last updated: 2018-01-13. Bibliographically approved
List of papers
1. A communication-aware solution framework for mapping AUTOSAR runnables on multi-core systems
2014 (English). In: 19th IEEE International Conference on Emerging Technologies and Factory Automation, ETFA 2014, 2014, Article number 7005244. Conference paper, Published paper (Refereed)
Abstract [en]

An AUTOSAR-based software application contains a set of software components, each of which encapsulates a set of runnable entities. In fact, the mission of the system is fulfilled as a result of the collaboration between the runnables. Several trends have recently emerged to utilize multi-core technology to run AUTOSAR-based software. The overhead of communication between the runnables is not only one of the major performance bottlenecks in multi-core processors, but also the main source of unpredictability in the system. Appropriate mapping of the runnables onto a set of tasks (called the mapping process), along with proper allocation of the tasks to processing cores (called the task allocation process), can significantly reduce the communication overhead. In this paper, three solutions are suggested, each of which comprises both the mapping and the allocation processes. The goal is to maximize key performance aspects by reducing the overall inter-runnable communication time while satisfying the given timing and precedence constraints. A large number of randomly generated experiments are carried out to demonstrate the efficiency of the proposed solutions.
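As an informal illustration of the idea of reducing inter-runnable communication through placement, the sketch below runs a plain simulated-annealing search over runnable-to-core assignments, penalizing communication between runnables placed on different cores. It is not the paper's solution framework: the communication matrix, cost function, and cooling schedule are assumptions made purely for illustration, and timing and precedence constraints are not modeled here.

```python
import math
import random

def comm_cost(assign, comm):
    """Total communication volume between runnables placed on different cores."""
    n = len(assign)
    return sum(comm[i][j]
               for i in range(n) for j in range(i + 1, n)
               if assign[i] != assign[j])

def anneal_mapping(n_runnables, n_cores, comm, iters=5000, t0=10.0, seed=0):
    """Search for an assignment (runnable index -> core index) with low comm_cost."""
    rng = random.Random(seed)
    assign = [rng.randrange(n_cores) for _ in range(n_runnables)]
    best, best_cost = assign[:], comm_cost(assign, comm)
    cur_cost = best_cost
    for k in range(iters):
        temp = t0 * (1.0 - k / iters) + 1e-9                       # linear cooling
        cand = assign[:]
        cand[rng.randrange(n_runnables)] = rng.randrange(n_cores)  # move one runnable
        cand_cost = comm_cost(cand, comm)
        delta = cand_cost - cur_cost
        if delta <= 0 or rng.random() < math.exp(-delta / temp):   # accept improving or, sometimes, worse moves
            assign, cur_cost = cand, cand_cost
            if cur_cost < best_cost:
                best, best_cost = assign[:], cur_cost
    return best, best_cost

if __name__ == "__main__":
    # Hypothetical 4-runnable example: runnables 0 and 1 communicate heavily,
    # so a good mapping keeps them on the same core.
    comm = [[0, 9, 1, 0],
            [9, 0, 0, 1],
            [1, 0, 0, 5],
            [0, 1, 5, 0]]
    print(anneal_mapping(4, 2, comm))
```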

Keyword
Ant System, AUTOSAR, feedback-based search, mapping, multi-core, runnable, Simulated Annealing, Application programs, Factory automation, Ant systems, Feed-back based, Multi core, Microprocessor chips
National Category
Computer and Information Sciences; Computer Sciences
Identifiers
urn:nbn:se:mdh:diva-27937 (URN)
10.1109/ETFA.2014.7005244 (DOI)
000360999100195 ()
2-s2.0-84946692528 (Scopus ID)
9781479948468 (ISBN)
Conference
19th IEEE International Conference on Emerging Technologies and Factory Automation, ETFA 2014, 16 September 2014 through 19 September 2014
Available from: 2015-04-30. Created: 2015-04-30. Last updated: 2018-01-11. Bibliographically approved
2. An Efficient Scheduling of HPC Applications on Geographically Distributed Cloud Data Centers
2013 (English). In: Computer Networks and Distributed Systems: International Symposium, CNDS 2013, Tehran, Iran, December 25-26, 2013, Revised Selected Papers, Springer, 2013, p. 155-167. Chapter in book (Refereed)
Abstract [en]

Cloud computing provides a flexible infrastructure for IT industries to run their High Performance Computing (HPC) applications. Cloud providers deliver such computing infrastructures through a set of data centers called a cloud federation. The data centers of a cloud federation are usually distributed around the world. The profit of cloud providers strongly depends on the cost of energy consumption. As the data centers are located in various corners of the world, the cost of energy consumption and the amount of CO2 emission vary significantly among the data centers. Therefore, a proper allocation of HPC applications in such systems can result in a decrease of CO2 emission and a substantial increase of the provider's profit. Reducing CO2 emission also mitigates destructive environmental impacts. In this paper, the problem of scheduling HPC applications on a geographically distributed cloud federation is examined. To address the problem, we propose a two-level scheduler that reaches a good compromise between CO2 emission and the profit of the cloud provider, while satisfying all HPC applications' deadline and memory constraints. Simulation results based on a real, intensive workload indicate that the proposed scheduler reduces CO2 emission by 11% while at the same time improving the provider's profit on average.
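To make the trade-off concrete, the following minimal sketch scores candidate data centers by a weighted combination of energy cost and CO2 emission for a job's estimated energy use. This only illustrates the kind of compromise described above; it is not the paper's two-level scheduler, the weight alpha and the per-data-center figures are made-up assumptions, and deadline and memory constraints are not modeled.

```python
def pick_data_center(job_energy_kwh, data_centers, alpha=0.5):
    """Return the data center minimizing alpha*energy_cost + (1-alpha)*co2_kg."""
    def score(dc):
        cost = job_energy_kwh * dc["price_per_kwh"]      # provider's energy bill
        co2 = job_energy_kwh * dc["kg_co2_per_kwh"]      # carbon footprint of the run
        return alpha * cost + (1.0 - alpha) * co2
    return min(data_centers, key=score)

if __name__ == "__main__":
    # Hypothetical federation: cheap-but-dirty vs. expensive-but-clean energy.
    dcs = [{"name": "dc-a", "price_per_kwh": 0.12, "kg_co2_per_kwh": 0.40},
           {"name": "dc-b", "price_per_kwh": 0.20, "kg_co2_per_kwh": 0.05}]
    print(pick_data_center(50.0, dcs)["name"])
```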

Place, publisher, year, edition, pages
Springer, 2013
Series
Communications in Computer and Information Science ; 428
Keyword
Cloud Computing, Data Center, Energy-aware scheduling, CO2 emission, Multi-objective optimization
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:mdh:diva-25130 (URN)
10.1007/978-3-319-10903-9_13 (DOI)
000347888900013 ()
2-s2.0-84908543721 (Scopus ID)
978-3-319-10903-9 (Local ID)
978-3-319-10902-2 (ISBN)
978-3-319-10903-9 (Archive number)
978-3-319-10903-9 (OAI)
Conference
International Symposium on Computer Networks and Distributed Systems CNDS'13, 25 Dec 2013, Tehran, Iran
Available from: 2014-06-09. Created: 2014-06-05. Last updated: 2018-01-29. Bibliographically approved
3. A resource efficient framework to run automotive embedded software on multi-core ECUs
(Swedish). Manuscript (preprint) (Other academic)
National Category
Computer Systems
Identifiers
urn:nbn:se:mdh:diva-36448 (URN)
Available from: 2017-09-18. Created: 2017-09-18. Last updated: 2017-09-18. Bibliographically approved
4. A Profit-aware Allocation of High Performance Computing Applications on Distributed Cloud Data Centers with Environmental Considerations
2014 (English). In: CSI Journal on Computer Science and Engineering (JCSE), Vol. 2, no. 1, p. 28-38. Article in journal (Refereed), Published
Abstract [en]

A Set of Geographically Distributed Cloud data centers (SGDC) is a promising platform to run a large number of High Performance Computing Applications (HPCAs) in a cost-efficient manner. Energy consumption is a key factor affecting the profit of a cloud provider. In an SGDC, as the data centers are located in different corners of the world, the cost of energy consumption and the amount of CO2 emission vary significantly among the data centers. Therefore, in such systems a proper allocation of HPCAs not only reduces CO2 emission but also substantially increases the provider's profit. Furthermore, reducing CO2 emission mitigates destructive environmental impacts. In this paper, the problem of allocating a set of HPCAs on an SGDC is discussed, and a two-level allocation framework is introduced to deal with the problem. The proposed framework is able to reach a good compromise between CO2 emission and the provider's profit subject to satisfying the HPCAs' deadline and memory constraints. Simulation results based on a real, intensive workload demonstrate that the proposed framework reduces CO2 emission by 17% and improves the provider's profit by 9% on average.
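As a toy illustration of the profit side of this trade-off (not the paper's allocation framework), the sketch below estimates a provider's profit for running one HPCA in a given data center as revenue minus energy cost, with a simple lateness penalty when the deadline is missed. The prices, power figure, and penalty model are hypothetical assumptions.

```python
def estimate_profit(revenue, runtime_h, power_kw, price_per_kwh,
                    deadline_h, penalty_per_h=0.0):
    """Profit = revenue - energy cost - penalty for finishing after the deadline."""
    energy_cost = runtime_h * power_kw * price_per_kwh   # energy bill for this HPCA
    lateness = max(0.0, runtime_h - deadline_h)          # hours past the deadline
    return revenue - energy_cost - lateness * penalty_per_h

if __name__ == "__main__":
    # Hypothetical HPCA: 10 h on 4 kW of servers at 0.12 $/kWh, 12 h deadline.
    print(estimate_profit(revenue=50.0, runtime_h=10.0, power_kw=4.0,
                          price_per_kwh=0.12, deadline_h=12.0, penalty_per_h=5.0))
```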

Keyword
Cloud Computing, Data Center, Energy-aware allocation, CO2 emission, Multi-objective optimization, Live migration.
National Category
Computer Systems
Identifiers
urn:nbn:se:mdh:diva-35488 (URN)
Projects
PREMISE - Predictable Multicore Systems
Available from: 2017-05-31. Created: 2017-05-31. Last updated: 2018-02-26. Bibliographically approved
5. Towards Energy-Aware Resource Scheduling to Maximize Reliability in Cloud Computing Systems
2013 (English). Conference paper, Published paper (Refereed)
Abstract [en]

Cloud computing has become increasingly popular due to the deployment of cloud solutions that enable enterprises to reduce costs and gain operational flexibility. Reliability is a key metric for assessing performance in such systems. Fault tolerance methods are extensively used to enhance reliability in Cloud Computing Systems (CCSs). However, these methods impose extra hardware and/or software costs. Proper resource allocation is an alternative approach that can significantly improve system reliability without any extra overhead. On the other hand, considering reliability irrespective of energy consumption and Quality of Service (QoS) requirements is not desirable in CCSs. In this paper, an analytical model to analyze system reliability alongside energy consumption and QoS requirements is introduced. Based on the proposed model, a new online resource allocation algorithm is suggested to find the right compromise between system reliability and energy consumption while satisfying QoS requirements. The algorithm is a new swarm intelligence technique based on imperialist competition, which elaborately combines the strengths of some well-known meta-heuristic algorithms with an effective, fast local search. A wide range of simulation results based on real data clearly demonstrates the high efficiency of the proposed algorithm.
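As a rough sketch of how reliability, energy, and QoS can be weighed against each other when evaluating one candidate allocation (this is not the paper's analytical model), the snippet below uses an exponential failure model for reliability, a simple power × time estimate for energy, and rejects candidates that miss their response-time bound. The failure rate, power figure, energy budget, and weight are illustrative assumptions.

```python
import math

def allocation_fitness(exec_time_s, failure_rate_per_s, power_w,
                       deadline_s, energy_budget_j, weight=0.7):
    """Higher is better: weighted trade-off of reliability vs. normalized energy,
    returning -inf when the QoS (response-time) bound is missed."""
    if exec_time_s > deadline_s:                                 # QoS constraint violated
        return float("-inf")
    reliability = math.exp(-failure_rate_per_s * exec_time_s)    # P(no failure during run)
    energy_j = power_w * exec_time_s                             # joules consumed
    return weight * reliability - (1.0 - weight) * (energy_j / energy_budget_j)

if __name__ == "__main__":
    # Hypothetical candidate: 120 s on a 200 W server with failure rate 1e-5 per second.
    print(allocation_fitness(exec_time_s=120.0, failure_rate_per_s=1e-5,
                             power_w=200.0, deadline_s=300.0, energy_budget_j=50000.0))
```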

Keyword
cloud computing, reliability, analytical model, resource allocation, quality of service, energy-aware scheduling
National Category
Engineering and Technology
Identifiers
urn:nbn:se:mdh:diva-23562 (URN)
Conference
5th IEEE International Conference on High Performance Computing and Communications (HPCC 2013), Zhangjiajie, China, November 13-15, 2013
Projects
PREMISE - Predictable Multicore Systems; AUTOSAR for Multi-Core in Automotive and Automation Industries
Available from: 2013-12-16. Created: 2013-12-16. Last updated: 2017-09-18. Bibliographically approved
6. Towards Energy-Aware Placement of Real-Time Virtual Machines in a Cloud Data Center
2015 (English). In: Proceedings - 2015 IEEE 17th International Conference on High Performance Computing and Communications, 2015 IEEE 7th International Symposium on Cyberspace Safety and Security and 2015 IEEE 12th International Conference on Embedded Software and Systems, HPCC-CSS-ICESS 2015, 2015, p. 1657-1662. Conference paper, Published paper (Refereed)
Abstract [en]

Cloud computing is an evolving paradigm that is being adopted for a variety of applications. However, cloud infrastructures must be able to fulfill application requirements before cloud solutions can be adopted. Cloud infrastructure providers communicate the characteristics of their services to their customers through Service Level Agreements (SLAs). For a real-time application to be able to use cloud technology, cloud infrastructure providers have to be able to provide timing guarantees in the SLAs. In this paper, we present our ongoing work on a cloud solution in which periodic tasks are provided as a service in the Software as a Service (SaaS) model. Tasks belonging to a certain application are mapped into a Virtual Machine (VM). We also study the problem of VM placement on a cloud infrastructure. We propose a placement mechanism that minimizes the energy consumption of the data center by consolidating VMs onto a minimum number of servers while respecting the timing requirements of the virtual machines.
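To illustrate the kind of consolidation described above (a minimal sketch, not the proposed placement mechanism), the snippet below packs VMs, each characterized by its CPU utilization, onto as few servers as possible using a first-fit-decreasing heuristic, accepting a VM on a server only while the summed utilization stays within a schedulability bound. The utilization values and the bound of 1.0 (the EDF bound for implicit-deadline periodic tasks on one processor) are assumptions made for this example.

```python
def consolidate(vm_utilizations, capacity=1.0):
    """First-fit decreasing: pack VM utilizations onto as few servers as possible
    without letting any server exceed the schedulability bound `capacity`."""
    servers = []                                     # summed utilization per active server
    for u in sorted(vm_utilizations, reverse=True):  # place the largest VMs first
        for i, load in enumerate(servers):
            if load + u <= capacity:                 # fits: timing bound still respected
                servers[i] = load + u
                break
        else:
            servers.append(u)                        # no fit anywhere: power on a new server
    return servers

if __name__ == "__main__":
    # Hypothetical VM utilizations (sum of C_i/T_i of each VM's periodic tasks).
    print(consolidate([0.6, 0.3, 0.45, 0.2, 0.5]))   # -> 3 servers: [0.9, 0.95, 0.2]
```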

National Category
Computer Systems
Identifiers
urn:nbn:se:mdh:diva-29238 (URN)
10.1109/HPCC-CSS-ICESS.2015.22 (DOI)
000380408100272 ()
2-s2.0-84949578819 (Scopus ID)
9781479989362 (ISBN)
Conference
17th IEEE International Conference on High Performance Computing and Communications, IEEE 7th International Symposium on Cyberspace Safety and Security and IEEE 12th International Conference on Embedded Software and Systems, HPCC-ICESS-CSS 2015; New York; United States; 24 August 2015 through 26 August 2015
Projects
ARROWS - Design Techniques for Adaptive Embedded Systems; PRESS - Predictable Embedded Software Systems
Available from: 2015-10-06. Created: 2015-09-29. Last updated: 2017-09-18. Bibliographically approved

Open Access in DiVA

fulltext (1039 kB), 41 downloads
File information
File name: FULLTEXT02.pdf
File size: 1039 kB
Checksum (SHA-512): bfcec93a09546733bcdbb95815263bfe43482b4fdded545b3d4c16c348aefb5a1fb64d00f506817293561fe99b6a2759fbb506f36001dd56255a6b07d624888a
Type: fulltext
Mimetype: application/pdf

Authority records

Faragardi, Hamid Reza
