mdh.se Publications
1 - 23 of 23
  • 1.
    Faragardi, Hamid Reza
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems. hamid.faragardi@uibk.ac.at.
    Optimizing Timing-Critical Cloud Resources in a Smart Factory, 2018. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis addresses the topic of resource efficiency in the context of timing-critical components that are used in the realization of a smart factory. The concept of the smart factory is a recent paradigm for building future production systems that are both smarter and more flexible. When it comes to the realization of a smart factory, three principal elements play a significant role, namely Embedded Systems, the Internet of Things (IoT) and Cloud Computing. In a smart factory, efficient use of computing and communication resources is a prerequisite not only to obtain a desirable performance for running industrial applications, but also to minimize the deployment cost of the system in terms of the size and number of resources that are required to run industrial applications with an acceptable level of performance. Most industrial applications that are involved in smart factories, e.g., automation and manufacturing applications, are subject to a set of strict timing constraints that must be met for the applications to operate properly. Such applications, including the underlying hardware and software components that are used to run the application, constitute a real-time system. In real-time systems, the first and major concern of the system designer is to provide a solution where all timing constraints are met. To do so we need a time-predictable IoT/Cloud Computing framework to deal with the real-time constraints that are inherent in industrial applications running in a smart factory. Afterwards, with respect to the time-predictable framework, the number of required computing and communication resources can and should be optimized such that the deployed system is cost efficient. In this thesis, to investigate and present solutions that provide and improve the resource efficiency of computing and communication resources in a smart factory, we conduct research following three themes: (i) multi-core embedded processors, which are the key element in terms of computing components embedded in the machinery of a smart factory, (ii) cloud computing data centers, as the supplier of massive data storage and large computational power, and (iii) IoT, for providing the interconnection of computing components embedded in the objects of a smart factory. Each of these themes is targeted separately to optimize resource efficiency. For each theme, we identify key challenges when it comes to achieving a resource-efficient design of the system. We then formulate the problem and propose solutions to optimize the resource efficiency of the system, while satisfying all timing constraints reflected in the model. We then propose a comprehensive resource allocation mechanism to optimize the resource efficiency in the whole system while considering the characteristics of each of these research themes. The experimental results indicate a clear improvement when it comes to timing-critical IoT/Cloud Computing resources in a smart factory. At the level of multi-core embedded devices, the total CPU usage of a quad-core processor is shown to be improved by 11.2%. At the level of Cloud Computing, the number of cloud servers that are required to execute a given set of real-time applications is shown to be reduced by 25.5%. In terms of network components that are used to collect sensor data, our proposed approach reduces the total deployment cost of the system by 24%. In summary, these results all contribute towards the realization of a future smart factory.

  • 2.
    Faragardi, Hamid Reza
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Resource Optimization in Multi-processor Real-time Systems, 2017. Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis addresses the topic of resource efficiency in multiprocessor systems in the presence of timing constraints. 

     Nowadays, almost wherever you look, you find a computing system. Most computing systems employ a multiprocessor platform. Multiprocessor systems can be found in a broad spectrum of computing systems ranging from a tiny chip hosting multiple cores to large geographically-distributed cloud data centers connected by the Internet. In multiprocessor systems, efficient use of computing resources is a substantial element when it comes to achieving a desirable performance for running software applications. 

     Most industrial applications, e.g., automotive and avionics applications, are subject to a set of real-time constraints that must be met. Such kinds of applications, along with the underlying hardware and software components running the application, constitute a real-time system. In real-time systems, the first and major concern of the system designer is to provide a solution where all timing constraints are met. Therefore, in multiprocessor real-time systems, not only resource efficiency, but also meeting all the timing requirements, is a major concern. 

     Industrie 4.0 is the current trend in automation and manufacturing when it comes to creating the next generation of smart factories. Two categories of multiprocessor systems play a significant role in the realization of such a smart factory: 1) multi-core processors, which are the key computing element of embedded systems, and 2) cloud computing data centers, as the supplier of massive data storage and large computational power. Both of these categories are considered in the thesis, i.e., 1) the efficient use of embedded multi-core processors, where multiple processors are located on the same chip, applied to execute a real-time application, and 2) the efficient use of multi-processors within a cloud computing data center. We address these two categories of multi-processor systems separately.

     For each of them, we identify the key challenges to achieve a resource-efficient design of the system. We then formulate the problem and propose optimization solutions to optimize the efficiency of the system, while satisfying all timing constraints. Introducing a resource-efficient solution for these two categories of multi-processor systems facilitates the deployment of Industrie 4.0 in smart manufacturing factories, where multi-core embedded processors and cloud computing data centers are two central cornerstones.

  • 3.
    Faragardi, Hamid Reza
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Dehnavi, Saed
    University of Tehran, Iran.
    Kargahi, Mehdi
    University of Tehran, Iran.
    Papadopoulos, Alessandro
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Nolte, Thomas
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems. IS (Embedded Systems).
    A Time-Predictable Fog-Integrated Cloud Framework: One Step Forward in the Deployment of a Smart Factory, 2018. In: CSI International Symposium on Real-Time and Embedded Systems and Technologies REST'18, 2018, p. 54-62. Conference paper (Refereed)
    Abstract [en]

    This paper highlights cloud computing as one of the principal building blocks of a smart factory, providing a huge data storage space and a highly scalable computational capacity. The cloud computing system used in a smart factory should be time-predictable to be able to satisfy hard real-time requirements of the various applications existing in manufacturing systems. Interleaving an intermediate computing layer, called fog, between the factory and the cloud data center is a promising solution to deal with the latency requirements of hard real-time applications. In this paper, a time-predictable cloud framework is proposed which is able to satisfy end-to-end latency requirements in a smart factory. To propose such an industrial cloud framework, we not only use existing real-time technologies such as Industrial Ethernet and the Real-time XEN hypervisor, but we also discuss unaddressed challenges. Among the unaddressed challenges, the partitioning of a given workload between the fog and the cloud is targeted. Addressing the partitioning problem not only provides a resource provisioning mechanism, but it also gives us a prominent design decision specifying how much computing resource is required to develop the fog platform, and how large the minimum communication bandwidth between the fog and the cloud data center should be.
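    As a rough illustration of the fog/cloud workload partitioning mentioned above (a minimal sketch, not the framework proposed in the paper), the Python snippet below greedily splits tasks between a capacity-limited fog layer and the cloud; the per-task CPU demands, latency bounds and round-trip latencies are invented example values.

    # Illustrative sketch only: a naive fog/cloud partitioner under assumed
    # parameters (fog capacity, per-task CPU demand and latency bound, fixed
    # round-trip latencies). The paper's partitioning model is richer.
    FOG_RTT_MS = 2.0      # assumed fog round-trip latency
    CLOUD_RTT_MS = 40.0   # assumed cloud round-trip latency

    def partition(tasks, fog_capacity):
        """tasks: list of dicts with 'name', 'cpu', 'latency_bound_ms'.
        Returns (fog, cloud, infeasible) lists of task names."""
        fog, cloud, infeasible = [], [], []
        used = 0.0
        # Handle the tightest latency bounds first.
        for t in sorted(tasks, key=lambda t: t["latency_bound_ms"]):
            if t["latency_bound_ms"] >= CLOUD_RTT_MS:
                cloud.append(t["name"])        # cloud is fast enough for this task
            elif t["latency_bound_ms"] >= FOG_RTT_MS and used + t["cpu"] <= fog_capacity:
                fog.append(t["name"])          # must stay close to the factory
                used += t["cpu"]
            else:
                infeasible.append(t["name"])   # needs more fog capacity
        return fog, cloud, infeasible

    if __name__ == "__main__":
        demo = [{"name": "ctrl-loop", "cpu": 0.4, "latency_bound_ms": 5},
                {"name": "quality-check", "cpu": 0.3, "latency_bound_ms": 30},
                {"name": "analytics", "cpu": 0.8, "latency_bound_ms": 500}]
        print(partition(demo, fog_capacity=1.0))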

  • 4.
    Faragardi, Hamid Reza
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Dehnavi, Saed
    University of Tehran, Iran.
    Nolte, Thomas
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Kargahi, Mehdi
    An Energy-Aware Time-Predictable Cloud Data Center. In: Software, practice & experience, ISSN 0038-0644, E-ISSN 1097-024X. Article in journal (Refereed)
  • 5.
    Faragardi, Hamid Reza
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems. University of Innsbruck, Innsbruck, Austria.
    Dehnavi, Saeid
    University of Tehran, School of Electrical and Computer Engineering, College of Engineering, Tehran, Iran.
    Nolte, Thomas
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Kargahi, Mehdi
    University of Tehran, School of Electrical and Computer Engineering, College of Engineering, Tehran, Iran; Institute for Research in Fundamental Sciences (IPM), School of Computer Science, Tehran, Iran.
    Fahringer, Thomas
    University of Innsbruck, Institute of Computer Science, Distributed and Parallel Systems Group, Innsbruck, Austria.
    An energy-aware resource provisioning scheme for real-time applications in a cloud data center, 2018. In: Software, practice & experience, ISSN 0038-0644, E-ISSN 1097-024X, Vol. 48, no 10, p. 1734-1757. Article in journal (Refereed)
    Abstract [en]

    Based on a pay-as-you-go model, cloud computing provides the possibility of hosting pervasive applications from both academic and business domains. However, data centers hosting cloud applications consume huge amounts of electrical energy, contributing to high operational costs and a large carbon footprint for the environment. Energy-aware resource provisioning is an effective solution to diminish the energy consumption of cloud data centers. Recently, a growing trend has emerged where cloud technology is used to run periodic real-time applications such as multimedia, telecommunication, video gaming, and industrial applications. In order for a real-time application to be able to use cloud services, cloud providers have to be able to provide timing guarantees. In this paper, we introduce an energy-aware resource provisioning mechanism for cloud data centers that serve real-time periodic tasks following the Software as a Service model. The proposed method is compared against an energy-aware version of RT-OpenStack, a recently proposed approach to provide a time-predictable version of OpenStack. The experimental results show that our proposed resource provisioning method outperforms the energy-aware version of RT-OpenStack by 16.01%, 25.45%, and 25.45% in terms of energy consumption, number of used servers, and average utilization of used servers, respectively. Moreover, from a scalability perspective, the advantage of the proposed method is even more pronounced for large-scale data centers.
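    To illustrate the consolidation idea behind such provisioning (a hedged sketch only, not the paper's mechanism), the snippet below packs periodic tasks onto as few identical servers as possible using first-fit decreasing on utilization with a per-server EDF bound; the task set and the single-core-per-server assumption are invented.

    # Minimal consolidation sketch: fewer active servers generally means lower
    # energy under the common assumption that idle servers can be switched off.
    def consolidate(tasks, capacity=1.0):
        """tasks: list of (wcet, period) pairs. Returns the list of server loads."""
        utils = sorted((c / t for c, t in tasks), reverse=True)
        servers = []                        # each entry is a server's total utilization
        for u in utils:
            for i, load in enumerate(servers):
                if load + u <= capacity:    # schedulable under EDF on one core
                    servers[i] = load + u
                    break
            else:
                servers.append(u)           # open a new server
        return servers

    if __name__ == "__main__":
        taskset = [(2, 10), (1, 4), (3, 20), (5, 8), (1, 5)]
        loads = consolidate(taskset)
        print(len(loads), "servers, loads:", [round(l, 2) for l in loads])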

  • 6.
    Faragardi, Hamid Reza
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Fotouhi, Hossein
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Nolte, Thomas
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Rahmani, Rahim
    Stockholm University, Stockholm, Sweden.
    A Cost Efficient Design of a Multi-Sink Multi-Controller WSN in a Smart Factory, 2017. In: Proceedings - 2017 IEEE 19th Intl Conference on High Performance Computing and Communications, HPCC 2017, 2017 IEEE 15th Intl Conference on Smart City, SmartCity 2017 and 2017 IEEE 3rd Intl Conference on Data Science and Systems, DSS 2017, 2017, p. 594-602. Conference paper (Refereed)
    Abstract [en]

    The Internet of Things (IoT), one of the key elements of a smart factory, is dubbed Industrial IoT (IIoT). Software-defined networking is a technique that benefits network management in IIoT applications by providing network reconfigurability. In this way, controllers are integrated within the network to advertise routing rules dynamically based on network and link changes. We consider controllers within Wireless Sensor Networks (WSNs) for IIoT applications in such a way as to provide reliability and timeliness. Network reliability is addressed for the case of node failure by considering multiple sinks and multiple controllers. Real-time requirements are implicitly applied by limiting the number of hops (maximum path-length) between sensors and sinks/controllers, and by confining the maximum workload on each sink/controller. Deployment planning of sinks should ensure that when a sink or controller fails, the network is still connected. In this paper, we target the challenge of placing multiple sinks and controllers, while ensuring that each sensor node is covered by multiple sinks (k sinks) and multiple controllers (k' controllers). We evaluate the proposed algorithm against the benchmark GRASP-MSP through extensive experiments, and show that our approach outperforms the benchmark by lowering the total deployment cost by up to 24%. The reduction of the total deployment cost is achieved not only as a result of decreasing the number of required sinks and controllers but also by selecting cost-effective sinks/controllers among all candidate sinks/controllers.
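    For intuition only, a greedy stand-in for this kind of k-coverage placement is sketched below; the candidate sites, per-site costs, hop distances and the cost-per-new-coverage greedy rule are assumptions, and neither GRASP-MSP nor the paper's full constraint set (controllers, connectivity after failures, workload limits) is reproduced.

    # Greedy sketch: repeatedly buy the site with the lowest cost per newly
    # covered (sensor, remaining-coverage) demand until every sensor is
    # covered by k selected sites within max_hops.
    def greedy_k_cover(sensors, sites, cost, hops, k, max_hops):
        """sensors/sites: id lists; cost[s]: site cost; hops[(v, s)]: hop distance."""
        need = {v: k for v in sensors}          # remaining coverage per sensor
        chosen = []
        while any(need.values()):
            best, best_score = None, None
            for s in sites:
                if s in chosen:
                    continue
                gain = sum(1 for v in sensors
                           if need[v] > 0 and hops[(v, s)] <= max_hops)
                if gain == 0:
                    continue
                score = cost[s] / gain          # cost per unit of new coverage
                if best_score is None or score < best_score:
                    best, best_score = s, score
            if best is None:
                raise ValueError("coverage infeasible with the given candidates")
            chosen.append(best)
            for v in sensors:
                if need[v] > 0 and hops[(v, best)] <= max_hops:
                    need[v] -= 1
        return chosen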

  • 7.
    Faragardi, Hamid Reza
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Lisper, Björn
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Nolte, Thomas
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Towards a Communication-efficient Mapping of AUTOSAR Runnables on Multi-cores, 2013. Conference paper (Refereed)
    Abstract [en]

    Multi-core technology is recognized as a key component to develop new cost-efficient products. It can lead to a reduction of the overall hardware cost through hardware consolidation. However, it also results in tremendous challenges related to the combination of predictability and performance. The AUTOSAR consortium has developed the worldwide standard for automotive embedded software systems. One of the prominent aims of this standard is to support multi-core systems. In this paper, the ongoing work on addressing the challenge of achieving a resource-efficient and predictable mapping of AUTOSAR runnables onto a multi-core system is discussed. The goal is to minimize the runnables' communication cost besides meeting the timing and precedence constraints of the runnables. The basic notion utilized in this research is to consider runnable granularity, which leads to an increased flexibility in allocating runnables to various cores, compared to task granularity, in which all of the runnables hosted on a task must be allocated to the same core. This increased flexibility can potentially reduce the communication cost. In addition, a heuristic algorithm is introduced to create a task set according to the mapping of runnables on the cores. In our current work, we are formulating the problem as an Integer Linear Programming (ILP) problem, so that conventional ILP solvers can be easily applied to derive a solution.
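    As a hedged illustration of what such an ILP can look like (a toy model, not the paper's formulation: the timing and precedence constraints are omitted, and the runnables, utilizations, communication costs, 2-core platform and the use of the PuLP library are all assumptions for the example), the sketch below minimizes inter-core communication cost subject to a per-core utilization bound.

    # Toy ILP: assign runnables to cores, pay comm[p] whenever a communicating
    # pair p is split across cores, keep each core's utilization below 1.0.
    import pulp

    runnables = ["r1", "r2", "r3", "r4"]
    util = {"r1": 0.3, "r2": 0.4, "r3": 0.2, "r4": 0.5}            # CPU demand
    comm = {("r1", "r2"): 8, ("r2", "r3"): 5, ("r3", "r4"): 3}      # cost if split
    cores = [0, 1]

    prob = pulp.LpProblem("runnable_mapping", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", (runnables, cores), cat="Binary")   # x[r][c]: r on c
    split = {p: pulp.LpVariable(f"split_{i}", cat="Binary")
             for i, p in enumerate(comm)}       # 1 if pair p lands on different cores

    prob += pulp.lpSum(comm[p] * split[p] for p in comm)               # objective
    for r in runnables:                     # each runnable on exactly one core
        prob += pulp.lpSum(x[r][c] for c in cores) == 1
    for c in cores:                         # simple per-core utilization bound
        prob += pulp.lpSum(util[r] * x[r][c] for r in runnables) <= 1.0
    for (i, j) in comm:                     # force split = 1 when assignments differ
        for c in cores:
            prob += split[(i, j)] >= x[i][c] - x[j][c]
            prob += split[(i, j)] >= x[j][c] - x[i][c]

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    for r in runnables:
        print(r, "-> core", next(c for c in cores if pulp.value(x[r][c]) > 0.5))
    print("inter-core communication cost:", pulp.value(prob.objective))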

  • 8.
    Faragardi, Hamid Reza
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Lisper, Björn
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Sandström, K.
    ABB Corporate Research, Västerås, Sweden.
    Nolte, Thomas
    ABB Corporate Research, Västerås, Sweden.
    A communication-aware solution framework for mapping AUTOSAR runnables on multi-core systems, 2014. In: 19th IEEE International Conference on Emerging Technologies and Factory Automation, ETFA 2014, 2014, Article number 7005244. Conference paper (Refereed)
    Abstract [en]

    An AUTOSAR-based software application contains a set of software components, each of which encapsulates a set of runnable entities. In fact, the mission of the system is fulfilled as a result of the collaboration between the runnables. Several trends have recently emerged to utilize multi-core technology to run AUTOSAR-based software. The overhead of communication between the runnables is not only one of the major performance bottlenecks in multi-core processors, but also the main source of unpredictability in the system. Appropriate mapping of the runnables onto a set of tasks (called the mapping process) along with proper allocation of the tasks to processing cores (called the task allocation process) can significantly reduce the communication overhead. In this paper, three solutions are suggested, each of which comprises both the mapping and the allocation processes. The goal is to maximize key performance aspects by reducing the overall inter-runnable communication time while satisfying given timing and precedence constraints. A large number of randomly generated experiments are carried out to demonstrate the efficiency of the proposed solutions.

  • 9.
    Faragardi, Hamid Reza
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Lisper, Björn
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Sandström, Kristian
    RISE SICS, Västerås, Sweden.
    Nolte, Thomas
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    A resource efficient framework to run automotive embedded software on multi-core ECUs, 2018. In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, p. 64-83. Article in journal (Refereed)
    Abstract [en]

    The increasing functionality and complexity of automotive applications requires not only the use of more powerful hardware, e.g., multi-core processors, but also efficient methods and tools to support design decisions. Component-based software engineering proved to be a promising solution for managing software complexity and allowing for reuse. However, there are several challenges inherent in the intersection of resource efficiency and predictability of multi-core processors when it comes to running component-based embedded software. In this paper, we present a software design framework addressing these challenges. The framework includes both mapping of software components onto executable tasks, and the partitioning of the generated task set onto the cores of a multi-core processor. This paper aims at enhancing resource efficiency by optimizing the software design with respect to: 1) the inter-software-component communication cost, 2) the cost of synchronization among dependent transactions of software components, and 3) the interaction of software components with the basic software services. An engine management system, one of the most complex automotive sub-systems, is considered as a use case, and the experimental results show a reduction of up to 11.2% total CPU usage on a quad-core processor, in comparison with the common framework in the literature.

  • 10.
    Faragardi, Hamid Reza
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Lisper, Björn
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Sandström, Kristian
    ABB Corporate Research, Västerås, Sweden.
    Nolte, Thomas
    ABB Corporate Research, Västerås, Sweden.
    An efficient scheduling of AUTOSAR runnables to minimize communication cost in multi-core systems, 2014. In: 2014 7th International Symposium on Telecommunications, IST 2014, 2014, p. 41-48. Conference paper (Refereed)
    Abstract [en]

    The AUTOSAR consortium has developed the worldwide standard for automotive embedded software systems. From a processor perspective, AUTOSAR was originally developed for single-core processor platforms. Recent trends have raised the desire to use multi-core processors to run AUTOSAR software. However, there are several challenges in reaching a highly efficient and predictable design of AUTOSAR-based embedded software on multi-core processors. In this paper a solution framework comprising both the mapping of runnables onto a set of tasks and the scheduling of the generated task set on a multi-core processor is suggested. The goal of the work presented in this paper is to minimize the overall inter-runnable communication cost besides meeting all corresponding timing and precedence constraints. The proposed solution framework is evaluated and compared with an exhaustive method to demonstrate its convergence to an optimal solution. Since the exhaustive method is not applicable to large instances of the problem, the proposed framework is also compared with a well-known meta-heuristic algorithm to substantiate the framework's capability to scale up. The experimental results clearly demonstrate the high efficiency of the solution in terms of both communication cost and average processor utilization.

  • 11.
    Faragardi, Hamid Reza
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Rajabi, A.
    School of ECE, University of Tehran, Iran.
    Sandström, Kristian
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Nolte, Thomas
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    EAICA: An energy-aware resource provisioning algorithm for Real-Time Cloud services, 2016. In: IEEE International Conference on Emerging Technologies and Factory Automation, ETFA, 2016. Conference paper (Refereed)
    Abstract [en]

    Cloud computing is receiving increasing attention when it comes to providing a wide range of cost-effective services. In this context, the energy consumption of communication and computing resources contributes to a major portion of the cost of services. On the other hand, growing energy consumption not only results in a higher operational cost, but it also causes negative environmental impacts. A large number of cloud applications in, e.g., telecommunication, multimedia, and video gaming, have real-time requirements. A cloud computing system hosting such applications, which requires a strict timing guarantee for its provided services, is denoted a Real-Time Cloud (RTC). Minimizing energy consumption in an RTC is a complicated task, as common methods used for decreasing energy consumption can potentially lead to timing violations. In this paper, we present an online energy-aware resource provisioning framework to reduce the deadline miss ratio for real-time cloud services. The proposed provisioning framework not only considers the energy consumption of servers but also takes the energy consumption of the communication network into account, to provide a holistic solution. An extensive range of simulation results, based on real data, shows a noticeable improvement regarding energy consumption while keeping the number of timing violations below 1% on average.

  • 12.
    Faragardi, Hamid Reza
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Rajabi, A.
    School of Electrical and Computer Engineering, University of Tehran, Tehran, Iran .
    Shojaee, R.
    School of Electrical and Computer Engineering, University of Tehran, Tehran, Iran .
    Nolte, Thomas
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Towards energy-aware resource scheduling to maximize reliability in cloud computing systems, 2013. In: Proc. - IEEE Int. Conf. High Perform. Comput. Commun., HPCC & IEEE Int. Conf. Embedded Ubiquitous Comput., EUC, 2013, p. 1469-1479. Conference paper (Refereed)
    Abstract [en]

    Cloud computing has become increasingly popular due to the deployment of cloud solutions that enable enterprises to reduce costs and gain more operational flexibility. Reliability is a key metric for assessing performance in such systems. Fault tolerance methods are extensively used to enhance reliability in Cloud Computing Systems (CCS). However, these methods impose extra hardware and/or software cost. Proper resource allocation is an alternative approach which can significantly improve system reliability without any extra overhead. On the other hand, contemplating reliability irrespective of energy consumption and Quality of Service (QoS) requirements is not desirable in CCSs. In this paper, an analytical model to analyze system reliability alongside energy consumption and QoS requirements is introduced. Based on the proposed model, a new online resource allocation algorithm to find the right compromise between system reliability and energy consumption while satisfying QoS requirements is suggested. The algorithm is a new swarm intelligence technique based on imperialist competition which elaborately combines the strengths of some well-known meta-heuristic algorithms with an effective fast local search. A wide range of simulation results, based on real data, clearly demonstrates the high efficiency of the proposed algorithm.

  • 13.
    Faragardi, Hamid Reza
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Rajabi, Aboozar
    University of Tehran, Tehran, Iran.
    Nolte, Thomas
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Heidarizadeh, Amir Hosein
    A Profit-aware Allocation of High Performance Computing Applications on Distributed Cloud Data Centers with Environmental Considerations, 2014. In: CSI Journal on Computer Science and Engineering JCSE, Vol. 2, no 1, p. 28-38. Article in journal (Refereed)
    Abstract [en]

    A Set of Geographically Distributed Cloud data centers (SGDC) is a promising platform to run a large number of High Performance Computing Applications (HPCAs) in a cost-efficient manner. Energy consumption is a key factor affecting the profit of a cloud provider. In an SGDC, as the data centers are located in different corners of the world, the cost of energy consumption and the amount of CO2 emission vary significantly among the data centers. Therefore, in such systems a proper allocation of HPCAs not only reduces CO2 emissions, but also substantially increases the provider's profit. Furthermore, CO2 emission reduction mitigates destructive environmental impacts. In this paper, the problem of allocating a set of HPCAs on an SGDC is discussed, and a two-level allocation framework is introduced to deal with the problem. The proposed framework is able to reach a good compromise between CO2 emission and the provider's profit while satisfying the HPCAs' deadline and memory constraints. Simulation results based on a real intensive workload demonstrate that the proposed framework reduces CO2 emissions by 17% and increases the provider's profit by 9% on average.
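    As a loose, illustrative counterpart to the two-level idea (not the framework from the paper), the snippet below greedily assigns each HPCA to the data center that minimizes a weighted combination of energy price and carbon intensity, subject to a capacity check; all prices, carbon factors, capacities and the weighting parameter are assumed example values.

    # Greedy profit/CO2-aware placement sketch.
    def allocate(apps, centers, alpha=0.5):
        """apps: list of (name, core_hours); centers: name ->
        {'price': cost per core-hour, 'co2': kgCO2 per core-hour, 'capacity': core-hours}.
        alpha weighs cost against CO2 (0 = CO2 only, 1 = cost only)."""
        placement = {}
        left = {c: centers[c]["capacity"] for c in centers}
        for name, demand in sorted(apps, key=lambda a: -a[1]):   # big jobs first
            candidates = [c for c in centers if left[c] >= demand]
            if not candidates:
                placement[name] = None                           # rejected
                continue
            best = min(candidates,
                       key=lambda c: alpha * centers[c]["price"]
                                     + (1 - alpha) * centers[c]["co2"])
            placement[name] = best
            left[best] -= demand
        return placement

    if __name__ == "__main__":
        centers = {"eu": {"price": 0.09, "co2": 0.25, "capacity": 500},
                   "us": {"price": 0.07, "co2": 0.45, "capacity": 800}}
        print(allocate([("sim-A", 300), ("sim-B", 400)], centers, alpha=0.3))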

  • 14.
    Faragardi, Hamid Reza
    et al.
    Mälardalen University, School of Innovation, Design and Engineering.
    Rajabi, Aboozar
    University of Tehran, Tehran, Iran.
    Shojaee, Reza
    University of Tehran, Tehran, Iran.
    Nolte, Thomas
    Mälardalen University, School of Innovation, Design and Engineering.
    Towards Energy-Aware Resource Scheduling to Maximize Reliability in Cloud Computing Systems, 2013. Conference paper (Refereed)
    Abstract [en]

    Cloud computing has become increasingly popular due to the deployment of cloud solutions that enable enterprises to reduce costs and gain more operational flexibility. Reliability is a key metric for assessing performance in such systems. Fault tolerance methods are extensively used to enhance reliability in Cloud Computing Systems (CCS). However, these methods impose extra hardware and/or software cost. Proper resource allocation is an alternative approach which can significantly improve system reliability without any extra overhead. On the other hand, contemplating reliability irrespective of energy consumption and Quality of Service (QoS) requirements is not desirable in CCSs. In this paper, an analytical model to analyze system reliability alongside energy consumption and QoS requirements is introduced. Based on the proposed model, a new online resource allocation algorithm is suggested to find the right compromise between system reliability and energy consumption while satisfying QoS requirements. The algorithm is a new swarm intelligence technique based on imperialist competition which elaborately combines the strengths of some well-known meta-heuristic algorithms with an effective fast local search. A wide range of simulation results, based on real data, clearly demonstrates the high efficiency of the proposed algorithm.

  • 15.
    Faragardi, Hamid Reza
    et al.
    University of Tehran, Tehran, Iran.
    Shojaee, Reza
    University of Tehran, Tehran, Iran.
    Yazdani, Nasser
    University of Tehran, Tehran, Iran.
    Reliability-Aware Task Allocation in Distributed Computing Systems using Hybrid Simulated Annealing and Tabu Search, 2012. In: 14th IEEE International Conference on High Performance Computing and Communication HPCC'14, 2012, p. 1088-1095. Conference paper (Refereed)
    Abstract [en]

    Reliability is one of the important issues in the design of distributed computing systems (DCSs). This paper deals with the problem of task allocation in heterogeneous DCSs for maximizing system reliability under several resource constraints. Memory capacity, processing load and communication rate are the major constraints in the problem. The reliability-oriented task allocation problem is NP-hard, thus many algorithms have been presented to find a near-optimal solution. This paper presents a Hybrid of Simulated Annealing and Tabu Search (HSATS) that uses a non-monotonic cooling schedule to find a near-optimal solution within reasonable time. The HSATS algorithm was implemented and evaluated through experimental studies on a large number of randomly generated instances. Results show that the algorithm can obtain the optimal solution in most cases. When it fails to produce the optimal solution, the deviation is less than 0.2 percent. Therefore, in terms of solution quality, HSATS is significantly better than pure Simulated Annealing.
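    The following is a minimal sketch of the general idea of combining simulated annealing with a tabu list and occasional reheating; the actual HSATS cooling schedule, move operators and reliability model are not reproduced, and the toy objective, the reheating rule and all parameters below are assumptions made for the example.

    import math
    import random

    def hsats_like(init, objective, neighbours, iters=5000, t0=1.0, tabu_len=50):
        """Maximize objective(state) starting from init; neighbours(state) lists moves."""
        cur = best = init
        cur_val = best_val = objective(init)
        tabu, temp = [], t0
        for i in range(iters):
            options = [n for n in neighbours(cur) if n not in tabu] or neighbours(cur)
            cand = random.choice(options)
            val = objective(cand)
            # Accept improving moves, or worse ones with Boltzmann probability.
            if val > cur_val or random.random() < math.exp((val - cur_val) / max(temp, 1e-9)):
                cur, cur_val = cand, val
                tabu = (tabu + [cand])[-tabu_len:]       # remember recently visited states
            if cur_val > best_val:
                best, best_val = cur, cur_val
            # Non-monotonic cooling: mostly cool down, occasionally reheat.
            temp = t0 if i % 1000 == 999 else temp * 0.995
        return best, best_val

    if __name__ == "__main__":
        # Toy stand-in for "reliability": prefer balanced task-to-node assignments.
        def reliability(state):
            loads = [state.count(n) for n in range(3)]
            return -sum((l - 2) ** 2 for l in loads)
        def moves(state):
            return [state[:i] + (n,) + state[i + 1:]
                    for i in range(len(state)) for n in range(3) if n != state[i]]
        print(hsats_like((0,) * 6, reliability, moves))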

  • 16.
    Khalilzad, Nima
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Faragardi, Hamid Reza
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Nolte, Thomas
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Towards Energy-Aware Placement of Real-Time Virtual Machines in a Cloud Data Center, 2015. In: Proceedings - 2015 IEEE 17th International Conference on High Performance Computing and Communications, 2015 IEEE 7th International Symposium on Cyberspace Safety and Security and 2015 IEEE 12th International Conference on Embedded Software and Systems, HPCC-CSS-ICESS 2015, 2015, p. 1657-1662. Conference paper (Refereed)
    Abstract [en]

    Cloud computing is an evolving paradigm which is becoming an adoptable technology for a variety of applications. However, cloud infrastructures must be able to fulfill application requirements before cloud solutions are adopted. Cloud infrastructure providers communicate the characteristics of their services to their customers through Service Level Agreements (SLA). In order for a real-time application to be able to use cloud technology, cloud infrastructure providers have to be able to provide timing guarantees in the SLAs. In this paper, we present our ongoing work regarding a cloud solution in which periodic tasks are provided as a service in the Software as a Service (SaaS) model. Tasks belonging to a certain application are mapped to a Virtual Machine (VM). We also study the problem of VM placement on a cloud infrastructure. We propose a placement mechanism which minimizes the energy consumption of the data center by consolidating VMs onto a minimum number of servers while respecting the timing requirements of the virtual machines.

  • 17.
    Mahmud, Nesredin
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Rodriguez-Navas, Guillermo
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Faragardi, Hamid Reza
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Mubeen, Saad
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Seceleanu, Cristina
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Power-aware Allocation of Fault-tolerant Multi-rate AUTOSAR Applications, 2018. In: 25th Asia-Pacific Software Engineering Conference APSEC'18, 2018. Conference paper (Refereed)
    Abstract [en]

    This paper proposes an Integer Linear Programming optimization approach for the allocation of fault-tolerant embedded software applications that are developed using the AUTOSAR standard. The allocation takes into account the timing and reliability requirements of the multi-rate cause-effect chains in these applications and the heterogeneity of their execution platforms. The optimization objective is to minimize the total power consumption of these applications when they are distributed over more than one computing unit. The proposed approach is evaluated using a range of different software applications from the automotive domain, generated using a real-world automotive benchmark. The evaluation results indicate that the proposed allocation approach is effective and scalable while meeting the timing, reliability and power requirements of small- and medium-sized automotive software applications.

  • 18.
    Mousavi, Seyedeh Kosar
    et al.
    Islamic Azad University, Ramsar, Iran.
    Fazliahmadi, Saber
    Islamic Azad University, Tehran, Iran.
    Rasouli, Nayereh
    Karaj Branch, Technical and Vocational University, Alborz, Iran.
    Faragardi, Hamid Reza
    KTH, Stockholm, Sweden.
    Fotouhi, Hossein
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Fahringer, Thomas
    The University of Innsbruck, Innsbruck, Austria.
    A Budget-Constrained Placement of Controller Nodes for Maximizing the Network Performance in SDN-Enabled WSNs. In: The ISC International Journal of Information Security, ISSN 2008-2045. Article in journal (Refereed)
    Abstract [en]

    Software Defined Networking (SDN) is a novel technique to provide network reconfigurability in Wireless Sensor Networks (WSNs). SDN is highly suitable for WSNs where high scalability and high reliability are required. To realize the SDN concept, a set of additional nodes, referred to as SDN-controller nodes (or controllers for short), are integrated into the network. Controllers are responsible for advertising routing rules dynamically based on network and link changes. The introduction of controllers raises a new research challenge: determining the number and location of controller nodes in a WSN so as to maximize the network performance subject to both reliability and budget constraints, where the budget constraint restricts the maximum number of controller nodes deployed in the WSN. In this paper, we deal with the challenge of placing SDN-controller nodes by introducing an ILP model for the problem, which is then solved using the CPLEX ILP solver. We evaluate the results of the proposed method through comparison with the state-of-the-art method. Extensive experiments demonstrate that the proposed method reduces the maximum distance between sensors and controllers by 13% on average in comparison with the state-of-the-art method.
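    For illustration only, a brute-force stand-in for the placement problem is sketched below: it enumerates candidate subsets within the budget and keeps the one minimizing the maximum sensor-to-nearest-controller hop distance. The distance matrix and budget are invented, and the paper instead solves an ILP with CPLEX rather than enumerating subsets.

    from itertools import combinations

    def place_controllers(dist, budget):
        """dist[v][s]: hop distance from sensor v to candidate site s."""
        sensors = range(len(dist))
        sites = range(len(dist[0]))
        best_sel, best_obj = None, float("inf")
        for r in range(1, budget + 1):
            for sel in combinations(sites, r):
                # Worst-case distance from any sensor to its nearest chosen controller.
                obj = max(min(dist[v][s] for s in sel) for v in sensors)
                if obj < best_obj:
                    best_sel, best_obj = sel, obj
        return best_sel, best_obj

    if __name__ == "__main__":
        dist = [[1, 4, 3, 6, 2],   # 4 sensors x 5 candidate sites
                [5, 1, 4, 2, 6],
                [3, 5, 1, 4, 2],
                [6, 2, 5, 1, 3]]
        print(place_controllers(dist, budget=2))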

  • 19.
    Nikoueia, Reihaneh
    et al.
    Shahid Bahonar University of Kerman, Iran.
    Rasouli, Nayereh
    Technical and Vocational University, Karaj Branch, Alborz, Iran.
    Tahmasebi, Shirin
    Sharif University of Technology, Tehran, Iran.
    Zolfid, Somayeh
    University of Science and Technology, Tehran, Iran.
    Faragardi, Hamid Reza
    KTH Royal Institute of Technology, Stockholm, Sweden.
    Fotouhi, Hossein
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    A Quantum-Annealing-Based Approach to Optimize the Deployment Cost of a Multi-Sink Multi-Controller WSN. In: Procedia Computer Science, ISSN 1877-0509, E-ISSN 1877-0509. Article in journal (Refereed)
    Abstract [en]

    Software Defined Networking (SDN) provides significant network reconfiguration capability to Wireless Sensor Networks (WSNs). SDN is a promising technique for WSNs with high scalability and high reliability requirements. In SDN, a set of controller nodes are integrated into the network to advertise routing rules dynamically based on network and link changes. Determining the number and location of both sinks (which are in charge of collecting the sensor data) and controller nodes in a WSN, subject to both reliability and performance constraints, is an important research challenge. In this paper, to address this research challenge, we propose a Quantum Annealing approach that improves the deployment cost of the system by minimizing the number of required sinks and SDN controller nodes. The experiments show that our approach improves the deployment cost of the network against the state-of-the-art by 10.7% on average.
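    As a purely classical illustration of the optimization target (not the paper's method: the abstract describes a quantum annealing approach, which is not reproduced here), the sketch below minimizes an assumed penalty objective, site cost plus a penalty for uncovered sensors, using a simple simulated-annealing bit-flip loop over the selection vector; all costs, coverage sets and parameters are invented.

    import math
    import random

    def anneal(cost, covers, sensors, penalty=50.0, iters=20000, t0=5.0):
        """cost[s]: site cost; covers[s]: set of sensors reachable from site s."""
        sites = list(cost)
        x = {s: 0 for s in sites}                   # binary selection vector

        def energy(sel):
            uncovered = sum(1 for v in sensors
                            if not any(sel[s] and v in covers[s] for s in sites))
            return sum(cost[s] for s in sites if sel[s]) + penalty * uncovered

        cur = energy(x)
        for i in range(iters):
            s = random.choice(sites)
            x[s] ^= 1                               # flip one site in or out
            new = energy(x)
            temp = t0 * (1 - i / iters) + 1e-6
            if new <= cur or random.random() < math.exp((cur - new) / temp):
                cur = new
            else:
                x[s] ^= 1                           # undo the flip
        return {s for s in sites if x[s]}, cur

    if __name__ == "__main__":
        cost = {"A": 3, "B": 2, "C": 4}
        covers = {"A": {1, 2}, "B": {2, 3}, "C": {1, 3, 4}}
        print(anneal(cost, covers, sensors={1, 2, 3, 4}))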

  • 20.
    Rajabi, Aboozar
    et al.
    University of Tehran, Tehran, Iran.
    Faragardi, Hamid Reza
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Nolte, Thomas
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    An Efficient Scheduling of HPC Applications on Geographically Distributed Cloud Data Centers, 2014. In: Computer Networks and Distributed Systems: International Symposium, CNDS 2013, Tehran, Iran, December 25-26, 2013, Revised Selected Papers, Springer, 2014, p. 155-167. Chapter in book (Refereed)
    Abstract [en]

    Cloud computing provides a flexible infrastructure for IT industries to run their High Performance Computing (HPC) applications. Cloud providers deliver such computing infrastructures through a set of data centers called a cloud federation. The data centers of a cloud federation are usually distributed across the world. The profit of cloud providers strongly depends on the cost of energy consumption. As the data centers are located in various corners of the world, the cost of energy consumption and the amount of CO2 emission in different data centers vary significantly. Therefore, a proper allocation of HPC applications in such systems can result in a decrease of CO2 emission and a substantial increase of the provider's profit. Reduction of CO2 emission also mitigates destructive environmental impacts. In this paper, the problem of scheduling HPC applications on a geographically distributed cloud federation is scrutinized. To address the problem, we propose a two-level scheduler which is able to reach a good compromise between CO2 emission and the profit of the cloud provider. The scheduler should also satisfy all HPC applications' deadline and memory constraints. Simulation results based on a real intensive workload indicate that the proposed scheduler reduces CO2 emission by 11% while at the same time improving the provider's profit on average.

  • 21.
    Rajabi, Aboozar
    et al.
    University of Tehran, Tehran, Iran.
    Faragardi, Hamid Reza
    University of Tehran, Tehran, Iran.
    Yazdani, Nasser
    University of Tehran, Tehran, Iran.
    Communication-aware and Energy-efficient Resource Provisioning for Real-Time Cloud Services, 2013. In: CADS 2013, 2013, p. 125-129. Conference paper (Refereed)
    Abstract [en]

    The operating expense of data centers is a tremendous challenge in cloud systems. Energy consumption of communication equipment and computing resources contributes to a significant portion of this cost. In this paper, an online energy-aware resource provisioning framework to minimize the deadline miss rate for real-time cloud services is introduced. Communication-awareness is also taken into consideration in order to reduce the energy consumption of the network equipment. A wide range of simulation results, based on real data, clearly demonstrates a noticeable improvement in energy consumption while at the same time keeping the deadline miss rate below 1.5% on average.

  • 22.
    Razavi, R.
    et al.
    Sch. of ECE, Univ. of Tehran, Tehran, Iran.
    Rajabi, A.
    Sch. of ECE, Univ. of Tehran, Tehran, Iran.
    Faragardi, Hamid Reza
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Pourashraf, T.
    Sch. of ECE, Univ. of Tehran, Tehran, Iran.
    Yazdani, N.
    Sch. of ECE, Univ. of Tehran, Tehran, Iran.
    Energy-efficient scheduling of real-time cloud services using task consolidation and Dynamic Voltage Scaling, 2014. In: 2014 7th International Symposium on Telecommunications, IST 2014, 2014, p. 675-682. Conference paper (Refereed)
    Abstract [en]

    Energy consumption has attracted a lot of attention in the past few years, because energy reduction causes a significant mitigation of the negative impact on the environment along with an operational cost reduction. Energy-efficient task scheduling is an effective technique to decrease the energy consumption of Cloud Computing Systems (CCSs). In this paper, the problem of scheduling a set of precedence-constrained real-time services onto a set of heterogeneous servers is investigated. Each service contains a set of tasks bounded by a specific deadline. The main notion applied in this paper is to employ the consolidation approach along with the Dynamic Voltage Scaling (DVS) technique. The proposed scheduler is developed in three phases. Tasks' deadlines and a laxity metric are computed for each service according to the corresponding service deadline prior to the main scheduling phase. Afterwards, in order to consolidate the tasks onto the minimum number of servers, the algorithm estimates the required number of servers. Finally, in the last phase, the tasks are scheduled while the DVS technique is applied, considering the tasks' deadlines. The extensive experimental results clearly demonstrate that the proposed algorithm reduces the energy consumption of a CCS by 14% on average in comparison with the beam search algorithm. In addition, it outperforms the non-power-aware algorithm by 84%.
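    The DVS step can be illustrated in isolation with the small, assumed sketch below (not the paper's three-phase scheduler): each task is run at the lowest discrete frequency whose scaled execution time still fits the remaining time to the deadline; the frequency levels and timing values are invented.

    # Laxity-driven frequency selection: power typically grows superlinearly
    # with frequency, so the lowest deadline-safe level is preferred.
    FREQ_LEVELS = [0.4, 0.6, 0.8, 1.0]   # normalized frequencies (assumed)

    def pick_frequency(wcet_at_fmax, time_to_deadline):
        """Return the lowest frequency that still meets the deadline, else None."""
        for f in FREQ_LEVELS:
            if wcet_at_fmax / f <= time_to_deadline:
                return f
        return None

    if __name__ == "__main__":
        print(pick_frequency(wcet_at_fmax=4.0, time_to_deadline=8.0))   # -> 0.6
        print(pick_frequency(wcet_at_fmax=4.0, time_to_deadline=3.0))   # -> None (miss)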

  • 23.
    Shojaee, Reza
    et al.
    University of Tehran, Tehran, Iran.
    Faragardi, Hamid Reza
    Mälardalen University, School of Innovation, Design and Engineering. Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems. University of Tehran, Tehran, Iran.
    Yazdani, Nasser
    University of Tehran, Tehran, Iran.
    From Reliable Distributed System Toward Reliable Cloud by Cat Swarm Optimization, 2013. In: International Journal of Information and Communication Technology IJICT, ISSN 2251-6107, Vol. 5, no 4, p. 9-18. Article in journal (Refereed)
    Abstract [en]

    Distributed Systems (DSs) are usually complex systems composed of various components, and the cloud is a common type of DS. Reliability is a major challenge in the design of cloud systems and DSs in general. In this paper, an analytical model to analyze reliability in DSs with regard to task allocation is presented. Subsequently, this model is modified and a new model to analyze reliability in cloud systems with regard to Virtual Machine (VM) allocation is suggested. On the other hand, optimal task allocation in DSs is an NP-hard problem, thus finding exact solutions is limited to small-scale problems. This paper presents a new swarm intelligence technique based on the Cat Swarm Optimization (CSO) algorithm to find a near-optimal solution. For evaluating the algorithm, CSO is compared with the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). The experimental results show that, in contrast to PSO and GA, CSO achieves acceptable reliability within a reasonable execution time.
