mdh.se Publications
Papadopoulos, Alessandro (ORCID iD: orcid.org/0000-0002-1364-8127)
Publications (10 of 27)
Ioli, D., Falsone, A., Papadopoulos, A. & Prandini, M. (2019). A compositional modeling framework for the optimal energy management of a district network. Journal of Process Control, 74, 160-176
2019 (English). In: Journal of Process Control, ISSN 0959-1524, E-ISSN 1873-2771, Vol. 74, p. 160-176. Article in journal (Refereed), Published
Abstract [en]

This paper proposes a compositional modeling framework for the optimal energy management of a district network. The focus is on the cooling of buildings, which can share resources for the purpose of reducing maintenance costs and using devices at their maximal efficiency. Components of the network are described in terms of energy fluxes and combined via energy balance equations. Disturbances are accounted for as well, through their contribution in terms of energy. Different district configurations can be built, and the dimension and complexity of the resulting model depend both on the number and type of components and on the adopted disturbance description. Control inputs are available to efficiently operate and coordinate the district components, thus enabling energy management strategies that minimize the electrical energy costs or track a consumption profile agreed with the main grid operator.
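The compositional idea — components described by energy fluxes, combined via energy balance equations — can be conveyed with a minimal sketch. The component values and the chiller COP below are invented for illustration and are not taken from the paper:

```python
def district_balance(component_fluxes_kwh):
    """Energy balance: net cooling flux requested at a district node,
    summing the fluxes contributed by each component."""
    return sum(component_fluxes_kwh)

def chiller_electric_energy(cooling_request_kwh, cop=3.0):
    """Electrical energy a chiller draws to serve a cooling request,
    given its coefficient of performance (COP)."""
    return cooling_request_kwh / cop

# Two buildings requesting cooling and a storage unit discharging:
fluxes = [12.0, 8.0, -5.0]                  # kWh per time step
net = district_balance(fluxes)              # 15.0 kWh of cooling needed
electric = chiller_electric_energy(net)     # 5.0 kWh of electricity at COP 3
```

An optimizer would then choose the control inputs (for example, when the storage discharges) so that the electrical energy cost over the horizon is minimized.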

Place, publisher, year, edition, pages
Elsevier Ltd, 2019
Keywords
Building thermal regulation, Compositional systems, Energy management, Smart grid modeling, Electric power transmission networks, Compositional modeling, Cooling of buildings, District networks, Electrical energy costs, Energy balance equations, Energy management strategies, Smart grid, Thermal regulation, Smart power grids
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:mdh:diva-43055 (URN), 10.1016/j.jprocont.2017.10.005 (DOI), 000465050900015 (ISI), 2-s2.0-85033227619 (Scopus ID)
Available from: 2019-04-10 Created: 2019-04-10 Last updated: 2019-05-09. Bibliographically approved
Miloradović, B., Curuklu, B., Ekström, M. & Papadopoulos, A. (2019). Extended colored traveling salesperson for modeling multi-agent mission planning problems. In: ICORES 2019 - Proceedings of the 8th International Conference on Operations Research and Enterprise Systems. Paper presented at 8th International Conference on Operations Research and Enterprise Systems, ICORES 2019, 19 February 2019 through 21 February 2019 (pp. 237-244). SciTePress
2019 (English). In: ICORES 2019 - Proceedings of the 8th International Conference on Operations Research and Enterprise Systems, SciTePress, 2019, p. 237-244. Conference paper, Published paper (Refereed)
Abstract [en]

In recent years, multi-agent systems have been widely used in different missions, ranging from underwater to airborne. A mission typically involves a large number of agents and tasks, making it very hard for a human operator to create a good plan. A search for an optimal plan may take too long, and it is hard to estimate when the planner will finish. A genetic-algorithm-based planner is proposed to overcome this issue. The contribution of this paper is threefold. First, an Integer Linear Programming (ILP) formulation of a novel Extended Colored Traveling Salesperson Problem (ECTSP) is given. Second, a new objective function suitable for multi-agent mission planning problems is proposed. Finally, a repair algorithm that allows the use of common variation operators for the ECTSP is developed.
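The repair idea can be sketched as follows. In a colored TSP setting, each task carries a "color" (a required capability) and each agent owns a set of colors; crossover and mutation may produce assignments that violate these constraints. The function below is a hypothetical illustration, not the authors' algorithm: it reassigns each incompatible task to a randomly chosen compatible agent.

```python
import random

def repair(assignment, task_colors, agent_colors, rng=None):
    """Reassign tasks whose required color the assigned agent lacks."""
    rng = rng or random.Random(0)
    fixed = {}
    for task, agent in assignment.items():
        if task_colors[task] in agent_colors[agent]:
            fixed[task] = agent          # assignment already feasible
        else:
            compatible = [a for a, colors in agent_colors.items()
                          if task_colors[task] in colors]
            fixed[task] = rng.choice(compatible)
    return fixed

# 't1' needs a UAV capability but was assigned to ground vehicle 'a2':
task_colors = {"t1": "uav", "t2": "ugv"}
agent_colors = {"a1": {"uav"}, "a2": {"ugv"}}
repaired = repair({"t1": "a2", "t2": "a2"}, task_colors, agent_colors)
# repaired == {"t1": "a1", "t2": "a2"}
```

After repair, every chromosome is feasible again, so standard crossover and mutation operators can be applied without encoding the capability constraints into them.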

Place, publisher, year, edition, pages
SciTePress, 2019
Keywords
Colored traveling salesperson (CTSP), Genetic algorithms, Multi-agent mission planning, Integer programming, Operations research, Software agents, Human operator, Integer Linear Programming, Mission planning, Mission planning problem, Objective functions, Traveling salesperson problem, Variation operator, Multi agent systems
National Category
Computer Sciences
Identifiers
urn:nbn:se:mdh:diva-43305 (URN), 2-s2.0-85064712559 (Scopus ID), 9789897583520 (ISBN)
Conference
8th International Conference on Operations Research and Enterprise Systems, ICORES 2019, 19 February 2019 through 21 February 2019
Available from: 2019-05-09 Created: 2019-05-09 Last updated: 2019-05-09. Bibliographically approved
Faragardi, H. R., Dehnavi, S., Kargahi, M., Papadopoulos, A. & Nolte, T. (2018). A Time-Predictable Fog-Integrated Cloud Framework: One Step Forward in the Deployment of a Smart Factory. In: CSI International Symposium on Real-Time and Embedded Systems and Technologies REST'18. Paper presented at CSI International Symposium on Real-Time and Embedded Systems and Technologies REST'18, 09 May 2018, Tehran, Iran (pp. 54-62).
2018 (English). In: CSI International Symposium on Real-Time and Embedded Systems and Technologies REST'18, 2018, p. 54-62. Conference paper, Published paper (Refereed)
Abstract [en]

This paper highlights cloud computing as one of the principal building blocks of a smart factory, providing a huge data storage space and a highly scalable computational capacity. The cloud computing system used in a smart factory should be time-predictable, so that it can satisfy the hard real-time requirements of the various applications found in manufacturing systems. Interleaving an intermediate computing layer, called fog, between the factory and the cloud data center is a promising solution for dealing with the latency requirements of hard real-time applications. In this paper, a time-predictable cloud framework is proposed that is able to satisfy end-to-end latency requirements in a smart factory. To propose such an industrial cloud framework, we not only use existing real-time technologies such as Industrial Ethernet and the Real-time XEN hypervisor, but also discuss unaddressed challenges. Among these, the partitioning of a given workload between the fog and the cloud is targeted. Addressing the partitioning problem not only provides a resource provisioning mechanism, but also yields a prominent design decision: how much computing resource is required to build the fog platform, and how large the minimum communication bandwidth between the fog and the cloud data center should be.
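A simple latency-driven placement rule conveys the flavor of the fog/cloud partitioning problem. The rule and the task values below are illustrative assumptions, not the paper's partitioning method: a task whose deadline is already shorter than the cloud round-trip time must run on the fog, and the fog capacity to provision follows from the load placed there.

```python
def partition(tasks, cloud_rtt_ms):
    """Split (name, deadline_ms, load) tasks between fog and cloud:
    a task goes to the fog if the cloud round-trip alone would
    already break its deadline."""
    fog, cloud = [], []
    for name, deadline_ms, load in tasks:
        (fog if deadline_ms < cloud_rtt_ms else cloud).append((name, load))
    fog_capacity = sum(load for _, load in fog)   # capacity to provision
    return fog, cloud, fog_capacity

tasks = [("control-loop", 5, 2), ("analytics", 500, 8)]
fog, cloud, capacity = partition(tasks, cloud_rtt_ms=40)
# fog == [("control-loop", 2)], cloud == [("analytics", 8)], capacity == 2
```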

National Category
Computer Systems
Identifiers
urn:nbn:se:mdh:diva-38638 (URN), 10.1109/RTEST.2018.8397079 (DOI), 2-s2.0-85050457708 (Scopus ID), 9781538614754 (ISBN)
Conference
CSI International Symposium on Real-Time and Embedded Systems and Technologies REST'18, 09 May 2018, Tehran, Iran
Projects
PREMISE - Predictable Multicore Systems
Available from: 2018-02-12 Created: 2018-02-12 Last updated: 2019-01-04. Bibliographically approved
Frasheri, M., Curuklu, B., Ekström, M. & Papadopoulos, A. (2018). Adaptive Autonomy in a Search and Rescue Scenario. In: International Conference on Self-Adaptive and Self-Organizing Systems, SASO, Volume 2018-September. Paper presented at 12th IEEE International Conference on Self-Adaptive and Self-Organizing Systems, SASO 2018; Trento; Italy; 3 September 2018 through 7 September 2018 (pp. 150-155).
2018 (English). In: International Conference on Self-Adaptive and Self-Organizing Systems, SASO, Volume 2018-September, 2018, p. 150-155. Conference paper, Published paper (Refereed)
Abstract [en]

Adaptive autonomy plays a major role in the design of multi-robot and multi-agent systems, where the need for collaboration in achieving a common goal is of primary importance. In particular, adaptation becomes necessary to deal with dynamic environments and scarce resources. This paper proposes a mathematical framework for modelling the agents' willingness to interact and collaborate, together with a dynamic adaptation strategy for controlling the agents' behavior that accounts for factors such as progress toward a goal and the resources available for completing a task. The performance of the proposed strategy is evaluated in a fire rescue scenario, where a team of simulated mobile robots needs to extinguish all detected fires and save the individuals at risk while having limited resources. The simulations are implemented as a ROS-based multi-agent system, and the results show that the proposed adaptation strategy provides a more stable performance than a static collaboration policy.
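As a toy illustration of such a willingness signal (the update rule, gains, and inputs below are invented and are not the paper's model): an agent raises its willingness to interact when progress stalls, lowers it when its battery is full enough to work alone, and clamps the result to [0, 1].

```python
def willingness_update(w, progress, battery, k_p=0.3, k_b=0.3):
    """One update of an agent's willingness to interact.

    progress, battery in [0, 1]: low progress raises willingness,
    a charged battery lowers it; the result is clamped to [0, 1]."""
    w = w + k_p * (1.0 - progress) - k_b * battery
    return min(1.0, max(0.0, w))

# An agent that is stuck (progress 0.1) becomes more willing to
# interact than one that is nearly done (progress 0.9):
stuck = willingness_update(0.5, progress=0.1, battery=0.2)
done = willingness_update(0.5, progress=0.9, battery=0.2)
```

In a full system this scalar would gate whether the agent accepts or issues help requests at each control step.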

National Category
Engineering and Technology; Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:mdh:diva-40254 (URN), 10.1109/SASO.2018.00026 (DOI), 000459885200016 (ISI), 2-s2.0-85061910844 (Scopus ID), 9781538651728 (ISBN)
Conference
12th IEEE International Conference on Self-Adaptive and Self-Organizing Systems, SASO 2018; Trento; Italy; 3 September 2018 through 7 September 2018
Available from: 2018-07-18 Created: 2018-07-18 Last updated: 2019-03-14. Bibliographically approved
Papadopoulos, A., Bini, E., Baruah, S. & Burns, A. (2018). AdaptMC: A control-theoretic approach for achieving resilience in mixed-criticality systems. In: Leibniz International Proceedings in Informatics, LIPIcs. Paper presented at 30th Euromicro Conference on Real-Time Systems, ECRTS 2018, 3 June 2018 through 6 June 2018. Schloss Dagstuhl - Leibniz-Zentrum für Informatik GmbH, Dagstuhl Publishing
2018 (English). In: Leibniz International Proceedings in Informatics, LIPIcs, Schloss Dagstuhl - Leibniz-Zentrum für Informatik GmbH, Dagstuhl Publishing, 2018. Conference paper, Published paper (Refereed)
Abstract [en]

A system is said to be resilient if slight deviations from expected behavior during run-time do not lead to catastrophic degradation of performance: minor deviations should result in no more than minor performance degradation. In mixed-criticality systems, such degradation should additionally be criticality-cognizant. The applicability of control theory is explored for the design of resilient run-time scheduling algorithms for mixed-criticality systems. Recent results in control theory have shown how appropriately designed controllers can provide guaranteed service to hard real-time servers; this prior work is extended to allow such guarantees to be made concurrently to multiple criticality-cognizant servers. The applicability of this approach is explored via several experimental simulations in a dual-criticality setting. These experiments demonstrate that our control-based run-time schedulers can be synthesized in such a manner that bounded deviations from expected behavior result in no performance degradation for the high-criticality server and only bounded degradation for the lower-criticality one.
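The criticality-cognizant flavor of such a scheme can be conveyed with a toy budget adaptation rule (the rule and the gain below are invented for illustration and are not the AdaptMC controller): when the high-criticality server overruns, its budget grows at the expense of the low-criticality server's, so total utilization stays constant and only the low-criticality server degrades.

```python
def adapt_budgets(hi_budget, lo_budget, hi_overrun, k=0.5):
    """Shift budget from the low- to the high-criticality server in
    proportion to the observed high-criticality overrun."""
    delta = min(k * hi_overrun, lo_budget)   # LC budget cannot go negative
    return hi_budget + delta, lo_budget - delta

# A 10-unit overrun moves 5 budget units from LC to HC:
hi, lo = adapt_budgets(40.0, 30.0, hi_overrun=10.0)
# hi == 45.0, lo == 25.0
```

The clamp on `delta` is what bounds the low-criticality degradation: the LC server can lose at most its own budget, never more.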

Place, publisher, year, edition, pages
Schloss Dagstuhl- Leibniz-Zentrum fur Informatik GmbH, Dagstuhl Publishing, 2018
Keywords
Bounded overloads, Control theory, Mixed criticality, Run-time resilience, Criticality (nuclear fission), Interactive computer systems, Scheduling algorithms, Catastrophic degradation, Control-theoretic approach, Experimental simulations, Mixed criticalities, Mixed-criticality systems, Performance degradation, Runtimes, Real time systems
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:mdh:diva-40237 (URN), 10.4230/LIPIcs.ECRTS.2018.14 (DOI), 2-s2.0-85049304309 (Scopus ID), 9783959770750 (ISBN)
Conference
30th Euromicro Conference on Real-Time Systems, ECRTS 2018, 3 June 2018 through 6 June 2018
Available from: 2018-07-12 Created: 2018-07-12 Last updated: 2018-07-12. Bibliographically approved
Ilyushkin, A., Ali-Eldin, A., Herbst, N., Bauer, A., Papadopoulos, A., Epema, D. & Iosup, A. (2018). An Experimental Performance Evaluation of Autoscalers for Complex Workflows. ACM Transactions on Modeling and Performance Evaluation of Computing Systems, 3(2), Article ID 8.
2018 (English). In: ACM Transactions on Modeling and Performance Evaluation of Computing Systems, ISSN 2376-3639, Vol. 3, no 2, article id 8. Article in journal (Refereed), Published
Abstract [en]

Elasticity is one of the main features of cloud computing, allowing customers to scale their resources based on the workload. Many autoscalers have been proposed in the past decade to decide on behalf of cloud customers when and how to provision resources to a cloud application based on the workload, utilizing cloud elasticity features. However, in prior work, when a new policy is proposed, it is seldom compared to the state-of-the-art, and is often compared only to static provisioning using a predefined quality-of-service target. This reduces the ability of cloud customers and of cloud operators to choose and deploy an autoscaling policy, as there is seldom enough analysis of how the autoscalers perform in different operating conditions and with different applications. In our work, we conduct an experimental performance evaluation of autoscaling policies, using workflows as the application model: a popular formalism for automating resource management for applications with well-defined yet complex structures. We present a detailed comparative study of general state-of-the-art autoscaling policies, along with two new workflow-specific policies. To understand the performance differences between the seven policies, we conduct various experiments and compare their performance in both pairwise and group comparisons. We report both individual and aggregated metrics. As many workflows have deadline requirements on the tasks, we study the effect of autoscaling on workflow deadlines. Additionally, we look into the effect of autoscaling on the accounted and hourly charged costs, and we evaluate the performance variability caused by the autoscaler selection for each group of workflow sizes. Our results highlight the trade-offs between the suggested policies, how they can impact meeting the deadlines, and how they perform in different operating conditions, thus enabling a better understanding of the current state-of-the-art.
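For readers unfamiliar with the general (non-workflow-specific) policies evaluated in such studies, a minimal reactive autoscaler makes the idea concrete. The sketch and its numbers are illustrative assumptions, not one of the seven policies evaluated in the paper.

```python
import math

def reactive_autoscaler(queued_tasks, tasks_per_vm_step,
                        min_vms=1, max_vms=50):
    """Provision just enough VMs to drain the current queue in one
    control step, clamped between the allowed minimum and maximum."""
    needed = math.ceil(queued_tasks / tasks_per_vm_step)
    return max(min_vms, min(max_vms, needed))

# 45 queued tasks, each VM drains 10 per step -> scale to 5 VMs:
vms = reactive_autoscaler(45, tasks_per_vm_step=10)
# vms == 5
```

Workflow-specific policies refine this by looking at the structure of the workflow (e.g., which tasks are eligible to run next) rather than only at queue length.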

Place, publisher, year, edition, pages
Association for Computing Machinery, 2018
Keywords
Autoscaling, elasticity, scientific workflows, benchmarking, metrics
National Category
Computer Systems
Identifiers
urn:nbn:se:mdh:diva-39204 (URN), 10.1145/3164537 (DOI), 000430350200004 (ISI)
Available from: 2018-05-11 Created: 2018-05-11 Last updated: 2018-05-11. Bibliographically approved
Papadopoulos, A. & Maggio, M. (2018). Challenges in High Performance Big Data Frameworks. In: 4th International Workshop on Autonomic High Performance Computing AHPC 2018. Paper presented at 4th International Workshop on Autonomic High Performance Computing AHPC 2018, 16 Jul 2018, Orleans, France (pp. 153-156).
2018 (English). In: 4th International Workshop on Autonomic High Performance Computing AHPC 2018, 2018, p. 153-156. Conference paper, Published paper (Refereed)
Abstract [en]

Nowadays, we live in a society with billions of devices that are interconnected and interact to improve the quality of our lives. The management and processing of information and knowledge have become our main resources and fundamental factors of economic and social development, and they are carried out through Big Data Frameworks (BDFs). The amount of such data grows larger every day, which calls for scalable and reliable BDFs that can process it also under real-time requirements. For example, the data collected by an autonomous car should be processed, combined, and interpreted as fast as possible in order to guarantee the safety of the passengers and a safe interaction with the surrounding environment.

This paper analyses the main limitations of current BDFs and identifies key challenges for increasing their flexibility. In particular, we focus on performance aspects, envisioning adaptation as a viable way to automate and improve performance in Big Data applications.

National Category
Engineering and Technology; Computer Systems
Identifiers
urn:nbn:se:mdh:diva-40859 (URN), 10.1109/HPCS.2018.00039 (DOI), 000450677700023 (ISI), 2-s2.0-85057371047 (Scopus ID), 978-1-5386-7879-4 (ISBN)
Conference
4th International Workshop on Autonomic High Performance Computing AHPC 2018, 16 Jul 2018, Orleans, France
Projects
Future factories in the Cloud
Available from: 2018-09-20 Created: 2018-09-20 Last updated: 2019-01-04. Bibliographically approved
Angelopoulos, K., Papadopoulos, A., Souza, V. E. S. & Mylopoulos, J. (2018). Engineering Self-Adaptive Software Systems: From Requirements to Model Predictive Control. ACM Transactions on Autonomous and Adaptive Systems, 13(1), Article ID 1.
2018 (English). In: ACM Transactions on Autonomous and Adaptive Systems, ISSN 1556-4665, E-ISSN 1556-4703, Vol. 13, no 1, article id 1. Article in journal (Refereed), Published
Abstract [en]

Self-adaptive software systems monitor their operation and adapt when their requirements fail due to unexpected phenomena in their environment. This article examines the case where the environment changes dynamically over time and the chosen adaptation has to take into account such changes. In control theory, this type of adaptation is known as Model Predictive Control and comes with a well-developed theory and myriad successful applications. The article focuses on modeling the dynamic relationship between requirements and possible adaptations. It then proposes a controller that exploits this relationship to optimize the satisfaction of requirements relative to a cost function. This is accomplished through a model-based framework for designing self-adaptive software systems that can guarantee a certain level of requirements satisfaction over time by dynamically composing adaptation strategies when necessary. The proposed framework is illustrated and evaluated through two simulated systems, namely, the Meeting-Scheduling exemplar and an E-Shop.
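The receding-horizon idea behind Model Predictive Control can be sketched with a brute-force controller: simulate every action sequence over a short horizon, apply the first action of the cheapest one, and repeat at the next step. The state, actions, and cost model below are invented toy stand-ins for requirement satisfaction, not the paper's framework.

```python
from itertools import product

def mpc_step(state, actions, step, cost, horizon=3):
    """One receding-horizon MPC decision by exhaustive search."""
    best_action, best_cost = None, float("inf")
    for seq in product(actions, repeat=horizon):
        s, total = state, 0.0
        for a in seq:
            s = step(s, a)            # predict next state via the model
            total += cost(s, a)       # accumulate cost over the horizon
        if total < best_cost:
            best_action, best_cost = seq[0], total
    return best_action

# Toy model: state = requirement satisfaction in [0, 1];
# action 1 = adapt (costly but improves satisfaction), 0 = do nothing.
def model(s, a):
    return min(1.0, s + 0.2 * a)

def penalty(s, a):
    return (1.0 - s) + 0.05 * a

chosen = mpc_step(0.5, [0, 1], model, penalty)
# chosen == 1: adapting now is cheapest over the horizon
```

A real MPC controller replaces the exhaustive search with an optimizer, but the loop structure (predict, score, commit to the first action only) is the same.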

National Category
Computer Systems
Identifiers
urn:nbn:se:mdh:diva-41841 (URN), 10.1145/3105748 (DOI), 000434636500002 (ISI), 2-s2.0-85064529237 (Scopus ID)
Available from: 2018-12-27 Created: 2018-12-27 Last updated: 2019-05-09. Bibliographically approved
Souza, A., Papadopoulos, A., Tomas, L., Gilbert, D. & Tordsson, J. (2018). Hybrid adaptive checkpointing for virtual machine fault tolerance. In: Proceedings - 2018 IEEE International Conference on Cloud Engineering, IC2E 2018. Paper presented at 2018 IEEE International Conference on Cloud Engineering, IC2E 2018, 17 April 2018 through 20 April 2018 (pp. 12-22). Institute of Electrical and Electronics Engineers Inc.
2018 (English). In: Proceedings - 2018 IEEE International Conference on Cloud Engineering, IC2E 2018, Institute of Electrical and Electronics Engineers Inc., 2018, p. 12-22. Conference paper, Published paper (Refereed)
Abstract [en]

Active Virtual Machine (VM) replication is an application-independent and cost-efficient mechanism for high availability and fault tolerance, with several recently proposed implementations based on checkpointing. However, these methods may suffer from large impacts on application latency, excessive resource usage overheads, and/or unpredictable behavior under varying workloads. To address these problems, we propose a hybrid approach that uses a Proportional-Integral (PI) controller to dynamically switch between periodic and on-demand checkpointing. Our mechanism automatically selects the method that minimizes application downtime by adapting itself to changes in workload characteristics. The implementation is based on modifications to QEMU, LibVirt, and OpenStack, to seamlessly provide fault-tolerant VM provisioning and to enable the controller to dynamically select the best checkpointing mode. Our evaluation is based on experiments with a video streaming application, an e-commerce benchmark, and a software development tool. The experiments demonstrate that our adaptive hybrid approach improves both application availability and resource usage compared to static selection of a checkpointing method, with application performance gains and negligible overheads.
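A discrete PI controller driving such a mode switch can be sketched in a few lines. The gains, set-point, and switching threshold below are invented for illustration and do not reproduce the paper's controller design.

```python
class PIController:
    """Discrete-time PI controller tracking a downtime set-point."""
    def __init__(self, kp=0.5, ki=0.1, setpoint=0.05):
        self.kp, self.ki, self.setpoint = kp, ki, setpoint
        self.integral = 0.0

    def update(self, measured_downtime):
        error = measured_downtime - self.setpoint
        self.integral += error                        # integral action
        return self.kp * error + self.ki * self.integral

def choose_mode(control_signal):
    """Positive signal: downtime too high, checkpoint on demand."""
    return "on-demand" if control_signal > 0.0 else "periodic"

# High measured downtime pushes the controller toward on-demand mode:
mode = choose_mode(PIController().update(0.2))
# mode == "on-demand"
```

The integral term is what lets the controller react to sustained workload changes rather than one-off downtime spikes.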

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2018
Keywords
Checkpoint, COLO, Control theory, Fault tolerance, Resource management, Application programs, Benchmarking, Network security, Software design, Two term control systems, Application performance, Proportional integral controllers, Software development tools, Video Streaming Applications, Workload characteristics, Virtual machine
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:mdh:diva-39981 (URN), 10.1109/IC2E.2018.00023 (DOI), 2-s2.0-85048315473 (Scopus ID), 9781538650080 (ISBN)
Conference
2018 IEEE International Conference on Cloud Engineering, IC2E 2018, 17 April 2018 through 20 April 2018
Available from: 2018-06-21 Created: 2018-06-21 Last updated: 2018-06-21. Bibliographically approved
Papadopoulos, A., Krzywda, J., Elmroth, E. & Maggio, M. (2018). Power-Aware Cloud Brownout: response time and power consumption control. In: 2017 IEEE 56th Annual Conference on Decision and Control (CDC). Paper presented at IEEE 56th Annual Conference on Decision and Control (CDC), December 12-15, 2017, Melbourne, Australia (pp. 2686-2691).
2018 (English). In: 2017 IEEE 56th Annual Conference on Decision and Control (CDC), 2018, p. 2686-2691. Conference paper, Published paper (Refereed)
Abstract [en]

Cloud computing infrastructures power most of the web hosting services that we use at all times. A recent failure in the Amazon cloud infrastructure made many of the websites that we use on an hourly basis unavailable. This illustrates the importance of cloud applications being able to absorb peaks in workload while tuning their power requirements to the power and energy capacity offered by the data center infrastructure. In this paper we combine an established technique for response time control, brownout, with power capping. We use cascaded control to take into account both the need for predictability in the response times (the inner loop) and the power cap (the outer loop). We execute tests on real machines to determine power usage and response time models and extend an existing simulator. We then evaluate the cascaded controller approach with a variety of workloads and both open- and closed-loop client models.
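The cascade can be sketched as two nested update rules; the gains, set-points, and measurements below are invented for illustration and do not reproduce the paper's controller synthesis. The outer loop turns a power-cap violation into a relaxed response-time set-point, and the inner brownout loop then tracks that set-point by adjusting the dimmer (the probability of serving optional content).

```python
def outer_power_loop(power_w, power_cap_w, rt_setpoint_s, k=0.001):
    """Outer loop: relax the response-time set-point when power
    exceeds the cap, tighten it when there is headroom."""
    return max(0.1, rt_setpoint_s + k * (power_w - power_cap_w))

def inner_brownout_loop(rt_s, rt_setpoint_s, dimmer, k=0.5):
    """Inner brownout loop: serve less optional content (lower the
    dimmer) when measured response time exceeds the set-point."""
    return min(1.0, max(0.0, dimmer - k * (rt_s - rt_setpoint_s)))

# 20 W over the cap relaxes the set-point; a slow measured response
# time then drives the dimmer down:
rt_sp = outer_power_loop(power_w=120.0, power_cap_w=100.0, rt_setpoint_s=0.5)
dimmer = inner_brownout_loop(rt_s=0.8, rt_setpoint_s=rt_sp, dimmer=1.0)
```

Running the outer loop slower than the inner one (the usual cascade design) keeps the two loops from fighting over the same actuator.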

Series
IEEE Conference on Decision and Control, ISSN 0743-1546
National Category
Energy Engineering
Identifiers
urn:nbn:se:mdh:diva-38791 (URN), 000424696902097 (ISI), 2-s2.0-85046161925 (Scopus ID), 978-1-5090-2873-3 (ISBN)
Conference
IEEE 56th Annual Conference on Decision and Control (CDC), December 12-15, 2017, Melbourne, Australia
Available from: 2018-03-01 Created: 2018-03-01 Last updated: 2018-05-11. Bibliographically approved