mdh.se Publications
1 - 50 of 89
  • 1.
    Aysan, Huseyin
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Bate, Iain
    University of York.
    Graydon, Patrick
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Punnekkat, Sasikumar
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Improving Reliability of Real-Time Systems through Value and Time Voting (2013). Conference paper (Refereed)
    Abstract [en]

    Critical systems often use N-modular redundancy to tolerate faults in subsystems. Traditional approaches to N-modular redundancy in distributed, loosely-synchronised, real-time systems handle time and value errors separately: a voter detects value errors, while watchdog-based health monitoring detects timing errors. In prior work, we proposed the integrated Voting on Time and Value (VTV) strategy, which allows both timing and value errors to be detected simultaneously. In this paper, we show how VTV can be harnessed as part of an overall fault tolerance strategy and evaluate its performance using a well-known control application, the Inverted Pendulum. Through extensive simulations, we compare the performance of Inverted Pendulum systems which employ VTV and alternative voting strategies to demonstrate that VTV better tolerates well-recognised faults in this realistically complex control problem.

  • 2.
    Aysan, Hüseyin
    et al.
    Mälardalen University, School of Innovation, Design and Engineering.
    Dobrin, Radu
    Mälardalen University, School of Innovation, Design and Engineering.
    Punnekkat, Sasikumar
    Mälardalen University, School of Innovation, Design and Engineering.
    Bate, Iain
    Mälardalen University, School of Innovation, Design and Engineering.
    On Voting Strategies for Loosely Synchronized Dependable Real-Time Systems (2012). In: 7th IEEE International Symposium on Industrial Embedded Systems, 2012, p. 120-129. Conference paper (Refereed)
    Abstract [en]

    Hard real-time applications typically have to satisfy high dependability requirements in terms of fault tolerance in both the value and the time domains. Loosely synchronized real-time systems, which represent many of the systems that are developed, make any form of voting difficult as each replica may provide different outputs independent of whether there has been an error or not. This can also lead to false positives and false negatives, which makes achieving fault tolerance, and hence dependability, difficult. We have earlier proposed a majority voting technique, "Voting on Time and Value" (VTV), that explicitly considers combinations of value and timing errors, targeting loosely synchronised systems. In this paper, we extend VTV to enable voter parameter tuning to obtain the desired user-specified trade-offs between the false positive and false negative rates in the voter outputs. We evaluate the performance of VTV against Compare Majority Voting (CMV), which is a known voting approach applicable in similar contexts, through extensive simulation studies. The results clearly demonstrate that VTV outperforms CMV in all scenarios with lower false negative rates.
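The VTV mechanism described in these two entries can be illustrated with a minimal sketch (the function and parameter names here are hypothetical, and this is not the authors' implementation): each replica reports a (value, timestamp) pair, and a candidate output is accepted only if a strict majority of replicas agree with it within both a value tolerance and a timing window, so a single vote detects value and timing errors together.

```python
def vtv_vote(outputs, value_tol, time_window):
    """Minimal majority-voting sketch over (value, timestamp) replica outputs.

    Two replicas 'agree' when their values differ by at most value_tol AND
    their timestamps differ by at most time_window. Returns the value of any
    replica that agrees with a strict majority, else None (a detected error).
    """
    n = len(outputs)
    for value, ts in outputs:
        agreeing = sum(
            1
            for other_value, other_ts in outputs
            if abs(value - other_value) <= value_tol
            and abs(ts - other_ts) <= time_window
        )
        if agreeing > n // 2:
            return value
    return None

# Three loosely synchronised replicas: the third is a value outlier.
result = vtv_vote([(10.0, 0.001), (10.1, 0.003), (55.0, 0.002)],
                  value_tol=0.5, time_window=0.01)
```

Note how a replica that is correct in value but late would also fail the vote, which is exactly the combined detection the abstracts describe.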

  • 3.
    Bartlett, Mark
    et al.
    University of York, UK.
    Bate, Iain
    University of York, UK.
    Cussens, James
    University of York, UK.
    Instruction Cache Prediction Using Bayesian Networks (2010). In: Frontiers in Artificial Intelligence and Applications, Volume 215, 2010, p. 1099-1100. Conference paper (Refereed)
  • 4.
    Bartlett, Mark
    et al.
    University of York, UK.
    Bate, Iain
    University of York, UK.
    Cussens, James
    University of York, UK.
    Learning Bayesian networks for improved instruction cache analysis (2010). In: Proceedings - 9th International Conference on Machine Learning and Applications, ICMLA 2010, 2010, p. 417-423. Conference paper (Refereed)
  • 5.
    Bartlett, Mark
    et al.
    University of York.
    Bate, Iain
    University of York.
    Cussens, James
    Kazakov, Dimitar
    Probabilistic Instruction Cache Analysis using Bayesian Networks (2011). In: Proceedings - 17th IEEE International Conference on Embedded and Real-Time Computing Systems and Applications, RTCSA 2011, 2011, Vol. 1, p. 233-242. Conference paper (Refereed)
    Abstract [en]

    Current approaches to instruction cache analysis for determining worst-case execution time rely on building a mathematical model of the cache that tracks its contents at all points in the program. This requires perfect knowledge of the functional behaviour of the cache and may result in extreme complexity and pessimism if many alternative paths through code sections are possible. To overcome these issues, this paper proposes a new hybrid approach in which information obtained from program traces is used to automate the construction of a model of how the cache is used. The resulting model involves the learning of a Bayesian network that predicts which instructions result in cache misses as a function of previously taken paths. The model can then be utilised to predict cache misses for previously unseen inputs and paths. The accuracy of this learned model is assessed against real benchmarks and an established statistical approach to illustrate its benefits.
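The hybrid idea in this abstract, learning from program traces which instructions miss in the cache as a function of previously taken paths, can be caricatured with a path-conditioned frequency table. A real Bayesian network, as the paper uses, would generalise across related paths; this naive stand-in only illustrates the data flow, and all names are hypothetical.

```python
from collections import defaultdict

def train_miss_predictor(traces):
    """Estimate P(miss | taken path) for one instruction from trace data.

    traces is a list of (path, missed) pairs, where path is a tuple of branch
    outcomes taken before reaching the instruction. Returns a dict mapping
    each observed path to its empirical miss probability.
    """
    counts = defaultdict(lambda: [0, 0])  # path -> [misses, total]
    for path, missed in traces:
        counts[path][0] += int(missed)
        counts[path][1] += 1
    return {path: m / n for path, (m, n) in counts.items()}

# Toy traces: the instruction tends to miss after branch b1 was taken.
model = train_miss_predictor([
    (("b1_taken",), True),
    (("b1_taken",), True),
    (("b1_taken",), False),
    (("b1_not_taken",), False),
])
```

Unlike this table, the learned network can also predict for previously unseen paths, which is the point of the paper's approach.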

  • 6.
    Bartlett, Mark
    et al.
    University of York.
    Bate, Iain
    University of York.
    Kazakov, Dimitar
    University of York.
    Accurate Determination of Loop Iterations for Worst-Case Execution Time Analysis (2010). In: IEEE Transactions on Computers, ISSN 0018-9340, Vol. 59, no 12, p. 1520-1532. Article in journal (Refereed)
    Abstract [en]

    Determination of accurate estimates for the Worst-Case Execution Time of a program is essential for guaranteeing the correct temporal behavior of any Real-Time System. Of particular importance is tightly bounding the number of iterations of loops in the program, or undue pessimism can result. This paper presents a novel approach to determining the number of iterations of a loop for such analysis. Program traces are collected and analyzed, allowing the number of loop executions to be parametrically determined safely and precisely under certain conditions. The approach is mathematically proved to be safe and its practicality is demonstrated on a series of benchmarks.
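The trace-based, parametric flavour of this analysis can be sketched as follows: given (input size, observed iteration count) pairs collected from traces, fit an exact affine model and check it against every trace. This is an illustrative simplification under the assumption of a single linear loop bound, not the paper's proven procedure, and the names are hypothetical.

```python
def parametric_loop_bound(traces):
    """Fit an exact affine model iterations = a*n + b from program traces.

    traces is a list of (input_size, observed_iteration_count) pairs.
    Returns (a, b) if one affine model explains every trace exactly,
    otherwise None (the bound is not expressible in this simple form).
    """
    (n0, c0), (n1, c1) = traces[0], traces[1]
    if n1 == n0:
        return None
    a = (c1 - c0) / (n1 - n0)
    b = c0 - a * n0
    if all(abs(a * n + b - c) < 1e-9 for n, c in traces):
        return (a, b)
    return None

# A loop observed to run 2*n + 3 times on three inputs:
bound = parametric_loop_bound([(1, 5), (4, 11), (10, 23)])
```

The paper's contribution is proving when such a parametric bound is guaranteed safe from finitely many traces; the sketch only shows the fitting step.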

  • 7.
    Bartlett, Mark
    et al.
    University of York.
    Bate, Iain
    University of York.
    Kazakov, Dimitar
    University of York.
    Challenges in relational learning for real-time systems applications (2008). In: Inductive Logic Programming: 18th International Conference, ILP 2008, Prague, Czech Republic, September 10-12, 2008, Proceedings, Springer Berlin/Heidelberg, 2008, p. 42-58. Chapter in book (Other academic)
    Abstract [en]

    The problem of determining the Worst Case Execution Time (WCET) of a piece of code is a fundamental one in the Real-Time Systems community. Existing methods try to gain this information either by analysis of the program code or by running extensive timing analyses. This paper presents a new approach to the problem based on using Machine Learning in the form of ILP to infer program properties based on sample executions of the code. Additionally, significant improvements in the range of functions learnable and the time taken for learning can be made by the application of more advanced ILP techniques.

  • 8.
    Bartlett, Mark
    et al.
    University of York.
    Bate, Iain
    University of York.
    Kazakov, Dimitar
    University of York.
    Guaranteed loop bound identification from program traces for WCET (2009). In: Proceedings of the IEEE Real-Time and Embedded Technology and Applications Symposium, RTAS, 2009, p. 287-294. Conference paper (Refereed)
    Abstract [en]

    Static analysis can be used to determine safe estimates of Worst Case Execution Time. However, overestimation of the number of loop iterations, particularly in nested loops, can result in substantial pessimism in the overall estimate. This paper presents a method of determining exact parametric values of the number of loop iterations for a particular class of arbitrarily deeply nested loops. It is proven that values are guaranteed to be correct using information obtainable from a finite and quantifiable number of program traces. Using the results of this proof, a tool is constructed and its scalability assessed.

  • 9.
    Bate, Iain
    University of York.
    Systematic approaches to understanding and evaluating design trade-offs (2008). In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 81, no 8, p. 1253-1271. Article in journal (Refereed)
    Abstract [en]

    The use of trade-off analysis as part of optimising designs has been an emerging technique for a number of years. However, only recently has much work been done with respect to systematically deriving the understanding of the system problem to be optimised and using this information as part of the design process. As systems have become larger and more complex, a need has arisen for suitable approaches. The system problem consists of design choices, measures for individual values related to quality attributes, and weights to balance the relative importance of each individual quality attribute. In this paper, a method is presented for establishing an understanding of a system problem using the goal structuring notation (GSN). The motivation for this work is borne out of experience working on embedded systems in the context of critical systems, where the cost of change can be large and the impact of design errors potentially catastrophic. A particular focus is deriving an understanding of the problem so that different solutions can be assessed quantitatively, which allows more definitive choices to be made. A secondary benefit is that it also enables design using heuristic search approaches, which is another area of our research. The overall approach is demonstrated through a case study, which is a task allocation problem.

  • 10.
    Bate, Iain
    Department of Computer Science, University of York.
    Utilising Application Flexibility in Energy Aware Computing (2008). Conference paper (Refereed)
  • 11.
    Bate, Iain
    et al.
    Mälardalen University, School of Innovation, Design and Engineering.
    Burns, Alan
    Control system (2009). Patent (Other (popular science, discussion, etc.))
  • 12.
    Bate, Iain
    et al.
    University of York.
    Conmy, Philippa
    University of York.
    Certification of FPGAs - Current Issues and Possible Solutions (2009). In: Safety-Critical Systems: Problems, Process and Practice - Proceedings of the 17th Safety-Critical Systems Symposium, SSS 2009, Springer London, 2009, p. 149-165. Conference paper (Other academic)
    Abstract [en]

    This paper looks at possible applications of Field Programmable Gate Arrays (FPGAs) within the safety critical domain. We examine the potential benefits these devices can offer, such as parallel computation and reconfiguration in the presence of failure and also the difficulties which these raise for certification. A possible safety argument supporting the use of basic reconfiguration facilities of a reprogrammable FPGA to remove Single Event Upsets (SEUs) is presented. We also demonstrate a technique which has the potential to be used to identify areas which are sensitive to SEUs in terms of safety effect, thus allowing optimisation of an FPGAs design and supporting our argument.

  • 13.
    Bate, Iain
    et al.
    Mälardalen University, School of Innovation, Design and Engineering.
    Fairbairn, M. L.
    Department of Computer Science, University of York.
    Searching for the minimum failures that can cause a hazard in a wireless sensor network (2013). In: GECCO - Proceedings of the 2013 Genetic and Evolutionary Computation Conference, 2013, p. 1213-1220. Conference paper (Refereed)
    Abstract [en]

    Wireless Sensor Networks (WSN) are now being used in a range of applications, many of which are critical systems, e.g. monitoring assisted living facilities or fire detection systems, which is the example used in this paper. For critical systems it is important to be able to determine the minimum number of failures that can cause a hazard to occur. This is normally a manual, human-intensive task. This paper presents a novel application of search to both the WSN and safety domains: searching for combinations of failures that can cause a hazard and then reducing these to the minimum possible using a combination of automated search and manual refinement. Due to the size and complexity of the search problem, a parallel search algorithm is designed that runs on available compute resources, with the results being processed using R.

  • 14.
    Bate, Iain
    et al.
    Mälardalen University, School of Innovation, Design and Engineering.
    Hansson, Hans
    Mälardalen University, School of Innovation, Design and Engineering.
    Punnekkat, Sasikumar
    Mälardalen University, School of Innovation, Design and Engineering.
    Better, Faster, Cheaper, and Safer Too: Is This Really Possible? (2012). In: IEEE Symposium on Emerging Technologies and Factory Automation, ETFA, 2012, p. 6489706. Conference paper (Refereed)
    Abstract [en]

    Increased levels of automation, together with the increased complexity of automation systems, bring increased responsibility on system developers in terms of quality demands, both from the legal perspective and for company reputation. Component-based development of software systems provides a viable and cost-effective alternative in this context, provided one can address the quality and safety certification demands in an efficient manner. In this paper we present our vision, challenges and a brief outline of the various research themes in which our team is currently engaged within two major projects.

  • 15.
    Bate, Iain
    et al.
    University of York.
    Kazakov, Dimitar
    University of York.
    New directions in worst-case execution time analysis (2008). In: 2008 IEEE Congress on Evolutionary Computation, CEC 2008, 2008, p. 3545-3552. Conference paper (Refereed)
    Abstract [en]

    Most software engineering methods require some form of model populated with appropriate information. Real-time systems are no exception. A significant issue is that the information needed is not always freely available, and deriving it using manual methods is costly in terms of time and money. Previous work showed how machine learning, applied to information derived during software testing, can be used to derive loop bounds as part of the Worst-Case Execution Time analysis problem. In this paper we build on this work by investigating the issue of branch prediction.

  • 16.
    Bate, Iain
    et al.
    University of York.
    Khan, Usman
    University of Cambridge.
    WCET analysis of modern processors using multi-criteria optimisation (2011). In: Journal of Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616, Vol. 16, no 1, p. 5-28. Article in journal (Refereed)
    Abstract [en]

    The Worst-Case Execution Time (WCET) is an important execution metric for real-time systems, and an accurate estimate for this increases the reliability of subsequent schedulability analysis. Performance-enhancing features on modern processors, such as pipelines and caches, however, make it difficult to accurately predict the WCET. One technique for finding the WCET is to use test data generated using search algorithms. Existing work on search-based approaches has been successfully used in both industry and academia based on a single criterion function, the WCET, but only for simple processors. This paper investigates how effective this strategy is for more complex processors and to what extent other criteria help guide the search, e.g. the number of cache misses. Not unexpectedly, the work shows that no single choice of criteria works best across all problems. Based on the findings, recommendations are proposed on which criteria are useful in particular situations.

  • 17.
    Bate, Iain
    et al.
    University of York.
    Poulding, Simon
    University of York.
    Call for Papers: Practical Aspects of Search-Based Software Engineering (2009). In: Software, practice & experience, ISSN 0038-0644, E-ISSN 1097-024X, Vol. 39, no 9, p. 867-868. Article in journal (Other academic)
  • 18.
    Bate, Iain
    et al.
    University of York.
    Poulding, Simon
    University of York.
    Editorial for the special issue on search-based software engineering (2011). In: Software, practice & experience, ISSN 0038-0644, E-ISSN 1097-024X, Vol. 41, no 5, p. 467-468. Article in journal (Other academic)
  • 19.
    Bate, Iain
    et al.
    University of York.
    Wu, Yafeng
    University of Virginia.
    Stankovic, John A
    University of Virginia.
    Developing safe and dependable sensornets (2011). In: Proceedings - 37th EUROMICRO Conference on Software Engineering and Advanced Applications, SEAA 2011, 2011, p. 279-282. Conference paper (Refereed)
    Abstract [en]

    Sensor nets are being widely proposed as a solution technology in a number of applications, e.g. health care. As part of this work some key challenges for the safety and sensor net communities are established, in part by developing parts of a safety case for a fire detection system in a skyscraper. We then demonstrate how some of these issues can be resolved by modifying earlier work on Run Time Assurance of applications to satisfy some key safety and dependability requirements in the context of a sensor net used as part of a fire fighting system.

  • 20.
    Burkimsher, Andrew
    et al.
    University of York.
    Bate, Iain
    University of York.
    Indrusiak, Leandro Soares
    University of York.
    A survey of scheduling metrics and an improved ordering policy for list schedulers operating on workloads with dependencies and a wide variation in execution times (2012). In: Future Generation Computer Systems, ISSN 0167-739X, Vol. 29, no 8, p. 2009-2025. Article in journal (Refereed)
    Abstract [en]

    This paper considers the dynamic scheduling of parallel, dependent tasks onto a static, distributed computing platform, with the intention of delivering fairness and quality of service (QoS) to users. The key QoS requirement is that responsiveness is maintained for workloads with a wide range of execution times (minutes to months) even under transient periods of overload. A survey of schedule QoS metrics is presented, classified into those dealing with responsiveness, fairness and utilisation. These metrics are evaluated as to their ability to detect undesirable features of schedules. The Schedule Length Ratio (SLR) metric is shown to be the most helpful for measuring responsiveness in the presence of dependencies. A novel list scheduling policy called Projected-SLR is presented that delivers good responsiveness and fairness by using the SLR metric in its scheduling decisions. Projected-SLR is found to perform equally as well in responsiveness, fairness and utilisation as the best of the other scheduling policies evaluated (Shortest Remaining Time First/SRTF), using synthetic workloads and an industrial trace. However, Projected-SLR does this with a guarantee of starvation-free behaviour, unlike SRTF.
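The Schedule Length Ratio used in this entry divides a task graph's observed makespan by its critical-path lower bound, so SLR >= 1 and values near 1 indicate good responsiveness despite dependencies. A minimal sketch with hypothetical names, simplified to ignore communication costs:

```python
def critical_path_length(exec_time, deps):
    """Longest chain of execution times through a task DAG.

    exec_time maps task -> execution time; deps maps task -> list of
    predecessor tasks. This lower-bounds any schedule's makespan.
    """
    memo = {}
    def longest(task):
        if task not in memo:
            memo[task] = exec_time[task] + max(
                (longest(p) for p in deps.get(task, [])), default=0.0)
        return memo[task]
    return max(longest(t) for t in exec_time)

def schedule_length_ratio(finish_time, exec_time, deps):
    """SLR = observed makespan / critical-path lower bound (>= 1.0)."""
    return max(finish_time.values()) / critical_path_length(exec_time, deps)

# Chain a -> b, plus an independent task c; critical path = 2 + 3 = 5.
exec_time = {"a": 2.0, "b": 3.0, "c": 1.0}
deps = {"b": ["a"]}
slr = schedule_length_ratio({"a": 2.0, "b": 7.0, "c": 1.0}, exec_time, deps)
```

Because the denominator already accounts for dependency chains, SLR stays meaningful for workloads whose execution times span minutes to months, which is why the survey singles it out.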

  • 21.
    Conmy, Philippa
    et al.
    University of York, United Kingdom.
    Bate, Iain
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Assuring safety for component based software engineering (2014). In: Proceedings - 2014 IEEE 15th International Symposium on High-Assurance Systems Engineering, HASE 2014, 2014, p. 121-128. Conference paper (Refereed)
    Abstract [en]

    Developing Safety-Critical Systems (SCS) is an expensive activity, largely due to the cost of testing both components and the systems produced by integrating them. In more mainstream system design, Model-Based Development (MBD) and Component-Based Software Engineering (CBSE) are seen as complementary activities that can reduce these costs; however, their use is not yet well supported in the safety-critical domain, as safety is an emergent property. The contributions of this paper are to describe some of the challenges of using these approaches in SCS, and then argue how, through appropriate safety argument patterns, the challenges can be addressed.

  • 22.
    Conmy, Philippa
    et al.
    University of York.
    Bate, Iain
    University of York.
    Component-based safety analysis of FPGAs (2010). In: IEEE Transactions on Industrial Informatics, ISSN 1551-3203, E-ISSN 1941-0050, Vol. 6, no 2, p. 195-205. Article in journal (Refereed)
    Abstract [en]

    Component-based and modular software development techniques have become established in recent years. Without complementary verification and certification methods the benefits of these development techniques are reduced. As part of certification, it is necessary to show a system is acceptably safe which subsumes both the normal and abnormal (failure) cases. However, nonfunctional properties, such as safety and failures, are abstraction breakers, cutting across multiple components. Also, much of the work on component-based engineering has been applied to software-based systems rather than field programmable gate array (FPGA)-based systems whose use is becoming more popular in industry. In this paper, we show how a modular design embedded on a FPGA can be exhaustively analyzed (from a safety perspective) to derive the failure and safety properties to give the evidence needed for a safety case. The specific challenges faced are analyzing the fault characteristics of individual electronic components, combining the results across software modules, and then feeding this into a system safety case. A secondary benefit of taking this approach is that there is less uncertainty in the performance of the device, hence, it can be used for higher integrity systems. Finally, design improvements can be specifically targeted at areas of safety concern, leading to more optimal utilization of the FPGA device.

  • 23.
    Conmy, Philippa
    et al.
    University of York.
    Bate, Iain
    Mälardalen University, School of Innovation, Design and Engineering.
    Efficient Task Allocation to FPGAs in the Safety Critical Domain (2011). In: Proceedings of IEEE Pacific Rim International Symposium on Dependable Computing, PRDC, 2011, p. 119-128. Conference paper (Refereed)
    Abstract [en]

    Field Programmable Gate Arrays (FPGAs) are highly configurable programmable logic devices. They offer many benefits over traditional micro-processors, such as the ability to efficiently run tasks in parallel and also highly predictable timing performance. They are becoming increasingly popular for use in the safety critical domain, where predictability is essential. However, concerns about their dependability, principally their reliability and difficulties in assessing the impact of an internal failure, mean that current designs are inefficient and conservative. This paper discusses these issues in depth. It also presents an FPGA task allocation method using simulated annealing to balance efficiency and reliability requirements. This can be used to improve designs of safety critical FPGA based systems.
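The simulated-annealing allocation mentioned above can be sketched generically: propose a random reassignment of one task, always accept improvements, accept worsenings with probability exp(-delta/T), and cool the temperature. The cost function below is a toy load-balancing objective, not the paper's combined efficiency/reliability model, and all names are hypothetical.

```python
import math
import random

def anneal_allocation(tasks, regions, cost, steps=5000, t0=10.0, alpha=0.999):
    """Generic simulated-annealing sketch for task allocation.

    cost(alloc) scores a dict task -> region (lower is better, e.g. a
    weighted sum of utilisation-imbalance and reliability penalties).
    """
    rng = random.Random(0)  # fixed seed for reproducibility
    alloc = {t: rng.choice(regions) for t in tasks}
    cur_cost = cost(alloc)
    best, best_cost, temp = dict(alloc), cur_cost, t0
    for _ in range(steps):
        task = rng.choice(tasks)
        old_region = alloc[task]
        alloc[task] = rng.choice(regions)   # propose a reassignment
        new_cost = cost(alloc)
        delta = new_cost - cur_cost
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            cur_cost = new_cost             # accept the move
            if cur_cost < best_cost:
                best, best_cost = dict(alloc), cur_cost
        else:
            alloc[task] = old_region        # reject the move
        temp *= alpha                       # cool down
    return best, best_cost

# Toy objective: balance task loads across two FPGA regions.
loads = {"t1": 4, "t2": 3, "t3": 2, "t4": 1}
def imbalance(alloc):
    per_region = {"A": 0, "B": 0}
    for task, region in alloc.items():
        per_region[region] += loads[task]
    return abs(per_region["A"] - per_region["B"])

best, best_cost = anneal_allocation(list(loads), ["A", "B"], imbalance)
```

In the paper's setting the cost would additionally penalise placements that undermine reliability, which is the trade-off annealing is used to balance.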

  • 24.
    Conmy, Philippa
    et al.
    University of York.
    Bate, Iain
    University of York.
    Semi-Automated Safety Analysis for Field Programmable Gate Arrays (2009). In: Proceedings of the International Symposium and Workshop on Engineering of Computer Based Systems, 2009, p. 166-175. Conference paper (Refereed)
    Abstract [en]

    Field Programmable Gate Arrays (FPGAs) are becoming increasingly popular for use in High-Integrity Safety Related and Safety Critical Systems. FPGAs offer a number of potential benefits over traditional microprocessor based software systems, such as predictable timing performance, the ability to perform highly parallel calculations, predictable emulation of obsolete components, and (in the case of SRAM based FPGAs) the ability to reconfigure to avoid hardware failures. However, these abilities do not come for free, and designers are often forced to make pessimistic safety and reliability assumptions, leading to conservative overall system designs. In this paper a modular, and hence more scalable, approach to performing FPGA safety analysis is presented.

  • 25.
    Conmy, Philippa
    et al.
    University of York, UK.
    Pygott, Clive
    Columbus Computing Ltd, UK.
    Bate, Iain
    University of York, UK.
    VHDL guidance for safe and certifiable FPGA design (2010). In: IET Conference Publications, Volume 2010, 2010. Conference paper (Refereed)
    Abstract [en]

    Field Programmable Gate Arrays (FPGAs) are becoming increasingly popular for use within high integrity and safety critical systems. One commonly used coding language for their configuration is the VHSIC Hardware Description Language (VHDL). Whilst VHDL is used for hardware description, it is developed in a similar way to traditional software, and many safety critical software certification standards require the use of coding subsets and style guidance in order to ensure known language vulnerabilities are avoided. At present there is no recognized, public domain guidance for VHDL. This paper draws together many different sources to provide a starting discussion for a VHDL subset.

  • 26.
    Drozda, Martin
    et al.
    Leibniz University of Hannover.
    Bate, Iain
    University of York.
    Timmis, Jon
    University of York.
    Bio-inspired error detection for complex systems (2011). In: Proceedings of IEEE Pacific Rim International Symposium on Dependable Computing, PRDC, 2011, p. 154-163. Conference paper (Refereed)
    Abstract [en]

    In a number of areas, for example sensor networks and systems of systems, complex networks are being used as part of applications that have to be dependable and safe. A common feature of these networks is that they operate in a decentralised manner, are formed ad hoc, and are often based on individual nodes that were not originally developed for the situation in which they are used. In addition, the nodes and their environment will have different behaviours over time, and there will be little knowledge during development of how they will interact. A key challenge is therefore how to distinguish normal behaviour from abnormal behaviour so that the abnormal behaviour can be detected and prevented from affecting other parts of the system, where appropriate recovery can then be performed. In this paper we review the state of the art in bio-inspired approaches, discuss how they can be used for error detection as part of providing a safe, dependable sensor network, and then provide and evaluate an efficient and effective approach to error detection.

  • 27.
    Emberson, Paul
    et al.
    University of York, UK.
    Bate, Iain
    University of York, UK.
    Extending a task allocation algorithm for graceful degradation of real-time distributed embedded systems (2008). In: Proceedings - Real-Time Systems Symposium 2008, 2008, p. 270-279. Conference paper (Refereed)
    Abstract [en]

    Previous research which has considered task allocation and fault-tolerance together has concentrated on constructing schedules which accommodate a fixed number of redundant tasks. Often, all faults are treated as being equally severe. There is little work which combines task allocation with architectural-level fault-tolerance issues such as the number of replicas to use and how they should be configured, both of which are tackled by this work. An accepted method for assessing the impact of a combination of faults is to build a system utility model which can be used to assess how the system degrades when components fail. The key challenge addressed here is how to design objective functions based on a utility model which can be incorporated into a search algorithm in order to optimise fault-tolerance properties. Other issues, such as how to extend the local search neighbourhood and balance objectives with schedulability constraints, are also discussed.

  • 28.
    Emberson, Paul
    et al.
    University of York.
    Bate, Iain
    University of York.
    Stressing Search with Scenarios for Flexible Solutions to Real-Time Task Allocation Problems (2010). In: IEEE Transactions on Software Engineering, ISSN 0098-5589, E-ISSN 1939-3520, Vol. 36, no 5, p. 704-718. Article in journal (Refereed)
    Abstract [en]

    One of the most important properties of a good software engineering process and of the design of the software it produces is robustness to changing requirements. Scenario-based analysis is a popular method for improving the flexibility of software architectures. This paper demonstrates a search-based technique for automating scenario-based analysis in the software architecture deployment view. Specifically, a novel parallel simulated annealing search algorithm is applied to the real-time task allocation problem to find baseline solutions which require a minimal number of changes in order to meet the requirements of potential upgrade scenarios. Another simulated annealing-based search is used for finding a solution that is similar to an existing baseline when new requirements arise. Solutions generated using a variety of scenarios are judged by how well they respond to different system requirements changes. The evaluation is performed on a set of problems with a controlled set of different characteristics.

  • 29.
    Ghazzawi, Hashem Ali
    et al.
    University of York.
    Bate, Iain
    University of York.
    Indrusiak, Leandro Soares
    University of York.
    A Control Theoretic Approach for Workflow Management (2012). Conference paper (Refereed)
    Abstract [en]

    This paper explores the performance of feedback control when managing workflows in computing systems. Industrial systems nowadays can consist of geographically diverse and heterogeneous high-performance computing (HPC) clusters. When scheduling workflows over such platforms, it is often desired to observe a number of real-time objectives such as meeting deadlines, reducing slacks, and increasing platform utilisation. We apply a control theoretic approach to address scheduling-related trade-offs of workflows that are executed in HPC platforms. Our results show that a model predictive control-based admission controller is efficient for scheduling periodic workflows in a homogeneous HPC cluster with respect to minimising slack and maximising CPU utilisation.
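For contrast with the MPC-based controller the abstract favours, a discrete PID controller of the kind such schemes are compared against (see also the MPC vs. PID manuscript below in this list) can be sketched in a few lines. The gains, set-point, and crude plant model here are illustrative assumptions, not values from the paper.

```python
class PID:
    """Discrete PID controller sketch (gains are illustrative, not tuned)."""

    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measured, dt=1.0):
        """One control step: returns the actuation signal for this sample."""
        error = self.setpoint - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Regulate CPU utilisation toward 0.8; a positive output admits more work.
pid = PID(kp=0.5, ki=0.1, kd=0.05, setpoint=0.8)
utilisation = 0.4
for _ in range(50):
    utilisation += 0.1 * pid.update(utilisation)  # crude first-order plant
```

An MPC admission controller would instead optimise over a predicted horizon subject to deadline and utilisation constraints, which is where the paper reports its advantage.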

  • 30.
    Ghazzawi, Hashem Ali
    et al.
    University of York.
    Bate, Iain
    Mälardalen University, School of Innovation, Design and Engineering.
    Indrusiak, Leandro Soares
    University of York.
    MPC vs. PID Controllers in Multi-CPU Multi-Objective Real-Time Scheduling Systems2012Manuscript (preprint) (Other academic)
  • 31.
    Graydon, Patrick
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Bate, Iain
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    On the Nature and Content of Safety Contracts2014In: Proceedings - 2014 IEEE 15th International Symposium on High-Assurance Systems Engineering, HASE 2014, 2014, p. 245-246Conference paper (Refereed)
    Abstract [en]

    Component-based software engineering researchers have explored component reuse, typically at the source-code level. Contracts explicitly describe component behaviour, reducing development risk by exposing potential incompatibilities early. But to benefit fully from reuse, developers of safety-critical systems must also reuse safety evidence. Full reuse would require both extending the existing notion of component contracts to cover safety properties and using these contracts in both component selection and system certification. In this paper, we explore some of the ways in which this is not as simple as it first appears.

  • 32.
    Graydon, Patrick
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Bate, Iain
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Realistic Safety Cases for the Timing of Systems2014In: The Computer Journal, ISSN 1460-2067, Vol. 57, no 5, p. 759-774Article in journal (Refereed)
    Abstract [en]

    Timing is often seen as the most important property of systems after function, and safety-critical systems are no exception. In this paper, we consider how timing is typically treated in safety assurance and in particular the safety arguments being proposed by industry and academia. A critique of these arguments is performed based on how systems are generally developed and how evidence is gathered. Significant weaknesses are exposed, resulting in a more appropriate safety argument being proposed. As part of this work, techniques are used for identifying relationships, in the form of contracts, between parts of the argument and the strength of evidence. The work is demonstrated using a Computer Assisted Braking example, specifically an Anti-Lock Braking System for a car, as it is a classic example of a component that may be used 'Out of Context', as discussed in a number of safety standards, and may also be reused across a number of systems as well as part of a product line.

  • 33.
    Graydon, Patrick
    et al.
    Mälardalen University, School of Innovation, Design and Engineering.
    Bate, Iain
    Mälardalen University, School of Innovation, Design and Engineering.
    Safety Assurance Driven Problem Formulation for Mixed-Criticality Scheduling2013Conference paper (Refereed)
  • 34.
    Graydon, Patrick
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Bate, Iain
    University of York, UK.
    The Nature and Content of Safety Contracts: Challenges and Suggestions for a Way Forward2014In: Proceedings of IEEE Pacific Rim International Symposium on Dependable Computing, PRDC, 2014, p. 135-144Conference paper (Refereed)
    Abstract [en]

    Software engineering researchers have extensively explored the reuse of components at source-code level. Contracts explicitly describe component behaviour, reducing development risk by exposing potential incompatibilities early in the development process. But to benefit fully from reuse, developers of safety-critical systems must also reuse safety evidence. Full reuse would require both extending the existing notion of component contracts to cover safety properties and using these contracts in both component selection and system certification. This is not as simple as it first appears. Much of the review, analysis, and test evidence developers provide during certification is system-specific. This makes it difficult to define safety contracts that facilitate both selecting components to reuse and certifying systems. In this paper, we explore the definition and use of safety contracts, identify challenges to component-based software reuse in safety-critical systems, present examples to illustrate several key difficulties, and discuss potential solutions to these problems.

  • 35.
    Jaradat, Omar
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Bate, Iain
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems. University of York, UK.
    Deriving Hierarchical Safety Contracts2015In: Proceedings: 2015 IEEE 21st Pacific Rim International Symposium on Dependable Computing, PRDC 2015, 2015, Vol. jan, p. 119-128Conference paper (Refereed)
    Abstract [en]

    Safety cases need a significant amount of time and effort to produce. The required time and effort can increase dramatically due to system changes, as safety cases must be maintained before they can be submitted for certification or re-certification. Anticipating potential changes is useful since it reveals traceable consequences that will eventually reduce the maintenance effort. However, compiling a complete list of anticipated changes is difficult. What can be easier, though, is to determine the flexibility of system components to changes. Sensitivity analysis is useful for measuring the flexibility of different system properties to changes. Furthermore, contracts have been proposed as a means of facilitating the change management process due to their ability to record the dependencies among a system's components. In this paper, we extend a technique that uses a sensitivity analysis to derive safety contracts from Fault Tree Analyses (FTA) and uses these contracts to trace changes in the safety argument. The extension aims to enable the derivation of hierarchical and correlated safety contracts. We motivate the extension through an illustrative example within which we identify limitations of the technique and discuss potential solutions to these limitations.

  • 36.
    Jaradat, Omar
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Bate, Iain
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Systematic Maintenance of Safety Cases to Reduce Risk2016In: Lecture Notes in Computer Science, vol. 9923, 2016, p. 17-29Conference paper (Refereed)
    Abstract [en]

    The development of safety cases has become common practice in many safety-critical system domains. Safety cases are costly since they need a significant amount of time and effort to produce. Moreover, safety-critical systems are expected to operate for a long period of time and are constantly subject to changes during both development and operational phases. Hence, safety cases are built as living documents that should always be maintained to justify the safety status of the associated system and evolve as the system evolves. However, safety cases document highly interdependent elements (e.g., safety goals, evidence, assumptions, etc.), and even seemingly minor changes may have a major impact on them, and thus dramatically increase their cost. In this paper, we identify and discuss some challenges in the maintenance of safety cases. We also present two techniques that utilise safety contracts to facilitate the maintenance of safety cases, discuss the roles of these techniques in coping with some of the identified maintenance challenges, and finally discuss potential limitations and suggest some solutions.

  • 37.
    Jaradat, Omar
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Bate, Iain
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Using Safety Contracts to Guide the Maintenance of Systems and Safety Cases2017In: European Dependable Computing Conference EDCC'17, 2017, p. 95-102Conference paper (Refereed)
    Abstract [en]

    Changes to safety-critical systems are inevitable and can impact the safety confidence about a system, as their effects can refute articulated claims about safety or challenge the supporting evidence on which this confidence relies. In order to maintain safety confidence under changes, system developers need to re-analyse and re-verify the system to generate new valid items of evidence. Identifying the effects of a particular change is a crucial step in any change management process, as it enables system developers to estimate the required maintenance effort and reduce cost by avoiding wider analyses and verification than strictly necessary. This paper presents a sensitivity analysis-based technique which aims at measuring the ability of a system to contain a change (i.e., robustness) without the need for a major re-design. The proposed technique exploits the safety margins in the budgeted failure probabilities of events in a probabilistic fault-tree analysis to compensate for unaccounted deficits or changes due to maintenance. The technique utilises safety contracts to provide prescriptive data for what needs to be revisited and verified to maintain system safety when changes happen. We demonstrate the technique on an aircraft wheel braking system.
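    The margin-exploiting idea in this abstract can be illustrated with a toy fault tree. The sketch below assumes a single OR gate over independent basic events and asks how far one event's failure probability can grow before the budgeted top-event probability is exceeded; the gate structure and all numbers are hypothetical, not taken from the paper's wheel braking case study.

```python
def or_gate(probs):
    """Top-event probability of an OR gate over independent basic events."""
    p_none = 1.0
    for q in probs:
        p_none *= (1.0 - q)
    return 1.0 - p_none

def margin(probs, budget, event):
    """How much the failure probability of basic event `event` can grow
    before the OR-gate top event exceeds its budgeted probability."""
    rest = 1.0
    for i, q in enumerate(probs):
        if i != event:
            rest *= (1.0 - q)
    # Solve 1 - (1 - p_new) * rest = budget for p_new.
    p_new = 1.0 - (1.0 - budget) / rest
    return p_new - probs[event]

probs = [1e-4, 2e-4, 5e-5]        # estimated basic-event probabilities
budget = 1e-3                     # budgeted top-event failure probability
top = or_gate(probs)              # well under budget, so margin exists
slack = margin(probs, budget, 0)  # extra probability event 0 can absorb
```

    A safety contract over event 0 could then record that its failure probability may drift by up to `slack` before the top-event claim, and any argument resting on it, must be revisited.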

  • 38.
    Jaradat, Omar
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Bate, Iain
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Using Safety Contracts to Guide the Maintenance of Systems and Safety Cases: An Example2017Report (Other academic)
    Abstract [en]

    Changes to safety-critical systems are inevitable and can impact the safety confidence about a system, as their effects can refute articulated claims about safety or challenge the supporting evidence on which this confidence relies. In order to maintain safety confidence under changes, system developers need to re-analyse and re-verify the system to generate new valid items of evidence. Moreover, identifying the effects of a particular change is a crucial step in any change management process, as it enables system developers to estimate the required maintenance effort and reduce cost by avoiding wider analyses and verification than strictly necessary. This paper presents a sensitivity analysis-based technique which aims at measuring the ability of a system to contain a change (i.e., robustness) without the need for a major re-design. The technique exploits the safety margins in the failure probabilities assigned to the events of a probabilistic fault-tree analysis to compensate for potential deficits in the overall failure probability budget due to changes. The technique also utilises safety contracts to provide prescriptive data for what needs to be revisited and verified to maintain system safety when changes happen. We demonstrate the technique on a realistic safety-critical system.

  • 39.
    Jaradat, Omar
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Bate, Iain
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Punnekkat, Sasikumar
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Facilitating the Maintenance of Safety Cases2015In: The 3rd International Conference on Reliability, Safety and Hazard - Advances in Reliability, Maintenance and Safety ICRES-ARMS'15, 2015, Vol. F5Conference paper (Refereed)
  • 40.
    Jaradat, Omar
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Bate, Iain
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Punnekkat, Sasikumar
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Using Sensitivity Analysis to Facilitate The Maintenance of Safety Cases2015In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) / [ed] Juan Antonio de la Puente, Tullio Vardanega, 2015, Vol. 9111, p. 162-176Conference paper (Refereed)
    Abstract [en]

    A safety case contains safety arguments together with supporting evidence that should demonstrate that a system is acceptably safe. System changes pose a challenge to the soundness and cogency of the safety case argument. Maintaining safety arguments is a painstaking process because it requires performing a change impact analysis through interdependent elements. Changes are often performed years after the deployment of a system, making it harder for safety case developers to know which parts of the argument are affected. Contracts have been proposed as a means of helping to manage changes. There has been significant work discussing how to represent and use them, but little on how to derive them. In this paper, we propose a sensitivity analysis approach to derive contracts from Fault Tree Analyses and use them to trace changes in the safety argument, thus facilitating easier maintenance of the safety argument.

  • 41.
    Jaradat, Omar
    et al.
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Graydon, Patrick
    Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems.
    Bate, Iain
    University of York, UK.
    An Approach to Maintaining Safety Case Evidence After A System Change2014In: 2014 Tenth European Dependable Computing Conference EDCC 2014, 2014Conference paper (Refereed)
    Abstract [en]

    Developers of some safety critical systems construct a safety case. Developers changing a system during development or after release must analyse the change's impact on the safety case. Evidence might be invalidated by changes to the system design, operation, or environmental context. Assumptions valid in one context might be invalid elsewhere. The impact of change might not be obvious. This paper proposes a method to facilitate safety case maintenance by highlighting the impact of changes.

  • 42.
    Jaradat, Omar
    et al.
    Mälardalen University, School of Innovation, Design and Engineering.
    Graydon, Patrick
    Mälardalen University, School of Innovation, Design and Engineering.
    Bate, Iain
    Mälardalen University, School of Innovation, Design and Engineering.
    The Role of Architectural Model Checking in Conducting Preliminary Safety Assessment2013Conference paper (Refereed)
    Abstract [en]

    Preliminary safety assessment is an important activity in safety-critical systems development since it provides insight into the proposed system’s ability to meet its safety requirements. Because preliminary safety assessment is conducted before the system is implemented, developers rely on high-level designs of the system to assess safety in order to reduce the risk of finding issues later in the process. Since the system architecture is the first design artefact developers produce, developers invest considerable time in assessing the architecture’s impact on system safety. Typical safety standards require developers to show that a plan of safety activities, chosen from recommended options or alternatives, meets a set of objectives. More specifically, the automotive safety standard ISO 26262 recommends formally verifying the software architecture to show that it “complies” with safety requirements. In this paper, we apply an architecture-based verification technique for Architecture Analysis and Design Language (AADL) specifications to an architectural design for a fuel level estimation system to validate certain architectural properties. Subsequently, we build part of the conformance argument to show how the model checking can satisfy some ISO 26262 obligations. Furthermore, we show how the method could be used as part of preliminary safety assessment and how it can be upheld by later implementations alongside the other recommended methods.

  • 43.
    Khan, Usman
    et al.
    University of Cambridge, UK.
    Bate, Iain
    University of York, UK.
    WCET analysis of modern processors using multi-criteria optimisation2009Conference paper (Refereed)
  • 44.
    Lau, HuiKeng
    et al.
    University of York.
    Bate, Iain
    University of York.
    Cairns, Paul
    University of York.
    Timmis, Jon
    University of York.
    Adaptive data-driven error detection in swarm robotics with statistical classifiers2011In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 59, no 12, p. 1021-1035Article in journal (Refereed)
    Abstract [en]

    Swarm robotics is an example of a complex system with interactions among distributed autonomous robots as well as with the environment. Within the swarm there is no centralised control; behaviour emerges from interactions between agents within the swarm. Agents within the swarm exhibit time-varying behaviour in dynamic environments and are subject to a variety of possible anomalies. The focus of our work is on specific faults in individual robots that can affect the global performance of the robotic swarm. We argue that classical approaches for achieving tolerance through implicit redundancy are insufficient in some cases and that additional measures should be explored. Our contribution is to demonstrate that tolerance through explicit detection with statistical techniques works well and is suitable due to its lightweight computation.

  • 45.
    Lau, HuiKeng
    et al.
    University of York.
    Bate, Iain
    University of York.
    Timmis, Jon
    University of York.
    An immuno-engineering approach for anomaly detection in swarm robotics2009In: Artificial Immune Systems: 8th International Conference, ICARIS 2009, York, UK, August 9-12, 2009. Proceedings, Springer Berlin/Heidelberg, 2009, p. 136-150Chapter in book (Other academic)
  • 46.
    Lau, HuiKeng
    et al.
    Applied Computing Group, SKTM, UMS, Malaysia.
    Bate, Iain
    Mälardalen University, School of Innovation, Design and Engineering.
    Timmis, Jon
    University of York, United Kingdom.
    Immune-Inspired Error Detection for Multiple Faulty Robots in Swarm Robotics2013In: Advances in Artificial Life, ECAL 2013: Proceedings of the Twelfth European Conference on the Synthesis and Simulation of Living Systems, MIT Press, 2013, Vol. 12, p. 846-853Conference paper (Refereed)
  • 47.
    Lau, HuiKeng
    et al.
    University of York.
    Timmis, Jon
    University of York.
    Bate, Iain
    University of York.
    Anomaly detection inspired by immune network theory: A proposal2009In: 2009 IEEE Congress on Evolutionary Computation, CEC 2009, 2009, p. 3045-3051Conference paper (Refereed)
    Abstract [en]

    Previous research in supervised and unsupervised anomaly detection normally employs a static model of normal behaviour (normal-model) throughout the lifetime of the system. However, there are real-world applications, such as swarm robotics and wireless sensor networks, where what is perceived as normal behaviour changes according to changes in the environment. To cater for such systems, dynamically updating the normal-model is required. In this paper, we examine the requirements from a range of distributed autonomous systems and then propose a novel unsupervised anomaly detection architecture, inspired by the vertebrate immune system, that is capable of online adaptation.
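    A dynamically updated normal-model of the kind this abstract argues for can be sketched with an exponentially weighted detector. This is a minimal stand-in for the proposed immune-inspired architecture, not its actual algorithm; the parameter values and the signal below are invented for illustration.

```python
import random

class AdaptiveDetector:
    """Online detector whose model of 'normal' (an exponentially
    weighted mean and variance) keeps adapting, so the anomaly
    threshold tracks a drifting environment."""

    def __init__(self, alpha=0.05, k=4.0, warmup=30):
        self.alpha = alpha    # adaptation rate of the normal-model
        self.k = k            # threshold, in estimated std deviations
        self.warmup = warmup  # observations learned unconditionally
        self.mean = None
        self.var = 0.0
        self.n = 0

    def observe(self, x):
        self.n += 1
        if self.mean is None:
            self.mean = x
            return False
        dev = x - self.mean
        std = self.var ** 0.5
        anomalous = self.n > self.warmup and std > 0 and abs(dev) > self.k * std
        if not anomalous:     # learn only from samples deemed normal
            self.mean += self.alpha * dev
            self.var = (1 - self.alpha) * (self.var + self.alpha * dev * dev)
        return anomalous

random.seed(0)
d = AdaptiveDetector()
flags = [d.observe(random.gauss(10.0, 0.5)) for _ in range(200)]
spike = d.observe(25.0)       # gross fault, far outside the normal band
```

    Because the mean and variance are re-estimated on every normal sample, a slow drift in the environment shifts the detection band with it, whereas a static normal-model would start flagging the drifted-but-healthy behaviour.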

  • 48.
    Lau, HuiKeng
    et al.
    University of York.
    Timmis, Jon
    University of York.
    Bate, Iain
    Mälardalen University, School of Innovation, Design and Engineering. University of York.
    Collective self-detection scheme for adaptive error detection in a foraging swarm of robots2011In: Artificial Immune Systems: 10th International Conference, ICARIS 2011, Cambridge, UK, July 18-21, 2011. Proceedings, Springer Berlin/Heidelberg, 2011, p. 254-267Chapter in book (Other academic)
    Abstract [en]

    In this paper we present a collective detection scheme using a receptor density algorithm to self-detect certain types of failure in swarm robotic systems. Key to any fault-tolerant system is its ability to be robust to failure and to have appropriate mechanisms to cope with a variety of such failures. In this work we present an error detection scheme based on T-cell signalling in which robots in a swarm collaborate by exchanging information about performance on a given task and self-detect errors within an individual. While this study is focused on deployment in a swarm robotic context, our approach could possibly be generalized to a wider variety of multi-agent systems.

  • 49.
    Lay, Nicholas
    et al.
    University of York.
    Bate, Iain
    University of York.
    Improving the reliability of real-time embedded systems using innate immune techniques2008In: Evolutionary Intelligence, Vol. 1, no 2, p. 113-132Article in journal (Refereed)
    Abstract [en]

    Previous work has shown that immune-inspired techniques have good potential for solving problems associated with the development of real-time embedded systems (RTES), where for various reasons traditional real-time development techniques are not suitable. This paper examines in more detail the general applicability of the Dendritic Cell Algorithm (DCA) to the problem of task scheduling in RTES. To make this possible, an understanding of the problem characteristics is formalised so that the results produced by the DCA can be examined in relation to the overall problem difficulty. The paper then presents a detailed analysis of how well the DCA performs, demonstrating that it generally performs well; however, it also clearly identifies properties of anomalies that are difficult to detect. These properties are as anticipated based on real-time scheduling theory.

  • 50.
    Lim, Tiong
    et al.
    University of York.
    Bate, Iain
    University of York.
    Timmis, Jon
    University of York.
    Validation of Performance Data using Experimental Verification Process in Wireless Sensor Network2012Conference paper (Refereed)
    Abstract [en]

    Testing a new network protocol experimentally in WSNs is an important step prior to deployment because theoretical models and assumptions made often differ from real environmental properties and performance. It is imperative to ensure that the results obtained from the test are reliable and that the performance observed in simulation is a valid representation of the real world. Thus there is a need to perform extensive experimental analysis and evaluation to produce results with an acceptable level of confidence. In this paper, we outline experimental, statistical, and analysis techniques that allow us to have some confidence that the results obtained are at least relevant to physical deployment. Using the results from hardware and software experiments, we apply our proposed Experimental Verification Process (EVP) to evaluate the performance of the Multimodal Routing Protocol (MRP) against Ad-hoc On-demand Distance Vector (AODV) and Not So Tiny-AODV (NST-AODV). With the EVP, we have improved the credibility of MRP.
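    One common statistical ingredient of such experimental validation is checking whether confidence intervals for a metric from simulation and hardware runs overlap. The sketch below uses a normal-approximation 95% interval; the overlap test, the approximation, and the packet-delivery figures are illustrative assumptions on our part, not the paper's actual EVP.

```python
from statistics import NormalDist, mean, stdev

def ci95(sample):
    """Normal-approximation 95% confidence interval for the mean."""
    z = NormalDist().inv_cdf(0.975)                # ≈ 1.96
    half = z * stdev(sample) / len(sample) ** 0.5
    m = mean(sample)
    return m - half, m + half

def intervals_overlap(a, b):
    """True if the two samples' 95% CIs overlap, i.e. the simulated
    metric is at least plausibly consistent with the hardware metric."""
    lo_a, hi_a = ci95(a)
    lo_b, hi_b = ci95(b)
    return lo_a <= hi_b and lo_b <= hi_a

# Hypothetical packet-delivery ratios from simulated vs. hardware runs.
sim = [0.91, 0.93, 0.90, 0.92, 0.94, 0.92, 0.91, 0.93]
hw  = [0.90, 0.92, 0.91, 0.89, 0.93, 0.92, 0.90, 0.91]
consistent = intervals_overlap(sim, hw)
```

    For small samples a Student's t interval, or a formal two-sample test, would be more defensible than the normal approximation used here.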
