mdh.se Publications
51 - 100 of 388
  • 51.
    Curuklu, Baran
    et al.
    Mälardalen University, Department of Computer Science and Electronics.
    Lansner, Anders
    Quantitative Assessment of the Local and Long-Range Horizontal Connections within the Striate Cortex (2003). In: IEEE Proceedings of the Computational Intelligence, Robotics and Autonomous System, 2003. Conference paper (Other academic)
  • 52.
    Curuklu, Baran
    et al.
    Mälardalen University, Department of Computer Science and Electronics.
    Lansner, Anders
    KTH, Sweden.
    Spike and Burst Synchronization in a Detailed Cortical Network Model with I-F Neurons (2001). In: Artificial Neural Networks — ICANN 2001, 2001, p. 1095-1102. Conference paper (Other academic)
    Abstract [en]

    Previous studies have suggested that synchronized firing is a prominent feature of cortical processing. Simplified network models have replicated such phenomena. Here we study to what extent these results are robust when more biological detail is introduced. A biologically plausible network model of a layer of the tree shrew primary visual cortex, with a columnar architecture and realistic values for unit adaptation, connectivity patterns, axonal delays and synaptic strengths, was investigated. A drifting grating stimulus provided afferent noisy input. It is demonstrated that under certain conditions, spike and burst synchronized activity between neurons situated in different minicolumns may occur.

  • 53.
    David, Alexandre
    et al.
    Aalborg University, Denmark.
    Håkansson, John
    Mälardalen University, Department of Computer Science and Electronics. Uppsala University, Sweden.
    Guldstrand Larsen, Kim
    Mälardalen University, Department of Computer Science and Electronics. Aalborg University, Denmark.
    Pettersson, Paul
    Mälardalen University, Department of Computer Science and Electronics. Uppsala University, Sweden.
    Model Checking Timed Automata with Priorities using DBM Subtraction (2006). In: Lecture Notes in Computer Science, vol. 4202, 2006, p. 128-142. Conference paper (Refereed)
    Abstract [en]

    In this paper we describe an extension of timed automata with priorities, and efficient algorithms to compute subtraction on DBMs (difference bounded matrices), needed in symbolic model-checking of timed automata with priorities. The subtraction is one of the few operations on DBMs that result in a non-convex set needing sets of DBMs for representation. Our subtraction algorithms are efficient in the sense that the number of generated DBMs is significantly reduced compared to a naive algorithm. The overhead in time is compensated by the gain from reducing the number of resulting DBMs since this number affects the performance of symbolic model-checking. The uses of the DBM subtraction operation extend beyond timed automata with priorities. It is also useful for allowing guards on transitions with urgent actions, deadlock checking, and timed games.
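
    For orientation, the decomposition that underlies such subtraction algorithms can be written out as a standard identity (the paper's contribution lies in ordering and pruning its terms, which is not captured here). A DBM B encodes a conjunction of bounds on clock differences, so its complement is the union of the negated bounds:

        A \setminus B = \bigcup_{(x_i - x_j \preceq b) \in B} \bigl( A \cap \{ x_j - x_i \prec -b \} \bigr)

    where the negated bound \prec is strict iff \preceq was non-strict. Each term is an intersection of two convex zones and hence a single DBM, while the union is in general non-convex, which is why sets of DBMs are needed for the result.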

  • 54.
    Dobrin, Radu
    Mälardalen University, Department of Computer Science and Electronics.
    Combining Off-line Schedule Construction and Fixed Priority Scheduling in Real-Time Computer Systems (2005). Doctoral thesis, monograph (Other scientific)
    Abstract [sv]

    Computers have become as common in society over the past ten years as the microwave oven in the home. Apart from the home PC, which is now found in almost every household, almost all electronics in the home (for example the DVD player or the TV set) or in the car are computer controlled. In the simplest cases, these systems consist of a computer and a number of programs that run on it.

    As these systems become more and more advanced, the demands on the computer's efficiency also increase, for example how many programs can run on the same computer simultaneously, while the price of the finished product must be kept as low as possible to suit the market.

    Some programs in a computer-controlled system are more important than others in that they must execute correctly with respect to both functionality and time. In cars, for example, it is extremely important that the programs controlling the airbag or the brakes always work as intended, whereas the CD changer is not as critical to passenger safety. Both the airbag and the CD changer must react to external events (a crash, or a press of the play button). While the airbag must be activated within a certain time interval, i.e., not before a crash, but not too long after a crash either, it hardly matters if an extra half second passes between the moment the play button is pressed and the moment the song starts playing. All these systems must be able to coexist without affecting each other negatively, i.e., if the CD changer stops working, it must not affect the functionality of the brakes.

    In some systems, the basic design makes it difficult to add further functionality, usually in the form of new programs. If, for example, one wants to add an anti-skid system to a car, to be controlled by the car's computer, one must be able to be certain that the rest of the programs running on the same computer, in particular the critical parts (e.g., the airbag), will still work flawlessly. On the other hand, the more programs that are added to the system, the harder it becomes for the computer to handle them. This usually leads to the need to upgrade the computer to a more powerful model that can easily handle the old programs. At the same time, one must still ensure that the programs work correctly. Guaranteeing that the new system, consisting of a new computer and the old programs, fulfils the requirements on correct functionality can be a very difficult task.

    In this work, we propose methods that make it possible, and easy, to carry out the tasks described above, i.e., to extend the functionality of existing computer systems, or to upgrade the systems while the critical behaviour is guaranteed. In addition, we introduce methods for improving the efficiency of existing computer-controlled systems in use today in, for example, the automotive and aircraft industries.

  • 55.
    Dobrin, Radu
    et al.
    Mälardalen University, Department of Computer Science and Electronics.
    Fohler, Gerhard
    Mälardalen University, Department of Computer Science and Electronics.
    Handling Non-periodic Events Together with Complex Constrained Tasks in Distributed Real-Time Systems (2007). Conference paper (Refereed)
    Abstract [en]

    In this paper we show how off-line scheduling and fixed priority scheduling (FPS) can be combined to get the advantages of both: the capability to cope with complex timing constraints while providing run-time flexibility. We present a method to take advantage of the flexibility provided by FPS while guaranteeing complex constraint satisfaction on periodic tasks. We provide mechanisms to include FPS servers in our previous work, to handle non-periodic events, while still fulfilling the original complex constraints on the periodic tasks.

    In some cases, e.g., when the complex constraints cannot be expressed directly by FPS, we split tasks into instances (artifacts) to obtain a new task set with consistent FPS attributes. Our method is optimal in the sense that it keeps the number of artifacts minimized.

  • 56.
    Dodig-Crnkovic, Gordana
    Mälardalen University, Department of Computer Science and Electronics.
    Bildning & Computing (2006). Conference paper (Refereed)
  • 57.
    Dodig-Crnkovic, Gordana
    Mälardalen University, Department of Computer Science and Electronics.
    Epistemology Naturalized: The Info-Computationalist Approach (2007). In: APA Newsletter on Philosophy and Computers, Vol. 06, p. 9-13. Article in journal (Refereed)
  • 58.
    Dodig-Crnkovic, Gordana
    Mälardalen University, Department of Computer Science and Electronics.
    Ethics and Privacy of Communications in Global E-Village (2006). In: ENCYCLOPEDIA OF DIGITAL GOVERNMENT, Idea Group Inc., 2006. Chapter in book (Other academic)
  • 59.
    Dodig-Crnkovic, Gordana
    Mälardalen University, Department of Computer Science and Electronics.
    Investigations into Information Semantics and Ethics of Computing (2006). Doctoral thesis, comprehensive summary (Other scientific)
    Abstract [en]

    The recent development of the research field of Computing and Philosophy has triggered investigations into the theoretical foundations of computing and information.

    This thesis consists of two parts which are the result of studies in two areas of Philosophy of Computing (PC) and Philosophy of Information (PI) regarding the production of meaning (semantics) and the value system with applications (ethics).

    The first part develops a unified dual-aspect theory of information and computation, in which information is characterized as structure, and computation is the information dynamics. This enables naturalization of epistemology, based on interactive information representation and communication. In the study of systems modeling, meaning, truth and agency are discussed within the framework of the PI/PC unification.

    The second part of the thesis addresses the necessity of ethical judgment in rational agency illustrated by the problem of information privacy and surveillance in the networked society. The value grounds and socio-technological solutions for securing trustworthiness of computing are analyzed. Privacy issues clearly show the need for computing professionals to contribute to understanding of the technological mechanisms of Information and Communication Technology.

    The main original contribution of this thesis is the unified dual-aspect theory of computation/information. Semantics of information is seen as a part of the data-information-knowledge structuring, in which complex structures are self-organized by the computational processing of information. Within the unified model, complexity is a result of computational processes on informational structures. The thesis argues for the necessity of computing beyond the Turing-Church limit, motivated by natural computation, and more broadly by pancomputationalism and paninformationalism, seen as two complementary views of the same physical reality. Moreover, it follows that pancomputationalism does not depend on the assumption that the physical world on some basic level is digital. Contrary to many beliefs, it is entirely compatible with dual (analogue/digital) quantum-mechanical computing.

  • 60.
    Dodig-Crnkovic, Gordana
    Mälardalen University, Department of Computer Science and Electronics.
    Knowledge as Computation in vivo: Semantics vs. Pragmatics as Truth vs. Meaning (2006). In: Proceedings from Computers & Philosophy, 2006, p. 202-215. Conference paper (Refereed)
    Abstract [en]

    Following the worldwide increase in communications through computer networking, not only economies, entertainment, and arts but also research and education are transforming into global systems. Attempts to automate knowledge discovery and enable the communication between computerized knowledge bases encounter the problem of the incompatibility of syntactically identical expressions of different semantic and pragmatic provenance. Coming from different universes, terms with the same spelling may have a continuum of meanings. The formalization problem is related to the characteristics of the natural language semantic continuum. The human brain has through its evolution developed the capability to communicate via natural languages. We need computers able to communicate in similar, more flexible ways, which calls for a new and broader understanding far beyond the limits of formal axiomatic reasoning that characterize the Turing machine paradigm. This paper argues for the need for a new approach to the ideas of truth and meaning based on logical pluralism, as a consequence of the new interactive understanding of computing, which necessitates going far beyond the Turing limit.

  • 61.
    Dodig-Crnkovic, Gordana
    Mälardalen University, Department of Computer Science and Electronics.
    Knowledge Generation as Natural Computation (2007). In: WMSCI 2007 - The 11th World Multi-Conference on Systemics, Cybernetics and Informatics, Jointly with the 13th International Conference on Information Systems Analysis and Synthesis, ISAS 2007 - Proc., 2007, p. 240-244. Conference paper (Refereed)
    Abstract [en]

    Knowledge generation can be naturalized by adopting a computational model of cognition and an evolutionary approach. In this framework, knowledge is seen as a result of the structuring of input data (data -> information -> knowledge) by an interactive computational process going on in the agent during the adaptive interplay with the environment, which clearly presents a developmental advantage by increasing the agent's ability to cope with the situation dynamics. This paper addresses the mechanism of knowledge generation, a process that may be modeled as natural computation in order to be better understood and improved.

  • 62.
    Dodig-Crnkovic, Gordana
    Mälardalen University, Department of Computer Science and Electronics.
    Model Validity and Semantics of Information (2006). Conference paper (Refereed)
  • 63.
    Dodig-Crnkovic, Gordana
    Mälardalen University, Department of Computer Science and Electronics.
    On the Importance of Teaching Professional Ethics to Computer Science Students (2006). Conference paper (Refereed)
  • 64.
    Dodig-Crnkovic, Gordana
    Mälardalen University, Department of Computer Science and Electronics.
    Philosophy of Information, a New Renaissance and the Discreet Charm of the Computational Paradigm (2005). In: L. Magnani, Computing, Philosophy and Cognition - Selected Papers from E-CAP 2004, 2005. Conference paper (Refereed)
  • 65.
    Dodig-Crnkovic, Gordana
    Mälardalen University, Department of Computer Science and Electronics.
    Privacy and Protection of Personal Integrity in the Working Place (2006). Conference paper (Refereed)
    Abstract [en]

    Privacy and surveillance is a topic of growing importance for working places. Today's rapid technical development has a considerable impact on privacy. The aim of this paper is an analysis of the relation between privacy and workplace surveillance. The existing techniques, laws and ethical theories and practices are considered.

    The workplace is an official place par excellence. With modern technology it is easy to identify and keep under surveillance individuals at the workplace, where everything from security cameras to programs for monitoring of computer usage may bring about nearly total control of the employees and their work effort.

    How much privacy can we expect at our workplaces? Can electronic methods of monitoring and surveillance be ethically justified? A critical analysis of the idea of privacy protection versus surveillance or monitoring of employees is presented.

    One central aspect of the problem is the trend toward the disappearance of boundaries between private and professional life. Users today may work at their laptop computers at any place. People send their business e-mails from their homes, even while travelling or on vacation. How can a strict division be made between private and official information in a future world pervaded with ubiquitous computers?

    The important fact is that not everybody is aware of the existence of surveillance, and even fewer people are familiar with privacy-protection methods. That is something which demands knowledge as well as engagement. The privacy right of the workforce is grounded in the fundamental human right of privacy recognized in all major international agreements regarding human rights, such as Article 12 of the Universal Declaration of Human Rights (United Nations, 1948).

    The conclusion is that trust must be established globally in the use of ICT (information and communication technology), so that both users (cultural aspect) and the technology will be trustworthy. That is a long-term project which has already started.

  • 66.
    Dodig-Crnkovic, Gordana
    Mälardalen University, Department of Computer Science and Electronics.
    Professional ethics in computing and intelligent systems (2006). In: Publications of the Finnish Artificial Intelligence Society 2006, 2006, p. 11-16. Conference paper (Refereed)
    Abstract [en]

    Research and engineering have a decisive impact on the development of society, providing not only the material artifacts, but also the ideas and other "tools of thought" used to conceptualize and relate to the world. Scientists and engineers are therefore required to take into consideration the welfare, safety and health of the public affected by their professional activities. Research and Engineering Ethics are highly relevant for the field of computing (with Intelligent Systems/AI as its subfield). Computing Ethics has thus been developed as a particular branch of Applied Ethics. By professional organizations, ethical judgment is considered an essential component of professionalism. This paper will point out the significance of teaching ethics, especially for the future AI professionals. It argues that education in Ethics should be incorporated in computing curricula. Experience from the course "Professional Ethics in Science and Engineering" given at Mälardalen University in Sweden is presented as an illustration.

  • 67.
    Dodig-Crnkovic, Gordana
    Mälardalen University, Department of Computer Science and Electronics.
    Semantics of Information as Interactive Computation (2006). In: Minds and Machines: Special Issue on the Philosophy of Computer Science. Article in journal (Refereed)
  • 68.
    Dodig-Crnkovic, Gordana
    Mälardalen University, Department of Computer Science and Electronics.
    Shifting the Paradigm of the Philosophy of Science: the Philosophy of Information and a New Renaissance (2003). In: Minds and Machines, ISSN 0924-6495, E-ISSN 1572-8641, Vol. 13, no 4, p. 521-536. Article in journal (Refereed)
    Abstract [en]

    Computing is changing the traditional field of Philosophy of Science in a very profound way. First, as a methodological tool, computing makes possible "experimental Philosophy", which is able to provide practical tests for different philosophical ideas. At the same time, the ideal object of investigation of the Philosophy of Science is changing. For a long period of time the ideal science was Physics (e.g., Popper, Carnap, Kuhn, and Chalmers). Now the focus is shifting to the field of Computing/Informatics. There are many good reasons for this paradigm shift, one of them being a long-standing need for a new meeting between the sciences and humanities, for which the new discipline of Computing/Informatics gives innumerable possibilities. Contrary to Physics, Computing/Informatics is very much human-centered. It brings a potential for a new Renaissance, where Science and Humanities, Arts and Engineering can reach a new synthesis, so very much needed in our intellectually split culture. This paper investigates contemporary trends and the relation between the Philosophy of Science and the Philosophy of Computing and Information, which is equivalent to the present relation between Philosophy of Science and Philosophy of Physics.

  • 69.
    Dodig-Crnkovic, Gordana
    Mälardalen University, Department of Computer Science and Electronics.
    System Modeling and Information Semantics (2006). In: Promote IT 2005: Proceedings of the fifth Conference for the Promotion of Research in IT at New Universities and University Colleges in Sweden, Lund: Studentlitteratur, 2006. Chapter in book (Other academic)
  • 70.
    Dodig-Crnkovic, Gordana
    Mälardalen University, Department of Computer Science and Electronics.
    Where do New Ideas Come From? How do They Emerge? Epistemology as Computation (Information Processing) (2007). In: Randomness & Complexity, from Leibniz to Chaitin, World Scientific, 2007, p. 263-279. Chapter in book (Other academic)
    Abstract [en]

    This essay presents arguments for the claim that in the best of all possible worlds (Leibniz) there are sources of unpredictability and creativity for us humans, even given a pancomputational stance. A suggested answer to Chaitin's questions: "Where do new mathematical and biological ideas come from? How do they emerge?" is that they come from the world and emerge from basic physical (computational) laws. For humans as a tiny subset of the universe, a part of the new ideas comes as the result of the re-configuration and reshaping of already existing elements and another part comes from the outside as a consequence of openness and interactivity of the system. For the universe at large it is randomness that is the source of unpredictability on the fundamental level. In order to be able to completely predict the Universe-computer we would need the Universe-computer itself to compute its next state; as Chaitin has already demonstrated, there are incompressible truths, which means truths that cannot be computed by any other computer but the universe itself.

  • 71.
    Dodig-Crnkovic, Gordana
    et al.
    Mälardalen University, Department of Computer Science and Electronics.
    Crnkovic, Ivica
    Mälardalen University, Department of Computer Science and Electronics.
    Increasing Interdisciplinarity by Distance Learning: Examples Connecting Economics with Software Engineering, and Computing with Philosophy (2007). In: e-mentor, ISSN 1731-6758, Vol. 19, p. 94-100. Article in journal (Refereed)
    Abstract [en]

    This paper presents two distance courses aimed at promoting interdisciplinarity. The first was an internet-based distance undergraduate course in software engineering and management of software development projects for students of management and economics. The goal of the course was to bridge the gap between the disciplines of economics (management) and software engineering, transfer knowledge, and provide the necessary technical background for future managers who will very likely take part in software-intensive projects in their careers. Both the interdisciplinarity and the advanced e-learning technology of this course made it challenging. The second was a specialized-level Swedish National Course in Philosophy of Computing and Informatics for students of computing, philosophy and design, which was a combination of a campus-based and a distance course involving several Swedish universities, with a group of distinguished teachers from both Sweden and abroad. The critical challenge of this course was establishing a new inter-discipline and bridging the gaps between traditions of disciplinary thinking.

  • 72.
    Dodig-Crnkovic, Gordana
    et al.
    Mälardalen University, Department of Computer Science and Electronics.
    Horniak, Virginia
    Ethics and Privacy of Communications in the E-Polis (2007). In: ENCYCLOPEDIA OF DIGITAL GOVERNMENT / [ed] Ari-Veikko Anttiroiko (University of Tampere, Finland) and Matti Malkia (The Police College of Finland, Finland), Hershey, PA: Idea Group Publishing, 2007, p. 740-744. Chapter in book (Other academic)
    Abstract [en]

    The electronic networking of physical space promises wide-ranging advances in science, medicine, delivery of services, environmental monitoring and remediation, industrial production and the monitoring of persons and machines. It can also lead to new forms of social interaction. However, without appropriate architecture and regulatory controls, it can also subvert democratic values. Information technology is not, in fact, neutral in its values; we must be intentional about design for democracy (Pottie, 2004). Information and communication technology (ICT) has led to the emergence of global Web societies. The subject of this article is privacy and its protection in the process of urbanization and socialization of the global digital Web society referred to as the e-polis. Privacy is a fundamental human right recognized in all major international agreements regarding human rights, such as Article 12 of the Universal Declaration of Human Rights (United Nations, 1948), and it is discussed in the article “Different Views of Privacy”. Today’s computer network technologies are sociologically founded on hunter-gatherer principles. As a result, common users may be possible subjects of surveillance and sophisticated Internet-based attacks. A user may be completely unaware of such privacy breaches taking place. At the same time, ICT offers the technical possibilities of embedded privacy protection obtained by making technology trustworthy and legitimate by design. This means incorporating options for socially acceptable behavior in technical systems, and making privacy protection rights and responsibilities transparent to the user. The ideals of democratic government must be respected and even further developed in the future e-government. Ethical questions and privacy of communications require careful analysis, as they have far-reaching consequences affecting the basic principles of e-democracy. 

  • 73.
    Dodig-Crnkovic, Gordana
    et al.
    Mälardalen University, Department of Computer Science and Electronics.
    Horniak, Virginia
    Mälardalen University, Department of Computer Science and Electronics.
    Togetherness and Respect - Ethical Concerns of Privacy in Global Web Societies (2006). In: AI & Society, ISSN 0951-5666, Vol. 20, no 3, p. 372-383. Article in journal (Refereed)
    Abstract [en]

    Today's computer network technologies are sociologically founded on hunter-gatherer principles; common users may be possible subjects of surveillance, and sophisticated internet-based attacks are almost impossible to prevent. At the same time, information and communication technology, ICT, offers the technical possibility of embedded privacy protection. Making technology legitimate by design is a part of the intentional design for democracy. This means incorporating options for socially acceptable behaviour in technical systems, and making the basic principles of privacy protection, rights and responsibilities, transparent to the user. The current global e-polis already has, by means of different technologies, de facto built-in policies that define the level of user-privacy protection. That which remains is to make their ethical implications explicit and understandable to citizens of the global village through interdisciplinary disclosive ethical methods, and to make them correspond to the high ethical norms that support trust, the essential precondition of any socialization. The good news is that research along these lines is already in progress. Hopefully, this will result in a future standard approach to the privacy of network communications.

  • 74.
    Dodig-Crnkovic, Gordana
    et al.
    Mälardalen University, Department of Computer Science and Electronics.
    Stuart, Susan
    Mälardalen University, Department of Computer Science and Electronics. University of Glasgow, UK.
    Special Issue: Selected Papers From ECAP 2005 - European Computing and Philosophy Conference 2005 (2006). In: tripleC, ISSN 1726-670X, Vol. 4, no 2, p. i-ii. Article in journal (Other academic)
  • 75.
    Dunkels, Adam
    Mälardalen University, Department of Computer Science and Electronics.
    Programming Memory-Constrained Networked Embedded Systems (2007). Doctoral thesis, comprehensive summary (Other scientific)
    Abstract [en]

    Ten years after the Internet revolution, we are standing on the brink of another revolution: networked embedded systems that connect the physical world with computers, enabling new applications ranging from environmental monitoring and wildlife tracking to improvements in health care and medicine. 98% of all microprocessors sold today are used in embedded systems. Those systems have much smaller amounts of memory than PC computers. An embedded system may have as little as a few hundred bytes of memory, which makes programming them a challenge.

    This thesis focuses on three topics regarding programming memory-constrained networked embedded systems: TCP/IP for memory-constrained networked embedded systems, simplifying event-driven programming of memory-constrained systems, and dynamic loading of program modules in my Contiki operating system for memory-constrained systems. I show that the TCP/IP protocol stack can, contrary to previous belief, be used in memory-constrained embedded systems by implementing two small TCP/IP protocol stacks, lwIP and uIP.

    I present a novel programming mechanism called protothreads, which I show significantly reduces the complexity of event-driven programming for memory-constrained systems. Protothreads provide a conditional blocked wait mechanism on top of event-driven systems with a much smaller memory overhead than full multithreading; each protothread requires only two bytes of memory.

    I show that dynamic linking of native code in the standard ELF object code format is feasible for wireless sensor networks by implementing a dynamic linker in the Contiki operating system. The results show that the energy overhead of dynamic linking of ELF files is mainly due to the ELF file format and not to the dynamic linking mechanism as such.

    The impact of the research in this thesis has been and continues to be large. The software I have developed as part of this thesis is currently used by hundreds of companies in embedded devices in such diverse systems as car engines and satellites. The papers in this thesis are included as required reading in advanced courses on networked embedded systems and wireless sensor networks.
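
    The protothread mechanism mentioned in this abstract can be made concrete with a minimal sketch. The macro names below mirror the published protothreads library, but this is a compressed, illustrative rendition rather than the actual Contiki implementation: the two-byte "local continuation" stores the source line at which the protothread last blocked, and a switch statement resumes execution there on the next invocation.

        #include <stdio.h>

        typedef unsigned short lc_t;        /* the two bytes per protothread */
        struct pt { lc_t lc; };

        /* Expands to a switch over the stored line number. */
        #define PT_BEGIN(p)         switch ((p)->lc) { case 0:
        /* Records the current line, then blocks (returns 0) until c holds. */
        #define PT_WAIT_UNTIL(p, c) (p)->lc = __LINE__; case __LINE__: if (!(c)) return 0
        #define PT_END(p)           } (p)->lc = 0; return 1

        static int counter;

        /* Returns 0 while blocked, 1 when it has run to completion. */
        static int wait_for_five(struct pt *p) {
          PT_BEGIN(p);
          PT_WAIT_UNTIL(p, counter >= 5);   /* "blocks" across invocations */
          printf("counter reached %d\n", counter);
          PT_END(p);
        }

        int main(void) {
          struct pt p = { 0 };
          while (!wait_for_five(&p))        /* driven from an event loop */
            counter++;
          return 0;
        }

    The conditional blocking wait costs no stack: all that survives between invocations is the two-byte line number, which is the memory figure quoted in the abstract.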

  • 76.
    Dunkels, Adam
    Mälardalen University, Department of Computer Science and Electronics.
    Towards TCP/IP for wireless sensor networks (2005). Licentiate thesis, comprehensive summary (Other scientific)
  • 77.
    Dunkels, Adam
    et al.
    Mälardalen University, Department of Computer Science and Electronics. Swedish Institute of Computer Science, Box 1263, SE-164 29 Kista, Sweden.
    Voigt, Thiemo
    Swedish Institute of Computer Science, Box 1263, SE-164 29 Kista, Sweden.
    Alonso, Juan
    Swedish Institute of Computer Science, Box 1263, SE-164 29 Kista, Sweden.
    Ritter, Hartmut
    Institute of Computer Science, Freie Universität Berlin, Takustr. 9, D-14195 Berlin, Germany.
    Schiller, Jochen
    Institute of Computer Science, Freie Universität Berlin, Takustr. 9, D-14195 Berlin, Germany.
    Connecting wireless sensornets with TCP/IP Networks (2004). In: Wired/Wireless Internet Communications: Second International Conference, WWIC 2004, Frankfurt (Oder), Germany, February 4-6, 2004. Proceedings / [ed] Peter Langendoerfer et al., Berlin Heidelberg, 2004, p. 143-152. Conference paper (Refereed)
    Abstract [en]

    Wireless sensor networks are based on the collaborative efforts of many small wireless sensor nodes, which collectively are able to form networks through which sensor information can be gathered. Such networks usually cannot operate in complete isolation, but must be connected to an external network through which monitoring and controlling entities can reach the sensornet. As TCP/IP, the Internet protocol suite, has become the de-facto standard for large-scale networking, it is interesting to be able to connect sensornets to TCP/IP networks. In this paper, we discuss three different ways to connect sensor networks with TCP/IP networks: proxy architectures, DTN overlays, and TCP/IP for sensor networks. We conclude that the methods are in some senses orthogonal and that combinations are possible, but that TCP/IP for sensor networks currently has a number of issues that require further research before TCP/IP can be a viable protocol family for sensor networking.

  • 78.
    Ekdahl, Fredrik
    et al.
    Mälardalen University, Department of Computer Science and Electronics.
    Larsson, Stig
    Mälardalen University, Department of Computer Science and Electronics.
    Experience Report: Using Internal CMMI Appraisals to Institutionalize Software Development Performance Improvement (2006). In: Proceedings - 32nd Euromicro Conference on Software Engineering and Advanced Applications, SEAA, 2006, p. 216-222. Conference paper (Refereed)
    Abstract [en]

    Critical to any successful performance improvement initiative is to achieve a state of continuous or institutionalized improvement. Some improvement can happen quickly, but long-term improvement is typically a matter of sustaining focus. This requires an infrastructure that keeps activities focused and drives them forward. In ABB, the IDEAL(SM) model is used as a guide for setting up improvement activities in development centers. Central to the IDEAL(SM) model is the diagnostic activity, i.e. the evaluation of current performance in the unit against a suitable reference model. Over the last eight years, ABB has used diagnostics in the form of internal CMM/CMMI appraisals to lay the foundation for improvement activities. In this experience report, the use of internal appraisals as a means for sustaining improvement focus will be discussed. Experiences and lessons learnt, as well as some of the specifics of ABB's internal appraisals, will be presented.

  • 79.
    Ekelin, Svante
    et al.
    Nilsson, Martin
    Hartikainen, Erik
    Johnsson, Andreas
    Mälardalen University, Department of Computer Science and Electronics.
    Mångs, Jan-Erik
    Melander, Bob
    Björkman, Mats
    Real-time Measurement of End-to-End Available Bandwidth Using Kalman Filtering. Manuscript (Other academic)
  • 80.
    Ekelin, Svante
    et al.
    Mälardalen University, Department of Computer Science and Electronics. Ericsson Research, Stockholm, Sweden.
    Nilsson, Martin
    Ericsson Research, Stockholm, Sweden.
    Hartikainen, Erik
    Ericsson Research, Stockholm, Sweden.
    Johnsson, Andreas
    Mälardalen University, Department of Computer Science and Electronics.
    Mångs, Jan-Erik
    Ericsson Research, Stockholm, Sweden.
    Melander, Bob
    Ericsson Research, Stockholm, Sweden.
    Björkman, Mats
    Ericsson Research, Stockholm, Sweden.
    Real-time Measurement of End-to-End Available Bandwidth Using Kalman Filtering (2006). In: IEEE Symposium Record on Network Operations and Management Symposium 2006, 2006, p. 73-84. Conference paper (Refereed)
    Abstract [en]

    This paper presents a new method, BART (Bandwidth Available in Real-Time), for estimating the end-to-end available bandwidth over a network path. It estimates bandwidth quasi-continuously, in real-time. The method has also been implemented as a tool. It relies on self-induced congestion, and repeatedly samples the available bandwidth of the network path with sequences of probe packet pairs, sent at randomized rates. BART requires little computation in each iteration, is light-weight with respect to memory requirements, and adds only a small amount of probe traffic. The BART method uses Kalman filtering, which enables real-time estimation (a.k.a. tracking). It maintains a current estimate, which is incrementally improved with each new measurement of the inter-packet time separations in a sequence of probe packet pairs. The measurement model has a strong non-linearity, and would not at first sight be considered suitable for Kalman filtering, but we show how this non-linearity can be handled. BART may be tuned according to the specific needs of the measurement application, such as agility vs. stability of the estimate. We have tested an implementation of BART in a physical test network with carefully controlled cross traffic, with good accuracy and agreement. Test measurements have also been performed over the Internet. We compare the performance of BART with that of pathChirp, a state-of-the-art tool for measuring end-to-end available bandwidth in real-time.
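
    As a rough illustration of the filtering step, the sketch below implements a generic Kalman measurement update for a two-parameter linear model z = alpha*u + beta + noise, in the spirit of tracking a rate-dependent quantity such as inter-packet strain at probe rate u. The state model, tuning, and BART's handling of the non-linearity are omitted; this is an assumption-laden sketch, not the published BART algorithm.

        #include <stdio.h>

        /* State x = (alpha, beta) with 2x2 covariance P. */
        typedef struct { double x[2]; double P[2][2]; } kstate_t;

        /* One measurement update: z is the observation at "rate" u,
           r the measurement noise variance. */
        static void kalman_update(kstate_t *s, double u, double z, double r) {
          double H[2] = { u, 1.0 };                      /* measurement row */
          double y = z - (H[0]*s->x[0] + H[1]*s->x[1]);  /* innovation */
          double PHt[2] = { s->P[0][0]*H[0] + s->P[0][1]*H[1],
                            s->P[1][0]*H[0] + s->P[1][1]*H[1] };
          double S = H[0]*PHt[0] + H[1]*PHt[1] + r;      /* innovation variance */
          double K[2] = { PHt[0]/S, PHt[1]/S };          /* Kalman gain */
          s->x[0] += K[0]*y;
          s->x[1] += K[1]*y;
          double P[2][2];                                /* P := (I - K H) P */
          P[0][0] = (1-K[0]*H[0])*s->P[0][0] - K[0]*H[1]*s->P[1][0];
          P[0][1] = (1-K[0]*H[0])*s->P[0][1] - K[0]*H[1]*s->P[1][1];
          P[1][0] = -K[1]*H[0]*s->P[0][0] + (1-K[1]*H[1])*s->P[1][0];
          P[1][1] = -K[1]*H[0]*s->P[0][1] + (1-K[1]*H[1])*s->P[1][1];
          s->P[0][0] = P[0][0]; s->P[0][1] = P[0][1];
          s->P[1][0] = P[1][0]; s->P[1][1] = P[1][1];
        }

        int main(void) {
          kstate_t s = { {0.0, 0.0}, {{100.0, 0.0}, {0.0, 100.0}} };
          double us[] = {10, 20, 30, 40};                /* synthetic probe rates */
          double zs[] = {3.1, 7.9, 13.2, 17.8};          /* noisy z = 0.5u - 2 */
          for (int i = 0; i < 4; i++)
            kalman_update(&s, us[i], zs[i], 1.0);
          printf("alpha=%.3f beta=%.3f\n", s.x[0], s.x[1]);
          return 0;
        }

    Each call refines the running estimate, which matches the abstract's description of an estimate that is incrementally improved with each new probe-pair measurement.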

  • 81.
    Ekman, M.
    et al.
    Bombardier Transportation, Västerås.
    Thane, Henrik
    Mälardalen University, Department of Computer Science and Electronics.
    Real-time dynamic relinking (2008). In: IPDPS Miami 2008 - Proceedings of the 22nd IEEE International Parallel and Distributed Processing Symposium, Program and CD-ROM, 2008, article number 4536570. Conference paper (Refereed)
    Abstract [en]

    In this paper we present a method for automatically, on demand, linking entire functions into statically linked running embedded multi-tasking real-time applications. The purpose is to facilitate dynamic instrumentation of deployed systems. The method makes it possible to dynamically instrument the target in run-time, without preparing the source code. Code segments that are modified are substituted on the function level by the introduction of a dynamic relink method. The actual modification of the execution binary is performed in a safe and controlled manner by a low interference task. An algorithm is introduced for reusing memory from obsolete functions.
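
    The paper patches the executing binary directly, which is architecture-specific. As a portable stand-in for the core idea, function-level substitution at run-time, the hypothetical sketch below routes calls through an indirection slot that a "relinker" repoints on demand; the names and the table-based design are illustrative assumptions, not the binary-modification method of the paper.

        #include <stdio.h>

        typedef int (*filter_fn)(int);

        static int filter_v1(int x) { return x; }   /* deployed version */
        static int filter_v2(int x) {               /* instrumented version */
          printf("probe: x=%d\n", x);
          return x;
        }

        /* The "link" slot all callers go through. */
        static filter_fn volatile filter = filter_v1;

        static void relink_filter(filter_fn replacement) {
          /* On most targets this is a single aligned pointer store, so
             running tasks observe either the old or the new function,
             never a mix; the paper obtains a comparable safety guarantee
             with a dedicated low-interference task rewriting the binary. */
          filter = replacement;
        }

        int main(void) {
          printf("%d\n", filter(1));   /* original behaviour */
          relink_filter(filter_v2);    /* instrument on demand */
          printf("%d\n", filter(2));   /* instrumented behaviour */
          return 0;
        }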

  • 82.
    Ekman, Mathias
    et al.
    Bombardier Transportation, 721 73 Västerås, Sweden .
    Thane, Henrik
    Mälardalen University, Department of Computer Science and Electronics.
    Dynamic Patching of Embedded Software (2007). In: Proceedings of the 13th IEEE Real Time and Embedded Technology and Applications Symposium, 2007, p. 337-346. Conference paper (Refereed)
    Abstract [en]

    In this paper, we present a method for patching embedded multitasking real-time systems applications during runtime, for instrumentation purposes. The method uses binary modification techniques and automates the entire patch process. The method makes it possible to insert and remove instrumentation code without preparing the original source code. The method makes it possible to invoke code patches during run-time, without having to rely on dynamic linking of object files, or predeployment prepared dormant code. The actual modification of the executing target binary is performed in a safe and controlled manner by a dedicated low interference mutation task.

  • 83.
    El Shobaki, Mohammed
    Mälardalen University, Department of Computer Science and Electronics.
    On-Chip Monitoring for Non-Intrusive Hardware/Software Observability (2004). Licentiate thesis, monograph (Other scientific)
    Abstract [en]

    The increased integration of hardware and software components in today's state-of-the-art computer systems makes them complex and hard to analyse, test, and debug. Moreover, the advances in hardware technology give system designers enormous possibilities to explore hardware as a means to implement performance-demanding functionality. We see examples of this trend in novel microprocessors, and Systems-on-Chip, that comprise reconfigurable logic allowing for hardware/software co-design. To succeed in developing computer systems based on these premises, it is paramount to have efficient design tools and methods.

    An important aspect in the development process is observability, i.e., the ability to observe the system's behaviour at various levels of detail. These observations are required for many applications: when looking for design errors, during debugging, during performance assessments and fine-tuning of algorithms, for extraction of design data, and a lot more. In real-time systems, and computers that allow for concurrent process execution, the observability must be obtained without compromising the system's functional and timing behaviour.

    In this thesis we propose a monitoring system that can be applied for non-intrusive run-time observations of real-time and concurrent computer systems. The monitoring system, designated Multipurpose/Multiprocessor Application Monitor (MAMon), is based on a hardware probe unit (IPU) which is integrated with the observed system's hardware. The IPU collects process-level events from a hardware-implemented Real-Time Kernel (RTK), without perturbing the system, and transfers the events to an external computer for analysis, debugging, and visualisation. Moreover, the MAMon concept also features hybrid monitoring for collection of more fine-grained information, such as program instructions and data flows.

    We describe MAMon's architecture, the implementation of two hardware prototypes, and validation of the prototypes in different case-studies. The main conclusion is that process-level events can be traced non-intrusively by integrating the IPU with a hardware RTK. A subsidiary conclusion, but yet relevant, is that the IPU's small footprint makes it attractive for SoC designs, as it provides increased system observability for a low hardware cost.
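
    To make the collected data concrete, here is a hypothetical sketch of a process-level event record as an external analysis tool might receive it from a hardware probe; the field layout is an illustrative assumption, not MAMon's actual format.

        #include <stdint.h>
        #include <stdio.h>

        /* Kinds of process-level events a hardware-implemented RTK
           could expose to a probe unit (illustrative selection). */
        enum event_type { EV_TASK_SWITCH, EV_TASK_READY, EV_SEM_TAKE, EV_SEM_GIVE };

        struct trace_event {
          uint32_t timestamp;   /* cycle counter sampled in hardware */
          uint16_t task_id;     /* task involved in the event */
          uint8_t  type;        /* enum event_type */
          uint8_t  arg;         /* e.g. semaphore id or priority */
        };

        int main(void) {
          struct trace_event e = { 123456u, 7, EV_TASK_SWITCH, 0 };
          printf("%u bytes per event record\n", (unsigned)sizeof e);
          return 0;
        }

    Because records like these are captured by dedicated hardware rather than by instrumentation code, the observed system executes unperturbed, which is the non-intrusiveness argument of the thesis.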

  • 84.
    Eldh, Sigrid
    Mälardalen University, Department of Computer Science and Electronics.
    How to Save on Quality Assurance – Challenges in Software Testing (2006). In: Jornadas sobre Testeo de Software. Article in journal (Refereed)
  • 85.
    Eldh, Sigrid
    Mälardalen University, Department of Computer Science and Electronics.
    How to save on quality assurance challenges in software testing (2006). Conference paper (Refereed)
    Abstract [en]

    Quality assurance, and in particular software testing and verification, are areas that still have much to offer to the industry. Companies that develop software need to improve their skills in this area to get the best return on investment. Important future strategies for survival are to collaborate with academia to find solutions to several difficult problems within software testing. Some of the areas and experiences in software testing that need to be improved from an industry perspective are discussed, such as test automation and component test. We have created a way to improve designers' testing, which we call software quality rank. This ranking system takes known research results, including knowledge and tools from static and run-time analysis, and makes them work in industry. The software quality rank aims to improve testing on the component level. We have saved a lot of money by improving the test area, and we share some of the lessons learned to aid other businesses with the same endeavor.

  • 86.
    Eldh, Sigrid
    Mälardalen University, Department of Computer Science and Electronics.
    On Evaluating Test Techniques in an Industrial Setting (2007). Licentiate thesis, comprehensive summary (Other scientific)
    Abstract [en]

    Testing is a costly and important activity in the software industry today. Systems are becoming more complex and the amount of code is constantly increasing. The majority of systems need to rely on testing to show that they work, are reliable, and perform according to user expectations and specifications.

    Testing is performed in a multitude of ways, using different test approaches. How testing is conducted becomes essential when time is limited, since exhaustive testing is not an option in large complex systems. Therefore, the design of the individual test case – and what part and aspect of the system it exercises – is the main focus of testing. Not only do we need to create and execute test cases efficiently, but we also want them to expose important faults in the system. This main topic of testing has long been a focus of practitioners in industry, and there exist over 70 test techniques that aim to describe how to design a test case. Unfortunately, despite the industrial needs, research on test techniques is seldom performed in large complex systems.

    The main purpose of this licentiate thesis is to create an environment and framework where it is possible to evaluate test techniques. Our overall goal is to investigate suitable test techniques for different levels (e.g. component, integration and system level) and to provide guidelines to industry on what is effective, efficient and applicable to test, based on knowledge of the failure-fault distribution in a particular domain. In this thesis, our research is described through four papers that start from a broad overview of typical industrial systems and arrive at a specific focus on how to set up a controlled experiment in an industrial environment. Our initial paper described the status of testing in industry, and aided in identifying specific issues as well as underlining the need for further research. We then made experiments with component test improvements, by simple utilization of known approaches (e.g. static analysis, code reviews and statement coverage). This resulted in a substantial cost reduction and increased quality, and provided us with a better understanding of the difficulties in deploying known test techniques in reality, which are described in our second paper. This work led us to our third paper, which describes the framework and process for evaluating test techniques. The first sub-process in this framework deals with how to prepare the experiment with a known set of faults. We aimed to investigate fault classifications to get a useful set of faults of different types to inject. In addition, we investigated real faults reported in an industrial system and performed controlled experiments, and the results were published in our fourth paper.

    The main contributions of this licentiate thesis are the valuable insights into the evaluation of test techniques, specifically the problems of creating a useful experiment in an industrial setting, in addition to the survey of the state of practice of software testing in industry. We want to better understand what needs to be done to create efficient evaluations of test techniques, and secondly what the relation is between faults/failures and test techniques. Though our experiments have not yet been able to create 'the ultimate' classification for such an aim, the results indicate the appropriateness of this approach. With these valuable insights, we believe that we will be able to direct our future research to make better evaluations that have a larger potential to generalize and scale.

  • 87.
    Eldh, Sigrid
    et al.
    Mälardalen University, Department of Computer Science and Electronics.
    Hansson, Hans
    Mälardalen University, Department of Computer Science and Electronics.
    Punnekkat, Sasikumar
    Mälardalen University, Department of Computer Science and Electronics.
    Pettersson, Anders
    Mälardalen University, Department of Computer Science and Electronics.
    Sundmark, Daniel
    Mälardalen University, Department of Computer Science and Electronics.
    Framework for Comparing Efficiency, Effectiveness and Applicability of Software Testing Techniques (2006). In: Proceedings - Testing: Academic and Industrial Conference - Practice and Research Techniques, TAIC PART 2006, 2006, p. 159-170, article id 1691683. Conference paper (Refereed)
    Abstract [en]

    Software testing is expensive for the industry, and always constrained by time and effort. Although there is a multitude of test techniques, there are currently no scientifically based guidelines for the selection of appropriate techniques of different domains and contexts. For large complex systems, some techniques are more efficient in finding failures than others and some are easier to apply than others are. From an industrial perspective, it is important to find the most effective and efficient test design technique that is possible to automate and apply. In this paper, we propose an experimental framework for comparison of test techniques with respect to efficiency, effectiveness and applicability. We also plan to evaluate ease of automation, which has not been addressed by previous studies. We highlight some of the problems of evaluating or comparing test techniques in an objective manner. We describe our planned process for this multi-phase experimental study. This includes presentation of some of the important measurements to be collected with the dual goals of analyzing the properties of the test technique, as well as validating our experimental framework.
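
    The paper defines its own measurements; as a hedged illustration of what such comparisons typically operationalize (not necessarily the authors' exact definitions):

        \text{effectiveness}(T) = \frac{|\text{faults found by } T|}{|\text{faults known to be present}|},
        \qquad
        \text{efficiency}(T) = \frac{|\text{faults found by } T|}{\text{effort spent applying } T}

    with effort measured, for example, in person-hours or number of test cases, and applicability judged against how readily the technique can be applied and automated in the target context.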

  • 88.
    Eldh, Sigrid
    et al.
    Mälardalen University, Department of Computer Science and Electronics.
    Punnekkat, Sasikumar
    Hansson, Hans
    Experiments with Component Test to Improve Software Quality. Manuscript (preprint) (Other academic)
  • 89.
    Eldh, Sigrid
    et al.
    Mälardalen University, Department of Computer Science and Electronics.
    Punnekkat, Sasikumar
    Mälardalen University, Department of Computer Science and Electronics.
    Hansson, Hans
    Mälardalen University, Department of Computer Science and Electronics.
    Experiments with Component Tests to Improve Software Quality (2007). Conference paper (Refereed)
    Abstract [en]

    In commercial systems, time-to-market pressure often results in shortcuts in the design phase, where component test is most vulnerable. It is hard for the individual developers to define how much testing is cost-effective, and hard to judge when testing is enough. Verification activities constitute a major part of the product cost. Failures unearthed during later phases of product development escalate the cost substantially. Reducing cost in later stages of testing by reducing failures is important not only for Ericsson, but for any software producer. At Ericsson, we created a scheme, Software Quality Rank (SQR). SQR is a way to improve the quality of components. SQR consists of five steps, where the first is where the actual "ranking" of components takes place. Then a selection of components is targeted for improvement in five levels. Most components are targeted for rank 3, which is the cost-efficient quality level. Rank 5 is the target for safety-critical code. The goal of SQR was to provide developers with a tool that prioritizes what to do before delivery to the next system test phase. SQR defines a stepwise plan, which describes how much and what to test on the component level for each rank. It gives the process for how to prioritize components; re-introduces reviews; requires usage of static analysis tools; and defines what coverage is to be achieved. The scheme has been used with great success at different design organizations within and outside Ericsson, and we believe it supports industry in defining cost-efficient component test in a time-to-market situation.

  • 90.
    Eldh, Sigrid
    et al.
    Mälardalen University, Department of Computer Science and Electronics.
    Punnekkat, Sasikumar
    Mälardalen University, Department of Computer Science and Electronics.
    Hansson, Hans
    Mälardalen University, Department of Computer Science and Electronics.
    Jönsson, Peter
    Combitech, Ericsson AB.
    Component Testing is Not Enough - A Study of Software Faults in Telecom Middleware (2007). In: Lecture Notes in Computer Science, vol. 4581, Springer, 2007, p. 74-89. Chapter in book (Refereed)
    Abstract [en]

    The interrelationship between software faults and failures is quite intricate and obtaining a meaningful characterization of it would definitely help the testing community in deciding on efficient and effective test strategies. Towards this objective, we have investigated and classified failures observed in a large complex telecommunication industry middleware system during 2003-2006. In this paper, we describe the process used in our study for tracking faults from failures along with the details of failure data. We present the distribution and frequency of the failures along with some interesting findings unravelled while analyzing the origins of these failures. Firstly, though "simple" faults happen, together they account for less than 10%. The majority of faults come from either missing code or path, or superfluous code, which are all faults that manifest themselves for the first time at integration/system level; not at component level. These faults are more frequent in the early versions of the software, and could very well be attributed to the difficulties in comprehending and specifying the context (and adjacent code) and its dependencies well enough, in a large complex system with time to market pressures. This exposes the limitations of component testing in such complex systems and underlines the need for allocating more resources for higher level integration and system testing.

  • 91.
    Eldh, Sigrid
    et al.
    Mälardalen University, Department of Computer Science and Electronics.
    Punnekkat, Sasikumar
    Mälardalen University, Department of Computer Science and Electronics.
    Hansson, Hans
    Mälardalen University, Department of Computer Science and Electronics.
    Jönsson, Peter
    Mälardalen University, Department of Computer Science and Electronics.
    Component Testing is not Enough - A Study of Software Faults in Telecom Middleware (2007). In: Lecture Notes in Computer Science, vol. 4581, 2007, p. 74-89. Conference paper (Refereed)
    Abstract [en]

    The interrelationship between software faults and failures is quite intricate and obtaining a meaningful characterization of it would definitely help the testing community in deciding on efficient and effective test strategies. Towards this objective, we have investigated and classified failures observed in a large complex telecommunication industry middleware system during 2003-2006. In this paper, we describe the process used in our study for tracking faults from failures along with the details of failure data. We present the distribution and frequency of the failures along with some interesting findings unravelled while analyzing the origins of these failures. Firstly, though "simple" faults happen, together they account for less than 10%. The majority of faults come from either missing code or path, or superfluous code, which are all faults that manifest themselves for the first time at integration/system level; not at component level. These faults are more frequent in the early versions of the software, and could very well be attributed to the difficulties in comprehending and specifying the context (and adjacent code) and its dependencies well enough, in a large complex system with time to market pressures. This exposes the limitations of component testing in such complex systems and underlines the need for allocating more resources for higher level integration and system testing.

  • 92.
    Enblom, Leif
    Mälardalen University, Department of Computer Science and Electronics.
    Utilizing concurrency to gain performance in an industrial automation system (2003). Licentiate thesis, comprehensive summary (Other scientific)
  • 93.
    Enblom, Leif
    Mälardalen University, Department of Computer Science and Electronics.
    Utilizing Concurrency to Gain Performance in an Industrial Automation System (2003). Licentiate thesis, monograph (Other scientific)
    Abstract [en]

    This work presents and discusses the results from a study focused on achieving more performance for an industrial real-time control system. The real-time control system is used to protect electrical power stations from being destroyed by strokes of lightning. Sensors in the system continuously collect information on currents and voltages from the electrical power station which the control system protects. The sensors deliver the collected data to a computer system that bases its decisions on the arriving data. When a dangerous situation is detected, circuit breakers decouple the hazardous power line.

    Today, the computer system is based on a single-processor architecture. The problem is that this architecture does not provide enough performance to support demanding system configurations such as more advanced application algorithms and an increased amount of data collected from the sensors. In order to obtain correct, timely execution of the protection applications, designers may need to optimize application code aggressively. Unwanted simplifications of algorithms or low sampling frequencies of sensor data may be the result.

    The motivation of this work is to study how the real-time control system is affected by being adapted to a multiprocessor or distributed architecture in order to increase the available computing resources. The objective is to improve the performance of system components in general and application components in particular. By identifying components in the existing control system that exhibit a large amount of concurrency and a relatively small amount of data exchange, the study found a performance-improving solution. The I/O system that is responsible for collecting sensor data and the application functionality both exhibit a large amount of mutual concurrency and may therefore scale on a system with multiple processors. In experimental configurations, the I/O system components and an application model were arranged to execute in parallel on two processors. This approach exploits the concurrency available at the interface between the I/O system and application components. Results from measurements show that processing resources (up to 66% when compared with a single-processor system configuration) can be freed for application components by utilizing this concurrency in a two-processor configuration. The advantage gained is an increase in flexibility for application designers to select a multiprocessor system configuration for demanding applications.

    While parallel architectures are used in some industrial systems, not much has been written about the possibilities and threats when legacy systems are adapted to such architectures. By describing a model of an industrial real-time control system and extending that model with a mechanism that enables multiprocessor execution, we contribute to the understanding of both the functional composition and the performance issues concerning parallel execution in such industrial systems.

  • 94.
    Ericsson, AnneMarie
    et al.
    University of Skövde, Sweden.
    Pettersson, Paul
    Mälardalen University, Department of Computer Science and Electronics.
    Berndtsson, Mikael
    University of Skövde, Sweden.
    Seiriö, Marco
    RuleCore, Sweden.
    Seamless Formal Verification of Complex Event Processing Applications. 2007. In: ACM International Conference Proceeding Series, Volume 233, 2007, p. 50-61. Conference paper (Refereed)
    Abstract [en]

    Although formal methods have proven successful in previous projects, they are still not used to their full potential for enhancing software quality in industry. We argue that seamless support for formal verification in a high-level specification tool makes a formal approach to increasing software quality more attractive.

    Commercial Complex Event Processing (CEP) engines often support modelling, debugging and testing of CEP applications. However, the possibility of applying formal analysis is typically not considered.

    We argue that formal verification of a CEP system can be performed without expertise in formal methods. In this paper, we present a prototype tool, REX, which supports specifying both CEP systems and correctness properties of the same application in a high-level graphical language. The specified CEP applications, together with the high-level properties, are seamlessly transformed into a timed automata representation for automatic verification in the model checker UPPAAL.
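    As a rough illustration of the kind of transformation described above, the sketch below encodes a simple timed rule, "raise an alarm if event B follows event A within five time units", as a small timed automaton with one clock. Both the rule form and the automaton encoding are invented for illustration; they are not REX's specification language or UPPAAL's input format.

    ```python
    from dataclasses import dataclass, field

    # Hypothetical encoding of one timed event rule as a timed automaton.
    @dataclass
    class TimedAutomaton:
        clocks: list
        locations: list
        initial: str
        edges: list = field(default_factory=list)  # (source, guard, event, clock resets, target)

    def rule_to_automaton(first: str, second: str, within: int) -> TimedAutomaton:
        """Encode '`second` after `first` within `within` time units'."""
        ta = TimedAutomaton(clocks=["x"],
                            locations=["Idle", "Armed", "Alarm"],
                            initial="Idle")
        ta.edges.append(("Idle", "true", first, ["x"], "Armed"))           # on A: reset clock x
        ta.edges.append(("Armed", f"x <= {within}", second, [], "Alarm"))  # B arrived in time
        ta.edges.append(("Armed", f"x > {within}", "timeout", [], "Idle")) # deadline passed
        return ta

    print(rule_to_automaton("A", "B", 5))
    ```

    A reachability property such as "location Alarm is reachable" could then be checked automatically against the generated automaton, which is the kind of verification the tool delegates to the model checker.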

  • 95.
    Ermedahl, Andreas
    Mälardalen University, Department of Computer Science and Electronics.
    A Modular Tool Architecture for Worst-Case Execution Time Analysis. 2003. Doctoral thesis, monograph (Other scientific)
    Abstract [en]

    Estimations of the Worst-Case Execution Time (WCET) are required in providing guarantees for timing of programs used in computer controlled products and other real-time computer systems. To derive program WCET estimates, both the properties of the software and the hardware must be considered. The traditional method to obtain WCET estimates is to test the system and measure the execution time. This is labour-intensive and error-prone work, which unfortunately cannot guarantee that the worst case is actually found. Static WCET analyses, on the other hand, are capable of generating safe WCET estimates without actually running the program. Such analyses use models of program flow and hardware timing to generate WCET estimates. This thesis includes several contributions to the state-of-the-art in static WCET analysis: (1) A tool architecture for static WCET analysis, which divides the WCET analysis into several steps, each with well-defined interfaces. This allows independent replacement of the modules implementing the different steps, which makes it easy to customize a WCET tool for particular target hardware and analysis needs. (2) A representation for the possible executions of a program. Compared to previous approaches, our representation extends the type of program flow information possible to express and handle in WCET analysis. (3) A calculation method which explicitly extracts a longest program execution path. The method is more efficient than previously presented path-based methods, with a computational complexity close to linear in the size of the program. (4) A calculation method using integer linear programming or constraint programming techniques for calculating the WCET estimate. The method extends the power of such calculation methods to handle new types of flow and timing information. (5) A calculation method that first uses flow information to divide the program into smaller parts, then calculates individual WCET estimates for these parts, and finally combines these into an overall program WCET. This novel approach avoids potential complexity problems, while still providing high precision WCET estimates. We have additionally implemented a prototype WCET analysis tool based on the proposed architecture. This tool is used for extensive evaluation of the precision and performance of our proposed methods. The results indicate that it is possible to perform WCET analysis in a modular fashion, and that this analysis produces high quality WCET estimates.
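    To give a flavor of contribution (3), the sketch below computes a longest execution path through a small, loop-free control-flow graph by memoized search, visiting each block once. The graph and the per-block cycle counts are invented for illustration; the thesis's method additionally handles loops and far richer flow information.

    ```python
    # Path-based WCET sketch: each basic block has an assumed WCET in cycles,
    # and edges give the possible successors (a made-up if-then-else CFG).
    block_wcet = {"entry": 5, "cond": 3, "then": 20, "else": 8, "exit": 2}
    succs = {"entry": ["cond"], "cond": ["then", "else"],
             "then": ["exit"], "else": ["exit"], "exit": []}

    def longest_path(start: str) -> tuple:
        """Return (WCET estimate, worst-case path) from `start` in an acyclic CFG."""
        memo = {}
        def visit(b):
            if b in memo:
                return memo[b]
            best, path = 0, []
            for s in succs[b]:
                c, p = visit(s)
                if c > best:
                    best, path = c, p
            memo[b] = (block_wcet[b] + best, [b] + path)
            return memo[b]
        return visit(start)

    wcet, path = longest_path("entry")
    print(wcet, path)   # -> 30 ['entry', 'cond', 'then', 'exit']
    ```

    Because every block is visited once and memoized, the computation stays close to linear in the size of the graph, which is the efficiency property the thesis emphasizes for its path-based method.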

  • 96.
    Ermedahl, Andreas
    et al.
    Mälardalen University, Department of Computer Science and Electronics.
    Engblom, Jakob
    Mälardalen University, Department of Computer Science and Electronics.
    Execution Time Analysis for Embedded Real-Time Systems. 2007. In: Handbook of Real-Time Embedded Systems, CRC Press, 2007, p. 35.1-. Chapter in book (Other academic)
    Abstract [en]

    Knowing the execution-time characteristics of a program is fundamental to the successful design, validation and deployment of real-time systems. This chapter deals with the problem of how to estimate, measure and analyze the execution time of embedded real-time programs, with particular attention to the worst-case execution time (WCET). The chapter covers the reasons for execution time variation, including both the software and hardware complexity inherent in today's embedded systems, and gives an overview of the various techniques used to derive execution time estimates. Finally, we summarize a number of industrial case studies of timing analysis to show how timing analysis works in practice.
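    One point this topic turns on, that end-to-end measurement cannot guarantee the worst case has been observed, is easy to demonstrate. The toy program below is entirely hypothetical: it has a rare input that triggers a slow path, and a measurement campaign that never happens to exercise that input reports a high-water mark well below the true WCET.

    ```python
    import time
    import random

    def task(x: int) -> int:
        # Execution time depends on the input: one rare input triggers a slow path.
        total = 0
        iterations = 10_000 if x == 999 else 100   # worst case only when x == 999
        for _ in range(iterations):
            total += x
        return total

    high_water = 0.0
    for _ in range(1_000):
        x = random.randrange(998)                  # test inputs never hit 999
        t0 = time.perf_counter()
        task(x)
        high_water = max(high_water, time.perf_counter() - t0)

    print(f"observed maximum: {high_water * 1e6:.1f} us "
          "(true worst case never exercised)")
    ```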

  • 97.
    Ermedahl, Andreas
    et al.
    Mälardalen University, Department of Computer Science and Electronics.
    Sandberg, Christer
    Mälardalen University, Department of Computer Science and Electronics.
    Gustafsson, Jan
    Mälardalen University, Department of Computer Science and Electronics.
    Bygde, Stefan
    Mälardalen University, Department of Computer Science and Electronics.
    Lisper, Björn
    Mälardalen University, Department of Computer Science and Electronics.
    Loop Bound Analysis based on a Combination of Program Slicing, Abstract Interpretation, and Invariant Analysis. 2007. In: OpenAccess Series in Informatics, Volume 6, 2007. Conference paper (Refereed)
    Abstract [en]

    Static Worst-Case Execution Time (WCET) analysis is a technique to derive upper bounds for the execution times of programs. Such bounds are crucial when designing and verifying real-time systems. A key component for static derivation of precise WCET estimates is upper bounds on the number of times different loops can be iterated. In this paper we present an approach for deriving upper loop bounds based on a combination of standard program analysis techniques. The idea is to bound the number of different states in the loop that can influence the exit conditions. Given that the loop terminates, this number provides an upper loop bound. An algorithm based on the approach has been implemented in our WCET analysis tool SWEET. We evaluate the algorithm on a number of standard WCET benchmarks, giving evidence that it is capable of deriving valid bounds for many types of loops.
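    A toy version of the counting idea follows. The loop and its bounds are invented, and SWEET derives such counts abstractly, via slicing to find the variables that influence the exit condition and abstract interpretation over their possible values, rather than by running the loop as this sketch does.

    ```python
    import math

    def loop_bound_by_states(init: int, limit: int, step: int) -> int:
        """Bound the iterations of `for (i = init; i < limit; i += step)` by
        counting the distinct values the exit-controlling variable i can take:
        each iteration must visit a fresh state, so (given that the loop
        terminates) the state count bounds the iteration count."""
        assert step > 0, "termination assumed"
        states = set()
        i = init
        while i < limit:
            states.add(i)
            i += step
        return len(states)

    # Invariant analysis gives the same bound in closed form: ceil((limit - init) / step).
    print(loop_bound_by_states(0, 10, 3))   # 4  (i in {0, 3, 6, 9})
    print(math.ceil((10 - 0) / 3))          # 4
    ```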

  • 98.
    Fard, Ali
    Mälardalen University, Department of Computer Science and Electronics.
    Analysis and Design of Low-Phase-Noise Integrated Voltage-Controlled Oscillators for Wide-Band RF Front-Ends. 2006. Doctoral thesis, comprehensive summary (Other scientific)
    Abstract [en]

    The explosive development of wireless communication services creates a demand for more flexible and cost-effective communication systems that offer higher data rates. The obvious trend towards small-size and ultra-low-power systems, in combination with the ever increasing number of applications integrated in a single portable device, tightens the design constraints at both the hardware and software levels. The integration of current mobile systems with third-generation systems exemplifies and emphasizes the need for monolithic multi-band transceivers. A long-term goal is a software-defined radio, where several communication standards and applications are embedded and reconfigured by software. This motivates the need for highly flexible and reconfigurable analog radio frequency (RF) circuits that can be fully integrated in standard low-cost complementary metal-oxide-semiconductor (CMOS) technologies.

    In this thesis, the Voltage-Controlled Oscillator (VCO), one of the most challenging RF circuits within a transceiver, is investigated for today's and future communication systems. The contributions of this work may be divided into two parts. The first part explores the feasibility and design-related issues of wide-band reconfigurable integrated VCOs in CMOS technologies. Aspects such as frequency tuning, power dissipation and phase noise performance are studied, and design-oriented techniques for wide-band circuit solutions are proposed. To demonstrate these investigations, several fully functional wide-band multi-GHz VCOs are implemented and characterized in a 0.18µm CMOS technology.

    The second part of the thesis concerns theoretical analysis of phase noise in VCOs. Due to the complex process of conversion from component noise to phase noise, computer-aided methods or advanced circuit simulators are usually used for evaluation and prediction of phase noise. As a consequence, the fundamental properties of different noise sources and their impact on phase noise in commonly adopted VCO topologies have so far not been completely described. This in turn makes the optimization of integrated VCOs a very complex task. To aid the design and to provide a deeper understanding of the phase noise mechanism, a new approach based on a linear time-variant model is proposed in this work. The theory allows for derivation of analytic expressions for phase noise, thereby providing excellent insight into how to minimize and optimize phase noise in oscillators as a function of circuit-related parameters. Moreover, it enables a fair performance comparison of different oscillator topologies in order to ascertain which structure is most suitable for the application of interest. The proposed method is verified, with very good agreement, against both advanced circuit simulations and measurements in CMOS and bipolar technologies. As a final contribution, using the knowledge gained from the theoretical analysis, a fully integrated 0.35µm CMOS VCO with superior phase noise performance and power dissipation is demonstrated.
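    For orientation, the classic linear time-invariant reference point that time-variant analyses such as the one proposed here refine is Leeson's model. This is the textbook formula, not the thesis's result; here F is the effective noise factor, k Boltzmann's constant, T the temperature, P_s the signal power, Q_L the loaded tank Q, f_0 the carrier and Δf the offset frequency:

    ```latex
    % Leeson's classic LTI phase-noise model (textbook form, for reference):
    \mathcal{L}(\Delta f) = 10\log_{10}\!\left[
      \frac{2FkT}{P_s}
      \left(1 + \Bigl(\frac{f_0}{2Q_L\,\Delta f}\Bigr)^{2}\right)
      \left(1 + \frac{\Delta f_{1/f^{3}}}{\Delta f}\right)
    \right]
    ```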

  • 99.
    Fard, Ali
    Mälardalen University, Department of Computer Science and Electronics.
    Phase Noise and Amplitude Issues of a Wide Band VCO Utilizing a Switched Tuning Resonator. 2005. In: Proceedings - IEEE International Symposium on Circuits and Systems, 2005, p. 2691-2694. Conference paper (Other academic)
    Abstract [en]

    A 3.5-5.3 GHz low phase noise CMOS VCO with switched tuning for multi-standard radios is presented in this paper. Achieving low phase noise and small amplitude variation across the operating frequency range is shown to be an important aspect of wide-band VCO design. An analytic expression for the output amplitude of the VCO is derived as a function of the switched-capacitor resonator Q. The linear time-variant model is used to predict the phase noise and to select a proper tank current that achieves minimum phase noise and amplitude variation across the frequency range. The results are verified in a fully integrated 0.18μm VCO with measured phase noise of less than -115 dBc/Hz at 1 MHz offset from the carrier while dissipating 6 mW of power.
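    The link between amplitude and resonator Q rests on a standard first-order relation for LC oscillators in the current-limited regime. This is the generic relation, not the paper's exact derivation; the proportionality constant depends on the topology (e.g. 2/π for a differential pair commutating its bias current):

    ```latex
    % Current-limited amplitude of an LC oscillator: the amplitude tracks the
    % equivalent parallel tank resistance, which in turn tracks the tank Q.
    V_{\mathrm{amp}} \propto I_{\mathrm{bias}} \cdot R_p,
    \qquad R_p = Q\,\omega_0 L
    ```

    Since switching capacitors in and out of the resonator changes Q, the parallel resistance and hence the amplitude vary between sub-bands, which is why the choice of tank current matters across the whole tuning range.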

  • 100.
    Fard, Ali
    et al.
    Mälardalen University, Department of Computer Science and Electronics.
    Andreani, Pietro
    Technical University of Denmark, Denmark.
    A Low-Phase-Noise Wide-Band CMOS Quadrature VCO for Multi-Standard RF Front-Ends. 2005. In: Digest of Papers - IEEE Radio Frequency Integrated Circuits Symposium, 2005, p. 539-542. Conference paper (Other academic)
    Abstract [en]

    A low phase noise CMOS LC quadrature VCO (QVCO) with a wide frequency range of 3.6-5.6 GHz, designed in a standard 0.18 μm process for multi-standard front-ends, is presented. A significant advantage of the topology is its larger oscillation amplitude compared to other conventional QVCO structures. The QVCO is compared to a double cross-coupled LC-tank differential oscillator, both in theory and in experiments, to evaluate its phase noise and provide good insight into its performance. The measured data show up to 2 dBc/Hz lower phase noise in the 1/f² region for the QVCO when consuming twice the current of the differential VCO, based on an identical LC tank. Experimental results on the QVCO show a phase noise level of -127.5 dBc/Hz at 3 MHz offset from a 5.6 GHz carrier while dissipating 8 mA of current, resulting in a figure of merit of 181.3 dBc/Hz.
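    The figure of merit quoted above is, in all likelihood, the standard oscillator FoM, which normalizes phase noise for carrier frequency, offset and power dissipation (P_DC in mW). Assuming a 1.8 V supply for the 0.18 μm process, the paper's numbers check out: -127.5 - 20·log10(5600/3) + 10·log10(8 mA × 1.8 V / 1 mW) ≈ -181.3 dBc/Hz, matching the reported magnitude of 181.3 dBc/Hz.

    ```latex
    % Standard oscillator figure of merit (the 1.8 V supply used in the
    % numerical check above is an assumption, not stated in the abstract):
    \mathrm{FoM} = \mathcal{L}(\Delta f)
      - 20\log_{10}\!\left(\frac{f_0}{\Delta f}\right)
      + 10\log_{10}\!\left(\frac{P_{\mathrm{DC}}}{1\,\mathrm{mW}}\right)
    ```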
