Underwater imaging has become an active research area in recent years as a result of increased interest in underwater environments, and it has a potential impact on the world economy, in what is called blue growth. Since sound propagates over larger distances than electromagnetic waves underwater, sonar is typically used for underwater imaging. One interesting sonar configuration comprises two parts (left and right) and is usually referred to as sidescan sonar. The image produced by sidescan sonars, called the waterfall image, usually has two distinctive parts: the water column and the imaged seabed. The edge separating these two parts, called the first bottom return, corresponds to the real distance between the sonar and the seabed bottom (which is equivalent to the sensor primary altitude). The sensor primary altitude can be measured if the imaging sonar is complemented by an interferometric sonar; however, simple sonar systems have no way to measure the first bottom return other than through signal processing techniques. In this work, we propose two methods to detect the first bottom return: the first is based on smoothing cubic spline regression and the second on a moving average filter that detects signal variations. The results of both methods are compared to the sensor primary altitude and have been successful in 22 images out of 25.
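The moving-average idea can be sketched as follows. This is a minimal illustration under assumed parameters, not the authors' implementation: the function name, the threshold factor and the synthetic ping are all hypothetical. The idea is to flag the first range sample whose intensity rises sharply above the moving average of the preceding water-column samples.

```python
# Hypothetical sketch of first-bottom-return detection in one sidescan ping.
# The water column returns little energy, so the first sample whose intensity
# clearly exceeds the moving average of the preceding samples marks the seabed.

def first_bottom_return(ping, window=5, factor=2.0):
    """Index of the first sample exceeding `factor` times the moving
    average of the previous `window` samples, or None if no edge is found."""
    for i in range(window, len(ping)):
        avg = sum(ping[i - window:i]) / window
        if avg > 0 and ping[i] > factor * avg:
            return i
    return None

# Synthetic ping: quiet water column, then a strong seabed echo.
ping = [1.0] * 30 + [8.0, 9.0, 7.5] + [5.0] * 20
print(first_bottom_return(ping))  # -> 30, where the seabed echo begins
```

In a real waterfall image this would run per ping (per image row), with the per-row indices smoothed across rows; the threshold factor and window size would need tuning to the sonar's noise level.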
The Internet of Things (IoT) is growing at a fast pace, with new devices getting connected all the time. An emerging group of these devices is wearable devices, and wireless sensor networks (WSNs) are a good way to integrate them into the IoT concept and bring new experiences to daily life activities. In this paper, we present an everyday-life application involving a WSN as the base of a novel context-aware sports scenario, where physiological parameters are measured by wearable devices and sent to the WSN. Applications with several hardware components introduce the problem of heterogeneity in the network. In order to integrate different hardware platforms and a service-oriented semantic middleware solution into a single application, we propose the use of an enterprise service bus (ESB) as a bridge for guaranteeing interoperability and integration of the different environments, thus introducing the semantic added value needed in the world of IoT-based systems. This approach places all the acquired data (e.g., via Internet data access) at application developers' disposal, opening the system to new user applications. The user can then access the data through a wide variety of devices (smartphones, tablets, and computers) and operating systems (Android, iOS, Windows, Linux, etc.).
Applications based on Wireless Sensor Networks (WSNs) for Internet of Things scenarios are on the rise. The multiple possibilities they offer have spread towards previously hard-to-imagine fields, like e-health or human physiological monitoring. An application has been developed for scenarios where data collection is applied to smart spaces, aiming at its usage in firefighting and sports. This application has been tested in a gymnasium with real, non-simulated nodes and devices. A Graphical User Interface has been implemented to suggest a series of exercises to improve a sportsman's or sportswoman's condition, depending on the context and their profile. This system can be adapted to a wide variety of e-health applications with minimal changes, and the user can interact with it using different devices, like smartphones, smart watches and/or tablets.
The emergence of novel pervasive networks consisting of tiny embedded nodes has reduced the gap between the real and virtual worlds. This paradigm has opened the Service Cloud to a variety of wireless devices, especially those with sensing and actuating capabilities. These pervasive networks contribute to building new context-aware applications that interpret the state of the physical world in real time. However, traditional Service-Oriented Architectures (SOA), which are widely used in the current Internet, are unsuitable for such resource-constrained devices since they are too heavy. In this research paper, an internetworking approach is proposed in order to address this important issue. The main part of our proposal is the Knowledge-Aware and Service-Oriented (KASO) Middleware, which has been designed for pervasive embedded networks. KASO Middleware implements a diversity of mechanisms, services and protocols which enable developers and business process designers to deploy, expose, discover, compose, and orchestrate real-world services (i.e., services running on sensor/actuator devices). Moreover, KASO Middleware implements endpoints to offer those services to the Cloud in a RESTful manner. Our internetworking approach has been validated through a real healthcare telemonitoring system deployed in a sanatorium. The validation tests show that KASO Middleware successfully brings pervasive embedded networks to the Service Cloud.
Many applications and services have emerged within the frame of the new Internet of Things paradigm. This novel view has opened Web services to a variety of devices, especially tiny and resource-constrained ones. Wireless Sensor and Actuator Networks are built from such devices, and they have become one of the most promising technologies to take part in the Future Internet. However, the integration of Sensor and Actuator Networks into the Service Cloud is a hard challenge requiring specific new architectures and protocols. This paper presents a middleware approach addressing this important issue: a Knowledge-Aware and Service-Oriented Middleware (KASOM) for pervasive embedded networks. The major aim of KASOM is to offer advanced and enriched pervasive services to everyone connected to the Internet. In this sense, KASOM implements mechanisms and protocols which allow managing the knowledge generated in pervasive embedded networks in order to expose it to Internet users in a readable way. General functional requirements of embedded sensor and actuator platforms have been taken into account when designing KASOM, with special attention to energy consumption, memory and bandwidth. The evaluation and validation of KASOM are demonstrated through a real Wireless Sensor and Actuator Network deployment based on integral healthcare services in a sanatorium.
Java is a successful programming environment and its use has grown from small embedded applications to enterprise network servers based on J2EE. This intensive use of Java demands the validation of its fault tolerance mechanisms to avoid unexpected behavior of applications at runtime. This paper describes the design and implementation of a fault injector for the "Exhaustif®" SWIFI tool. A specific fault model for Java applications is proposed, including class corruption/substitution at loading time, method call interception and unexpected exception throwing. The injector uses the JVMTI (Java Virtual Machine Tool Interface) to perform bytecode instrumentation at runtime in order to carry out the fault model previously defined. Finally, an XML formalization of the specific Java fault model is proposed. This approach, JVMTI + XML fault model description, provides complete independence between the system under test and the fault injection tool, as well as interoperability with other SWIFI tools.
Software implemented fault injection (SWIFI) tools use fault injectors to carry out the fault injection campaign defined in a GUI-based application. However, the communication between the fault injector and the application is usually defined in an ad-hoc manner. This paper describes an XML schema formalisation approach for the definition of fault sets, which specify low-level memory and/or register value corruptions in embedded microprocessor-based systems and resource usage faults in host-based systems. Through this proposed XML schema definition, different injectors can be used to carry out the same fault set injection. To validate this approach, an experimental tool called Exhaustif®, consisting of a GUI Java application for defining the fault sets and injection policies, one injector for Windows host systems and two injectors for SPARC and i386 architectures under RTEMS, has been developed.
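To illustrate the kind of fault set such a schema formalisation could describe, the following is a purely hypothetical instance: the element and attribute names below are assumptions for illustration only, not the actual Exhaustif® schema.

```xml
<!-- Hypothetical fault-set instance; names are illustrative,
     not the actual Exhaustif schema. -->
<faultSet name="memory-and-register-campaign">
  <fault id="f1" type="memoryCorruption">
    <address>0x40001000</address>
    <operation>bitflip</operation>
    <mask>0x00000001</mask>
    <trigger kind="temporal" delayMs="500"/>
  </fault>
  <fault id="f2" type="registerCorruption">
    <register>g1</register>
    <operation>stuckAt</operation>
    <value>0x0</value>
    <trigger kind="temporal" delayMs="1200"/>
  </fault>
</faultSet>
```

The point of such a schema is that any injector able to parse it (a Windows host injector, or an RTEMS injector on SPARC or i386) can execute the same campaign, decoupling the GUI that defines fault sets from the injectors that apply them.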
This paper presents Exhaustif®, a SWIFI fault injection tool for fault tolerance verification and validation of embedded software in distributed heterogeneous systems. Exhaustif® mainly consists of two parts: the EEM and the FIK. The Exhaustif® Executive Manager (EEM) is a GUI Java application to define the fault injection campaign; it uses a SQL database to save the test results obtained from the System under Test (SUT) in order to carry out a post-injection data analysis. The FIK works under the command of the EEM to carry out fault injections in applications running under diverse operating systems using pure SWIFI techniques. Exhaustif® carries out floating point register and memory corruptions using temporal triggers and uses an optimized routine interception mechanism to carry out argument and return value corruption with minimal time overhead. Two experimental Fault Injector Kernels (FIK) under the RTEMS operating system have been developed, for an EADS-Astrium SPARC ERC32-based MCM processor board and for an i386 standard PC mainboard.
The increase in physical resources of next-generation embedded computing devices, as well as in the efforts carried out by the scientific and research communities, is paving the way for Smart Infrastructures based on Wireless Sensor Networks. In this manner, not only open and formally defined Service-Oriented Frameworks are required, but also efficient middleware technologies that ease the development of new sensor-based services. Starting from an analysis of pervasive computing principles applied to the creation of Smart Spaces, this article presents the μSMS (micro Subscription Management System) middleware. This approach specifies and develops the notion of virtual sensor services for Smart Environments over sensor networks, built from tiny in-network services based on agent technology. In the framework of the μSWN European Research Project, this architecture has been validated in a real-world Smart Hospital scenario using a healthcare virtual sensor service. To this end, medical status monitoring, location tracking and perimeter surveillance agent-based services have been developed. The study concludes with a comparative analysis of the system considering memory overhead, packet delivery ratio, average end-to-end delay and battery lifetime as evaluation metrics. The results show a lightweight middleware implementation with good overall system and network performance.
In the twenty-first century, the impact of wireless and ubiquitous technologies is changing the way people perceive and interact with the physical world. These communication paradigms promise to change and redefine, in a reasonably short period of time, the most common aspects of our everyday living. The continuous advances in the field of Wireless Sensor Networks and their direct application in Smart Spaces are clear examples of this. However, in order for this kind of new-generation infrastructure to achieve large-scale dissemination, there are still some open issues to tackle. To this end, this paper presents nSOM, a service-oriented framework for sensor network design that provides internetworking services with the Internet cloud. This lightweight middleware architecture implements an agent-based virtual sensor service approach with a compact semantic knowledge management scheme based on a dynamic composition model. Copyright © 2012 Miguel S. Familiar et al.
Wireless sensor networks (WSNs) consist of thousands of nodes that need to communicate with each other. However, it is possible that some nodes are isolated from others due to limited communication range. This paper focuses on the influence of communication range on the probability that all nodes are connected, under two conditions: (1) all nodes have the same communication range, and (2) the communication range of each node is a random variable. In the former case, this work proves that, for 0 < ε < e^(-1), if the probability of the network being connected is 0.36ε, then by increasing the communication range by a constant C(ε), the probability of the network being connected is at least 1 − ε. The explicit function C(ε) is given. It turns out that, once the network is connected, this also makes the WSN resilient against node failure. In the latter case, this paper proposes modeling the network connection probability as a Cox process. The change of the network connection probability with respect to the distribution parameters, as well as the resilience performance, is presented. Finally, a method to decide the distribution parameters of the node communication range in order to satisfy a given network connection probability is developed.
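The qualitative effect behind the first result, that enlarging a common communication range drives the connection probability up, can be illustrated with a small Monte Carlo sketch. This is an illustration under assumed parameters (uniform node placement in a unit square, hypothetical range values), not the paper's analytic derivation.

```python
# Monte Carlo sketch: probability that n uniformly placed nodes in the unit
# square form a connected geometric graph when all share communication range r.
import random

def is_connected(points, r):
    """BFS over the graph linking nodes within Euclidean distance r."""
    n = len(points)
    seen = {0}
    frontier = [0]
    while frontier:
        xi, yi = points[frontier.pop()]
        for j in range(n):
            if j not in seen:
                xj, yj = points[j]
                if (xi - xj) ** 2 + (yi - yj) ** 2 <= r * r:
                    seen.add(j)
                    frontier.append(j)
    return len(seen) == n

def connectivity_probability(n, r, trials=500, seed=1):
    random.seed(seed)
    hits = 0
    for _ in range(trials):
        pts = [(random.random(), random.random()) for _ in range(n)]
        hits += is_connected(pts, r)
    return hits / trials

# With a fixed seed the node placements are identical, so a larger range
# can only add edges and hence can only raise the connection probability.
print(connectivity_probability(30, 0.15), connectivity_probability(30, 0.35))
```

The paper's contribution is the analytic version of this observation: an explicit constant C(ε) by which the range must grow to lift a weakly connected network (probability 0.36ε) to probability at least 1 − ε, without simulation.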
This paper proposes an architecture for Personal Computers (PCs) to avoid BIOS alteration and unauthorized access to resources. The proposal is based on results obtained from a study of the security mechanisms of the most popular PC platforms. The authentication controls established in the PC platform in order to grant operating system booting, as well as the BIOS code integrity mechanisms incorporated to secure it and avoid executing disallowed code, are quite easy to break. The architecture described in the present work (Advanced Secure Architecture, ASA) increases the overall information and system security, since it prevents unauthorized platform booting and provides procedures for BIOS code authentication. On the other hand, ASA overcomes the user authentication challenge in a corporate environment, and it offers a very flexible way to specify the set of corporate Personal Computers that a user is allowed to access.
Nowadays, the proliferation of embedded systems is enhancing the possibilities of gathering information by using wireless sensor networks (WSNs). Flexibility and ease of installation make these kinds of pervasive networks suitable for security and surveillance environments. Moreover, the risk to which humans are exposed when performing these functions is minimized by using these networks. In this paper, a virtual perimeter surveillance agent is presented, designed to detect any person crossing an invisible barrier around a marked perimeter and to send an alarm notification to the security staff. This agent works in a low-power-consumption state until there is a crossing of the perimeter. In our approach, the 'intelligence' of the agent has been distributed by using mobile nodes in order to discern the cause of the presence event. This feature contributes to saving both processing resources and power consumption, since only the code required to detect presence is installed. The research work described in this paper illustrates our experience in the development of a surveillance system using WSNs for a practical application, as well as its evaluation in real-world deployments. This mechanism plays an important role in providing confidence in the safety of our environment.
Research in Wireless Sensor Networks (WSNs) necessarily touches on many research topics of Computer Science, Electronic Engineering and Telecommunications, building on existing work in related fields. However, the peculiarity of the WSN field is the interplay and integration of these foundation subjects, yielding a distinct topic worthy of further study in its own right. One of the main open issues in WSN research is to abstract applications from complex low-level mechanisms, and one of the most powerful and flexible ways to achieve this is to create a middleware layer that covers all this functionality: it provides services to applications, allows intercommunication among components, adapts dynamically to different operation modes and is clearly differentiated from low-level components. Knowledge management and ontologies are also helpful when WSNs are used for monitoring and decision-making. We deploy a WSN in a testing scenario in order to control environmental parameters according to the user profile stored in the system.
Providing the necessary background for the provisioning of a new generation of enriched services over Wireless Sensor Networks is a main effort that the scientific community is currently carrying out. These services have improved a great number of aspects related to pervasive systems, such as resource saving, efficiency, reliability, scalability and low power consumption. In this paper, the μSMS middleware, which uses an event-based service model, is presented. This novel approach meets the design requirements previously mentioned by implementing a dynamic memory kernel and a variable payload multiplexing mechanism for information events in order to provide advanced services. The results obtained over real-world deployments, especially those related to the provision of e-Health services, reflect a significant improvement over other similar proposals, such as the RUNES approach: 50% lower memory overhead, 53% lower software component load time and 12% lower event propagation time.
A wireless sensor network (WSN) is a wireless network composed of spatially distributed, tiny autonomous nodes (smart dust sensors, or motes) which cooperatively monitor physical or environmental conditions. Nowadays these kinds of networks support a wide range of applications, such as target tracking, security, environmental control, habitat monitoring, source detection, source localization, vehicular and traffic monitoring, health monitoring, building and industrial monitoring, etc. Generally, these applications have strong and strict requirements for end-to-end delay and loss during data transmissions. In this paper, we propose a realistic application scenario in the WSN field in order to illustrate the selection of an appropriate approach for guaranteeing performance in a deployed WSN application. The methodology we have used includes four major phases: 1) requirements analysis of the application scenario; 2) QoS modeling in different layers of the communications protocol stack and selection of the most suitable QoS protocols and mechanisms; 3) definition of a simulation model based on the application scenario, to which we applied the protocols and mechanisms selected in phase 2; and 4) validation of the decisions by means of simulation and analysis of the results. This work has been partially financed by the "Universidad Politécnica de Madrid" and the "Comunidad de Madrid" in the framework of the project CRISAL - M0700204174.
A wireless sensor network (WSN) is a computer wireless network composed of spatially distributed, autonomous tiny nodes (smart dust sensors, or motes) which cooperatively monitor physical or environmental conditions. Nowadays these kinds of networks support a wide range of applications, such as target tracking, security, environmental control, habitat monitoring, source detection, source localization, vehicular and traffic monitoring, health monitoring, building and industrial monitoring, etc. Many of these applications have strong requirements for end-to-end delay and losses during data transmissions. In this work we have classified the main mechanisms that have been proposed to provide Quality of Service (QoS) in WSNs at the Medium Access Control (MAC) and network layers. Finally, taking into account some particularities of the studied MAC- and network-layer protocols, we have selected a real application scenario in order to show how to choose an appropriate approach for guaranteeing performance in a deployed WSN application.
Nowadays WSNs support applications such as target tracking, environmental control or vehicle traffic monitoring. Generally, these applications have strong and strict requirements for end-to-end delay and loss during data transmissions. In this paper, we propose a practical application scenario in the WSN field in order to illustrate the selection of an appropriate approach for guaranteeing performance in a deployed WSN application. The methodology we have used includes five major phases: 1) requirements analysis of the application scenario; 2) QoS modelling in different layers of the communications protocol stack and selection of the most suitable QoS protocols and mechanisms; 3) definition of a simulation model based on the application scenario, to which we applied the protocols and mechanisms selected in phase 2; 4) validation of the decisions by means of simulation; and 5) analysis of the results. This work has been partially developed in the framework of the CRISAL - M0700204174 project (partially funded by “Universidad Politécnica de Madrid” and “Comunidad de Madrid”, Spain).
The present work describes how the concepts and foundations defined for multi-agent system technology can be applied to a Wireless Sensor Network (WSN); specifically, it focuses on how the mechanisms and implementations of multi-agent system technology can facilitate the development of systems based on WSNs. In this respect, an architectural model where the above-mentioned concepts, foundations and mechanisms come together is proposed, in order to define applications and services on WSNs. Validation of the proposed architecture is carried out by means of its use in a perimeter security ("tracking") scenario. It is important to mention that partial results of this work have been developed in the project PROPSI (Perimeter Protection by means of Wireless Sensor Networks).
The development of the Knowledge and Information Society has promoted the evolution of telematic services, such as those for e-government and e-health, which aim to offer feasible solutions to social and citizens' problems that will strengthen democracy, foster citizens' equal opportunities, ease their participation in the processes of the public administration, and help the needy. Though the different telematic services are internally built from similar or even identical subservices, neither interoperability nor the sharing of components is possible among them. In order to encourage the development of the above-mentioned type of services, the present article proposes an architecture that contributes to the interoperability and sharing of subservices. This architecture derives from the study of two scenarios, one for e-health and another for e-government, with similar security needs for the authentication of users or elements and the authorization of operations.
The traditional power grid is just a one-way supplier that gets no feedback data about the energy delivered, the tariffs that could be most suitable for customers, the shifting daily electricity needs of a facility, etc. Therefore, it is only natural that efforts are being invested in improving power grid behavior and turning it into a Smart Grid. However, to this end, several components have to be either upgraded or created from scratch. Among the new components required, middleware appears as a critical one, for it will abstract away the diversity of the devices used for power transmission (smart meters, embedded systems, etc.) and will provide the application layer with a homogeneous interface involving power production and consumption management data that could not be provided before. Additionally, middleware is expected to guarantee that updates to the current metering infrastructure (changes in service or hardware availability), or any added legacy measuring appliance, will be acknowledged in any future request. Finally, semantic features are of major importance for tackling scalability and interoperability issues. A survey of the most prominent middleware architectures for Smart Grids is presented in this paper, along with an evaluation of their features and their strong points and weaknesses.