In this paper, an algorithm is developed to invisibly watermark a cover object (a color image) using a watermark object (an iconic image). The algorithm is based on the distances among the addresses of the values of the cover object; these distances are used to perform the embedding. The order in which the distances are manipulated is specified by the values of the watermark data, which are processed serially. The algorithm thus acts as its own encryption key: each watermark object has a unique pattern of distances for each possible length of distance bits, which increases the complexity of sequential embedding. The algorithm is tested using direct embedding as well as single-level and double-level Two-Dimensional Discrete Wavelet Transform (2D DWT) embedding. Two important issues are addressed. The first is achieving a high Peak Signal-to-Noise Ratio (PSNR); the ratio was found to increase with the number of distance bits. The second is that the watermarked object retains the same properties as the cover object. The algorithm withstands the most important attacks, including lossy compression, blurring, resizing and several types of noise.
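The PSNR criterion used above can be made concrete with the standard definition for 8-bit images; this is a minimal, generic sketch of the metric, not the paper's implementation:

```python
import math

def psnr(cover, watermarked, max_val=255.0):
    """Peak Signal-to-Noise Ratio between two equally sized images,
    given here as flat lists of pixel values. Higher is better;
    identical images give infinity."""
    mse = sum((c - w) ** 2 for c, w in zip(cover, watermarked)) / len(cover)
    if mse == 0:
        return math.inf
    return 10 * math.log10(max_val ** 2 / mse)

# A one-level change per pixel barely perturbs the image -> high PSNR.
cover = [10, 200, 30, 128]
marked = [11, 201, 31, 129]
print(round(psnr(cover, marked), 2))  # 48.13
```

A PSNR above roughly 40 dB is commonly taken to mean the embedding is visually imperceptible, which is the property the abstract refers to.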
Most interfaces designed to control or program industrial robots are complex and require special training for the user. This complexity, alongside the changing environments of small and medium-sized enterprises (SMEs), has led to the absence of robots from SMEs: the costs of (re)programming the robots and (re)training their users exceed the initial costs of installation. To address this shortcoming, we propose a new interface that uses augmented reality (AR) and multimodal human-robot interaction. We show that such an approach allows easier manipulation of robots in industrial environments.
In this paper, the scheduling of robot cells that produce multiple object types in low volumes is considered. The challenge is to maximize the number of objects produced in a given time window as well as to adapt the schedule to changing object types. The proposed algorithm, POPStar, is based on a partial-order planner guided by a best-first search algorithm and landmarks. The best-first search uses heuristics to help the planner create complete plans while minimizing the makespan. The algorithm takes as input landmarks extracted from the user's instructions, given in structured English. Using different topologies for the landmark graphs, we show that it is possible to create schedules for changing object types, which are processed in different stages in the robot cell. Results show that the POPStar algorithm can create and adapt schedules for robot cells with changing product types in low-volume production.
Developing easy-to-use, intuitive interfaces is crucial for introducing robotic automation to many small and medium-sized enterprises (SMEs). Due to their continuously changing product lines, reprogramming costs exceed installation costs by a large margin. In addition, traditional programming methods for industrial robots are too complex for an inexperienced robot programmer, so external assistance is often needed. In this paper a new incremental multimodal language, which uses an augmented reality (AR) environment, is presented. The proposed language architecture makes it possible to manipulate, pick or place the objects in the scene. This approach shifts the focus of industrial robot programming from a coordinate-based programming paradigm to an object-based programming scheme, making it possible for non-experts to program the robot intuitively, without rigorous training in robot programming.
Over the past few decades the use of industrial robots has increased companies' efficiency and strengthened their competitiveness in the market.
Despite this fact, robot automation investments are in many cases considered technically challenging as well as costly by small and medium-sized enterprises (SMEs). We hypothesize that in order to make industrial robots more common within the SME sector, the robots should be reprogrammable by task experts rather than robot programming experts. Within this project we propose to develop a high-level language for intelligent human-robot interaction that relies on multi-sensor inputs and provides an abstract instructional programming environment for the user. The eventual goal is to bring robot programming to a stage where it is as easy as working together with a colleague.
Typical pick-and-place and machine-tending applications often require an industrial robot to be embedded in a cell and to communicate with other devices in the cell. Programming the cell logic is a tedious job that requires expert programming knowledge, and it can take more time than programming the robot movements themselves. We propose a new system that takes as input a description of the whole manufacturing process in natural language, fills in the implicit actions, and plans the sequence of actions that accomplishes the described task in minimal makespan using a modified partial-order planning algorithm. Finally, we demonstrate that the proposed system can produce a sensible plan for the given instructions.
In this paper we present a new simplified natural language that makes use of spatial relations between the objects in the scene to navigate an industrial robot for simple pick-and-place applications. Developing easy-to-use, intuitive interfaces is crucial for introducing robotic automation to many small and medium-sized enterprises (SMEs). Due to their continuously changing product lines, reprogramming costs are far higher than installation costs. In order to hide the complexities of robot programming we propose a natural language in which the user can control and jog the robot based on reference objects in the scene. We use Gaussian kernels to represent spatial regions, such as left of or above. Finally, we present some dialogues between the user and the robot to demonstrate the usefulness of the proposed system.
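The idea of representing spatial regions with Gaussian kernels can be sketched as follows. This is an illustrative toy, not the paper's model: the function name, the unit-offset "ideal point" and the default kernel width are all assumptions made for the example.

```python
import math

def spatial_score(target, reference, direction, sigma=1.0):
    """Score in (0, 1] of how well `target` lies in `direction`
    ('left', 'right', 'above', 'below') of `reference`, using a
    Gaussian kernel centred one unit away along that axis."""
    offsets = {'left': (-1, 0), 'right': (1, 0),
               'above': (0, 1), 'below': (0, -1)}
    dx, dy = offsets[direction]
    # Ideal point: one unit from the reference along the chosen axis.
    ix, iy = reference[0] + dx, reference[1] + dy
    d2 = (target[0] - ix) ** 2 + (target[1] - iy) ** 2
    return math.exp(-d2 / (2 * sigma ** 2))

# The object at (-1, 0) is exactly 'left of' the reference at the origin.
print(spatial_score((-1, 0), (0, 0), 'left'))  # 1.0
```

The soft score degrades smoothly with distance from the ideal region, which is what lets a dialogue system rank candidate objects rather than make a brittle yes/no decision.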
In this paper we propose a system, driven through natural language, that allows operators to select and manipulate objects in the environment using an industrial robot. In order to hide the complexities of robot programming we propose a natural language in which the user can control and jog the robot based on reference objects in the scene. We use semantic networks to relate the different types of objects in the scene.
Sonar imaging is currently the method of choice in underwater imaging. However, since sound signals are absorbed by water, an image acquired by a sonar exhibits a gradient in illumination, which makes underwater maps difficult to process. In this work, we investigated this phenomenon with the objective of proposing methods to normalize the images with regard to illumination. We propose to use MIxed exponential Regression Analysis (MIRA), estimated from each image that requires normalization. Two sidescan sonars were used to capture the seabed in Lake Vattern in Sweden in two opposite directions, west-east and east-west; hence the task is extremely difficult due to differences in the acoustic shadows. Using the structural similarity index, we performed similarity analyses between corresponding regions extracted from the sonar images. Results showed that MIRA has superior normalization performance. This work has been carried out as part of the SWARMs project (http://www.swarms.eu/).
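The structural similarity index (SSIM) used for the similarity analyses is defined by the standard luminance/contrast/structure formula. The sketch below computes a simplified single-window (global) SSIM with the conventional constants; the standard formulation applies it over local windows, and the paper's exact setup is not reproduced here:

```python
from statistics import mean

def global_ssim(x, y, max_val=255.0):
    """Simplified single-window SSIM over two flat pixel lists."""
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mx, my = mean(x), mean(y)
    vx = mean((a - mx) ** 2 for a in x)
    vy = mean((b - my) ** 2 for b in y)
    cov = mean((a - mx) * (b - my) for a, b in zip(x, y))
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

print(round(global_ssim([10, 50, 90], [10, 50, 90]), 4))  # 1.0
```

Identical regions score 1.0 and the score drops as structure diverges, which is why SSIM is a sensible criterion for judging how well two normalized sonar views of the same seabed region agree.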
Underwater imaging has become an active research area in recent years as a result of increased interest in underwater environments, with a potential impact on the world economy through what is called blue growth. Since sound propagates over larger distances than electromagnetic waves underwater, sonar is typically used for underwater imaging. One interesting sonar configuration consists of two parts (left and right) and is usually referred to as sidescan sonar. The image resulting from sidescan sonars, called a waterfall image, usually has two distinctive parts: the water column and the seabed image. The edge separating these two parts, called the first bottom return, corresponds to the real distance between the sonar and the seabed bottom (equivalent to the sensor primary altitude). The sensor primary altitude can be measured if the imaging sonar is complemented by an interferometric sonar; however, simple sonar systems have no way to measure the first bottom returns other than signal processing techniques. In this work, we propose two methods to detect the first bottom returns: the first is based on smoothing cubic spline regression, and the second is based on a moving average filter that detects signal variations. The results of both methods are compared to the sensor primary altitude and were successful in 22 images out of 25.
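The moving-average idea can be illustrated on a single sonar ping: the water column is comparatively flat, so the first strong departure from the running average marks the first bottom return. This is a hedged toy sketch, with the window size and threshold factor chosen arbitrarily, not the paper's tuned detector:

```python
def first_bottom_return(ping, window=5, factor=3.0):
    """Index of the first strong variation in a ping: the point where
    the signal departs from its moving average by more than `factor`
    times the recent average absolute deviation."""
    history = []
    for i, v in enumerate(ping):
        if len(history) >= window:
            recent = history[-window:]
            avg = sum(recent) / window
            dev = sum(abs(h - avg) for h in recent) / window
            if dev > 0 and abs(v - avg) > factor * dev:
                return i
        history.append(v)
    return None  # no clear transition found

# Flat water column followed by a jump at the seabed echo.
ping = [1.0, 1.1, 0.9, 1.0, 1.1, 1.0, 9.0, 8.5, 7.0]
print(first_bottom_return(ping))  # 6
```

In a real waterfall image this would be run per scan line, and the per-line estimates compared against the sensor primary altitude as the abstract describes.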
Complex underwater missions involving heterogeneous groups of AUVs and other types of vehicles require a number of steps, from defining and planning the mission, through orchestration during mission execution and recovery of the vehicles, to post-mission data analysis. In this work the Mission Management Tool (MMT), a software solution addressing the above-mentioned steps, is proposed. As demonstrated in real-world tests, the MMT is able to support the mission operators. The MMT hides the complex system of software solutions, hardware, and vehicles from the user, and allows intuitive interaction with the vehicles involved in a mission. The tool can adapt to a wide spectrum of missions involving different types of robotic systems and mission objectives.
With the rapidly growing use of Multi-Agent Systems (MASs), which can exponentially increase system complexity, the problem of planning a mission for MASs has become more intricate. In some MASs, human operators are still involved in various decision-making processes, including manual mission planning, which can be an ineffective approach for any non-trivial problem. Mission planning and re-planning can be represented as a combinatorial optimization problem. Computing a solution to these types of problems is notoriously difficult and does not scale, posing a challenge even to cutting-edge solvers. As time is usually considered an essential resource in MASs, automated solvers have limited time to provide a solution; the downside is that a solver can spend a substantial amount of time and still provide only a sub-optimal solution. In this work, we are interested in the interplay between a human operator and an automated solver: is it more efficient to let a human or an automated solver handle the planning and re-planning problems, or is the combination of the two a better approach? We thus propose an experimental setup to evaluate the effect of including a human operator in the mission planning and re-planning process. Our tests are performed on a series of instances with gradually increasing complexity and involve a group of human operators and a metaheuristic solver based on a genetic algorithm. We measure the effect of the interplay on both the quality and the structure of the output solutions. Our results show that the best setup is to let the operator come up with a few solutions before letting the solver improve them.
Face-to-face human communication is a multimodal and incremental process. An intelligent robot that operates in close relation with humans should have the ability to communicate with its human colleagues in such a manner. The process of understanding and responding to multimodal inputs has been an interesting field of research and has resulted in advancements in areas such as syntactic and semantic analysis, modality fusion and dialogue management. Some approaches to syntactic and semantic analysis take the incremental nature of human interaction into account. Our goal is to unify syntactic/semantic analysis, modality fusion and dialogue management into an incremental multimodal interaction manager. We believe that this approach will lead to a more robust system that can perform faster than today's systems.
Humans employ different information channels (modalities) such as speech, pictures and gestures in their communication. It is believed that some of these modalities are more error-prone for specific types of data, and that multimodality can therefore help to reduce ambiguities in the interaction. There have been numerous efforts to implement multimodal interfaces for computers and robots, yet there is no general standard framework for developing them. In this paper we propose a general framework for implementing multimodal interfaces. It is designed to perform natural language understanding, multimodal integration and semantic analysis with an incremental pipeline, and includes a multimodal grammar language, which is used for multimodal presentation and semantic meaning generation.
In this paper, some of the factors that affect the classification performance of EEG-based Brain-Computer Interfaces (BCIs) are studied. The study focuses on the P300 speller, which is an EEG-based BCI system. P300 is a physiological signal that represents the brain's response to a given stimulus, occurring roughly 300 ms after the stimulus onset. When this signal occurs, it changes the continuous EEG by a few microvolts. Since this is not a very distinct change, other physiological signals (movement of muscles and heart, blinking or other neural activities) may distort it. In order to determine whether there really is a P300 component in the signal, consecutive P300 epochs are averaged over trials. In this study, we tried two different multi-channel data handling methods with two different frequency windows. The resulting data were classified using Support Vector Machines (SVMs). It is shown that the proposed method has better classification performance.
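The trial-averaging step can be sketched directly: because the P300 deflection is time-locked to the stimulus, it adds coherently across epochs while uncorrelated noise averages towards zero. A minimal sketch with toy data (the bump location and noise pattern are invented for the example):

```python
def average_epochs(epochs):
    """Average time-locked EEG epochs sample by sample. Each epoch is
    a list of samples; all epochs must have the same length."""
    n = len(epochs)
    return [sum(e[t] for e in epochs) / n for t in range(len(epochs[0]))]

# Toy epochs: a fixed deflection at sample 3 plus alternating noise.
epochs = [[0, 0, 0, 5 + (1 if i % 2 else -1), 0] for i in range(10)]
avg = average_epochs(epochs)
print(avg[3])  # 5.0 -- the noise cancels, the deflection survives
```

In the speller setting, the averaged epochs (or features derived from them) are what gets fed to the SVM classifier.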
How do children (aged 6-12 years) understand and make use of a digital tool that is under development? This article builds on an ongoing interdisciplinary research project in which children, social workers (the inventors of this social innovation) and researchers together develop an interactive digital tool (application) to strengthen children's participation during the planning and process of welfare assessments. Departing from social constructionism, and using a discursive narrative approach with visual ethnography, the aim of the article is to display how the children co-construct the application and contribute with "stories of life situations" by drawing themselves as characters and the places they frequent. The findings show that the children improved the application by suggesting more affordances so that they could better create themselves/others, by discovering bugs, and by showing how it could appeal to children of various ages. The application helped the children to start communicating and bonding when creating themselves in detail, drawing places/characters and describing events associated with them, and sharing small life stories. The application can help children and social workers to connect and facilitate children's participation by allowing them to focus on their own perspectives when drawing and sharing stories.
Farming is facing many economic challenges in terms of productivity and cost-effectiveness. Labor shortage, partly due to depopulation of rural areas, especially in Europe, is another challenge. Domain-specific problems such as accurate identification and proper quantification of pathogens affecting plant and animal health are key factors for minimizing economic risks and not risking human health. The ECSEL AFarCloud (Aggregate FARming in the CLOUD) project will provide a distributed platform for autonomous farming that will allow the integration and cooperation of agriculture Cyber Physical Systems in real-time in order to increase efficiency, productivity, animal health, food quality and reduce farm labour costs. This platform will be integrated with farm management software and will support monitoring and decision-making solutions based on big data and real-time data mining techniques.
Farming is facing many economic challenges in terms of productivity and cost-effectiveness. Labor shortage, partly due to depopulation of rural areas, especially in Europe, is another challenge. Domain-specific problems such as accurate monitoring of soil and crop properties and animal health are key factors for minimizing economic risks and not risking human health. The ECSEL AFarCloud (Aggregate Farming in the Cloud) project will provide a distributed platform for autonomous farming that will allow the integration and cooperation of agriculture Cyber Physical Systems in real-time in order to increase efficiency, productivity, animal health, food quality and reduce farm labor costs. Moreover, such a platform can be integrated with farm management software to support monitoring and decision-making solutions based on big data and real-time data mining techniques.
New computer model shows how the brain processes information
Baran Çürüklü's research is about understanding how the visual cortex of the brain works. This is important for research in neuroscience and artificial intelligence.
Over the past few decades, brain research has shown that different centers of the cerebral cortex within a single species have a similar structure, and that there are great similarities between the cortices of different species. These results also suggest that nerve cells use a universal language when communicating with each other. Moreover, there appear to be general rules that can explain how the brain develops and acquires its final form. A direct consequence of these hypotheses is that Baran Çürüklü's research on the visual cortex may have a major impact on research on other parts of the brain.
The visual cortex is the part of the cerebral cortex that receives the incoming signals from the eye. It is a very important part of the brain and contains an estimated 40% of the nerve cells of the cerebral cortex. Baran Çürüklü has mapped in detail the response properties of the nerve cells in the primary visual cortex during the brain's development. This work builds on the discovery by Hubel and Wiesel that nerve cells in the primary visual cortex respond to contrast edges. Their research resulted in the feedforward model, an important part of the work for which they were awarded the Nobel Prize in Physiology or Medicine (1981).
Although this model has been the most cited model in the literature, much research remains to be done to understand the response properties of nerve cells. Baran Çürüklü's model complements the feedforward model by, among other things, explaining how the brain can recognize shapes under different contrast conditions. The model also shows how the environment influences the development of the visual cortex.
We propose a developmental model of the summation pools within layer 4. The model is based on the modular structure of the neocortex and captures some of the known properties of layer 4. Connections between the orientation minicolumns are developed during exposure to visual input. Excitatory local connections are dense and biased towards the iso-orientation domain. Excitatory long-range connections are sparse and target all orientation domains equally. Inhibition is local. The summation pools are elongated along the orientation axis. These summation pools can facilitate weak and poorly tuned LGN input and explain improved visibility as an effect of enlargement of a stimulus.
An abstract model of a cortical hypercolumn is presented. This model can replicate experimental findings relating to the orientation tuning mechanism in the primary visual cortex. Properties of the orientation-selective cells in the primary visual cortex, such as contrast invariance and response saturation, were demonstrated in simulations. We hypothesize that broadly tuned inhibition and local excitatory connections are sufficient for achieving this behavior. We have shown that the local intracortical connectivity of the model is to some extent biologically plausible.
Previous studies have suggested that synchronized firing is a prominent feature of cortical processing, and simplified network models have replicated such phenomena. Here we study to what extent these results remain robust when more biological detail is introduced. A biologically plausible network model of a layer of the tree shrew primary visual cortex, with a columnar architecture and realistic values for unit adaptation, connectivity patterns, axonal delays and synaptic strengths, was investigated. A drifting grating stimulus provided noisy afferent input. It is demonstrated that under certain conditions, spike- and burst-synchronized activity between neurons situated in different minicolumns may occur.
Among ethicists and engineers within robotics there is an ongoing discussion as to whether ethical robots are possible or even desirable. We answer both of these questions in the positive, based on an extensive literature study of existing arguments. Our contribution consists in bringing together and reinterpreting pieces of information from a variety of sources. One of the conclusions drawn is that artifactual morality must come in degrees and depend on the level of agency, autonomy and intelligence of the machine. Moral concerns for agents such as intelligent search machines are relatively simple, while highly intelligent and autonomous artifacts with significant impact and complex modes of agency must be equipped with more advanced ethical capabilities. Systems like cognitive robots are being developed that are expected to become part of our everyday lives in future decades; it is thus necessary to ensure that their behaviour is adequate. In an analogy with artificial intelligence, which is the ability of a machine to perform activities that would require intelligence in humans, artificial morality is considered to be the ability of a machine to perform activities that would require morality in humans. The capacity for artificial (artifactual) morality, such as artifactual agency, artifactual responsibility, artificial intentions, artificial (synthetic) emotions, etc., comes in varying degrees and depends on the type of agent. As an illustration, we address the assurance of safety in modern High Reliability Organizations through responsibility distribution. In the same way that the concept of agency is generalized in the case of artificial agents, the concept of moral agency, including responsibility, is generalized too. We propose to look at artificial moral agents as having functional responsibilities within a network of distributed responsibilities in a socio-technological system.
This does not take away the responsibilities of the other stakeholders in the system, but facilitates an understanding and regulation of such networks. It should be pointed out that the development process must assume an evolutionary form with a number of iterations, because the emergent properties of artifacts must be tested in real-world situations with agents of increasing intelligence and moral competence. We see this paper as a contribution to macro-level Requirements Engineering through discussion and analysis of general requirements for the design of ethical robots.
Moving nodes in a Mobile Wireless Sensor Network (MWSN) typically have two maintenance objectives: (i) maintain coverage of a target area for as long as possible, and (ii) extend the longevity of the network as much as possible. As nodes move and also route traffic in the network, their battery levels deplete differently for each node. Dead nodes lead to loss of connectivity and may even disconnect entire parts of the network. Several reactive and rule-based approaches have been proposed to solve this issue by adapting redeployment to depleted nodes. However, in large networks a cooperative approach may increase performance by taking the evolution of node battery and traffic into account. In this paper, we present a hybrid agent-based architecture that addresses the problem of depleting nodes during the maintenance phase of a MWSN. Agents, each assigned to a node, collaborate and adapt their behaviour to their battery levels. The collaborative behavior is modeled through the willingness-to-interact abstraction, which defines when agents ask for and give help to one another. Thus, depleting nodes may ask to be replaced by healthier counterparts and move to areas with less traffic or to a collection point. At the lower level, negotiations trigger a reactive navigation behaviour based on Social Potential Fields (SPF). It is shown that the proposed method improves coverage and extends network longevity in an environment without obstacles as compared to SPF alone.
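The generic form of a Social Potential Fields controller combines inverse-power attraction and repulsion between pairs of nodes. The sketch below shows that form in 2-D; the constants and exponents are illustrative assumptions, not the tuned values used in the paper:

```python
import math

def spf_force(node, neighbors, c_rep=1.0, c_att=1.0,
              sigma_rep=2.0, sigma_att=1.0):
    """Net 2-D force on `node` from inverse-power attraction and
    repulsion towards each neighbor (generic SPF formulation)."""
    fx, fy = 0.0, 0.0
    for nx, ny in neighbors:
        dx, dy = nx - node[0], ny - node[1]
        d = math.hypot(dx, dy)
        if d == 0:
            continue
        # Repulsion dominates at short range, attraction at long range.
        magnitude = c_att / d ** sigma_att - c_rep / d ** sigma_rep
        fx += magnitude * dx / d
        fy += magnitude * dy / d
    return fx, fy

# A distant neighbor attracts (positive x force towards it).
print(spf_force((0.0, 0.0), [(4.0, 0.0)]))  # (0.1875, 0.0)
```

With the repulsive exponent larger than the attractive one, nodes settle at an equilibrium spacing, which is what gives SPF its area-covering deployment behaviour; the agent layer described above then modulates this reactive behaviour based on battery levels.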
Adaptive autonomy allows agents to change their autonomy levels based on circumstances, e.g. when they decide to rely upon one another for completing tasks. In this paper, two configurations of agent models for adaptive autonomy are discussed. In the first configuration, the adaptive autonomous behavior is modeled through the willingness of an agent to assist others in the population. An agent that completes a high number of tasks, with respect to a predefined threshold, increases its willingness, and vice versa. Results show that agents complete more tasks when they are willing to give help, provided the need for such help is low. Agents configured to be helpful perform well among like-minded agents. The second configuration extends the first by adding the willingness to ask for help. Furthermore, the perceived helpfulness of the population and of the agent asking for help are used as input in the calculation of the willingness to give help. Simulations were run for three different scenarios: (i) a helpful agent operating in an unhelpful population, (ii) an unhelpful agent operating in a helpful population, and (iii) a population split in half between helpful and unhelpful agents. Results for all scenarios show that, by using this trait of the population in the calculation of willingness, and given enough interactions, helpful agents can control the degree of exploitation by unhelpful agents.
Adaptive autonomy (AA) is a behavior that allows agents to change their autonomy levels by reasoning about their circumstances. Previous work has modeled AA through the willingness to interact, composed of the willingness to ask for and to give assistance. The aim of this paper is to investigate, through computer simulations, the behavior of agents under the proposed computational model with respect to different initial configurations and levels of dependency between agents. Dependency refers to the need for help that one agent has; such a need can be fulfilled by deciding to depend on other agents. Results show, firstly, that agents whose willingness to interact changes during run-time perform better than those with static willingness parameters, i.e. willingness with fixed values. Secondly, two strategies for updating the willingness are compared: (i) the same fixed value is updated on each interaction, and (ii) the update is applied to the previously calculated value. The maximum number of completed tasks that need assistance is achieved with strategy (i), given specific initial configurations.
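The difference between the two update strategies can be sketched as follows. The update rule (a fixed step, clamped to [0, 1]) is an illustrative assumption; only the distinction between updating a fixed base versus the previous value comes from the abstract:

```python
def update_fixed_base(base, delta, success):
    """Strategy (i): always update from the same fixed base value."""
    w = base + delta if success else base - delta
    return min(1.0, max(0.0, w))

def update_cumulative(current, delta, success):
    """Strategy (ii): update the previously calculated value."""
    w = current + delta if success else current - delta
    return min(1.0, max(0.0, w))

# Strategy (i) stays near the base; strategy (ii) drifts with history.
w1 = w2 = 0.5
for success in [True, True, True]:
    w1 = update_fixed_base(0.5, 0.2, success)
    w2 = update_cumulative(w2, 0.2, success)
print(w1, w2)  # 0.7 1.0
```

Strategy (i) is memoryless, so the willingness reacts only to the latest interaction, while strategy (ii) accumulates history and can saturate at the bounds; the paper's finding is that (i) maximized completed assisted tasks under specific initial configurations.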
Adaptive autonomy plays a major role in the design of multi-robot and multi-agent systems, where the need to collaborate towards a common goal is of primary importance. In particular, adaptation becomes necessary to deal with dynamic environments and scarce resources. In this paper, a mathematical framework for modelling the agents' willingness to interact and collaborate, and a dynamic adaptation strategy for controlling the agents' behavior, which accounts for factors such as progress towards a goal and the resources available for completing a task, are proposed. The performance of the proposed strategy is evaluated through a fire-rescue scenario, in which a team of simulated mobile robots needs to extinguish all detected fires and save the individuals at risk while having limited resources. The simulations are implemented as a ROS-based multi-agent system, and the results show that the proposed adaptation strategy provides a more stable performance than a static collaboration policy.
Adaptive autonomy enables agents operating in an environment to change, or adapt, their autonomy levels by relying on tasks executed by others. Moreover, tasks can be delegated between agents, and as a result decision-making concerning them can also be delegated. In this work, adaptive autonomy is modeled through the willingness of agents to cooperate in order to complete abstract tasks, the latter with varying levels of dependencies between them. Furthermore, it is argued that adaptive autonomy should be considered at the agent's architectural level. The aim of this paper is thus two-fold. Firstly, an initial concept of an agent architecture is proposed and discussed from an agent-interaction perspective. Secondly, the relations between static values of the willingness to help, dependencies between tasks, and the overall usefulness of the agent population are analysed. The results show that an unselfish population completes more tasks than a selfish one for low dependency degrees. However, as the dependency degree increases, more tasks are dropped and consequently the utility of the population degrades; utility is measured by the number of tasks that the population completes during run-time. Finally, it is shown that agents are able to finish more tasks by dynamically changing their willingness to cooperate.
Multi-robot systems can be prone to failures during plan execution, depending on the harshness of the environment they are deployed in. As a consequence, initially devised plans may no longer be feasible, and a re-planning process needs to take place to re-allocate any pending tasks. Two main approaches emerge as possible solutions: a global re-planning technique using a centralized planner that redoes the task allocation with the updated world-state information, or a decentralized approach that focuses on local plan reparation, i.e., the re-allocation of those tasks initially assigned to the failed robots. The former approach produces an overall better solution, while the latter is less computationally expensive. The goal of this paper is to exploit the benefits of both approaches while minimizing their drawbacks. To this end, we propose a hybrid approach that combines a centralized planner with decentralized multi-agent planning. In case of an agent failure, the local plan reparation algorithm tries to repair the plan through agent negotiation. If it fails to re-allocate all of the pending tasks, the global re-planning algorithm is invoked, which re-allocates all unfinished tasks from all agents. The hybrid approach was compared to the centralized planner alone, and it was shown to improve the makespan of a mission in the presence of different numbers of failures, as a consequence of the local plan reparation algorithm.
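The fallback logic of the hybrid approach can be sketched as a simple control flow: try cheap local repair first, escalate to a full centralized re-plan only if tasks remain unallocated. The toy planner functions below are stand-ins invented for the example, not the paper's algorithms:

```python
def reallocate(pending, agents, try_local_repair, global_replan):
    """Hybrid repair: attempt local negotiation first, and fall back
    to a centralized re-plan only if tasks remain pending."""
    assignment, leftover = try_local_repair(pending, agents)
    if not leftover:
        return assignment
    # Local repair failed: re-allocate all unfinished tasks globally.
    return global_replan(pending, agents)

# Toy planners: local repair can place only one task per healthy agent.
def local(tasks, agents):
    paired = dict(zip(tasks, agents))
    return paired, tasks[len(agents):]

def global_(tasks, agents):
    return {t: agents[i % len(agents)] for i, t in enumerate(tasks)}

print(reallocate(['t1', 't2', 't3'], ['a1', 'a2'], local, global_))
```

Because the global re-plan is only invoked on local failure, the expensive centralized computation is paid for only in the worst cases, which matches the makespan benefit reported above.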
Adaptive autonomous (AA) agents are able to make their own decisions on when and with whom to share their autonomy based on their states, whereas dependability gives evidence on whether a system (e.g. an agent team) and its provided services are to be trusted. In this paper, an initial analysis of AA agents with respect to dependability is conducted. Firstly, AA is modeled through a pairwise relationship called the willingness of agents to interact, i.e. to ask for and give assistance. Secondly, dependability is evaluated by considering solely the reliability attribute, which represents the continuity of correct service. The failure analysis is realized by modeling the agents through Petri nets. Simulation results indicate that agents drop slightly more tasks when they are more willing to interact than otherwise, especially when the fail rate of individual agents increases. In conclusion, the willingness should be tuned such that there is a compromise between performance and helpfulness.
Mission planning for multi-agent autonomous systems aims to generate feasible and optimal mission plans that satisfy the given requirements. In this article, we propose a mission-planning methodology that combines (i) a path-planning algorithm for synthesizing path plans that are safe in environments with complex road conditions, and (ii) a task-scheduling method for synthesizing task plans that schedule the tasks in a correct and time-efficient order, taking into account the planned paths. The task-scheduling method is based on model checking, which provides means of automatically generating task execution orders that satisfy the requirements and ensure the correctness and efficiency of the plans by construction. We implement our approach in a tool named MALTA, which offers a user-friendly GUI for configuring mission requirements, a module for path planning, an integration with the model checker UPPAAL, and functions for the automatic generation of formal models and the parsing of their execution traces. Experiments with the tool demonstrate its applicability and performance in various configurations of an industrial case study of an autonomous quarry. We also show the adaptability of our tool by employing it on a special case of the industrial case study.
The advent of powerful control units and the widespread availability of cheap computers have significantly increased the role of artificial intelligence (AI) in various sectors. In the field of maritime applications, this progress has led to the emergence of Edge AI as an important technology. This research focuses on the application of Edge AI to maritime vessels, addressing key aspects of maritime operations. Using Edge AI, we aim to improve the situation awareness and operational efficiency of marine vessels. This study explores the integration of Edge AI into marine environments and emphasizes its potential to improve on-board safety, navigation, and decision-making processes. Our approach shows how decentralizing intelligence into smart units, rather than relying on large central systems, can lead to more efficient and adaptive maritime operations, paving the way for a new era of technologically advanced and environmentally conscious maritime practices.
A new approach to interacting with an industrial robot using hand gestures is presented. The system proposed here can rapidly learn a first-time user's hand gestures, which improves product usability and acceptability. Artificial neural networks trained with the evolution strategy technique are found to be well suited for this problem. The gesture recognition system is an integrated part of a larger project addressing intelligent human-robot interaction using a novel multi-modal paradigm. The goal of the overall project is to address complexity issues related to robot programming by providing a multi-modal, user-friendly interaction system that can be used by SMEs.
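The idea of training a network's weights with an evolution strategy, rather than gradient descent, can be illustrated with a toy (1+1)-ES evolving a single linear unit. This is a minimal sketch under stated assumptions, not the paper's actual architecture: the feature encoding, network size, and ES variant here are all hypothetical.

```python
import random


def predict(weights, features):
    """Tiny linear classifier: sign of the weighted sum."""
    s = sum(w * x for w, x in zip(weights, features))
    return 1 if s > 0 else 0


def fitness(weights, data):
    """Number of correctly classified gesture samples."""
    return sum(predict(weights, x) == y for x, y in data)


def evolve(data, dim, generations=200, sigma=0.5, seed=1):
    """(1+1)-evolution strategy: mutate the parent with Gaussian noise
    and keep the child whenever it classifies at least as well."""
    rng = random.Random(seed)
    parent = [rng.uniform(-1, 1) for _ in range(dim)]
    best = fitness(parent, data)
    for _ in range(generations):
        child = [w + rng.gauss(0, sigma) for w in parent]
        f = fitness(child, data)
        if f >= best:
            parent, best = child, f
    return parent, best
```

Because the ES only needs a fitness score, not gradients, the same loop works unchanged for larger networks and non-differentiable recognition criteria, which is one reason it suits rapid per-user adaptation.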
The aim of the Smart and Networking Underwater Robots in Cooperation Meshes (SWARMs) project is to make autonomous underwater vehicles (AUVs), remotely operated vehicles (ROVs) and unmanned surface vehicles (USVs) more accessible and useful. To achieve cooperation and communication between different AUVs, these must be able to exchange messages, so an efficient and reliable communication network is necessary for SWARMs. In order to provide such a network for mission execution, one important and necessary issue is the topology control of the network of AUVs that are cooperating underwater. However, due to the specific properties of an underwater AUV cooperation network, such as the high mobility of AUVs, large transmission delays and low bandwidth, the traditional topology control algorithms primarily designed for terrestrial wireless sensor networks cannot be used directly in the underwater environment. Moreover, these algorithms, in which the nodes adjust their transmission power whenever the current transmission power does not equal an optimal one, are costly in an underwater cooperating AUV network. Considering these facts, in this paper, we propose a Probabilistic Topology Control (PTC) algorithm for an underwater cooperating AUV network. In PTC, when the transmission power of an AUV is not equal to the optimal transmission power, whether the transmission power needs to be adjusted is determined probabilistically based on the AUV's parameters. Each AUV determines its own transmission power adjustment probability based on its parameter deviation. The larger the deviation, the higher the transmission power adjustment probability, and vice versa. For evaluating the performance of PTC, we combine the PTC algorithm with the Fuzzy logic Topology Control (FTC) algorithm and compare the performance of these two algorithms.
The simulation results demonstrate that PTC is efficient at reducing the transmission power adjustment ratio while improving the network performance.
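The core probabilistic rule described above can be sketched per AUV as follows. This is a minimal illustration of the deviation-proportional idea only; the normalization by a maximum deviation and the function names are assumptions, not the paper's exact formulation.

```python
import random


def adjustment_probability(deviation, max_deviation):
    """Adjustment probability grows with the normalized parameter
    deviation: the larger the deviation, the higher the probability,
    and vice versa."""
    if max_deviation <= 0:
        return 0.0
    return min(1.0, abs(deviation) / max_deviation)


def ptc_step(p_current, p_optimal, max_deviation, rng=None):
    """One probabilistic topology-control step for a single AUV:
    adjust toward the optimal transmission power only with a
    probability proportional to the current deviation."""
    rng = rng or random.Random()
    if p_current == p_optimal:
        return p_current  # already optimal; never adjust
    prob = adjustment_probability(p_current - p_optimal, max_deviation)
    return p_optimal if rng.random() < prob else p_current
```

Compared with the deterministic rule (always adjust on any deviation), small deviations are usually tolerated, which is what reduces the adjustment ratio and its energy cost.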
Emerging precision agriculture techniques rely on the frequent collection of high-quality data, which can be acquired efficiently by unmanned aerial systems (UAS). The main obstacle to wider adoption of this technology is related to UAS operational costs. The path forward requires a high degree of autonomy and the integration of the UAS and other cyber-physical systems on the farm into a common Farm Management System (FMS) to facilitate the use of big data and artificial intelligence (AI) techniques for decision support. Such a solution has been implemented in the EU project AFarCloud (Aggregated Farming in the Cloud). The regulation of UAS operations is another important factor that impacts the adoption rate of agricultural UAS. An analysis of the new European UAS regulations relevant for autonomous operation is included. Autonomous UAS operation through the AFarCloud FMS solution has been demonstrated at several test farms in multiple European countries. Novel applications have been developed, such as the retrieval of data from remote field sensors using UAS and in situ measurements using dedicated UAS payloads designed for physical contact with the environment. The main findings include that (1) autonomous UAS operation in the agricultural sector is feasible once the regulations allow it; (2) the UAS should be integrated with the FMS and include autonomous data processing and charging functionality to offer a practical solution; and (3) several applications beyond mere asset monitoring are relevant for the UAS and will help to justify the cost of this equipment.
In this paper, a centralized mission planner is presented. The planner employs a genetic algorithm for the optimization of the temporal planning problem. With knowledge of the agents' specifications and capabilities, as well as the constraints and parameters of each task, the planner can produce plans that exploit multi-agent tasks, concurrency at the agent level, and heterogeneous agents. Numerous optimization criteria that can be of use to the mission operator are tested on the same mission data set. Promising results demonstrating the effectiveness of this approach are presented in the case study section.
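A genetic algorithm for this kind of temporal optimization can be sketched with one common encoding: a chromosome is a task-order permutation, decoded greedily onto agents, with makespan as the fitness. This is an assumed illustrative encoding, not the paper's actual chromosome design; task durations, the swap mutation, and the homogeneous-agent decoder are all simplifications.

```python
import random


def makespan(order, durations, n_agents):
    """Greedy decoding: each task in the chromosome's order goes to
    the agent that becomes free earliest; fitness is the finish time."""
    loads = [0.0] * n_agents
    for t in order:
        i = loads.index(min(loads))
        loads[i] += durations[t]
    return max(loads)


def ga_schedule(durations, n_agents, pop=30, gens=100, seed=0):
    """Permutation GA: truncation selection plus swap mutation,
    minimizing the decoded makespan."""
    rng = random.Random(seed)
    tasks = list(range(len(durations)))
    population = [rng.sample(tasks, len(tasks)) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda o: makespan(o, durations, n_agents))
        survivors = population[: pop // 2]
        children = []
        for _ in range(pop - len(survivors)):
            child = rng.choice(survivors)[:]
            i, j = rng.sample(range(len(child)), 2)  # swap mutation
            child[i], child[j] = child[j], child[i]
            children.append(child)
        population = survivors + children
    best = min(population, key=lambda o: makespan(o, durations, n_agents))
    return best, makespan(best, durations, n_agents)
```

Other optimization criteria mentioned in the abstract (e.g., operator-defined objectives) would slot in by replacing the makespan fitness while keeping the same evolutionary loop.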