Publications (10 of 43)
Ahlberg, C., Ekstrand, F., Ekström, M., Spampinato, G. & Asplund, L. (2015). GIMME2 - An embedded system for stereo vision and processing of megapixel images with FPGA-acceleration. In: 2015 International Conference on ReConFigurable Computing and FPGAs, ReConFig 2015. Paper presented at International Conference on ReConFigurable Computing and FPGAs, ReConFig 2015, 7 December 2015 through 9 December 2015.
2015 (English). In: 2015 International Conference on ReConFigurable Computing and FPGAs, ReConFig 2015, 2015. Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents GIMME2, an embedded stereo vision system designed to be compact, power efficient, cost effective, and high performing in the area of image processing. GIMME2 features two 10-megapixel image sensors and a Xilinx Zynq, which combines FPGA fabric with a dual-core ARM CPU on a single chip. This enables GIMME2 to process video-rate megapixel image streams in real time, exploiting the benefits of heterogeneous processing.

Keywords
Cost effectiveness, Field programmable gate arrays (FPGA), Image processing, Pixels, Reconfigurable architectures, Reconfigurable hardware, Stereo vision, Video signal processing, Cost effective, FPGA fabric, Heterogeneous processing, Image streams, Power efficient, Process video, Single chips, Stereo-vision system, Stereo image processing
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:mdh:diva-31587 (URN), 10.1109/ReConFig.2015.7393318 (DOI), 000380437700038 (), 2-s2.0-84964335178 (Scopus ID), 9781467394062 (ISBN)
Conference
International Conference on ReConFigurable Computing and FPGAs, ReConFig 2015, 7 December 2015 through 9 December 2015
Available from: 2016-05-13 Created: 2016-05-13 Last updated: 2016-12-22. Bibliographically approved
Bruhn, F., Brunberg, K., Hines, J., Asplund, L. & Norgren, M. (2015). Introducing Radiation Tolerant Heterogeneous Computers for Small Satellites. In: IEEE Aerospace Conference Proceedings, vol. 2015. Paper presented at IEEE Aerospace Conference 2015, IEEEAC2015, 7-14 Mar 2015, Big Sky, United States (Article number 7119158).
2015 (English). In: IEEE Aerospace Conference Proceedings, vol. 2015, 2015, Article number 7119158. Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents results and conclusions from the design, manufacturing, and benchmarking of a low-power, fault-tolerant heterogeneous computer, realized on an industrial Qseven® small form factor (SFF) platform. A heterogeneous computer in this context features multi-core processors (CPU), a graphical processing unit (GPU), and a field programmable gate array (FPGA). The x86-compatible CPU enables the use of vast amounts of commonly available software and operating systems in space and harsh environments. The developed heterogeneous computer shares the same core architecture as game consoles such as the Microsoft Xbox One and Sony PlayStation 4 and has an aggregated computational performance in the TFLOP range. The processing power can be used for on-board intelligent data processing and higher degrees of autonomy in general. The module features a quad-core 1.5 GHz 64-bit CPU (24 GFLOPS), 160 GPU shader cores (127 GFLOPS), and a 12-Mgate-equivalent FPGA fabric with a safety-critical ARM® Cortex-M3 MCU.

Keywords
Heterogeneous computing, heterogeneous system architecture, onboard processing
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:mdh:diva-28127 (URN), 10.1109/AERO.2015.7119158 (DOI), 000380501302091 (), 2-s2.0-84940703986 (Scopus ID), 9781479953790 (ISBN)
Conference
IEEE Aerospace Conference 2015 IEEEAC2015, 7-14 Mar 2015, Big Sky, United States
Projects
GIMME3 - Semi-fault tolerant next generation high performance computer architecture based on screened industrial components
Available from: 2015-06-12 Created: 2015-06-08 Last updated: 2018-01-11. Bibliographically approved
Spampinato, G., Lidholm, J., Ahlberg, C., Ekstrand, F., Ekström, M. & Asplund, L. (2013). An embedded stereo vision module for industrial vehicles automation. In: Proceedings of the IEEE International Conference on Industrial Technology. Paper presented at 2013 IEEE International Conference on Industrial Technology, ICIT 2013; Cape Town; South Africa; 25 February 2013 through 28 February 2013 (pp. 52-57). IEEE
2013 (English). In: Proceedings of the IEEE International Conference on Industrial Technology, IEEE, 2013, p. 52-57. Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents an embedded vision system based on reconfigurable hardware (FPGA) that performs stereo image processing and 3D mapping of sparse features for autonomous navigation and obstacle detection in industrial settings. We propose an EKF-based visual SLAM to achieve 6D localization of the vehicle even in non-flat scenarios. The system uses vision as the only source of information; as a consequence, it operates independently of the vehicle's odometry, since visual odometry is used. © 2013 IEEE.
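The EKF machinery behind such a visual SLAM pipeline can be sketched roughly as follows. This is an illustrative linear toy example, not the authors' 6D implementation: the 2D state layout, models F and H, and noise covariances Q and R are all assumed values.

```python
import numpy as np

# Minimal EKF predict/update sketch (illustrative only; the paper's 6D
# visual SLAM carries pose and landmark estimates in a richer state).

def ekf_predict(x, P, F, Q):
    """Propagate the state estimate and covariance through the motion model."""
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def ekf_update(x, P, z, H, R):
    """Correct the prediction with a (visual) measurement z."""
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Assumed toy models: identity motion and measurement on a 2D position.
x = np.zeros(2); P = np.eye(2)
F = np.eye(2); Q = 0.01 * np.eye(2)
H = np.eye(2); R = 0.1 * np.eye(2)

x, P = ekf_predict(x, P, F, Q)
x, P = ekf_update(x, P, np.array([1.0, 0.5]), H, R)
```

In a full EKF-SLAM the models are nonlinear and F, H are Jacobians re-evaluated at each step; the predict/correct structure stays the same.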

Place, publisher, year, edition, pages
IEEE, 2013
Keywords
3D vision, Embedded, portable, SLAM
National Category
Engineering and Technology
Identifiers
urn:nbn:se:mdh:diva-19056 (URN), 10.1109/ICIT.2013.6505647 (DOI), 000322785200006 (), 2-s2.0-84877621168 (Scopus ID), 9781467345699 (ISBN)
Conference
2013 IEEE International Conference on Industrial Technology, ICIT 2013; Cape Town; South Africa; 25 February 2013 through 28 February 2013
Available from: 2013-05-24 Created: 2013-05-24 Last updated: 2018-08-07. Bibliographically approved
Akan, B., Çürüklü, B. & Asplund, L. (2013). Scheduling POP-Star for Automatic Creation of Robot Cell Programs. Paper presented at 18th IEEE International Conference on Emerging Technologies & Factory Automation, ETFA'13, Cagliari, Italy (pp. 1-4).
2013 (English). Conference paper, Published paper (Refereed)
Abstract [en]

Typical pick-and-place and machine-tending applications often require an industrial robot to be embedded in a cell and to communicate with other devices in the cell. Programming the cell logic is a tedious job requiring expert programming knowledge, and it can take more time than programming the specific robot movements themselves. We propose a new system that takes a description of the whole manufacturing process in natural language as input, fills in the implicit actions, and plans the sequence of actions to accomplish the described task with minimal makespan using a modified partial-order planning algorithm. Finally, we demonstrate that the proposed system can come up with a sensible plan for the given instructions.
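The makespan notion above can be illustrated with a toy schedule. This is a plain earliest-start computation over a partial order, not the POP-Star algorithm itself; the action names, durations, and precedence constraints are invented for the example.

```python
# Illustrative makespan computation for a partially ordered set of robot
# cell actions: each action starts as soon as all of its predecessors
# have finished. Names and durations below are made-up example data.

def earliest_finish(durations, preds):
    """Return earliest finish times under precedence constraints,
    processing actions in a topological order."""
    finish = {}
    remaining = dict(preds)
    while remaining:
        # Actions whose predecessors are all scheduled are ready to run.
        ready = [a for a, ps in remaining.items()
                 if all(p in finish for p in ps)]
        for a in ready:
            start = max((finish[p] for p in preds[a]), default=0.0)
            finish[a] = start + durations[a]
            del remaining[a]
    return finish

durations = {"pick": 2.0, "move": 3.0, "open_gripper": 1.0, "place": 1.5}
preds = {"pick": [], "move": ["pick"],
         "open_gripper": [], "place": ["move", "open_gripper"]}

makespan = max(earliest_finish(durations, preds).values())
```

Because "open_gripper" has no predecessors it overlaps with "pick" and "move", so the makespan is shorter than the sum of all durations; a planner searching over orderings exploits exactly this slack.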

Keywords
robot cell scheduling, partial order planning, astar
National Category
Engineering and Technology
Identifiers
urn:nbn:se:mdh:diva-23600 (URN), 10.1109/ETFA.2013.6648129 (DOI), 2-s2.0-84890694425 (Scopus ID), 978-1-4799-0862-2 (ISBN)
Conference
18th IEEE International Conference on Emerging Technologies & Factory Automation ETFA'13, Cagliari, Italy
Projects
Robot Colleague - A Project in Collaborative Robotics
Available from: 2013-12-16 Created: 2013-12-16 Last updated: 2016-05-17. Bibliographically approved
Ahlberg, C., Asplund, L., Campeanu, G., Ciccozzi, F., Ekstrand, F., Ekström, M., . . . Segerblad, E. (2013). The Black Pearl: An Autonomous Underwater Vehicle.
2013 (English). Report (Other academic)
Abstract [en]

The Black Pearl is a custom-made autonomous underwater vehicle developed at Mälardalen University, Sweden. It is built in a modular fashion, including its mechanics, electronics, and software. After a successful participation in the RoboSub competition in 2012, where it won the prize for best craftsmanship, this year we made minor improvements to the hardware, while the focus of the robot's evolution shifted to the software. In this paper we give an overview of how the Black Pearl is built, from both the hardware and the software point of view.

Keywords
underwater robot, embedded systems
National Category
Engineering and Technology
Identifiers
urn:nbn:se:mdh:diva-25159 (URN)
Projects
RALF3 - Software for Embedded High Performance Architectures
Note

Published as part of the AUVSI Foundation and ONR's 16th International RoboSub Competition, San Diego, CA

Available from: 2014-06-05 Created: 2014-06-05 Last updated: 2014-09-26. Bibliographically approved
Ekstrand, F., Ahlberg, C., Ekström, M., Asplund, L. & Spampinato, G. (2012). Utilization and Performance Considerations in Resource Optimized Stereo Matching for Real-Time Reconfigurable Hardware. In: VISAPP 2012 - Proceedings of the International Conference on Computer Vision Theory and Application, vol. 2. Paper presented at International Conference on Computer Vision Theory and Applications, VISAPP 2012; Rome; 24 February 2012 (pp. 415-418).
2012 (English). In: VISAPP 2012 - Proceedings of the International Conference on Computer Vision Theory and Application, vol. 2, 2012, p. 415-418. Conference paper, Published paper (Other academic)
Abstract [en]

This paper presents a set of approaches for increasing the accuracy of basic area-based stereo matching methods, targeting real-time FPGA systems for dense disparity map estimation. The methods focus on low resource usage and maximized improvement per cost unit, to enable the inclusion of an autonomous system in an FPGA. The approach performs on par with other area-matching implementations, but at substantially lower resource usage. Additionally, the solution removes the requirement for external memory for reconfigurable hardware, together with the limitation in image size that accompanies standard methods. As a fully pipelined, complete on-chip solution, it is highly suitable for real-time stereo vision systems, with a frame rate over 100 fps for megapixel images.
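For reference, the kind of basic area-based matching the paper optimizes can be sketched as a sum-of-absolute-differences (SAD) search. This dense software loop is purely illustrative: the window size and disparity range are assumptions, and it bears no resemblance to a resource-optimized FPGA pipeline.

```python
import numpy as np

# Illustrative area-based stereo matching via sum of absolute
# differences (SAD): for each left-image pixel, find the horizontal
# shift d that minimizes the window cost against the right image.

def sad_disparity(left, right, max_disp=16, win=2):
    """Return a dense disparity map over a (2*win+1)^2 matching window."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(win, h - win):
        for x in range(max_disp + win, w - win):
            patch = left[y-win:y+win+1, x-win:x+win+1].astype(np.int32)
            best_cost, best_d = None, 0
            for d in range(max_disp):
                cand = right[y-win:y+win+1, x-d-win:x-d+win+1].astype(np.int32)
                cost = int(np.abs(patch - cand).sum())
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Synthetic check: a right view that is the left view shifted 3 pixels
# should yield disparity 3 in the interior.
rng = np.random.default_rng(0)
left = rng.integers(0, 255, size=(20, 40)).astype(np.uint8)
right = np.zeros_like(left)
right[:, :-3] = left[:, 3:]
disp = sad_disparity(left, right, max_disp=8, win=2)
```

An FPGA implementation restructures this as a pipeline of line buffers and parallel per-disparity cost units, which is where the paper's resource-utilization trade-offs arise.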

National Category
Robotics; Embedded Systems
Research subject
Electronics
Identifiers
urn:nbn:se:mdh:diva-13007 (URN), 9789898565044 (ISBN)
Conference
International Conference on Computer Vision Theory and Applications, VISAPP 2012; Rome; 24 February 2012
Available from: 2011-09-14 Created: 2011-09-14 Last updated: 2015-01-09. Bibliographically approved
Pordel, M., Khalilzad, N. M., Yekeh, F. & Asplund, L. (2011). A component based architecture to improve testability, targeted FPGA-based vision systems. In: 2011 IEEE 3rd International Conference on Communication Software and Networks, ICCSN 2011. Paper presented at 2011 IEEE 3rd International Conference on Communication Software and Networks, ICCSN 2011, 27 May 2011 through 29 May 2011, Xi'an (pp. 601-605).
2011 (English). In: 2011 IEEE 3rd International Conference on Communication Software and Networks, ICCSN 2011, 2011, p. 601-605. Conference paper, Published paper (Refereed)
Abstract [en]

FPGAs have been used in many robotics projects for real-time image processing, providing reliable systems with low execution time and simplified timing analysis. Many of these systems spend a lot of time in the development and testing phases. In some cases it is not possible to test the system in real environments very often, due to accessibility, availability, or cost problems. This paper is the result of a case study on vision systems for two robotics projects, in which the vision team consisted of seven students working full-time for six months on developing and implementing different image algorithms. While the FPGA has been used for real-time image processing, steps have been taken to shorten the development and testing phases. The main focus of the project is to integrate different testing methods with FPGA development. It includes a component-based solution that uses two-way communication with a PC controller for system evaluation and testing. Once data is acquired from the vision board, the system stores it and simulates the environment captured earlier by feeding the obtained data back to the FPGA. This approach implements a debugging methodology for FPGA-based solutions that accelerates the development phase. In order to transfer the massive amounts of image data, RMII, an interface for Ethernet communication, has been investigated and implemented. The provided solution makes changes easier, saves time, and solves the problems mentioned earlier.
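The capture-and-replay methodology described above can be sketched in a few lines. The class and the pipeline below are hypothetical stand-ins for illustration, not the authors' FPGA/PC tooling.

```python
# Sketch of record/replay testing: frames captured once from the vision
# board are stored, then fed back through the processing pipeline so the
# same scenario can be re-tested offline without hardware access.
# FrameRecorder and the lambda pipeline are hypothetical stand-ins.

class FrameRecorder:
    def __init__(self):
        self.frames = []

    def capture(self, frame):
        """Store a frame exactly as received from the vision board."""
        self.frames.append(frame)
        return frame

    def replay(self, pipeline):
        """Re-run every stored frame through a processing pipeline,
        reproducing the originally captured environment."""
        return [pipeline(f) for f in self.frames]

rec = FrameRecorder()
for frame in ([1, 2, 3], [4, 5, 6]):      # stand-in "captured" frames
    rec.capture(frame)

results = rec.replay(lambda f: sum(f))    # stand-in processing pipeline
```

The value of the pattern is determinism: a bug seen in the field can be reproduced on the bench as often as needed with identical input data.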

Keywords
Component Based, FPGA, Robotics, Testability, Vision, Component-based architecture, Cost problems, Development phase, Ethernet communications, Execution time, Image algorithms, Real environments, Real-time image processing, Reliable systems, System evaluation, Testing method, Timing Analysis, Two way communications, Vision systems, Communication, Ethernet, Field programmable gate arrays (FPGA), Image processing, Robots, Real time systems
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:mdh:diva-16019 (URN), 10.1109/ICCSN.2011.6014162 (DOI), 2-s2.0-80053162945 (Scopus ID), 9781612844855 (ISBN)
Conference
2011 IEEE 3rd International Conference on Communication Software and Networks, ICCSN 2011, 27 May 2011 through 29 May 2011, Xi'an
Available from: 2012-11-02 Created: 2012-10-29 Last updated: 2018-01-12. Bibliographically approved
Ameri E., A., Akan, B., Çürüklü, B. & Asplund, L. (2011). A General Framework for Incremental Processing of Multimodal Inputs. In: Proceedings of the 13th international conference on multimodal interfaces. Paper presented at International Conference on Multimodal Interaction - ICMI 2011 (pp. 225-228). New York: ACM Press
2011 (English). In: Proceedings of the 13th international conference on multimodal interfaces, New York: ACM Press, 2011, p. 225-228. Conference paper, Published paper (Refereed)
Abstract [en]

Humans employ different information channels (modalities) such as speech, pictures, and gestures in their communication. It is believed that some of these modalities are more error-prone for specific types of data, and multimodality can therefore help to reduce ambiguities in the interaction. There have been numerous efforts to implement multimodal interfaces for computers and robots, yet there is no general standard framework for developing them. In this paper we propose a general framework for implementing multimodal interfaces. It is designed to perform natural language understanding, multimodal integration, and semantic analysis with an incremental pipeline, and includes a multimodal grammar language, which is used for multimodal presentation and semantic meaning generation.

Place, publisher, year, edition, pages
New York: ACM Press, 2011
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:mdh:diva-13586 (URN), 10.1145/2070481.2070521 (DOI), 2-s2.0-83455176699 (Scopus ID), 978-1-4503-0641-6 (ISBN)
Conference
International Conference on Multimodal Interaction - ICMI 2011
Available from: 2011-12-15 Created: 2011-12-15 Last updated: 2018-01-12. Bibliographically approved
Spampinato, G., Lidholm, J., Ahlberg, C., Ekstrand, F., Ekström, M. & Asplund, L. (2011). An Embedded Stereo Vision Module for 6D Pose Estimation and Mapping. In: Proceedings of the IEEE international conference on Intelligent Robots and Systems IROS2011. Paper presented at IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, CA, September 25-30, 2011 (pp. 1626-1631). New York: IEEE Press
2011 (English). In: Proceedings of the IEEE international conference on Intelligent Robots and Systems IROS2011, New York: IEEE Press, 2011, p. 1626-1631. Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents an embedded vision system based on reconfigurable hardware (FPGA) and two CMOS cameras to perform stereo image processing and 3D mapping for autonomous navigation. We propose an EKF-based visual SLAM with sparse feature detectors to achieve 6D localization of the vehicle in non-flat scenarios. The system can operate regardless of odometry information from the vehicle, since visual odometry is used. As a result, the final system is compact and easy to install and configure.

Place, publisher, year, edition, pages
New York: IEEE Press, 2011
National Category
Engineering and Technology
Identifiers
urn:nbn:se:mdh:diva-13603 (URN), 10.1109/IROS.2011.6048395 (DOI), 000297477501148 (), 2-s2.0-84455195713 (Scopus ID), 978-1-61284-455-8 (ISBN)
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, CA, September 25-30, 2011
Available from: 2011-12-15 Created: 2011-12-15 Last updated: 2016-06-02. Bibliographically approved
Ryberg, A., Lennartson, B., Christiansson, A.-K., Ericsson, M. & Asplund, L. (2011). Analysis and evaluation of a general camera model. Computer Vision and Image Understanding, 115(11), 1503-1515
2011 (English). In: Computer Vision and Image Understanding, ISSN 1077-3142, E-ISSN 1090-235X, Vol. 115, no. 11, p. 1503-1515. Article in journal (Refereed). Published
Abstract [en]

A versatile General Camera Model (GCM) has been developed and is described in detail. The model is general in the sense that it can capture fisheye, conventional, and catadioptric cameras in a unified framework. The camera model includes efficient handling of non-central cameras as well as compensation for decentring distortion. A novel way of analysing the radial distortion functions of camera models leads to a straightforward improvement of conventional models with respect to generality, accuracy, and simplicity. Different camera models are experimentally compared for two cameras with conventional and fisheye lenses, and the results show that the overall performance is favourable for the GCM. (C) 2011 Elsevier Inc. All rights reserved.
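As context for the radial distortion functions the paper analyses, a conventional polynomial radial model can be sketched as below. The coefficients k1 and k2 are assumed values chosen for illustration; the GCM generalizes well beyond this polynomial form.

```python
# Sketch of a conventional polynomial radial distortion model applied to
# normalized image coordinates: x_d = x * (1 + k1*r^2 + k2*r^4).
# The coefficients k1, k2 are assumed example values, not from the paper.

def radial_distort(xy, k1=-0.2, k2=0.05):
    """Apply polynomial radial distortion to a normalized point (x, y)."""
    x, y = xy
    r2 = x * x + y * y                     # squared radius from the axis
    scale = 1.0 + k1 * r2 + k2 * r2 * r2   # radial scaling factor
    return (x * scale, y * scale)
```

Points on the optical axis are unchanged, while points farther out are pulled inward (barrel) or pushed outward (pincushion) depending on the sign of the coefficients; analysing this radial function is exactly where a more general model gains its flexibility for fisheye and catadioptric lenses.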

Keywords
Camera models, Fisheye, Catadioptric camera, Central camera, Non-central camera, Radial distortion, Decentring distortion, Stereo vision
National Category
Engineering and Technology
Identifiers
urn:nbn:se:mdh:diva-15520 (URN), 10.1016/j.cviu.2011.06.009 (DOI), 000295424200004 (), 2-s2.0-80052517386 (Scopus ID)
Available from: 2012-10-24 Created: 2012-10-10 Last updated: 2017-12-07. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0001-5141-7242
