Publications (10 of 43)
Ahlberg, C., Ekstrand, F., Ekström, M., Spampinato, G. & Asplund, L. (2015). GIMME2 - An embedded system for stereo vision and processing of megapixel images with FPGA-acceleration. In: 2015 International Conference on ReConFigurable Computing and FPGAs, ReConFig 2015: . Paper presented at International Conference on ReConFigurable Computing and FPGAs, ReConFig 2015, 7 December 2015 through 9 December 2015.
2015 (English). In: 2015 International Conference on ReConFigurable Computing and FPGAs, ReConFig 2015, 2015. Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents GIMME2, an embedded stereo-vision system designed to be compact, power-efficient, cost-effective, and high-performing in image processing. GIMME2 features two 10-megapixel image sensors and a Xilinx Zynq, which combines FPGA fabric with a dual-core ARM CPU on a single chip. This enables GIMME2 to process video-rate megapixel image streams in real time, exploiting the benefits of heterogeneous processing.

Keywords
Cost effectiveness, Field programmable gate arrays (FPGA), Image processing, Pixels, Reconfigurable architectures, Reconfigurable hardware, Stereo vision, Video signal processing, Cost effective, FPGA fabric, Heterogeneous processing, Image streams, Power efficient, Process video, Single chips, Stereo-vision system, Stereo image processing
HSV category
Identifiers
urn:nbn:se:mdh:diva-31587 (URN) · 10.1109/ReConFig.2015.7393318 (DOI) · 000380437700038 · 2-s2.0-84964335178 (Scopus ID) · 9781467394062 (ISBN)
Conference
International Conference on ReConFigurable Computing and FPGAs, ReConFig 2015, 7 December 2015 through 9 December 2015
Available from: 2016-05-13 Created: 2016-05-13 Last updated: 2016-12-22. Bibliographically checked.
Bruhn, F., Brunberg, K., Hines, J., Asplund, L. & Norgren, M. (2015). Introducing Radiation Tolerant Heterogeneous Computers for Small Satellites. In: IEEE Aerospace Conference Proceedings, vol. 2015: . Paper presented at IEEE Aerospace Conference 2015 IEEEAC2015, 7-14 Mar 2015, Big Sky, United States (Art. no. 7119158).
2015 (English). In: IEEE Aerospace Conference Proceedings, vol. 2015, 2015, Art. no. 7119158. Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents results and conclusions from the design, manufacturing, and benchmarking of a heterogeneous, low-power, fault-tolerant computer, realized on an industrial Qseven® small form factor (SFF) platform. A heterogeneous computer in this context features multi-core processors (CPU), a graphics processing unit (GPU), and a field-programmable gate array (FPGA). The x86-compatible CPU enables the use of the vast amount of commonly available software and operating systems in space and harsh environments. The developed heterogeneous computer shares the same core architecture as game consoles such as the Microsoft Xbox One and Sony PlayStation 4, and has an aggregated computational performance in the TFLOP range. The processing power can be used for on-board intelligent data processing and higher degrees of autonomy in general. The module features a quad-core 1.5 GHz 64-bit CPU (24 GFLOPS), 160 GPU shader cores (127 GFLOPS), and a 12-Mgate-equivalent FPGA fabric with a safety-critical ARM® Cortex-M3 MCU.

Keywords
Heterogeneous computing, heterogeneous system architecture, onboard processing
HSV category
Identifiers
urn:nbn:se:mdh:diva-28127 (URN) · 10.1109/AERO.2015.7119158 (DOI) · 000380501302091 · 2-s2.0-84940703986 (Scopus ID) · 9781479953790 (ISBN)
Conference
IEEE Aerospace Conference 2015 IEEEAC2015, 7-14 Mar 2015, Big Sky, United States
Projects
GIMME3 - Semi-fault tolerant next generation high performance computer architecture based on screened industrial components
Available from: 2015-06-12 Created: 2015-06-08 Last updated: 2018-01-11. Bibliographically checked.
Spampinato, G., Lidholm, J., Ahlberg, C., Ekstrand, F., Ekström, M. & Asplund, L. (2013). An embedded stereo vision module for industrial vehicles automation. In: Proceedings of the IEEE International Conference on Industrial Technology: . Paper presented at 2013 IEEE International Conference on Industrial Technology, ICIT 2013; Cape Town; South Africa; 25 February 2013 through 28 February 2013 (pp. 52-57). IEEE
2013 (English). In: Proceedings of the IEEE International Conference on Industrial Technology, IEEE, 2013, pp. 52-57. Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents an embedded vision system based on reconfigurable hardware (FPGA) that performs stereo image processing and 3D mapping of sparse features for autonomous navigation and obstacle detection in industrial settings. We propose an EKF-based visual SLAM to achieve 6D localization of the vehicle even in non-flat scenarios. The system uses vision as its only source of information; as a consequence, it operates independently of the vehicle's odometry, since visual odometry is used. © 2013 IEEE.
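The EKF-based visual SLAM mentioned above follows the standard extended Kalman filter predict/update cycle. The sketch below shows that cycle in its linear form (an illustrative sketch only: the matrices F and H and the noise covariances Q and R are hypothetical placeholders, not the paper's vision-based motion and measurement models):

```python
import numpy as np

def ekf_predict(x, P, F, Q):
    """Propagate the state estimate and covariance through the motion model."""
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def ekf_update(x, P, z, H, R):
    """Correct the prediction with a measurement z (e.g. a visual feature)."""
    y = z - H @ x                   # innovation
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

In the paper's setting the state would hold the 6D vehicle pose plus sparse landmark positions, with the measurement model given by projecting landmarks into the stereo cameras.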

Place, publisher, year, edition, pages
IEEE, 2013
Keywords
3D vision, Embedded, portable, SLAM
HSV category
Identifiers
urn:nbn:se:mdh:diva-19056 (URN) · 10.1109/ICIT.2013.6505647 (DOI) · 000322785200006 · 2-s2.0-84877621168 (Scopus ID) · 9781467345699 (ISBN)
Conference
2013 IEEE International Conference on Industrial Technology, ICIT 2013; Cape Town; South Africa; 25 February 2013 through 28 February 2013
Available from: 2013-05-24 Created: 2013-05-24 Last updated: 2018-08-07. Bibliographically checked.
Akan, B., Çürüklü, B. & Asplund, L. (2013). Scheduling POP-Star for Automatic Creation of Robot Cell Programs. Paper presented at 18th IEEE International Conference on Emerging Technologies & Factory Automation ETFA'13, Cagliari, Italy (pp. 1-4).
2013 (English). Conference paper, Published paper (Refereed)
Abstract [en]

Typical pick-and-place and machine-tending applications often require an industrial robot to be embedded in a cell and to communicate with other devices in the cell. Programming the cell logic is a tedious job that requires expert programming knowledge, and it can take more time than programming the specific robot movements themselves. We propose a new system that takes a description of the whole manufacturing process in natural language as input, fills in the implicit actions, and plans the sequence of actions to accomplish the described task with minimal makespan, using a modified partial-order planning algorithm. Finally, we demonstrate that the proposed system can come up with a sensible plan for the given instructions.
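The minimal-makespan objective mentioned above can be made concrete with a small helper that computes the makespan of a task set under a partial order (a generic illustration of the objective, not the paper's POP-Star algorithm; the task names and durations are invented):

```python
def makespan(durations, deps):
    """Earliest-finish-time makespan of tasks under a partial order.

    durations: {task: duration}; deps: {task: set of prerequisite tasks}.
    Tasks with no ordering constraint between them may run in parallel,
    so the makespan is the length of the longest dependency chain.
    """
    finish = {}
    def eft(t):
        if t not in finish:
            finish[t] = durations[t] + max(
                (eft(p) for p in deps.get(t, ())), default=0)
        return finish[t]
    return max(eft(t) for t in durations)
```

For example, with tasks pick (2), move (3), place (1), inspect (2), where move needs pick, place needs move, and inspect needs pick, the critical path pick → move → place gives a makespan of 6 while inspect runs in parallel.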

Keywords
robot cell scheduling, partial order planning, astar
HSV category
Identifiers
urn:nbn:se:mdh:diva-23600 (URN) · 10.1109/ETFA.2013.6648129 (DOI) · 2-s2.0-84890694425 (Scopus ID) · 978-1-4799-0862-2 (ISBN)
Conference
18th IEEE International Conference on Emerging Technologies & Factory Automation ETFA'13, Cagliari, Italy
Projects
Robot Colleague - A Project in Collaborative Robotics
Available from: 2013-12-16 Created: 2013-12-16 Last updated: 2016-05-17. Bibliographically checked.
Ahlberg, C., Asplund, L., Campeanu, G., Ciccozzi, F., Ekstrand, F., Ekström, M., . . . Segerblad, E. (2013). The Black Pearl: An Autonomous Underwater Vehicle.
2013 (English). Report (Other academic)
Abstract [en]

The Black Pearl is a custom-made autonomous underwater vehicle developed at Mälardalen University, Sweden. It is built in a modular fashion, including its mechanics, electronics, and software. After a successful participation in the RoboSub competition in 2012, where it won the prize for best craftsmanship, this year we made minor improvements to the hardware, while the focus of the robot's evolution shifted to the software. In this paper we give an overview of how the Black Pearl is built, from both the hardware and software points of view.

Keywords
underwater robot, embedded systems
HSV category
Identifiers
urn:nbn:se:mdh:diva-25159 (URN)
Projects
RALF3 - Software for Embedded High Performance Architectures
Note

Published as part of the AUVSI Foundation and ONR's 16th International RoboSub Competition, San Diego, CA

Available from: 2014-06-05 Created: 2014-06-05 Last updated: 2014-09-26. Bibliographically checked.
Ekstrand, F., Ahlberg, C., Ekström, M., Asplund, L. & Spampinato, G. (2012). Utilization and Performance Considerations in Resource Optimized Stereo Matching for Real-Time Reconfigurable Hardware. In: VISAPP 2012 - Proceedings of the International Conference on Computer Vision Theory and Application, vol. 2: . Paper presented at International Conference on Computer Vision Theory and Applications, VISAPP 2012; Rome; 24 February 2012 (pp. 415-418).
2012 (English). In: VISAPP 2012 - Proceedings of the International Conference on Computer Vision Theory and Application, vol. 2, 2012, pp. 415-418. Conference paper, Published paper (Other academic)
Abstract [en]

This paper presents a set of approaches for increasing the accuracy of basic area-based stereo matching methods, targeting real-time FPGA systems for dense disparity map estimation. The methods focus on low resource usage and maximized improvement per cost unit, to enable fitting an autonomous system in an FPGA. The approach performs on par with other area-matching implementations, but at substantially lower resource usage. Additionally, the solution removes the need for external memory on reconfigurable hardware, together with the limitation on image size that accompanies standard methods. As a fully pipelined, complete on-chip solution, it is highly suitable for real-time stereo-vision systems, with a frame rate over 100 fps for megapixel images.
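Basic area-based matching, the starting point the paper optimizes, can be sketched in software as a sum-of-absolute-differences (SAD) disparity search (a plain reference implementation of the general technique, not the paper's FPGA pipeline; the window size and disparity range are arbitrary):

```python
import numpy as np

def sad_disparity(left, right, max_disp=16, win=2):
    """Dense disparity map by SAD block matching, left image as reference."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(win, h - win):
        for x in range(win + max_disp, w - win):
            block = left[y-win:y+win+1, x-win:x+win+1].astype(np.int32)
            best, best_d = None, 0
            for d in range(max_disp):
                # Candidate window in the right image, shifted left by d.
                cand = right[y-win:y+win+1, x-d-win:x-d+win+1].astype(np.int32)
                cost = np.abs(block - cand).sum()
                if best is None or cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

A hardware version streams pixels through a pipeline instead of looping, which is the kind of restructuring that makes a fully on-chip, external-memory-free design feasible.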

HSV category
Research subject
electronics
Identifiers
urn:nbn:se:mdh:diva-13007 (URN) · 9789898565044 (ISBN)
Conference
International Conference on Computer Vision Theory and Applications, VISAPP 2012; Rome; 24 February 2012
Available from: 2011-09-14 Created: 2011-09-14 Last updated: 2015-01-09. Bibliographically checked.
Pordel, M., Khalilzad, N. M., Yekeh, F. & Asplund, L. (2011). A component based architecture to improve testability, targeted FPGA-based vision systems. In: 2011 IEEE 3rd International Conference on Communication Software and Networks, ICCSN 2011: . Paper presented at 2011 IEEE 3rd International Conference on Communication Software and Networks, ICCSN 2011, 27 May 2011 through 29 May 2011, Xi'an (pp. 601-605).
2011 (English). In: 2011 IEEE 3rd International Conference on Communication Software and Networks, ICCSN 2011, 2011, pp. 601-605. Conference paper, Published paper (Refereed)
Abstract [en]

FPGAs have been used in many robotics projects for real-time image processing: they provide reliable systems with low execution time and simplified timing analysis. Many of these systems, however, spend considerable time in the development and testing phases. In some cases it is not possible to test the system in real environments very often, due to accessibility, availability, or cost problems. This paper is the result of a case study on the vision systems of two robotics projects, in which the vision team consisted of seven students working full-time for six months on developing and implementing different image algorithms. While the FPGA is used for real-time image processing, several steps were taken to shorten the development and testing phases. The main focus of the project is to integrate different testing methods with FPGA development. It includes a component-based solution that uses two-way communication with a PC controller for system evaluation and testing. Once data is acquired from the vision board, the system stores it and simulates the same environment that was captured earlier by feeding the obtained data back to the FPGA. This approach implements a debugging methodology for FPGA-based solutions that accelerates the development phase. To transfer the massive amount of image data, RMII, an interface for Ethernet communication, was investigated and implemented. The provided solution makes changes easier, saves time, and solves the problems mentioned earlier.
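The record-and-replay idea described above — capture data from the vision board once, then feed it back to reproduce the same environment for testing — can be sketched generically (the frame source and storage format below are invented for illustration; the paper's system communicates with the FPGA over Ethernet/RMII):

```python
import pickle

class ReplayHarness:
    """Record sensor frames once, then replay them for repeatable tests.

    A generic sketch of the record/replay pattern, not the authors' design:
    record() would be fed frames captured from the real vision board, and
    replay() stands in for the board when re-testing the processing chain.
    """

    def __init__(self, path):
        self.path = path

    def record(self, frames):
        # Persist the captured frames so test runs need no hardware.
        with open(self.path, "wb") as f:
            pickle.dump(list(frames), f)

    def replay(self):
        # Yield the stored frames in their original order.
        with open(self.path, "rb") as f:
            for frame in pickle.load(f):
                yield frame
```

The value of the pattern is that a bug observed in the field can be reproduced deterministically on a developer's desk, which is what shortens the testing phase.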

Keywords
Component Based, FPGA, Robotics, Testability, Vision, Component-based architecture, Cost problems, Development phase, Ethernet communications, Execution time, Image algorithms, Real environments, Real-time image processing, Reliable systems, System evaluation, Testing method, Timing Analysis, Two way communications, Vision systems, Communication, Ethernet, Field programmable gate arrays (FPGA), Image processing, Robots, Real time systems
HSV category
Identifiers
urn:nbn:se:mdh:diva-16019 (URN) · 10.1109/ICCSN.2011.6014162 (DOI) · 2-s2.0-80053162945 (Scopus ID) · 9781612844855 (ISBN)
Conference
2011 IEEE 3rd International Conference on Communication Software and Networks, ICCSN 2011, 27 May 2011 through 29 May 2011, Xi'an
Available from: 2012-11-02 Created: 2012-10-29 Last updated: 2018-01-12. Bibliographically checked.
Ameri E., A., Akan, B., Çürüklü, B. & Asplund, L. (2011). A General Framework for Incremental Processing of Multimodal Inputs. In: Proceedings of the 13th international conference on multimodal interfaces: . Paper presented at International Conference on Multimodal Interaction - ICMI 2011 (pp. 225-228). New York: ACM Press
2011 (English). In: Proceedings of the 13th international conference on multimodal interfaces, New York: ACM Press, 2011, pp. 225-228. Conference paper, Published paper (Refereed)
Abstract [en]

Humans employ different information channels (modalities) such as speech, pictures, and gestures in their communication. It is believed that some of these modalities are more error-prone for specific types of data, and multimodality can therefore help to reduce ambiguities in the interaction. There have been numerous efforts to implement multimodal interfaces for computers and robots, yet there is no general standard framework for developing them. In this paper we propose a general framework for implementing multimodal interfaces. It is designed to perform natural language understanding, multimodal integration, and semantic analysis with an incremental pipeline, and includes a multimodal grammar language, which is used for multimodal presentation and semantic meaning generation.

Place, publisher, year, edition, pages
New York: ACM Press, 2011
HSV category
Identifiers
urn:nbn:se:mdh:diva-13586 (URN) · 10.1145/2070481.2070521 (DOI) · 2-s2.0-83455176699 (Scopus ID) · 978-1-4503-0641-6 (ISBN)
Conference
International Conference on Multimodal Interaction - ICMI 2011
Available from: 2011-12-15 Created: 2011-12-15 Last updated: 2018-01-12. Bibliographically checked.
Spampinato, G., Lidholm, J., Ahlberg, C., Ekstrand, F., Ekström, M. & Asplund, L. (2011). An Embedded Stereo Vision Module for 6D Pose Estimation and Mapping. In: Proceedings of the IEEE international conference on Intelligent Robots and Systems IROS2011: . Paper presented at IEEE/RSJ International Conference on Intelligent Robots and Systems Location: San Francisco, CA Date: SEP 25-30, 2011 (pp. 1626-1631). New York: IEEE Press
2011 (English). In: Proceedings of the IEEE international conference on Intelligent Robots and Systems IROS2011, New York: IEEE Press, 2011, pp. 1626-1631. Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents an embedded vision system based on reconfigurable hardware (FPGA) and two CMOS cameras that performs stereo image processing and 3D mapping for autonomous navigation. We propose an EKF-based visual SLAM with sparse feature detectors to achieve 6D localization of the vehicle in non-flat scenarios. The system can operate independently of odometry information from the vehicle, since visual odometry is used. As a result, the final system is compact and easy to install and configure.

Place, publisher, year, edition, pages
New York: IEEE Press, 2011
HSV category
Identifiers
urn:nbn:se:mdh:diva-13603 (URN) · 10.1109/IROS.2011.6048395 (DOI) · 000297477501148 · 2-s2.0-84455195713 (Scopus ID) · 978-1-61284-455-8 (ISBN)
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems Location: San Francisco, CA Date: SEP 25-30, 2011
Available from: 2011-12-15 Created: 2011-12-15 Last updated: 2016-06-02. Bibliographically checked.
Ryberg, A., Lennartson, B., Christiansson, A.-K., Ericsson, M. & Asplund, L. (2011). Analysis and evaluation of a general camera model. Computer Vision and Image Understanding, 115(11), 1503-1515
2011 (English). In: Computer Vision and Image Understanding, ISSN 1077-3142, E-ISSN 1090-235X, vol. 115, no. 11, pp. 1503-1515. Article in journal (Refereed) Published
Abstract [en]

A versatile General Camera Model, GCM, has been developed and is described in detail. The model is general in the sense that it can capture fisheye, conventional, and catadioptric cameras in a unified framework. The camera model includes efficient handling of non-central cameras as well as compensation for decentring distortion. A novel way of analysing the radial distortion functions of camera models leads to a straightforward improvement of conventional models with respect to generality, accuracy, and simplicity. Different camera models are experimentally compared for two cameras with conventional and fisheye lenses, and the results show that the overall performance is favourable for the GCM. (C) 2011 Elsevier Inc. All rights reserved.
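As background for the radial-distortion analysis mentioned in the abstract, conventional camera models typically describe radial distortion with a polynomial in the image radius (the standard textbook form, shown here for context as one of the conventional models the GCM generalizes, not the GCM itself):

```latex
r_d = r\left(1 + k_1 r^2 + k_2 r^4 + k_3 r^6\right)
```

where $r$ is the undistorted radius, $r_d$ the distorted radius, and $k_1, k_2, k_3$ the radial distortion coefficients. Fisheye models instead relate the radius to the incidence angle, e.g. $r_d = f\theta$ for an equidistant lens, which is why a unified treatment of such functions is non-trivial.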

Keywords
Camera models, Fisheye, Catadioptric camera, Central camera, Non-central camera, Radial distortion, Decentring distortion, Stereo vision
HSV category
Identifiers
urn:nbn:se:mdh:diva-15520 (URN) · 10.1016/j.cviu.2011.06.009 (DOI) · 000295424200004 · 2-s2.0-80052517386 (Scopus ID)
Available from: 2012-10-24 Created: 2012-10-10 Last updated: 2017-12-07. Bibliographically checked.
Identifiers
ORCID iD: orcid.org/0000-0001-5141-7242