https://www.mdu.se/

Publications (10 of 43)
Ahlberg, C., Ekstrand, F., Ekström, M., Spampinato, G. & Asplund, L. (2015). GIMME2 - An embedded system for stereo vision and processing of megapixel images with FPGA-acceleration. In: 2015 International Conference on ReConFigurable Computing and FPGAs, ReConFig 2015. Paper presented at International Conference on ReConFigurable Computing and FPGAs, ReConFig 2015, 7 December 2015 through 9 December 2015.
2015 (English). In: 2015 International Conference on ReConFigurable Computing and FPGAs, ReConFig 2015, 2015. Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents GIMME2, an embedded stereo vision system designed to be compact, power efficient, cost effective, and high performing in the area of image processing. GIMME2 features two 10-megapixel image sensors and a Xilinx Zynq, which combines FPGA fabric with a dual-core ARM CPU on a single chip. This enables GIMME2 to process video-rate megapixel image streams in real time, exploiting the benefits of heterogeneous processing.

Keywords
Cost effectiveness, Field programmable gate arrays (FPGA), Image processing, Pixels, Reconfigurable architectures, Reconfigurable hardware, Stereo vision, Video signal processing, Cost effective, FPGA fabric, Heterogeneous processing, Image streams, Power efficient, Process video, Single chips, Stereo-vision system, Stereo image processing
National subject category
Electrical Engineering and Electronics
Identifiers
urn:nbn:se:mdh:diva-31587 (URN), 10.1109/ReConFig.2015.7393318 (DOI), 000380437700038, 2-s2.0-84964335178 (Scopus ID), 9781467394062 (ISBN)
Conference
International Conference on ReConFigurable Computing and FPGAs, ReConFig 2015, 7 December 2015 through 9 December 2015
Available from: 2016-05-13. Created: 2016-05-13. Last updated: 2020-10-22. Bibliographically reviewed.
Bruhn, F., Brunberg, K., Hines, J., Asplund, L. & Norgren, M. (2015). Introducing Radiation Tolerant Heterogeneous Computers for Small Satellites. In: IEEE Aerospace Conference Proceedings, vol. 2015. Paper presented at IEEE Aerospace Conference 2015 IEEEAC2015, 7-14 Mar 2015, Big Sky, United States (Article number 7119158).
2015 (English). In: IEEE Aerospace Conference Proceedings, vol. 2015, 2015, Article number 7119158. Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents results and conclusions from the design, manufacturing, and benchmarking of a heterogeneous, low-power, fault-tolerant computer, realized on an industrial Qseven® small form factor (SFF) platform. A heterogeneous computer in this context features multi-core processors (CPU), a graphics processing unit (GPU), and a field programmable gate array (FPGA). The x86-compatible CPU enables the use of vast amounts of commonly available software and operating systems in space and harsh environments. The developed heterogeneous computer shares the same core architecture as game consoles such as the Microsoft Xbox One and Sony PlayStation 4, and has an aggregated computational performance in the TFLOP range. The processing power can be used for on-board intelligent data processing and higher degrees of autonomy in general. The module features a quad-core 1.5 GHz 64-bit CPU (24 GFLOPS), 160 GPU shader cores (127 GFLOPS), and a 12-Mgate-equivalent FPGA fabric with a safety-critical ARM® Cortex-M3 MCU.

Keywords
Heterogeneous computing, heterogeneous system architecture, onboard processing
National subject category
Computer and Information Sciences
Identifiers
urn:nbn:se:mdh:diva-28127 (URN), 10.1109/AERO.2015.7119158 (DOI), 000380501302091, 2-s2.0-84940703986 (Scopus ID), 9781479953790 (ISBN)
Conference
IEEE Aerospace Conference 2015 IEEEAC2015, 7-14 Mar 2015, Big Sky, United States
Project
GIMME3 - Semi-fault tolerant next generation high performance computer architecture based on screened industrial components
Available from: 2015-06-12. Created: 2015-06-08. Last updated: 2018-01-11. Bibliographically reviewed.
Spampinato, G., Lidholm, J., Ahlberg, C., Ekstrand, F., Ekström, M. & Asplund, L. (2013). An embedded stereo vision module for industrial vehicles automation. In: Proceedings of the IEEE International Conference on Industrial Technology. Paper presented at 2013 IEEE International Conference on Industrial Technology, ICIT 2013; Cape Town; South Africa; 25 February 2013 through 28 February 2013 (pp. 52-57). IEEE
2013 (English). In: Proceedings of the IEEE International Conference on Industrial Technology, IEEE, 2013, pp. 52-57. Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents an embedded vision system based on reconfigurable hardware (FPGA) to perform stereo image processing and 3D mapping of sparse features for autonomous navigation and obstacle detection in industrial settings. We propose an EKF-based visual SLAM to achieve 6D localization of the vehicle even in non-flat scenarios. The system uses vision as the only source of information; as a consequence, it operates independently of the vehicle's odometry, since visual odometry is used. © 2013 IEEE.
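The EKF loop underlying such a visual SLAM can be sketched as a standard predict/update cycle. The sketch below is a generic, minimal illustration: the state vector, motion model F, and measurement model H are placeholders, not the paper's actual 6D vehicle-and-landmark parameterization.

```python
import numpy as np

def ekf_predict(x, P, F, Q):
    """Propagate the state estimate x and covariance P through motion model F."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def ekf_update(x, P, z, H, R):
    """Correct the predicted state with a measurement z observed through H."""
    y = z - H @ x                     # innovation
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

In a full visual SLAM, F and H would be the Jacobians of the vehicle motion and camera projection models, linearized at the current estimate, and the state would stack the 6D pose with the mapped feature positions.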

Place, publisher, year, edition, pages
IEEE, 2013
Keywords
3D vision, Embedded, portable, SLAM
National subject category
Engineering and Technology
Identifiers
urn:nbn:se:mdh:diva-19056 (URN), 10.1109/ICIT.2013.6505647 (DOI), 000322785200006, 2-s2.0-84877621168 (Scopus ID), 9781467345699 (ISBN)
Conference
2013 IEEE International Conference on Industrial Technology, ICIT 2013; Cape Town; South Africa; 25 February 2013 through 28 February 2013
Available from: 2013-05-24. Created: 2013-05-24. Last updated: 2020-10-22. Bibliographically reviewed.
Akan, B., Çürüklü, B. & Asplund, L. (2013). Scheduling POP-Star for Automatic Creation of Robot Cell Programs. Paper presented at 18th IEEE International Conference on Emerging Technologies & Factory Automation ETFA'13, Cagliari, Italy (pp. 1-4).
2013 (English). Conference paper, Published paper (Refereed)
Abstract [en]

Typical pick-and-place and machine-tending applications often require an industrial robot to be embedded in a cell and to communicate with other devices in the cell. Programming the program logic is a tedious job requiring expert programming knowledge, and it can take more time than programming the specific robot movements themselves. We propose a new system which takes the description of the whole manufacturing process in natural language as input, fills in the implicit actions, and plans the sequence of actions that accomplishes the described task with minimal makespan, using a modified partial-order planning algorithm. Finally, we demonstrate that the proposed system can come up with a sensible plan for the given instructions.

Keywords
robot cell scheduling, partial order planning, astar
National subject category
Engineering and Technology
Identifiers
urn:nbn:se:mdh:diva-23600 (URN), 10.1109/ETFA.2013.6648129 (DOI), 2-s2.0-84890694425 (Scopus ID), 978-1-4799-0862-2 (ISBN)
Conference
18th IEEE International Conference on Emerging Technologies & Factory Automation ETFA'13, Cagliari, Italy
Project
Robot Colleague - A Project in Collaborative Robotics
Available from: 2013-12-16. Created: 2013-12-16. Last updated: 2016-05-17. Bibliographically reviewed.
Ahlberg, C., Asplund, L., Campeanu, G., Ciccozzi, F., Ekstrand, F., Ekström, M., . . . Segerblad, E. (2013). The Black Pearl: An Autonomous Underwater Vehicle.
2013 (English). Report (Other academic)
Abstract [en]

The Black Pearl is a custom-made autonomous underwater vehicle developed at Mälardalen University, Sweden. It is built in a modular fashion, including its mechanics, electronics, and software. After a successful participation at the RoboSub competition in 2012, where it won the prize for best craftsmanship, this year we made minor improvements to the hardware, while the focus of the robot's evolution shifted to the software. In this paper we give an overview of how the Black Pearl is built, from both the hardware and software points of view.

Keywords
underwater robot, embedded systems
National subject category
Engineering and Technology
Identifiers
urn:nbn:se:mdh:diva-25159 (URN)
Project
RALF3 - Software for Embedded High Performance Architectures
Note

Published as part of the AUVSI Foundation and ONR's 16th International RoboSub Competition, San Diego, CA

Available from: 2014-06-05. Created: 2014-06-05. Last updated: 2020-10-22. Bibliographically reviewed.
Ekstrand, F., Ahlberg, C., Ekström, M., Asplund, L. & Spampinato, G. (2012). Utilization and Performance Considerations in Resource Optimized Stereo Matching for Real-Time Reconfigurable Hardware. In: VISAPP 2012 - Proceedings of the International Conference on Computer Vision Theory and Application, vol. 2. Paper presented at International Conference on Computer Vision Theory and Applications, VISAPP 2012; Rome; 24 February 2012 (pp. 415-418).
2012 (English). In: VISAPP 2012 - Proceedings of the International Conference on Computer Vision Theory and Application, vol. 2, 2012, pp. 415-418. Conference paper, Published paper (Other academic)
Abstract [en]

This paper presents a set of approaches for increasing the accuracy of basic area-based stereo matching methods, targeting real-time FPGA systems for dense disparity map estimation. The methods focus on low resource usage and maximized improvement per cost unit, to enable the inclusion of an autonomous system in an FPGA. The approach performs on par with other area-matching implementations, but at substantially lower resource usage. Additionally, the solution removes the requirement for external memory for reconfigurable hardware, together with the limitation in image size that accompanies standard methods. As a fully pipelined, complete on-chip solution, it is highly suitable for real-time stereo-vision systems, with a frame rate over 100 fps for megapixel images.
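As a rough illustration of the area-based matching family this work optimizes, a plain winner-take-all SAD (sum of absolute differences) matcher can be written as below. The window size and disparity range are illustrative defaults, not the paper's parameters, and an FPGA implementation would pipeline the window costs per pixel rather than loop over them.

```python
import numpy as np

def sad_disparity(left, right, max_disp=16, win=3):
    """Dense disparity map via winner-take-all SAD block matching.

    left, right: grayscale images as 2D arrays (rectified stereo pair).
    Returns an int32 disparity map; border pixels are left at 0.
    """
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch_l = left[y-half:y+half+1, x-half:x+half+1].astype(np.int32)
            best_cost, best_d = None, 0
            for d in range(max_disp):
                # A scene point at column x in the left image appears at x - d
                # in the right image of a rectified pair.
                patch_r = right[y-half:y+half+1, x-d-half:x-d+half+1].astype(np.int32)
                cost = np.abs(patch_l - patch_r).sum()
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

On a synthetic pair where the right image is the left shifted by a known disparity, the matcher recovers that shift for almost every interior pixel.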

National subject category
Robotics and Automation; Embedded Systems
Research subject
Electronics
Identifiers
urn:nbn:se:mdh:diva-13007 (URN), 9789898565044 (ISBN)
Conference
International Conference on Computer Vision Theory and Applications, VISAPP 2012; Rome; 24 February 2012
Available from: 2011-09-14. Created: 2011-09-14. Last updated: 2020-10-22. Bibliographically reviewed.
Pordel, M., Khalilzad, N. M., Yekeh, F. & Asplund, L. (2011). A component based architecture to improve testability, targeted FPGA-based vision systems. In: 2011 IEEE 3rd International Conference on Communication Software and Networks, ICCSN 2011. Paper presented at 2011 IEEE 3rd International Conference on Communication Software and Networks, ICCSN 2011, 27 May 2011 through 29 May 2011, Xi'an (pp. 601-605).
2011 (English). In: 2011 IEEE 3rd International Conference on Communication Software and Networks, ICCSN 2011, 2011, pp. 601-605. Conference paper, Published paper (Refereed)
Abstract [en]

FPGA has been used in many robotics projects for real-time image processing. It provides reliable systems with low execution time and simplified timing analysis. Many of these systems take a lot of time in the development and testing phases, and in some cases it is not possible to test the system in real environments very often, due to accessibility, availability, or cost problems. This paper is the result of a case study on vision systems for two robotics projects, in which the vision team consisted of seven students working full-time for six months on developing and implementing different image algorithms. While the FPGA has been used for real-time image processing, some steps have been taken to shorten the development and testing phases. The main focus of the project is to integrate different testing methods with FPGA development. It includes a component-based solution that uses two-way communication with a PC controller for system evaluation and testing. Once data is acquired from the vision board, the system stores it and simulates the same environment that was captured earlier by feeding the obtained data back to the FPGA. This approach implements a debugging methodology for FPGA-based solutions which accelerates the development phase. In order to transfer the large amounts of image data, RMII, an interface for Ethernet communication, has been investigated and implemented. The provided solution makes changes easier, saves time, and solves the problems mentioned earlier.

Keywords
Component Based, FPGA, Robotics, Testability, Vision, Component-based architecture, Cost problems, Development phase, Ethernet communications, Execution time, Image algorithms, Real environments, Real-time image processing, Reliable systems, System evaluation, Testing method, Timing Analysis, Two way communications, Vision systems, Communication, Ethernet, Field programmable gate arrays (FPGA), Image processing, Robots, Real time systems
National subject category
Computer and Information Sciences
Identifiers
urn:nbn:se:mdh:diva-16019 (URN), 10.1109/ICCSN.2011.6014162 (DOI), 2-s2.0-80053162945 (Scopus ID), 9781612844855 (ISBN)
Conference
2011 IEEE 3rd International Conference on Communication Software and Networks, ICCSN 2011, 27 May 2011 through 29 May 2011, Xi'an
Available from: 2012-11-02. Created: 2012-10-29. Last updated: 2018-01-12. Bibliographically reviewed.
Ameri E., A., Akan, B., Çürüklü, B. & Asplund, L. (2011). A General Framework for Incremental Processing of Multimodal Inputs. In: Proceedings of the 13th international conference on multimodal interfaces. Paper presented at International Conference on Multimodal Interaction - ICMI 2011 (pp. 225-228). New York: ACM Press
2011 (English). In: Proceedings of the 13th international conference on multimodal interfaces, New York: ACM Press, 2011, pp. 225-228. Conference paper, Published paper (Refereed)
Abstract [en]

Humans employ different information channels (modalities) such as speech, pictures, and gestures in their communication. It is believed that some of these modalities are more error-prone for certain types of data, and multimodality can therefore help to reduce ambiguities in the interaction. There have been numerous efforts in implementing multimodal interfaces for computers and robots, yet there is no general standard framework for developing them. In this paper we propose a general framework for implementing multimodal interfaces. It is designed to perform natural language understanding, multimodal integration, and semantic analysis with an incremental pipeline, and includes a multimodal grammar language, which is used for multimodal presentation and semantic meaning generation.

Place, publisher, year, edition, pages
New York: ACM Press, 2011
National subject category
Computer and Information Sciences
Identifiers
urn:nbn:se:mdh:diva-13586 (URN), 10.1145/2070481.2070521 (DOI), 2-s2.0-83455176699 (Scopus ID), 978-1-4503-0641-6 (ISBN)
Conference
International Conference on Multimodal Interaction - ICMI 2011
Available from: 2011-12-15. Created: 2011-12-15. Last updated: 2018-01-12. Bibliographically reviewed.
Spampinato, G., Lidholm, J., Ahlberg, C., Ekstrand, F., Ekström, M. & Asplund, L. (2011). An Embedded Stereo Vision Module for 6D Pose Estimation and Mapping. In: Proceedings of the IEEE international conference on Intelligent Robots and Systems IROS2011. Paper presented at IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, CA, 25-30 September 2011 (pp. 1626-1631). New York: IEEE Press
2011 (English). In: Proceedings of the IEEE international conference on Intelligent Robots and Systems IROS2011, New York: IEEE Press, 2011, pp. 1626-1631. Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents an embedded vision system based on reconfigurable hardware (FPGA) and two CMOS cameras to perform stereo image processing and 3D mapping for autonomous navigation. We propose an EKF-based visual SLAM and sparse feature detectors to achieve 6D localization of the vehicle in non-flat scenarios. The system can operate regardless of the odometry information from the vehicle, since visual odometry is used. As a result, the final system is compact and easy to install and configure.

Place, publisher, year, edition, pages
New York: IEEE Press, 2011
National subject category
Engineering and Technology
Identifiers
urn:nbn:se:mdh:diva-13603 (URN), 10.1109/IROS.2011.6048395 (DOI), 000297477501148, 2-s2.0-84455195713 (Scopus ID), 978-1-61284-455-8 (ISBN)
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, CA, 25-30 September 2011
Available from: 2011-12-15. Created: 2011-12-15. Last updated: 2020-10-22. Bibliographically reviewed.
Ryberg, A., Lennartson, B., Christiansson, A.-K., Ericsson, M. & Asplund, L. (2011). Analysis and evaluation of a general camera model. Computer Vision and Image Understanding, 115(11), 1503-1515
2011 (English). In: Computer Vision and Image Understanding, ISSN 1077-3142, E-ISSN 1090-235X, Vol. 115, no. 11, pp. 1503-1515. Article in journal (Refereed). Published
Abstract [en]

A versatile General Camera Model, GCM, has been developed and is described in detail. The model is general in the sense that it can capture fisheye, conventional, and catadioptric cameras in a unified framework. The camera model includes efficient handling of non-central cameras as well as compensation for decentring distortion. A novel way of analysing radial distortion functions of camera models leads to a straightforward improvement of conventional models with respect to generality, accuracy, and simplicity. Different camera models are experimentally compared for two cameras with conventional and fisheye lenses, and the results show that the overall performance is favourable for the GCM. © 2011 Elsevier Inc. All rights reserved.
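For context, the conventional radial distortion model the paper compares against is typically a power series in the undistorted radius, while a common fisheye (equidistant) model maps the incidence angle linearly; both are special cases that a general model must subsume. These standard forms are shown below for illustration; the GCM's own parameterization is not reproduced here.

```latex
% Conventional polynomial radial distortion
% (r_u = undistorted radius, r_d = distorted radius, k_i = distortion coefficients):
r_d = r_u \left( 1 + k_1 r_u^2 + k_2 r_u^4 + \cdots \right)

% Equidistant fisheye projection
% (f = focal length, \theta = angle of incidence):
r = f \, \theta
```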

Keywords
Camera models, Fisheye, Catadioptric camera, Central camera, Non-central camera, Radial distortion, Decentring distortion, Stereo vision
National subject category
Engineering and Technology
Identifiers
urn:nbn:se:mdh:diva-15520 (URN), 10.1016/j.cviu.2011.06.009 (DOI), 000295424200004, 2-s2.0-80052517386 (Scopus ID)
Available from: 2012-10-24. Created: 2012-10-10. Last updated: 2017-12-07. Bibliographically reviewed.
Identifiers
ORCID iD: orcid.org/0000-0001-5141-7242
