Publications (10 of 17)
Ahlberg, C., Leon, M., Ekstrand, F. & Ekström, M. (2019). Unbounded Sparse Census Transform using Genetic Algorithm. In: 2019 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV): . Paper presented at 19th IEEE Winter Conference on Applications of Computer Vision (WACV), JAN 07-11, 2019, Waikoloa Village, HI (pp. 1616-1625). IEEE
Unbounded Sparse Census Transform using Genetic Algorithm
2019 (English) In: 2019 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), IEEE, 2019, pp. 1616-1625. Conference paper, Published paper (Refereed)
Abstract [en]

The Census Transform (CT) is a well-proven method for stereo vision that provides robust matching, with respect to object boundaries, outliers and radiometric distortion, at a low computational cost. Recent CT methods propose patterns for pixel comparison and sparsity to increase matching accuracy and reduce resource requirements. However, these methods are bounded with respect to symmetry and/or edge length. In this paper, a genetic algorithm (GA) is applied to find a new and powerful CT method. The proposed method, Genetic Algorithm Census Transform (GACT), is compared with established CT methods, showing better results on benchmarking datasets. Additional experiments have been performed to study the search space and the correlation between training and evaluation data.
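For context, the sketch below is a minimal NumPy implementation of the classic dense census transform and the Hamming-distance matching cost that sparse CT variants, including the GACT proposed here, build on. The window size, function names and popcount trick are illustrative choices and do not reproduce the paper's GA-selected comparison pattern.

```python
import numpy as np

def census_transform(img, win=5):
    """Dense census transform: each pixel is described by a bit string that
    records whether each neighbour in a win x win window is darker than the
    centre pixel. Illustrative only; GACT replaces the fixed window with a
    GA-selected sparse, unbounded comparison pattern."""
    h, w = img.shape
    r = win // 2
    padded = np.pad(img, r, mode="edge")
    desc = np.zeros((h, w), dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            neighbour = padded[r + dy:r + dy + h, r + dx:r + dx + w]
            desc = (desc << np.uint64(1)) | (neighbour < img).astype(np.uint64)
    return desc

def hamming_cost(census_left, census_right, d):
    """Matching cost at disparity d: Hamming distance between left descriptors
    and right descriptors shifted d pixels to the right."""
    shifted = np.zeros_like(census_right)
    shifted[:, d:] = census_right[:, :census_right.shape[1] - d]
    xor = census_left ^ shifted
    bits = np.unpackbits(xor.view(np.uint8), axis=-1)  # per-pixel popcount
    return bits.reshape(xor.shape[0], xor.shape[1], 64).sum(axis=-1)
```

On a rectified grayscale pair, the costs for each candidate disparity would be compared pixel-wise and the lowest-cost disparity kept (winner-takes-all).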

Place, publisher, year, edition, pages
IEEE, 2019
Series
IEEE Winter Conference on Applications of Computer Vision, ISSN 2472-6737
HSV category
Identifiers
urn:nbn:se:mdh:diva-44332 (URN), 10.1109/WACV.2019.00177 (DOI), 000469423400170 (), 2-s2.0-85063571752 (Scopus ID), 978-1-7281-1975-5 (ISBN)
Conference
19th IEEE Winter Conference on Applications of Computer Vision (WACV), JAN 07-11, 2019, Waikoloa Village, HI
Available from: 2019-06-20 Created: 2019-06-20 Last updated: 2019-12-18 Bibliographically approved
Forsberg, H., Lundqvist, K., Ekstrand, F. & Otterskog, M. (2017). Early Results and Ideas for Enhancements of the Master of Engineering Programme in Dependable Aerospace Systems. In: The 6th Development Conference for Swedish Engineering USIU2017: . Paper presented at The 6th Development Conference for Swedish Engineering USIU2017, 22 Nov 2017, Gothenburg, Sweden.
Early Results and Ideas for Enhancements of the Master of Engineering Programme in Dependable Aerospace Systems
2017 (English) In: The 6th Development Conference for Swedish Engineering USIU2017, 2017. Conference paper, Published paper (Refereed)
Abstract [en]

The five-year Master of Engineering Programme in Dependable Aerospace Systems, with dependability as its silver thread, started at Mälardalen University (MDH) in 2015. This paper presents selected ideas behind the creation of the programme, together with some preliminary analysis of current results and suggested enhancements for the programme’s fourth and fifth years.

Keywords
Dependability, Aerospace Systems, Unified Engineering, Undergraduate Research Opportunities
HSV category
Identifiers
urn:nbn:se:mdh:diva-38620 (URN)
Conference
The 6th Development Conference for Swedish Engineering USIU2017, 22 Nov 2017, Gothenburg, Sweden
Projects
AVANS - civilingenjörsprogrammet i tillförlitliga flyg- och rymdsystem
Available from: 2018-03-06 Created: 2018-03-06 Last updated: 2018-03-06 Bibliographically approved
Ahlberg, C., Ekstrand, F., Ekström, M., Spampinato, G. & Asplund, L. (2015). GIMME2 - An embedded system for stereo vision and processing of megapixel images with FPGA-acceleration. In: 2015 International Conference on ReConFigurable Computing and FPGAs, ReConFig 2015: . Paper presented at International Conference on ReConFigurable Computing and FPGAs, ReConFig 2015, 7 December 2015 through 9 December 2015.
GIMME2 - An embedded system for stereo vision and processing of megapixel images with FPGA-acceleration
2015 (English) In: 2015 International Conference on ReConFigurable Computing and FPGAs, ReConFig 2015, 2015. Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents GIMME2, an embedded stereo vision system designed to be compact, power-efficient, cost-effective, and high-performing in image processing. GIMME2 features two 10-megapixel image sensors and a Xilinx Zynq, which combines FPGA fabric with a dual-core ARM CPU on a single chip. This enables GIMME2 to process video-rate megapixel image streams in real time, exploiting the benefits of heterogeneous processing.

Keywords
Cost effectiveness, Field programmable gate arrays (FPGA), Image processing, Pixels, Reconfigurable architectures, Reconfigurable hardware, Stereo vision, Video signal processing, Cost effective, FPGA fabric, Heterogeneous processing, Image streams, Power efficient, Process video, Single chips, Stereo-vision system, Stereo image processing
HSV category
Identifiers
urn:nbn:se:mdh:diva-31587 (URN), 10.1109/ReConFig.2015.7393318 (DOI), 000380437700038 (), 2-s2.0-84964335178 (Scopus ID), 9781467394062 (ISBN)
Conference
International Conference on ReConFigurable Computing and FPGAs, ReConFig 2015, 7 December 2015 through 9 December 2015
Available from: 2016-05-13 Created: 2016-05-13 Last updated: 2019-12-04 Bibliographically approved
Ekstrand, F., Ahlberg, C., Ekström, M. & Spampinato, G. (2015). High-speed segmentation-driven high-resolution matching. In: Proceedings of SPIE - The International Society for Optical Engineering, vol. 9445: . Paper presented at 7th International Conference on Machine Vision, ICMV 2014, 19 November 2014 through 21 November 2014 (Article number 94451Y). Vol. 9445
High-speed segmentation-driven high-resolution matching
2015 (English) In: Proceedings of SPIE - The International Society for Optical Engineering, Vol. 9445, 2015, Article number 94451Y. Conference paper, Published paper (Refereed)
Abstract [en]

This paper proposes a segmentation-based approach for matching high-resolution stereo images in real time. The approach employs direct region matching in a raster-scan fashion influenced by scanline approaches, but with pixel decoupling. To enable real-time performance, it is implemented as a heterogeneous system of an FPGA and a sequential processor. Additionally, the approach is designed for low resource usage in order to qualify as part of unified image processing in an embedded system.

Keywords
FPGA, high-resolution, matching, real-time, stereo, vision, Computer vision, Field programmable gate arrays (FPGA), Image processing, Image segmentation, Stereo vision, Heterogeneous systems, High resolution, High resolution stereo, Real time, Real time performance, Sequential processors, Stereo image processing
HSV category
Identifiers
urn:nbn:se:mdh:diva-27712 (URN), 10.1117/12.2181365 (DOI), 000350961700070 (), 2-s2.0-84924353756 (Scopus ID), 9781628415605 (ISBN)
Conference
7th International Conference on Machine Vision, ICMV 2014, 19 November 2014 through 21 November 2014
Available from: 2015-03-19 Created: 2015-03-19 Last updated: 2015-04-16 Bibliographically approved
Ekstrand, F., Ahlberg, C., Ekström, M. & Spampinato, G. (2014). Towards an Embedded Real-Time High Resolution Vision System. In: Bebis, G., Boyle, R., Parvin, B., Koracin, D., McMahan, R., Jerald, J., Zhang, H., Drucker, S. M., Kambhamettu, C., ElChoubassi, M., Deng, Z., Carlson, M. (Ed.), ADVANCES IN VISUAL COMPUTING (ISVC 2014), PT II: . Paper presented at 10th International Symposium on Visual Computing (ISVC), DEC 08-10, 2014, Las Vegas, NV (pp. 541-550). SPRINGER-VERLAG BERLIN
Towards an Embedded Real-Time High Resolution Vision System
2014 (English) In: ADVANCES IN VISUAL COMPUTING (ISVC 2014), PT II / [ed] Bebis, G., Boyle, R., Parvin, B., Koracin, D., McMahan, R., Jerald, J., Zhang, H., Drucker, S. M., Kambhamettu, C., ElChoubassi, M., Deng, Z., Carlson, M., SPRINGER-VERLAG BERLIN, 2014, pp. 541-550. Conference paper, Published paper (Refereed)
Abstract [en]

This paper proposes an approach to image processing for high-performance vision systems. The focus is on achieving a scalable method for real-time disparity estimation that can support high-resolution images and large disparity ranges. The presented implementation is a non-local matching approach that builds on the innate qualities of the processing platform, which, by utilizing a heterogeneous system, combines low-complexity approaches into performing a high-complexity task. The complementary platform composition allows the FPGA to reduce the amount of data sent to the CPU while at the same time promoting the available informational content, thus both reducing the workload and raising the level of abstraction. Together with the low resource utilization, this allows the approach to be designed to support advanced functionality, in order to qualify as part of unified image processing in an embedded system.

Place, publisher, year, edition, pages
SPRINGER-VERLAG BERLIN, 2014
Series
Lecture Notes in Computer Science, ISSN 0302-9743 ; 8888
HSV category
Identifiers
urn:nbn:se:mdh:diva-38383 (URN), 000354700300052 (), 2-s2.0-84916625525 (Scopus ID), 978-3-319-14364-4 (ISBN)
Conference
10th International Symposium on Visual Computing (ISVC), DEC 08-10, 2014, Las Vegas, NV
Available from: 2018-02-12 Created: 2018-02-12 Last updated: 2019-12-04 Bibliographically approved
Spampinato, G., Lidholm, J., Ahlberg, C., Ekstrand, F., Ekström, M. & Asplund, L. (2013). An embedded stereo vision module for industrial vehicles automation. In: Proceedings of the IEEE International Conference on Industrial Technology: . Paper presented at 2013 IEEE International Conference on Industrial Technology, ICIT 2013; Cape Town; South Africa; 25 February 2013 through 28 February 2013 (pp. 52-57). IEEE
An embedded stereo vision module for industrial vehicles automation
2013 (English) In: Proceedings of the IEEE International Conference on Industrial Technology, IEEE, 2013, pp. 52-57. Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents an embedded vision system based on reconfigurable hardware (FPGA) to perform stereo image processing and 3D mapping of sparse features for autonomous navigation and obstacle detection in industrial settings. We propose an EKF-based visual SLAM to achieve 6D localization of the vehicle even in non-flat scenarios. The system uses vision as its only source of information; as a consequence, it operates independently of the vehicle's odometry, since visual odometry is used instead. © 2013 IEEE.
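The abstract does not spell out the filter; as a generic point of reference, the sketch below shows the textbook EKF predict/update pair that an EKF-based visual SLAM iterates, where the state vector would hold the 6D vehicle pose (and, in SLAM, the mapped landmarks) and the measurement model would project landmarks into the cameras. Function names, the dense covariance handling and the noise matrices are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ekf_predict(x, P, f, F_jac, Q):
    """Prediction: push the state estimate x and covariance P through the
    (possibly non-linear) motion model f, with Jacobian F_jac and process
    noise Q."""
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def ekf_update(x, P, z, h, H_jac, R):
    """Update: correct the prediction with measurement z (e.g. image
    coordinates of tracked features), given measurement model h, its
    Jacobian H_jac, and measurement noise R."""
    H = H_jac(x)
    y = z - h(x)                    # innovation
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```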

Place, publisher, year, edition, pages
IEEE, 2013
Keywords
3D vision, Embedded, portable, SLAM
HSV category
Identifiers
urn:nbn:se:mdh:diva-19056 (URN), 10.1109/ICIT.2013.6505647 (DOI), 000322785200006 (), 2-s2.0-84877621168 (Scopus ID), 9781467345699 (ISBN)
Conference
2013 IEEE International Conference on Industrial Technology, ICIT 2013; Cape Town; South Africa; 25 February 2013 through 28 February 2013
Available from: 2013-05-24 Created: 2013-05-24 Last updated: 2018-08-07 Bibliographically approved
Ahlberg, C., Asplund, L., Campeanu, G., Ciccozzi, F., Ekstrand, F., Ekström, M., . . . Segerblad, E. (2013). The Black Pearl: An Autonomous Underwater Vehicle.
The Black Pearl: An Autonomous Underwater Vehicle
2013 (English) Report (Other academic)
Abstract [en]

The Black Pearl is a custom-made autonomous underwater vehicle developed at Mälardalen University, Sweden. It is built in a modular fashion, including its mechanics, electronics and software. After successfully participating in the RoboSub competition in 2012 and winning the prize for best craftsmanship, this year we made minor improvements to the hardware, while the focus of the robot's evolution shifted to the software. In this paper we give an overview of how the Black Pearl is built, from both the hardware and the software point of view.

Keywords
underwater robot, embedded systems
HSV category
Identifiers
urn:nbn:se:mdh:diva-25159 (URN)
Projects
RALF3 - Software for Embedded High Performance Architectures
Note

Published as part of the AUVSI Foundation and ONR's 16th International RoboSub Competition, San Diego, CA

Available from: 2014-06-05 Created: 2014-06-05 Last updated: 2014-09-26 Bibliographically approved
Ekstrand, F., Ahlberg, C., Ekström, M., Asplund, L. & Spampinato, G. (2012). Utilization and Performance Considerations in Resource Optimized Stereo Matching for Real-Time Reconfigurable Hardware. In: VISAPP 2012 - Proceedings of the International Conference on Computer Vision Theory and Application, vol. 2: . Paper presented at International Conference on Computer Vision Theory and Applications, VISAPP 2012; Rome; 24 February 2012 (pp. 415-418).
Utilization and Performance Considerations in Resource Optimized Stereo Matching for Real-Time Reconfigurable Hardware
2012 (English) In: VISAPP 2012 - Proceedings of the International Conference on Computer Vision Theory and Application, vol. 2, 2012, pp. 415-418. Conference paper, Published paper (Other academic)
Abstract [en]

This paper presents a set of approaches for increasing the accuracy of basic area-based stereo matching methods. It targets real-time FPGA systems for dense disparity map estimation. The methods focus on low resource usage and maximized improvement per cost unit, to enable the inclusion of an autonomous system in an FPGA. The approach performs on par with other area-matching implementations, but at substantially lower resource usage. Additionally, the solution removes both the requirement for external memory on reconfigurable hardware and the limitation in image size that accompanies standard methods. As a fully pipelined, complete on-chip solution, it is highly suitable for real-time stereo-vision systems, with a frame rate above 100 fps for megapixel images.
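As a point of reference for "basic area-based stereo matching", the sketch below is a plain software model of sum-of-absolute-differences (SAD) block matching with winner-takes-all disparity selection. Window size, disparity range and function names are illustrative; the paper's contribution lies in the resource-optimized, fully pipelined FPGA realization of this kind of matcher, which the sketch does not attempt to model.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sad_disparity(left, right, max_disp=64, win=7):
    """Area-based (block) matching: for every pixel, pick the disparity with
    the lowest win x win SAD cost against the shifted right image
    (winner-takes-all). Software model of the matching core only."""
    h, w = left.shape
    left_f = left.astype(np.float32)
    right_f = right.astype(np.float32)
    cost = np.empty((max_disp, h, w), dtype=np.float32)
    for d in range(max_disp):
        diff = np.full((h, w), 255.0, dtype=np.float32)  # penalise columns with no match
        diff[:, d:] = np.abs(left_f[:, d:] - right_f[:, :w - d])
        # the window mean is proportional to the SAD sum over the window
        cost[d] = uniform_filter(diff, size=win)
    return np.argmin(cost, axis=0).astype(np.uint8)
```

Called on a rectified grayscale pair, sad_disparity(left, right) returns a dense disparity map.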

HSV category
Research subject
Electronics
Identifiers
urn:nbn:se:mdh:diva-13007 (URN), 9789898565044 (ISBN)
Conference
International Conference on Computer Vision Theory and Applications, VISAPP 2012; Rome; 24 February 2012
Available from: 2011-09-14 Created: 2011-09-14 Last updated: 2015-01-09 Bibliographically approved
Spampinato, G., Lidholm, J., Ahlberg, C., Ekstrand, F., Ekström, M. & Asplund, L. (2011). An Embedded Stereo Vision Module for 6D Pose Estimation and Mapping. In: Proceedings of the IEEE international conference on Intelligent Robots and Systems IROS2011: . Paper presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, CA, SEP 25-30, 2011 (pp. 1626-1631). New York: IEEE Press
An Embedded Stereo Vision Module for 6D Pose Estimation and Mapping
2011 (English) In: Proceedings of the IEEE international conference on Intelligent Robots and Systems IROS2011, New York: IEEE Press, 2011, pp. 1626-1631. Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents an embedded vision system based on reconfigurable hardware (FPGA) and two CMOS cameras to perform stereo image processing and 3D mapping for autonomous navigation. We propose an EKF-based visual SLAM combined with sparse feature detectors to achieve 6D localization of the vehicle in non-flat scenarios. The system can operate independently of the vehicle's odometry information, since visual odometry is used. As a result, the final system is compact and easy to install and configure.

Place, publisher, year, edition, pages
New York: IEEE Press, 2011
HSV category
Identifiers
urn:nbn:se:mdh:diva-13603 (URN), 10.1109/IROS.2011.6048395 (DOI), 000297477501148 (), 2-s2.0-84455195713 (Scopus ID), 978-1-61284-455-8 (ISBN)
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, CA, SEP 25-30, 2011
Available from: 2011-12-15 Created: 2011-12-15 Last updated: 2016-06-02 Bibliographically approved
Ahlberg, C., Lidholm, J., Ekstrand, F., Spampinato, G., Ekström, M. & Asplund, L. (2011). GIMME - A General Image Multiview Manipulation Engine. In: Proceedings of the International Conference on ReConFigurable Computing and FPGAs (ReConFig 2011). Paper presented at the 2011 International Conference on Reconfigurable Computing and FPGAs, ReConFig 2011; Cancun, Quintana Roo; 30 November 2011 through 2 December 2011. Los Alamitos, Calif: IEEE Computer Society
GIMME - A General Image Multiview Manipulation Engine
2011 (English) In: Proceedings of the International Conference on ReConFigurable Computing and FPGAs (ReConFig 2011), Los Alamitos, Calif: IEEE Computer Society, 2011. Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents GIMME (General Image Multiview Manipulation Engine), a highly flexible, reconfigurable, stand-alone mobile two-camera vision platform with stereo-vision capability. GIMME relies on reconfigurable hardware (FPGA) to perform application-specific low- to medium-level image processing at video rate, and its Qseven extension provides additional processing power. Thanks to its compact design, low power consumption and standardized interfaces (power and communication), GIMME is an ideal vision platform for autonomous and mobile robot applications.

Place, publisher, year, edition, pages
Los Alamitos, Calif: IEEE Computer Society, 2011
Identifiers
urn:nbn:se:mdh:diva-13576 (URN), 10.1109/ReConFig.2011.44 (DOI), 2-s2.0-84856884110 (Scopus ID), 978-076954551-6 (ISBN)
Conference
2011 International Conference on Reconfigurable Computing and FPGAs, ReConFig 2011; Cancun, Quintana Roo; 30 November 2011 through 2 December 2011
Available from: 2011-12-15 Created: 2011-12-15 Last updated: 2019-12-04 Bibliographically approved
Organisations
Identifiers
ORCID iD: orcid.org/0000-0001-7934-6917