Embedded high-resolution stereo-vision of high frame-rate and low latency through FPGA-acceleration
Ahlberg, Carl
Mälardalen University, School of Innovation, Design and Engineering, Embedded Systems (Robotik). ORCID iD: 0000-0003-4907-9816
2020 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Autonomous agents rely on information from the surrounding environment to act upon. Among the array of available sensors, the image sensor is perhaps the most versatile, allowing for detection of colour, size, shape, and depth. For the latter, in a dynamic environment and assuming no a priori knowledge, stereo vision is a commonly adopted technique. How to interpret images and extract relevant information is referred to as computer vision. Computer-vision algorithms, and stereo-vision algorithms in particular, are complex and computationally expensive even for a single stereo pair, and yield results that are, in terms of accuracy, qualitatively difficult to compare. Adding to the challenge are continuous image streams at high frame rates and the race towards ever-increasing image resolutions. In the context of autonomous agents, considerations regarding real-time requirements, embedded/resource-limited processing platforms, power consumption, and physical size further add up to an unarguably challenging problem.
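
For orientation, the depth that a calibrated, rectified stereo pair recovers follows directly from the disparity between matched pixels. A minimal Python sketch of that relationship follows; the focal length, baseline, and disparity values are illustrative and not taken from the thesis:

# Depth from disparity for a rectified stereo pair: Z = f * B / d.
# All numeric values below are illustrative, not from the thesis.
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth (metres) of a point seen with the given disparity (pixels)."""
    return focal_px * baseline_m / disparity_px

# Example: 1400 px focal length, 0.10 m baseline, 35 px disparity -> 4.0 m.
print(depth_from_disparity(1400.0, 0.10, 35.0))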

This thesis aims to achieve embedded high-resolution stereo vision of high frame rate and low latency by approaching the problem from two different angles, hardware and algorithmic development, in a symbiotic relationship. The first contributions of the thesis are the GIMME and GIMME2 embedded vision platforms, which offer hardware-accelerated processing through FPGAs, specifically targeting stereo vision, in contrast to the COTS systems available at the time. The second contribution, towards stereo-vision algorithms, is twofold. Firstly, the problem of scalability and the associated disparity range is addressed by proposing a segment-based stereo algorithm. In segment space, matching is independent of image scale, and the disparity range is similarly measured in terms of segments, meaning that relatively few hypotheses cover the entire range of the scene. Secondly, more in line with conventional stereo correspondence for FPGAs, the Census Transform (CT) has been identified as a recurring cost metric. This thesis proposes an optimisation of the CT through a Genetic Algorithm (GA): the Genetic Algorithm Census Transform (GACT). The GACT shows promising results on benchmark datasets, compared to established CT methods, while being resource efficient.
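
As background to the CT mentioned above, the following is a minimal NumPy sketch of the classical census transform and its Hamming-distance matching cost. It illustrates the standard technique only, not the GACT; the window size, the border handling (np.roll wraps around), and the slow popcount are simplifications chosen for brevity:

import numpy as np

def census_transform(img: np.ndarray, win: int = 5) -> np.ndarray:
    """Classical census transform: each pixel becomes a bit string that records
    whether each neighbour in a win x win window is darker than the centre."""
    r = win // 2
    out = np.zeros(img.shape, dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            neighbour = np.roll(np.roll(img, dy, axis=0), dx, axis=1)  # wraps at borders
            out = (out << np.uint64(1)) | (neighbour < img).astype(np.uint64)
    return out

def hamming_cost(census_left: np.ndarray, census_right: np.ndarray, d: int) -> np.ndarray:
    """Matching cost at disparity d: Hamming distance between census strings."""
    aligned = np.roll(census_right, d, axis=1)  # right pixel x-d lines up with left pixel x
    diff = census_left ^ aligned
    return np.vectorize(lambda v: bin(int(v)).count("1"))(diff)  # simple, unoptimised popcount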

Abstract [sv]

Autonomous agents depend on information from the surrounding environment in order to act. Among the available sensors, the image sensor is probably the most versatile, since it enables distinction of colour, size, shape, and depth. For the latter, in a dynamic environment without requiring prior knowledge, stereo vision is a commonly applied technique. Interpreting image content and extracting relevant information is referred to as computer vision. Computer vision, and stereo algorithms in particular, are complex and computationally costly even for a single image pair, and yield results that, in terms of accuracy, are qualitatively difficult to compare. The problem is further compounded by a continuous stream of images of ever higher frame rate and resolution. Autonomous agents additionally require considerations of real-time requirements, embedded/resource-constrained processing platforms, power consumption, and physical size, which adds up to an unarguably challenging problem.

This thesis aims to achieve high-resolution stereo vision with high frame rate and low latency on embedded systems. By approaching the problem from two different angles, hardware and algorithms, a symbiotic relationship between them can be ensured. The first contributions of the thesis are the GIMME and GIMME2 embedded vision platforms, which offer FPGA-based hardware acceleration with a particular focus on stereo vision, in contrast to the commercially available systems of the time. The second contribution, concerning stereo algorithms, is twofold. First, the scalability problem, coupled to the disparity range, is addressed by proposing a segment-based stereo algorithm. In segment space, matching is independent of image resolution, and the disparity range is defined in terms of segments, which implies that relatively few hypotheses are needed to cover the entire scene. In the second algorithm-level contribution, more in line with conventional stereo algorithms for FPGAs, the Census Transform (CT) has been identified as a recurring similarity cost metric. An optimisation of the CT is proposed by applying a genetic algorithm (GA): the Genetic Algorithm Census Transform (GACT). GACT shows promising results on benchmark datasets compared with established CT methods, while remaining resource efficient.

Place, publisher, year, edition, pages
Västerås: Mälardalen University, 2020.
Series
Mälardalen University Press Dissertations, ISSN 1651-4238 ; 304
Keywords [en]
Computer vision, stereo vision, FPGA, embedded systems
National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:mdh:diva-46240. ISBN: 978-91-7485-453-4 (print). OAI: oai:DiVA.org:mdh-46240. DiVA, id: diva2:1375116
Public defence
2020-01-28, Kappa, Mälardalens högskola, Västerås, 09:15 (English)
Opponent
Supervisors
Available from: 2019-12-04. Created: 2019-12-04. Last updated: 2020-01-10. Bibliographically approved.
List of papers
1. GIMME - A General Image Multiview Manipulation Engine
2011 (English). In: Proceedings of the International Conference on ReConFigurable Computing and FPGAs (ReConFig 2011), Los Alamitos, Calif.: IEEE Computer Society, 2011. Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents GIMME (General Image Multiview Manipulation Engine), a highly flexible, reconfigurable, stand-alone mobile two-camera vision platform with stereo-vision capability. GIMME relies on reconfigurable hardware (FPGA) to perform application-specific low- to medium-level image processing at video rate. The Qseven extension provides additional processing power. Thanks to its compact design, low power consumption, and standardized interfaces (power and communication), GIMME is an ideal vision platform for autonomous and mobile robot applications.
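
To make "low- to medium-level image processing at video rate" concrete: the appeal of FPGA fabric is that windowed operators can run in a streaming fashion, touching each pixel once and buffering only a few image lines instead of whole frames. The Python model below illustrates that general idea with a 3x3 maximum filter; it is a software sketch only, not GIMME's actual firmware, and edge handling is deliberately crude:

def stream_3x3_max(pixels, width):
    """Software model of a streaming 3x3 operator (here a max filter).
    Only two line buffers and a 3x3 register window are kept, mirroring how an
    FPGA processes a pixel stream without a full frame store."""
    line0 = [0] * width        # line from two rows above
    line1 = [0] * width        # previous line
    window = [[0] * 3 for _ in range(3)]
    out = []
    for x, p in enumerate(pixels):         # pixels arrive in row-major order
        col = x % width
        for row in window:                 # shift the 3x3 window one column left
            row.pop(0)
        window[0].append(line0[col])
        window[1].append(line1[col])
        window[2].append(p)
        line0[col] = line1[col]            # update line buffers for the next row
        line1[col] = p
        out.append(max(max(r) for r in window))
    return out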

Place, publisher, year, edition, pages
Los Alamitos, Calif.: IEEE Computer Society, 2011
Identifiers
urn:nbn:se:mdh:diva-13576 (URN). 10.1109/ReConFig.2011.44 (DOI). 2-s2.0-84856884110 (Scopus ID). 978-076954551-6 (ISBN)
Conference
2011 International Conference on Reconfigurable Computing and FPGAs, ReConFig 2011; Cancun, Quintana Roo; 30 November 2011 through 2 December 2011
Available from: 2011-12-15. Created: 2011-12-15. Last updated: 2019-12-04. Bibliographically approved.
2. Towards an Embedded Real-Time High Resolution Vision System
2014 (English). In: Advances in Visual Computing (ISVC 2014), Part II / [ed] Bebis, G., Boyle, R., Parvin, B., Koracin, D., McMahan, R., Jerald, J., Zhang, H., Drucker, S. M., Kambhamettu, C., ElChoubassi, M., Deng, Z., Carlson, M., Springer-Verlag Berlin, 2014, p. 541-550. Conference paper, Published paper (Refereed)
Abstract [en]

This paper proposes an approach to image processing for high-performance vision systems. The focus is on achieving a scalable method for real-time disparity estimation which can support high-resolution images and large disparity ranges. The presented implementation is a non-local matching approach building on the innate qualities of the processing platform which, through utilization of a heterogeneous system, combines low-complexity approaches into performing a high-complexity task. The complementary platform composition allows the FPGA to reduce the amount of data sent to the CPU while at the same time promoting the available informational content, thus both reducing the workload and raising the level of abstraction. Together with the low resource utilization, this allows the approach to support advanced functionality and to qualify as part of unified image processing in an embedded system.
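
The division of labour described above can be pictured as a two-stage pipeline: an FPGA-style stage that compresses each scanline into a handful of descriptors, and a CPU-style stage that matches those descriptors instead of raw pixels. The sketch below is purely illustrative of that idea and is not the algorithm of the paper; the segmentation threshold and the greedy matcher are invented for the example:

def segment_scanline(row, threshold=8):
    """FPGA-style stage: split one scanline into segments at intensity steps,
    emitting compact (start, end, mean_intensity) descriptors."""
    segments, start = [], 0
    for x in range(1, len(row)):
        if abs(int(row[x]) - int(row[x - 1])) > threshold:
            seg = row[start:x]
            segments.append((start, x, sum(seg) / len(seg)))
            start = x
    seg = row[start:]
    segments.append((start, len(row), sum(seg) / len(seg)))
    return segments

def match_segments(left_segs, right_segs):
    """CPU-style stage: pair segments greedily by intensity similarity and take
    the offset between their start positions as a coarse disparity."""
    matches = []
    for start, end, mean in left_segs:
        best = min(right_segs, key=lambda r: abs(r[2] - mean))
        matches.append((start, end, start - best[0]))
    return matches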

Place, publisher, year, edition, pages
Springer-Verlag Berlin, 2014
Series
Lecture Notes in Computer Science, ISSN 0302-9743 ; 8888
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:mdh:diva-38383 (URN). 000354700300052. 2-s2.0-84916625525 (Scopus ID). 978-3-319-14364-4 (ISBN)
Conference
10th International Symposium on Visual Computing (ISVC), DEC 08-10, 2014, Las Vegas, NV
Available from: 2018-02-12. Created: 2018-02-12. Last updated: 2019-12-04. Bibliographically approved.
3. GIMME2 - An embedded system for stereo vision and processing of megapixel images with FPGA-acceleration
2015 (English). In: 2015 International Conference on ReConFigurable Computing and FPGAs, ReConFig 2015, 2015. Conference paper, Published paper (Refereed)
Abstract [en]

This paper presents GIMME2, an embedded stereo-vision system designed to be compact, power efficient, cost effective, and high performing in the area of image processing. GIMME2 features two 10-megapixel image sensors and a Xilinx Zynq, which combines FPGA fabric with a dual-core ARM CPU on a single chip. This enables GIMME2 to process video-rate megapixel image streams in real time, exploiting the benefits of heterogeneous processing.
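
A back-of-the-envelope calculation shows why heterogeneous processing matters at this resolution. The frame rate below is an assumed, illustrative figure; the paper itself only states video-rate megapixel streams:

# Approximate pixel rate the platform must sustain; 15 fps is assumed for illustration only.
sensors = 2
megapixels_per_frame = 10
frames_per_second = 15
pixel_rate = sensors * megapixels_per_frame * 1e6 * frames_per_second
print(f"{pixel_rate / 1e6:.0f} Mpixel/s")   # 300 Mpixel/s, before any per-pixel stereo cost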

Keywords
Cost effectiveness, Field programmable gate arrays (FPGA), Image processing, Pixels, Reconfigurable architectures, Reconfigurable hardware, Stereo vision, Video signal processing, Cost effective, FPGA fabric, Heterogeneous processing, Image streams, Power efficient, Process video, Single chips, Stereo-vision system, Stereo image processing
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:mdh:diva-31587 (URN). 10.1109/ReConFig.2015.7393318 (DOI). 000380437700038. 2-s2.0-84964335178 (Scopus ID). 9781467394062 (ISBN)
Conference
International Conference on ReConFigurable Computing and FPGAs, ReConFig 2015, 7 December 2015 through 9 December 2015
Available from: 2016-05-13. Created: 2016-05-13. Last updated: 2019-12-04. Bibliographically approved.
4. Unbounded Sparse Census Transform using Genetic Algorithm
2019 (English). In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), IEEE, 2019, p. 1616-1625. Conference paper, Published paper (Refereed)
Abstract [en]

The Census Transform (CT) is a well-proven method for stereo vision that provides robust matching, with respect to object boundaries, outliers, and radiometric distortion, at a low computational cost. Recent CT methods propose patterns for pixel comparison and sparsity, to increase matching accuracy and reduce resource requirements. However, these methods are bounded with respect to symmetry and/or edge length. In this paper, a genetic algorithm (GA) is applied to find a new and powerful CT method. The proposed method, the Genetic Algorithm Census Transform (GACT), is compared with established CT methods, showing better results on benchmark datasets. Additional experiments have been performed to study the search space and the correlation between training and evaluation data.
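
To make the idea of searching for comparison patterns concrete, the sketch below shows a generic genetic algorithm over sparse census patterns: a genome is a list of pixel-offset pairs to compare, and fitness is a user-supplied stereo-accuracy score. The encoding, operators, and parameters are illustrative and are not those used by GACT:

import random

WIN, GENES = 7, 24   # window size and number of comparisons; illustrative values
OFFSETS = [(dy, dx) for dy in range(-(WIN // 2), WIN // 2 + 1)
                    for dx in range(-(WIN // 2), WIN // 2 + 1) if (dy, dx) != (0, 0)]

def random_genome():
    """A genome is a list of (offset_a, offset_b) pixel pairs to compare."""
    return [(random.choice(OFFSETS), random.choice(OFFSETS)) for _ in range(GENES)]

def crossover(a, b):
    cut = random.randrange(1, GENES)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):
    return [(random.choice(OFFSETS), random.choice(OFFSETS)) if random.random() < rate else pair
            for pair in genome]

def evolve(fitness, pop_size=30, generations=50):
    """fitness(genome) -> float is supplied by the caller, e.g. the matching
    accuracy of the resulting census pattern on a training stereo pair."""
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)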

Place, publisher, year, edition, pages
IEEE, 2019
Series
IEEE Winter Conference on Applications of Computer Vision, ISSN 2472-6737
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:mdh:diva-44332 (URN). 10.1109/WACV.2019.00177 (DOI). 000469423400170. 2-s2.0-85063571752 (Scopus ID). 978-1-7281-1975-5 (ISBN)
Conference
19th IEEE Winter Conference on Applications of Computer Vision (WACV), JAN 07-11, 2019, Waikoloa Village, HI
Available from: 2019-06-20. Created: 2019-06-20. Last updated: 2019-12-18. Bibliographically approved.
5. The Genetic Algorithm Census Transform
(English). Manuscript (preprint) (Other academic)
National Category
Embedded Systems
Identifiers
urn:nbn:se:mdh:diva-46244 (URN)
Available from: 2019-12-04. Created: 2019-12-04. Last updated: 2019-12-04. Bibliographically approved.

Open Access in DiVA

fulltext (FULLTEXT03.pdf, 17119 kB, application/pdf)
Checksum SHA-512: 56a150295546cf621c691e48d5898a365c39adae05a07f05cd6e971da1d2d98253b328f5531e99602792cb25b54a7c4daf69eca778172af81ec4c6c89abe2404

Authority records BETA

Ahlberg, Carl

