MetaTraj: Meta-Learning for Cross-Scene Cross-Object Trajectory Prediction
Mälardalen University, School of Business, Society and Engineering, Future Energy Center; Univ Tokyo, Ctr Spatial Informat Sci, Kashiwa, Chiba 2778568, Japan.
Peking Univ, Sch Urban Planning & Design, Shenzhen 518055, Guangdong, Peoples R China.
Univ Tokyo, Ctr Spatial Informat Sci, Kashiwa, Chiba 2778568, Japan.
Univ Tokyo, Ctr Spatial Informat Sci, Kashiwa, Chiba 2778568, Japan.
2023 (English). In: IEEE Transactions on Intelligent Transportation Systems, ISSN 1524-9050, E-ISSN 1558-0016. Article in journal (Refereed). Published.
Abstract [en]

Long-term pedestrian trajectory prediction in crowds is highly valuable for safe driving and social robot navigation. Recent research on trajectory prediction usually focuses on modeling social interactions, physical constraints and the multi-modality of futures, without considering the generalization of prediction models to other scenes and objects, which is critical for real-world applications. In this paper, we propose a general framework that enables trajectory prediction models to transfer well across unseen scenes and objects by quickly learning the prior information of trajectories. Trajectory sequences are closely related to the circumstance setting (e.g. exits, roads, buildings, entries) and the objects (e.g. pedestrians, bicycles, vehicles). We argue that this trajectory information, which varies across scenes and objects, prevents a trained prediction model from performing well on unseen target data. To address this, we introduce MetaTraj, which contains carefully designed sub-tasks and meta-tasks to learn prior information of trajectories related to scenes and objects, which then contributes to accurate long-term future prediction. Both sub-tasks and meta-tasks are generated from trajectory sequences effortlessly and can be easily integrated into many prediction models. Extensive experiments over several trajectory prediction benchmarks demonstrate that MetaTraj can be applied to multiple prediction models and enables them to generalize well to unseen scenes and objects.
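The abstract describes MetaTraj only at a high level and the record contains no code. As a rough, illustrative sketch of the kind of meta-learning loop the abstract alludes to, the Python fragment below implements a generic MAML-style inner/outer update over per-scene trajectory tasks. The TrajectoryPredictor model, the task tuples and the sample_scene_tasks helper are assumptions for illustration only and do not reflect the paper's actual sub-task and meta-task design.

# Minimal MAML-style sketch for cross-scene trajectory prediction (illustrative only).
# Assumptions: a simple MLP predictor maps 8 observed (x, y) steps to 12 future steps;
# the per-scene (support, query) task tuples stand in for the paper's sub-/meta-tasks.
import torch
import torch.nn as nn


class TrajectoryPredictor(nn.Module):
    def __init__(self, obs_len=8, pred_len=12, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_len * 2, hidden),
            nn.ReLU(),
            nn.Linear(hidden, pred_len * 2),
        )

    def forward(self, obs, params=None):
        # Flatten observed (x, y) steps; optionally run with task-adapted fast weights.
        x = obs.flatten(1)
        if params is None:
            return self.net(x)
        x = torch.relu(torch.nn.functional.linear(x, params[0], params[1]))
        return torch.nn.functional.linear(x, params[2], params[3])


def maml_step(model, tasks, inner_lr=0.01, meta_opt=None):
    """One meta-update: adapt on each scene's support set, evaluate on its query set."""
    criterion = nn.MSELoss()
    meta_loss = 0.0
    for support_obs, support_fut, query_obs, query_fut in tasks:
        params = list(model.net.parameters())
        # Inner loop: one gradient step on the scene-specific support trajectories.
        loss = criterion(model(support_obs, params), support_fut.flatten(1))
        grads = torch.autograd.grad(loss, params, create_graph=True)
        fast = [p - inner_lr * g for p, g in zip(params, grads)]
        # Outer loss: how well the adapted model predicts held-out query trajectories.
        meta_loss = meta_loss + criterion(model(query_obs, fast), query_fut.flatten(1))
    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()
    return meta_loss.item()


# Example usage (shapes only):
# model = TrajectoryPredictor()
# opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# tasks = sample_scene_tasks(...)  # hypothetical helper, not part of the paper
# maml_step(model, tasks, meta_opt=opt)

In the paper's framework, the generic per-scene tasks above would be replaced by sub-tasks and meta-tasks generated directly from trajectory sequences, so that the adapted model captures scene- and object-specific priors before predicting long-term futures.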

Place, publisher, year, edition, pages
IEEE - Institute of Electrical and Electronics Engineers Inc., 2023.
Keywords [en]
Trajectory prediction, transfer learning, cross-scene, cross-object, meta learning
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:mdh:diva-64166
DOI: 10.1109/TITS.2023.3299112
ISI: 001051283900001
Scopus ID: 2-s2.0-85167800997
OAI: oai:DiVA.org:mdh-64166
DiVA, id: diva2:1794626
Available from: 2023-09-06. Created: 2023-09-06. Last updated: 2023-09-06. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Shi, Xiaodan

