mdh.se Publications
Publications (3 of 3)
Akan, B., Ameri E., A. & Curuklu, B. (2014). Scheduling for Multiple Type Objects Using POPStar Planner. In: Proceedings of the 19th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA'14), Barcelona, Spain, September 2014. Paper presented at the 19th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA'14), Barcelona, Spain, 16-19 September 2014 (Article number 7005148).
2014 (English). In: Proceedings of the 19th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA'14), Barcelona, Spain, September 2014, Article number 7005148. Conference paper, Published paper (Refereed).
Abstract [en]

In this paper, scheduling of robot cells that produce multiple object types in low volumes is considered. The challenge is to maximize the number of objects produced in a given time window as well as to adapt the schedule to changing object types. The proposed algorithm, POPStar, is based on a partial-order planner guided by a best-first search algorithm and landmarks. The best-first search uses heuristics to help the planner create complete plans while minimizing the makespan. The algorithm takes as input landmarks extracted from the user's instructions, which are given in structured English. Using different topologies for the landmark graphs, we show that it is possible to create schedules for changing object types, which will be processed in different stages in the robot cell. Results show that the POPStar algorithm can create and adapt schedules for robot cells with changing product types in low-volume production.
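
The abstract does not include the planner's implementation; the sketch below is only a minimal Python illustration of the general idea of a landmark-guided best-first search that minimizes makespan. Every name in it (Node, best_first_popstar, the toy action durations) is an assumption made for this example, and the sequential plan representation is a simplification of the partial-order plans POPStar actually builds.

```python
# Illustrative sketch only -- not the published POPStar code. It shows a
# best-first search whose priority combines the makespan accumulated so far
# with the number of landmarks still unmet; the real planner works on
# partial-order plans rather than the action sequences used here.
import heapq
from dataclasses import dataclass, field


@dataclass(order=True)
class Node:
    priority: float                               # makespan estimate + unmet landmarks
    plan: tuple = field(compare=False)            # actions chosen so far
    remaining: frozenset = field(compare=False)   # landmarks not yet achieved
    makespan: float = field(compare=False)


def best_first_popstar(landmarks, duration, successors):
    """Expand the cheapest partial plan first until all landmarks are met."""
    start = Node(float(len(landmarks)), (), frozenset(landmarks), 0.0)
    frontier = [start]
    while frontier:
        node = heapq.heappop(frontier)
        if not node.remaining:                    # all landmarks achieved: complete plan
            return node.plan, node.makespan
        for action in successors(node.remaining):
            rem = node.remaining - {action}
            ms = node.makespan + duration[action]
            heapq.heappush(frontier, Node(ms + len(rem), node.plan + (action,), rem, ms))
    return None, float("inf")


# Toy usage: three hypothetical robot-cell steps as landmarks with fixed durations.
durations = {"pick": 2.0, "weld": 5.0, "place": 1.5}
plan, makespan = best_first_popstar(durations.keys(), durations, lambda rem: sorted(rem))
print(plan, makespan)   # a complete plan over all three actions, makespan 8.5
```

In this simplified view, switching to a different landmark graph (for example one per object type) would only change what the successors callback yields, which is roughly where the abstract's point about adapting to changing object types would enter.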

National Category
Robotics
Research subject
Computer Science
Identifiers
urn:nbn:se:mdh:diva-26465 (URN)
10.1109/ETFA.2014.7005148 (DOI)
000360999100099 ()
2-s2.0-84946692437 (Scopus ID)
978-147994846-8 (ISBN)
Conference
19th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA'14), Barcelona, Spain, 16-19 September, 2014
Available from: 2014-11-05. Created: 2014-11-05. Last updated: 2016-01-18. Bibliographically approved.
Ameri E., A., Akan, B., Çürüklü, B. & Asplund, L. (2011). A General Framework for Incremental Processing of Multimodal Inputs. In: Proceedings of the 13th international conference on multimodal interfaces. Paper presented at the International Conference on Multimodal Interaction - ICMI 2011 (pp. 225-228). New York: ACM Press.
2011 (English). In: Proceedings of the 13th international conference on multimodal interfaces, New York: ACM Press, 2011, pp. 225-228. Conference paper, Published paper (Refereed).
Abstract [en]

Humans employ different information channels (modalities) such as speech, pictures and gestures in their communication. It is believed that some of these modalities are more error-prone for specific types of data, and multimodality can therefore help to reduce ambiguities in the interaction. There have been numerous efforts to implement multimodal interfaces for computers and robots, yet there is no general standard framework for developing them. In this paper we propose a general framework for implementing multimodal interfaces. It is designed to perform natural language understanding, multimodal integration and semantic analysis with an incremental pipeline, and it includes a multimodal grammar language, which is used for multimodal presentation and semantic meaning generation.
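
The paper's framework is not reproduced here; the short Python sketch below only illustrates what incremental multimodal fusion looks like in principle. The Increment and IncrementalFuser names, the two modalities and the toy fusion rule are all invented for this illustration.

```python
# Hypothetical sketch of incremental multimodal fusion -- not the paper's framework.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Increment:
    modality: str      # e.g. "speech" or "gesture"
    payload: str       # a partial hypothesis from that modality
    timestamp: float


class IncrementalFuser:
    """Collects increments from all modalities and emits a fused
    interpretation as soon as the combination becomes unambiguous."""

    def __init__(self) -> None:
        self.buffer: List[Increment] = []

    def push(self, inc: Increment) -> Optional[dict]:
        self.buffer.append(inc)
        return self._try_fuse()

    def _try_fuse(self) -> Optional[dict]:
        words = [i.payload for i in self.buffer if i.modality == "speech"]
        points = [i.payload for i in self.buffer if i.modality == "gesture"]
        # Toy rule: a deictic word plus a pointing gesture resolves to a command.
        if "there" in words and points:
            return {"action": " ".join(words), "target": points[-1]}
        return None   # not enough information yet; wait for further increments


fuser = IncrementalFuser()
print(fuser.push(Increment("speech", "put", 0.1)))      # None
print(fuser.push(Increment("gesture", "bin_3", 0.2)))   # None
print(fuser.push(Increment("speech", "there", 0.4)))    # fused command with target bin_3
```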

Place, publisher, year, edition, pages
New York: ACM Press, 2011
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:mdh:diva-13586 (URN)
10.1145/2070481.2070521 (DOI)
2-s2.0-83455176699 (Scopus ID)
978-1-4503-0641-6 (ISBN)
Conference
International Conference on Multimodal Interaction - ICMI 2011
Available from: 2011-12-15. Created: 2011-12-15. Last updated: 2018-01-12. Bibliographically approved.
Ameri E., A., Akan, B. & Çürüklü, B. (2010). Incremental Multimodal Interface for Human-Robot Interaction. In: Proceedings of the 15th IEEE International Conference on Emerging Technologies and Factory Automation, ETFA 2010. Paper presented at the 15th IEEE International Conference on Emerging Technologies and Factory Automation, ETFA 2010, Bilbao, 13-16 September 2010 (Article number 5641234).
2010 (English). In: Proceedings of the 15th IEEE International Conference on Emerging Technologies and Factory Automation, ETFA 2010, Article number 5641234. Conference paper, Published paper (Refereed).
Abstract [en]

Face-to-face human communication is a multimodal and incremental process. An intelligent robot that operates in close relation with humans should have the ability to communicate with its human colleagues in the same manner. The process of understanding and responding to multimodal inputs has been an active field of research and has resulted in advancements in areas such as syntactic and semantic analysis, modality fusion and dialogue management. Some approaches in syntactic and semantic analysis take the incremental nature of human interaction into account. Our goal is to unify the syntactic/semantic analysis, modality fusion and dialogue management processes into an incremental multimodal interaction manager. We believe that this approach will lead to a more robust system which can perform faster than today's systems.
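
No implementation is given in the abstract; the sketch below is only a guess at how the three stages it names (syntactic/semantic analysis, modality fusion, dialogue management) could be chained so that each partial input flows through all of them. All function names and the trivial stage logic are placeholders.

```python
# Placeholder sketch of an incremental interaction manager -- not the authors' system.
def parse(increment, state):
    # Syntactic/semantic analysis stub: just accumulate tokens.
    state.setdefault("tokens", []).append(increment)
    return state


def fuse(state):
    # Modality-fusion stub: combine whatever has arrived into one hypothesis.
    state["fused"] = " ".join(state["tokens"])
    return state


def dialogue(state):
    # Dialogue-management stub: react to each updated hypothesis instead of
    # waiting for the complete utterance (the incremental aspect).
    return f"ack: {state['fused']}"


def interaction_manager(increments):
    state = {}
    for inc in increments:            # every partial input flows through all stages
        state = fuse(parse(inc, state))
        yield dialogue(state)


for response in interaction_manager(["move", "the", "red", "block"]):
    print(response)                   # the acknowledgement grows as the utterance unfolds
```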

National Category
Engineering and Technology
Identifiers
urn:nbn:se:mdh:diva-10788 (URN)
10.1109/ETFA.2010.5641234 (DOI)
000313616400112 ()
2-s2.0-78650547906 (Scopus ID)
Conference
15th IEEE International Conference on Emerging Technologies and Factory Automation, ETFA 2010; Bilbao; 13 September 2010 through 16 September 2010
Available from: 2010-11-10. Created: 2010-11-10. Last updated: 2018-08-13. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0002-9437-6599
