Publications (10 of 38)
Franklin, A., Hedin, D., Lindell, R. & Frisk, H. (2024). Merging Places: A Real-Time Distributed Live Reverberation Chamber. In: 2024 16th International Conference on Quality of Multimedia Experience, QoMEX 2024: . Paper presented at 2024 16th International Conference on Quality of Multimedia Experience, QoMEX 2024 (pp. 54-57). Institute of Electrical and Electronics Engineers Inc.
2024 (English) In: 2024 16th International Conference on Quality of Multimedia Experience, QoMEX 2024, Institute of Electrical and Electronics Engineers Inc., 2024, p. 54-57. Conference paper, Published paper (Refereed)
Abstract [en]

We present Auxtrument, a prototype instrument that allows audiences to experience the acoustic qualities of remote locations. Using the metaphor of a reverberation chamber, artists send signals from the mixer's buses or aux sends via the Auxtrument's network connections from a concert hall to a few different locations. At each location the signal is played through loudspeakers and captured, colored by the acoustics and noises of the place, via a stereo or ambisonics microphone. The signal is sent back and played for the audience over a surround sound system, conveying the spatial qualities of the places. The Auxtrument allows us to merge and layer the different locations in the concert hall. However, this arrangement places great demands on the network. The audio signals need to be high-resolution to preserve the inherent quality of the sounds, which creates large streams. The system must be equipped to work over different types of networks, and to enable any location, it must work with mobile devices. After ruling out several commercial and open-source solutions, we built the Auxtrument with web technologies: mainly node.js, WebSockets, WebRTC, and WebAudio.
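The abstract publishes no code; as a rough offline approximation of the send/return metaphor (the real system streams in real time over WebRTC, and the function name, impulse responses, and noise levels below are illustrative assumptions), each remote place can be modelled as convolution with that room's impulse response plus a little of the place's ambient noise, with the returns layered into one mix:

```python
import numpy as np

def merge_places(dry, impulse_responses, noise_level=0.01, seed=0):
    """Colour the dry aux send with each remote room's impulse response,
    mix in a little of the place's ambience, and layer the returns."""
    rng = np.random.default_rng(seed)
    returns = []
    for ir in impulse_responses:
        wet = np.convolve(dry, ir)                               # room coloration
        wet = wet + noise_level * rng.standard_normal(len(wet))  # ambient noise
        returns.append(wet)
    mix = np.zeros(max(len(r) for r in returns))
    for r in returns:                                            # merge the locations
        mix[:len(r)] += r
    return mix / len(returns)

# Two toy "rooms": a short early reflection and a longer decaying tail.
dry = np.sin(2 * np.pi * 440 * np.arange(4800) / 48000)
rooms = [np.array([1.0, 0.0, 0.5]), 0.8 ** np.arange(24)]
out = merge_places(dry, rooms)
```

In the browser the same coloration step would be a `ConvolverNode` in the Web Audio API, but here the convolution is done offline to keep the sketch self-contained.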

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2024
Keywords
distributed performance, IoT, merged reality, Anechoic chambers, Architectural acoustics, Audio acoustics, Conveying, Internet of things, Loudspeakers, Reverberation, Acoustic quality, Concert hall, Network connection, Prototype instrument, Real- time, Remote location, Reverberation chambers, Location
National Category
Fluid Mechanics and Acoustics
Identifiers
urn:nbn:se:mdh:diva-68218 (URN)10.1109/QoMEX61742.2024.10598285 (DOI)2-s2.0-85201054170 (Scopus ID)9798350361582 (ISBN)
Conference
2024 16th International Conference on Quality of Multimedia Experience, QoMEX 2024
Available from: 2024-08-21 Created: 2024-08-21 Last updated: 2024-08-21. Bibliographically approved
Lindell, R. (2024). The dialectics of digitalisation: A critique of the modernistic imperative for the development of digital technology. Futures: The journal of policy, planning and futures studies, 162, Article ID 103428.
2024 (English) In: Futures: The journal of policy, planning and futures studies, ISSN 0016-3287, E-ISSN 1873-6378, Vol. 162, article id 103428. Article in journal (Refereed), Published
Abstract [en]

This text discusses today's digital transformation through the lens of Horkheimer and Adorno's study of the Enlightenment. Policy and public discourse around digitalisation embrace and adhere to the narrow tenets of Enlightenment thinking: the idea that rationality, individual freedom, and a society free from superstition are necessary and attainable goals. The costs of what has come to be called 'Modernity' are many. Through the application of rationality to all spheres of life, married with disruptive technological advancement, humanity has diminished its imagination, its ability to seek new directions. To paraphrase Horkheimer and Adorno, Modernism fights against nature, of which we are a part, and thus, paradoxically, sets us in a fight against ourselves. Environmental degradation, the price of progress, is just one example of this; deadening work, consumerism, and severed social connections are amongst the others. In this framing, digitalisation itself comes to be understood as akin to a force of nature, one that we can do little about other than adjust and adapt to, or be swept away. But this is by no means a foregone conclusion; there is light at the end of the optical fibre. Although recent technical developments around artificial intelligence appear to be pushing policy makers into hasty decisions, the pace of technical development is not as fast as we believe, and in comparison with the Reformation, we have time. If we can restrain ourselves from the resist, adapt, or die responses promoted in popular discourse in the face of the shock of large language models and the rising threat of automation, then we create room to consider economic, social, and ecological alignment and accord in the decision making and design of future interactive artefacts and digital services.
The article argues that through postdigital aesthetics, technology makers can embrace materiality and the inherent qualities of digital technology to formulate a critique of existing trajectories in digital transformation, with consequences for a more sustainable future.

Place, publisher, year, edition, pages
Elsevier Ltd, 2024
Keywords
Dialectics, Digital transformation, Digitalisation, Postdigital
National Category
Other Social Sciences
Identifiers
urn:nbn:se:mdh:diva-68106 (URN)10.1016/j.futures.2024.103428 (DOI)2-s2.0-85198511248 (Scopus ID)
Available from: 2024-07-24 Created: 2024-07-24 Last updated: 2024-07-24. Bibliographically approved
Lindström, H., Jörgensen, T. & Lindell, R. (2023). Inner City in the Listener's Auditory Bubble: Altering the Listener's Perception of the Inner City through the Intervention of Composed Soundscapes. In: ACM Int. Conf. Proc. Ser.: . Paper presented at ACM International Conference Proceeding Series (pp. 24-29). Association for Computing Machinery
2023 (English) In: ACM Int. Conf. Proc. Ser., Association for Computing Machinery, 2023, p. 24-29. Conference paper, Published paper (Refereed)
Abstract [en]

Using a research through design approach, this paper describes how listeners experience headphone listening to a music composition that includes inner-city sound while they are in an inner-city environment. The study focuses on the listeners' described experiences through the lens of Berleant's aesthetic sensibility and Bull's phenomenon of the auditory bubble. We produced a composition that participants listened to in an urban context, and we discuss the two main themes found, soundtrack and awareness, together with indications that including inner-city ambience and sound in music can direct listeners' attention and level of immersion when listening with headphones in an urban environment.

Place, publisher, year, edition, pages
Association for Computing Machinery, 2023
Keywords
artistic research, audio walk, auditory bubble, awareness, research through design, soundtrack, Audio acoustics, Design approaches, Music composition, Soundscapes, Through the lens, Music
National Category
Music
Identifiers
urn:nbn:se:mdh:diva-64691 (URN)10.1145/3616195.3616229 (DOI)001141338800004 ()2-s2.0-85175398524 (Scopus ID)9798400708183 (ISBN)
Conference
ACM International Conference Proceeding Series
Available from: 2023-11-13 Created: 2023-11-13 Last updated: 2024-02-07. Bibliographically approved
Saghafian, M., Sitompul, T. A., Laumann, K., Sundnes, K. & Lindell, R. (2021). Application of Human Factors in the Development Process of Immersive Visual Technologies: Challenges and Future Improvements. Frontiers in Psychology, 12, Article ID 634352.
2021 (English) In: Frontiers in Psychology, E-ISSN 1664-1078, Vol. 12, article id 634352. Article in journal (Refereed), Published
Abstract [en]

This study investigates how Human Factors (HF) is applied when designing and developing Immersive Visual Technologies (IVT), including Augmented Reality, Mixed Reality, and Virtual Reality. We interviewed fourteen people working at different organizations that develop IVT applications in the Nordic region. We used thematic analysis to derive themes from the interviews. The results showed insufficient knowledge and application of HF in IVT development, due to a lack of awareness of both the scope and significance of HF, resource allocation strategy, market inertia, stakeholder involvement, standardization of HF application and IVT uses, and technology maturity. This situation could be improved by allocating experts, adjusting organizational strategy to balance resource allocation, training developers and user organizations to raise awareness and to encourage co-creative design and knowledge sharing, creating a sense of ownership amongst stakeholders, and ensuring the usefulness of the technology to the users' work.

Place, publisher, year, edition, pages
Lausanne, Switzerland: Frontiers Media S.A., 2021
Keywords
human factors, augmented reality, mixed reality, virtual reality, training, organizational strategy
National Category
Other Electrical Engineering, Electronic Engineering, Information Engineering
Research subject
Computer Science
Identifiers
urn:nbn:se:mdh:diva-53665 (URN)10.3389/fpsyg.2021.634352 (DOI)000628777600001 ()33732195 (PubMedID)2-s2.0-85102568078 (Scopus ID)
Projects
Immersive Visual Technologies for Safety-critical Applications (ImmerSAFE)
Funder
EU, Horizon 2020, 764951
Available from: 2021-03-19 Created: 2021-03-19 Last updated: 2023-09-15. Bibliographically approved
Forsman, V., Wallmyr, M., Sitompul, T. A. & Lindell, R. (2021). Classifying Excavator Collisions Based on Users’ Visual Perception in the Mixed Reality Environment. In: Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 2: HUCAPP: . Paper presented at 5th International Conference on Human Computer Interaction Theory and Applications (HUCAPP 2021) (pp. 255-262). Setúbal, Portugal: SciTePress, 2
2021 (English) In: Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 2: HUCAPP, Setúbal, Portugal: SciTePress, 2021, Vol. 2, p. 255-262. Conference paper, Published paper (Refereed)
Abstract [en]

Visual perception plays an important role in recognizing possible hazards. In the context of heavy machinery, relevant visual information can be obtained from the machine's surroundings and from the human-machine interface inside the cabin. In this paper, we propose a method that classifies the occurring collisions by combining data collected by an eye tracker with data from the automatic logging mechanism of the mixed reality simulation. Thirteen participants were asked to complete a test scenario in the mixed reality simulation while wearing an eye tracker. The results demonstrate that we could classify the occurring collisions based on two visual perception conditions: (1) whether the colliding objects were visible from the participants' field of view, and (2) whether the participants had seen the information presented on the human-machine interface before the collisions occurred. This approach enabled us to interpret the occurring collisions differently, compared to the traditional approach that uses the total number of collisions as the representation of participants' performance.
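The two perception conditions yield four collision classes. A minimal sketch of such a classifier follows; the field names and class labels are hypothetical, not taken from the paper's implementation:

```python
def classify_collision(object_in_fov, hmi_info_seen):
    """Label a logged collision by the paper's two visual-perception
    conditions: was the colliding object visible in the participant's
    field of view, and had the participant seen the relevant information
    on the human-machine interface (HMI) before the collision."""
    if object_in_fov and hmi_info_seen:
        return "object seen, HMI information seen"
    if object_in_fov:
        return "object seen, HMI information missed"
    if hmi_info_seen:
        return "object out of view, HMI information seen"
    return "object out of view, HMI information missed"

# In practice each record would come from joining eye-tracker fixations
# with the simulation's automatic collision log.
collisions = [
    {"object_in_fov": True, "hmi_info_seen": False},
    {"object_in_fov": False, "hmi_info_seen": False},
]
labels = [classify_collision(**c) for c in collisions]
```

Counting collisions per class, rather than in total, is what lets the analysis distinguish attention failures from visibility failures.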

Place, publisher, year, edition, pages
Setúbal, Portugal: SciTePress, 2021
Keywords
Mixed Reality, Visual Perception, Collision, Eye Tracking, Human-machine Interface, Excavator, Heavy Machinery
National Category
Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:mdh:diva-53493 (URN)10.5220/0010386702550262 (DOI)000661279100027 ()2-s2.0-85102966174 (Scopus ID)978-989-758-488-6 (ISBN)
Conference
5th International Conference on Human Computer Interaction Theory and Applications (HUCAPP 2021)
Projects
ITS-EASY Post Graduate School for Embedded Software and Systems; Immersive Visual Technologies for Safety-critical Applications (ImmerSAFE)
Funder
EU, Horizon 2020, 764951
Note

This is the accepted version of the conference paper presented at HUCAPP 2021 (http://www.hucapp.visigrapp.org/?y=2021). The final published version is available at Scitepress via https://doi.org/10.5220/0010386702550262. Personal use of this material is permitted. Permission from Scitepress must be obtained for all other uses.

Available from: 2021-02-19 Created: 2021-02-19 Last updated: 2023-09-15. Bibliographically approved
Roysson, S., Sitompul, T. A. & Lindell, R. (2021). Using Artificial Neural Network to Provide Realistic Lifting Capacity in the Mobile Crane Simulation. In: Proceedings of the 22nd Engineering Applications of Neural Networks Conference: . Paper presented at 22nd Engineering Applications of Neural Networks Conference (EANN 2021) (pp. 448-462). Cham, Switzerland: Springer
2021 (English) In: Proceedings of the 22nd Engineering Applications of Neural Networks Conference, Cham, Switzerland: Springer, 2021, p. 448-462. Conference paper, Published paper (Refereed)
Abstract [en]

Simulations are often used for training novice operators to avoid accidents while they are still polishing their skills. To ensure that the experience gained in the simulation is applicable in real-world scenarios, the simulation has to be made as realistic as possible. This paper investigated how to make the lifting capacity of a virtual mobile crane behave similarly to its real counterpart. We initially planned to use information from the load charts, which document how the lifting capacity of a mobile crane works, but the data in the load charts were very limited. To mitigate this issue, we trained an artificial neural network (ANN) on a randomly selected 90% of the data from two official load charts of a real mobile crane. The trained model could predict the lifting capacity based on the real-time states of the boom length, the load radius, and the counterweight of the virtual mobile crane. To evaluate the accuracy of the ANN predictions, we conducted a real-time experiment inside the simulation, where we compared the lifting capacity predicted by the ANN with the remaining 10% of the data from the load charts. The results showed that the ANN could predict the lifting capacity with small deviation rates. The deviation rates also had no significant impact on the lifting capacity, except when both the boom length and the load radius were approaching their maximum states. Therefore, the predicted lifting capacity generated by the ANN could be assumed to be close enough to the values in the load charts.
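The paper's ANN architecture and load-chart data are not reproduced here; the following is a minimal sketch of the approach under stated assumptions: a synthetic stand-in for the load-chart rows, the paper's 90/10 train/held-out split, and a small one-hidden-layer network trained with plain full-batch gradient descent:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for load-chart rows: (boom length, load radius,
# counterweight) -> lifting capacity.  The paper trained on two official
# load charts of a real mobile crane; these values are illustrative only.
X = rng.uniform([10.0, 3.0, 0.0], [60.0, 50.0, 20.0], size=(500, 3))
# Toy capacity surface: falls with load radius and boom length,
# rises with counterweight.
y = (X[:, 2] + 50.0) * np.exp(-0.03 * X[:, 1]) - 0.5 * X[:, 0]

# Normalise, then hold out 10% for evaluation, as in the paper's split.
Xn = (X - X.mean(axis=0)) / X.std(axis=0)
yn = (y - y.mean()) / y.std()
split = int(0.9 * len(X))

# One hidden tanh layer, trained by hand-rolled backpropagation.
W1 = rng.normal(0.0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(3000):
    h = np.tanh(Xn[:split] @ W1 + b1)           # hidden activations
    pred = (h @ W2 + b2).ravel()
    err = pred - yn[:split]
    gW2 = h.T @ err[:, None] / split            # gradient of the output layer
    gb2 = err.mean(keepdims=True)
    gh = err[:, None] @ W2.T * (1.0 - h**2)     # backprop through tanh
    gW1 = Xn[:split].T @ gh / split
    gb1 = gh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

# Deviation of the predictions on the held-out 10% (in normalised units).
test_pred = (np.tanh(Xn[split:] @ W1 + b1) @ W2 + b2).ravel()
mae = np.abs(test_pred - yn[split:]).mean()
```

At simulation time the trained forward pass would be evaluated each frame on the virtual crane's current boom length, load radius, and counterweight.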

Place, publisher, year, edition, pages
Cham, Switzerland: Springer, 2021
Keywords
neural network, virtual reality, mobile crane, lifting capacity, realism
National Category
Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:mdh:diva-55142 (URN)10.1007/978-3-030-80568-5_37 (DOI)978-3-030-80567-8 (ISBN)
Conference
22nd Engineering Applications of Neural Networks Conference (EANN 2021)
Projects
ImmerSafe - Immersive Visual Technologies for Safety-critical Applications
Funder
EU, Horizon 2020, 764951
Note

This is an Author Accepted Manuscript version of the following chapter: S. Roysson, T. A. Sitompul, and R. Lindell, Using Artificial Neural Network to Provide Realistic Lifting Capacity in the Mobile Crane Simulation, published in Proceedings of the 22nd Engineering Applications of Neural Networks Conference, edited by L. Iliadis, J. Macintyre, C. Jayne, and E. Pimenidis, 2021, Springer; reproduced with permission of Springer. The final authenticated version is available online at: https://doi.org/10.1007/978-3-030-80568-5_37.

Available from: 2021-06-24 Created: 2021-06-24 Last updated: 2023-09-15. Bibliographically approved
Sitompul, T. A., Wallmyr, M. & Lindell, R. (2020). Conceptual design and evaluation of windshield displays for excavators. Multimodal Technologies and Interaction, 4(4), 1-19, Article ID 86.
2020 (English) In: Multimodal Technologies and Interaction, E-ISSN 2414-4088, Vol. 4, no 4, p. 1-19, article id 86. Article in journal (Refereed), Published
Abstract [en]

This paper investigates possible visualizations using transparent displays that could be placed on the excavator's windshield. This way, information can be presented closer to operators' line of sight without fully obstructing their view, so excavator operators can acquire the supportive information provided by the machine without diverting their attention from operational areas. To ensure a match between the supportive information and operators' contextual needs, we conducted four activities as part of our design process. Firstly, we looked at four relevant safety guidelines to determine which information is essential for safe operations. Secondly, we reviewed all commercially available technologies to assess their suitability in the excavator context. Thirdly, we conducted a design workshop to generate ideas on how the essential information should look and behave, based on the operation performed and the chosen available technology. Fourthly, we interviewed seven excavator operators to test their understanding and obtain their feedback on the proposed visualization concepts. The results indicated that four of the six visualization concepts we proposed could be understood easily by the operators, and we revised them to better suit the operators' way of thinking. All the operators also perceived this approach positively, since each of them included at least three visualization concepts to be presented on the windshield.

Place, publisher, year, edition, pages
Basel, Switzerland: MDPI AG, 2020
Keywords
Design process, Excavators, Heavy machinery, Human-machine interface, Information visualization, Paper prototype, Transparent displays
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Research subject
Computer Science
Identifiers
urn:nbn:se:mdh:diva-52861 (URN)10.3390/mti4040086 (DOI)000623577200019 ()2-s2.0-85097228074 (Scopus ID)
Projects
ImmerSAFE - Immersive Visual Technologies for Safety-critical Applications
Funder
EU, Horizon 2020, 764951
Available from: 2020-12-17 Created: 2020-12-17 Last updated: 2021-11-17. Bibliographically approved
Sitompul, T. A., Lindell, R., Wallmyr, M. & Siren, A. (2020). Presenting Information Closer to Mobile Crane Operators' Line of Sight: Designing and Evaluating Visualization Concepts Based on Transparent Displays. In: Proceedings - Graphics Interface 2020: . Paper presented at Graphics Interface 2020, GI 2020; Toronto, Virtual, Online; Canada; 28 May 2020 through 29 May 2020. Canadian Information Processing Society
2020 (English) In: Proceedings - Graphics Interface 2020, Canadian Information Processing Society, 2020. Conference paper, Published paper (Refereed)
Abstract [en]

We have investigated the visualization of safety information for mobile crane operations using transparent displays, where the information can be presented closer to operators' line of sight with minimal obstruction of their view. The intention of the design is to help operators acquire supportive information provided by the machine without requiring them to divert their attention far from operational areas. We started the design process by reviewing mobile crane safety guidelines to determine which information operators need in order to perform safe operations. Using the findings from the safety guidelines review, we then conducted a design workshop to generate design ideas and visualization concepts, as well as to delineate their appearance and behaviour based on the capabilities of transparent displays. We transformed the results of the workshop into a low-fidelity paper prototype, and then interviewed six mobile crane operators to obtain their feedback on the proposed concepts. The results of the study indicate that, as information is presented closer to operators' line of sight, we need to be selective about what kind of information, and how much, should be presented to operators. However, all the operators appreciated having information presented closer to their line of sight, as an approach with the potential to improve safety in their operations.

Place, publisher, year, edition, pages
Canadian Information Processing Society, 2020
Keywords
Human-centered computing-Visualization-Visualization application domains-Information visualization, Visualization design and evaluation methods, Design, Visualization, Design workshops, Operational area, Paper prototypes, Presenting informations, Safe operation, Safety guidelines, Safety information, Transparent displays, Cranes
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Research subject
Computer Science
Identifiers
urn:nbn:se:mdh:diva-50908 (URN)10.20380/GI2020.41 (DOI)2-s2.0-85090761923 (Scopus ID)
Conference
Graphics Interface 2020, GI 2020; Toronto, Virtual, Online; Canada; 28 May 2020 through 29 May 2020.
Projects
ImmerSAFE - Immersive Visual Technologies for Safety-critical Applications
Funder
EU, Horizon 2020, 764951
Available from: 2020-09-25 Created: 2020-09-25 Last updated: 2021-11-17. Bibliographically approved
Wallmyr, M., Sitompul, T. A., Holstein, T. & Lindell, R. (2019). Evaluating Mixed Reality Notifications to Support Excavator Operator Awareness. In: 17th IFIP TC 13 International Conference, Paphos, Cyprus, September 2–6, 2019, Proceedings, Part I: . Paper presented at 17th IFIP TC13 International Conference on Human-Computer Interaction, INTERACT 2019; Paphos; Cyprus; 2 September 2019 through 6 September 2019 (pp. 743-762). Cham: Springer, 11746
2019 (English) In: 17th IFIP TC 13 International Conference, Paphos, Cyprus, September 2–6, 2019, Proceedings, Part I, Cham: Springer, 2019, Vol. 11746, p. 743-762. Conference paper, Published paper (Refereed)
Abstract [en]

Operating heavy vehicles, for instance an excavator, requires a high level of attention to the operation being carried out with the vehicle and awareness of the surroundings. Digital transformation in heavy vehicles aims to improve productivity and user experience, but it can also increase the operator's mental load because of a higher demand for attention to instrumentation and controls, subsequently leading to reduced situation awareness. One way to mitigate this is to display information within the operators' field of view using mixed reality interfaces, which enhances information detectability through quick glances. This work explores two types of mixed reality visualizations and compares them to a traditional display setup in a simulated excavator environment. We used eye-tracking glasses to study users' attention to the task, surrounding awareness, and interfaces, followed by a NASA-RTLX questionnaire to evaluate the users' reported mental workload. The results indicate benefits for the mixed reality approaches, with lower workload ratings together with an improved detection rate for the presented information.
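The NASA-RTLX (Raw TLX) score mentioned above is simply the unweighted mean of the six TLX subscale ratings; the full NASA-TLX would instead weight the subscales via 15 pairwise comparisons. A minimal sketch, assuming the usual 0-100 rating scale:

```python
def nasa_rtlx(mental, physical, temporal, performance, effort, frustration):
    """Raw TLX: the unweighted mean of the six subscale ratings (0-100 each).
    Unlike the full NASA-TLX, RTLX skips the pairwise weighting step."""
    return (mental + physical + temporal + performance + effort + frustration) / 6

score = nasa_rtlx(70, 30, 55, 40, 65, 25)  # -> 47.5
```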

Place, publisher, year, edition, pages
Cham: Springer, 2019
Series
Lecture Notes in Computer Science, ISSN 0302-9743 ; 11746
Keywords
Excavator, Head-up display, Heavy-vehicles, Human-vehicle interaction, Mixed reality, Situational awareness
National Category
Computer Systems
Research subject
Computer Science
Identifiers
urn:nbn:se:mdh:diva-45583 (URN)10.1007/978-3-030-29381-9_44 (DOI)000561042200044 ()2-s2.0-85072870944 (Scopus ID)
Conference
17th IFIP TC13 International Conference on Human-Computer Interaction, INTERACT 2019; Paphos; Cyprus; 2 September 2019 through 6 September 2019
Projects
Immersive Visual Technologies for Safety-critical Applications (ImmerSAFE); ITS-EASY Post Graduate School for Embedded Software and Systems
Funder
EU, Horizon 2020, 764951
Available from: 2019-10-17 Created: 2019-10-17 Last updated: 2021-11-17. Bibliographically approved
Kade, D., Lindell, R., Ürey, H. & Özcan, O. (2018). Evaluation of a mixed reality head-mounted projection display to support motion capture acting. In: Lecture Notes in Computer Science: . Paper presented at 14th International Conference on Advances in Computer Entertainment Technology, ACE 2017; London; United Kingdom; 14 December 2017 through 16 December 2017 (pp. 14-31). Springer Verlag, 10714
2018 (English) In: Lecture Notes in Computer Science, Springer Verlag, 2018, Vol. 10714, p. 14-31. Conference paper, Published paper (Refereed)
Abstract [en]

Motion capture acting is a challenging task that requires trained and experienced actors who can rely heavily on their acting and imagination skills to deliver believable performances. This is especially the case when preparation times are short and scenery needs to be imagined, as is common for shoots in the gaming industry. To support actors in such cases, we developed a mixed reality application that allows digital scenery to be shown and emotions to be triggered while performing. In this paper we tested our hypothesis that a mixed reality head-mounted projection display can support motion capture acting, with the help of experienced motion capture actors performing short acting scenes common in game productions. We evaluated our prototype with four motion capture actors and four motion capture experts. Both groups considered our application helpful, especially as a rehearsal tool for preparing performances before capturing the motions in a studio. Actors and experts indicated that our application could reduce the time needed to prepare performances and support the setup of physical acting scenery.

Place, publisher, year, edition, pages
Springer Verlag, 2018
Series
Lecture Notes in Computer Science, ISSN 0302-9743 ; 10714
National Category
Media and Communication Technology; Computer Systems
Identifiers
urn:nbn:se:mdh:diva-38870 (URN)10.1007/978-3-319-76270-8_2 (DOI)000432607700002 ()2-s2.0-85043530133 (Scopus ID)9783319762692 (ISBN)
Conference
14th International Conference on Advances in Computer Entertainment Technology, ACE 2017; London; United Kingdom; 14 December 2017 through 16 December 2017
Available from: 2018-03-22 Created: 2018-03-22 Last updated: 2018-06-07. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0003-3163-6039
