Show simple item record

dc.contributor.author  Roche, Jamie
dc.contributor.author  De-Silva, Varuna
dc.contributor.author  Kondoz, Ahmet
dc.identifier.citation  J. Roche, V. De-Silva and A. Kondoz, "A Multimodal Perception-Driven Self Evolving Autonomous Ground Vehicle," in IEEE Transactions on Cybernetics, vol. 52, no. 9, pp. 9279-9289, Sept. 2022, doi: 10.1109/TCYB.2021.3113804  en_US
dc.description  © 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.  en_US
dc.description.abstract  Increasingly complex automated driving functions, specifically those associated with free space detection (FSD), are delegated to convolutional neural networks (CNNs). If the dataset used to train the network lacks diversity, modality, or sufficient quantity, the driver policy that controls the vehicle may induce safety risks. Although most autonomous ground vehicles (AGVs) perform well in structured surroundings, the need for human intervention rises significantly when they are presented with unstructured niche environments. To this end, we developed an AGV for seamless indoor and outdoor navigation to collect realistic multimodal data streams. We demonstrate one application of the AGV when applied to a self-evolving FSD framework that leverages online active machine-learning (ML) paradigms and sensor data fusion. In essence, the self-evolving AGV queries image data against a reliable data stream (ultrasound) before fusing the sensor data to improve robustness. We compare the proposed framework to one of the most prominent free space segmentation methods, DeepLabV3+ [1], a state-of-the-art semantic segmentation model composed of a CNN encoder and a decoder. The results show that the proposed framework outperforms DeepLabV3+ [1]. The performance of the proposed framework is attributed to its ability to self-learn free space. This combination of online and active ML removes the need for the large datasets typically required by a CNN. Moreover, this technique provides case-specific free space classifications based on the information gathered from the scenario at hand.  en_US
dc.relation.ispartof  IEEE Transactions on Cybernetics  en_US
dc.rights  Attribution-NonCommercial-NoDerivs 3.0 United States  *
dc.subject  Autonomous vehicles  en_US
dc.subject  Neural networks (Computer science)  en_US
dc.subject  Traffic safety  en_US
dc.subject  Convolutional neural networks  en_US
dc.subject  Optical sensors  en_US
dc.title  A Multimodal Perception-Driven Self Evolving Autonomous Ground Vehicle  en_US
dc.identifier.issue  9 (September 2022)  en_US
dc.subject.department  Dept of Mechanical & Electronic Engineering, ATU Sligo  en_US
