Show simple item record

dc.contributor.author: Hussein, Ahmed
dc.contributor.author: Gaber, Mohamed Medhat
dc.contributor.author: Elyan, Eyad
dc.date.accessioned: 2017-06-06T09:33:19Z
dc.date.available: 2017-06-06T09:33:19Z
dc.date.issued: 2016-08-19
dc.identifier.citation: HUSSEIN, A., GABER, M.M. and ELYAN, E. 2016. Deep active learning for autonomous navigation. In Jayne, C. and Iliadis, L. (eds.) Communications in computer and information science, 629. Engineering applications of neural networks: proceedings of the 17th International conference on engineering applications of neural networks (EANN 2016), 2-5 September 2016, Aberdeen, UK. Cham: Springer [online], pages 3-17. Available from: https://doi.org/10.1007/978-3-319-44188-7_1
dc.identifier.isbn: 9783319441870
dc.identifier.isbn: 9783319441887
dc.identifier.issn: 1865-0929
dc.identifier.issn: 1865-0937
dc.identifier.uri: http://hdl.handle.net/10059/2361
dc.description.abstract: Imitation learning refers to an agent's ability to mimic a desired behavior by learning from observations. A major challenge facing learning from demonstrations is to represent the demonstrations in a manner that is adequate for learning and efficient for real-time decisions. Creating feature representations is especially challenging when they are extracted from high-dimensional visual data. In this paper, we present a method for imitation learning from raw visual data. The proposed method is applied to a popular imitation learning domain that is relevant to a variety of real-life applications, namely navigation. To create a training set, a teacher uses an optimal policy to perform a navigation task, and the actions taken are recorded along with visual footage from the first-person perspective. Features are automatically extracted and used to learn a policy that mimics the teacher via a deep convolutional neural network. A trained agent can then predict an action to perform based on the scene it finds itself in. This method is generic, and the network is trained without knowledge of the task, targets or environment in which it is acting. Another common challenge in imitation learning is generalizing a policy to situations not seen in the training data. To address this challenge, the learned policy is subsequently improved by employing active learning. While the agent is executing a task, it can query the teacher for the correct action to take in situations where it has low confidence. The active samples are added to the training set and used to update the initial policy. The proposed approach is demonstrated on four different tasks in a 3D simulated environment. The experiments show that an agent can effectively perform imitation learning from raw visual data for navigation tasks, and that active learning can significantly improve the initial policy using a small number of samples. The simulated test bed facilitates reproduction of these results and comparison with other approaches.
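The abstract's active-learning step (query the teacher only when the policy's confidence is low, then add the queried samples to the training set) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name, the confidence measure (top softmax probability), and the threshold value are all assumptions introduced for illustration.

```python
# Hypothetical sketch of confidence-based active learning: the agent queries
# the teacher only for states where its policy is uncertain, and the queried
# (state, action) pairs extend the training set. Names and the threshold are
# illustrative assumptions, not taken from the paper.

def select_active_samples(action_probs, states, teacher, threshold=0.8):
    """Return (state, teacher_action) pairs for states where the policy's
    top action probability falls below the confidence threshold."""
    new_samples = []
    for probs, state in zip(action_probs, states):
        confidence = max(probs)        # confidence = highest action probability
        if confidence < threshold:     # uncertain, so query the teacher
            new_samples.append((state, teacher(state)))
    return new_samples

# Toy usage: three states, a "teacher" that always answers with action 0.
probs = [[0.9, 0.1], [0.5, 0.5], [0.3, 0.7]]
states = ["s0", "s1", "s2"]
queried = select_active_samples(probs, states, teacher=lambda s: 0)
print(len(queried))  # only the two low-confidence states are queried
```

In the paper's setting, the queried pairs would then be appended to the demonstration set and the convolutional network retrained (or fine-tuned) to update the initial policy.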
dc.language.iso: eng
dc.publisher: Springer
dc.rights: https://creativecommons.org/licenses/by-nc/4.0
dc.rights: Attribution-NonCommercial 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc/4.0/
dc.subject: Imitation learning
dc.subject: Robots
dc.subject: Optimal policy
dc.subject: Visual data
dc.title: Deep active learning for autonomous navigation.
dc.type: Conference publications
dc.publisher.uri: https://doi.org/10.1007/978-3-319-44188-7_1
dcterms.dateAccepted: 2016-06-05
dcterms.publicationdate: 2016-09-30
refterms.accessException: NA
refterms.dateDeposit: 2017-06-06
refterms.dateFCA: 2017-06-06
refterms.dateFCD: 2017-06-06
refterms.dateFreeToDownload: 2017-06-06
refterms.dateFreeToRead: 2017-06-06
refterms.dateToSearch: 2017-06-06
refterms.depositException: NA
refterms.panel: B
refterms.technicalException: NA
refterms.version: AM
rioxxterms.publicationdate: 2016-08-19
rioxxterms.type: Conference Paper/Proceeding/Abstract
rioxxterms.version: AM


