Imitation learning: a survey of learning methods.
Gaber, Mohamed Medhat
HUSSEIN, A., GABER, M.M., ELYAN, E. and JAYNE, C. 2017. Imitation learning: a survey of learning methods. ACM computing surveys [online], 50(2), article 21. Available from: https://doi.org/10.1145/3054912
Imitation learning techniques aim to mimic human behavior in a given task. An agent (a learning machine) is trained to perform a task from demonstrations by learning a mapping between observations and actions. The idea of teaching by imitation has been around for many years; however, the field has been gaining attention recently due to advances in computing and sensing as well as rising demand for intelligent applications. The paradigm of learning by imitation is gaining popularity because it facilitates teaching complex tasks with minimal expert knowledge of the tasks. Generic imitation learning methods could potentially reduce the problem of teaching a task to that of providing demonstrations, without the need for explicit programming or designing reward functions specific to the task. Modern sensors are able to collect and transmit high volumes of data rapidly, and processors with high computational power allow fast processing that maps the sensory data to actions in a timely manner. This opens the door for many potential AI applications that require real-time perception and reaction, such as humanoid robots, self-driving vehicles, human-computer interaction and computer games, to name a few. However, specialized algorithms are needed to learn models effectively and robustly, as learning by imitation poses its own set of challenges. In this paper, we survey imitation learning methods and present design options for the different steps of the learning process. We introduce a background and motivation for the field and highlight challenges specific to the imitation problem. Methods for designing and evaluating imitation learning tasks are categorized and reviewed. Special attention is given to learning methods in robotics and games, as these domains are the most popular in the literature and provide a wide array of problems and methodologies.
We extensively discuss combining imitation learning approaches using different sources and methods, as well as incorporating other motion learning methods to enhance imitation. We also discuss the potential impact on industry, present major applications and highlight current and future research directions.
Permalink for this record: http://hdl.handle.net/10059/2298
Except where otherwise noted, this item's license is described at: https://creativecommons.org/licenses/by-nc/4.0
Showing items related by title, author, creator and subject.
The significance of personal learning environments (PLEs) in nursing education: extending current conceptualizations. Patterson, Christopher; Stephens, Moira; Chang, Vico; Price, Ann M.; Work, Fiona; Snelgrove-Clarke, Erna; Harada, Theresa (Elsevier, 2016-09-26)
PATTERSON, C., STEPHENS, M., CHANG, V., PRICE, A.M., WORK, F., SNELGROVE-CLARKE, E. and HARADA, T. 2016. The significance of personal learning environments (PLEs) in nursing education: extending current conceptualizations. Nurse education today [online], 48, pages 99-105. Available from: http://dx.doi.org/10.1016/j.nedt.2016.09.010
Background - Personal learning environments (PLE) have been shown to be a critical part of how students negotiate and manage their own learning. Understandings of PLEs appear to be constrained by narrow definitions that ...
Deep reward shaping from demonstrations. Hussein, Ahmed; Elyan, Eyad; Gaber, Mohamed Medhat; Jayne, Chrisina (IEEE, 2017-05-14)
HUSSEIN, A., ELYAN, E., GABER, M.M. and JAYNE, C. 2017. Deep reward shaping from demonstrations. In Proceedings of the International joint conference on neural networks (IJCNN 2017), 14-19 May 2017, Anchorage, USA. Piscataway, NJ: IEEE [online], pages 510-517. Available from: https://doi.org/10.1109/IJCNN.2017.7965896
Deep reinforcement learning is rapidly gaining attention due to recent successes in a variety of problems. The combination of deep learning and reinforcement learning allows for a generic learning process that does not ...
Learning adaptation knowledge to improve case-based reasoning. Craw, Susan; Wiratunga, Nirmalie; Rowe, Ray (Elsevier, 2006-11)
CRAW, S., WIRATUNGA, N. and ROWE, R. 2006. Learning adaptation knowledge to improve case-based reasoning. Artificial intelligence [online], 170(16-17), pages 1175-1192. Available from: http://dx.doi.org/10.1016/j.artint.2006.09.001
Case-based reasoning systems retrieve and reuse solutions for previously solved problems that have been encountered and remembered as cases. In some domains, particularly where the problem solving is a classification task, ...