OpenAIR @ RGU >
Title: Vision systems for a mobile robot based on line detection using the Hough Transform and artificial neural networks.
Authors: Damaryam, Gideon Kanji
Supervisors: Dunbar, Graeme; Maxwell, Grant M.
Issue Date: Nov-2008
Publisher: The Robert Gordon University
Citation: Damaryam, G. and Dunbar, G. (2005). A Mobile Robot Vision System for Self navigation using the Hough Transform and neural networks. In: Proceedings of EOS Conference on Industrial Imaging and Machine Vision, Munich, 13-15 June, p. 72.
Abstract: This project contributes to the problem of mobile robot self-navigation within a rectilinear framework based on visual data. It proposes a number of vision systems based on the detection of straight lines in images captured by a robot, using the Hough transform and artificial neural networks as core algorithms. The Hough transform is a robust method for the detection of basic features (Boyce et al. 1987). However, it is so computationally demanding that it is rarely used in real-time applications, or in applications that involve anything but small images (Song and Lyu 2005). Dempsey and McVey (1992) have suggested that this problem might be resolved by implementing the Hough transform with artificial neural networks. This project investigates the feasibility of systems built on these core algorithms, and of systems that are hybrids of them.
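The voting step at the heart of the Hough transform, and the source of its computational cost, can be sketched as follows. This is a minimal Python illustration only; the parameterisation, angular resolution, and image sizes here are generic choices, not the settings used in the thesis.

```python
import math

def hough_lines(edge_points, width, height, n_theta=180):
    """Classical Hough transform: every edge pixel votes for each
    (rho, theta) parameterisation of a line passing through it.
    Cost is O(num_edge_points * n_theta), which is why the method
    is expensive for large images."""
    # rho can range over [-diag, +diag]; offset indices so they stay non-negative
    diag = int(math.hypot(width, height))
    acc = [[0] * n_theta for _ in range(2 * diag + 1)]
    thetas = [math.pi * t / n_theta for t in range(n_theta)]
    for (x, y) in edge_points:
        for t_idx, theta in enumerate(thetas):
            # normal form of a line: rho = x*cos(theta) + y*sin(theta)
            rho = int(round(x * math.cos(theta) + y * math.sin(theta)))
            acc[rho + diag][t_idx] += 1
    return acc, diag

# Usage: collinear points pile their votes into one accumulator cell
points = [(x, x) for x in range(10)]   # the diagonal line y = x
acc, diag = hough_lines(points, 10, 10)
# all ten points vote into the cell (rho = 0, theta index 135, i.e. 135 deg)
```

Lines in the image then appear as high-count cells in the accumulator, which the later peak-detection stage must locate.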
Prior to applying the core algorithms to a captured image, several stages of pre-processing are carried out, including resizing for optimum results, edge detection, and edge thinning using an adaptation, proposed in this work, of the thinning method of Park (2000). An analysis of the costs and benefits of thinning as a pre-processing step has also been performed.
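As one concrete illustration of the edge-detection stage, a Sobel gradient sketch is given below. The Sobel operator is a common generic choice; the thesis's actual edge operator, resizing parameters, and thinning adaptation are not reproduced here.

```python
def sobel_edges(img, thresh):
    """Binary edge map from Sobel gradient magnitude.
    `img` is a 2-D list of grey levels; `thresh` (an illustrative
    parameter) decides which gradient magnitudes count as edges."""
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # horizontal and vertical Sobel responses
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            if (gx*gx + gy*gy) ** 0.5 >= thresh:
                edges[y][x] = 1
    return edges

# Usage: a vertical intensity step produces a vertical edge
img = [[0, 0, 0, 255, 255, 255] for _ in range(5)]
edges = sobel_edges(img, 100)
# edge pixels appear along the step, at columns 2 and 3 of interior rows
```

The resulting binary map is what the Hough transform votes over; thinning then reduces thick edge responses like the two-pixel-wide step above to single-pixel lines, at some extra pre-processing cost.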
The Hough transform-based system, which has been largely successful, involves a number of new approaches. These include a peak detection scheme; post-processing schemes which find valid sub-lines of the lines found by the peak detection process, and establish which high-level features these sub-lines represent; and an appropriate navigation scheme.
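One simple way to realise a peak-detection scheme over the Hough accumulator is a thresholded local-maximum search, sketched below. The threshold, neighbourhood size, and plateau handling here are illustrative assumptions, not the scheme developed in the thesis.

```python
def find_peaks(acc, threshold, nhood=2):
    """Return accumulator cells (row, col, votes) that exceed
    `threshold` and are maximal within a (2*nhood+1)^2 neighbourhood.
    Both parameters are illustrative tuning choices."""
    rows, cols = len(acc), len(acc[0])
    peaks = []
    for r in range(rows):
        for c in range(cols):
            v = acc[r][c]
            if v < threshold:
                continue
            neighbours = [acc[rr][cc]
                          for rr in range(max(0, r - nhood), min(rows, r + nhood + 1))
                          for cc in range(max(0, c - nhood), min(cols, c + nhood + 1))
                          if (rr, cc) != (r, c)]
            # >= keeps plateau cells; real schemes may break such ties
            if all(v >= n for n in neighbours):
                peaks.append((r, c, v))
    return peaks

# Usage: one strong cell survives; a weaker neighbour is suppressed
acc = [[0] * 5 for _ in range(5)]
acc[2][2] = 10   # strong line: 10 votes
acc[2][3] = 4    # weak nearby response, below threshold
peaks = find_peaks(acc, 5)
# peaks == [(2, 2, 10)]
```

Each surviving peak corresponds to one detected line; the post-processing schemes then split that line into valid sub-lines and classify them.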
Two artificial neural network systems were designed, based on line detection and sub-line detection respectively. The first was able to detect long lines, but not shorter (though navigationally important) lines, and so was abandoned. The second system has two major stages. The stage 1 networks, developed to detect sub-lines in sub-images derived by breaking down the original images, performed passably well. The stage 2 network, designed to use the results of stage 1 to guide the robot's motion, did not perform well for most test images. The stage 1 networks have, however, been helpful in the development of a hybrid vision system. Suggestions have been made on how this work can be furthered.
Appears in Collections: Theses (Engineering)
All items in OpenAIR are protected by copyright, with all rights reserved.