Q-Learning for robot control
Gaskett, Chris (2002) Q-Learning for robot control. PhD thesis, Australian National University.
Full text not available from this repository.
View at Publisher Website: http://hdl.handle.net/1885/47080
Q-Learning is a method for solving reinforcement learning problems. Reinforcement learning problems require an agent to improve its behaviour based on received rewards. Q-Learning has the potential to reduce robot programming effort and increase the range of robot abilities. However, most current Q-learning systems are not suitable for robotics problems: they treat continuous variables, for example speeds or positions, as discretised values. Discretisation does not allow smooth control and does not fully exploit sensed information. A practical algorithm must also cope with real-time constraints, sensing and actuation delays, and incorrect sensor data.
This research describes an algorithm that deals with continuous state and action variables without discretising. The algorithm is evaluated with vision-based mobile robot and active head gaze control tasks. As well as learning the basic control tasks, the algorithm learns to compensate for delays in sensing and actuation by predicting the behaviour of its environment. Although the learned dynamic model is implicit in the controller, it is possible to extract some aspects of the model. The extracted models are compared to theoretically derived models of environment behaviour.
The difficulty of working with robots motivates development of methods that reduce experimentation time. This research exploits Q-learning's ability to learn by passively observing the robot's actions, rather than necessarily controlling the robot. This is a valuable tool for shortening the duration of learning experiments.
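For context, the standard discrete formulation that the abstract argues is unsuitable for robotics can be sketched in a few lines. This is an illustrative sketch of conventional tabular Q-learning with small integer state and action indices, not the continuous-state, continuous-action algorithm developed in the thesis; all names and values here are hypothetical.

```python
# Minimal tabular Q-learning sketch (the conventional discrete
# formulation; a robot controller would instead face continuous
# speeds and positions, which this representation cannot express
# smoothly). Illustrative only, not the thesis's algorithm.

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One Q-learning backup:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q[next_state])  # greedy value of the successor state
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])

# Toy example: 2 states, 2 actions, one rewarded transition.
q = [[0.0, 0.0], [0.0, 0.0]]
q_update(q, state=0, action=1, reward=1.0, next_state=1)
# q[0][1] is now 0.1: the reward has been partially backed up.
```

Any continuous variable must first be binned into such table indices, which is the information loss the thesis's function-approximation approach avoids.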
Item Type: Thesis (PhD)
This thesis is openly accessible from the link to the Australian National University's institutional repository, ANU Digital Collections.
Keywords: robotics, learning, control, reinforcement, neural networks
FoR Codes: 08 INFORMATION AND COMPUTING SCIENCES > 0801 Artificial Intelligence and Image Processing > 080108 Neural, Evolutionary and Fuzzy Computation @ 50%; 08 INFORMATION AND COMPUTING SCIENCES > 0801 Artificial Intelligence and Image Processing > 080101 Adaptive Agents and Intelligent Robotics @ 50%
SEO Codes: 89 INFORMATION AND COMMUNICATION SERVICES > 8902 Computer Software and Services > 890202 Application Tools and System Utilities @ 100%
Deposited On: 05 Oct 2006
Last Modified: 19 Jun 2013 15:53