A new Q-learning algorithm based on the Metropolis criterion

Author

Summary, in English

The balance between exploration and exploitation is one of the key problems of action selection in Q-learning. Pure exploitation drives the agent to locally optimal policies quickly, whereas excessive exploration degrades the performance of the Q-learning algorithm even though it may accelerate learning and help avoid locally optimal policies. In this paper, finding the optimal policy in Q-learning is described as a search for the optimal solution in combinatorial optimization. The Metropolis criterion of the simulated annealing algorithm is introduced to balance exploration and exploitation in Q-learning, and a modified Q-learning algorithm based on this criterion, SA-Q-learning, is presented. Experiments show that SA-Q-learning converges more quickly than Q-learning or Boltzmann exploration, and that the search does not suffer from performance degradation due to excessive exploration.
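As context for the summary above, the sketch below illustrates Metropolis-style action selection inside tabular Q-learning: a random candidate action is compared against the current greedy action and accepted with probability exp((Q_r - Q_p)/T) when it looks worse. This is a minimal illustration in the spirit of the abstract, not the authors' exact formulation; the environment interface (reset/step), the geometric cooling schedule, and all parameter values are assumptions made for the example.

```python
import numpy as np

def metropolis_action(Q, state, temperature, rng):
    """Metropolis criterion: accept a random candidate action outright if it
    is at least as good as the greedy one, otherwise accept it with
    probability exp((Q_r - Q_p) / T)."""
    greedy = int(np.argmax(Q[state]))          # exploitation candidate a_p
    candidate = int(rng.integers(Q.shape[1]))  # exploration candidate a_r
    gap = Q[state, candidate] - Q[state, greedy]
    if gap >= 0 or rng.random() < np.exp(gap / temperature):
        return candidate   # accept the random action
    return greedy          # otherwise exploit

def sa_q_learning(env, n_states, n_actions, episodes=500,
                  alpha=0.1, gamma=0.95, t0=1.0, cooling=0.99, seed=0):
    """Tabular Q-learning whose action selection uses the Metropolis
    criterion, with the temperature annealed after every episode.
    Assumes env.reset() -> state and env.step(a) -> (state, reward, done)."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    temperature = t0
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            action = metropolis_action(Q, state, temperature, rng)
            next_state, reward, done = env.step(action)  # assumed interface
            # Standard one-step Q-learning update.
            Q[state, action] += alpha * (
                reward + gamma * np.max(Q[next_state]) - Q[state, action])
            state = next_state
        temperature = max(temperature * cooling, 1e-3)  # anneal toward greedy
    return Q
```

At a high temperature the acceptance probability is close to 1, so behavior is nearly random (exploration); as the temperature is annealed toward zero, worse candidates are almost always rejected and the policy becomes nearly greedy (exploitation), which is the balancing effect the abstract describes.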

Department/s

  • Computer Science

Publishing year

2004

Language

English

Pages

2140-2143

Publication/Series

IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics

Volume

34

Issue

5

Document type

Journal article

Publisher

IEEE - Institute of Electrical and Electronics Engineers Inc.

Topic

  • Computer Science

Keywords

  • reinforcement learning
  • Q-learning
  • Metropolis criterion
  • exploitation
  • exploration

Status

Published

ISBN/ISSN/Other

  • ISSN: 1083-4419