
Routing using Safe Reinforcement Learning

Author

Summary, in English

The ever-increasing number of connected devices has led to a meteoric rise in the amount of data to be processed. This has caused computation to be moved to the edge of the cloud, increasing the importance of efficiency across the cloud as a whole. The use of such fog computing for time-critical control applications is on the rise and requires robust guarantees on packet transmission times in the network, while also keeping the total transmission times of the various packets low.

We consider networks in which transmission times may vary due to device mobility, congestion, and similar effects. We assume knowledge of the worst-case transmission time over each link and estimate the typical transmission times through exploration. We present the use of reinforcement learning to find optimal paths through the network while never violating preset deadlines. We show that, with appropriate domain knowledge, using popular reinforcement learning techniques is a promising prospect even in time-critical applications.
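To make the idea concrete, the following is a minimal, hypothetical sketch (not the paper's implementation) of deadline-safe routing with Q-learning: at each hop only neighbours whose worst-case remaining path time still meets the deadline are treated as admissible actions, while typical link delays are learned through exploration. The graph, the numbers, and helper names such as shortest_worst_case and safe_actions are invented for illustration.

```python
# Illustrative sketch only: Q-learning routing restricted to deadline-safe next hops.
# Worst-case link times are assumed known; typical delays are learned by exploration.
import random

# Directed graph: node -> {neighbour: (worst_case_time, typical_time)}  (hypothetical values)
GRAPH = {
    "A": {"B": (4.0, 1.0), "C": (2.0, 1.5)},
    "B": {"D": (3.0, 0.5)},
    "C": {"D": (6.0, 2.0)},
    "D": {},
}
DEADLINE = 8.0
DEST = "D"

def shortest_worst_case(node, dest, graph):
    """Worst-case time of the best path from node to dest (simple recursion; assumes a DAG)."""
    if node == dest:
        return 0.0
    costs = [w + shortest_worst_case(n, dest, graph) for n, (w, _) in graph[node].items()]
    return min(costs) if costs else float("inf")

def safe_actions(node, elapsed_worst, graph):
    """Neighbours that can still be reached and forwarded from without risking the deadline."""
    return [n for n, (w, _) in graph[node].items()
            if elapsed_worst + w + shortest_worst_case(n, DEST, graph) <= DEADLINE]

Q = {(u, v): 0.0 for u in GRAPH for v in GRAPH[u]}   # expected typical delay-to-go per link choice
ALPHA, GAMMA, EPS = 0.1, 1.0, 0.2

for episode in range(2000):
    node, elapsed_worst = "A", 0.0
    while node != DEST:
        actions = safe_actions(node, elapsed_worst, GRAPH)
        if not actions:                      # no safe next hop exists from here
            break
        if random.random() < EPS:
            nxt = random.choice(actions)     # explore
        else:
            nxt = min(actions, key=lambda a: Q[(node, a)])   # exploit lowest learned delay
        w, t = GRAPH[node][nxt]
        delay = random.uniform(0.5 * t, w)   # observed stochastic delay, bounded by worst case
        future = 0.0 if nxt == DEST else min(Q[(nxt, a)] for a in GRAPH[nxt])
        Q[(node, nxt)] += ALPHA * (delay + GAMMA * future - Q[(node, nxt)])
        node, elapsed_worst = nxt, elapsed_worst + w   # safety accounting uses worst-case times

print({k: round(v, 2) for k, v in Q.items()})
```

In this toy setup the learned Q-values converge towards the typical delay-to-go of each link choice, while the worst-case accounting guarantees that no explored path can exceed the deadline; the paper's actual formulation may differ.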

Publishing year

2020-02-20

Language

English

Publication/Series

2nd Workshop on Fog Computing and the Internet of Things

Document type

Conference paper

Topic

  • Control Engineering

Conference name

2nd Workshop on Fog Computing and the Internet of Things

Conference date

2020-04-21

Status

In press

Project

  • ELLIIT LU P02: Co-Design of Robust and Secure Networked Embedded Control Systems

ISBN/ISSN/Other

  • ISBN: 978-395977144-3