A Deep Reinforcement Learning Network for Traffic Light Cycle Control

Xiaoyuan Liang, Xunsheng Du, Guiling Wang, Zhu Han

Research output: Contribution to journal › Article

2 Citations (Scopus)

Abstract

Inefficient traffic light cycle control causes numerous problems, such as long delays and wasted energy. To improve efficiency, a controller must take real-time traffic information as input and dynamically adjust the traffic light duration accordingly. Existing works either split the traffic signal cycle into phases of equal duration or leverage only limited traffic information. In this paper, we study how to decide the traffic signal duration based on data collected from different sensors. We propose a deep reinforcement learning model to control the traffic light cycle. In the model, we quantify the complex traffic scenario as states by collecting traffic data and dividing the whole intersection into small grids. The duration changes of a traffic light are the actions, modeled as a high-dimensional Markov decision process. The reward is the cumulative waiting time difference between two cycles. To solve the model, a convolutional neural network is employed to map states to expected rewards. The proposed model incorporates multiple optimization elements to improve performance, including a dueling network, a target network, double Q-learning, and prioritized experience replay. We evaluate our model via simulation on the Simulation of Urban MObility (SUMO) simulator. Simulation results show the efficiency of our model in controlling traffic lights.
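
The minimal sketch below (not the authors' implementation) illustrates three elements named in the abstract: the grid-based state encoding, the waiting-time-difference reward, and the double Q-learning target with dueling aggregation. The grid size, discount factor, action count, and the random linear "networks" standing in for the convolutional network are all hypothetical stand-ins chosen for illustration.

# Hedged sketch under assumed parameters; not the paper's code.
import numpy as np

GRID = 8        # hypothetical: intersection divided into GRID x GRID cells
GAMMA = 0.99    # hypothetical discount factor for future rewards
N_ACTIONS = 5   # hypothetical set of changes to the green-light duration

def encode_state(vehicle_positions):
    # Map normalized vehicle (x, y) positions to a binary occupancy grid.
    state = np.zeros((GRID, GRID), dtype=np.float32)
    for x, y in vehicle_positions:
        state[int(y * GRID), int(x * GRID)] = 1.0
    return state

def reward(prev_cycle_wait, curr_cycle_wait):
    # Reward = cumulative waiting-time difference between two cycles;
    # positive when the new cycle reduced total waiting time.
    return prev_cycle_wait - curr_cycle_wait

def dueling_q(value, advantages):
    # Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean over a' of A(s, a').
    return value + advantages - advantages.mean()

def double_q_target(r, next_state, online_q, target_q):
    # Double Q-learning: the online network selects the next action,
    # the target network evaluates it, reducing overestimation bias.
    a_star = int(np.argmax(online_q(next_state)))
    return r + GAMMA * target_q(next_state)[a_star]

# Toy usage with random linear maps in place of the convolutional network.
rng = np.random.default_rng(0)
W_online = rng.normal(size=(N_ACTIONS, GRID * GRID))
W_target = rng.normal(size=(N_ACTIONS, GRID * GRID))
online_q = lambda s: dueling_q(0.0, W_online @ s.ravel())
target_q = lambda s: dueling_q(0.0, W_target @ s.ravel())

s_next = encode_state([(0.10, 0.20), (0.55, 0.70)])
r = reward(prev_cycle_wait=120.0, curr_cycle_wait=95.0)  # 25 s saved
print(double_q_target(r, s_next, online_q, target_q))

Prioritized experience replay, the fourth element mentioned in the abstract, would additionally sample stored transitions for training in proportion to their temporal-difference error rather than uniformly.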

Original language: English (US)
Article number: 8600382
Pages (from-to): 1243-1253
Number of pages: 11
Journal: IEEE Transactions on Vehicular Technology
Volume: 68
Issue number: 2
DOI: 10.1109/TVT.2018.2890726
State: Published - Feb 1 2019

All Science Journal Classification (ASJC) codes

  • Aerospace Engineering
  • Applied Mathematics
  • Electrical and Electronic Engineering
  • Automotive Engineering

Keywords

  • Reinforcement learning
  • deep learning
  • traffic light control
  • vehicular network

Cite this

Liang, Xiaoyuan; Du, Xunsheng; Wang, Guiling; Han, Zhu. A Deep Reinforcement Learning Network for Traffic Light Cycle Control. In: IEEE Transactions on Vehicular Technology. 2019; Vol. 68, No. 2, pp. 1243-1253.
@article{2308646b3c3c456ea022b3cd08cad105,
title = "A Deep Reinforcement Learning Network for Traffic Light Cycle Control",
keywords = "Reinforcement learning, deep learning, traffic light control, vehicular network",
author = "Xiaoyuan Liang and Xunsheng Du and Guiling Wang and Zhu Han",
year = "2019",
month = feb,
day = "1",
doi = "10.1109/TVT.2018.2890726",
language = "English (US)",
volume = "68",
pages = "1243--1253",
journal = "IEEE Transactions on Vehicular Technology",
issn = "0018-9545",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
number = "2",
}

TY - JOUR

T1 - A Deep Reinforcement Learning Network for Traffic Light Cycle Control

AU - Liang, Xiaoyuan

AU - Du, Xunsheng

AU - Wang, Guiling

AU - Han, Zhu

PY - 2019/2/1

Y1 - 2019/2/1

KW - Reinforcement learning

KW - deep learning

KW - traffic light control

KW - vehicular network

UR - http://www.scopus.com/inward/record.url?scp=85062995903&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85062995903&partnerID=8YFLogxK

DO - 10.1109/TVT.2018.2890726

M3 - Article

VL - 68

SP - 1243

EP - 1253

JO - IEEE Transactions on Vehicular Technology

JF - IEEE Transactions on Vehicular Technology

SN - 0018-9545

IS - 2

M1 - 8600382

ER -