Learning-based framework for policy-aware cognitive radio emergency networking

Eun Kyung Lee, Hariharasudhan Viswanathan, Dario Pompili

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Scopus citation


Uncertainties in the wireless communication medium preclude guarantees on network performance for the cognitive radio applications envisaged for mobile ad hoc emergency networking. The novel concept of mission policies, which specify the Quality of Service (QoS) requirements of both the incumbent network and the cognitive radio networks, is introduced. Mission policies, which vary over time and space, enable graceful degradation in the QoS of the incumbent network (only when necessary) based on mission-policy specifications. A Multi-Agent Reinforcement Learning (MARL)-based cross-layer communication framework, RescueNet, is proposed for the self-adaptation of nodes in cognitive radio networks. In addition, the novel idea of knowledge sharing among the agents (nodes) is introduced to significantly improve the performance of the proposed solution.
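The paper itself does not publish code; the following is a minimal, hypothetical sketch of the two ideas named in the abstract: independent reinforcement-learning agents (nodes) that each learn which channel to use, and periodic knowledge sharing among agents. The environment, reward function, and averaging-based sharing rule are illustrative assumptions, not the authors' actual RescueNet design.

```python
import random

class QAgent:
    """One cognitive radio node as an independent Q-learner choosing a channel."""
    def __init__(self, n_actions, alpha=0.1, epsilon=0.2):
        self.q = [0.0] * n_actions   # Q-value per channel
        self.alpha, self.epsilon = alpha, epsilon

    def act(self):
        # Epsilon-greedy: explore a random channel, else exploit the best one.
        if random.random() < self.epsilon:
            return random.randrange(len(self.q))
        return max(range(len(self.q)), key=lambda a: self.q[a])

    def update(self, action, reward):
        # Stateless (bandit-style) Q-update: Q(a) += alpha * (r - Q(a)).
        self.q[action] += self.alpha * (reward - self.q[action])

def share_knowledge(agents):
    """Knowledge sharing (illustrative): agents average their Q-values so one
    node's experience, e.g. a channel occupied by the incumbent, propagates."""
    n = len(agents[0].q)
    avg = [sum(ag.q[a] for ag in agents) / len(agents) for a in range(n)]
    for ag in agents:
        ag.q = list(avg)

# Toy environment (assumption): channel 2 is free, all others are busy.
def reward_for(channel):
    return 1.0 if channel == 2 else 0.0

random.seed(0)
agents = [QAgent(n_actions=4) for _ in range(3)]
for step in range(200):
    for ag in agents:
        a = ag.act()
        ag.update(a, reward_for(a))
    if step % 10 == 0:               # periodic knowledge sharing
        share_knowledge(agents)

best = [max(range(4), key=lambda a: ag.q[a]) for ag in agents]
print(best)
```

With sharing enabled, all three agents converge on the free channel far faster than they would learning in isolation, which is the performance benefit the abstract attributes to knowledge sharing.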

Original language: English (US)
Title of host publication: 2013 IEEE Global Communications Conference, GLOBECOM 2013
Publisher: Institute of Electrical and Electronics Engineers Inc.
Number of pages: 6
ISBN (Print): 9781479913534
State: Published - 2013
Event: 2013 IEEE Global Communications Conference, GLOBECOM 2013 - Atlanta, GA, United States
Duration: Dec 9, 2013 to Dec 13, 2013

Publication series

Name: GLOBECOM - IEEE Global Telecommunications Conference


Other: 2013 IEEE Global Communications Conference, GLOBECOM 2013
Country/Territory: United States
City: Atlanta, GA

ASJC Scopus subject areas

  • Electrical and Electronic Engineering


Keywords
  • Cognitive Radio
  • Licensed Spectrum
  • Mission Policies
  • Multi-agent Systems
  • Reinforcement Learning


