Demo: Fast and accurate object analysis at the edge for mobile augmented reality

Qiang Liu, Siqi Huang, Tao Han

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Augmented reality (AR) augments a real-world environment with computer-generated sensory information such as text, sound, and graphics. With advanced AR technologies, information about a person's surrounding physical environment can be brought out of the digital world and overlaid on the person's perceived real world. Pokémon Go is an example of a location-based AR application [1]. According to Digi-Capital, mobile AR could become the primary driver of a $108 billion VR/AR market by 2021 [2]. Since AR operates in the semantic context of the real world, fast and accurate object analysis is the key component for integrating digital information with environment elements in mobile AR applications. Although object detection and recognition have been well studied in computer vision research [3], the existing solutions are designed to run on machines with powerful CPU/GPU computing capability. Owing to the limited computing capability of mobile devices, these existing solutions are not appropriate for mobile AR applications. Hence, fast and accurate object analysis for mobile AR is a research gap that hinders the development and growth of mobile AR services. Mobile edge computing (MEC) is emerging as a new networking paradigm that enables low-latency networking and high-performance computing at the edge of mobile networks [4]. Leveraging MEC, we design and demonstrate a system for Fast and Accurate objeCt analysis at the Edge (FACE) for mobile AR applications. The FACE system consists of three major components: an AR application on the client side, a computer vision algorithm for object analysis on the server side, and a latency and accuracy optimization algorithm implemented as an edge computing controller. The latency of the object analysis is determined by the network status, the server computing load, and the frame size of the video captured by the AR device. The accuracy of the object analysis is related to the frame size of the AR video.
We observe that a larger frame size usually leads to higher detection and recognition accuracy, but also to longer transmission and computing latency in the network and on the server, respectively. The FACE system optimizes the frame size of the AR video and balances computing loads among edge servers, thereby improving object analysis performance in terms of both latency and accuracy. This demo shows the impact of frame size, network status, and computing loads on object analysis performance in mobile AR applications, and compares the performance of the FACE system with other solutions.
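The frame-size tradeoff described above can be sketched as a simple optimization: among candidate frame sizes, pick the largest one (highest accuracy) whose estimated end-to-end latency, transmission plus server computation, fits within a latency budget. The cost models and function names below are illustrative assumptions for this sketch, not the actual FACE algorithms.

```python
# Toy sketch of the latency/accuracy tradeoff in edge-assisted AR.
# All models here are illustrative assumptions, not the FACE system itself.

CANDIDATE_SIZES = [240, 360, 480, 720, 1080]  # frame heights in pixels

def frame_pixels(size: int) -> float:
    """Pixel count of a 16:9 frame with the given height."""
    return size * (size * 16 / 9)

def transmission_latency(size: int, bandwidth_bps: float) -> float:
    """Seconds to upload one compressed frame (assume ~0.1 byte/pixel)."""
    return frame_pixels(size) * 0.1 * 8 / bandwidth_bps

def compute_latency(size: int, server_pixels_per_sec: float) -> float:
    """Seconds for the edge server to analyze one frame; the rate
    models the server's current computing load."""
    return frame_pixels(size) / server_pixels_per_sec

def choose_frame_size(latency_budget_s: float,
                      bandwidth_bps: float,
                      server_pixels_per_sec: float) -> int:
    """Largest candidate frame size whose estimated end-to-end latency
    fits the budget; falls back to the smallest size otherwise."""
    best = CANDIDATE_SIZES[0]
    for size in CANDIDATE_SIZES:
        total = (transmission_latency(size, bandwidth_bps)
                 + compute_latency(size, server_pixels_per_sec))
        if total <= latency_budget_s:
            best = max(best, size)
    return best
```

Under this toy model, a faster uplink or a less-loaded server lets the controller select a larger frame, recovering accuracy; a congested network or overloaded server pushes it toward smaller frames to hold latency down, which mirrors the balancing behavior the demo illustrates.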

Original language: English (US)
Title of host publication: 2017 2nd ACM/IEEE Symposium on Edge Computing, SEC 2017
Publisher: Association for Computing Machinery, Inc
ISBN (Electronic): 9781450350877
DOIs
State: Published - Oct 12 2017
Externally published: Yes
Event: 2nd IEEE/ACM Symposium on Edge Computing, SEC 2017 - San Jose, United States
Duration: Oct 12 2017 - Oct 14 2017

Publication series

Name: 2017 2nd ACM/IEEE Symposium on Edge Computing, SEC 2017

Conference

Conference: 2nd IEEE/ACM Symposium on Edge Computing, SEC 2017
Country/Territory: United States
City: San Jose
Period: 10/12/17 - 10/14/17

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Hardware and Architecture
