Augmented reality (AR) augments a real-world environment with computer-generated sensory information such as text, sound, and graphics. With advanced AR technologies, information about a person's surrounding physical environment can be brought out of the digital world and overlaid onto the person's perceived real world. Pokemon Go is an example of a location-based AR application. According to Digi-Capital, mobile AR could become the primary driver of a $108 billion VR/AR market by 2021. Since AR operates in the semantic context of the real world, fast and accurate object analysis is the key component for integrating digital information with environmental elements in mobile AR applications. Although object detection and recognition have been well studied in computer vision research, existing solutions are designed to run on machines with powerful CPU/GPU computing capability. Owing to the limited computing capability of mobile devices, these existing solutions are not appropriate for mobile AR applications. Hence, fast and accurate object analysis for mobile AR is a research gap that hinders the development and growth of mobile AR services.

Mobile edge computing (MEC) is emerging as a new networking paradigm that enables low-latency networking and high-performance computing at the edge of mobile networks. Leveraging MEC, we design and demonstrate a system for Fast and Accurate objeCt analysis at the Edge (FACE) for mobile AR applications. The FACE system consists of three major components: an AR application on the client side, a computer vision algorithm for object analysis on the server side, and a latency and accuracy optimization algorithm implemented as an edge computing controller. The latency of the object analysis is determined by the network status, the server computing load, and the frame size of the video captured by the AR device. The accuracy of the object analysis is related to the frame size of the AR video.
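As a minimal sketch of the client-side offloading loop implied by this architecture, the following shows how an AR client might send an encoded frame to an edge server and receive back object-analysis results. The length-prefixed framing, JSON result format, and function names are hypothetical illustrations, not the system's actual protocol.

```python
import json
import socket
import struct

def recv_exact(sock, n):
    """Read exactly n bytes from the socket, or raise if the peer closes."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("edge server closed connection")
        buf += chunk
    return buf

def send_frame(sock, jpeg_bytes):
    """Send one encoded video frame, length-prefixed so the server can delimit it."""
    sock.sendall(struct.pack("!I", len(jpeg_bytes)) + jpeg_bytes)

def recv_results(sock):
    """Receive a length-prefixed JSON list of detections,
    e.g. [{"label": "cup", "box": [x, y, w, h]}, ...]."""
    (n,) = struct.unpack("!I", recv_exact(sock, 4))
    return json.loads(recv_exact(sock, n))
```

Under this sketch, the per-frame latency the controller must manage is the transmission time of `jpeg_bytes` (which grows with frame size) plus the server's compute time before the results arrive.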
We observe that a larger frame size usually leads to higher detection and recognition accuracy but longer transmission and computing latency in the network and on the server, respectively. The FACE system optimizes the frame size of the AR video and balances computing loads among edge servers, thus improving object analysis performance in terms of both latency and accuracy. This demo shows the impact of frame size, network status, and computing loads on the performance of object analysis in mobile AR applications and compares the performance of the FACE system with other solutions.
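The frame-size trade-off described above can be sketched as a simple selection rule: among a set of candidate resolutions, pick the most accurate one whose predicted end-to-end latency fits a budget. This is an illustrative toy model, not the FACE controller's actual algorithm; the bandwidth figures, per-pixel compute cost, and accuracy curve are all assumed for the example.

```python
# Candidate AR video resolutions (width, height).
CANDIDATE_SIZES = [(320, 240), (640, 480), (1280, 720), (1920, 1080)]

def latency_s(size, bandwidth_bps, server_load):
    """Predicted end-to-end latency: transmission time + load-scaled compute time."""
    width, height = size
    frame_bits = width * height * 12                  # assume ~12 bits/pixel after compression
    transmit = frame_bits / bandwidth_bps             # network transfer time
    compute = (width * height / 1e6) * 0.05 * (1 + server_load)  # toy compute model
    return transmit + compute

def accuracy(size):
    """Toy accuracy curve: larger frames help, with diminishing returns."""
    width, height = size
    return 1.0 - 0.5 * (320 * 240 / (width * height)) ** 0.5

def best_frame_size(bandwidth_bps, server_load, latency_budget_s):
    """Pick the most accurate frame size whose predicted latency fits the budget.
    If none fits, fall back to the lowest-latency candidate."""
    feasible = [s for s in CANDIDATE_SIZES
                if latency_s(s, bandwidth_bps, server_load) <= latency_budget_s]
    if not feasible:
        return min(CANDIDATE_SIZES,
                   key=lambda s: latency_s(s, bandwidth_bps, server_load))
    return max(feasible, key=accuracy)
```

Under this model, a well-provisioned link admits a large frame, while a congested link forces the controller down to a small one; balancing loads among edge servers lowers `server_load` and thus admits larger, more accurate frames under the same latency budget.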