SHF: Small: Collaborative Research: LDPD-Net: A Framework for Accelerated Architectures for Low-Density Permuted-Diagonal Deep Neural Networks

Project Details

Description

Deep learning has emerged as an important form of machine learning in which multiple layers of neural networks learn a system function from available input-output data. Deep learning has outperformed traditional machine-learning algorithms based on feature engineering in fields such as image recognition, healthcare, and autonomous vehicles. These methods are widely used in cloud computing, where large amounts of computational resources are available. Deep neural networks are typically trained using graphics processing units (GPUs) or tensor processing units (TPUs), and the training time and energy consumption grow with the complexity of the neural network. This project imposes sparsity and regularity as constraints on the structure of deep neural networks to reduce complexity and energy consumption by orders of magnitude, possibly at the expense of a slight degradation in performance. The impact lies in the formulation of a new family of structures for neural networks, referred to as the Low-Density Permuted-Diagonal Network, or LDPD-Net. The approach will enable the deployment of deep neural networks in energy-constrained and resource-constrained embedded platforms for inference tasks, including, but not limited to, unmanned vehicles/aerial systems, personalized healthcare, wearable and implantable devices, and mobile intelligent systems. In addition, the design methodology and techniques developed in this project can facilitate the investigation of efficient computation for other matrix/tensor-based big-data processing and analysis approaches, which may also find applications in data-driven neuroscience and data-driven signal processing. In addition to graduate students, the project will involve undergraduates via senior design projects and research experiences for undergraduates. The results of the project will be disseminated to the broader community through publications, presentations, and talks at industry and other academic institutions.

The main barriers to wide adoption of deep learning networks are computational-resource and energy-consumption constraints. These barriers can be relaxed by imposing sparsity and regularity on the layers of the deep neural network. The proposed low-density permuted-diagonal (LDPD) network can lead to orders-of-magnitude reductions in computational complexity, storage space, and energy consumption. The LDPD-Net will not be obtained by first training a dense network and then retaining only the weights corresponding to the LDPD structure; instead, the proposed network will be trained from scratch. The proposed LDPD-Net also enables scaling of the network to a specified computational platform. The proposed research has three thrusts: 1) develop novel resource-constrained and energy-constrained inference and training systems; 2) develop novel efficient hardware architectures that fully exploit the advantages of the LDPD-Net to achieve high performance; and 3) perform novel software and hardware co-design and co-optimization to explore the design space of the LDPD-Net. Using these, the efficacy of the proposed LDPD-Net will be validated and evaluated via software implementations on high-performance systems and low-power embedded systems, and via a hardware prototype on FPGA development boards.
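To make the structural constraint concrete: one common way to realize a permuted-diagonal sparsity pattern is to tile a layer's weight matrix into square sub-blocks and restrict each sub-block to a randomly permuted diagonal, giving a fixed density of 1/block_size. The minimal NumPy sketch below (the helper name `ldpd_mask` is hypothetical) illustrates this idea; the project's exact construction may differ.

```python
import numpy as np

def ldpd_mask(n_out, n_in, block, seed=0):
    """Build a binary mask whose (block x block) sub-blocks are each a
    randomly permuted diagonal matrix, so overall density is 1/block.
    Illustrative sketch only; not the project's exact construction."""
    assert n_out % block == 0 and n_in % block == 0
    rng = np.random.default_rng(seed)
    mask = np.zeros((n_out, n_in), dtype=np.float32)
    for bi in range(n_out // block):
        for bj in range(n_in // block):
            perm = rng.permutation(block)          # one permutation per sub-block
            rows = bi * block + np.arange(block)   # rows of this sub-block
            cols = bj * block + perm               # permuted column within the sub-block
            mask[rows, cols] = 1.0                 # exactly one nonzero per row per sub-block
    return mask
```

Multiplying a dense weight matrix elementwise by such a mask (and keeping the mask fixed during training) enforces the sparsity pattern while the network is trained from scratch, rather than pruned after the fact.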

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Status: Finished
Effective start/end date: 8/15/18 - 9/30/21

Funding

  • National Science Foundation: $224,997.00
