Efficient Hardware-Software Architectures for Deep-Learning Applications in the Internet of Things
After the technological waves of computing, the internet, and ubiquitous mobile communication, we are currently experiencing a new wave of "deep learning", whereby systems are no longer preprogrammed for specific applications but learn complex tasks themselves, in either a supervised or unsupervised way. The potential and applicability of deep learning has recently been demonstrated in various application domains such as image recognition, image classification, object detection, speech recognition, automatic language translation, and storytelling. Popular models use a deep layering of artificial neural networks with millions of weights. The computational load for training is huge, and the inference of specific recognition instances also requires substantial computation. Such computations are typically performed in the data centers of large internet companies, or on power-hungry general-purpose GPUs using floating-point arithmetic. This Ph.D. research aims at developing novel hardware/software architectures for deep-learning applications in embedded and Internet-of-Things (IoT) systems. Special emphasis is placed on ultra-low power consumption and on dedicated processing and memory architectures for activation and weight management. Depending on the application requirements, it should be possible to trade off recognition accuracy against power consumption by reducing both activations and weights from low fixed-point precision down to single-bit widths.
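The accuracy/power trade-off hinges on how much precision the weights and activations retain. A minimal sketch of the idea, using NumPy and hypothetical helper names (`quantize_fixed_point`, `binarize` are illustrative, not part of any specific framework): uniform symmetric fixed-point quantization at various bit-widths, and single-bit quantization that keeps only the sign scaled by the mean magnitude, as in binary-weight networks.

```python
import numpy as np

def quantize_fixed_point(w, bits):
    """Uniformly quantize weights to a signed fixed-point grid with `bits` bits.
    Illustrative sketch: symmetric scaling by the maximum absolute weight."""
    levels = 2 ** (bits - 1) - 1          # e.g. 127 representable levels at 8 bits
    scale = np.max(np.abs(w)) / levels
    return np.round(w / scale) * scale    # dequantized values on the grid

def binarize(w):
    """Single-bit quantization: keep only the sign of each weight,
    scaled by the mean magnitude of the tensor."""
    alpha = np.mean(np.abs(w))
    return alpha * np.sign(w)

# Compare the reconstruction error at decreasing bit-widths.
rng = np.random.default_rng(0)
w = rng.normal(size=1000).astype(np.float32)

for bits in (8, 4, 2):
    err = np.mean((w - quantize_fixed_point(w, bits)) ** 2)
    print(f"{bits}-bit fixed point: MSE = {err:.5f}")
err1 = np.mean((w - binarize(w)) ** 2)
print(f"1-bit (binary):     MSE = {err1:.5f}")
```

The error grows as the bit-width shrinks, while the storage and arithmetic cost per weight drops sharply; at a single bit, multiplications degenerate into sign flips and additions, which is what makes such schemes attractive for ultra-low-power hardware.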
Period of project
01 January 2019 - 31 December 2022