Optimized Hardware Architectures for Ultra Low-Latency Object Detection Applications (Research)
Although artificial intelligence (AI) has been an important research topic for over sixty years, it was only in 2012, with the application of deep neural network (DNN) models based on convolutional neural networks (CNNs), the use of computationally efficient non-linear activations, and the availability of sufficient computational power, that deep learning ignited a new wave of machine learning. Current deep learning applications must evaluate networks with tens to hundreds of layers and millions of parameters to be trained and evaluated, so high-performance multi-processors and/or general-purpose GPUs (GP-GPUs) are required. Many applications, such as automotive, self-driving, and visual sorting, demand ultra-low latency and fast reaction times that current multi-processor hardware cannot provide. This Ph.D. research focuses on novel architectures for direct hardware implementation of CNN-based object detection, with emphasis on ultra-low latency.
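To give a sense of why dedicated hardware is attractive here, the sketch below estimates the multiply-accumulate (MAC) count of a single convolutional layer. The layer dimensions are illustrative assumptions chosen to resemble an early layer of a typical image-classification CNN; they are not taken from this project.

```python
def conv_layer_macs(out_h, out_w, out_ch, k, in_ch):
    """MACs needed to evaluate one stride-1 convolutional layer:
    each of the out_h * out_w * out_ch output values is a dot product
    over a k x k x in_ch input window."""
    return out_h * out_w * out_ch * (k * k * in_ch)

# Illustrative example: a 224x224 output feature map, 64 output channels,
# 3x3 kernels, 64 input channels (assumed, VGG-style dimensions).
macs = conv_layer_macs(224, 224, 64, 3, 64)
print(macs)  # 1,849,688,064 -> roughly 1.8 billion MACs for one layer
```

Multiplied across tens to hundreds of layers, this workload is what motivates direct hardware CNN architectures when reaction times must stay in the ultra-low-latency range.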
Period of project
01 January 2019 - 30 June 2024