Product Inspection with Little Supervision
Product and assembly quality control based on computer vision is becoming ubiquitous. Such a setup consists of a vision system (e.g. one or more cameras) and a classification or regression algorithm. The detection algorithm is often based on machine learning (e.g. a convolutional neural network). These algorithms can work very robustly and achieve very high identification accuracy if they are trained on a sufficiently large input data set. The data set is composed of images in which the feature to be detected (the defect) is present. In the training phase the images must be labeled, meaning that the system needs to know whether each image depicts a quality issue or not. Typically, data sets are built by manually acquiring images of the features under different conditions and by manually identifying and labeling the pixels belonging to the feature.

This project aims to reduce the deployment cost of computer vision systems for quality control in manufacturing and assembly by requiring less data acquisition and labeling effort (i.e. supervision) while increasing robustness. The main goal is to research how the amount of real-world training data can be reduced so that visual inspection algorithms also work in a low-volume manufacturing context. A common approach to cope with a small training data set is data augmentation: slightly distorting the available data points to create new points that still belong to the same category. In vision, this usually consists of randomly applying straightforward variations such as cropping, rotating, scaling, mirroring, color balancing, and/or adjusting brightness across the collection of training pictures to create many slightly modified copies. However, such simple 2D image transformations cannot capture all the variation encountered in practice: the actual products are three-dimensional objects, and their visual appearance is governed by complex material properties, lighting conditions, and geometrical detail.
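The kind of straightforward 2D augmentation described above can be sketched in a few lines. The snippet below is an illustrative example only (the probabilities, crop ratio, and brightness range are arbitrary choices, not parameters from this project); it expands one labeled image, represented as a NumPy array, into many randomly varied copies that keep the same label.

```python
import numpy as np

def augment(image, rng):
    """Return a randomly transformed copy of an H x W x 3 uint8 image."""
    out = image
    # Random rotation by a multiple of 90 degrees.
    out = np.rot90(out, k=rng.integers(0, 4))
    # Random horizontal mirroring.
    if rng.random() < 0.5:
        out = out[:, ::-1]
    # Random crop to 90% of the current size.
    h, w = out.shape[:2]
    ch, cw = int(h * 0.9), int(w * 0.9)
    y = rng.integers(0, h - ch + 1)
    x = rng.integers(0, w - cw + 1)
    out = out[y:y + ch, x:x + cw]
    # Random multiplicative brightness adjustment, clipped to [0, 255].
    factor = rng.uniform(0.8, 1.2)
    return np.clip(out.astype(np.float32) * factor, 0, 255).astype(np.uint8)

# Expand one labeled image into several augmented copies (same label).
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
augmented = [augment(image, rng) for _ in range(10)]
```

In a real pipeline these transforms would be applied on the fly during training rather than materialized as copies, but the principle is the same: each variant inherits the label of its source image.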
The project pursues this goal by integrating computer graphics and computer vision technology to generate synthetic variations of the above-mentioned complex visual effects. Furthermore, synthetic defects can be introduced. In this way, a large labeled synthetic data set is obtained that can be used as training input for a visual inspection algorithm. The framework developed in this project can significantly accelerate the development of specialized machine learning algorithms to identify a product, perceive its pose, detect defects, and track the progress of assembly.
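To make the idea of synthetic, automatically labeled defects concrete, here is a deliberately simplified sketch (not the project's actual rendering pipeline, which would use 3D computer graphics): a toy "scratch" is stamped onto a clean image, and the pixel-level label mask comes for free because the generator knows exactly where the defect was placed.

```python
import numpy as np

def add_synthetic_defect(clean, rng):
    """Stamp a dark rectangular patch (a toy 'scratch') onto a clean
    grayscale image; return the defective image and its label mask."""
    h, w = clean.shape
    dh, dw = rng.integers(2, 6), rng.integers(8, 16)  # random defect size
    y = rng.integers(0, h - dh)
    x = rng.integers(0, w - dw)
    defective = clean.copy()
    defective[y:y + dh, x:x + dw] //= 4  # darken the defect region
    mask = np.zeros((h, w), dtype=np.uint8)
    mask[y:y + dh, x:x + dw] = 1  # 1 = defect pixel, 0 = background
    return defective, mask

# Generate a labeled synthetic training set from one clean view.
rng = np.random.default_rng(0)
clean = np.full((64, 64), 200, dtype=np.uint8)
pairs = [add_synthetic_defect(clean, rng) for _ in range(100)]
```

The point of the sketch is the labeling economics: every generated sample arrives with a perfect per-pixel annotation, whereas the manual workflow described earlier requires a human to draw that mask for each real image.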
Period of project
01 July 2019 - 30 September 2023