The increased availability of all types of data, together with tools that support the complete workflow—from data processing to deployment—means that AI models are becoming important in applications beyond the most commonly recognized ones, such as robotics and automated driving.
To develop an AI-driven product, engineers need to incorporate AI into the entire system design workflow. This workflow includes four main stages:
Data preparation
AI modeling
Simulation and test
Deployment
Stages of the AI workflow. Each stage builds on the previous one, culminating in an AI model that is incorporated into a complete AI system.
While the workflow is the same for most engineering projects, regardless of application, the end results are quite different, as the following examples show.
Automatic Defect Detection
Automated inspection and defect detection are critical for high-throughput quality control in production systems, and such systems are used across many industries to detect flaws on manufactured surfaces. Deployed AI defect detection algorithms can be faster and more robust than traditional image processing methods.
A simple architecture for a CNN. Features are automatically learned from images to identify different classes of objects, in this case, normal and defective parts.
Airbus built an AI model to automatically detect defects in aircraft pipes. Engineers recorded video of the pipes under different lighting conditions, angles, and positions. After labeling the video data, they designed and trained a deep learning network that uses techniques such as semantic segmentation to identify the positions of ventilation holes and wires on each pipe. A user interface displays the defect-detection results in real time.
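The details of Airbus's network are not given here; as an illustration of the basic CNN building blocks the figure above describes, the following NumPy sketch applies a hand-made edge-detection filter, a ReLU nonlinearity, and max pooling to a toy image. In a trained CNN the filter weights would be learned from labeled data rather than set by hand, and many filters would be stacked in layers.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation: slide the kernel over the image."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Downsample by taking the max over non-overlapping blocks."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# Toy 6x6 "image" with a vertical edge down the middle.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# Vertical-edge detector (a hand-made stand-in for a learned filter).
kernel = np.array([[-1., 0., 1.],
                   [-1., 0., 1.],
                   [-1., 0., 1.]])

features = max_pool(relu(conv2d(image, kernel)))
print(features.shape)  # (2, 2)
```

The feature map responds strongly only where the edge is, which is the essence of how convolutional layers localize defect signatures.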
Decoding MEG Signals
Using signal data in an AI system workflow comes with its own challenges. Raw signal data is rarely fed directly to AI models, as it tends to be noisy and memory-intensive. Instead, time-frequency techniques are often used to transform the data and extract the most important features for the models to learn. Engineers can transform their data in a variety of ways; for example, they can convert raw signal data into "images" using wavelet scattering.
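As a concrete illustration of one such signal-to-image transform (a simpler stand-in for wavelet scattering), the sketch below computes a magnitude spectrogram with plain NumPy: the signal is split into overlapping windowed frames, and the FFT of each frame becomes one column of the resulting "image".

```python
import numpy as np

def spectrogram(signal, frame_len=64, hop=32):
    """Magnitude spectrogram: overlapping windowed frames, FFT per frame."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    # Keep only non-negative frequencies; rows = freq bins, cols = time.
    return np.abs(np.fft.rfft(frames, axis=1)).T

# Synthetic test signal: a 50 Hz tone that switches to 120 Hz halfway.
fs = 1000
t = np.arange(fs) / fs
x = np.concatenate([np.sin(2 * np.pi * 50 * t[:500]),
                    np.sin(2 * np.pi * 120 * t[:500])])

S = spectrogram(x)
print(S.shape)  # (freq_bins, time_frames) = (33, 30)
```

The resulting 2-D array can be treated exactly like an image and passed to a CNN, as the figure below suggests.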
Signal data may be transformed using a variety of methods. The images can then be used in a CNN architecture to classify signal data using deep learning.
For patients with advanced amyotrophic lateral sclerosis (ALS), communication becomes increasingly difficult as the disease progresses. Researchers at the University of Texas at Austin have developed a noninvasive technology that uses wavelets and deep neural networks to decode magnetoencephalography (MEG) signals and detect entire phrases as the patient imagines speaking them.
The researchers used wavelet multiresolution analysis to denoise and decompose the MEG signals to specific neural oscillation bands. They extracted features from the denoised and decomposed signals and used the features to train a support vector machine (SVM) and a shallow artificial neural network (ANN). The team then customized three pretrained deep convolutional neural networks—AlexNet, ResNet, and Inception-ResNet—to decode MEG signals, increasing classification accuracy from 80% to more than 96%.
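The researchers' pipeline used dedicated wavelet tooling; as a simplified stand-in, the following sketch performs a multilevel Haar wavelet decomposition in NumPy and denoises a synthetic signal by thresholding the detail coefficients. The Haar wavelet, the threshold value, and the test signal are all illustrative assumptions, not the study's actual parameters.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar wavelet transform:
    approximation (low-pass) and detail (high-pass) coefficients."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_idwt(approx, detail):
    """Invert one level of the Haar transform."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

def decompose(x, levels=3):
    """Multiresolution analysis: peel off detail bands level by level."""
    details = []
    for _ in range(levels):
        x, d = haar_dwt(x)
        details.append(d)
    return x, details

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 4 * np.pi, 256))
noisy = clean + 0.3 * rng.standard_normal(256)

approx, details = decompose(noisy, levels=3)

# Crude denoising: zero the small detail coefficients (mostly noise),
# then reconstruct from the coarsest level back up.
for d in details:
    d[np.abs(d) < 0.5] = 0.0
x_rec = approx
for d in reversed(details):
    x_rec = haar_idwt(x_rec, d)

print(np.mean((x_rec - clean) ** 2) < np.mean((noisy - clean) ** 2))
```

The same decompose-threshold-reconstruct pattern underlies wavelet denoising generally; real systems choose the wavelet and thresholds to match the neural oscillation bands of interest.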
Radar-Based Object Detection
In autonomous cars, radar-based systems can detect pedestrians and other objects better than cameras at night, in inclement weather, and at greater distances. AI classification algorithms can be applied to radar signals to identify distinct groups of objects based on their signatures.
Radar signals as spectrograms used to classify three objects with distinct signatures.
To enable this capability, the radar team at PathPartner developed a classifier based on radar point cloud detection, implemented it on an embedded platform, and verified it in real-world test scenarios.
In early testing, the classifier took 5–8 seconds to detect a human—too long to be effective. The team resolved the delay by increasing the frame rate from 3 to 5 frames per second and by creating a new set of features computed as moving averages of the previous feature set. Through testing and rapid design iterations, they achieved object detection accuracy of 99%.
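PathPartner's exact feature set is not described here; the sketch below shows the general idea of deriving smoothed features as causal moving averages of per-frame features, with toy data standing in for radar measurements.

```python
import numpy as np

def moving_average_features(frames, window=5):
    """Replace each frame's feature vector with the mean of the
    current and up to `window - 1` previous frames (causal smoothing)."""
    frames = np.asarray(frames, dtype=float)
    out = np.empty_like(frames)
    for i in range(len(frames)):
        start = max(0, i - window + 1)
        out[i] = frames[start:i + 1].mean(axis=0)
    return out

# Toy per-frame radar features (e.g., mean Doppler, spread) with jitter.
rng = np.random.default_rng(1)
raw = np.array([[1.0, 0.5]]) + 0.2 * rng.standard_normal((20, 2))
smoothed = moving_average_features(raw, window=5)

# Smoothed features vary less frame to frame than raw ones.
print(smoothed.std(axis=0) < raw.std(axis=0))
```

Smoothing trades a little latency for much more stable inputs to the classifier, which is one plausible reason such features improved detection reliability.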
Predictive Maintenance
Machine failures that result in downtime can be costly for companies that rely on those machines for manufacturing and production. Deploying health monitoring and predictive maintenance systems can minimize these costs and maximize efficiency. Predictive maintenance applications use advanced statistics and machine learning algorithms to identify potential issues with machines before failures occur.
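As a minimal sketch of the statistical side of such a system (not Mondi's actual application, which is described below), the following NumPy example tracks a rolling RMS health indicator on simulated vibration data and raises an alarm when the indicator exceeds a threshold learned from the healthy baseline.

```python
import numpy as np

def rolling_rms(signal, window=50):
    """RMS over a sliding window of the most recent samples."""
    sq = np.asarray(signal, dtype=float) ** 2
    return np.sqrt(np.convolve(sq, np.ones(window) / window, mode="valid"))

# Simulated vibration: healthy noise, then a slowly growing fault signature.
rng = np.random.default_rng(2)
n = 2000
vib = 0.1 * rng.standard_normal(n)
vib[1200:] += np.linspace(0, 1.0, n - 1200) * np.sin(np.arange(n - 1200))

health = rolling_rms(vib, window=50)

# Alarm threshold set from the healthy baseline (mean + 5 sigma).
baseline = health[:500]
threshold = baseline.mean() + 5 * baseline.std()
first_alarm = int(np.argmax(health > threshold))
print(first_alarm > 1000)  # the alarm fires only after degradation begins
```

Real predictive maintenance models typically combine many such condition indicators and feed them to a trained classifier, but the monitor-threshold-warn loop is the same.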
Mondi Gronau’s plastic production plant delivers about 18 million tons of plastic and thin film products annually. The plant’s 900 workers operate approximately 60 plastic extrusion, printing, gluing, and winding machines 24 hours a day, 365 days a year. Mondi developed a health monitoring and predictive maintenance application that incorporates predictions from the machine learning model. The application enables equipment operators to receive warnings about potential failures before they occur. Mondi created a standalone executable version of the application, which is now used in production at the plant.
Diagnosing faults using the Classification Learner app, which compares a variety of machine learning algorithms to identify the most accurate model prior to deployment.
Source: MATLAB - MathWorks.com
The Tech Platform