New deep learning model boosts additive manufacturing X-ray analysis

How do defects form in melt pools during 3D printing? Using a database of over 10,000 synchrotron X-ray images, researchers set out to create a deep learning model to analyse and identify these features. The team from the Materials, Structure, and Manufacturing Group, based at the Research Complex and UCL, has published a new study on their results.


Additive manufacturing (AM), or 3D printing, has the potential to revolutionise traditional manufacturing methods. However, reliability issues with the new technology have so far prevented its use for safety-critical components. Imaging technologies are crucial to understanding why these problems occur, helping to identify defects at various stages of the process and to propose new printing strategies that improve component quality. Synchrotron X-ray imaging provides important insights into the complex physical phenomena that take place during AM. However, this method produces large volumes of images, making manual data processing overly time-consuming.

Existing machine-learning approaches are limited in their ability to account for melt pool (molten material) dynamics, which play an important role in determining the microstructural properties of a material. Furthermore, existing approaches rely on data from a single synchrotron facility, limiting their generalisability to images acquired with different instruments and imaging conditions. Researchers from University College London, UK, sought to build on prior research to create a more accurate and efficient model, trained with over 10,000 images from three different synchrotron facilities, including images obtained at beamline ID19 at the ESRF.

The resulting model, called AM-SegNet, is designed for the automatic segmentation and quantification of high-resolution X-ray images. Semantic segmentation works by assigning a class label to each pixel in an image, allowing features to be quantified and correlated across a large dataset with high confidence. Crucially, thanks to the lightweight convolution block proposed in the study (see Figure 1), this high accuracy is achieved without compromising speed: AM-SegNet delivers the highest segmentation accuracy (~96%) and the fastest processing speed (<4 ms per frame) among the state-of-the-art models compared. The trained AM-SegNet was then used to perform segmentation analysis on time-series X-ray images from AM experiments, and its application has been extended, with reasonable success, to other advanced manufacturing processes such as high-pressure die-casting.
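The paper's exact block design is not reproduced here, but the two ideas above can be sketched in a few lines: per-pixel labelling assigns each pixel the class with the highest predicted score, and a "lightweight" convolution block typically saves parameters by replacing a standard convolution with a depthwise separable one. The function names and channel sizes below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def label_pixels(scores: np.ndarray) -> np.ndarray:
    """Semantic segmentation output step: `scores` has shape (H, W, C),
    one score per class; each pixel receives the highest-scoring label."""
    return scores.argmax(axis=-1)

def conv_params(c_in: int, c_out: int, k: int = 3) -> int:
    """Weight count of a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def separable_conv_params(c_in: int, c_out: int, k: int = 3) -> int:
    """Depthwise separable convolution: a k x k depthwise pass (one filter
    per input channel) followed by a 1 x 1 pointwise channel mix."""
    return k * k * c_in + c_in * c_out

# Example: a 64 -> 64 channel layer with a 3 x 3 kernel
std = conv_params(64, 64)            # 36,864 weights
sep = separable_conv_params(64, 64)  # 4,672 weights, roughly 8x smaller
```

Parameter savings of this kind are one common route to the fast per-frame inference reported above, since fewer weights mean fewer multiply-accumulate operations per pixel.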

Figure 1: Pipeline of X-ray image processing, including flat field correction, background subtraction, image cropping and pixel labelling.
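The preprocessing steps named in the caption are standard synchrotron-imaging operations. A minimal NumPy sketch, assuming a conventional flat-field formula and a median-over-time background estimate (the variable names and the median choice are illustrative assumptions, not the authors' exact pipeline):

```python
import numpy as np

def flat_field_correct(raw, flat, dark, eps=1e-6):
    """Normalise out beam and detector non-uniformity with the standard
    (raw - dark) / (flat - dark) formula, guarding against division by zero."""
    return (raw - dark) / np.maximum(flat - dark, eps)

def subtract_background(frames):
    """Suppress static features by removing a per-pixel background,
    here estimated as the median over the time series."""
    background = np.median(frames, axis=0)
    return frames - background

# Tiny synthetic example: 5 frames of 4 x 4 pixels
rng = np.random.default_rng(0)
frames = rng.random((5, 4, 4))          # raw radiographs
flat = np.full((4, 4), 0.9)             # flat-field (beam only) image
dark = np.zeros((4, 4))                 # dark (shutter closed) image

corrected = np.stack([flat_field_correct(f, flat, dark) for f in frames])
moving = subtract_background(corrected)  # dynamic features only
cropped = moving[:, 1:3, 1:3]            # crop to a region of interest
```

After cropping, each pixel would be assigned a class label (pore, melt pool, background, etc.) to build the training set for the segmentation model.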


In summary, AM-SegNet allows faster and more accurate processing of X-ray imaging data, granting researchers greater insight into the AM process. Beyond AM, the automatic analysis achieved through this model has potential applications across many other advanced manufacturing processes. Finally, the broad scope of the data used to train AM-SegNet creates a benchmark database that other researchers can adopt to compare against their own models. This development anticipates a near future in which images from high-speed synchrotron experiments are segmented and quantified in real time by deep learning.

The source code of AM-SegNet is publicly available on GitHub (https://github.com/UCL-MSMaH/AM-SegNet).


This showcase article was first published as part of the TechTalk Series for The European Synchrotron. Find the original article here.