Author manuscript; available in PMC: 2021 Jul 9.
Published in final edited form as: Proc IEEE Inst Electr Electron Eng. 2021 Apr;109(4):10.1109/JPROC.2020.3034519. doi: 10.1109/JPROC.2020.3034519

Six-Sigma Quality Management of Additive Manufacturing

HUI YANG 1, PRAHALAD RAO 2, TIMOTHY SIMPSON 3,4, YAN LU 5, PAUL WITHERELL 6, ABDALLA R NASSAR 7, EDWARD REUTZEL 8, SOUNDAR KUMARA 9
PMCID: PMC8269016  NIHMSID: NIHMS1701342  PMID: 34248180

Abstract

Quality is a key determinant in deploying new processes, products, or services and influences the adoption of emerging manufacturing technologies. The advent of additive manufacturing (AM) as a manufacturing process has the potential to revolutionize a host of enterprise-related functions from production to the supply chain. The unprecedented level of design flexibility and expanded functionality offered by AM, coupled with greatly reduced lead times, can potentially pave the way for mass customization. However, widespread application of AM is currently hampered by technical challenges in process repeatability and quality management. The breakthrough effect of six sigma (6S) has been demonstrated in traditional manufacturing industries (e.g., semiconductor and automotive industries) in the context of quality planning, control, and improvement through the intensive use of data, statistics, and optimization. 6S entails a data-driven DMAIC methodology of five steps: define, measure, analyze, improve, and control. Notwithstanding the sustained successes of the 6S body of knowledge in a variety of established industries ranging from manufacturing and healthcare to logistics and beyond, there is a dearth of concentrated application of 6S quality management approaches in the context of AM. In this article, we propose to design, develop, and implement a new DMAIC methodology for the 6S quality management of AM. First, we define the specific quality challenges arising from AM layerwise fabrication and mass customization (even one-of-a-kind production). Second, we present a review of AM metrology and sensing techniques, from materials through design, process, and environment, to postbuild inspection. Third, we contextualize a framework for realizing the full potential of data from AM systems and emphasize the need for analytical methods and tools. We propose and delineate the utility of new data-driven analytical methods, including deep learning, machine learning, and network science, to characterize and model the interrelationships between engineering design, machine settings, process variability, and final build quality. Fourth, we present the methodologies of ontology analytics, design of experiments (DOE), and simulation analysis for AM system improvements. In closing, new process control approaches are discussed to optimize action plans, once an anomaly is detected, with specific consideration of lead time and energy consumption. We posit that this work will catalyze more in-depth investigations and multidisciplinary research efforts to accelerate the application of 6S quality management in AM.

Keywords: Additive manufacturing (AM), artificial intelligence (AI), data analytics, engineering design, quality management, sensor systems, simulation modeling

I. INTRODUCTION

Additive manufacturing (AM), also known as 3-D printing, is a collective term for processes in which a product is made by layer-upon-layer deposition of materials. The advent of commercial AM systems has enabled the fabrication of parts with complex geometry directly from computer-aided design (CAD) models with minimal intervening steps. Until recently, AM parts were primarily restricted to prototype-demonstrator roles; the viability of AM parts has now evolved to the extent that they are used in production and final assemblies. AM provides significant advantages over traditional subtractive (machining) and formative (casting, welding, and molding) manufacturing processes, such as eliminating specialized tooling costs, reducing material waste and life-cycle costs, enabling the creation of intricate and free-form geometries, and expanding product functionality for a variety of industrial applications.

The powder bed fusion (PBF) process is commonly used for the AM of products from a bed of powdered materials. Examples of PBF printing techniques include direct metal laser sintering (DMLS), electron beam melting (EBM), selective heat sintering (SHS), selective laser melting (SLM), and selective laser sintering (SLS), which use different types of energy sources (e.g., laser, electron beams, or heat) to melt or sinter powders together to fabricate solid 3-D parts. Note that laser PBF (LPBF) leverages a laser source to melt or sinter metal powders in a layer-by-layer fashion to create the final build. In addition to the PBF AM process, there exists a variety of other AM processes, such as material jetting, binder jetting, material extrusion, directed energy deposition (DED), sheet lamination, and vat photopolymerization. The choice of materials ranges from metals and composites to polymers, biomaterials, and ceramics.

Notably, technical challenges in quality management hamper the widespread adoption of AM technology in industry. For example, the microstructure and mechanical properties of AM builds are influenced by complex, hard-to-model process phenomena (e.g., thermal effects and residual stresses). These intricate process interactions, in turn, can lead to hidden internal defects that deteriorate the quality of the parts. As a result, the rejection rate of AM parts is high, particularly when considering one-of-a-kind production. In real-world case studies, it is not uncommon that parts built simultaneously from the same CAD model in the same commercial AM machine yield different quality outcomes. As shown in Fig. 1, seven parts are built simultaneously from the same CAD model in the same commercial AM machine, and only two of them are defect-free. The high rejection rate of AM builds and the associated costs significantly hinder the wider exploitation of AM capabilities beyond the current rapid prototyping status quo.

Fig. 1. Seven stainless steel parts built on a commercial AM system in a case study at the University of Nebraska–Lincoln. The parts only differ in their orientation, with all other process conditions identical.

Six sigma (6S) is a widely used practice in traditional manufacturing industries (e.g., semiconductor and automotive industries) for quality planning, quality assurance (QA), quality control (QC), and continuous improvements with the extensive use of data, statistics, and optimization [5], [6]. As shown in Fig. 2, 6S entails a data-driven Define, Measure, Analyze, Improve, and Control (DMAIC) methodology.

Fig. 2. DMAIC methodology for 6S quality management.

  1. Define: Outline the quality challenges based on customer requirements.

  2. Measure: Collect data about key process variables from the manufacturing systems.

  3. Analyze: Extract useful information pertinent to defect-causing factors.

  4. Improve: Design solutions and methods to improve the manufacturing system.

  5. Control: Develop process management plans and optimal control policies when the manufacturing system is out of control.

The goal of the 6S techniques is to identify and remove the root causes of defects and further improve the quality of final products. The success of 6S can be seen through Motorola’s application of its philosophies. In 1978, the company had a net income of $2.3 billion; by 1988, the net income had increased to $8.3 billion, an increase of roughly 260%. Similarly, General Electric saw massive success with its own 6S program and achieved $4 billion in savings per year. The list goes on with other notable examples, including Toyota, Ford, Polaroid, General Motors, and many more.

Although 6S has achieved significant success in a host of domains ranging from manufacturing and healthcare to logistics, more research needs to be done to initiate the practice of 6S quality management in the specific context of AM. In this article, we propose to design, develop, and implement a new DMAIC methodology for the 6S quality management of AM. First, we define the specific quality challenges arising from AM layerwise fabrication and mass customization (even one-of-a-kind production). Second, we present a review of AM metrology and sensing techniques, from materials through design, process, and environment, to postbuild inspection. Third, realizing the full potential of AM-sensing data depends, to a great extent, on the availability of analytical methods and tools. Accordingly, we propose and develop new data-driven analytical methods, including artificial intelligence (AI), machine learning, and network science, to characterize and model the interrelationships between engineering design, machine settings, process variability, and final build quality. Fourth, we present the methodologies of ontology modeling, design of experiments (DOE), and simulation analysis for continuous quality improvements. Finally, new control approaches are discussed to optimize action plans, once an anomaly is detected, with specific consideration of lead time and energy consumption. It is worth noting that this review article mainly focuses on metal AM processes, given their popularity in high-value industries, such as aerospace, automotive, and healthcare. However, the proposed 6S framework is applicable, in general, to quality management of different AM processes through the intensive use of data, statistics, and optimization. We hope that this article can help catalyze more in-depth investigations and multidisciplinary research efforts to lay the foundation of a new scientific basis of 6S quality management for AM processes.

The rest of this article is organized as follows. Section II discusses specific quality challenges arising from unique AM characteristics, such as mass customization (even one-of-a-kind), low-volume production, multilayer part fabrication, and sequential manufacturing. Section III reviews the development of advanced sensing and measurement systems to increase information visibility for AM quality management. Then, we present AM data analytics in Section IV. Continuous quality improvements for AM are discussed in Section V, and Section VI presents the sequential optimization of layerwise control strategies for AM. Section VII discusses the 6S quality management for AM and concludes this article.

II. DEFINE QUALITY CHALLENGES

AM’s capability to build objects from the ground up stimulates the imagination, causing one to envision a broader range of possibilities during design. Nonetheless, AM faces a broad range of quality challenges that hamper its wider adoption in industry. The urgent need to produce complex builds in low volume and high mix, combined with rapid advancements in AM technology, poses significant challenges to current paradigms for AM quality management. As such, new standards are being developed for material and process qualification and part certification [7], [8], countless experiments and modeling/simulation studies are being conducted to gain insights into the complex physics of AM processes [9]–[11], new in situ sensing capabilities and process monitoring strategies are being developed for process control [12]–[16], and efforts are underway to capture, store, manage, and assure pedigreed data for QA/QC of AM parts [17], [18]. In spite of these advances, repeatability and reliability issues seen in many metal AM processes [e.g., laser PBF (LPBF) and DED] unfortunately exacerbate these challenges, particularly when trying to produce end-use parts for critical applications and highly regulated industries (e.g., aerospace and medical) [19]–[22].

A. Quality Management for High-Volume–Low-Mix Production

It is well known that “quality is inversely proportional to variability” [23], [24]. Fig. 3 shows mass manufacturing, which focuses on the production of a large volume of parts with a low-level mix. Traditionally, the measure of “variability” refers to the scenario of high-volume–low-mix production in the context of mass manufacturing. In other words, if a large number of parts is produced from the manufacturing system, then it is a logical step to characterize and measure the process variability and repeatability. The variability can be due to random or assignable causes in the manufacturing process. If the quality variations are solely due to random factors (i.e., nonassignable causes, not identifiable), then the distribution should be normal. However, if there are assignable causes, then statistical control charts are often used to monitor the process and detect when and how the process performance is affected. As such, the process can be stopped to look for assignable causes and eliminate them to resume normal production. Quality improvement involves a series of managerial, operational, and engineering activities to reduce the variability in the process. In particular, statistical DOE is utilized to realize a robust process by studying the effects of controllable settings under the uncertainty of uncontrollable factors, an approach also called “robust parameter design” [25].
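To make the monitoring step concrete, the following is a minimal sketch of Shewhart x̄-chart limits computed from subgrouped measurements; the data, subgroup size, and nominal dimension are hypothetical, and the small c4 bias correction is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical in-control measurements: 25 subgroups of n = 5 parts each,
# nominal dimension 10.00 mm with purely random (nonassignable) variation.
subgroups = rng.normal(loc=10.00, scale=0.02, size=(25, 5))

xbar = subgroups.mean(axis=1)                       # subgroup means
center = xbar.mean()                                # center line
# Approximate sigma of the subgroup mean (c4 bias correction omitted).
sigma_xbar = subgroups.std(axis=1, ddof=1).mean() / np.sqrt(subgroups.shape[1])

ucl = center + 3 * sigma_xbar                       # upper 3-sigma control limit
lcl = center - 3 * sigma_xbar                       # lower 3-sigma control limit

# Subgroups beyond the limits suggest an assignable cause worth investigating.
out_of_control = np.flatnonzero((xbar > ucl) | (xbar < lcl))
print(f"Center {center:.4f} mm, UCL {ucl:.4f} mm, LCL {lcl:.4f} mm")
print("Out-of-control subgroups:", out_of_control)
```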

Fig. 3. High-volume–low-mix production scheme in mass manufacturing.

As a result, the 6S program emerged to meet the needs of mass manufacturing in the automotive and semiconductor industries and has achieved enormous success over the past several decades. As shown in Fig. 4, the 6S program utilizes the DMAIC methodology to reduce process variability to the level at which failures and defects are extremely unlikely. If the μ ± 3σ limits coincide with the product specification limits, then the probability of a part falling outside the μ ± 3σ limits is 0.27%, which means that the number of defective parts per million (PPM) is about 2700. For the μ ± 6σ limits, the probability is 0.0000002%, which means that the PPM is 0.002 (i.e., extremely unlikely). In the 6S scenario, if a finished product has 100 components and each component must be nondefective for the product to be nondefective, then the probability of the product being nondefective is (0.999999998)^100 ≈ 1.0 (a short numerical check of these figures follows the phase list below). The 6S concepts (e.g., design for 6S, lean production, and variation reduction) have been widely used to improve the capability of many business processes. The development of the 6S program has gone through three phases as follows.

Fig. 4. Area under the normal curve and the proportion of defectives produced.

  1. Phase I: Address process monitoring, defect elimination, and variability reduction.

  2. Phase II: Reduce total production cost and increase system performance.

  3. Phase III: Emphasize the value creation to business organizations.
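Returning to the sigma-level arithmetic above, the following is a quick numerical check using the standard normal distribution; the centered-process assumption (no mean shift) is ours.

```python
from scipy.stats import norm

def ppm_outside(k):
    """Defective parts per million for a centered process at +/- k sigma."""
    return 2 * norm.sf(k) * 1e6     # two-sided tail probability, scaled to PPM

for k in (3, 6):
    print(f"+/-{k} sigma: {ppm_outside(k):.4g} PPM defective")   # ~2700 and ~0.002

# Rolled-throughput yield of a 100-component product at the 6-sigma level.
p_component_ok = 1 - 2 * norm.sf(6)                              # 0.999999998
print(f"P(100-component product nondefective) = {p_component_ok ** 100:.9f}")
```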

However, AM moves toward a high level of customization by enabling low-volume–high-mix production (even one-of-a-kind production) directly from customers’ digital designs, resulting in “economies of one” [26]. A large quantity of parts produced from the same design, as in the traditional paradigm of mass manufacturing, is no longer available to establish and measure process variability. Therefore, 6S practice carried over from mass manufacturing is of limited applicability to AM in its current form. There is an urgent need to push forward the next phase of the 6S program for AM. Fig. 5 shows the low-volume–high-mix production scheme for a customized design, which may only be fabricated once or in low volume. Note that there are significant layer-to-layer variations in part geometry. AM thus presents new QA/QC challenges: mass customization, low-volume production, and layer-to-layer variations in part geometry. In particular, because of the customized design and layer-by-layer fabrication in AM, it is not uncommon that each layer is different in terms of part geometry. Hence, it is difficult to characterize and measure the process variability and repeatability from one layer to another or from one build to the next.

Fig. 5. Low-volume–high-mix production scheme for a customized design with layer-to-layer variations in part geometry in 3-D printing.

B. Multilayer and Sequential Manufacturing Process

The layer-by-layer approach of AM brings significant challenges for QA/QC. Many AM processes use metal powders as raw materials, where particle sizes and shapes vary between batches. Also, a laser or electron beam is utilized as the heating source in LPBF and DED. Slight variations in the intensity and diameter of the beam contribute to the issue of repeatability, both between different machines and within the same machine at different locations on the build plate. Thus, every parameter that affects the end result of the process must be tailored to the materials used [27]. Furthermore, an AM system can utilize different layer thicknesses when manufacturing parts. A 2-cm-high object that uses a layer thickness of 100 μm will require 200 layers. If the layer thickness is 50 μm, then the number of layers would be 400. Each of these layers has the opportunity for failure. Even if a single layer has a small probability of containing a defect, the overall build will have a high probability of having at least one defect. To illustrate the effects and challenges of multilayer fabrication, consider the following example (a short script verifying these numbers follows the list).

  1. If the probability to contain defects is 0.0114 in a layer, then what is the probability for this layer to be nondefective?
    1 − 0.0114 = 98.86%.
  2. For an AM build with 100 layers, what is the probability to have no defects?
    (1 − 0.0114)^100 = 31.77%.
  3. For an AM build with 100 layers, what is the probability of having at least a defect?
    1 − (1 − 0.0114)^100 = 68.23%.
  4. If the probability of a build to contain defects is specified to be less than 10%, then what should be the probability for a layer to have defects?
    1 − (1 − x)^100 = 10% ⟹ x ≈ 0.0011.
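The following short script verifies the numbers in the example above under the stated independence assumption.

```python
# Layer-defect arithmetic from the example above (layers assumed independent).
p_layer_defect = 0.0114
n_layers = 100

p_layer_ok = 1 - p_layer_defect                      # 0.9886 -> 98.86%
p_build_ok = p_layer_ok ** n_layers                  # ~0.3177 -> 31.77%
p_build_defect = 1 - p_build_ok                      # ~0.6823 -> 68.23%

# Per-layer defect probability needed so that P(build has a defect) <= 10%.
p_layer_required = 1 - (1 - 0.10) ** (1 / n_layers)  # ~0.00105 -> ~0.0011

print(f"P(layer OK)            = {p_layer_ok:.2%}")
print(f"P(100-layer build OK)  = {p_build_ok:.2%}")
print(f"P(at least one defect) = {p_build_defect:.2%}")
print(f"Required p per layer   = {p_layer_required:.4f}")
```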

It is worth noting that this example assumes that the layers are independent of each other. However, in AM, the layers are highly correlated with one another. In other words, the defects in one layer can be corrected during the processing of the subsequent layer or can negatively impact the next layer and all the subsequent layers. This is analogous to the multistage assembly line in the traditional manufacturing paradigm. In the automotive industry, a car body assembly often involves a sequence of assembly operations. The variations in one assembly step can potentially introduce a stream of variations in the following steps [28]. However, the physics of multistage assembly operations differ from those of multilayer AM, in which LPBF is performed in each layer. A 6S program for multistage manufacturing systems typically analyzes the current state of a process and then incrementally improves system performance with statistical methods and tools.

Establishing a 6S paradigm for AM calls for new innovations to tackle these emerging quality challenges, including mass customization, low-volume production, layer-to-layer variations, and the multilayer manufacturing process, which are unique when moving from traditional mass production to the new paradigm of AM. “Measure” requires the design and development of new sensor technologies for materials, processes, and postbuild inspections at different stages of AM. “Analyze” should be able to handle and connect the big data that are generated during the AM product lifecycle. “Improve” calls for a better understanding of the process physics and an ontological knowledge of the underlying phenomena through statistical DOE on physical machines and AM processes and/or computer experiments on simulation models. “Control” should consider the sequential decision-making problem for the multilayer fabrication process in AM and further address the multiobjective optimization of AM, for example, minimizing the total cost (e.g., energy or time) consumed in the LPBF process while maximizing the quality of final parts. The new scientific basis of 6S quality management will impact the production-scale viability of AM and enable wider exploitation of AM capabilities beyond the current rapid prototyping status quo.

III. MEASURE AM

In the DMAIC approach, the measure step is aimed at collecting data on key variables during the AM process, such as: 1) process input variables (e.g., characteristics of metal powders and design parameters); 2) in situ variables (e.g., machine settings, layerwise imaging, and thermal maps); and 3) process output variables (e.g., postbuild CT scans). Modern manufacturing industries have invested in advanced sensing and measurement systems to cope with high levels of complexity in AM and increase the information visibility of key variables from raw materials and the manufacturing process to final products. As mentioned in Section II, low-volume–high-mix production presents specific challenges to AM quality management. Rich data readily available from the “measure” step provide an opportunity for the “analyze” step to develop an in-depth understanding of the current state and performance of the AM process. Here, data can be collected online (i.e., in the layer-by-layer fabrication process) or offline (i.e., prebuild material characterization or postbuild CT scans). The offline measurements allow for the inspection of quality but are of limited help for in-process corrections or repairs because the defects are often already embedded within the build. Online sensing captures the dynamics of process–machine interactions and offers a higher level of flexibility for on-the-fly control actions. The data collected in the “measure” step can be visualized in different ways to provide comprehensible information about the AM process, for example, image stacks, 3-D point clouds, histograms, network representations, and Fourier and wavelet transformations. An effective visualization further helps the “analyze” step to estimate and extract salient features about process variability or product defects.

A. Prebuild Measurement and Characterization

Fig. 6 shows a broad representation of AM qualification flow about the material, process, and product. Metal powders are used as the input to the LPBF (and many DED) AM machines. Material qualification is indispensable to avoid the scenario of “garbage in, garbage out.” Standard powder characterization techniques include X-ray photoelectron spectroscopy, sieve analysis, inert gas fusion, scanning electron microscopy, laser light diffraction, and differential thermal analysis. These techniques allow the characterization of powders in three main aspects: particle morphology and distribution (e.g., the shape, surface roughness, or size), powder chemistry (i.e., elemental composition), and powder microstructure (e.g., porosity and rheology) (see [29] for a review of AM powder characterization). The standard practices for sampling metal powders are provided by standards organizations, such as ASTM International B215 and Metal Powder Industries Federation (MPIF). These sampling standards provide practical guidelines to obtain a representative sample from the whole lot and then apply the powder characterization techniques to measure the powder properties. Furthermore, manufacturers will be able to leverage the characterization results to pose requirements for suppliers, select the best supplier, and improve the powder reuse practices.

Fig. 6. Broad representation of AM qualification flow about the material, process, and product.

After prebuild material qualification, there are also system qualifications in the AM process and performance qualification of the part after the build is completed (see Fig. 6). In this article, we mainly focus on the in situ sensing of AM process performance to improve the understanding of machine–process physics, in-process monitoring, diagnostics, and prognostics (see details in Section III-B). Then, we briefly discuss postbuild measurement and inspection in Section III-C.

B. In Situ Sensing and Measurement

The in situ sensing of AM is a rapidly developing area encompassing new hardware systems, approaches for system integration, and data analytics. The need for in situ sensing in AM is motivated by the fact that a defect in any layer, if not detected and promptly corrected, will remain permanently sealed in on the deposition of subsequent layers. Recent review articles in this area include Grasso and Colosimo [30], Mani et al. [31], [32], Moylan et al. [33], Everton et al. [34], Spears and Gold [35], and Tapia and Elwany [36]. The challenges for in situ sensing of AM are steep and discussed as follows.

  1. Each type of AM process (there are currently seven) imposes a unique layer bonding mechanism ranging from photochemical-initiated bonding to thermal-induced bonding; therefore, it is not possible to devise a generalized sensing scenario that is decoupled from the process physics.

  2. The defects in AM are multifarious and are linked to specific process phenomena that range across length scales [37]. For example, delamination and cracking in LPBF processes occur at the part level (100 μm and above, and extending to the millimeter scale and beyond) due to thermal-induced residual stresses. In contrast, balling and keyhole melting are related to the instability at the meltpool level (less than 100 μm). A single sensor is not likely capable of capturing these diverse phenomena.

  3. Integrating sensors into AM machines is difficult due to the tight form factor and mechanics of the process [38]. In the fused filament fabrication process, for instance, material in the form of a polymer filament is heated past its glass transition temperature and deposited by a nozzle. The gap between the nozzle and the top of the part is on the order of tenths of a millimeter. Therefore, it is impractical to mount sensors, such as an infrared (IR) thermal camera, near the nozzle to obtain the surface temperature distribution, because a large portion of the part surface is blocked by the nozzle as it translates over the part [39]. A similar argument applies to the material jetting process.

  4. In the LPBF process, layers of the powder material are spread across a bed and melted with a laser. The temperature gradient in the part is responsible for a host of defects, such as microstructural heterogeneity and delamination [40]–[42]. However, only an estimate of the surface temperature distribution can be obtained with IR cameras and pyrometers. The temperature at the bottom layer is not easy to obtain in LPBF because the part is surrounded by powder, which acts as an insulating medium and progressively attenuates the thermal signatures generated as the laser melts the material on the layers near the top.

    Moreover, it is not possible to obtain the temperature distribution in the interior of the part without altering the process flow; for example, a thermocouple can be introduced inside the part only by stopping the process [43], [44]. However, this will lead to loss of the chamber atmosphere and invariably alter the thermal profile. Researchers in Penn State’s CIMP-3D have pioneered wireless sensing attachments that fit into the powder bed and collect temperature information from thermocouples and strain gages [43], [44]. Moreover, the thermal phenomena in LPBF occur at multiple spatial and temporal scales. For example, the meltpool-related thermal phenomena are on the order of a few micrometers and last for tenths of seconds, with cooling rates exceeding 10^5 °C/s. In the same vein, the surface-level thermal signatures last for a few seconds. Hence, different thermal imaging modalities are required for measuring meltpool-level and part-level phenomena. For meltpool thermal imaging, a high frame-rate thermal camera with an imaging range in the shortwave IR region is typically used, while, at the part level, a long-wave IR camera with a large field of view and smaller frame rate and integration time is used [33], [45], [46].

  5. Even though the process dynamics might be notionally similar, as in DED and LPBF, sensors cannot be readily transferred from one process to the other. For example, in the DED AM process, the meltpool is several orders of magnitude larger than in LPBF; in the former, the meltpool can approach the millimeter level, while, in LPBF, it is close to 100 μm [47]. Likewise, the deposition rates in DED can be more than ten times those of LPBF. Moreover, in DED, the part is exposed on all sides of the chamber, and therefore, convection forces (due to carrier gases from the nozzle) and radiation are all active at the same time. Consequently, it is exceedingly difficult to demarcate and measure all of these heat transfer mechanisms.

  6. The sensor measurements must be synchronized with the state of the process if the data are to be used for process control. Furthermore, the data from multiple sensors must be synchronized with each other. From an LPBF perspective, recording the process state would involve capturing the position of the laser (i.e., the angular displacement of the galvanometer) and merging the laser position with the sensor data being acquired. In other words, the data acquisition system must communicate with the AM machine and sensor hardware with a temporal error in the microsecond range (the laser in LPBF can translate at a velocity exceeding 0.5–1.0 m/s). The challenge is further complicated given that the sensor array may include both temporal sensors, such as photodetectors, and image-based sensors, such as thermal and optical cameras (a sketch of timestamp-based alignment follows this list).
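As referenced in item 6, one way to realize such alignment offline is timestamp-based merging of asynchronous streams. The sketch below pairs each photodetector sample with the most recent laser-position record within a tolerance; the column names, sampling rates, scan trajectory, and tolerance are illustrative assumptions rather than the interface of any particular machine.

```python
import numpy as np
import pandas as pd

# Hypothetical streams over one second: laser position logged at 20 kHz,
# a photodetector sampled at 10 kHz (both timestamps in seconds).
t_pos = np.arange(0, 1.0, 1 / 20_000)
laser_pos = pd.DataFrame({
    "t": t_pos,
    "x_mm": (800.0 * t_pos) % 50,                 # 0.8 m/s scan over 50-mm hatches
    "y_mm": 0.1 * np.floor(800.0 * t_pos / 50),   # step over after each hatch
})

t_pd = np.arange(0, 1.0, 1 / 10_000)
photodiode = pd.DataFrame({
    "t": t_pd,
    "intensity": np.random.default_rng(1).normal(1.0, 0.05, t_pd.size),
})

# Pair each intensity sample with the latest position record no older than 100 us.
merged = pd.merge_asof(
    photodiode, laser_pos, on="t", direction="backward", tolerance=100e-6
)
print(merged.head())
```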

To overcome these barriers, researchers use heterogeneous sensing modalities [47]. A notable example of such a multiphenomena sensing array in LPBF is the so-called open architecture LPBF platform at the Edison Welding Institute (EWI), which is currently instrumented with the following sensors [48], [49]:

  1. local sensors for monitoring the meltpool-level phenomena (10–200 μm scale):
    1. a coaxial shortwave IR thermal camera for meltpool temperature measurement (85 frames per second (fps), 13.4 × 7.12 mm field of view, and 5-μm spatial resolution);
    2. a coaxial high-speed camera to track the meltpool shape (1000 fps and 10-μm resolution);
    3. a photodetector to record the meltpool intensity (350–1100 nm and 10-kHz sampling rate);
    4. a spectrometer to measure the optical emission in the meltpool region (200–1100 nm and 1 kHz).
  2. global sensors for monitoring phenomena at the bulk part level (500 μm–100 mm):
    1. a coaxial short-wave IR thermal camera focused on the powder bed to detect part temperature gradients (4 fps, 127 × 95 mm field of view, and 400-μm resolution);
    2. a laser interferometer (405 nm) for measuring surface finish and distortion in a layer;
    3. structured-light optical imaging of the powder bed with two digital cameras to detect distortion of the part (21 fps, 25.4 × 14.7 mm field of view, 6.6-μm pixel resolution, and 165-pixel/mm fidelity);
    4. an acoustic microphone and a surface acoustic wave transducer to detect when the part cracks due to distortion or makes contact with the powder recoater (sampling rate of 10–40 kHz).
  3. sensor data acquisition, data synchronization with the laser position, and noise isolation:
    1. close to two terabytes of sensor data are acquired in a 12-h build cycle on EWI’s LPBF platform. Researchers at EWI have built the hardware and software mechanisms to ensure the seamless acquisition of sensor data of such high volume, variety, and sampling speed (a big data problem).

EWI’s open-architecture LPBF platform provides the capability to measure the temperature distribution in the part and track changes in thermal gradients that are not available on other commercial LPBF systems. Another recently operational and comparable apparatus is the Additive Manufacturing Metrology Testbed at the National Institute of Standards and Technology (NIST). In addition, CIMP-3D at Penn State developed a multisensor suite for monitoring and control of a commercial 3D Systems ProX 320 PBFAM system, as shown in Fig. 7. The multisensor suite has also been demonstrated on 3D Systems ProX 200, EOS M280, and GE Concept Laser M2 machines. The system consists of a variety of sensors as follows:

  1. a high-resolution/high-magnification imaging system (six differing lighting schemes);

  2. two high-speed/high-magnification cameras, including a coaxial camera with 405-nm filter and a front-facing camera with 520-nm filter;

  3. high-speed video (>33 000 fps);

  4. optical process emissions (100 kHz), including a spectrometer and multispectral sensors;

  5. acoustic sensors (100 kHz);

  6. a thermal imaging and DMP meltpool sensor.

Fig. 7. Illustration of multisensor suite for monitoring a commercial ProX 320 PBFAM system.

This multisensor suite includes an optical layerwise imaging system to monitor the LPBF AM process, which consists of a 36.3 Mpixel digital single-lens-reflex (DSLR) camera that is placed inside the chamber of the EOS M280 machine [15]. In-process optical images have also been collected and used to identify and characterize defects caused by lack-of-fusion in the LPBF process [50]. Stutzman et al. [13], Nassar et al. [51], and Dunbar and Nassar [52] describe the use of an in situ optical emission spectroscopy system consisting of two filtered photodetectors in a series of papers.

Montazeri et al. [53] demonstrated the use of this relatively inexpensive system to monitor lack-of-fusion porosity in Inconel 718 test parts and used features derived from the line-to-continuum ratio as inputs to detect lack-of-fusion porosity. Inconel has chromium as an alloying element. When Inconel is fused (melted) by the laser, atomically excited chromium is vaporized and emits photons corresponding to electronic transitions. One set of transitions occurs at wavelengths around 520 nm [54]. If melting is stable, so will be the line emission from the vapor. A key innovation is the use of two photodetectors, one of which is filtered to a spectral region where line emissions are not likely and therefore measures emissions pertaining to the background radiation (a wavelength band different from the line emission wavelength, called the continuum emission spectrum). Furthermore, Nassar et al. [51], [52] divide this difference (line emission intensity minus continuum emission intensity) by the continuum emission intensity; this ratio is called the line-to-continuum ratio. In summary, multisensor systems generate high-dimensional and heterogeneous data (e.g., time series, video, and image profiles) that provide rich information about AM processes. However, realizing the full potential of these data for AM system qualification depends, to a great extent, on the development of analytical methods to characterize, represent, and extract useful information about the defective state in each layer of AM builds, as detailed in Section IV.
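To make the construction concrete, the sketch below computes the line-to-continuum ratio from two synthetic photodetector traces (one filtered near the chromium line emission around 520 nm and one in a continuum band); the signal levels, the simulated lack-of-fusion episode, and the alarm threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000  # hypothetical samples along one hatch scan

# Filtered photodetector near the Cr line emission (~520 nm) and a second
# detector in a band where line emission is unlikely (continuum/background).
continuum = rng.normal(1.00, 0.03, n)
line = continuum + rng.normal(0.40, 0.05, n)

# Emulate a lack-of-fusion episode: laser-material coupling drops and the
# line emission collapses toward the background level.
line[2000:2300] = continuum[2000:2300] + rng.normal(0.05, 0.02, 300)

# Line-to-continuum ratio as described above: (line - continuum) / continuum.
ltc = (line - continuum) / continuum

# Simple moving-average alarm on the ratio (window and threshold are assumptions).
window = 50
smoothed = np.convolve(ltc, np.ones(window) / window, mode="same")
suspect = np.flatnonzero(smoothed < 0.2)
print(f"Samples flagged as possible lack-of-fusion: {suspect.size}")
```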

C. Postbuild Measurement and Inspection

As shown in Fig. 8, postbuild quality inspection and functional integrity assessment for AM are often performed with radiographic computed tomography (CT). Here, CT scans of AM builds are collected with a GE vTomex M300 microfocus X-ray CT (XCT) scanner and are processed using the Volume Graphics myVGL 3.0 software to extract the 2-D image profiles of every layer in an AM build. CT reconstructs hundreds to thousands of 2-D radiographs into a 3-D volume of voxels. The resolution of image profiles is determined by the CT voxel size, typically with a pixel size of 10–50 μm or less. These data enable the investigation of the effect of design parameters or LPBF process settings, for example, hatch spacing (H), scan velocity (V), and laser power (P), on the defect patterns in AM image profiles. The sensor data and offline CT scans can be used to create a library of (sensor) patterns that correlate to specific defects using sensor fusion and predictive analytics. These sensor signal patterns, which exemplify specific process defects, can be integrated with prescriptive models (i.e., for decision-making) to optimize the selection of corrective actions in case an anomaly is detected in the process. The focus is to minimize defects, delamination, and warpage of the final workpiece and maximize final strength and fatigue resistance. In addition, other equipment, such as coordinate measuring machines (CMMs) and surface probing machines, provides important information about part dimensional metrology and surface roughness [55], [56].
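As one simple use of such reconstructed volumes, the sketch below slices a hypothetical binarized XCT voxel array layer by layer and reports the porosity fraction per layer; the array shape, void rate, and voxel size are assumptions.

```python
import numpy as np

# Hypothetical binarized XCT volume: 1 = solid material, 0 = void/defect,
# axes ordered (layer, y, x); each voxel assumed to be 20 um on a side.
rng = np.random.default_rng(3)
volume = (rng.random((200, 256, 256)) > 0.002).astype(np.uint8)
voxel_mm = 0.020

# Porosity per layer: fraction of void voxels in each slice.
porosity = 1.0 - volume.mean(axis=(1, 2))

worst = int(np.argmax(porosity))
print(f"{volume.shape[0]} layers at ~{voxel_mm:.3f} mm per voxel")
print(f"Worst layer: index {worst}, porosity {porosity[worst]:.4%}")
```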

Fig. 8. Radiographic-based CT for postbuild inspection.

D. AM Data Management

Large amounts of data are generated, exchanged, and used dynamically during AM test coupon and part development processes. As the volume of data grows with increased in situ sensing and nondestructive examination (NDE), the types of data generated by AM activities also become richer. The information necessary for AM process qualification includes not only measurement data but also material/machine specifications, design models, control, and management data. Characterizing the entire AM process demands a comprehensive analysis of all the information collected through the build history of thousands of parts and coupons, in the context of the complete AM value chain. As a result, it requires an effective and efficient AM data management system to ensure that data are captured, stored, and used appropriately.

In the area of data management, several AM information management systems are available as commercial products. For example, the Senvol Database (http://senvol.com/database/) provides researchers and manufacturers with open access to information about industrial AM machines and materials. Granta, a material information management technology provider, offers the product GRANTA MI: Additive Manufacturing, specifically customized for AM data capturing and use. At the same time, multiple database and data management systems are built to organize and manage the data generated from research and industry projects. The Data Management System for Additive Manufacturing (DMSAM) was developed by researchers at Penn State’s CIMP-3D (http://www.cimp3d.org/datamanagement). DMSAM is a schema-based software tool that stores and tracks all of the data and information related to an AM part, including the state of associated AM resources (e.g., powder, software, and machine), part requirements for sponsors, 3-D solid models, part workflow, build plan, postprocessing plan, and all data associated with part properties, in situ monitoring, postprocessing, testing, and inspection. DMSAM stores data locally, communicating with global (i.e., shared) databases and generating build reports for QA/QC as needed through XML, as well as Excel. NIST’s Additive Manufacturing Materials Database (AMMD) [57] is a data management system built with Not Only Structured Query Language (NoSQL) database technology and provides a Representational State Transfer (REST) interface for application integration. The database captures rich research data sets generated by the NIST AM program (https://ammd.nist.gov/) based on an open XML schema. In addition, as an open data management platform, the AMMD system is set to evolve through codevelopment of the AM schema and contributions of data from the AM community.

However, due to the multifarious factors that can affect AM part quality, existing data-driven AM process qualification requires extensive material testing, which is beyond the capability of any individual organization. None of the existing databases by itself provides comprehensive data sets with a multitude of geometries and process settings to qualify an AM process for parts with various features and specifications. In order to significantly reduce the cost and time associated with data management for AM process qualification, a collaborative data space and a collaborative data management system are required. Fig. 9 shows a multitier AM collaborative data management system with the following characteristics: 1) distributed data storage facilitated by using common data terms and definitions; 2) collaborative linked data through federation based on neutral data formats; 3) continuous knowledge management by automatically extracting AM material process–structure–property relationships from AM data; 4) lifecycle and value chain-based decision support; and 5) an adaptive data generation system that helps the AM community efficiently design experiments. The collaborative data management system is set to identify, generate, curate, and analyze AM data throughout the AM product lifecycle and can significantly reduce the cost and time associated with AM product deployment.

Fig. 9. Multitier data management system for AM.

IV. ANALYZE THE DATA

The “analyze” step focuses on the extraction of useful information from online and/or offline data collected in the “measure” step or from historical data available in the AM data management system. The main purpose is to explore the interrelations among key variables (i.e., process inputs, outputs, and in-process variables) during the AM process, model causal relationships between these variables and quality problems, and further develop a new understanding of how they contribute to process variability and product defects. In other words, multiple sources of variability may exist in the AM process and can potentially lead to quality problems in products and customer services. The “analyze” step helps delineate and determine the random causes and assignable causes of quality problems. If only random causes (i.e., nonassignable factors, not identifiable) are present in the process, then the distribution should be normal [1]–[3]. However, if there are assignable causes, then the “analyze” tools should be able to monitor the process and detect when and how the process performance is affected. As such, the process can be stopped to look for assignable causes and eliminate them to resume normal production.

However, advanced sensing systems bring more and more complex-structured data from the “measure” step for AM quality management, which are different from the geometric features and linear and nonlinear profiles generated in conventional manufacturing settings [4], [58]. For example, CT scanning and layerwise imaging result in high-dimensional image profiles from the AM process. As such, traditional “analyze” tools, such as control charts and confidence intervals, are limited in their ability to handle such high-dimensional image profiles. Control charts and confidence intervals are much easier to establish for a single random variable or multiple variables (e.g., geometric features of products) in the setting of mass manufacturing but are more difficult to develop for high-dimensional images, let alone for images whose geometrical structures may vary from one layer to another in AM builds. Hence, new “analyze” tools are urgently needed to help handle and connect large amounts of data, model the cause-and-effect relationships among key process variables, and pinpoint potential root causes of quality problems during the AM process. This, in turn, will help the “improve” step (see Section V) to further identify and develop new strategies for quality improvements. New experiments can then be designed to test the effectiveness of these improvement strategies on either physical AM machines or computer simulation models.

A. Engineering Design Versus Build Quality

Engineering design and relevant parameters are some of the key process input variables during the AM process. Traditional subtractive manufacturing tends to be limited in its ability to handle complex designs. “Design for manufacturing” refers to the conventional scheme that adapts a design to enable manufacturing within the capability of available machines and tools. AM offers a higher level of design freedom and enables the new scheme of “manufacturing for design.” Complex designs can now be manufactured in a layer-upon-layer fashion with the new generation of AM technology. Nonetheless, complex designs still pose quality challenges for AM-fabricated products, despite the fact that AM can handle certain aspects of fabrication better than traditional manufacturing technologies.

The new research question is whether and how design parameters influence the quality of AM builds. Our prior work has designed and performed experiments on an LPBF machine to investigate how design parameters (i.e., height, width, recoating orientation, and hatching pattern) impact the quality of the final build of thin-wall structures [59], [60], which are widely used in the fabrication of heat exchangers. As shown in Fig. 10, our experiments built the thin-wall structures with a variety of design parameters, that is, heights, widths, recoating orientations, and hatching patterns. The metal powder is spherical ASTM B348 Grade 23 Ti-6Al-4V, with powder sizes in the range of 14–45 μm. Each build includes 25 thin walls that are fabricated on a 15 mm × 15 mm × 55 mm platform. Experimental factors such as height, width, recoating orientation, and hatching pattern are detailed as follows.

Fig. 10. Illustration of design parameters (i.e., orientation, width, height, and hatching pattern) for the thin-wall structure.

  1. Height: The height of thin walls varies from 0.6 to 3.0 mm with a step size of 0.1 mm. The width-to-height ratio is 0.1 in each thin wall.

  2. Width: The width of thin walls ranges from 0.06 to 0.3 mm with a step size of 0.01 mm.

  3. Orientation: Thin-wall structures are fabricated vertically upward with the layer thickness of 60 μm in three orientations with respect to the travel direction of the recoater blade (i.e., 0°, 60°, and 90°). Fig. 10 shows three orientations with the reference of recoating direction.

  4. Hatching: The hatching patterns of thin walls follow the standard processing path of EOS machines, but various categories of scan paths are utilized when the width increases (see Fig. 10). Fins 1 and 2 have two inner rectangle paths, two outer layer paths (or contours), and rotating diagonal hatching from rectangles. In Fins 3–14, there are three outer layer paths and rotating diagonal hatching inside the innermost rectangle. In Fins 15–18, there is one rectangular hatching. In Fins 19–25, there is one thin area path.

As shown in Fig. 10, we fabricate three thin-wall parts in this experimental study, each of which includes 25 thin walls. The orientations differ among the three thin-wall parts on the build plate. In other words, the orientation of each thin-wall part is adjusted to 0°, 60°, or 90° with respect to the travel direction of the recoater blade in the EOS machine. After fabrication, we scan each build with XCT. These XCT images are then registered with the original CAD models to extract the quality characteristics (e.g., edge roughness and defect levels) in each layer of the thin wall. Here, the edge roughness refers to the geometric deviation of the build boundary between CT scans and CAD designs. The defect level refers to the number and degree of defects in each layer of the thin wall. These quality characteristics are tracked from one layer to another for the detection of impending thin-wall collapse (see [59] and [60] for the analysis of variance with respect to design parameters).
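The edge-roughness characteristic described above can be sketched as a nearest-point deviation between a registered CT boundary and the CAD cross section; the point sets below are synthetic, and the 11 mm × 0.15 mm wall dimensions are taken from the guideline discussion that follows.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(11)

# Hypothetical CAD boundary of one thin-wall cross section (coordinates in mm):
# the two long edges of an 11 mm x 0.15 mm rectangle, densely sampled.
xs = np.linspace(0.0, 11.0, 1100)
cad = np.vstack([
    np.column_stack([xs, np.zeros_like(xs)]),
    np.column_stack([xs, np.full_like(xs, 0.15)]),
])

# CT-extracted boundary for the same slice, already registered to the CAD frame,
# with simulated surface roughness of about 10 um (1 sigma).
ct = cad + rng.normal(0.0, 0.010, cad.shape)

# Edge roughness: nearest-point deviation of each CT boundary point from the CAD edge.
tree = cKDTree(cad)
dev, _ = tree.query(ct)
print(f"Mean deviation {dev.mean() * 1000:.1f} um, "
      f"95th percentile {np.percentile(dev, 95) * 1000:.1f} um")
```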

Through the analysis of XCT data and in-process imaging data, experimental results show that the build quality of thin-wall parts is impacted by design parameters (height, width, and height-to-length ratio) and machine settings (hatching and recoating orientation). This study helps provide a set of design guidelines on the use of LPBF machines for the fabrication of thin-wall structures as follows.

  1. The 0° orientation gives superior quality in the thin-wall builds compared with the other orientations. Fewer defects are generated when the travel direction of the recoater blade is parallel to the long edge of a thin wall. The 90° orientation, which makes the recoater motion perpendicular to the long edge of a thin wall and tends to generate more flaws, should be avoided when building thin-wall structures.

  2. The height of a thin wall should not be more than nine times its width; otherwise, the thin-wall build tends to collapse. The LPBF machine in this experiment cannot reliably build thin-wall structures with a width smaller than 0.15 mm. If the length-to-width ratio exceeds 73 (11 mm/0.15 mm), thin walls also tend to collapse.

This study made an attempt to answer the research question about whether and how design complexity influences quality characteristics of AM thin-wall builds. There is more research to be done to optimize the engineering design for AM. For example, it is imperative to generalize design guidelines for different LPBF machines, process conditions, or thin walls with overhang structures.

B. Machine Setting Versus Build Quality

Machine settings (e.g., hatching space, laser power, and scan velocity) often influence the final outcomes of the AM manufacturing process, including the cosmetic appearance and build quality. To increase the information visibility and cope with the complexity in the machine–process interactions, advanced sensing is increasingly employed in AM (see the multisensor suite and CT scanner in Figs. 7 and 8), thereby generating large amounts of data (e.g., optical images and postbuild CT scans). Realizing the full potential of sensor data hinges on the development of new statistical QC (SQC) methods. Existing SQC methods for conventional manufacturing processes are more concerned about key features of finished products (e.g., dimensional accuracy) and linear and nonlinear profiles, as opposed to high-dimensional sensor data. The research on AM sensing, machine–process interaction, and QA/QC poses several new challenges:

  1. Sensor-based metrology for in situ quality inspection: Traditional QA/QC techniques, such as surface metrology and geometric dimensioning and tolerancing (GD&T), are concerned mainly with the Euclidean features of the finished AM products. They are offline and not amenable to the inspection of internal defects in AM parts with complex geometries [61]–[64]. In the absence of sensor-based approaches for in situ quality monitoring, benchmarking of AM builds remains relegated to postbuild inspection and qualitative attributes [65]–[67].

  2. Statistical quality management for AM: Current quality monitoring approaches are offline and based on purely data-driven techniques (neural networks, Gaussian mixture modeling, and statistical analysis) or lumped-mass formulations [68]–[71]. Very little has been done to investigate AM quality management using sensor-based analytical models and layerwise AM QA/QC strategies. In situ monitoring provides an opportunity for in-process AM defect mitigation, which is indispensable for manufacturing industries mandating stringent quality standards and product esthetics.

Hence, the first step is to extract useful information from AM sensing data and then estimate the defect levels in AM builds. Fig. 11 shows an illustration of AM images at different scales, where multiscale self-similarity can be observed to some extent. In other words, fine-grained images of AM builds often show multifractal characteristics over a range of scales. Traditional linear methods are limited in their ability to handle nonlinear fractals and irregular patterns in the images. Fractal analysis extracts a single fractal dimension that describes the self-similar (scale-invariant) behavior of fractal objects but cannot fully characterize the multifractal patterns that are often shown in real-world objects [72], [73]. However, image profiles of AM builds are often composed of complex self-similar patterns that are due not to a single fractal but rather to a spectrum of fractal dimensions. These fractals interact with each other and generate highly nonlinear and complex self-similar behaviors (see Fig. 11).

Fig. 11. Illustration of multifractal patterns in the image profiles of an AM build.

Little has been done to characterize multifractal patterns in large amounts of image profiles to investigate how machine settings influence AM build quality. Our prior work has developed new multifractal methods for the analysis of large amounts of AM imaging data and extracted features that are sensitive to defects rather than to extraneous factors and random noise [72]–[76]. As shown in Fig. 12, multifractal analysis characterizes the nonlinear and self-similar behaviors of AM images through multiscale lenses, ranging from large-scale approximations to small-scale details. AM images are then decomposed into an interwoven set of fractals with different dimensions, which is shown as the multifractal spectrum. In addition, lacunarity measures the degree or extent to which this set of fractals fills the space, which cannot be provided by multifractal analysis alone. Therefore, we developed the method of joint multifractal and lacunarity analysis to characterize and quantify the nonlinear and multifractal patterns in AM images, which cannot otherwise be achieved by either traditional statistical methods or fractal analysis.
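As a small illustration of the lacunarity component, the gliding-box estimator below scores how a binarized layer image fills space at several box sizes; the image and box sizes are synthetic, and the full joint multifractal and lacunarity analysis in [72]–[76] involves considerably more machinery.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gliding_box_lacunarity(img, box_size):
    """Lacunarity of a binary image for one gliding-box size: E[M^2] / E[M]^2."""
    windows = sliding_window_view(img, (box_size, box_size))
    mass = windows.sum(axis=(-1, -2)).ravel().astype(float)
    return mass.var() / mass.mean() ** 2 + 1.0   # var/mean^2 + 1 = E[M^2]/E[M]^2

# Hypothetical binarized layer image: 1 = defect/void pixel, 0 = sound material.
rng = np.random.default_rng(5)
layer = (rng.random((256, 256)) < 0.01).astype(np.uint8)
layer[100:130, 100:130] = 1          # a clustered flaw raises lacunarity markedly

for r in (4, 8, 16, 32):
    print(f"box {r:>2}: lacunarity = {gliding_box_lacunarity(layer, r):.3f}")
```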

Fig. 12. Multiscale analysis of fractal and lacunarity patterns in the layerwise AM images with Voronoi tessellation from 100, 1000 to 10000 cells and Delaunay triangulation from 100, 1000 to 10000 cells for multiresolution quality inspection of the layerwise AM build.

Building on the multifractal characterization of AM images, we investigated how AM machine settings [e.g., laser power (P), hatch spacing (H), and velocity (V)] influence build quality. In the experimental study, we printed cylinder parts on the EOS M280 machine with varying levels of machine settings (see Fig. 20). Specifically, the laser scanning velocity is increased from 1250 to 1562.5 to 1875 mm/s, the hatch spacing is varied from 0.12 to 0.15 to 0.18 mm, and the laser power is decreased from 340 to 250 to 170 W. Furthermore, a regression model is constructed to relate the machine settings to the Hotelling T² indices of build quality, which are computed with multifractal and lacunarity features of XCT image profiles [72]–[76]. The model achieves an adjusted R² value of 94.76%, showing a strong correlation between process conditions and build quality.
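A schematic sketch of this last step on synthetic data: compute a Hotelling T² index per part from a small feature vector (standing in for the multifractal and lacunarity features) and regress it on laser power, hatch spacing, and scan velocity. The feature values, their dependence on energy density, and the noise level are all assumptions, so the printed R² is not the 94.76% reported above.

```python
import numpy as np
from numpy.linalg import inv
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)

# Hypothetical machine settings for 27 parts: full 3 x 3 x 3 grid of
# laser power P (W), hatch spacing H (mm), and scan velocity V (mm/s).
P, H, V = np.meshgrid([170, 250, 340], [0.12, 0.15, 0.18], [1250, 1562.5, 1875])
X = np.column_stack([P.ravel(), H.ravel(), V.ravel()])

# Synthetic 4-D quality features per part (standing in for multifractal and
# lacunarity summaries), drifting with energy density P / (H * V).
energy = X[:, 0] / (X[:, 1] * X[:, 2])
features = rng.normal(0, 1, (27, 4)) + np.outer(energy - energy.mean(), [3, -2, 1, 0.5])

# Hotelling T^2 of each part's features relative to the sample mean and covariance.
d = features - features.mean(axis=0)
S_inv = inv(np.cov(features, rowvar=False))
t2 = np.einsum("ij,jk,ik->i", d, S_inv, d)

# Regress the T^2 quality index on the machine settings.
model = LinearRegression().fit(X, t2)
print("R^2 on synthetic data:", round(model.score(X, t2), 3))
```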

Fig. 20. XCT of the four disks and their relative placement on the build platen.

C. In Situ Sensing Variables Versus Build Quality

CT scans help characterize the quality of a finished build but cannot detect flaws during the AM process. In situ sensing provides a means for on-the-fly defect characterization. As shown in Fig. 13, a drag link part with complex geometry was printed at CIMP-3D with intentional defects at four layers (i.e., at build heights of 1.5, 6.7, 12.0, and 16.0 mm), each of which includes eight defects as follows:

  1. 0.050-, 0.250-, 0.500-, and 0.750-mm cubed defects are on each plane.

  2. 0.050-, 0.250-, 0.500-, and 0.750-mm diameter cylinder defects are also on each plane, surrounding the cubes. All cylinders have a 1:1 diameter-to-depth ratio except for the 0.050-mm cylinder, which has a depth of 0.250 mm.

  3. The top of the defect is the flat plane in the build direction.

Fig. 13. (a) Four layers of intentional defects. (b) Different shapes and sizes of defects.

In situ optical images are recorded after each layer is printed. This experimental study is aimed at predicting incipient defects from in situ imaging data for QC in the AM processes. The state-of-the-art deep neural network (DNN) models show superior performance in the handling of imaging data. However, layerwise imaging data from AM processes pose significant challenges to DNN defect analysis.

  1. Regions of Interest (ROIs): Each image contains not only metal powders but also many AM parts in the build plate. As such, there is a need to delineate the image for a specific part. Often, a square region is cropped around the part, and then, the images of layers are fed to the DNN model. This guarantees the same dimensionality of input images to the DNN model. However, due to the broad geometrical diversity from one layer to another, images of some layers will have small part geometries and large powder areas, while others have large part geometries and small powder areas. DNN learning will be biased by the layerwise geometrical diversity, as well as the varying areas of unfused powder. Therefore, it is more desirable to leverage CAD files to delineate and register the ROI for the part geometry in each layer (see Fig. 14).

  2. Layer-to-Layer Geometry Variation: AM provides a higher level of flexibility for low-volume, high-mix production, even for one-of-a-kind designs. As shown in Fig. 5, AM fabricates the build directly from a complex CAD design through layer-upon-layer deposition of materials. Although we may register the ROI for the part geometry in each layer, there will be ROI variations among layers. Hence, both the shape and the dimensionality of the ROIs vary from one layer to another. These inconsistent ROIs consist of different numbers of pixels and cannot be used directly as inputs to DNN models for learning the incipient defects in the layers.

  3. ROI Segmentation and Spatial Characterization: To tackle the challenge of inconsistent ROIs, one approach is to extract summary features from the layerwise ROIs (e.g., mean, median, and variance). However, such statistical features aggregate away useful information within the ROIs, leading to deficiencies in defect characterization and predictive modeling. Another approach is to segment the ROIs into smaller ROIs (sROIs) with the same number of pixels. Although the dimensionality of ROIs changes from one layer to another, the greatest common divisor (GCD) of the ROI dimensions across all layers can be leveraged to segment the ROIs into patches with the same number of pixels (see the sketch after this list). These sROIs may still vary in shape. Furthermore, spatial characterization can be used to measure spatial correlations among pixels and describe pertinent patterns about defects in the sROIs. As the number of pixels is constant across sROIs, the characterization images share the same dimensionality and can be fed into DNN models for the learning and prediction of defects in each sROI during the AM process.
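The ROI extraction and GCD-based tiling can be sketched as follows. The helper names, mask, and per-layer dimensions are hypothetical; the authors' registration pipeline uses the CAD design rather than the simple bounding-box crop shown here.

```python
import numpy as np
from math import gcd

def extract_roi(layer_image, cad_mask):
    """Crop a layer image to the bounding box of the CAD-registered part mask."""
    rows, cols = np.any(cad_mask, axis=1), np.any(cad_mask, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return layer_image[r0:r1 + 1, c0:c1 + 1]

def segment_equal_pixel_tiles(roi, tile):
    """Split an ROI into non-overlapping tile x tile patches with equal pixel counts."""
    h, w = (roi.shape[0] // tile) * tile, (roi.shape[1] // tile) * tile
    return (roi[:h, :w]
            .reshape(h // tile, tile, w // tile, tile)
            .swapaxes(1, 2)
            .reshape(-1, tile, tile))

# A common tile size across layers can be taken from the GCD of the ROI dimensions.
roi_heights, roi_widths = [120, 96, 84], [96, 84, 60]      # hypothetical per-layer ROI sizes
tile = gcd(gcd(*roi_heights[:2]), roi_heights[2])
tile = gcd(tile, gcd(gcd(*roi_widths[:2]), roi_widths[2])) # -> 12 pixels for these sizes
```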

Fig. 14. Schematic illustration of deep learning of incipient defects from in situ image profiles.

As shown in Fig. 14, our prior work designed a new DNN model to learn incipient defects from the sROIs of in situ image profiles [77], [78]. The experimental study provides a large number of images taken for each layer under different lighting schemes. To tackle the aforementioned challenges, DNN learning of in situ AM defects consists of the following critical steps (a small classifier sketch follows the list).

  1. Image Registration and Segmentation: We first used the CAD design to perform shape-to-image registration and extract the ROIs of 362 layers in the drag link part. Then, these ROIs are segmented into 1708 sROIs, each of which has the same number of pixels. Furthermore, the dyadic partitioning of sROIs can be used to split each region into smaller subregions and provides a large amount of data for multiresolution DNN learning of layerwise AM defects.

  2. Spatial Characterization: Although these sROIs are in different shapes, we utilized the spatial characterization to extract pertinent patterns about defects from sROIs and then fed images of spatial correlations for deep learning.

  3. Deep Learning: The DNN model includes a series of convolutional layers to learn the sROI characterization images at multiple levels of abstraction. Each hidden layer is followed by nonlinear modules, which transform the representation at one level into a representation at a higher, slightly more abstract level [77]. The DNN thereby builds effective representations of the various intentional defects embedded in the drag link part (see Fig. 13), which help significantly reduce the size of the state space and the number of state-action pairs for the predictive modeling and optimization discussed in the following.
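As a minimal sketch of such a convolutional classifier for fixed-size characterization images, one could write the following in PyTorch. The architecture, input size, and hyperparameters are illustrative stand-ins, not the authors' model from [77], [78].

```python
import torch
import torch.nn as nn

class DefectCNN(nn.Module):
    """Small convolutional classifier for fixed-size sROI characterization images."""
    def __init__(self, in_channels=1, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),              # global average pooling
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = DefectCNN()
dummy = torch.randn(8, 1, 64, 64)                 # batch of 64x64 characterization images
logits = model(dummy)                             # shape (8, 2): defect vs. no defect
```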

The DNN model described earlier is shown to effectively predict the layerwise defects with a specificity of 93.85±0.83%, a sensitivity of 90.01±1.56%, a negative predictive value of 93.83±0.67%, a positive predictive value of 90.03±2.34%, and an accuracy of 92.50±1.03%. This experimental study avoids using the DNN as a black box that simply ingests cropped layer images (with their broad geometrical diversity) and lets the AI classify ROIs and identify defects. Indeed, engineering domain knowledge is indispensable for preprocessing AM training data and developing effective AI methods for in situ AM defect learning and analysis.

V. IMPROVE THE SYSTEM

This section presents a set of statistical methodologies—ontology models, DOE, and simulation analysis—for the quality improvement of AM processes. The “measure” step provides rich data about key variables to increase information visibility during the AM process. The “analyze” step extracts useful information from these data and performs cause-and-effect analysis among the key process variables. The “improve” step then exploits this data-driven knowledge to identify changes or parameter designs that can be made to the AM process so that its performance improves.

Ontology provides a high-level map that is useful to explore and understand the interrelationships of parameters, elements, and variables during the AM process. Hundreds of terms may be involved in the AM process ontology to describe input–output parameters of the laser, thermal, microstructure, and mechanical properties of AM parts. These terms may be physical parameters or concepts that are based on mathematical modeling and the physical phenomena characterizing the AM system. For instance, the laser source affects the thermal behavior and microstructure evolution during an AM process [79], and the thermal distribution of the heat source affects the microstructure behavior and mechanical properties [80]. As a result, ontology models relate process parameters to mechanical properties and material characteristics and can be used for process redesign, sensor selection, and quality improvement.

Furthermore, DOE is one of the most widely used tools for quality improvement. Note that the “analyze” step delineates multiple sources of variability in the AM process, for example, assignable causes or random causes. The “improve” step can then choose experimental factors and vary the factor levels with statistical designs (e.g., randomized block design, factorial design, and response surface design) to investigate how these factors influence the quality of the AM process and final builds. Most importantly, optimal factor settings can be determined to ensure that the desired performance of the AM process is achieved and is robust to uncontrollable factors and/or random noise [25].

It should be noted that the designed experiments can be conducted on physical AM machines, on computer simulation models, or on both to improve the performance of AM processes. Simulation analysis involves the design of computer experiments, which are often faster and cheaper than physical experiments. As such, before expensive experiments are undertaken on AM machines, simulation analysis can help screen the process variables to reduce the number of factors and design more cost-effective experiments in the “improve” step. If the AM process is far from the desired level of performance and produces a large number of defective builds, then it may be necessary to abandon the old process and redesign a new AM process. In this way, the “improve” step is converted into a “design” step in the DMAIC approach.

A. Ontology Modeling

As shown in Fig. 15, the growing body of AM research exists in many forms (e.g., papers, models, simulations, graphs, and data) and is both specific to a given AM process and generalizable to AM more broadly. Several complementary efforts are underway to develop data management systems by NIST, CIMP-3D, Granta, and many others. Also, numerous sensing capabilities (e.g., photodiodes, cameras, pyrometers, thermocouples, and spectrometers) are available for metal AM processes (e.g., PBF and DED). Different sensors have been installed on different AM systems to generate empirical data to help validate simulation models, for instance, or develop process maps for different AM systems and materials. The challenge now lies in the integration of all this information into useful AM knowledge, which includes the selection of the right sensors to generate the right data for the right analytics for QA/QC.

Fig. 15. Challenges navigating research, sensing, and data management for metal AM.

Our previous work developed an ontology to support AM process model development and reuse [81], [82]. The AM ontologies sought to overcome pertinent challenges posed by disparate AM process models and simulations (e.g., variations in the input–output specifications), not to mention differences in their levels of detail, fidelity, and composability. These discrepancies restrict model reuse and make it difficult to integrate models from different groups into a more accurate AM simulation model or to repurpose them for different use cases. The AM ontologies developed by Penn State and NIST allow users to navigate complex relationships and understand the connections between different process parameters, microstructural characteristics, and mechanical properties for AM parts. A sample of the ontology is shown in Fig. 16, where the details of the class hierarchy for AM thermal models can be seen along with the definition of the Absorbed_laser_power class.

Fig. 16. Sample of the AM ontology showing detail for the Absorbed_laser_power class definition.

The AM process ontology generates a network of parameters that can be visualized as a graph to look for similarities and differences across models from different researchers. We refer to these as knowledge graphs because they can be navigated forward (or backward) to identify important relationships between parameters and phenomena that were previously disconnected. Two examples of navigating such a knowledge graph to identify important relationships during AM are shown in Fig. 17. In Fig. 17(a), the knowledge graph is used to trace a process parameter that we can measure (i.e., meltpool area) to understand how it influences different mechanical properties that may be of interest (e.g., tensile strength, yield strength, elongation, and Vickers hardness). The graph does not tell us exactly how they are related, but we know from the AM ontology that these parameters influence each other based on data and models in the literature.
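The forward and backward navigation can be illustrated with a toy directed graph. The node names follow the Fig. 17 discussion, but the edge set below is a small illustrative assumption, not the actual AM ontology.

```python
import networkx as nx

# An edge (u, v) means "u influences v"; the edges here are illustrative only.
G = nx.DiGraph([
    ("scanning speed", "meltpool area"),
    ("absorbed laser power", "meltpool area"),
    ("meltpool area", "cooling rate"),
    ("cooling rate", "microstructure"),
    ("microstructure", "tensile strength"),
    ("microstructure", "yield strength"),
    ("microstructure", "Vickers hardness"),
])

# Forward navigation: which properties lie downstream of a measurable parameter?
print(nx.descendants(G, "meltpool area"))

# Backward navigation: which measurable parameters lie upstream of a requirement?
print(nx.ancestors(G, "Vickers hardness"))
```

Backward traversal from "Vickers hardness" surfaces "scanning speed" and "absorbed laser power", mirroring the reverse navigation described for Fig. 17(b) below.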

Fig. 17. Examples of using knowledge graphs from the AM ontology to identify relationships between measurable process parameters and potential requirements for a metal AM part. (a) Example of using a knowledge graph to navigate from a measurable process parameter (meltpool area) to mechanical properties of interest (tensile strength, yield strength, and so on). (b) Example of navigating the knowledge graph backward to trace a requirement (Vickers hardness) to two measurable process parameters (scanning speed and absorbed laser power).

These same ontologies developed to manage process models can be easily extended to support data management and configuration. As noted in Section III, a vast amount of AM process data is being measured, often used for the development and validation of AM process models. Fig. 17(b) shows an example of how the AM ontology can be leveraged to navigate the knowledge graph in reverse to identify what sensor data should be captured to help ensure that a requirement is met. In this example, we assume that a requirement is specified on the Vickers hardness of the part, and then, we navigate the knowledge graph backward until we find process parameters that we might be able to sense, namely, scanning speed and absorbed laser power in this case. While we may not be able to measure absorbed laser power directly, this, nonetheless, provides an indication of what we might want to sense during the process to gather data to help ensure that our requirement is met.

The AM ontology and corresponding knowledge graphs can also be used to support the analysis of process parameters and sensor data. For instance, Table 1 shows data from an experiment where several input process parameters (e.g., laser power, velocity, and spot size) were varied, and sensors were used to capture meltpool depth and width; deposition height and width were also measured for each test specimen [83]. Linear regression was then used to analyze the data in Table 1: the adjusted R² values for deposition height and deposition width are 91.75 and 89.97, respectively, as a function of the process parameters that were varied. The adjusted R² value for meltpool width is also good (94.85); however, the adjusted R² value for meltpool depth is not (68.00). When we trace these relationships in our AM ontology, we find that the Marangoni effect, the velocity of the fluid, and the buoyancy effect have a relationship with meltpool depth, yet none of these appear in the data because they were not measured or sensed during the experiment. Had the researchers had access to the ontology, the corresponding knowledge graph could have been used to plan the experiment more carefully, that is, to decide what data to sense and capture based on what they wanted to analyze after the experiment. This simple example demonstrates what might be achieved (and potentially avoided) by using a knowledge graph, such as the AM ontology, to guide sensing and inform the analysis.

Table 1. Experimental Data for Metal AM [83] and Results of Linear Regression Analysis

Sample No. Power [W] Velocity [in/min] Velocity [mm/s] Spot Size [mm] Power Density [W/mm2] Run Time [sec] Energy per mm [W/mm] Melt-pool Depth Melt-pool Width Deposition Height Deposition Width
Ti9 1000 10 4.23 1.89 318.31 34.116 236.22 86.5 2886.5 2108.1 4005.4
Ti10 1000 10 4.23 2.89 79.58 34.116 236.22 75.7 3308.1 1800 4070.3
Ti11 1000 10 4.23 3.89 35.37 34.116 236.22 75.7 2637.8 1935.1 4302.7
Ti12 1000 25 10.58 1.89 318.31 13.6464 94.49 102.7 2508.1 437.8 2637.8
Ti13 1000 25 10.58 2.89 79.58 13.6464 94.49 156.8 2378.4 637.8 2432.4
Ti14 1000 25 10.58 3.89 35.37 13.6464 94.49 59.5 2400 756.8 3005.4
Ti15 1000 40 16.93 1.89 318.31 8.59 59.06 162.2 2108.1 383.8 2394.6
Ti16 1000 40 16.93 2.89 79.58 8.59 59.06 178.4 2059.5 616.2 2237.8
Ti17 1000 40 16.93 3.89 35.37 8.59 59.06 59.5 1524.3 475.7 1735.1
Ti18 2000 10 4.23 1.89 636.62 34.116 472.44 681.1 5005.4 864.9 5156.8
Ti19 2000 10 4.23 2.89 159.15 34.116 472.44 140.5 4994.6 2064.9 5070.3
Ti20 2000 10 4.23 3.89 70.74 34.116 472.44 97.3 4816.2 2108.1 5821.6
Ti21 2000 25 10.58 1.89 636.62 13.6464 188.98 173 3902.7 1124.3 4010.8
Ti22 2000 25 10.58 2.89 159.15 13.6464 188.98 291.9 3875.7 1135.1 3956.8
Ti23 2000 25 10.58 3.89 70.74 13.6464 188.98 281.1 3881.1 724.3 4313.5
Ti24 2000 40 16.93 1.89 636.62 8.529 118.11 389.2 3232.4 702.7 3491.9
Ti25 2000 40 16.93 2.89 159.15 8.529 118.11 454.1 3351.4 518.9 3535.1
Ti26 2000 40 16.93 3.89 70.74 8.529 118.11 324.3 3145.9 540.5 3340.5
Adjusted R² values for linear regression against power, velocity, and spot size: 68.00 (Melt-pool Depth), 94.85 (Melt-pool Width), 91.75 (Deposition Height), 89.97 (Deposition Width)
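The regression analysis in Table 1 can be reproduced in a few lines. The sketch below fits only a subset of the rows above (so its adjusted R² will not match the 91.75 reported for the full data set); it is meant only to illustrate the calculation, not to replicate the analysis in [83].

```python
import numpy as np
import statsmodels.api as sm

# A few rows from Table 1: power [W], velocity [mm/s], spot size [mm], deposition height.
power      = np.array([1000, 1000, 1000, 1000, 2000, 2000, 2000, 2000], dtype=float)
velocity   = np.array([4.23, 10.58, 16.93, 10.58, 4.23, 10.58, 16.93, 10.58])
spot_size  = np.array([1.89, 1.89, 1.89, 2.89, 1.89, 1.89, 1.89, 2.89])
dep_height = np.array([2108.1, 437.8, 383.8, 637.8, 864.9, 1124.3, 702.7, 1135.1])

# Ordinary least squares of deposition height on the varied process parameters.
X = sm.add_constant(np.column_stack([power, velocity, spot_size]))
fit = sm.OLS(dep_height, X).fit()
print(fit.rsquared_adj)   # adjusted R^2 for this subset of the data
```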

B. Design of Experiments

The distinctive aspect of AM compared to traditional subtractive and formative manufacturing processes is the relatively tight coupling of the part geometry (shape), the microstructure evolved, and the process conditions [84]–[86]. In other words, the shape, microstructure, and process conditions interact to influence the functional integrity of the part, such as its strength, fatigue life, and adherence to geometric and dimensional specifications, among others. This coupling of part shape, process parameters, microstructure, and part properties is rather weak in conventional manufacturing; for instance, in subtractive machining, although the near-subsurface microstructure is influenced by the cutting conditions and geometry, the bulk microstructure is largely unaltered. Some of these process–structure–property relationships in AM are exemplified in Fig. 18.

Fig. 18. Complex part design–process parameter–property linkages in AM.

This intricate interaction in AM lies at the crux of the large uncertainty in part quality, and accordingly, the use of traditional DOE-based methods to achieve optimal processing conditions is constrained for the following reasons.

1). Large Number of Key Process Input Variables Can Be Adjusted and Several Output Variables Need to Be Simultaneously Optimized:

For example, in the LPBF process alone, a schematic of which is shown in Fig. 19, over 50 process input variables are known to influence the part properties [76], [87]. Taking just the example of LPBF, the key input variables can be grouped into two main categories, namely, boundary condition factors and controllable input parameters, as demarcated in Table 2. The former, boundary condition-related factors, are further divided into two groups: 1) part design-related and 2) material-related aspects. The latter category of controllable input parameters has three further subdivisions: 1) environmental factors; 2) process–machine factors; and 3) the characteristics of the energy source, such as the laser, optics, and scanning factors.

Fig. 19. The large number of process variables in the LPBF AM process makes process optimization using DOE expensive and untenable.

Table 2. Boundary Conditions and Controllable Input Parameters in LPBF Processes

Boundary Condition Factors Controllable Input Parameters
Boundary Condition Factors — Part and Build Factors:
• Location of support structures
• Contact area and type of supports
• Part orientation
• Overhang
• Platen (substrate) thickness and finish
• Placement of parts on the bed

Boundary Condition Factors — Material Factors:
• Material type and purity
• Powder particle size and distribution
• Amount of powder reused from previous builds
• Powder compaction
• Foreign residue as a result of reprocessing
• Absorptivity and emissivity characteristics

Controllable Input Parameters — Environmental Factors:
• Oxygen concentration
• Chamber temperature
• Gas flow
• Spatter and debris
• Chamber evacuation gas (nitrogen, argon)

Controllable Input Parameters — Process–Machine Factors:
• Cleanliness of the lens and exhaust efficiency
• Presence of residue from previous builds
• Bed alignment (gap and skew)
• Bed temperature
• Layer height
• Precision of machine elements
• Blade type and rake angle
• Blade/roller defects
• Blade scan speed and dosing parameters
• Interlayer cooling time

Controllable Input Parameters — Energy Factors (laser, optics, and scanning):
• Laser power
• Rastering pattern, scan speed, and hatch distance
• Lens integrity
• Focus height above the bed

Moreover, researchers have found that key process output variables may conflict with each other. For instance, part strength and geometric integrity are known to conflict: while increasing the infill percentage can increase the strength of the part, the added material tends to create large residual stresses, which cause the part to warp [88], [89].

2). Influence of Part Geometry, Process Parameters, and Build Strategy on the Build Quality:

In AM, the mechanical and physical properties of the final part are governed by thermal aspects, such as the heat flux and the cooling time between layers. These thermal aspects are, in turn, functions of the process parameters, part geometry, support structures, and build plan. Hence, process parameters optimized using rudimentary test coupons for one type of geometry typically may not carry over to another geometry. To explain further, in current metal AM processes, such as LPBF, process parameters, such as the laser power (P) [W], velocity (V) [m·s−1], hatch spacing (H) [m], and layer height (T) [m], are aggregated in terms of the incident laser energy per unit volume, called the global energy density, Ev = P/(V × H × T) [J/mm3], which, when coupled with the scanning strategy, determines the average rate of heat input at the build surface. However, the global energy density alone is not sufficient to ensure part quality because, apart from the part geometry and process parameters, the placement of parts on the build plate and the shape and placement of other parts in the build plan (build layout) also influence the cooling rate. For example, Fig. 20 depicts the XCT images of the cross sections of Inconel 718 cylinders made using the LPBF process [53]. The parts were built simultaneously using a commercial LPBF machine. The part demarcated as Disc B was built under the so-called default, factory-optimized process conditions recommended by the manufacturer for Inconel 718. Nonetheless, the part shows pronounced lack-of-fusion porosity.
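As a quick arithmetic illustration of the global energy density expression, the snippet below evaluates Ev for one combination of the settings mentioned earlier in this article; the layer height of 0.04 mm is an assumed value, not one reported here.

```python
def global_energy_density(P_watts, V_mm_s, H_mm, T_mm):
    """E_v = P / (V * H * T) in J/mm^3: aggregate laser energy per unit volume."""
    return P_watts / (V_mm_s * H_mm * T_mm)

# Representative LPBF settings; the layer height T = 0.04 mm is an assumption.
print(global_energy_density(P_watts=340, V_mm_s=1250, H_mm=0.12, T_mm=0.04))  # ~56.7 J/mm^3
```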

Lack-of-fusion porosity, also called acicular porosity, occurs due to poor consolidation of the material under insufficient energy. The energy density for Disc B is close to 80 J/mm3. However, increasing, indeed doubling, the energy density Ev to 160 J/mm3, as in the case of Disc A, did not eliminate the lack-of-fusion porosity. The reason for this observation lies in the placement of the parts on the build plate and requires an understanding of the manner in which the laser beam is focused on the powder bed. In LPBF systems, the laser typically operates in the IR region with a wavelength in the vicinity of 1050 nm, and the beam is rastered in the xy plane by the galvanometer mirror assembly and focused on the build plate by means of an optic called the fθ lens. This lens is designed to maintain a constant focal length (f) irrespective of the angle of incidence (θ) of the laser beam after it is directed by the galvanometer mirror assembly. A drawback of the fθ lens is that, at extreme incidence angles, corresponding to the edges of the build plate, the focal length tends to deviate from the desired setpoint. In other words, the beam tends to become defocused at the edges, and hence, building parts near the edges is not advisable, as the energy delivered may not be sufficient to melt the material. Some of the newer LPBF systems, such as the Renishaw RenAM 500M system, have overcome this problem by replacing the fθ lens with a dynamic focusing system.

We note that both Disc B and Disc A are placed in the far corners of the build plate (the recoater scans from right to left), and since the LPBF system uses an fθ lens, there is a possibility of exacerbated defocusing of the laser beam. This claim is substantiated in the case of Disc D, which has a smaller global energy density applied to it (107 J/mm3) than Disc A but is nominally devoid of porosity. This example demonstrates that setting the process parameters to offline-optimized process conditions based on ideal assumptions is not guaranteed to result in flaw-free parts in AM. Indeed, the placement of the parts on the build plate is also an important factor.

The geometry of the part beneath the powder bed in LPBF determines the rate at which heat is conducted away from the build surface (heat flux) and, hence, governs the cooling rate, which, in turn, influences defects, such as cracking and microstructure heterogeneity. The placement of supports is an important aspect of the part geometry because supports serve as conduits for heat to dissipate [37], [90].

Furthermore, if more parts are added to the build plate, the time for scanning a layer increases, and therefore, the heat from a previously melted region has a longer time to dissipate, which, in turn, alters the cooling rate. Thus, if any aspect of the build layout changes (for instance, new parts are added or taken away, the orientation of a component is altered, or the scanning strategy and order are varied), then these changes will affect not just one part but potentially every part on the platen during that build. Consequently, a part must be requalified when it is built as part of a different build layout.

3). Empirical Testing Is Expensive:

In AM, and more so in metal AM, the consumables are prohibitively expensive (the cost of powder material, such as titanium, can exceed several hundred dollars per pound), the process is slow (a Φ 8 mm × 60-mm-tall build takes approximately 180 min), and only a few parts can be made at a time. Moreover, postprocess destructive mechanical testing is expensive, and there are no standard approaches to ascertain the mechanical properties of complex objects, such as lattices. Indeed, nondestructive testing approaches, such as XCT, are cumbersome, and their resolution progressively degrades with the material density and part size.

4). Sensitivity to Disturbances (Nonstationarity) Makes Maintaining Stable Experimental Conditions Difficult:

One of the main tenets of statistical DOE is that the process should remain stationary for the duration of the test. This condition is not strictly true in AM, as the process conditions tend to fluctuate. For instance, in LPBF, during long experimental builds, hot residue, such as vaporized material from the printing process, tends to accumulate in the cooler areas of the machine; soot buildup on the optics then leads to occlusion of the laser beam. Consequently, the shape of the laser beam and the power delivered tend to drift over time, which, in turn, affects the part properties.

Likewise, the morphology of the top surface of the part tends to change in DED. In contrast to LPBF, the top surface in DED is not relatively flat but has an uneven wavy surface. This wavy surface emerges because only a part of the material may be melted and adhered to the surface due to a variety of reasons, such as insufficient energy to melt the surface, loss of powder in the stream, and either too much or too little material flow. Subsequently, the distance between the top surface and the powder delivery nozzle (called the standoff distance) varies from its initial setpoint. If the standoff distance between the part and nozzle decreases, more power tends to be delivered, and accordingly, more volume of the powder is melted, leading to a further decrease in the standoff distance. Eventually, the deviation of the standoff distance from the setpoint will rapidly exacerbate; the standoff distance will decrease and the nozzle may eventually crash into the part.

On the other hand, if the standoff distance increases, the power delivered is insufficient to melt the powder, and the standoff distance will continue to grow, causing the laser beam to walk off from the part. Such process drifts inherent to AM processes cause the part properties to vary along the build direction and, as a consequence, induce a large spread in the measurement of the output variables.

For example, the XCT images of titanium alloy coupons deposited using the DED process are shown in Fig. 21 [91]. One of these parts was deposited under suboptimal process conditions; the laser power (300 W) was insufficient to melt the material, and the part manifests long lack-of-fusion flaws. After extensive testing, it was found that, when the laser power is increased from 300 to 475 W, the lack-of-fusion flaws are mitigated; however, a relatively small flaw is still evident, whose root cause cannot be pinpointed. In other words, there is a stochastic (random) aspect to defect formation.

Fig. 21. Two DED parts (15 mm × 15 mm × 10 mm) showing (left) systemic flaws due to poor selection of processing conditions and (right) random (stochastic) flaws that tend to occur even under nominally flaw-free conditions [91].

These challenges pose considerable uncertainty concerning the generalizability and effectiveness of conventional statistical DOE in AM. To address these concerns, researchers have explored several strategies. First, to reduce the number of expensive empirical tests required, sequential and evolutionary DOE strategies have been demonstrated [92]. The key idea of the evolutionary optimization approach is to use previous experiments to inform the next set of experiments. One approach to evolutionary optimization is to conduct a set of experiments and test for the key process output variables; based on the results, the next set of experiments is conducted in the vicinity of those process settings that produced outcomes closest to the desired values (a minimal sketch of this idea follows). Another approach is to use a technique called minimum-energy DOE, which provides a set of candidate points using Bayesian analysis [93].
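The following is a minimal sketch of the evolutionary idea of recentering each round of experiments on the best settings observed so far. The response function, starting point, and step sizes are hypothetical; the cited methods in [92], [93] are considerably more sophisticated.

```python
import numpy as np

def evolutionary_doe(evaluate, start, step, n_rounds=5, n_candidates=6, rng=None):
    """Greedy sequential DOE sketch: each round tests candidate settings near the
    current best point and recenters on the best observed outcome (to be minimized)."""
    rng = np.random.default_rng(rng)
    best_x = np.asarray(start, float)
    best_y = evaluate(best_x)
    for _ in range(n_rounds):
        candidates = best_x + rng.uniform(-step, step, size=(n_candidates, len(best_x)))
        outcomes = np.array([evaluate(x) for x in candidates])
        if outcomes.min() < best_y:
            best_y, best_x = outcomes.min(), candidates[outcomes.argmin()]
        step = step * 0.7                  # shrink the search neighborhood each round
    return best_x, best_y

# Hypothetical response: porosity as a function of (power, velocity); illustration only.
porosity = lambda x: (x[0] - 300) ** 2 / 1e4 + (x[1] - 1000) ** 2 / 1e6
print(evolutionary_doe(porosity, start=[250, 800], step=np.array([30.0, 80.0])))
```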

Another strategy is to augment DOE with machine learning models trained on the available data set. In this regard, King et al. suggest including results from simulation models to rapidly narrow the process conditions. With regard to the development of experimental data sets, extensive part design and testing strategies have been formalized by the ASTM F42 Committee.

As noted earlier, the global energy density alone is not sufficient to ensure part quality because the part geometry, the process parameters, and the shape and placement of other parts in the build plan (build layout) all influence the cooling rate. The uncertainty introduced in the component quality by the complex interdependence between material, part geometry, process parameters, and build plan negates one of the most attractive aspects of AM: the flexibility to implement changes to the part design without the need for extensive optimization of the process parameters. This process complexity in AM strengthens the case for supplanting an empirical build-and-test optimization approach with a thermal physics-driven methodology.

C. Simulation Modeling and Analysis

Computationally efficient and accurate physical models are critically needed for AM to: 1) narrow the process parameter space for a property of interest; 2) identify red-flag problems in the part design; 3) aid support placement, build orientation, and build planning; 4) predict the distortion and microstructure evolved; and 5) augment process control by providing a model-based baseline for adjusting the process (feedforward control) [94]–[97]. From a broader vista, simulation in AM can be categorized into three classes contingent on the dominant phenomena: thermal, fluid, and photopolymerization based. To explain further, in the metal AM processes, such as LPBF and DED, the energy applied in joining the layers is supplied by a laser; accordingly, researchers have focused on modeling the thermal phenomena in metal AM. Melting- and extrusion-based polymer AM approaches, such as fused filament fabrication, may also be considered to fall under the category of thermally initiated AM. Processes such as binder jetting and aerosol jet printing are governed by the mechanics of droplet formation, fluid flow, and wetting. Finally, material jetting and stereolithography are governed by photochemical reactions.

In this article, we have chosen to focus on metal AM processes given their popularity in high-value industries, such as aerospace and biomedical. The industrial interests in LPBF and DED have propelled active research in simulation modeling of these processes, with several commercial ventures being initiated in the last decade. The three key problems faced by researchers in this area are as follows:

  1. simulation time;

  2. coupling of phenomena across multiple scales;

  3. difficulty in experimental validation.

These difficulties originate because thermal modeling in LPBF and DED involves multiscale physics, starting at the meltpool level, progressing to the layer level, and, finally, reaching the part level [42], [96], [98]. The various process–part thermal interactions in the LPBF and DED processes are depicted in Fig. 22. The meltpool- or particle-level dynamics are tied to material solidification rates and the interaction of the laser beam with the powder and, hence, are key to predicting the microstructure evolved and, as a consequence, mechanical properties, such as hardness, strength, and fatigue life [99]. Next, in ascending order, is the so-called mesoscale or track level, which ranges from a few hundred micrometers to under a millimeter. The aim of track-level simulations is to predict the consolidation of the powder and the dynamic evolution of the meltpool as the laser is scanned, which is consequential to the density of the part formed. Finally, at the macroscopic or part level, which ranges from millimeters and beyond, the thrust is to predict the thermal-related residual stresses and geometric deformation.

Fig. 22. Thermal phenomena in metal AM processes range across multiple scales, from the meltpool level to the part level [100].

At the meltpool or particle level, the interaction of the laser beam with particles is the focus. Particularly, in LPBF, the energy absorbed by the material is a function of its reflectivity (in electron beam PBF, the electronegativity is of importance). Highly reflective material will tend to absorb a smaller fraction of the incident laser energy. Furthermore, the laser is reflected repeatedly by the powder particles when it is incident on the powder bed. This is advantageous to material melting as the energy absorbed by the material increases on account of multiple reflections. The laser–particle interaction is also important for understanding the formation of the pinhole (due to vaporization) and keyhole-type porosity. The former occurs for one of three reasons: first, the vaporization of remnants of moisture on the surface of powder particles; second, the escape of gases trapped within the meltpool; and third, the vaporization of impurities within the powder that have a lower melting point than the desired alloy. Keyhole melting porosity occurs at inordinately high laser energy conditions, which causes the powder to vaporize and create a cavity. This cavity serves to further focus the laser into a narrow beam, exacerbating the vaporization of more material. Eventually, the surrounding material falls into the cavity and fills it incompletely, causing a void (keyhole collapse). In the case of the DED process, the simulation at this scale includes modeling the interaction of the falling powder with the carrier gas, as well as with the laser.

At the track level, the simulations must take into account surface tension-related phenomena, such as the Plateau–Rayleigh effect, which is the root cause of meltpool instability and, consequently, inferior consolidation of the material. Furthermore, at the meltpool level, the material changes from solid to liquid and back to solid again; as a result, latent heat effects cannot be neglected. Simulations at this scale have been used to model the segregation or breakup of the meltpool into discrete chunks, called balling. This phenomenon is typically observed underneath unsupported features in the part and is related to the accumulation of heat in a region. The temperature increase causes the surface tension of the meltpool to decrease, which, in turn, leads to an increase in its length. The inordinate increase in meltpool length triggers the onset of the Plateau–Rayleigh instability, causing the meltpool to break up into discrete chunks. Each of these chunks eventually coalesces into a spheroid shape. The occurrence of the balling phenomenon is tied closely to the laser power and hatch spacing. This example serves to emphasize that the dynamics at the meltpool and track levels involve both fluid and heat transfer phenomena.

Finally, at the part level, the prediction of the temperature distribution has garnered commercial interest, with the emphasis on four aspects: 1) predicting distortion during and after the build; 2) possibility of a recoater crash due to part distortion during the build; 3) optimizing part orientation and placement of supports; and 4) build layout planning. To explain further, the three main factors that influence the thermal distribution at the part level in LPBF are as follows:

  1. the geometry of the part, including features such as steep overhangs, and the presence of anchoring supports [90], [101]–[103];

  2. type and characteristics of the feedstock material and process parameters, such as the laser power, hatch spacing, layer thickness, laser scan velocity, and scanning strategy, which influences the average heat input (global energy density) [104];

  3. the time required for scanning a layer and the interval between the melting of successive layers (interlayer cooling time), which are functions of the build layout determined by the number, geometry, orientation, placement, and scanning sequence of other parts on the build plate.

At the part level, the effect of meltpool-level phenomena (e.g., latent heat effects) is neglected to aid computation. Mathematically, the aim is to solve the heat diffusion equation, in which conduction is the modeled mode of heat transfer, and radiative and convective effects are considered post facto, that is, after the heat diffusion equation is solved. The heat diffusion equation takes the following form:

$$\rho c_p \frac{\partial T}{\partial t} - k \left( \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2} \right) T = Q. \tag{1}$$

Solving the heat equation results in the instantaneous temperature T(x, y, z, t) at a time t for a Cartesian spatial coordinate (x, y, z). The temporal map of T(x, y, z, t), that is, the trace of the temperature T at the location (x, y, z) over time, gives the temperature history in the part for that location. The right-hand side term is the energy supplied by the laser per unit volume of the material per second (Q). Although the units are identical to the global energy density (Ev), Q is a more encompassing term because of the flexibility to include the effect of the beam shape.
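As a minimal numerical sketch of (1), the snippet below advances a 2-D explicit finite-difference approximation with a moving volumetric source. The material constants, source strength, and grid are placeholders (not calibrated LPBF values), and the wrap-around boundaries from np.roll are used only for brevity; part-level codes use finite elements with element activation, as described below.

```python
import numpy as np

# Explicit 2-D finite differences for rho*cp*dT/dt - k*laplacian(T) = Q.
rho, cp, k = 8000.0, 500.0, 15.0          # placeholder density, heat capacity, conductivity
dx, dt = 1e-4, 1e-4                       # grid spacing [m] and time step [s]
alpha = k / (rho * cp)
assert alpha * dt / dx**2 <= 0.25         # stability condition for the explicit 2-D scheme

T = np.full((60, 60), 300.0)              # initial temperature field [K]
for step in range(200):
    Q = np.zeros_like(T)
    Q[30, min(step // 4, 59)] = 5e12      # moving volumetric heat source [W/m^3] (illustrative)
    # five-point Laplacian; np.roll gives periodic boundaries, acceptable for this sketch
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4 * T) / dx**2
    T = T + dt * (alpha * lap + Q / (rho * cp))

print(T.max())                            # peak temperature after the simulated scan
```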

The benchmark computational approach for solving the heat equation originates in the welding literature, as exemplified in the work of Goldak et al. [105]. This model, called Goldak’s double-ellipsoid model, considers, as the name suggests, the laser source to be ellipsoidal in shape. The beam energy is assumed to be concentrated at the center and to dissipate near the boundary of the ellipse. In the AM context, researchers tend to model the beam as ellipsoidal and the energy distribution within it as Gaussian. A key difference between welding and AM is that, in the latter, the heat source has a smaller profile, and the translation speed (scan velocity) is an order of magnitude higher. Consequently, the cooling rates in LPBF approach the order of 10⁵ °/s. In DED, the spot sizes are much larger than in LPBF.

The main problem faced in part-level thermal modeling is the evolving nature of the part geometry in AM. To explain further, the model must take into account the change in the computational domain and boundary conditions as the material is deposited layer upon layer. The key challenge, from a finite-element modeling perspective, is to keep track of the elements [106]. Typically, this is done through the element birth-and-death approach, in which the elements are activated incrementally as material is deposited. The second approach is the quiet element method, wherein the part is meshed beforehand, but the thermal properties of an element are activated at the appropriate interval. Commercial software, such as Netfabb, makes use of a hybrid strategy involving both the quiet element and birth-and-death approaches. It may be noted that researchers at the Lawrence Livermore National Laboratory have developed comprehensive multiscale modeling tools based on their extensive code base, from the mesoscale (ALE3D) to the part level (Diablo). Techniques such as finite difference and discrete element methods have also been employed to solve the heat diffusion equation [107]. Newer approaches based on circuit theory and graph analysis have been introduced for mapping the thermal distribution in AM [100], [108].

Fig. 23 shows a schematic of the mesh-free graph theory approach to solving the heat diffusion equation. The key idea is that the discrete heat diffusion equation is solved as a function of the eigenvalues and eigenvectors of the Laplacian matrix of a graph projected onto the geometry of the part. The main advantage of the graph theory approach is that the temperature distribution of a part can potentially be computed many times faster than with FE because: 1) graph theory eliminates time-consuming meshing steps and 2) it avoids the cumbersome matrix inversion operations needed to solve the heat equation and, instead, uses matrix eigendecomposition.
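The spectral idea can be illustrated with a small sketch: sample nodes from the geometry, build a weighted graph Laplacian, and propagate an initial temperature field through the matrix exponential via eigendecomposition. The node sampling, weight kernel, and constants below are illustrative assumptions, not the construction used in [100], [108].

```python
import numpy as np

# Sample surrogate nodes inside a unit slab and build a Gaussian-weighted graph Laplacian.
rng = np.random.default_rng(0)
nodes = rng.uniform(0, 1, size=(200, 3))               # stand-in for points inside the part
d2 = ((nodes[:, None, :] - nodes[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / 0.02)                                 # neighborhood weights (illustrative)
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(1)) - W                              # combinatorial graph Laplacian

# Spectral solution of dT/dt = -g * L @ T:  T(t) = V exp(-g * Lambda * t) V^T T(0).
eigvals, eigvecs = np.linalg.eigh(L)
T0 = np.where(nodes[:, 2] > 0.9, 1.0, 0.0)             # heat initially near the top layer
g, t = 0.05, 2.0                                       # diffusivity-like constant and time
T_t = eigvecs @ (np.exp(-g * eigvals * t) * (eigvecs.T @ T0))
print(T_t.min(), T_t.max())
```

Because the eigendecomposition is computed once, temperature fields at different times follow from simple matrix-vector products, which is the source of the speedup claimed for the graph theory approach.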

Fig. 23. A graph theory approach for simulating the LPBF process: Step 1: convert the geometry to a set of discrete nodes; Step 2: network construction; Step 3: simulation modeling of laser sintering and heat transfer; and Step 4: analysis of the temperature distribution [100], [108].

Furthermore, the graph theory approach was verified against a finite-element implementation of the so-called gold-standard Goldak double-ellipsoid model, which has its genesis in welding [105]. The graph theory solution was also quantitatively compared with the commercial Netfabb solution. The results for three part geometries are shown in Fig. 24. The graph theory simulation accurately predicts the accumulation of heat in the overhang region of a C-shaped part. Moreover, the approach also predicts that heat trapped in an overhang region can be dissipated by building extra supports. More pertinently, the graph theory approach converged to within 90% of Goldak’s solution in under 10% of the computation time. The fast convergence of the graph theory approach opens the possibility of recognizing and correcting red-flag problems in part design even before the part is printed. In other words, thermal simulations can be used as a viable path for design optimization in AM.

Fig. 24. Comparison of the graph theory approach with an FE implementation of Goldak’s model and the commercial Netfabb software for three different part geometries [100], [108]. The images show the temperature distribution in the last layer of the part (20-mm long × 2-mm wide × 20-mm tall), in normalized units.

VI. CONTROL THE PROCESS

This section presents the learning and optimization of action strategies for AM QC when the state of the build is dynamically evolving from one layer to the next. Because the finish of each layer will impact the next layer and all subsequent layers, this is a typical sequential decision-making problem under real-world uncertainty (e.g., random variations, perturbations, or errors from measurements, machine settings, environments, and statistical estimation). Furthermore, we present a constrained framework for sequential decision-making. Examples of constraints include the lead time to complete a build and the materials and/or energy consumed in the manufacturing process.

A. Sequential Decision-Making Under Uncertainty

Modern industries pose increasingly stringent standards for product esthetics, QA, and functional integrity. Thus, it is critical that AM machines can mitigate incipient defects. Hybrid machines with both additive and subtractive manufacturing capabilities provide an opportunity to take corrective actions and perform layerwise repairs, thereby realizing a new paradigm of zero-defect AM [109]. For instance, sensor-based analytical methods (see Section IV) help characterize and estimate the state of defects in each layer of the AM build. If a layer is estimated to have a small likelihood sl of containing defects, the AM process will continue and take no corrective action, denoted as aW. On the other hand, if a layer has a high likelihood sh of containing embedded defects, the AM process will pause and take an action to machine off this defective layer, denoted as aM. The number of available actions depends, to a great extent, on the technological advancement of hybrid machines. For example, for defects due to lack of fusion, a potential action is to re-fuse the material with the laser and mitigate such defects, denoted as aL. If more actions are available after each layer is built, then the dynamic transitions among state-action pairs will become more complex. This is mainly because AM layers are not independent but rather highly interrelated with each other. As shown in Fig. 25(a), the action chosen for one layer will impact the evolving dynamics of defect states in the next layer and, through that, all subsequent layers. In addition, there are uncertainties in the sensor measurements, machine settings, environments, defect estimation, and layer-to-layer transitions. The new sequential optimization framework needs to account for the uncertainty in AM processes and realize zero-defect AM by minimizing the expected cumulative cost when all layers are completed.

Fig. 25. (a) Illustration of the state-action transition diagram. Note that sh, sm, and sl denote the high-, medium-, and low-defect states of an AM layer. (b) MDP for smart AM.

As shown in Fig. 25(b), each layer of the AM build will be captured by sensors (i.e., high-resolution cameras) as imaging profiles. The probability for a layer to contain defects (e.g., sh, sm, and sl) will be estimated with sensor-based analytical methods, such as layerwise deep learning of incipient defects [77], [78]. The sequential decision-making framework for smart AM is formulated as a Markov decision process (MDP) model. Although the MDP has been widely used and proved to be effective in the management of engineering systems [110]–[113], very little has been done to realize smart AM using MDPs. Our prior work formulated this problem as an MDP corresponding to a five-tuple (Ω, S, A, T, R), where Ω is the set of sensor observations, S is the set of defective states, A is the set of actions, T : S × A × S represents the state transition function, and R is the reward function. The main objective is to search for an optimal policy π*(s) specifying the optimal action a* in state s, which will maximize the sum of rewards obtained after taking the action a* and, thereafter, acting optimally (a minimal value-iteration sketch follows the list below).

  1. States, Actions, and Observations: The complexity of AM poses challenges in measuring and characterizing the exact defective state of a layerwise build. As shown in Fig. 14, we have developed a DNN learning method that tackles the challenge of layerwise geometrical variations and then estimates the risk probability of defects in a layer. As such, we can take full advantage of in-process image profiles and integrate them with MDP models. Each action affects the state transitions between layerwise builds in the AM process. Here, the actions generally available in hybrid AM may include doing nothing, cutting off a layer, laser re-fusion, or process adjustments.

  2. State Transitions: p(s, a, s′) provides the probability that an intervention a in state s at layer i will lead to the state s′ at layer i + 1. The transition probabilities can be estimated from the rich data collected in AM processes but are influenced by the uncertainty in sensor measurements and process conditions. Few works in the AM literature have studied sequential decision-making under uncertainty.

  3. Reward Function: R(s, a, s′) is the reward that the decision agent receives for a specific state transition. For example, if an action drives the defect likelihood from high to low, it will be rewarded; otherwise, the action will be penalized. The utility V*(s) represents the sum of rewards received when starting in state s and acting optimally, and Q*(s, a) is the utility when taking action a from state s and, thereafter, acting optimally.
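The value-iteration sketch below solves a tiny three-state, three-action MDP of this kind. The transition probabilities, rewards, and discount factor are fabricated for illustration; they are not estimated from AM data or taken from [114], [115].

```python
import numpy as np

states  = ["low", "medium", "high"]               # defect likelihood of the next layer
actions = ["do_nothing", "refusion", "machine_off"]

# P[a][s, s'] and R[a][s]: illustrative numbers only (each row of P sums to 1).
P = {"do_nothing":  np.array([[0.80, 0.15, 0.05], [0.20, 0.60, 0.20], [0.05, 0.25, 0.70]]),
     "refusion":    np.array([[0.90, 0.08, 0.02], [0.60, 0.30, 0.10], [0.30, 0.50, 0.20]]),
     "machine_off": np.array([[0.95, 0.04, 0.01], [0.80, 0.15, 0.05], [0.60, 0.30, 0.10]])}
R = {"do_nothing":  np.array([1.0, 0.0, -2.0]),
     "refusion":    np.array([0.5, 0.3, -0.5]),
     "machine_off": np.array([0.2, 0.1, 0.0])}

gamma, V = 0.95, np.zeros(len(states))
for _ in range(500):                               # value iteration to a fixed point
    Q = np.array([R[a] + gamma * P[a] @ V for a in actions])
    V = Q.max(axis=0)
policy = [actions[i] for i in Q.argmax(axis=0)]    # greedy policy per defect state
print(dict(zip(states, policy)))
```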

Furthermore, we performed preliminary studies to develop a novel “sensing–modeling–optimization” framework tailored for AM processes. First, we leveraged the advanced sensing capabilities readily available at Penn State’s CIMP-3D to collect large amounts of layerwise image data. Second, we developed new sensor-based models to estimate the risk probability for a layer to contain defects and then predict the evolving dynamics of defect conditions from one layer to the next. Third, new MDP models were developed to model the state-action transition dynamics among layers as a stochastic Markov process and to derive the optimal control policy [114], [115]. The new “sensing–modeling–optimization” framework enables the implementation of in-process corrective actions to repair and counteract incipient defects in each layer of the AM build prior to completion. The propagation of defects will be detected by sensor-based modeling and analysis of in situ data and will be mitigated long before it reaches a nonrecoverable stage.

B. Constrained Optimization of AM Processes

MDP helps optimize the policy for choosing layerwise actions by maximizing expected rewards (or minimizing the expected cumulative cost incurred by AM defects) for a sequential decision-making problem in the real-world AM environment. Traditional MDP frameworks commonly focus on a single objective (e.g., minimizing the defects in each layer of the AM build) [114] and are less concerned with multiple simultaneous objectives that may be added to the AM process (i.e., minimizing total cost—wasted materials, consumed energy, or lead time, as well as improving the quality). As a vital step toward smart and sustainable AM, there is an urgent need to investigate the multiobjective optimization of sequential decision-making problems for 6S quality management of AM.

If there are multiple objectives, for example, minimizing total cost (e.g., lead time or consumed energy) in the AM process while improving the quality of layerwise builds, then sequential optimization becomes a challenging task because some objectives may be conflicting with others. For instance, if we increase the frequency to take corrective actions and make sure that each layer has a small likelihood to contain defects, then the lead time to complete the build will be longer, and more energy will be consumed. In other words, the number of defects will be minimized in each layer of the build, but the total cost will be high. On the other hand, if we do not take as many corrective actions as needed, the build can be completed in a shorter period of lead time, and less energy will be consumed. The total cost is low, but the likelihood to contain defects in the AM build will be higher. In the state of the art, few, if any, previous works have considered multiobjective optimization of the sequential decision-making strategy for AM processes. In particular, there is a need to balance multiple conflicting objectives for the quality management of AM builds.

To address these challenges, our prior work proposed a new constrained MDP (CMDP) framework to derive the optimal control policy in each layer of the AM process that minimizes the total cost (e.g., lead time or consumed energy) while making sure that the quality standards are met for the AM builds [115]. The CMDP formulation is detailed as follows:

  1. State Space: The state space is defined as S = (T, S), where T = {1, 2, ..., T} denotes the set of layer indices, and S is the set of defect states, i.e., s1, s2, ..., sl, ordered by increasing defect level (i.e., s1 is the lowest defect level, and sl is the highest defect level).

  2. Action Space: In this study, the action space is simplified to include three actions, A = {aM, aL, aW}, where aM denotes the action of removing a layer with the cost of cM, aL is the action of laser repair and refusion with the cost of cL, and aW represents the action to do nothing with the cost of cW. With rapid advances of hybrid AM technology, it is anticipated that more actions will be available with different costs to be considered in future work.

  3. Decision Policy: Let Qt(st, a) denote the decision rule at layer t, which is defined as the probability of choosing an action a ∈ A given the defect state st at layer t.

  4. State Transition: Let $P_t^a(s_{t+1} \mid s_t)$ be the transition probability from state $s_t$ of layer $t$ to state $s_{t+1}$ of layer $t + 1$ under the action $a \in A$. Given the decision policy $Q_t(s_t, a)$, the state transition matrix is then defined as
    $$M_t(i, j) = \sum_{a \in A} Q_t(s_t = s_i, a)\, P_t^a(s_{t+1} = s_j \mid s_t = s_i).$$

Let the vector $x_t = [x_t^1, \ldots, x_t^l]^T$ (with $\mathbf{1}^T x_t = 1$, where $\mathbf{1}$ is a vector of ones) represent the probability distribution of the defect state $s_t \in \{s_1, \ldots, s_l\}$ at layer $t$; that is, $x_t^i$ is the probability that the defect state $s_t$ is at level $s_i$. Then, $x_t$ evolves according to

$$x_{t+1} = M_t x_t.$$

The CMDP model will then be formulated as follows:

$$\begin{aligned} \min_{Q_1, \ldots, Q_{T-1}} \quad & v_T = \mathbb{E}_{x_1}\!\left[\sum_{t=1}^{T-1} c_t(x_t, Q_t) + c_T\right] \\ \text{s.t.} \quad & x_t \le h, \quad \mathbf{1}^T x_t = 1 \\ & x_{t+1} = M_t x_t, \quad Q_t \mathbf{1} = 1, \quad Q_t \ge 0 \quad \text{for } t = 1, 2, \ldots, T-1 \end{aligned}$$

where $Q_t$ is the decision matrix for layer $t$, $v_T$ is the expected total cost in energy or time, $c_t(x_t, Q_t) = \sum_{a \in A} c_a Q_t(s_t, a)$ is the immediate cost at layer $t$, and $c_T$ is the terminal cost at the final layer $T$. The first constraint makes sure that the quality standards are met by bounding the probability of each defect state with an upper limit $h$, where $0 \le h \le 1$. The second constraint guarantees that each row of $Q_t$ is a valid probability distribution. If we delete the quality constraint (i.e., $x_t \le h$) in the CMDP model, then the rows of $Q_t$ are independent and uncorrelated, and the CMDP model can be solved with dynamic programming and simple backward induction algorithms. However, due to the quality constraint on the distribution $x_t$ of the defect state $s_t$, the rows of $Q_t$ are coupled through the state-action transition dynamics $x_{t+1} = M_t x_t$. As a result, it is difficult to solve the CMDP model with traditional dynamic programming algorithms. Therefore, our prior work developed new dynamic programming algorithms to solve the CMDP model and demonstrated the optimal control policy for the worst case scenario of the probability distribution of defect states [115].
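The CMDP dynamics can be exercised with a short policy-evaluation sketch: given a fixed decision matrix, it forms the layer transition matrix M_t as defined above, accumulates the expected immediate costs, and checks the quality constraint. The costs, transition probabilities, and policy below are illustrative placeholders, and the terminal cost is omitted; this is not the dynamic programming algorithm developed in [115].

```python
import numpy as np

T = 10                                      # number of layers
actions = ["wait", "laser", "machine"]
c = np.array([0.0, 1.0, 3.0])               # illustrative per-action costs c_a

# P[a][i, j]: probability of moving from defect level i to level j under action a (illustrative).
P = {"wait":    np.array([[0.80, 0.15, 0.05], [0.10, 0.70, 0.20], [0.00, 0.20, 0.80]]),
     "laser":   np.array([[0.90, 0.08, 0.02], [0.50, 0.40, 0.10], [0.20, 0.50, 0.30]]),
     "machine": np.array([[0.95, 0.04, 0.01], [0.70, 0.25, 0.05], [0.50, 0.40, 0.10]])}

# Fixed decision matrix Q[s, a]: probability of choosing each action in each defect state.
Q = np.array([[1.0, 0.0, 0.0],              # low defect level    -> wait
              [0.0, 1.0, 0.0],              # medium defect level -> laser re-fusion
              [0.0, 0.2, 0.8]])             # high defect level   -> mostly machine off

x = np.array([0.7, 0.2, 0.1])               # initial defect-state distribution x_1
h, total_cost, feasible = 0.35, 0.0, True   # per-state quality bound and running totals
for t in range(T - 1):
    M = sum(np.diag(Q[:, k]) @ P[a] for k, a in enumerate(actions))   # M_t(i, j) as defined above
    total_cost += x @ (Q @ c)                                         # expected immediate cost c_t
    x = x @ M                               # layer-to-layer update (row-vector form of x_{t+1} = M_t x_t)
    feasible = feasible and bool(np.all(x <= h))                      # quality constraint x_t <= h
print(total_cost, x, feasible)
```

Searching over decision matrices that keep the constraint feasible while lowering the accumulated cost is precisely the optimization that the CMDP formulation above addresses.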

In the proposed “sensing–modeling–optimization” framework, in situ sensor signals, which exemplify specific process defects, are integrated with AI, machine learning, and CMDP models to optimize the selection of corrective actions for smart and sustainable AM. In addition, the objectives can be extended to include the minimization of delamination and warpage of the final workpiece and the maximization of reliability measures, such as build strength and fatigue resistance. As opposed to purely data-driven approaches, which cannot suggest process adjustments, this sensor-based modeling and optimization approach not only detects process anomalies but also guides the optimal corrective action, thereby enabling closed-loop control of AM build quality and functional integrity.

VII. CONCLUSION AND DISCUSSION

AM provides an unprecedented opportunity to produce complex geometries that are often impossible with traditional subtractive (machining) and formative (casting, welding, and molding) manufacturing processes. Once the quality challenge is tackled, such a capability will result in the advent of newer and cheaper consumer products. Also, AM offers the possibility of taking a computer-generated design and directly putting the build into the hands of an end user. If the designs can be repeatably produced with a very low probability for defects, then new disruptive business models will become possible. A brick-and-mortar retail store will no longer need to carry an inventory of final products. A consumer could simply go to the store or the store’s online website, select a premium and validated product from a catalog, push a button, and wait for the product to be made using an AM process. This so-called “zero lead time” store could see extended applications with at-home AM machinery and systems. Digital designs could be downloaded from the internet and created in the comfort of one’s own home. Nonetheless, these concepts of “zero defects” or “zero lead time” depend to a great extent on the effective management of AM quality to recognize and anticipate defects and then take the appropriate corrective action to control process variability and ensure the final build’s conformance to standards.

However, effective management of AM quality cannot rely merely on the purchase of new machines and the installation of sensing and automation systems; rather, it requires a set of quality-focused activities, ranging from quality planning and QA/QC to continuous quality improvement. Quality planning identifies the needs of AM customers, for example, whether they are interested in zero-defect products, esthetic aspects, or geometric accuracy. Only by listening to the customers can AM manufacturers develop the right strategic plan to help save time and costs in the handling of product returns, warranty charges, and customer complaints. QA/QC focuses on the reduction of process variability and ensures that the quality levels of final builds meet the standards (or specifications) set by the customers. An important QA/QC function is to develop the ontological knowledge graph, document the fundamental elements of the AM process (e.g., suppliers, materials, machines, processes, outputs, and customers in the AM ontology), analyze their relevance to product quality, and identify the responsibilities (and accountability) of each element or business unit. Quality improvement goes beyond QA/QC activities to engage in the continuous improvement of quality toward gaining competitive advantages in the global market. As mentioned in Section V, ontology analytics, DOE, and simulation analyses are the major methods and tools that can help AM manufacturers further improve quality on a continual basis.

Furthermore, quality management is not just the job of the quality-inspection unit in an AM enterprise; it depends on all units involved in the AM process. For example, the design should consider the capability of AM machines and then be optimized for quality. The selection of suppliers should be based not only on cost but also on the quality and timely delivery of raw materials. Indeed, quality management should encompass engineering, operational, and managerial activities to ensure that AM builds conform to standards and to engage in continuous quality improvement. At the same time, quality must not become no one’s responsibility simply because everyone is involved. QA/QC is needed to develop the documentation and policies that explicitly assign the quality-related responsibility and accountability of each person and business unit in the AM process, from procurement engineers to machine operators to higher levels of management. The philosophy of quality management is to emphasize quality, raise awareness, engage each person in the AM process, and communicate quality problems effectively, so as to optimize resource allocation and tackle such problems efficiently.

Unless the quality-related challenges of AM are addressed, it is unlikely that traditional manufacturers will forgo well-established conventional methods. In light of the strategic and economic prize at stake, there is a burgeoning need to address the quality challenges in AM, reduce process variability, and improve AM process repeatability. This article aims to advance the scientific basis of AM quality management. The DMAIC approach for AM quality improvement has the potential to substantially improve the production-scale viability of AM and enable wider exploitation of AM capabilities beyond the current rapid-prototyping status quo. Achieving quality excellence in AM may have consequential socioeconomic impacts in terms of profitability (rapid scaling of process conditions to changing requirements), sustainability (economy of resources and energy through reduced waste, scrap, and rework), and efficiency (minimal effort required to obtain the best-quality product). This will spur the growth of advanced manufacturing in the nation and the world, leading to broader social and economic impacts. It is hoped that this article will help catalyze more in-depth investigations and multidisciplinary research efforts to advance the new practice of 6S quality management for AM.

Footnotes

DISCLAIMER

Certain commercial equipment, materials, suppliers, or software are identified in this article to foster understanding. Such identification does not imply recommendation or endorsement by the National Institute of Standards and Technology, nor does it imply that the materials or equipment identified are necessarily the best available for the purpose.

Contributor Information

HUI YANG, Harold and Inge Marcus Department of Industrial and Manufacturing Engineering, The Pennsylvania State University, University Park, PA 16802 USA.

PRAHALAD RAO, Department of Mechanical and Materials Engineering, University of Nebraska–Lincoln, Lincoln, NE 68588 USA.

TIMOTHY SIMPSON, Harold and Inge Marcus Department of Industrial and Manufacturing Engineering, The Pennsylvania State University, University Park, PA 16801 USA; Center for Innovative Materials Processing 3D (CIMP-3D), The Pennsylvania State University, University Park, PA 16801 USA.

YAN LU, National Institute of Standards and Technology, Gaithersburg, MD 20899 USA.

PAUL WITHERELL, National Institute of Standards and Technology, Gaithersburg, MD 20899 USA.

ABDALLA R. NASSAR, Center for Innovative Materials Processing 3D (CIMP-3D), The Pennsylvania State University, University Park, PA 16801 USA.

EDWARD REUTZEL, Center for Innovative Materials Processing 3D (CIMP-3D), The Pennsylvania State University, University Park, PA 16801 USA.

SOUNDAR KUMARA, Harold and Inge Marcus Department of Industrial and Manufacturing Engineering, The Pennsylvania State University, University Park, PA 16802 USA.

