Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing an optical proximity correction (OPC) method for extreme ultraviolet (EUV) based on deep learning, which offers a new perspective for EUV lithography, in particular for lithography with small critical dimensions. Compared with other OPC methods, the method provided by the invention makes the following main contributions: first, the modeling efficiency is greatly improved; second, very-large-scale patterns can be generated at little computational cost (both time and required computer memory); third, for critical dimensions of 3 nm or less, the forward module outputs the corresponding near- and far-field distributions quickly and accurately, and the proposed inversion module quickly generates a corrected mask that improves the imaging quality on the wafer.
The invention adopts the following technical scheme:
The extreme ultraviolet (EUV) structure consists of a mask stack and a multilayer Bragg reflector stack, with a plane wave at 6-degree incidence selected as the EUV illumination. The mask stack includes a mask pattern over the multilayer Bragg reflector stack. The multilayer Bragg reflector consists of 40 Si-Mo bilayers and efficiently reflects the energy of a 13.5 nm plane wave at 6-degree incidence. Regions of the mask covered by the absorber absorb most of the EUV light, while uncovered regions reflect most of it into the optical projector, which projects the layout pattern of the mask onto the wafer. After the photoresist is developed, the layout pattern is printed on the wafer.
An optical proximity correction method for extreme ultraviolet based on deep learning specifically comprises the following steps:
The forward-module training samples were generated using the Wave knowledge EM software developed by Wave Computing Technologies, Inc., which employs the spectral-element spectral-integration (SESI) method suitable for this problem. The forward module has 320 training samples in total, divided into eight groups of 40 samples each; an additional 40 test samples were used for validation. The training masks are 128 nm × 128 nm, discretized into 256 × 256 pixels, each pixel being 0.5 nm × 0.5 nm.
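The discretization above (a 128 nm × 128 nm mask on a 256 × 256 grid of 0.5 nm pixels) can be sketched as a simple rasterizer for a rectangular absorber opening. This helper is hypothetical; the disclosure does not specify how masks are rasterized.

```python
import numpy as np

def rasterize_rect(x0_nm, y0_nm, w_nm, h_nm, domain_nm=128.0, n_pix=256):
    """Rasterize an axis-aligned rectangle into a binary mask image.

    Pixel size is domain_nm / n_pix = 0.5 nm, matching the 256 x 256
    discretization of the 128 nm x 128 nm training masks.
    """
    px = domain_nm / n_pix                       # 0.5 nm per pixel
    mask = np.zeros((n_pix, n_pix), dtype=np.uint8)
    i0 = int(round(y0_nm / px))
    i1 = int(round((y0_nm + h_nm) / px))
    j0 = int(round(x0_nm / px))
    j1 = int(round((x0_nm + w_nm) / px))
    mask[i0:i1, j0:j1] = 1
    return mask

# a 64 nm x 64 nm opening centered in the 128 nm domain
m = rasterize_rect(32, 32, 64, 64)
```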
The forward module is designed as follows: its input is the mask pattern, i.e., a binary image, and its output is the near and far field on a plane above the mask stack. The near field on the plane 1 nm above the mask stack is set as the output of the forward module. The two cascaded U-Nets are each trained for 2000 iterations, with learning rates of 1×10⁻⁴ and 5×10⁻⁵, respectively. Mean squared error (MSE) is used as the loss function for both U-Nets.
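A minimal PyTorch sketch of this training setup is given below, assuming Adam as the optimizer (the disclosure does not name one) and a toy stand-in for the U-Net architecture; only the cascade of two networks, the two learning rates, and the MSE loss follow the text.

```python
import torch
import torch.nn as nn

class TinyUNetStandIn(nn.Module):
    """Placeholder for the actual U-Net architecture in the disclosure."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 3, padding=1))

    def forward(self, x):
        return self.body(x)

net1, net2 = TinyUNetStandIn(), TinyUNetStandIn()
opt1 = torch.optim.Adam(net1.parameters(), lr=1e-4)   # first U-Net: 1e-4
opt2 = torch.optim.Adam(net2.parameters(), lr=5e-5)   # second U-Net: 5e-5
loss_fn = nn.MSELoss()

mask = torch.rand(4, 1, 256, 256)        # batch of binary mask images
near_field = torch.rand(4, 1, 256, 256)  # target near-field distribution

for _ in range(2):                       # 2000 iterations in the disclosure
    pred = net2(net1(mask))              # two U-Nets connected in series
    loss = loss_fn(pred, near_field)
    opt1.zero_grad(); opt2.zero_grad()
    loss.backward()
    opt1.step(); opt2.step()
```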
To generate the inversion-module training samples, the key step before training is to construct the required training data set. An OPC model comprising two steps is established to generate these samples: a side-length threshold is set, each side length of the sample is compared with the threshold, and the sample is transformed by enlarging the corresponding region and adding protrusions or recesses along the boundary. Samples with small mask error are then selected to build the training data set of the inversion module: 400 samples in total were selected for training, and 50 additional test samples were used for validation. The inversion-module training masks are also 128 nm × 128 nm, discretized into 256 × 256 pixels.
The inversion module is designed as follows: its input is the ideal image desired on the wafer, and its corresponding output is the corrected mask. The inversion module is also built on U-Net and converts an input single-channel binary image into an output single-channel binary image. Four test-set samples that never appeared in the training data set were used to evaluate its performance; their masks are 128 nm × 128 nm, and the masks output by the inversion module are fed into the forward module to compute the corresponding field distributions.
Establishing a model combining a forward modeling module and an inversion module, which specifically comprises the following steps:
the inversion module takes the desired image on the wafer as input and outputs the corrected mask; the resulting mask is then used as the input of the trained forward module to obtain the corresponding image on the wafer.
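The combined flow can be sketched as a simple function composition. The two modules below are placeholders (thresholding and identity) standing in for the trained networks; only the data flow of the combined model is illustrated.

```python
import numpy as np

def inversion_module(target_image):
    """Placeholder for the trained inversion U-Net: target -> mask."""
    return (target_image > 0.5).astype(np.uint8)

def forward_module(mask):
    """Placeholder for the trained forward U-Nets: mask -> wafer image."""
    return mask.astype(np.float64)

# desired image on the wafer: a centered square feature
target = np.zeros((256, 256))
target[64:192, 64:192] = 1.0

corrected_mask = inversion_module(target)      # step 1: target -> corrected mask
wafer_image = forward_module(corrected_mask)   # step 2: mask -> image on wafer
```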
As can be seen from the above description of the present invention, compared with the prior art, the present invention has the following advantages:
First, the forward module was tested on samples of different types and scales; comparison with full-wave simulation results shows that it accurately maps mask patterns of different critical dimensions, and of different types, to the corresponding near fields. Meanwhile, compared with full-wave simulation (e.g., SESI), the forward module greatly reduces the required CPU time and memory, improving computational efficiency.
Then, samples of different scales and types were evaluated with the designed inversion module. The test results on samples of different critical dimensions show that, unlike traditional iterative methods, the proposed inversion module directly outputs the corrected mask with high computational efficiency, and the corrected mask enhances imaging uniformity, particularly for samples with critical dimensions below 3 nm.
The method provided by the invention, including the forward modeling module and the inversion module, can be used as a reliable and effective OPC tool, especially in an EUV system with a small critical dimension.
The above and other objects, advantages and features of the present invention will become more apparent to those skilled in the art from the following detailed description of specific embodiments thereof, taken in conjunction with the accompanying drawings.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
Referring to fig. 1, the extreme ultraviolet (EUV) structure of the present invention consists of a mask stack and a multilayer Bragg reflector stack, with a plane wave at 6-degree incidence selected as the EUV illumination. The mask stack includes a mask pattern over the multilayer Bragg reflector stack. The multilayer Bragg reflector consists of 40 Si-Mo bilayers and efficiently reflects the energy of a 13.5 nm plane wave at 6-degree incidence. Regions of the mask covered by the absorber absorb most of the EUV light, while uncovered regions reflect most of it into the optical projector, which projects the layout pattern of the mask onto the wafer. After the photoresist is developed, the layout pattern is printed on the wafer.
Referring to fig. 2, the optical proximity correction method for extreme ultraviolet based on deep learning of the present invention specifically includes:
s101, establishing a deep learning method-based model of the OPC method for EUV.
First, a forward module is designed. The input to the forward module is the mask pattern, i.e., a binary image, and the output is the near and far field on a plane above the mask stack. The near field on the plane 1 nm above the mask stack is set as the output of the forward module. The two cascaded U-Nets are each trained for 2000 iterations, with learning rates of 1×10⁻⁴ and 5×10⁻⁵, respectively. MSE is used as the loss function for both U-Nets.
Second, an inversion module is designed. The input to the inversion module is the ideal image on the wafer, and the corresponding output is the corrected mask. The inversion module is also built on U-Net; its structure is shown in FIG. 4, where each gray vertical bar represents a multi-channel image, with its height and width indicating the image size and the number of channels, respectively, and the operations are indicated by arrows of different colors and directions. On this basis, the input single-channel binary image is converted into an output single-channel binary image. Four test-set samples that never appeared in the training data set were used to evaluate the performance of the inversion module. The masks of the four samples are 128 nm × 128 nm, and the masks output by the inversion module are fed into the forward module to compute the corresponding field distributions.
Finally, establishing a model combining the forward modeling module and the inversion module, which specifically comprises the following steps:
the inversion module inputs are the desired imaging on the wafer and the corresponding outputs are the corrected mask. And then, the obtained mask is used as the input of the forward modeling module after training is finished, and the corresponding image on the wafer can be obtained.
And S102, establishing a data set of the forward modeling module and the inversion module.
The forward-module training samples were generated using the Wave knowledge EM software developed by Wave Computing Technologies, Inc. The forward module has 320 training samples in total, divided into eight groups of 40 samples each; 40 test samples were used for validation. The training masks are 128 nm × 128 nm, discretized into 256 × 256 pixels, each pixel being 0.5 nm × 0.5 nm.
To generate the inversion-module training samples, the key step before training is to construct the required training data set. An OPC model comprising two steps is established to generate these samples. As shown in FIGS. 3(a)-3(c), taking a rectangle as an example, the first step enlarges the domain of the original rectangle by m1 and m2 in the two directions, respectively. Then, protrusions or recesses of amplitudes h1, h2, h3, and h4 are formed at the midpoints of the four sides. If the width of a side is less than the threshold w, a protrusion is formed on the corresponding adjacent side; w is the threshold that determines whether a side becomes concave or convex, and the invention sets w to 3 nm for rectangles. As shown in FIG. 3(d), if L3 is less than w, the region is enlarged by m3 and m4 in the two directions, and protrusions h5 and h7 then appear on the corresponding adjacent sides, as shown in FIG. 3(f). Conversely, if the length of a side is greater than w, a recess is formed on the corresponding side, as shown in FIG. 3(c). As shown in FIG. 3(g), if L5 is greater than w, the region is enlarged by m5 and m6 in the two directions, and recesses h9 and h11 then form on the corresponding adjacent sides, as shown in FIG. 3(i). Finally, samples with small mask error are selected to build the training data set of the inversion module: 400 samples in total were selected for training, and 50 additional test samples were used for validation. The inversion-module training masks are also 128 nm × 128 nm, discretized into 256 × 256 pixels.
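The two-step transformation can be sketched on the 0.5 nm/pixel grid (so the 3 nm threshold w is 6 pixels). The parameter names m and h follow the text, but the concrete values and the handling of only the top edge are illustrative simplifications, not the disclosure's exact procedure.

```python
import numpy as np

PX_NM = 0.5                         # pixel size in nm
W_THRESH = int(3 / PX_NM)           # w threshold of 3 nm -> 6 pixels

def perturb_rect(mask, m=4, h=3):
    """Step 1: enlarge the rectangle by m pixels in each direction.
    Step 2: add a protrusion (side shorter than w) or a recess (side
    longer than w) of amplitude h pixels at the midpoint of the top
    edge; the other three edges would be treated analogously."""
    ys, xs = np.nonzero(mask)
    y0, y1 = ys.min() - m, ys.max() + m
    x0, x1 = xs.min() - m, xs.max() + m
    out = np.zeros_like(mask)
    out[y0:y1 + 1, x0:x1 + 1] = 1           # step 1: enlarged rectangle
    cx = (x0 + x1) // 2                     # midpoint of the top edge
    if (x1 - x0) < W_THRESH:                # shorter than w -> protrusion
        out[y0 - h:y0, cx - h:cx + h] = 1
    else:                                   # longer than w -> recess
        out[y0:y0 + h, cx - h:cx + h] = 0
    return out

rect = np.zeros((256, 256), dtype=np.uint8)
rect[100:160, 100:160] = 1                  # a 30 nm x 30 nm rectangle
sample = perturb_rect(rect)                 # long sides -> recess forms
```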
S103, training and verifying the inversion module by using the established data set of the inversion module; and taking the mask output by the inversion module as the input of the trained forward module to train and test the forward module.
All calculations of the invention were performed on a workstation with an Intel i9-10940X 3.30 GHz CPU, 256 GB RAM, and an NVIDIA GeForce RTX 3090 GPU.
As noted above, the forward module has 320 training samples in total, divided into eight groups of 40 samples each; 40 test samples were used for validation. The training masks are 128 nm × 128 nm, discretized into 256 × 256 pixels, each pixel being 0.5 nm × 0.5 nm.
For the inversion module, a total of 400 samples were selected for training, and 50 additional test samples were tested for validation. Fig. 5 records the convergence of the loss function during training. The training sample of the inversion module is also 128nm by 128nm, discretized into 256 by 256 pixels.
Specifically, for the inversion module, four samples in the test set that never appeared in the training data set were used to evaluate its performance. The masks of the four samples are 128 nm × 128 nm, and the masks output by the inversion module are fed into the forward module to compute the corresponding field distributions. To describe the deviation between the output image and the original desired image (the target), the mask error is defined as a misfit between m_p, the matrix of the predicted binary image, and m_τ, the matrix of the desired binary image.
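One plausible concrete form of this mask error is the fraction of pixels on which the predicted binary image m_p differs from the desired binary image m_τ; the disclosure names the two operands, but the exact norm used is an assumption here.

```python
import numpy as np

def misfit(m_p, m_tau):
    """Fraction of pixels where the predicted binary image m_p
    disagrees with the desired binary image m_tau (assumed metric)."""
    return float(np.mean(m_p.astype(int) != m_tau.astype(int)))

m_tau = np.zeros((4, 4), dtype=int)   # desired binary image
m_p = m_tau.copy()
m_p[0, 0] = 1                         # one differing pixel out of 16
err = misfit(m_p, m_tau)              # 1/16 = 0.0625
```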
As critical dimensions shrink, the imaging performance of the lithography system faces significant challenges. Tests #1-4 (corresponding to the first, second, third, and fourth rows of FIG. 6, respectively) have critical dimensions of 4 nm, 7 nm, 4 nm, and 2.5 nm, and corresponding electrical sizes of 0.3λ, 0.52λ, 0.3λ, and 0.19λ, respectively.
The targets of Tests #1-4 are shown in FIG. 6, where Tests #1-3 are based on samples (i), (e), and (h) in FIG. 3, respectively, and Test #4 is quite different from the training samples. The mask errors, as defined above, between the targets of Tests #1-4 and the images on the wafer are shown in Table 1, where Misfit1 is the error between the imaging of the forward module and the corresponding target, and Misfit2 is the error between the imaging obtained from SESI and the corresponding target.
TABLE 1. Errors between the imaging and the corresponding targets

            Test #1    Test #2    Test #3    Test #4
Misfit1     0.0632     0.0721     0.0757     0.0948
Misfit2     0.0698     0.0682     0.0762     0.0872
Misfit3     0.1342     0.1698     0.1593     0.1839
For comparison, the original target itself was used as the mask, and the corresponding image obtained from SESI is shown in FIG. 6; the mask error between this image and the target is listed in Table 1 as Misfit3. It can be seen that the mask provided by the proposed inversion module achieves smaller errors than using the original target as the mask. The proposed inversion module was then examined with a very large mask: as shown in FIG. 7, the target size is 6400 nm × 6400 nm, and the predicted output mask is also shown in FIG. 7. The CPU time and required computational memory are recorded in Table 2. Unlike traditional iterative methods, the inversion module directly outputs the corrected mask and, although extra GPU computation is needed, achieves high computational efficiency on different targets.
Table 2. Run time and required memory of the inversion module

            Size               Run time    Memory
Test #1     128 nm × 128 nm    32 ms       0.42 GB
Test #2     128 nm × 128 nm    32 ms       0.42 GB
Test #3     128 nm × 128 nm    32 ms       0.42 GB
Test #4     128 nm × 128 nm    32 ms       0.42 GB
Test #5     6400 nm × 6400 nm  51 s        9.8 GB
As shown in FIGS. 6-7 and Tables 1-2, Tests #1-4 verify that the corrected mask enhances imaging uniformity, and Test #5 further verifies the computational efficiency on a very large mask. The inversion module of the proposed deep-learning-based OPC method for EUV can therefore serve as an effective OPC tool, especially for small critical dimensions.
The test results show that the proposed deep-learning-based OPC method for EUV improves the imaging performance of EUV systems, especially for critical dimensions below 3 nm. First, the forward module was tested on samples of different types and scales; comparison with full-wave simulation results shows that it accurately maps mask patterns of different critical dimensions, and of different types, to the corresponding near fields. At the same time, compared with full-wave simulation (e.g., SESI), the forward module greatly improves computational efficiency in both required CPU time and memory. The constructed inversion module was evaluated on samples of different scales and types; unlike traditional iterative methods, it directly outputs the corrected mask with high computational efficiency, and the corrected mask enhances imaging uniformity, especially for critical dimensions below 3 nm. The proposed method, comprising the forward module and the inversion module, can serve as a reliable and effective OPC tool, especially in EUV systems with small critical dimensions.
The above embodiments are provided only to further illustrate the deep-learning-based optical proximity correction method for extreme ultraviolet. The present invention is not limited to these embodiments; any simple modification, equivalent change, or refinement made to them according to the technical spirit of the invention falls within the scope of the technical solution of the invention.