Abstract:
A method including: obtaining a characteristic of a portion of a design layout; determining a characteristic of a mask 3D effect (M3D) of a patterning device including or forming the portion; and training, by a computer, a neural network using training data including a sample whose feature vector includes the characteristic of the portion and whose supervisory signal includes the characteristic of the M3D. Also disclosed is a method including: obtaining a characteristic of a portion of a design layout; obtaining a characteristic of a lithographic process that uses a patterning device including or forming the portion; determining a characteristic of a result of the lithographic process; and training, by a computer, a neural network using training data including a sample whose feature vector includes the characteristic of the portion and the characteristic of the lithographic process, and whose supervisory signal includes the characteristic of the result.
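A minimal sketch of the training step described above, assuming a simple fully connected regressor in PyTorch; the arrays, dimensions, and training loop are illustrative placeholders rather than the patent's implementation.

```python
# Sketch (PyTorch assumed): train a regressor whose feature vector is a characteristic
# of a design-layout portion and whose supervisory signal is an M3D characteristic.
# Shapes and data are hypothetical placeholders.
import torch
import torch.nn as nn

features = torch.randn(256, 16)   # characteristics of layout portions (hypothetical)
m3d_labels = torch.randn(256, 4)  # corresponding M3D characteristics (hypothetical)

model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(features), m3d_labels)  # supervisory signal = M3D characteristic
    loss.backward()
    optimizer.step()
```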
Abstract:
Methods according to the present invention provide computationally efficient techniques for designing gauge patterns for calibrating a model for use in a simulation process. More specifically, the present invention relates to methods of designing gauge patterns that achieve complete coverage of parameter variations with a minimum number of gauges and corresponding measurements in the calibration of a lithographic process utilized to image a target design having a plurality of features. According to some aspects, a method according to the invention includes transforming the model parametric space (based on CD sensitivities or Delta TCCs), then iteratively identifying the direction that is most orthogonal to the existing gauges' CD sensitivities in this new space, and determining the most sensitive line width/pitch combination, with optimal assist feature placement, that leads to the most sensitive CD changes along that direction in the model parametric space.
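A sketch of the greedy selection idea, assuming NumPy; the candidate CD-sensitivity vectors and the transformed parametric space are stand-ins, and line width/pitch/assist-feature optimization is not shown.

```python
# Sketch (NumPy assumed): from a set of candidate gauges, pick the one whose CD-sensitivity
# vector is most orthogonal to the sensitivities of gauges already selected.
import numpy as np

def most_orthogonal_candidate(existing, candidates):
    """existing: (k, p) selected sensitivity vectors; candidates: (n, p)."""
    if len(existing) == 0:
        return int(np.argmax(np.linalg.norm(candidates, axis=1)))
    # Orthonormal basis of the directions already covered by existing gauges.
    q, _ = np.linalg.qr(np.asarray(existing).T)
    residual = candidates - candidates @ q @ q.T   # remove the covered component
    return int(np.argmax(np.linalg.norm(residual, axis=1)))

rng = np.random.default_rng(0)
candidates = rng.normal(size=(200, 10))            # CD sensitivities in transformed space
selected = []
for _ in range(10):                                # greedily build a covering gauge set
    idx = most_orthogonal_candidate(np.array(selected).reshape(-1, 10), candidates)
    selected.append(candidates[idx])
```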
Abstract:
Systems and methods for tuning photolithographic processes are described. A model of a target scanner is maintained that defines the sensitivity of the target scanner with respect to a set of tunable parameters. A differential model represents deviations of the target scanner from a reference scanner. The target scanner may be tuned based on the settings of the reference scanner and the differential model. The performance of a family of related scanners may be characterized relative to the performance of a reference scanner. Differential models may include information such as parametric offsets and other differences that may be used to simulate the difference in imaging behavior.
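A toy sketch of the differential-model idea under stated assumptions: the target scanner is represented as the reference model plus per-parameter offsets, and one knob is adjusted so the target reproduces the reference response. The linear CD model and parameter names are hypothetical.

```python
# Sketch: reference model + parametric offsets (differential model) -> target model,
# then tune a target setting to match the reference behavior. Entirely illustrative.
from dataclasses import dataclass

@dataclass
class ScannerModel:
    dose_sensitivity: float
    focus_sensitivity: float

    def predicted_cd(self, dose, focus):
        # Toy linear critical-dimension response (not a real scanner model).
        return 50.0 + self.dose_sensitivity * dose + self.focus_sensitivity * focus

reference = ScannerModel(dose_sensitivity=1.00, focus_sensitivity=0.50)
differential = {"dose_sensitivity": 0.05, "focus_sensitivity": -0.02}  # target minus reference

target = ScannerModel(
    dose_sensitivity=reference.dose_sensitivity + differential["dose_sensitivity"],
    focus_sensitivity=reference.focus_sensitivity + differential["focus_sensitivity"],
)

# Tune the target's dose so it reproduces the reference CD at the same focus setting.
ref_cd = reference.predicted_cd(dose=30.0, focus=0.1)
tuned_dose = (ref_cd - 50.0 - target.focus_sensitivity * 0.1) / target.dose_sensitivity
```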
Abstract:
A system, method, and apparatus for determining three-dimensional (3D) information of a structure of a patterned substrate. The 3D information can be determined using one or more models configured to generate 3D information (e.g., depth information) using only a single image of a patterned substrate. In a method, the model is trained by obtaining a pair of stereo images of a structure of a patterned substrate. The model generates, using a first image of the pair of stereo images as input, disparity data between the first image and a second image, the disparity data being indicative of depth information associated with the first image. The disparity data is combined with the second image to generate a reconstructed image corresponding to the first image. Further, one or more model parameters are adjusted based on the disparity data, the reconstructed image, and the first image.
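A sketch of one self-supervised training step along these lines, assuming PyTorch: disparity is predicted from the first (left) image, the second (right) image is warped by that disparity to reconstruct the first, and the loss compares the reconstruction with the first image plus a disparity smoothness term. The network, image sizes, warp function, and loss weights are illustrative placeholders.

```python
# Sketch (PyTorch assumed): single-image disparity prediction trained with a stereo pair.
import torch
import torch.nn.functional as F

def warp_with_disparity(right, disparity):
    """Horizontally warp `right` (N,C,H,W) by per-pixel `disparity` (N,1,H,W)."""
    n, _, h, w = right.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1).clone()
    grid[..., 0] = grid[..., 0] - 2.0 * disparity[:, 0] / w   # shift sampling in x
    return F.grid_sample(right, grid, align_corners=True)

net = torch.nn.Conv2d(1, 1, kernel_size=3, padding=1)          # stand-in disparity network
opt = torch.optim.Adam(net.parameters(), lr=1e-4)

left, right = torch.rand(2, 1, 1, 64, 64)                      # toy stereo image pair
disparity = net(left)                                          # disparity from first image only
reconstructed = warp_with_disparity(right, disparity)          # combine disparity with second image
smoothness = disparity.diff(dim=-1).abs().mean()               # encourage smooth disparity
loss = F.l1_loss(reconstructed, left) + 0.1 * smoothness       # compare reconstruction to first image
loss.backward()
opt.step()
```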
Abstract:
A method for determining a mask pattern and a method for training a machine learning model. The method for determining a mask pattern includes obtaining a post optical proximity correction (post-OPC) pattern by executing a model with a target pattern to be printed on a substrate as an input pattern; determining, based on the post-OPC pattern, a simulated pattern that will be printed on the substrate; and determining the mask pattern based on a difference between the simulated pattern and the target pattern. The determining of the mask pattern includes modifying, based on the difference, the input pattern inputted to the model such that the difference is reduced, and executing the model using the modified input pattern to generate a modified post-OPC pattern from which the mask pattern can be derived.
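A sketch of the iterative loop described above; `opc_model` and `print_simulator` are hypothetical stand-ins (simple Gaussian blurs via NumPy/SciPy), not the patent's trained models. The point is that the *input* to the OPC model, not the model itself, is adjusted to shrink the difference.

```python
# Sketch: adjust the model input so the simulated print converges toward the target.
import numpy as np
from scipy.ndimage import gaussian_filter

def opc_model(input_pattern):
    return gaussian_filter(input_pattern, sigma=1.0)        # stand-in post-OPC generator

def print_simulator(post_opc_pattern):
    return gaussian_filter(post_opc_pattern, sigma=2.0)     # stand-in lithography model

target = np.zeros((64, 64)); target[24:40, 24:40] = 1.0     # target pattern on the substrate
input_pattern = target.copy()                               # start from the target itself

for _ in range(20):
    post_opc = opc_model(input_pattern)
    simulated = print_simulator(post_opc)
    difference = simulated - target
    input_pattern -= 0.5 * difference                       # modify the model *input*

mask_pattern = opc_model(input_pattern)                     # final post-OPC pattern
```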
Abstract:
A method for training a machine learning model to generate a predicted measured image, the method including obtaining (a) an input target image associated with a reference design pattern, and (b) a reference measured image associated with a specified design pattern printed on a substrate, wherein the input target image and the reference measured image are non-aligned images; and training, by a hardware computer system and using the input target image, the machine learning model to generate a predicted measured image.
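One way such training could tolerate non-aligned image pairs is sketched below, assuming PyTorch: the reconstruction loss is evaluated over a small window of integer shifts of the reference measured image and the minimum is kept. The shift-search loss and the stand-in network are illustrative choices, not the patent's method.

```python
# Sketch (PyTorch assumed): alignment-tolerant loss for training against a non-aligned reference.
import torch
import torch.nn.functional as F

def shift_tolerant_l1(pred, ref, max_shift=2):
    losses = []
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = torch.roll(ref, shifts=(dy, dx), dims=(-2, -1))
            losses.append(F.l1_loss(pred, shifted))
    return torch.stack(losses).min()

model = torch.nn.Conv2d(1, 1, kernel_size=5, padding=2)        # stand-in image-to-image model
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

target_image = torch.rand(1, 1, 64, 64)                        # rendered design target
measured_image = torch.roll(target_image, (1, 2), (-2, -1))    # deliberately misaligned "measurement"

pred = model(target_image)                                     # predicted measured image
loss = shift_tolerant_l1(pred, measured_image)
loss.backward()
opt.step()
```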
Abstract:
A method including: obtaining a thin-mask transmission function of a patterning device and an M3D model for a lithographic process, wherein the thin-mask transmission function is a continuous transmission mask (CTM) and the M3D model at least represents a portion of the M3D attributable to multiple edges of structures on the patterning device; determining an M3D mask transmission function of the patterning device by using the thin-mask transmission function and the M3D model; and determining an aerial image produced by the patterning device and the lithographic process by using the M3D mask transmission function.
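A toy sketch of that flow, assuming NumPy/SciPy: an edge-based correction stands in for the M3D model, and a coherent low-pass imaging step stands in for the projection optics. Kernel shapes, the pupil cutoff, and the correction weight are all hypothetical.

```python
# Sketch: thin-mask CTM -> M3D-corrected transmission -> aerial image (coherent imaging).
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

ctm = np.zeros((128, 128)); ctm[48:80, 48:80] = 1.0          # thin-mask transmission (CTM)

# Toy "M3D model": perturb the transmission near feature edges (multiple-edge contribution).
edge_response = np.hypot(sobel(ctm, axis=0), sobel(ctm, axis=1))
m3d_transmission = ctm - 0.1 * gaussian_filter(edge_response, sigma=1.0)

# Coherent imaging: low-pass the mask spectrum with a circular pupil, take intensity.
spectrum = np.fft.fftshift(np.fft.fft2(m3d_transmission))
fy, fx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(128)),
                     np.fft.fftshift(np.fft.fftfreq(128)), indexing="ij")
pupil = (np.hypot(fx, fy) < 0.1).astype(float)
aerial_image = np.abs(np.fft.ifft2(np.fft.ifftshift(spectrum * pupil))) ** 2
```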
Abstract:
A three-dimensional mask model that provides a more realistic approximation of the three-dimensional effects of a photolithography mask with sub-wavelength features than a thin-mask model. In one embodiment, the three-dimensional mask model includes a set of filtering kernels in the spatial domain that are configured to be convolved with thin-mask transmission functions to produce a near-field image. In another embodiment, the three-dimensional mask model includes a set of correction factors in the frequency domain that are configured to be multiplied by the Fourier transform of thin-mask transmission functions to produce a near-field image.
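The two formulations in this abstract are two views of the same filtering operation, sketched below with NumPy/SciPy: a spatial-domain kernel convolved with the thin-mask transmission, and the corresponding multiplicative correction factor applied to its Fourier transform. The Gaussian kernel is a toy placeholder, not a calibrated 3D mask model, and the frequency-domain version is circular, so it matches the spatial version away from the borders.

```python
# Sketch: spatial-domain filtering kernel vs. frequency-domain correction factor.
import numpy as np
from scipy.signal import fftconvolve

thin_mask = np.zeros((64, 64)); thin_mask[24:40, 24:40] = 1.0

# Spatial domain: near field as a convolution of the thin-mask transmission with a kernel.
x = np.arange(-32, 32)
kernel = np.outer(np.exp(-x**2 / 8.0), np.exp(-x**2 / 8.0)); kernel /= kernel.sum()
near_field_spatial = fftconvolve(thin_mask, kernel, mode="same")

# Frequency domain: the analogous multiplicative correction of the mask spectrum.
correction = np.fft.fft2(np.fft.ifftshift(kernel), s=thin_mask.shape)
near_field_freq = np.real(np.fft.ifft2(np.fft.fft2(thin_mask) * correction))
```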
Abstract:
Methods of training machine learning models related to a patterning process, including a method for training a machine learning model configured to predict a mask pattern. The method includes obtaining (i) a process model of a patterning process configured to predict a pattern on a substrate, wherein the process model comprises one or more trained machine learning models, and (ii) a target pattern, and training the machine learning model configured to predict a mask pattern based on the process model and a cost function that determines a difference between the predicted pattern and the target pattern.
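A sketch of training through a frozen process model, assuming PyTorch: the mask-prediction network is optimized against a cost defined on the difference between the process model's predicted substrate pattern and the target. Both networks here are stand-in convolutions, not the patent's trained models.

```python
# Sketch (PyTorch assumed): train a mask-prediction model through a frozen process model.
import torch
import torch.nn as nn

process_model = nn.Conv2d(1, 1, kernel_size=7, padding=3)    # stand-in trained process model
for p in process_model.parameters():
    p.requires_grad_(False)                                  # keep the process model fixed

mask_net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(8, 1, 3, padding=1))      # model that predicts a mask pattern
opt = torch.optim.Adam(mask_net.parameters(), lr=1e-3)

target = torch.zeros(1, 1, 64, 64); target[..., 24:40, 24:40] = 1.0   # target pattern

for _ in range(50):
    opt.zero_grad()
    mask = mask_net(target)                                  # predicted mask pattern
    predicted = process_model(mask)                          # predicted substrate pattern
    loss = nn.functional.mse_loss(predicted, target)         # cost: predicted vs. target
    loss.backward()
    opt.step()
```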
Abstract:
A method for determining a mask pattern and a method for training a machine learning model. The method for generating data for a mask pattern associated with a patterning process includes obtaining (i) a first mask image (e.g., a CTM) associated with a design pattern, (ii) a contour (e.g., a resist contour) based on the first mask image, (iii) a reference contour (e.g., an ideal resist contour) based on the design pattern, and (iv) a contour difference between the contour and the reference contour. The contour difference and the first mask image are input to a model to generate mask image modification data. Based on the first mask image and the mask image modification data, a second mask image is generated for determining a mask pattern to be employed in the patterning process.
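A sketch of the data flow only, assuming PyTorch: the first mask image and the contour difference are concatenated, fed to a stand-in two-channel model that outputs modification data, and the modification is added to the first mask image to form the second mask image. Contour extraction and the lithography simulation are not shown, and the model is hypothetical.

```python
# Sketch: (first mask image, contour difference) -> modification data -> second mask image.
import torch
import torch.nn as nn

modification_model = nn.Conv2d(2, 1, kernel_size=3, padding=1)  # stand-in modification model

first_mask_image = torch.rand(1, 1, 64, 64)        # e.g., a continuous transmission mask (CTM)
contour_difference = torch.rand(1, 1, 64, 64)      # simulated contour minus reference contour

model_input = torch.cat([first_mask_image, contour_difference], dim=1)
modification_data = modification_model(model_input)
second_mask_image = first_mask_image + modification_data   # basis for the final mask pattern
```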