Abstract:
Systems and methods relating to artificial neural networks are provided. The systems and methods obtain a teacher network that includes artificial neural layers configured to automatically identify one or more objects in an image examined by the artificial neural layers, receive a set of task images at the teacher network, examine the set of task images with the teacher network, identify a subset of the artificial neural layers that are utilized during examination of the set of task images with the teacher network, and define a student network based on the subset of the artificial neural layers. The student network is configured to automatically identify one or more objects in an image examined by the subset of the artificial neural layers.
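The layer-selection step described above can be sketched in a few lines. The following is a minimal illustration, not the patented method: the activation-magnitude threshold, the ReLU layer shapes, and the `layer_usage` helper are all hypothetical assumptions introduced for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer_usage(teacher_weights, task_images, threshold=1e-3):
    """Run task images through the teacher and flag layers whose mean
    activation magnitude exceeds a threshold (a simple usage proxy)."""
    used = []
    x = task_images
    for i, w in enumerate(teacher_weights):
        x = np.maximum(x @ w, 0.0)          # ReLU layer
        if np.abs(x).mean() > threshold:
            used.append(i)
    return used

# Hypothetical 4-layer teacher; layer 2's weights are zeroed, so it (and
# everything downstream of its all-zero output) contributes nothing.
weights = [rng.normal(size=(8, 8)) for _ in range(4)]
weights[2] = np.zeros((8, 8))
images = rng.normal(size=(16, 8))

subset = layer_usage(weights, images)
student = [weights[i] for i in subset]   # student defined from the used subset
```

Here the student keeps only the layers that actually fired on the task images, which is the essence of pruning a teacher into a smaller task-specific student.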
Abstract:
A system that generates training images for neural networks includes one or more processors configured to receive input representing one or more selected areas in an image mask. The one or more processors are configured to form a labeled masked image by combining the image mask with an unlabeled image of equipment. The one or more processors also are configured to train an artificial neural network using the labeled masked image to one or more of automatically identify equipment damage appearing in one or more actual images of equipment or generate one or more training images for training another artificial neural network to automatically identify the equipment damage appearing in the one or more actual images of equipment.
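The mask-combination step can be sketched as follows. This is an illustrative assumption about the data layout (a two-channel intensity-plus-label array and a `make_labeled_masked_image` helper), not the system's actual representation.

```python
import numpy as np

def make_labeled_masked_image(image, mask, label=1):
    """Combine a binary mask (the operator-selected damage areas) with an
    unlabeled image: pixels under the mask receive the damage label,
    everything else stays label 0."""
    labels = np.where(mask > 0, label, 0)
    return np.stack([image, labels], axis=-1)  # (H, W, 2): intensity + label

img = np.full((4, 4), 0.5)          # unlabeled equipment image
mask = np.zeros((4, 4), dtype=int)
mask[1:3, 1:3] = 1                  # selected area in the image mask

labeled = make_labeled_masked_image(img, mask)
```

The resulting labeled masked image pairs each pixel with a damage/no-damage label, which is what makes it usable as supervised training data.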
Abstract:
A method for locating probes within a gas turbine engine may generally include positioning a plurality of location transmitters relative to the engine and inserting a probe through an access port of the engine, wherein the probe includes a probe tip and a location signal receiver configured to receive location-related signals transmitted from the location transmitters. The method may also include determining a current location of the probe tip within the engine based at least in part on the location-related signals and identifying a virtual location of the probe tip within a three-dimensional model of the engine corresponding to the current location of the probe tip within the engine. Moreover, the method may include providing for display the three-dimensional model of the engine, wherein the virtual location of the probe tip is displayed as a visual indicator within the three-dimensional model.
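Determining the probe-tip location from the location-related signals amounts to multilateration from known transmitter positions. The sketch below assumes the signals yield distances (e.g. from time of flight); the `locate_tip` helper and the linearized least-squares formulation are illustrative, not the method claimed.

```python
import numpy as np

def locate_tip(transmitters, distances):
    """Estimate the probe-tip position from distances to known transmitter
    positions by linearizing the sphere equations (classic trilateration,
    solved by least squares)."""
    transmitters = np.asarray(transmitters, float)
    d = np.asarray(distances, float)
    p0, r0 = transmitters[0], d[0]
    # Subtracting the first sphere equation from the rest removes the
    # quadratic term, leaving a linear system A x = b.
    A = 2.0 * (transmitters[1:] - p0)
    b = (r0**2 - d[1:]**2
         + np.sum(transmitters[1:]**2, axis=1) - np.sum(p0**2))
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol

# Hypothetical transmitters around the engine and a known true tip position.
tx = [[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]]
true_tip = np.array([1.0, 2.0, 3.0])
dist = np.linalg.norm(np.asarray(tx, float) - true_tip, axis=1)

est = locate_tip(tx, dist)
```

The recovered coordinates can then be mapped to the corresponding virtual location in the three-dimensional engine model for display.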
Abstract:
A monitoring system for monitoring a plurality of components is provided. The monitoring system includes a plurality of client systems. The plurality of client systems is configured to generate a plurality of component status reports. The plurality of component status reports is associated with the plurality of components. The monitoring system also includes a component wear monitoring (CWM) computer device configured to receive the plurality of component status reports from the plurality of client systems, generate component status information based on the plurality of component status reports, aggregate the component status information to identify a plurality of images associated with a first component, and compare the plurality of images associated with the first component. The plurality of images represents the first component at different points in time. The CWM computer device is also configured to determine a state of the first component based on the comparison.
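The comparison step can be sketched with a simple time-series image diff. The mean-absolute-change metric, the threshold value, and the `wear_state` helper are assumptions made for illustration; the CWM device's actual comparison is not specified in the abstract.

```python
import numpy as np

def wear_state(images, wear_threshold=0.1):
    """Compare images of one component taken at different points in time
    and classify its state from how much the latest image deviates
    from the earliest baseline."""
    baseline = np.asarray(images[0], float)
    latest = np.asarray(images[-1], float)
    change = np.abs(latest - baseline).mean()
    return "worn" if change > wear_threshold else "healthy"

t0 = np.ones((8, 8))                 # component image at time 0
t1 = t0.copy()
t1[:4, :4] = 0.0                     # simulated surface loss over time
```

Aggregating reports per component, lining up its images chronologically, and thresholding the change gives a first-order wear determination.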
Abstract:
A system includes one or more processors and a memory that stores a generative adversarial network (GAN). The one or more processors are configured to receive a low resolution point cloud comprising a set of three-dimensional (3D) data points that represents an object. A generator of the GAN is configured to generate a first set of generated data points based at least in part on one or more characteristics of the data points in the low resolution point cloud, and to interpolate the generated data points into the low resolution point cloud to produce a super-resolved point cloud that represents the object and has a greater resolution than the low resolution point cloud. The one or more processors are further configured to analyze the super-resolved point cloud for detecting one or more of an identity of the object or damage to the object.
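The generator's interpolate-into-the-cloud step can be illustrated without the adversarial training itself. The midpoint-plus-noise scheme below is a stand-in for the learned generator, and `upsample_point_cloud` is a hypothetical helper; a trained GAN would place the new points far more plausibly.

```python
import numpy as np

rng = np.random.default_rng(1)

def upsample_point_cloud(points, factor=2, noise=0.01):
    """Stand-in for the GAN generator: synthesize new 3D points near
    midpoints of randomly paired input points, then merge them with the
    low-resolution cloud to form a denser (super-resolved) cloud."""
    pts = np.asarray(points, float)
    new_sets = []
    for _ in range(factor - 1):
        partner = pts[rng.permutation(len(pts))]
        midpoints = (pts + partner) / 2.0
        new_sets.append(midpoints + rng.normal(scale=noise, size=pts.shape))
    return np.vstack([pts] + new_sets)

low_res = rng.normal(size=(100, 3))    # low resolution point cloud
super_res = upsample_point_cloud(low_res, factor=3)
```

The super-resolved cloud contains the original points plus generated ones, giving downstream identity or damage analysis more geometry to work with.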
Abstract:
A generative adversarial network (GAN) system includes a generator neural sub-network configured to receive one or more images depicting one or more objects. The generator neural sub-network also is configured to generate a foreground image and a background image based on the one or more images that are received, and to combine the foreground image with the background image to form a consolidated image. The GAN system also includes a discriminator neural sub-network configured to examine the consolidated image and determine whether the consolidated image depicts at least one of the objects. The generator neural sub-network is configured to one or more of provide the consolidated image or generate an additional image as a training image used to train another neural network to automatically identify the one or more objects in one or more other images.
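The foreground/background combination is standard alpha compositing, sketched below. The per-pixel alpha matte and the `composite` helper are illustrative assumptions; in the GAN the generator would produce the foreground, background, and matte jointly.

```python
import numpy as np

def composite(foreground, background, alpha):
    """Combine a generated foreground image and a generated background
    image into one consolidated image using a per-pixel alpha matte."""
    return alpha * foreground + (1.0 - alpha) * background

fg = np.ones((4, 4))                 # generated foreground (object region)
bg = np.zeros((4, 4))                # generated background
alpha = np.zeros((4, 4))
alpha[1:3, 1:3] = 1.0                # where the object sits

img = composite(fg, bg, alpha)
```

The discriminator then judges the consolidated image as a whole, which pressures the generator to make the seam between the two layers realistic.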
Abstract:
A generative adversarial network (GAN) system includes a generator sub-network configured to examine images of an object moving relative to a viewer of the object. The generator sub-network also is configured to generate one or more distribution-based images based on the images that were examined. The system also includes a discriminator sub-network configured to examine the one or more distribution-based images to determine whether the one or more distribution-based images accurately represent the object. A predicted optical flow of the object is represented by relative movement of the object as shown in the one or more distribution-based images.
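The "predicted optical flow as relative movement" idea can be shown with a crude proxy: tracking the shift of an object's intensity centroid between frames. The `predicted_flow` helper and the binary-blob frames are hypothetical; a real flow estimate would be dense and per-pixel.

```python
import numpy as np

def predicted_flow(frame_a, frame_b):
    """Estimate object motion between two frames as the shift of the
    object's intensity centroid (a crude optical-flow proxy)."""
    def centroid(f):
        ys, xs = np.nonzero(f)
        return np.array([ys.mean(), xs.mean()])
    return centroid(frame_b) - centroid(frame_a)

a = np.zeros((10, 10)); a[2:4, 2:4] = 1.0   # object in first frame
b = np.zeros((10, 10)); b[5:7, 6:8] = 1.0   # object moved down and right

flow = predicted_flow(a, b)
```

In the GAN setting, the displacement implied by consecutive distribution-based images plays this role: the object's apparent motion across generated frames encodes the predicted flow.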
Abstract:
The present disclosure is directed to a computer-implemented method of sensor planning for acquiring samples via an apparatus including one or more sensors. The computer-implemented method includes defining, by one or more computing devices, an area of interest; identifying, by the one or more computing devices, one or more sensing parameters for the one or more sensors; determining, by the one or more computing devices, a sampling combination for acquiring a plurality of samples by the one or more sensors based at least in part on the one or more sensing parameters; and providing, by the one or more computing devices, one or more command control signals to the apparatus including the one or more sensors to acquire the plurality of samples of the area of interest using the one or more sensors based at least in part on the sampling combination.
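One way to determine a sampling combination from sensing parameters is greedy set cover over the area of interest, sketched below. The grid cells, the footprint-radius sensing parameter, and the `plan_sampling`/`footprint` helpers are all assumptions for illustration; the disclosure does not specify the optimization used.

```python
from itertools import product

def footprint(candidate, sensor_footprints):
    """Cells covered by taking a sample with the given sensor at the
    given grid position; the radius is that sensor's sensing parameter."""
    sensor, (x, y) = candidate
    r = sensor_footprints[sensor]
    return {(x + dx, y + dy)
            for dx in range(-r, r + 1) for dy in range(-r, r + 1)}

def plan_sampling(area_cells, sensor_footprints):
    """Greedy sketch: repeatedly pick the (sensor, position) sample that
    covers the most still-uncovered cells of the area of interest."""
    uncovered = set(area_cells)
    candidates = list(product(sensor_footprints.keys(), area_cells))
    plan = []
    while uncovered:
        best = max(candidates,
                   key=lambda c: len(footprint(c, sensor_footprints) & uncovered))
        gained = footprint(best, sensor_footprints) & uncovered
        if not gained:
            break
        plan.append(best)
        uncovered -= gained
    return plan

area = [(x, y) for x in range(4) for y in range(4)]
plan = plan_sampling(area, {"wide": 2, "narrow": 0})
```

The resulting plan is the sampling combination; each entry would then be translated into a command control signal for the apparatus.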
Abstract:
A method for performing a visual inspection of a gas turbine engine may generally include inserting a plurality of optical probes through a plurality of access ports of the gas turbine engine. The access ports may be spaced apart axially along a longitudinal axis of the gas turbine engine such that the optical probes provide internal views of the gas turbine engine from a plurality of different axial locations along the gas turbine engine. The method may also include coupling the optical probes to a computing device, rotating the gas turbine engine about the longitudinal axis as the optical probes are used to simultaneously obtain images of an interior of the gas turbine engine at the different axial locations, and receiving, with the computing device, image data associated with the images obtained by each of the optical probes at the different axial locations.