Abstract:
A manipulation platform includes a navigation system, a manipulation arm, and one or more area sensors. The navigation system locates a position of the manipulation platform, and the manipulation arm carries a device or a collection sensor. The area sensors acquire data representative of at least a portion of an area in which the manipulation platform is located. One or more processors determine or predict a presence of an external object within a manipulation range of the manipulation arm using the data acquired by the one or more area sensors. The processors respond to a determination that the external object is, or is predicted to be, within the manipulation range by controlling one or more of the manipulation arm or the manipulation platform.
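The control behavior described above amounts to a range check on sensed objects that triggers a protective response. The minimal sketch below illustrates that flow; every name and value in it (Detection, control_step, the 1.5 m range) is an assumption made for illustration, not something stated in the abstract.

```python
from dataclasses import dataclass

MANIPULATION_RANGE_M = 1.5  # assumed reach of the manipulation arm, in meters


@dataclass
class Detection:
    distance_m: float  # distance of an external object from the arm


def object_in_range(detections, range_m=MANIPULATION_RANGE_M):
    """True if any external object is, or is predicted to be, within range."""
    return any(d.distance_m <= range_m for d in detections)


def control_step(detections):
    """Return the control action for one sensor update."""
    if object_in_range(detections):
        return "pause_arm_and_hold_platform"  # protective response
    return "continue_operation"


# Example: an object detected 0.8 m away triggers the protective response.
print(control_step([Detection(distance_m=0.8)]))
```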
Abstract:
A robot in one embodiment includes a navigation system, a manipulation unit, a visual recognition unit, and a processing unit. The visual recognition unit includes at least one detector configured to acquire presence information of humans within at least a portion of an area in which the robot is located. The processing unit is operably coupled to the navigation system, the manipulation unit, and the visual recognition unit, and is configured to determine a presence of a human within at least one predetermined range of the robot using the presence information from the visual recognition unit, determine at least one control action responsive to determining the presence of the human within the at least one predetermined range, and control at least one of the navigation system or the manipulation unit to implement the at least one control action.
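The abstract does not specify the predetermined ranges or the control actions, so the sketch below simply maps an assumed nearest-human distance to an assumed action, tightest range first, to show the shape of the decision logic.

```python
# Assumed predetermined ranges (meters) and the actions tied to them;
# neither is specified in the abstract.
PREDETERMINED_RANGES_M = {
    "stop_manipulation": 1.0,  # human very close: halt the manipulation unit
    "slow_navigation": 3.0,    # human nearby: reduce platform speed
}


def select_control_action(nearest_human_distance_m):
    """Map the nearest detected human distance to an action, tightest range first."""
    for action, range_m in sorted(PREDETERMINED_RANGES_M.items(), key=lambda kv: kv[1]):
        if nearest_human_distance_m <= range_m:
            return action
    return "no_action"


# Example: a human detected 2.4 m away falls inside the slow-navigation range.
print(select_control_action(2.4))
```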
Abstract:
A computer-implemented system for enhanced tip-tracking and navigation of visual inspection devices includes a visual inspection device. The system further includes a plurality of spatially sensitive fibers. The system includes a computing device. The computing device includes a memory device and a processor. The system includes a storage device. The storage device includes an engineering model representing a physical asset. The computing device is configured to receive an insertion location from the visual inspection device. The computing device is configured to receive fiber information associated with the visual inspection device. The computing device is configured to determine the real-time location of the visual inspection device using the fiber information. The computing device is configured to identify the real-time location of the visual inspection device with respect to the engineering model. The computing device is configured to navigate the visual inspection device from a first location to a second location.
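Fiber-optic shape sensing is not detailed in the abstract, so the sketch below stands in for it with per-segment displacement vectors accumulated from the insertion location, then matched against named points in a hypothetical engineering model. All numbers and names are illustrative only.

```python
import numpy as np


def estimate_tip_location(insertion_location, segment_vectors):
    """Accumulate per-segment displacements from the insertion point."""
    return np.asarray(insertion_location, dtype=float) + np.sum(segment_vectors, axis=0)


def nearest_model_feature(tip_location, model_features):
    """Identify the engineering-model feature closest to the estimated tip."""
    names, points = zip(*model_features.items())
    distances = np.linalg.norm(np.asarray(points) - tip_location, axis=1)
    return names[int(np.argmin(distances))]


# Example with made-up numbers: insertion at a borescope port, two fiber segments.
insertion = [0.0, 0.0, 0.0]
segments = np.array([[0.10, 0.00, 0.02], [0.08, 0.03, 0.00]])
model = {"stage_1_blade": [0.20, 0.05, 0.02], "combustor_liner": [0.50, 0.0, 0.0]}
tip = estimate_tip_location(insertion, segments)
print(tip, nearest_model_feature(tip, model))
```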
Abstract:
An optical imaging system includes a birefringent element, a light modulating element, and a polarizer element. The birefringent element is configured for decomposing unpolarized light into first linear polarized light and second linear polarized light under different refractive indices to respectively form a first focal length and a second focal length in the optical imaging system. The light modulating element is configured for modulating a state of polarization of the first and second linear polarized light in response to control signals. The polarizer element is configured for filtering out one of the modulated first and second linear polarized light for creating a single image.
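The two focal lengths arise because the ordinary and extraordinary polarizations see different refractive indices in the birefringent element. A back-of-the-envelope calculation with the thin-lens maker's equation and calcite-like indices (assumed example values, not from the abstract) shows the effect:

```python
# Illustrative calculation only: focal lengths of a plano-convex birefringent
# lens for the ordinary and extraordinary rays, using f = R / (n - 1).
R_mm = 50.0              # assumed radius of curvature of the curved surface
n_ordinary = 1.658       # ordinary refractive index (calcite, approximate)
n_extraordinary = 1.486  # extraordinary refractive index (calcite, approximate)


def focal_length_mm(n, radius_mm=R_mm):
    """Thin plano-convex lens: f = R / (n - 1)."""
    return radius_mm / (n - 1.0)


# The two polarizations see different indices, hence two focal lengths; the
# modulator plus polarizer pass only one of them, yielding a single image.
print(f"ordinary focus:      {focal_length_mm(n_ordinary):.1f} mm")
print(f"extraordinary focus: {focal_length_mm(n_extraordinary):.1f} mm")
```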
Abstract:
A surface roughness measurement device in one embodiment includes main and auxiliary emitting fibers, multiple collecting fibers, an optical housing, main and auxiliary reflective mirrors, and an external circuit. The optical housing contains the fibers and defines an aperture for optically contacting a surface of an object. The main reflective mirror is arranged in the optical housing to reflect light emitted from the main emitting fiber to a detecting point of the aperture and to reflect light reflected by the object to the collecting fibers. The auxiliary reflective mirror is arranged in the optical housing to reflect light emitted from the auxiliary emitting fiber to the detecting point. The external circuit generates a laser beam delivered to the main and auxiliary emitting fibers, collects the reflected light from the collecting fibers, and calculates the surface roughness of the object based on the collected reflected light.
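The abstract does not state how roughness is derived from the collected light, so the sketch below uses one standard optical relation, the total-integrated-scatter (TIS) approximation, purely to illustrate the kind of calculation an external circuit might perform; the wavelength, incidence angle, and detector readings are made up.

```python
import math


def rms_roughness_nm(specular_power, diffuse_power, wavelength_nm=650.0, incidence_deg=0.0):
    """RMS roughness estimate from specular and diffuse reflected power (TIS model):
    sigma = (lambda / (4*pi*cos(theta))) * sqrt(-ln(1 - TIS))."""
    tis = diffuse_power / (specular_power + diffuse_power)  # scattered fraction
    return (wavelength_nm / (4.0 * math.pi * math.cos(math.radians(incidence_deg)))) * \
        math.sqrt(-math.log(1.0 - tis))


# Example with made-up readings from the collecting fibers.
print(f"{rms_roughness_nm(specular_power=9.0, diffuse_power=1.0):.1f} nm")
```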
Abstract:
A system for inspection of a blade of a wind turbine in operation is provided. The system comprises a light projection unit, an imaging unit, and a processing unit. The light projection unit generates and projects a light pattern towards a blade of a wind turbine in operation. The imaging unit captures a plurality of scanning light patterns on the blade of the wind turbine during rotation of the blade. The processing unit is configured to process the plurality of captured light patterns from the imaging unit for inspection of deflection of the blade. A method for inspection of a blade of a wind turbine in operation is also presented.
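How deflection is extracted from the captured patterns is not spelled out in the abstract; one simple possibility, sketched below, is to compare where a projected stripe falls on the blade against a reference position and convert the pixel shift to meters with an assumed image scale. The scale factor and stripe positions are illustrative only.

```python
import statistics

METERS_PER_PIXEL = 0.004  # assumed image scale at the blade distance


def deflection_m(reference_stripe_px, measured_stripe_px, scale=METERS_PER_PIXEL):
    """Out-of-plane deflection estimated from the shift of one projected stripe."""
    return (measured_stripe_px - reference_stripe_px) * scale


# Example: stripe positions captured over several blade passes.
measurements_px = [512.0, 514.5, 513.8, 515.2]
shifts = [deflection_m(510.0, m) for m in measurements_px]
print(f"mean deflection ~ {statistics.mean(shifts):.3f} m")
```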
Abstract:
A method and system include an imaging device configured to capture image data of a vehicle. The vehicle includes one or more components of interest. The method and system include a memory device configured to store an image detection algorithm based on one or more image templates corresponding to the one or more components of interest. The method and system also include an image processing unit operably coupled to the imaging device and the memory device. The image processing unit is configured to determine one or more shapes of interest in the image data using the image detection algorithm that correspond to the one or more components of interest, and to determine one or more locations of the one or more shapes of interest relative to the vehicle.
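The abstract leaves the image detection algorithm unspecified; normalized cross-correlation template matching is one common choice, sketched here with OpenCV on synthetic data so it runs without real vehicle imagery. Mapping the pixel location to a location relative to the vehicle would additionally require camera calibration, which is omitted.

```python
import numpy as np
import cv2

# Synthetic stand-in data: a noisy background with one bright, textured
# "component" placed at known pixel coordinates (top-left at x=60, y=40).
rng = np.random.default_rng(0)
image = rng.integers(0, 40, size=(120, 160), dtype=np.uint8)
component = np.tile(np.arange(100, 200, 5, dtype=np.uint8), (20, 1))
image[40:60, 60:80] = component
template = component.copy()  # image template for the component of interest

# Normalized cross-correlation over the whole image; the peak gives the
# shape-of-interest location in pixel coordinates.
scores = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_xy = cv2.minMaxLoc(scores)
print(f"component found at pixel {best_xy} with score {best_score:.2f}")
```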