Abstract:
The disclosure provides a feature point position detection method and an electronic device. The method includes: obtaining a plurality of first relative positions of a plurality of feature points on a specific object relative to a first image capturing element; obtaining a plurality of second relative positions of the plurality of feature points on the specific object relative to a second image capturing element; and in response to determining that the first image capturing element is unreliable, estimating a current three-dimensional position of each feature point based on a historical three-dimensional position of the feature point and the plurality of second relative positions.
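The fallback step above could be sketched as follows. This is a minimal illustration, assuming the per-point displacement observed by the second image capturing element between frames can simply be applied to each point's historical three-dimensional position; all function and variable names are hypothetical, not from the disclosure.

```python
import numpy as np

def estimate_positions(historical_3d, second_relative, prev_second_relative):
    """Estimate current 3D feature-point positions when the first image
    capturing element is deemed unreliable.

    Illustrative sketch: propagate each historical 3D position by the
    displacement observed from the second image capturing element.
    """
    # Per-point displacement seen by the second image capturing element
    displacement = second_relative - prev_second_relative
    # Apply that displacement to each point's historical 3D position
    return historical_3d + displacement

# Toy data: two feature points (coordinates in meters, assumed)
hist = np.array([[0.0, 0.0, 1.0], [0.1, 0.2, 1.5]])  # historical 3D positions
prev = np.array([[0.0, 0.0, 1.0], [0.1, 0.2, 1.5]])  # prior second-element positions
curr = np.array([[0.0, 0.1, 1.0], [0.1, 0.3, 1.5]])  # current second-element positions
print(estimate_positions(hist, curr, prev))
```

In practice the fusion would also account for the extrinsic transform between the two image capturing elements; this sketch only conveys the history-plus-second-observation structure of the estimate.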
Abstract:
Augmented reality glasses including a first image source, a second image source, and a lens set are provided. The first image source emits a first image beam. The second image source emits a second image beam. The lens set includes a first lens and a second lens and is disposed on the path of the image beams. A gap is formed between the first lens and the second lens. The refractive index of the gap is lower than that of the first lens. The image beams enter the lens set at an incident surface of the lens set, are reflected at a first surface of the first lens, and exit the lens set at an exit surface. The optical path length of the first image beam from the first image source to the eyes is different from that of the second image beam from the second image source to the eyes.
Abstract:
A pair of augmented reality glasses including a projection device and a waveguide is provided. The projection device is configured to provide a collimated beam. The waveguide has a plurality of free form surfaces. The distances between the free form surfaces and the projection device differ from one another. The collimated beam progresses to and reflects off these free form surfaces in sequence, and then enters the eyes of the user.
Abstract:
A stereoscopic display device including a display panel and an optical element is provided. The optical element is disposed on a display surface of the display panel. The optical element includes a lenticular lens array. The lenticular lens array includes a plurality of lenticular lenses extending in a first direction. The lenticular lenses are arranged in a second direction. The optical element includes a center line extending in the first direction. The lenticular lenses include a first lenticular lens and a second lenticular lens. The distance between the first lenticular lens and the center line is smaller than the distance between the second lenticular lens and the center line, and the curvature of the first lenticular lens is greater than the curvature of the second lenticular lens.
Abstract:
A mobile device includes a ground element and an antenna element. The antenna element includes a first radiation portion, a second radiation portion, and a third radiation portion. The first radiation portion is electrically connected between a feeding point and an edge of the ground element, and the antenna element operates in a first frequency band through a first path formed by the first radiation portion. A first end of the second radiation portion is electrically connected to the first radiation portion, and a second end of the second radiation portion is a first open end. The third radiation portion is electrically connected between the second radiation portion and the edge of the ground element. The antenna element operates in a second frequency band through a second path formed by the second radiation portion and the third radiation portion.
Abstract:
An auto-focus compensation method, adapted to an image capturing device having a touch screen, includes the following steps. First, a first touch operation performed on a first object displayed on the touch screen is detected. Next, an auto-focusing procedure is performed so as to obtain a focused image. A staying time of the first touch operation on the first object is accumulated, and it is determined whether the staying time exceeds a time threshold. When the staying time exceeds the time threshold, an adjustment interface is displayed on the touch screen, and a second touch operation performed on the adjustment interface is detected. The focused image is adjusted according to the second touch operation, and a new focused image is thus obtained and captured. An image capturing device is also provided in the invention to implement the aforementioned auto-focus compensation method.
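The staying-time logic described above can be sketched as a small state machine. This is an illustrative assumption-laden sketch, not the disclosed implementation: the class name, the one-second threshold, and the timestamp-based accumulation are all invented for demonstration.

```python
TIME_THRESHOLD = 1.0  # seconds; illustrative value, not from the disclosure

class AutoFocusCompensator:
    """Minimal state machine mirroring the described method steps."""

    def __init__(self):
        self.touch_start = None      # timestamp of the first touch operation
        self.adjustment_shown = False

    def on_touch_down(self, now):
        # First touch operation detected on the displayed object;
        # an auto-focusing procedure would run here to obtain a focused image.
        self.touch_start = now

    def on_touch_held(self, now):
        # Accumulate the staying time and compare it with the threshold.
        if self.touch_start is not None and now - self.touch_start > TIME_THRESHOLD:
            self.adjustment_shown = True  # display the adjustment interface
        return self.adjustment_shown

afc = AutoFocusCompensator()
afc.on_touch_down(0.0)
print(afc.on_touch_held(0.5))  # staying time below the threshold
print(afc.on_touch_held(1.5))  # staying time exceeds the threshold
```

Once the adjustment interface is shown, a second touch operation on it would drive the focus adjustment and the capture of the new focused image, which this sketch does not model.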
Abstract:
The disclosure provides an augmented reality (AR) device, a notebook, and smart glasses. The AR device includes a laser source, a spatial light modulator (SLM), and a hologram optical element (HOE). The laser source provides a coherent laser ray. The SLM provides a diffraction pattern solely corresponding to the coherent laser ray. When the SLM receives the coherent laser ray, the diffraction pattern diffracts the coherent laser ray into a hologram. The HOE provides a concave mirror effect merely in response to a wavelength of the coherent laser ray, wherein the HOE receives the hologram and magnifies the hologram as a stereoscopic virtual image.
Abstract:
An augmented reality display device, which is configured to provide an augmented reality image to an eye of a user, includes a display module and multiple mirror sets. The display module includes a projector and multiple switching elements. The projector provides an original image beam. The switching elements are disposed sequentially on a path of the original image beam. Each of the switching elements reflects or transmits the original image beam. Reflection paths of each ray of the original image beam on the switching elements intersect at one point to generate multiple image beams. The mirror sets are disposed on paths of the image beams. The image beams are reflected at different angles on the mirror sets, and the mirror sets reflect the image beams to the eye.
Abstract:
An augmented reality display device is used to provide an augmented reality image to one eye of a user. The augmented reality display device includes a curved eyepiece, multiple first micromirrors, and two first displays. The first micromirrors are disposed on the curved eyepiece. The two first displays are respectively disposed on two opposite sides of the curved eyepiece. Each first display is for emitting a first image beam. The first micromirrors are for imaging the two first image beams emitted by the two first displays onto a retina of the eye to form the augmented reality image. The horizontal field of view formed by the first micromirrors at the eye falls within the range of 80 degrees to 110 degrees.
Abstract:
The disclosure provides an eye tracking method and an eye tracking device. The method includes obtaining a reference interpupillary distance value; taking images of a user of a 3D display, and finding a first eye pixel coordinate corresponding to a first eye of the user and a second eye pixel coordinate corresponding to a second eye of the user in each image; detecting a first eye spatial coordinate of the first eye and a second eye spatial coordinate of the second eye, and determining projection coordinates based on the first eye spatial coordinate, the second eye spatial coordinate, and optical parameters of image capturing elements; determining an optimization condition related to the first and second eye spatial coordinates based on the first and second eye pixel coordinates, the projection coordinates, and the reference interpupillary distance value of each image; and optimizing the first and second eye spatial coordinates based on the optimization condition.
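One plausible form of the optimization condition above is a cost that combines the reprojection error of both eyes with a penalty keeping the estimated interpupillary distance near the reference value. The sketch below illustrates that structure only; the weighting, the pinhole projection with a 500-pixel focal length, and every name are assumptions, not details from the disclosure.

```python
import numpy as np

def tracking_cost(eye1_3d, eye2_3d, pix1, pix2, project, ipd_ref, w=1.0):
    """Illustrative eye-tracking objective: reprojection error of the two
    eye spatial coordinates plus an interpupillary-distance penalty.
    `project` maps a 3D point to pixel coordinates using the capturing
    element's optical parameters."""
    reproj = (np.sum((project(eye1_3d) - pix1) ** 2)
              + np.sum((project(eye2_3d) - pix2) ** 2))
    ipd = np.linalg.norm(eye1_3d - eye2_3d)           # estimated IPD
    return reproj + w * (ipd - ipd_ref) ** 2          # weighted combination

# Toy pinhole projection with an assumed focal length of 500 px
def project(p):
    return 500.0 * p[:2] / p[2]

# Eye positions 64 mm apart at 0.6 m; cost is zero when the spatial
# coordinates reproject exactly and match the reference IPD.
e1 = np.array([-0.032, 0.0, 0.6])
e2 = np.array([0.032, 0.0, 0.6])
cost = tracking_cost(e1, e2, project(e1), project(e2), project, 0.064)
print(round(cost, 6))
```

Optimizing the first and second eye spatial coordinates would then amount to minimizing this cost over all captured images, e.g. with a standard nonlinear least-squares solver.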