Abstract:
A wearable device is provided for authentication that includes a memory element and processing circuitry coupled to the memory element. The memory element is configured to store a plurality of user profiles. The processing circuitry is configured to identify a pairing between the wearable device and a device. The processing circuitry is configured to identify a user of the wearable device. The processing circuitry is also configured to determine whether the identified user matches a profile of the plurality of user profiles. The processing circuitry is also configured to, responsive to the identified user matching the profile, determine whether the profile provides authorization to access the device. The processing circuitry is also configured to, responsive to the profile providing authorization to access the device, send a message to the device authorizing access to the device.
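The authorization flow described in this abstract can be sketched as follows. This is a minimal illustration only, not the claimed implementation; the profile table, user names, and device identifiers are all hypothetical.

```python
# Hypothetical store of user profiles on the wearable device.
# Each profile lists the devices that profile is authorized to access.
USER_PROFILES = {
    "alice": {"authorized_devices": {"tablet-01", "tv-07"}},
    "bob": {"authorized_devices": {"tablet-01"}},
}

def authorize(identified_user, paired_device):
    """Return an authorization message for the paired device, or None.

    Mirrors the abstract's flow: match the identified user to a stored
    profile, check whether the profile authorizes the paired device,
    and only then produce a message authorizing access.
    """
    profile = USER_PROFILES.get(identified_user)
    if profile is None:
        return None  # identified user matches no stored profile
    if paired_device not in profile["authorized_devices"]:
        return None  # profile does not authorize this device
    return "AUTHORIZE {} for {}".format(paired_device, identified_user)
```

In this sketch the negative cases (no matching profile, or no authorization for the paired device) simply yield no message; the abstract does not specify what happens in those cases.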
Abstract:
An embodiment of this disclosure provides a wearable device. The wearable device includes a memory configured to store a plurality of content for display, a transceiver configured to receive the plurality of content from a connected device, a display configured to display the plurality of content, and a processor coupled to the memory, the display, and the transceiver. The processor is configured to control the display to display at least some of the plurality of content in a spatially arranged format. The displayed content appears on the display at a display position. The plurality of content, when shown on the connected device, is not in the spatially arranged format. The processor is also configured to receive movement information based on a movement of the wearable device. The processor is also configured to adjust the display position of the displayed content according to the movement information of the wearable device.
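A minimal sketch of the display behavior described above: content items receive positions in a spatial layout, and the wearable's movement shifts those display positions. The horizontal-strip layout, the spacing value, and the function names are illustrative assumptions, not the disclosed design.

```python
def arrange_spatially(items, spacing=100):
    """Lay content items out along a horizontal strip.

    This is one hypothetical 'spatially arranged format' that the
    connected device does not use; positions are (x, y) pixels.
    """
    return {item: (i * spacing, 0) for i, item in enumerate(items)}

def apply_movement(layout, dx, dy):
    """Shift every display position by the wearable's movement delta."""
    return {item: (x + dx, y + dy) for item, (x, y) in layout.items()}
```

For example, a small wrist movement reported as `(dx, dy)` pans all displayed content by that delta, which matches the abstract's adjustment of display position according to movement information.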
Abstract:
A method for eye tracking in a head-mountable device (HMD) includes determining at least one object within a three-dimensional (3D) extended reality (XR) environment as an eye tracking calibration point and determining a 3D location of the eye tracking calibration point within the XR environment. The method also includes detecting a gaze point of a user of the HMD and comparing the detected gaze point to an area of the XR environment that includes the 3D location of the eye tracking calibration point. The method further includes, in response to determining that the user is looking at the eye tracking calibration point based on the detected gaze point being within the area, calibrating, using a processor, the HMD to correct a difference between the eye tracking calibration point and the detected gaze point. In addition, the method includes, in response to determining that the user is not looking at the eye tracking calibration point based on the detected gaze point being outside of the area, maintaining an existing calibration of the HMD.
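The calibration decision in this abstract can be sketched as a simple geometric check: recalibrate only when the detected gaze point falls within the area around the calibration point, otherwise keep the existing calibration. The spherical area, the offset representation, and all names are assumptions for illustration.

```python
import math

def maybe_recalibrate(cal_point, gaze_point, radius, current_offset):
    """Return an updated calibration offset, or the existing one.

    cal_point and gaze_point are (x, y, z) positions in the XR
    environment. If the detected gaze falls inside a sphere of the
    given radius around the calibration point, the user is deemed to
    be looking at it, and we return an offset correcting the
    difference; otherwise the existing calibration is maintained.
    """
    if math.dist(cal_point, gaze_point) <= radius:
        # correction that maps the detected gaze onto the calibration point
        return tuple(c - g for c, g in zip(cal_point, gaze_point))
    return current_offset
```

Using a 3D object in the scene as the calibration point, as the abstract describes, lets this check run opportunistically during normal use rather than in a dedicated calibration session.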
Abstract:
A method, apparatus, and computer readable medium for interactive cinemagrams are provided. The method includes displaying a still frame of a cinemagram on a display of an electronic device, the cinemagram having an animated portion. The method also includes, after the displaying, identifying the occurrence of a triggering event based on an input from one or more sensors of the electronic device. Additionally, the method includes initiating animation of the animated portion of the cinemagram in response to identifying the occurrence of the triggering event. The method may also include generating the cinemagram by identifying a reference frame from a plurality of frames and an object in the reference frame, segmenting the object from the reference frame, tracking the object across multiple of the frames, determining whether a portion of the reference frame lacks pixel information during motion of the object, and identifying pixel information to add to the portion.
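The trigger logic in the first part of this abstract reduces to a small state machine: show the still frame until a sensor input matches the configured triggering event, then start the animated portion. This sketch is hypothetical; the abstract does not name specific events or sensors.

```python
class Cinemagram:
    """Hypothetical interactive cinemagram with one animated portion."""

    def __init__(self, trigger_event):
        self.trigger_event = trigger_event
        self.animating = False  # the still frame is shown initially

    def on_sensor_input(self, event):
        """Initiate animation when the triggering event occurs.

        Returns True once the animated portion is playing.
        """
        if not self.animating and event == self.trigger_event:
            self.animating = True
        return self.animating
```

Non-matching sensor inputs leave the still frame in place, so the animation plays only in response to the identified triggering event.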
Abstract:
Methods and electronic devices for managing information context among devices. The method includes switching from displaying information of a first application to displaying information of a second application. The method also includes identifying information of the first application that is relevant to the second application. The relevant information includes at least a portion of the displayed information of the first application. Additionally, the method includes sending an indication of the relevant information to a second electronic device for display of the relevant information at the second electronic device. The method may also include, while displaying the information of the second application, receiving input information from the second electronic device. The input information may include at least a portion of the relevant information displayed at the second electronic device. Additionally, the method may include using the input information in the second application.
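One way to picture the context handoff described above: when the first device switches applications, the portion of the first application's displayed information relevant to the second application is extracted and packaged for the companion device. The keyword-matching heuristic, the message shape, and the device name are all illustrative assumptions.

```python
def extract_relevant(first_app_display, second_app_keywords):
    """Keep only displayed lines that relate to the second application.

    A hypothetical relevance test: a line is relevant if it mentions
    any keyword associated with the second application.
    """
    return [line for line in first_app_display
            if any(k in line for k in second_app_keywords)]

def build_indication(relevant, target_device):
    """Package the relevant information for the second electronic device."""
    return {"to": target_device, "payload": relevant}
```

The companion device can later send some of this payload back as input, which the first device then uses in the second application, completing the round trip the abstract describes.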
Abstract:
One embodiment provides a method comprising analyzing one or more frames of a piece of content to determine a context of the one or more frames, determining a product to advertise in the piece of content based on the context, and augmenting the piece of content with a product placement for the product. The product placement appears to occur naturally in the piece of content.
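The context-to-product step of this method can be sketched as a lookup followed by an augmentation record. The context labels, product names, and "natural" style tag are hypothetical placeholders; the abstract does not specify how context is detected or how the placement is rendered.

```python
# Hypothetical mapping from a detected frame context to a product.
CONTEXT_TO_PRODUCT = {
    "kitchen": "coffee maker",
    "beach": "sunscreen",
}

def augment(frame_context, placements):
    """Pick a product for the frame context and record a placement.

    Appends a placement entry intended to blend naturally into the
    scene; contexts with no mapped product leave the content unchanged.
    """
    product = CONTEXT_TO_PRODUCT.get(frame_context)
    if product is not None:
        placements.append({"context": frame_context,
                           "product": product,
                           "style": "natural"})
    return placements
```

In practice the context analysis would come from frame-level scene understanding; the lookup table here simply stands in for that step.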
Abstract:
A method, an electronic device, and a non-transitory computer readable medium are provided. The method includes capturing a first eye and a second eye. The method also includes identifying a first position of the first eye and a second position of the second eye. Additionally, the method includes staggering the capture of the first eye and the second eye, wherein the first eye is captured prior to capturing the second eye. The method also includes identifying a first gaze direction of the first eye with respect to a display. The method further includes mapping a second gaze direction of the second eye based on the identified first gaze direction, prior to capturing the second eye.
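The staggered-capture idea above can be sketched in two steps: identify the first eye's gaze direction from its captured position, then map the second eye's gaze from it before the second capture. The vector representation and the parallel-gaze mapping are simplifying assumptions, not the disclosed method.

```python
def identify_gaze(eye_position, display_point):
    """Gaze direction as the vector from the eye to a display point.

    Both arguments are (x, y, z) tuples; this stands in for whatever
    gaze estimation the first eye's capture actually uses.
    """
    return tuple(d - e for e, d in zip(eye_position, display_point))

def map_second_gaze(first_gaze):
    """Predict the second eye's gaze prior to its capture.

    Simplest hypothetical mapping: assume both eyes share the same
    gaze direction (approximately true for a distant display point).
    """
    return first_gaze
```

Mapping the second gaze before its capture is what makes the staggering useful: the device already has a working estimate for the second eye while its capture is still pending.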