-
Publication Number: US11126224B2
Publication Date: 2021-09-21
Application Number: US16660922
Filing Date: 2019-10-23
Applicant: Snap Inc.
Inventor: Russell Douglas Patton , Jonathan M. Rodriguez, II , Julio Cesar Castañeda , Samuel Bryson Thompson
Abstract: A wearable device includes a body having fasteners and a frame coupled between two fasteners. The frame includes first and second sections. A first portion of the body includes the first section of the frame and one fastener and a second portion of the body includes the second section of the frame and the other fastener. A speaker and a microphone are connected to the first portion and another speaker and another microphone are connected to the second portion. The body also includes a processor, memory accessible to the processor, and programming in the memory for configuring the processor to selectively activate the speakers and microphones such that a first speaker emits an output sound signal while a first microphone and a second speaker are deactivated and a second microphone captures an input sound signal during the emission of the output sound signal by the first speaker.
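A minimal sketch of the selective activation described above, assuming hypothetical helper names not taken from the patent: while one speaker emits, the adjacent microphone and the other speaker are held inactive, and only the opposite microphone captures input during the emission.

```python
from dataclasses import dataclass


@dataclass
class Transducer:
    """A speaker or microphone on one portion of the wearable body."""
    name: str
    active: bool = False


def emit_and_capture(speakers, microphones, emit_idx, capture_idx, signal):
    """Activate one speaker and one microphone; deactivate the rest.

    Returns the samples captured while the output signal is being emitted.
    """
    for i, spk in enumerate(speakers):
        spk.active = (i == emit_idx)
    for i, mic in enumerate(microphones):
        mic.active = (i == capture_idx)

    # Placeholder for driving hardware: play `signal` on the active speaker
    # and record from the active microphone for the same duration.
    captured = [0.0] * len(signal)  # stand-in for real audio capture
    return captured


# Example: the first speaker emits while only the second microphone listens.
speakers = [Transducer("speaker_1"), Transducer("speaker_2")]
microphones = [Transducer("microphone_1"), Transducer("microphone_2")]
samples = emit_and_capture(speakers, microphones, emit_idx=0, capture_idx=1,
                           signal=[0.0, 1.0, 0.0, -1.0])
```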
-
Publication Number: US20190295326A1
Publication Date: 2019-09-26
Application Number: US16438226
Filing Date: 2019-06-11
Applicant: Snap Inc.
Inventor: Nathan Jurgenson , Linjie Luo , Jonathan M. Rodriguez, II , Rahul Sheth , Jia Li , Xutao Lv
IPC: G06T19/00 , G06T7/20 , G06T7/73 , G06F3/0481 , G06F3/01 , G06K9/00 , G06K9/78 , G06T13/80 , G06T19/20 , G06T7/246
Abstract: Systems and methods for image based location estimation are described. In one example embodiment, a first positioning system is used to generate a first position estimate. A set of structure façade data describing one or more structure façades associated with the first position estimate is then accessed. A first image of an environment is captured, and a portion of the image is matched to part of the structure façade data. A second position is then estimated based on a comparison of the structure façade data with the portion of the image matched to the structure façade data.
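As a rough illustration of the coarse-to-fine estimate in this abstract, the sketch below uses a first position estimate to select nearby façade data, matches an image patch against the candidates, and returns a refined position. All class, function, and field names are hypothetical.

```python
# Hypothetical sketch of coarse-to-fine position estimation via façade matching.
from dataclasses import dataclass
import math


@dataclass
class Facade:
    lat: float
    lon: float
    descriptor: tuple  # simplified stand-in for façade appearance data


def nearby_facades(facades, estimate, radius_m=100.0):
    """Select façades within a radius of the first (e.g., GPS) estimate."""
    lat0, lon0 = estimate

    def dist(f):
        # Rough planar approximation, adequate over a short radius.
        return math.hypot((f.lat - lat0) * 111_000,
                          (f.lon - lon0) * 111_000 * math.cos(math.radians(lat0)))

    return [f for f in facades if dist(f) <= radius_m]


def match_score(image_patch, facade):
    """Toy similarity: negative squared distance between descriptors."""
    return -sum((a - b) ** 2 for a, b in zip(image_patch, facade.descriptor))


def refine_position(first_estimate, image_patch, facades):
    candidates = nearby_facades(facades, first_estimate)
    if not candidates:
        return first_estimate
    best = max(candidates, key=lambda f: match_score(image_patch, f))
    # Second estimate: snap toward the matched façade's known location.
    return (best.lat, best.lon)


facades = [Facade(40.7128, -74.0060, (0.2, 0.8)),
           Facade(40.7130, -74.0055, (0.9, 0.1))]
print(refine_position((40.7129, -74.0058), (0.85, 0.15), facades))
```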
-
Publication Number: US20240346784A1
Publication Date: 2024-10-17
Application Number: US18752556
Filing Date: 2024-06-24
Applicant: Snap Inc.
Inventor: Edmund Graves Brown , Benjamin Lucas , Jonathan M. Rodriguez, II , Richard Zhuang
CPC classification number: G06T19/006 , G01C21/365 , G06Q10/02
Abstract: A method of locating a personal mobility system using an augmented reality device is disclosed. The method comprises receiving positional data corresponding to a location of a personal mobility system, determining a relative position between the augmented reality device and the location of the personal mobility system, and causing the display of an augmented reality effect by the augmented reality device based on the relative position between the augmented reality device and the location of the personal mobility system.
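A small sketch of how the described relative position could drive an AR effect; the distance threshold and the effect names are assumptions for illustration only.

```python
# Minimal sketch (hypothetical names) of choosing an AR cue from the relative
# position between the AR device and a reported personal mobility system.
import math


def relative_position(device_xyz, pm_xyz):
    """Vector from the AR device to the personal mobility system."""
    return tuple(p - d for d, p in zip(device_xyz, pm_xyz))


def choose_ar_effect(rel):
    """Pick an effect based on distance: a marker nearby, an arrow when far."""
    distance = math.sqrt(sum(c * c for c in rel))
    if distance < 25.0:
        return {"effect": "highlight_marker", "anchor": rel}
    bearing = math.degrees(math.atan2(rel[1], rel[0]))
    return {"effect": "direction_arrow", "bearing_deg": bearing,
            "distance_m": round(distance, 1)}


# Example: device at the origin, vehicle about 42 m to the north-east.
print(choose_ar_effect(relative_position((0.0, 0.0, 0.0), (30.0, 30.0, 0.0))))
```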
-
Publication Number: US20240305877A1
Publication Date: 2024-09-12
Application Number: US18665624
Filing Date: 2024-05-16
Applicant: Snap Inc.
Inventor: Peter Brook , Russell Douglas Patton , Jonathan M. Rodriguez, II
IPC: H04N23/617 , H04N23/65 , H04N23/71 , H04N23/73
CPC classification number: H04N23/617 , H04N23/65 , H04N23/71 , H04N23/73
Abstract: Systems, methods, devices, and instructions are described for fast boot of a processor as part of camera operation. In some embodiments, in response to a camera input, a digital signal processor (DSP) of a device is booted using a first set of instructions. Capture of image sensor data is initiated using the first set of instructions at the DSP. The DSP then receives a second set of instructions and the DSP is programmed using the second set of instructions after at least a first frame of the image sensor data is stored in a memory of the device. The first frame of the image sensor data is processed using the DSP as programmed by the second set of instructions. In some embodiments, the first set of instructions includes only instructions for setting camera sensor values, and the second set of instructions includes instructions for processing raw sensor data into formatted image files.
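The two-stage flow in this abstract can be sketched roughly as follows, with a stand-in DSP object and made-up instruction names: a small first program starts sensor capture, and the full processing program is loaded only after a raw frame is already in memory.

```python
# Illustrative two-stage boot flow; names are hypothetical, not the patent's API.
class FakeDSP:
    def __init__(self):
        self.program = None
        self.frames = []

    def boot(self, instructions):
        self.program = instructions              # stage 1: sensor setup only

    def capture_frame(self):
        self.frames.append(b"raw-sensor-bytes")  # raw frame stored in memory

    def reprogram(self, instructions):
        self.program = instructions              # stage 2: full image pipeline

    def process(self, frame):
        return f"processed({len(frame)} bytes) with {self.program}"


def fast_boot_capture(dsp, setup_instructions, pipeline_instructions):
    dsp.boot(setup_instructions)                 # small, fast-to-load program
    dsp.capture_frame()                          # capture begins before the full load
    dsp.reprogram(pipeline_instructions)         # swap in processing instructions
    return dsp.process(dsp.frames[0])            # first frame processed afterwards


print(fast_boot_capture(FakeDSP(), "set-sensor-registers", "raw-to-jpeg"))
```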
-
Publication Number: US12085787B2
Publication Date: 2024-09-10
Application Number: US18116203
Filing Date: 2023-03-01
Applicant: Snap Inc.
Inventor: Jonathan M. Rodriguez, II , Julio Cesar Castañeda , Samuel Bryson Thompson , Michael Christian Ryner
CPC classification number: G02C7/088 , G02B6/1245 , G02C5/006 , G03B21/142
Abstract: Eyewear including a frame, a projector supported by the frame, and a lens supported by the frame. The lens has a first surface facing an eye of the user and a second surface facing away from the eye of the user when the frame is worn. The lens also includes a waveguide defined by the first and second surfaces to receive light from the projector. An input light coupler and an output light coupler are on the first surface of the lens, and at least one reflector is positioned on the second surface of the lens to redirect light received from the input coupler and/or the output coupler that has an angle of incidence with respect to the second surface which would result in that portion of the light exiting the waveguide through the second surface in the absence of the at least one reflector.
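The reflector placement hinges on the angle of incidence at the second surface. As a standard reference point rather than a figure from the patent, total internal reflection at the lens-air interface holds only above the critical angle:

```latex
% Critical angle for total internal reflection at the lens-air interface
% (standard Snell's-law result, not a value taken from the patent):
\theta_c = \arcsin\!\left(\frac{n_{\mathrm{air}}}{n_{\mathrm{lens}}}\right)
```

Rays arriving below the critical angle would exit the waveguide through the second surface, which is the portion of the light the at least one reflector is positioned to redirect; for a lens index of about 1.5, the critical angle is roughly 41.8 degrees.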
-
Publication Number: US20230350203A1
Publication Date: 2023-11-02
Application Number: US17661498
Filing Date: 2022-04-29
Applicant: Snap Inc.
Inventor: Russell Douglas Patton , Jonathan M. Rodriguez, II
CPC classification number: G02B27/0172 , G09G3/002 , G02B27/1006 , G02B5/3025 , G09G2340/125 , G09G2330/021 , G09G2354/00 , G09G2320/0613 , G02B2027/014 , G02B2027/0138
Abstract: Methods and systems are disclosed for performing operations for displaying virtual content on a contact lens. The operations comprise causing the contact lens to operate in a first display mode to allow unobstructed light associated with a real-world environment to be received by a pupil of a user; detecting a condition associated with display of virtual content; selecting between a second display mode and a third display mode as a new display mode in which to operate the contact lens, the second display mode obstructing a portion of the light associated with a real-world environment received by the pupil with the virtual content, the third display mode obstructing all of the light associated with a real-world environment received by the pupil with the virtual content; and transitioning the contact lens from operating in the first display mode to operating in the new display mode in response to detecting the condition.
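A compact sketch of the three-mode transition logic; the enum names and the shape of the detected condition are assumptions for illustration, not the patent's interface.

```python
# Sketch of the three-mode selection logic; values and condition checks are
# assumptions for illustration only.
from enum import Enum, auto


class DisplayMode(Enum):
    PASS_THROUGH = auto()      # first mode: unobstructed real-world light
    PARTIAL_OVERLAY = auto()   # second mode: virtual content over part of the view
    FULL_OVERLAY = auto()      # third mode: virtual content obstructs all light


def next_mode(current, condition):
    """Transition out of pass-through when a display condition is detected."""
    if current is not DisplayMode.PASS_THROUGH or condition is None:
        return current
    # Choose between partial and full obstruction based on the condition.
    return (DisplayMode.FULL_OVERLAY if condition.get("immersive")
            else DisplayMode.PARTIAL_OVERLAY)


mode = DisplayMode.PASS_THROUGH
mode = next_mode(mode, {"immersive": False})   # -> DisplayMode.PARTIAL_OVERLAY
print(mode)
```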
-
Publication Number: US11714280B2
Publication Date: 2023-08-01
Application Number: US17446905
Filing Date: 2021-09-03
Applicant: Snap Inc.
Inventor: Jonathan M. Rodriguez, II
CPC classification number: G02B27/017 , G06F1/163 , G06F3/011 , G06F21/36 , G06T19/006 , G02B2027/0178
Abstract: Augmented reality eyewear devices allow users to experience a version of our “real” physical world augmented with virtual objects. Augmented reality eyewear may present a user with a graphical user interface that appears to be in the airspace directly in front of the user thereby encouraging the user to interact with virtual objects in socially undesirable ways, such as by making sweeping hand gestures in the airspace in front of the user. Anchoring various input mechanisms or the graphical user interface of an augmented reality eyewear application to a wristwatch may allow a user to interact with an augmented reality eyewear device in a more socially acceptable manner. Combining the displays of a smartwatch and an augmented reality eyewear device into a single graphical user interface may provide enhanced display function and more responsive gestural input.
-
Publication Number: US20230007227A1
Publication Date: 2023-01-05
Application Number: US17839579
Filing Date: 2022-06-14
Applicant: Snap Inc.
Inventor: Edmund Brown , Benjamin Lucas , Simon Nielsen , Jonathan M. Rodriguez, II , Richard Zhuang
IPC: H04N13/296 , H04N13/239 , H04N13/344 , H04N13/383 , G02B27/01
Abstract: Eyewear providing an interactive augmented reality experience to users in a first physical environment viewing objects in a second physical environment (e.g., an X-ray effect). The second environment may be a room positioned behind a barrier, such as a wall. The user views the second environment via a sensor system moveable on the wall using a track system. As the user in the first environment moves the eyewear to face the outside surface of the wall along a line-of-sight (LOS) at a location (x, y, z), the sensor system on the track system repositions to the same location (x, y, z) on the inside surface of the wall. The image captured by the sensor system in the second environment is wirelessly transmitted to the eyewear for display on the eyewear displays, providing the user with an X-ray effect of looking through the wall to see the objects within the other environment.
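One way to picture the mirrored repositioning, as a hypothetical control loop with stand-in Track, Camera, and Display objects: the gaze ray is intersected with the wall plane, the track-mounted sensor moves to the same point on the inside surface, and the captured frame is shown on the eyewear.

```python
# Hypothetical control loop for the "X-ray" effect described above.
def los_wall_intersection(eye_pos, gaze_dir, wall_y=0.0):
    """Intersect a gaze ray with a wall lying in the plane y = wall_y."""
    ex, ey, ez = eye_pos
    dx, dy, dz = gaze_dir
    if dy == 0:
        return None                      # gaze parallel to the wall
    t = (wall_y - ey) / dy
    if t <= 0:
        return None                      # wall is behind the viewer
    return (ex + t * dx, wall_y, ez + t * dz)


def update_xray_view(eye_pos, gaze_dir, track, camera, display):
    hit = los_wall_intersection(eye_pos, gaze_dir)
    if hit is None:
        return
    track.move_to(hit)                   # same (x, y, z) on the inside surface
    frame = camera.capture()             # image of the hidden room
    display.show(frame)                  # rendered on the eyewear displays


class Track:
    def move_to(self, point):
        print(f"sensor repositioned to {point}")


class Camera:
    def capture(self):
        return "frame-of-hidden-room"


class Display:
    def show(self, frame):
        print(f"eyewear displays {frame}")


update_xray_view((0.0, 2.0, 1.5), (0.0, -1.0, 0.0), Track(), Camera(), Display())
```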
-
Publication Number: US20220300032A1
Publication Date: 2022-09-22
Application Number: US17835628
Filing Date: 2022-06-08
Applicant: Snap Inc.
Inventor: Russell Douglas Patton , Jonathan M. Rodriguez, II , Julio Cesar Castañeda , Samuel Bryson Thompson
Abstract: A wearable device includes a body having fasteners and a frame coupled between two fasteners. The frame includes first and second sections. A first portion of the body includes the first section of the frame and one fastener and a second portion of the body includes the second section of the frame and the other fastener. A speaker and a microphone are connected to the first portion and another speaker and another microphone are connected to the second portion. The body also includes a processor, memory accessible to the processor, and programming in the memory for configuring the processor to selectively activate the speakers and microphones such that a first speaker emits an output sound signal while a first microphone and a second speaker are deactivated and a second microphone captures an input sound signal during the emission of the output sound signal by the first speaker.
-
Publication Number: US20200219312A1
Publication Date: 2020-07-09
Application Number: US16824297
Filing Date: 2020-03-19
Applicant: Snap Inc.
Inventor: Nathan Jurgenson , Linjie Luo , Jonathan M. Rodriguez, II , Rahul Sheth , Jia Li , Xutao Lv
Abstract: Systems and methods for image based location estimation are described. In one example embodiment, a first positioning system is used to generate a first position estimate. Point cloud data describing an environment is then accessed. A two-dimensional surface of an image of an environment is captured, and a portion of the image is matched to a portion of key points in the point cloud data. An augmented reality object is then aligned within one or more images of the environment based on the match of the point cloud with the image. In some embodiments, building façade data may additionally be used to determine a device location and place the augmented reality object within an image.
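A toy illustration of the final alignment step, with the key-point matching reduced to a nearest-neighbour lookup and every name chosen for illustration only.

```python
# Sketch of anchoring an AR object after image key points have been matched to
# a point cloud; the matching itself is simplified to nearest-neighbour lookup.
import math


def nearest_cloud_point(key_point, cloud):
    """Return the cloud point closest to a matched 3D key point."""
    return min(cloud, key=lambda p: math.dist(p, key_point))


def place_ar_object(image_key_points, cloud, offset=(0.0, 0.0, 0.5)):
    """Average the matched cloud points and place the object at a fixed offset."""
    matches = [nearest_cloud_point(kp, cloud) for kp in image_key_points]
    cx = sum(p[0] for p in matches) / len(matches)
    cy = sum(p[1] for p in matches) / len(matches)
    cz = sum(p[2] for p in matches) / len(matches)
    return (cx + offset[0], cy + offset[1], cz + offset[2])


cloud = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
print(place_ar_object([(0.1, 0.1, 0.0), (0.9, 0.1, 0.0)], cloud))
```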
-