-
Publication Number: US09875424B2
Publication Date: 2018-01-23
Application Number: US14992696
Application Date: 2016-01-11
Applicant: Apple Inc.
Inventor: Selim Benhimane , Daniel Ulbricht
CPC classification number: G06K9/6211 , G06T7/337 , G06T7/70 , G06T7/74 , G06T2207/30244
Abstract: A method for determining correspondences between a first and a second image, comprising the steps of: providing a first image and a second image of the real environment, defining a warping function between the first and second image, determining the parameters of the warping function between the first image and the second image by means of an image registration method, determining a third image by applying the warping function with the determined parameters to the first image, determining a matching result by matching the third image and the second image, and determining correspondences between the first and the second image using the matching result and the warping function with the determined parameters. The method may be used in a keyframe-based method for determining the pose of a camera based on the determined correspondences.
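The abstract describes a warp-then-match pipeline: register the two images, warp the first into the geometry of the second, match against the warped copy, then map the matches back through the warp. Below is a minimal Python/OpenCV sketch of that idea, assuming the warping function is a homography and using ORB features with brute-force matching; these concrete choices are illustrative assumptions, not taken from the patent.

```python
# Sketch: correspondences via warp-then-match (homography assumed as the warping
# function; ORB/BFMatcher are illustrative choices, not prescribed by the patent).
import cv2
import numpy as np

def correspondences_via_warp(img1, img2):
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)

    # Image registration step: estimate the warp (here a homography) from img1 to img2.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    coarse = matcher.match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in coarse]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in coarse]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

    # Third image: img1 warped into the geometry of img2.
    h, w = img2.shape[:2]
    img3 = cv2.warpPerspective(img1, H, (w, h))

    # Match the warped image against img2 (appearance is now roughly aligned).
    k3, d3 = orb.detectAndCompute(img3, None)
    matches = matcher.match(d3, d2)

    # Map points found in img3 back into img1 via the inverse warp to obtain
    # correspondences between the first and the second image.
    pts3 = np.float32([k3[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    pts1 = cv2.perspectiveTransform(pts3, np.linalg.inv(H)).reshape(-1, 2)
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches])
    return pts1, pts2
```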
-
Publication Number: US11783558B1
Publication Date: 2023-10-10
Application Number: US17678544
Application Date: 2022-02-23
Applicant: Apple Inc.
Inventor: Angela Blechschmidt , Daniel Ulbricht , Alexander S. Polichroniadis
CPC classification number: G06T19/20 , G06T7/70 , G06T17/00 , G06V20/70 , G06T2207/10028 , G06T2210/61 , G06T2219/2004
Abstract: Various implementations disclosed herein include devices, systems, and methods that use object relationships represented in a scene graph to adjust the positions of objects. For example, an example process may include obtaining a three-dimensional (3D) representation of a physical environment that was generated based on sensor data obtained during a scanning process, detecting positions of a set of objects in the physical environment based on the 3D representation, generating a scene graph for the 3D representation of the physical environment based on the detected positions of the set of objects, wherein the scene graph represents the set of objects and relationships between the objects, and determining a refined 3D representation of the physical environment by refining the position of at least one object in the set of objects based on the scene graph and an alignment rule associated with a relationship in the scene graph.
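The refinement step pairs a relationship in the scene graph with an alignment rule. Below is a minimal sketch of that idea, assuming a toy scene graph with an "on_top_of" relationship and a rule that snaps a supported object onto its supporting surface; the names (SceneObject, snap_on_top) are illustrative, not from the patent.

```python
# Sketch: refine object positions using scene-graph relationships and alignment rules.
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    position: list   # [x, y, z] centre, metres
    size: list       # [dx, dy, dz] extents, metres

def snap_on_top(child: SceneObject, parent: SceneObject) -> None:
    """Alignment rule: the child's bottom face should rest on the parent's top face."""
    parent_top = parent.position[2] + parent.size[2] / 2
    child.position[2] = parent_top + child.size[2] / 2

# Scene graph: nodes are objects, edges are (relation, child, parent) triples.
table = SceneObject("table", [0.0, 0.0, 0.40], [1.2, 0.8, 0.80])
cup = SceneObject("cup", [0.1, 0.2, 0.83], [0.08, 0.08, 0.10])  # slightly sunk into the table
edges = [("on_top_of", cup, table)]

# Refinement pass: apply the alignment rule associated with each relationship.
for relation, child, parent in edges:
    if relation == "on_top_of":
        snap_on_top(child, parent)

print(cup.position)  # the cup's z is refined to sit exactly on the table surface (0.85)
```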
-
Publication Number: US11710283B2
Publication Date: 2023-07-25
Application Number: US17508010
Application Date: 2021-10-22
Applicant: Apple Inc.
Inventor: Eshan Verma , Daniel Ulbricht , Angela Blechschmidt , Mohammad Haris Baig , Chen-Yu Lee , Tanmay Batra
IPC: G06T19/00
CPC classification number: G06T19/006
Abstract: Various implementations disclosed herein include devices, systems, and methods that enable faster and more efficient real-time physical object recognition, information retrieval, and updating of a CGR environment. In some implementations, the CGR environment is provided at a first device based on a classification of the physical object, image or video data including the physical object is transmitted by the first device to a second device, and the CGR environment is updated by the first device based on a response associated with the physical object received from the second device.
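The abstract outlines a two-device split: a fast on-device classification drives the initial CGR content, while a second device returns richer information that updates it. The sketch below illustrates that flow only; classify_object, lookup_details, and render_overlay are placeholder names, and the real system would involve an actual network round trip rather than a local call.

```python
# Sketch of the two-device flow with placeholder classifier/lookup functions.

def classify_object(image):
    # First device: fast, coarse on-device classification (placeholder).
    return "coffee_machine"

def lookup_details(image, coarse_label):
    # Second device: slower, richer recognition and information retrieval (placeholder).
    return {"label": coarse_label, "model": "ExampleBrew 3000"}

def render_overlay(info):
    print(f"CGR overlay: {info['label']} ({info['model']})")

# The first device shows an initial CGR environment from the coarse classification,
# transmits the image data to the second device, then updates with the response.
frame = object()                              # stand-in for captured image/video data
label = classify_object(frame)
render_overlay({"label": label, "model": "pending"})
response = lookup_details(frame, label)       # would be a network round trip in practice
render_overlay(response)
```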
-
Publication Number: US11412350B2
Publication Date: 2022-08-09
Application Number: US16576573
Application Date: 2019-09-19
Applicant: Apple Inc.
Inventor: Robert William Mayor , Isaac T. Miller , Adam S. Howell , Vinay R. Majjigi , Oliver Ruepp , Daniel Ulbricht , Oleg Naroditsky , Christian Lipski , Sean P. Cier , Hyojoon Bae , Saurabh Godha , Patrick J. Coleman
Abstract: Location mapping and navigation user interfaces may be generated and presented via mobile computing devices. A mobile device may detect its location and orientation using internal systems, and may capture image data using a device camera. The mobile device also may retrieve map information from a map server corresponding to the current location of the device. Using the image data captured at the device, the current location data, and the corresponding local map information, the mobile device may determine or update a current orientation reading for the device. Location errors and updated location data also may be determined for the device, and a map user interface may be generated and displayed on the mobile device using the updated device orientation and/or location data.
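One way to picture the orientation update is as a correction of the device's compass heading using a heading observed by registering the camera image against local map data. The sketch below assumes exactly that reduction; observe_heading_from_image is a placeholder, and the fusion gain alpha is an illustrative choice, not a value from the patent.

```python
# Sketch: fuse a compass heading with a heading observed from image-to-map registration.
def wrap(angle_deg):
    """Wrap an angle to the range (-180, 180] degrees."""
    return (angle_deg + 180.0) % 360.0 - 180.0

def observe_heading_from_image(image, local_map):
    # Placeholder: in practice, match image features (e.g. building facades) against
    # the retrieved map data to obtain an absolute heading estimate.
    return 92.0

def refine_heading(compass_heading, image, local_map, alpha=0.7):
    observed = observe_heading_from_image(image, local_map)
    error = wrap(observed - compass_heading)        # heading error implied by the image
    return wrap(compass_heading + alpha * error)    # blend toward the observed heading

print(refine_heading(compass_heading=110.0, image=None, local_map=None))  # 97.4 degrees
```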
-
Publication Number: US11343589B2
Publication Date: 2022-05-24
Application Number: US17275038
Application Date: 2019-09-24
Applicant: Apple Inc.
Inventor: Ian M. Richter , Daniel Ulbricht , Eshan Verma
IPC: G06V20/20 , H04N21/81 , G06T19/00 , H04N21/845 , H04N21/44
Abstract: In one implementation, consumption of media content (such as video, audio, or text) is supplemented with an immersive synthesized reality (SR) map based on the media content. In various implementations described herein, the SR map includes a plurality of SR environment representations which, when selected by a user, cause display of a corresponding SR environment.
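The SR map can be thought of as a collection of selectable environment representations, where selecting one causes the corresponding SR environment to be displayed. The sketch below shows one possible data layout; the names (SRMap, SREnvironment, SRMapEntry) are illustrative assumptions, not structures defined by the patent.

```python
# Sketch: an SR map as a collection of selectable SR environment representations.
from dataclasses import dataclass, field

@dataclass
class SREnvironment:
    title: str
    assets: list            # scene description / 3D assets to load when displayed

@dataclass
class SRMapEntry:
    thumbnail: str          # the representation the user sees on the map
    environment: SREnvironment

@dataclass
class SRMap:
    entries: dict = field(default_factory=dict)   # keyed by location on the map

    def select(self, key):
        # User selection of a representation causes display of its SR environment.
        env = self.entries[key].environment
        print(f"Displaying SR environment: {env.title}")
        return env

sr_map = SRMap({"castle": SRMapEntry("castle.png", SREnvironment("Castle courtyard", []))})
sr_map.select("castle")
```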
-
Publication Number: US11295529B2
Publication Date: 2022-04-05
Application Number: US17149949
Application Date: 2021-01-15
Applicant: Apple Inc.
Inventor: Daniel Ulbricht , Amit Kumar K C , Angela Blechschmidt , Chen-Yu Lee , Eshan Verma , Mohammad Haris Baig , Tanmay Batra
IPC: G06T19/00 , G06F3/01 , A63F13/825 , G02B27/01 , A63F13/212 , G06F3/03
Abstract: In one implementation, a method of including a person in a CGR experience or excluding the person from the CGR experience is performed by a device including one or more processors, non-transitory memory, and a scene camera. The method includes, while presenting a CGR experience, capturing an image of a scene; detecting, in the image of the scene, a person; and determining an identity of the person. The method includes determining, based on the identity of the person, whether to include the person in the CGR experience or exclude the person from the CGR experience. The method includes presenting the CGR experience based on the determination.
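A compact way to read the decision step is as a policy applied to a recognized identity. The sketch below assumes a face-recognition-style identity check and a simple allow-list policy; detect_people, identify, and ALLOWED are placeholders, and the patent does not specify how inclusion or exclusion is rendered.

```python
# Sketch: decide per detected person whether to include them in the CGR experience.
ALLOWED = {"alice", "bob"}     # illustrative policy: include known contacts only

def detect_people(image):
    return [{"bbox": (40, 60, 120, 240)}]     # placeholder person detection

def identify(image, detection):
    return "alice"                            # placeholder identity determination

def present_cgr(image):
    for detection in detect_people(image):
        person_id = identify(image, detection)
        if person_id in ALLOWED:
            print(f"{person_id}: included in the CGR experience")
        else:
            print(f"{person_id}: excluded (e.g. occluded or replaced by virtual content)")

present_cgr(image=None)
```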
-
Publication Number: US11132546B2
Publication Date: 2021-09-28
Application Number: US17032213
Application Date: 2020-09-25
Applicant: Apple Inc.
Inventor: Daniel Ulbricht , Angela Blechschmidt , Mohammad Haris Baig , Tanmay Batra , Eshan Verma , Amit Kumar KC
Abstract: In one implementation, a method of generating a plane hypothesis is performed by a head-mounted device (HMD) including one or more processors, non-transitory memory, and a scene camera. The method includes obtaining an image of a scene including a plurality of pixels. The method includes obtaining a point cloud based on the image of the scene and generating an object classification set based on the image of the scene, each element of the object classification set including a respective plurality of pixels classified as a respective object in the scene. The method includes generating a plane hypothesis based on the point cloud and the object classification set.
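One way such a plane hypothesis can be formed is to gather the 3D points belonging to each classified region and fit a plane to them. The sketch below assumes a per-pixel point map and boolean object masks, and uses a least-squares (SVD) plane fit; these representations and the fitting choice are illustrative, not specified by the patent.

```python
# Sketch: per-object plane hypotheses from a point cloud and an object classification set.
import numpy as np

def fit_plane(points):
    """Return (normal, d) of the best-fit plane n.x + d = 0 for an Nx3 point array."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                       # direction of least variance
    return normal, -normal @ centroid

def plane_hypotheses(point_map, object_masks):
    """point_map: HxWx3 3D points per pixel; object_masks: {label: HxW boolean mask}."""
    hypotheses = {}
    for label, mask in object_masks.items():
        pts = point_map[mask]
        if len(pts) >= 3:
            hypotheses[label] = fit_plane(pts)
    return hypotheses

# Toy example: a flat "floor" region at z = 0.
h, w = 4, 4
xs, ys = np.meshgrid(np.arange(w), np.arange(h))
point_map = np.dstack([xs, ys, np.zeros((h, w))]).astype(float)
masks = {"floor": np.ones((h, w), dtype=bool)}
print(plane_hypotheses(point_map, masks))   # normal ~ ±(0, 0, 1), d ~ 0
```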
-
Publication Number: US11100720B2
Publication Date: 2021-08-24
Application Number: US17031676
Application Date: 2020-09-24
Applicant: Apple Inc.
Inventor: Daniel Ulbricht , Amit Kumar K C , Angela Blechschmidt , Chen-Yu Lee , Eshan Verma , Mohammad Haris Baig , Tanmay Batra
Abstract: In one implementation, a method of generating a depth map is performed by a device including one or more processors, non-transitory memory, and a scene camera. The method includes generating, based on a first image and a second image, a first depth map of the second image. The method includes generating, based on the first depth map of the second image and pixel values of the second image, a second depth map of the second image.
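The two-stage structure, a coarse depth map from the image pair followed by a refinement guided by the image's own pixel values, can be illustrated as below. The stereo block matcher and the tiny joint-bilateral-style smoothing are illustrative stand-ins, not the method the patent claims.

```python
# Sketch: coarse depth from two images, then image-guided refinement of that depth map.
import cv2
import numpy as np

def coarse_depth(left_gray, right_gray):
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0
    return disparity   # proportional to inverse depth; treated as the first depth map here

def refine_with_image(depth, guide_gray, radius=2, sigma_i=10.0):
    """Smooth the depth map, but only across pixels whose guide intensities are similar."""
    guide = guide_gray.astype(np.float32)
    out = np.zeros_like(depth)
    weights = np.zeros_like(depth)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted_d = np.roll(np.roll(depth, dy, axis=0), dx, axis=1)
            shifted_g = np.roll(np.roll(guide, dy, axis=0), dx, axis=1)
            w_i = np.exp(-((guide - shifted_g) ** 2) / (2 * sigma_i ** 2))
            out += w_i * shifted_d
            weights += w_i
    return out / np.maximum(weights, 1e-6)

# usage (same-size 8-bit grayscale images):
#   d1 = coarse_depth(first_image, second_image)
#   d2 = refine_with_image(d1, second_image)
```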
-
Publication Number: US10762386B2
Publication Date: 2020-09-01
Application Number: US16530694
Application Date: 2019-08-02
Applicant: Apple Inc.
Inventor: Daniel Ulbricht , Thomas Olszamowski
Abstract: The invention is related to a method of determining a similarity transformation between first coordinates and second coordinates of 3D features, comprising providing a first plurality of 3D features having first coordinates in a first coordinate system which is associated with a first geometrical model of a first real object, wherein the first plurality of 3D features describes physical 3D features of the first real object, providing a second coordinate system, providing image information associated with a plurality of images captured by at least one camera, for each respective 3D feature of at least part of the first plurality of 3D features, wherein the respective 3D feature is captured by at least two of the plurality of images, determining camera poses of the at least one camera in the second coordinate system while the at least two of the plurality of images are captured, determining for the respective 3D feature a second coordinate in the second coordinate system according to the at least two of the plurality of images and the camera poses, and the method further comprising determining a similarity transformation between the first coordinates and the second coordinates of the at least part of the first plurality of 3D features, wherein the similarity transformation includes at least one translation, at least one rotation, at least one scale and/or their combinations in 3D space.
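One standard estimator for such a similarity transformation (rotation, translation, and uniform scale between corresponding 3D point sets) is Umeyama's closed-form method; the patent does not name a particular estimator, so the sketch below is an illustrative choice rather than the claimed procedure.

```python
# Sketch: closed-form 3D similarity transform between corresponding point sets (Umeyama).
import numpy as np

def similarity_transform(first_pts, second_pts):
    """Return (s, R, t) such that second ≈ s * R @ first + t; both inputs are Nx3."""
    X, Y = np.asarray(first_pts, float), np.asarray(second_pts, float)
    mu_x, mu_y = X.mean(axis=0), Y.mean(axis=0)
    Xc, Yc = X - mu_x, Y - mu_y

    cov = Yc.T @ Xc / len(X)                 # cross-covariance of the two point sets
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                       # guard against a reflection

    R = U @ S @ Vt
    var_x = (Xc ** 2).sum() / len(X)         # mean squared deviation of the first set
    s = np.trace(np.diag(D) @ S) / var_x     # uniform scale
    t = mu_y - s * R @ mu_x
    return s, R, t

# Sanity check against a known transform (30 degree rotation, scale 2, translation).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
Y = 2.0 * X @ R_true.T + np.array([1.0, -2.0, 0.5])
s, R, t = similarity_transform(X, Y)
print(round(s, 3))   # ≈ 2.0
```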
-
Publication Number: US20200226383A1
Publication Date: 2020-07-16
Application Number: US16833364
Application Date: 2020-03-27
Applicant: Apple Inc.
Inventor: Peter Meier , Daniel Ulbricht
Abstract: In an exemplary process for providing content in an augmented reality environment, image data corresponding to a physical environment are obtained. Based on the image data, one or more of a plurality of predefined entities in the physical environment are identified using classifiers corresponding to the predefined entities. Based on one or more of the identified predefined entities, a geometric layout of the physical environment is determined. Based on the geometric layout, an area corresponding to a particular entity is determined. The particular entity corresponds to one or more of the identified predefined entities. Based on the area corresponding to the particular entity, the particular entity in the physical environment is identified using classifiers corresponding to the determined area. Based on the identified particular entity, a type of the physical environment is determined. Based on the type of the physical environment, virtual-reality objects corresponding to a representation of the physical environment are displayed.
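The abstract describes a staged pipeline: coarse entity classifiers, then a geometric layout, then area-specific classifiers, then an environment type that drives what content is displayed. The sketch below wires those stages together with placeholder classifiers; every function name and return value here is an illustrative assumption, not an API from the patent.

```python
# Sketch: staged classification pipeline from coarse entities to displayed content.

def classify_coarse_entities(image):
    return ["wall", "wall", "floor", "countertop"]          # placeholder coarse classifiers

def estimate_layout(entities):
    return {"room_extent": (4.0, 3.0, 2.5), "areas": {"countertop": (1.0, 0.6)}}

def classify_in_area(image, area_name):
    # Area-specific classifier chosen according to the determined area (placeholder).
    return "espresso_machine" if area_name == "countertop" else "unknown"

def infer_environment_type(entities, particular_entity):
    return "kitchen" if particular_entity == "espresso_machine" else "generic room"

def content_for(environment_type):
    return {"kitchen": ["virtual recipe card", "virtual timer"]}.get(environment_type, [])

image = object()                                   # stand-in for captured image data
entities = classify_coarse_entities(image)
layout = estimate_layout(entities)
particular = classify_in_area(image, "countertop")
env_type = infer_environment_type(entities, particular)
print(env_type, content_for(env_type))             # kitchen ['virtual recipe card', ...]
```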