-
Publication Number: US10977798B2
Publication Date: 2021-04-13
Application Number: US16545158
Application Date: 2019-08-20
Applicant: Apple Inc.
Inventor: Amit Kumar Kc , Daniel Ulbricht
Abstract: In some implementations, a neural network is trained to directly predict thin boundaries of objects in images based on image characteristics. A neural network can be trained to predict thin boundaries of objects without requiring subsequent computations to reduce the thickness of the boundary prediction. Instead, the network is trained to make the predicted boundaries thin by effectively suppressing non-maximum values in normal directions along what might otherwise be a thick predicted boundary. To do so, the neural network can be trained to determine normal directions and suppress non-maximum values based on those determined normal directions.
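The suppression operation described here can be pictured with classical directional non-maximum suppression: a pixel survives only if it is the maximum of the boundary map along its own normal direction. A minimal NumPy sketch follows; in the patent this behavior is learned by the network itself rather than applied as a separate post-processing pass, and the function names, sampling scheme, and step size below are illustrative assumptions.

```python
import numpy as np

def suppress_non_maxima(boundary_prob, normals, step=1.0):
    """Keep only pixels that are local maxima of `boundary_prob`
    along their predicted normal direction `normals` (radians)."""
    h, w = boundary_prob.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    dx, dy = np.cos(normals) * step, np.sin(normals) * step

    def sample(y, x):
        # nearest-neighbour sampling with border clamping
        yi = np.clip(np.rint(y), 0, h - 1).astype(int)
        xi = np.clip(np.rint(x), 0, w - 1).astype(int)
        return boundary_prob[yi, xi]

    ahead = sample(ys + dy, xs + dx)    # neighbour one step along the normal
    behind = sample(ys - dy, xs - dx)   # neighbour one step against the normal
    keep = (boundary_prob >= ahead) & (boundary_prob >= behind)
    return np.where(keep, boundary_prob, 0.0)
```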
-
Publication Number: US20210073429A1
Publication Date: 2021-03-11
Application Number: US16984406
Application Date: 2020-08-04
Applicant: Apple Inc.
Inventor: Angela Blechschmidt , Daniel Ulbricht , Omar Elafifi
Abstract: Implementations disclosed herein provide systems and methods that determine relationships between objects based on an original semantic mesh of vertices and faces that represent the 3D geometry of a physical environment. Such an original semantic mesh may be generated and used to provide input to a machine learning model that estimates relationships between the objects in the physical environment. For example, the machine learning model may output a graph of nodes and edges indicating that a vase is on top of a table or that a particular instance of a vase, V1, is on top of a particular instance of a table, T1.
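A rough sketch of the kind of model described: per-object features are pooled from the semantic mesh and a small network scores pairwise relations such as "on top of". This is a minimal PyTorch illustration under assumed feature sizes and pooling; none of the class names, dimensions, or relation types come from the filing.

```python
import torch
import torch.nn as nn

class RelationHead(nn.Module):
    """Scores a directed relation (e.g. 'on top of') between two object
    embeddings pooled from a semantic mesh. Sizes are illustrative."""
    def __init__(self, obj_dim=32, num_relations=4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * obj_dim, 64), nn.ReLU(),
            nn.Linear(64, num_relations))

    def forward(self, subj, obj):
        return self.mlp(torch.cat([subj, obj], dim=-1))

def mesh_object_embedding(vertices):
    """Pool a crude per-object feature from its mesh vertices:
    centroid and axis-aligned extent, zero-padded to obj_dim."""
    v = torch.as_tensor(vertices, dtype=torch.float32)
    feats = torch.cat([v.mean(0), v.max(0).values - v.min(0).values])
    return torch.nn.functional.pad(feats, (0, 32 - feats.numel()))

# Hypothetical usage: relation logits for (vase V1, table T1).
vase = mesh_object_embedding([[0.0, 0.9, 0.0], [0.1, 1.1, 0.1]])
table = mesh_object_embedding([[-0.5, 0.0, -0.5], [0.5, 0.9, 0.5]])
logits = RelationHead()(vase, table)   # one score per relation type
```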
-
Publication Number: US10891922B1
Publication Date: 2021-01-12
Application Number: US16508467
Application Date: 2019-07-11
Applicant: Apple Inc.
Inventor: Tanmay Batra , Daniel Ulbricht
Abstract: In one implementation, a method is disclosed for controlling attention diversions while presenting computer-generated reality (CGR) environments on an electronic device. The method includes presenting content representing a view of a CGR environment on a display. While presenting the content, an object is detected in a physical environment in which the electronic device is located using an image sensor of the electronic device. The method further includes determining whether the object exhibits a characteristic indicative of attention-seeking behavior. In accordance with a determination that the object exhibits the characteristic, a visual cue corresponding to the object is presented on a first portion of the display without modifying the presentation of the content on a second portion of the display.
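One way to picture the decision logic: a detected object's characteristics are tested against an attention-seeking heuristic, and only a confined region of the display carries the cue while the rest of the CGR content is left unmodified. The heuristic, the detection fields, and the region coordinates in this sketch are hypothetical, not taken from the claims.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str            # e.g. "person", "dog"
    facing_user: bool     # illustrative attention cues
    waving: bool
    speaking: bool

def exhibits_attention_seeking(d: Detection) -> bool:
    """Hypothetical heuristic for 'attention-seeking behavior'."""
    return d.facing_user and (d.waving or d.speaking)

def compose_frame(cgr_frame, detection, cue_region=(0.7, 0.0, 1.0, 0.3)):
    """Return the unmodified CGR frame plus a cue descriptor confined to
    `cue_region` (normalized x0, y0, x1, y1), leaving the rest untouched."""
    if exhibits_attention_seeking(detection):
        return cgr_frame, {"cue_for": detection.label, "region": cue_region}
    return cgr_frame, None
```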
-
Publication Number: US10832487B1
Publication Date: 2020-11-10
Application Number: US16580172
Application Date: 2019-09-24
Applicant: Apple Inc.
Inventor: Daniel Ulbricht , Amit Kumar K C , Angela Blechschmidt , Chen-Yu Lee , Eshan Verma , Mohammad Haris Baig , Tanmay Batra
Abstract: In one implementation, a method of generating a depth map is performed by a device including one or more processors, non-transitory memory, and a scene camera. The method includes generating, based on a first image and a second image, a first depth map of the second image. The method includes generating, based on the first depth map of the second image and pixel values of the second image, a second depth map of the second image.
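The two-stage structure can be sketched as follows: a coarse depth map comes from the first and second images (the first stage, not shown here), and a second stage re-estimates depth using the second image's pixel values. Below is a joint-bilateral-style refinement in NumPy as a classical stand-in for that second stage; the patent does not specify this particular filter, and the parameters are illustrative.

```python
import numpy as np

def refine_depth(coarse_depth, image, radius=2, sigma_s=2.0, sigma_c=0.1):
    """Second-stage depth: re-weight the coarse depth with spatial and
    pixel-value similarity taken from `image` (assumed grayscale,
    same shape as `coarse_depth`)."""
    h, w = coarse_depth.shape
    out = np.zeros_like(coarse_depth)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            patch_d = coarse_depth[y0:y1, x0:x1]
            patch_i = image[y0:y1, x0:x1]
            yy, xx = np.mgrid[y0:y1, x0:x1]
            w_s = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
            w_c = np.exp(-((patch_i - image[y, x]) ** 2) / (2 * sigma_c ** 2))
            weights = w_s * w_c
            out[y, x] = (weights * patch_d).sum() / weights.sum()
    return out
```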
-
Publication Number: US12112519B1
Publication Date: 2024-10-08
Application Number: US17678090
Application Date: 2022-02-23
Applicant: Apple Inc.
Inventor: Alexander S. Polichroniadis , Angela Blechschmidt , Daniel Ulbricht
IPC: G06V10/74 , G06T17/00 , G06V10/762 , G06V10/80 , G06V20/70
CPC classification number: G06V10/761 , G06T17/00 , G06V10/7635 , G06V10/80 , G06V20/70 , G06T2210/61
Abstract: An exemplary process obtains sensor data for a physical environment, generates a local scene graph for the physical environment based on the sensor data, wherein the local scene graph represents a set of objects and relationships between the objects, matches the local scene graph with a principal scene graph of a set of principal scene graphs, and executes one or more scripted actions involving the objects based on a narrative associated with the matched principal scene graph. In some implementations, the set of principal scene graphs is generated by generating local scene graphs for a plurality of environments, and generating individual scene graphs each representative of local scene graphs.
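A toy version of the matching step: scene graphs as (subject, relation, object) triples, a simple overlap score in place of whatever graph-matching metric is actually used, and scripted actions drawn from the narrative tied to the best-matching principal graph. The Jaccard metric, the data layout, and all names are assumptions for illustration.

```python
def graph_similarity(local, principal):
    """Jaccard overlap of (subject, relation, object) triples; a simple
    stand-in for the process's graph-matching metric."""
    local, principal = set(local), set(principal)
    return len(local & principal) / max(1, len(local | principal))

def match_and_execute(local_graph, principal_graphs, narratives, execute):
    """Pick the closest principal scene graph, then run the scripted
    actions of its associated narrative."""
    best = max(principal_graphs, key=lambda name:
               graph_similarity(local_graph, principal_graphs[name]))
    for action in narratives.get(best, []):
        execute(action)            # e.g. animate or annotate an object
    return best

# Hypothetical usage
local = [("vase", "on", "table"), ("lamp", "next_to", "vase")]
principals = {"living_room": [("vase", "on", "table"), ("sofa", "faces", "tv")],
              "kitchen": [("pot", "on", "stove")]}
narratives = {"living_room": ["highlight vase", "play ambient audio"]}
match_and_execute(local, principals, narratives, execute=print)
```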
-
Publication Number: US11943679B2
Publication Date: 2024-03-26
Application Number: US17861167
Application Date: 2022-07-08
Applicant: Apple Inc.
Inventor: Robert William Mayor , Isaac T. Miller , Adam S. Howell , Vinay R. Majjigi , Oliver Ruepp , Daniel Ulbricht , Oleg Naroditsky , Christian Lipski , Sean P. Cier , Hyojoon Bae , Saurabh Godha , Patrick J. Coleman
IPC: H04W4/024 , G01C21/36 , G06T7/73 , G06V10/44 , G06V10/80 , G06V20/10 , H04M1/724 , H04W4/02 , H04W64/00
CPC classification number: H04W4/024 , G01C21/3647 , G06T7/74 , G06V10/44 , G06V10/806 , G06V20/10 , H04M1/724 , H04W4/026 , G06T2207/30244 , H04M2250/52 , H04W64/006
Abstract: Location mapping and navigation user interfaces may be generated and presented via mobile computing devices. A mobile device may detect its location and orientation using internal systems, and may capture image data using a device camera. The mobile device also may retrieve map information from a map server corresponding to the current location of the device. Using the image data captured at the device, the current location data, and the corresponding local map information, the mobile device may determine or update a current orientation reading for the device. Location errors and updated location data also may be determined for the device, and a map user interface may be generated and displayed on the mobile device using the updated device orientation and/or location data.
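The orientation update can be pictured as fusing two heading estimates: one from the device's internal sensors and one derived from aligning the captured camera image with the local map data. The sketch below blends them on the unit circle so that angles near the 0°/360° wrap average correctly; the weighting scheme and function names are illustrative, not the patented method.

```python
import math

def fuse_heading(imu_heading_deg, visual_heading_deg, visual_weight=0.8):
    """Blend the device's internal heading with a heading derived from
    matching camera imagery against map data. Fusing on the unit circle
    means 359 deg and 1 deg average to ~0 deg, not 180 deg."""
    w = visual_weight
    x = (1 - w) * math.cos(math.radians(imu_heading_deg)) + \
        w * math.cos(math.radians(visual_heading_deg))
    y = (1 - w) * math.sin(math.radians(imu_heading_deg)) + \
        w * math.sin(math.radians(visual_heading_deg))
    return math.degrees(math.atan2(y, x)) % 360.0

# e.g. compass says 350 deg, image-vs-map alignment says 10 deg
print(fuse_heading(350.0, 10.0))   # ~6 deg, biased toward the visual estimate
```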
-
Publication Number: US20240013487A1
Publication Date: 2024-01-11
Application Number: US17862301
Application Date: 2022-07-11
Applicant: Apple Inc.
Inventor: Ian M. Richter , Daniel Ulbricht , Jean-Daniel E. Nahmias , Omar Elafifi , Peter Meier
Abstract: In one implementation, a method includes: identifying a plurality of plot-effectuators and a plurality of environmental elements within a scene associated with a portion of video content; determining one or more spatial relationships between the plurality of plot-effectuators and the plurality of environmental elements within the scene; synthesizing a representation of the scene based at least in part on the one or more spatial relationships; extracting a plurality of action sequences corresponding to the plurality of plot-effectuators based at least in part on the portion of the video content; and generating a corresponding synthesized reality (SR) reconstruction of the scene by driving a plurality of digital assets, associated with the plurality of plot-effectuators, within the representation of the scene according to the plurality of action sequences.
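A minimal data-model sketch of the reconstruction step: placements and spatial relations form the scene representation, and each plot-effectuator's extracted action sequence drives its digital asset frame by frame. Everything here (class names, the hold-last-action rule, the frame format) is an assumption for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class SceneRepresentation:
    """Placements of plot-effectuators and environmental elements, plus
    spatial relations extracted from the video scene."""
    placements: dict = field(default_factory=dict)   # name -> (x, y, z)
    relations: list = field(default_factory=list)    # (a, "left_of", b), ...

def drive_assets(scene, action_sequences, steps=3):
    """Step each digital asset through its extracted action sequence,
    yielding per-frame poses for the SR reconstruction."""
    for t in range(steps):
        frame = {}
        for name, seq in action_sequences.items():
            action = seq[min(t, len(seq) - 1)]       # hold the last action
            frame[name] = {"pos": scene.placements[name], "action": action}
        yield frame

scene = SceneRepresentation(placements={"hero": (0, 0, 0), "door": (2, 0, 0)},
                            relations=[("hero", "faces", "door")])
for frame in drive_assets(scene, {"hero": ["stand", "walk", "open_door"]}):
    print(frame)
```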
-
Publication Number: US20230351644A1
Publication Date: 2023-11-02
Application Number: US18215311
Application Date: 2023-06-28
Applicant: Apple Inc.
Inventor: Ian M. Richter , Daniel Ulbricht , Jean-Daniel E. Nahmias , Omar Elafifi , Peter Meier
IPC: G06T11/00
CPC classification number: G06T11/00 , G06F3/04883
Abstract: In one implementation, a method includes: obtaining a user input to view SR content associated with video content; if the video content includes a first scene when the user input was detected: obtaining first SR content for a first time period of the video content associated with the first scene; obtaining a task associated with the first scene; and causing presentation of the first SR content and a first indication of the task associated with the first scene; and if the video content includes a second scene when the user input was detected: obtaining second SR content for a second time period of the video content associated with the second scene; obtaining a task associated with the second scene; and causing presentation of the second SR content and a second indication of the task associated with the second scene.
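The branching on "which scene was playing when the input arrived" can be sketched as a timeline lookup followed by presentation of that scene's SR content and task indication. The timeline format, dictionaries, and presenter callback below are hypothetical.

```python
def scene_at(video_timeline, timestamp):
    """Return the scene whose time span contains `timestamp`.
    `video_timeline` is a list of (start, end, scene_id) tuples."""
    for start, end, scene_id in video_timeline:
        if start <= timestamp < end:
            return scene_id
    return None

def present_sr_for_input(video_timeline, timestamp, sr_content, tasks, present):
    """On user input, present the SR content and the task indication
    associated with whichever scene is currently playing."""
    scene_id = scene_at(video_timeline, timestamp)
    if scene_id is not None:
        present(sr_content[scene_id], tasks[scene_id])

timeline = [(0.0, 90.0, "scene_1"), (90.0, 200.0, "scene_2")]
sr = {"scene_1": "bridge_model", "scene_2": "engine_room_model"}
tasks = {"scene_1": "find the hidden key", "scene_2": "repair the console"}
present_sr_for_input(timeline, 120.0, sr, tasks,
                     present=lambda c, t: print("show", c, "with task:", t))
```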
-
Publication Number: US20230206623A1
Publication Date: 2023-06-29
Application Number: US18111541
Application Date: 2023-02-18
Applicant: Apple Inc.
Inventor: Daniel Ulbricht , Angela Blechschmidt , Mohammad Haris Baig , Tanmay Batra , Eshan Verma , Amit Kumar KC
IPC: G06V20/10 , G06T7/70 , G06V30/262 , G06T19/00 , G06F18/24
CPC classification number: G06V20/10 , G06T7/70 , G06V30/274 , G06T19/006 , G06F18/24 , G06T2200/24 , G06T2207/10028 , G06T2207/20084
Abstract: In one implementation, a method of generating a plane hypothesis is performed by a device including one or more processors, non-transitory memory, and a scene camera. The method includes obtaining an image of a scene including a plurality of pixels. The method includes obtaining a plurality of points of a point cloud based on the image of the scene. The method includes obtaining an object classification set based on the image of the scene. Each element of the object classification set includes a plurality of pixels respectively associated with a corresponding object in the scene. The method includes detecting a plane within the scene by identifying a subset of the plurality of points of the point cloud that correspond to a particular element of the object classification set.
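The plane detection can be illustrated with a classical fit: select the point-cloud points whose source pixels belong to one element of the object classification set (e.g. the pixels labeled "table"), then fit a least-squares plane through them via SVD. The pixel-correspondence format, the mask layout, and the minimum-point threshold are assumptions.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3D points: returns (normal, offset)
    with normal . p + offset ~= 0 for points p on the plane."""
    pts = np.asarray(points, dtype=np.float64)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]                      # direction of least variance
    return normal, -float(normal @ centroid)

def plane_hypothesis(point_cloud, point_pixels, object_mask):
    """Detect a plane from the subset of point-cloud points whose source
    pixels fall inside one element of the object classification set.
    `point_pixels` gives the (u, v) image coordinate of each 3D point;
    `object_mask` is a boolean image for one classified object."""
    selected = [p for p, (u, v) in zip(point_cloud, point_pixels)
                if object_mask[v, u]]
    return fit_plane(selected) if len(selected) >= 3 else None
```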
-
Publication Number: US11647260B2
Publication Date: 2023-05-09
Application Number: US17729462
Application Date: 2022-04-26
Applicant: Apple Inc.
Inventor: Ian M. Richter , Daniel Ulbricht , Eshan Verma
IPC: G06V20/20 , H04N21/81 , G06T19/00 , H04N21/44 , H04N21/845
CPC classification number: H04N21/8133 , G06T19/006 , G06V20/20 , H04N21/44008 , H04N21/8456
Abstract: In one implementation, consumption of media content (such as video, audio, or text) is supplemented with an immersive synthesized reality (SR) map based on the media content. In various implementations described herein, the SR map includes a plurality of SR environment representations which, when selected by a user, cause display of a corresponding SR environment.
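A small data-model sketch of the SR map: selectable environment representations keyed by an identifier, where selecting one causes display of the corresponding SR environment. The class and method names are illustrative only, not an API from the filing.

```python
class SRMap:
    """Illustrative data model: an SR map holding selectable environment
    representations derived from a piece of media content."""
    def __init__(self):
        self.environments = {}    # representation_id -> SR environment payload

    def add(self, representation_id, environment):
        self.environments[representation_id] = environment

    def select(self, representation_id, display):
        """Selecting a representation causes display of its SR environment."""
        display(self.environments[representation_id])

sr_map = SRMap()
sr_map.add("throne_room", {"assets": ["throne", "banners"]})
sr_map.select("throne_room", display=print)
```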
-