-
Publication No.: US11922588B2
Publication Date: 2024-03-05
Application No.: US17856558
Filing Date: 2022-07-01
Applicant: APPLE INC.
Inventor: Nathan L. Fillhardt , Syed Mohsin Hasan , Adrian P. Lindberg
CPC classification number: G06T19/006 , G06F16/29 , G06T7/74 , G06T15/00 , G06T17/05 , G06T19/003 , H04L67/131 , H04W4/02 , G06T2200/24 , G06T2207/30244 , G06T2219/024 , H04L67/01
Abstract: To reduce the amount of bandwidth needed to share 3D map images between mobile devices, according to some embodiments, a user's mobile device (i.e., a host device) may identify its origin in a 3D map and a current virtual camera position relative to the origin based on the physical position of the mobile device. The mobile device may send both the origin and the virtual camera position to another mobile device (i.e., a client device) for use in rendering a corresponding image. Separately, the client device may download the 3D map images from a server, e.g., in preparation for a meeting. In this manner, the host device may send the origin to the client device once, as well as send a data stream of the current virtual camera position for use in accessing the corresponding 3D map images at the client device.
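The abstract above describes streaming poses rather than images: the host transmits a shared map origin once and then streams only compact virtual-camera poses, while the client renders the corresponding views from map imagery it has downloaded separately. The Python sketch below illustrates that flow; the class names (MapOrigin, CameraPose, HostSession, ClientSession), their fields, and the transport callback are illustrative assumptions, not details from the patent.

```python
# Illustrative sketch only: names and message format are assumptions.
from dataclasses import dataclass

@dataclass
class MapOrigin:
    latitude: float      # geographic anchor of the shared 3D map session
    longitude: float
    altitude: float

@dataclass
class CameraPose:
    # Virtual camera position/orientation relative to MapOrigin, derived
    # from the host device's physical position.
    x: float
    y: float
    z: float
    yaw: float
    pitch: float

class HostSession:
    """Sends the origin once, then streams only compact camera poses."""
    def __init__(self, send):
        self.send = send            # transport callback, e.g. a network socket
        self.origin_sent = False

    def update(self, origin: MapOrigin, pose: CameraPose):
        if not self.origin_sent:    # the origin is transmitted a single time
            self.send(("origin", origin))
            self.origin_sent = True
        self.send(("pose", pose))   # poses are streamed as the device moves

class ClientSession:
    """Renders from locally downloaded map data; never receives image data."""
    def __init__(self, render_view):
        self.origin = None
        self.render_view = render_view   # looks up pre-downloaded 3D map imagery

    def receive(self, message):
        kind, payload = message
        if kind == "origin":
            self.origin = payload
        elif kind == "pose" and self.origin is not None:
            # Resolve the streamed pose against the shared origin and render
            # the corresponding view from the client's own map data.
            self.render_view(self.origin, payload)
```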
-
Publication No.: US20230319296A1
Publication Date: 2023-10-05
Application No.: US17889277
Filing Date: 2022-08-16
Applicant: Apple Inc.
Inventor: Ranjit Desai , Adrian P. Lindberg , Kaushik Raghunath , Vinay Palakkode
Abstract: A method is provided that includes receiving content data captured by a sensor and receiving a context signal representing a user context. The received content data is scaled using a trained model, wherein the context signal is an input to the trained model, and the scaled content data is provided for presentation to a user.
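As a rough illustration of the claimed flow, the sketch below feeds a user-context signal into a stand-in "trained model" that predicts a scale factor, then resamples the sensor content accordingly. The function names, the context encoding, and the nearest-neighbour resampling are assumptions made for this example; the patent abstract does not specify them.

```python
# A minimal sketch, assuming a generic "trained model" callable; the model,
# context encoding, and scaling policy are placeholders, not the patent's.
import numpy as np

def scale_with_context(content: np.ndarray, context: np.ndarray, model) -> np.ndarray:
    """Scale sensor content using a factor predicted by a trained model
    that takes the user-context signal as input."""
    factor = float(model(context))          # e.g. larger factor for closer attention
    h, w = content.shape[:2]
    new_h, new_w = max(1, int(h * factor)), max(1, int(w * factor))
    # Nearest-neighbour resampling keeps the example dependency-free.
    rows = np.arange(new_h) * h // new_h
    cols = np.arange(new_w) * w // new_w
    return content[rows][:, cols]

# Hypothetical usage: a stand-in "model" that upscales when the context
# signal indicates high user attention.
if __name__ == "__main__":
    frame = np.random.rand(120, 160, 3)          # content captured by a sensor
    attention = np.array([0.9])                  # context signal (assumed encoding)
    toy_model = lambda ctx: 1.0 + ctx[0]         # placeholder for the trained model
    scaled = scale_with_context(frame, attention, toy_model)
    print(frame.shape, "->", scaled.shape)
```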
-
Publication No.: US12052430B2
Publication Date: 2024-07-30
Application No.: US17889277
Filing Date: 2022-08-16
Applicant: Apple Inc.
Inventor: Ranjit Desai , Adrian P. Lindberg , Kaushik Raghunath , Vinay Palakkode
Abstract: A method is provided that includes receiving content data captured by a sensor and receiving a context signal representing a user context. The received content data is scaled using a trained model, wherein the context signal is an input to the trained model, and the scaled content data is provided for presentation to a user.
-
Publication No.: US11783550B2
Publication Date: 2023-10-10
Application No.: US17187736
Filing Date: 2021-02-26
Applicant: Apple Inc.
Inventor: Daniele Casaburo , Adrian P. Lindberg
CPC classification number: G06T19/006 , G06T7/13 , G06T7/50
Abstract: Implementations of the subject technology provide for image composition for extended reality systems. Image composition may include combining virtual content from virtual images with physical content from images captured by one or more cameras. The virtual content and the physical content can be combined to form a composite image using depth information for the virtual content and the physical content. An adjustment mask may be generated to indicate edges or boundaries between virtual and physical content at which artifact correction for the composite image can be performed.
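A minimal sketch of the depth-based composition idea follows: at each pixel the closer layer wins, and an adjustment mask flags the virtual/physical boundaries where artifact correction (e.g. feathering) could then be applied. The array names and the specific neighbour-difference boundary test are assumptions for illustration, not the patented method.

```python
# Illustrative only: a per-pixel depth-test compositor plus a simple
# boundary (adjustment) mask; the patent's correction details are not shown.
import numpy as np

def composite(physical, virtual, physical_depth, virtual_depth):
    """Pick whichever layer is closer at each pixel."""
    use_virtual = virtual_depth < physical_depth          # True where virtual occludes
    out = np.where(use_virtual[..., None], virtual, physical)
    return out, use_virtual

def adjustment_mask(use_virtual):
    """Mark pixels at virtual/physical boundaries, where blending artifacts
    are most likely and correction (e.g. feathering) could be applied."""
    m = use_virtual.astype(np.uint8)
    edges = np.zeros_like(m, dtype=bool)
    edges[:-1, :] |= m[:-1, :] != m[1:, :]   # vertical neighbour differs
    edges[:, :-1] |= m[:, :-1] != m[:, 1:]   # horizontal neighbour differs
    return edges
```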
-
Publication No.: US11151798B1
Publication Date: 2021-10-19
Application No.: US17089927
Filing Date: 2020-11-05
Applicant: Apple Inc.
Inventor: Daniele Casaburo , Adrian P. Lindberg
Abstract: Various implementations disclosed herein include devices, systems, and methods that create additional depth frames when a depth camera runs at a lower frame rate than a light intensity camera. Rather than upconverting the depth frames by simply repeating a previous depth camera frame, additional depth frames are created by adjusting some of the depth values of a prior frame based on the RGB camera data (e.g., by “dragging” depths from their positions in the prior depth frame to new positions for a new frame). Specifically, a contour image (e.g., identifying interior and exterior outlines of a hand with respect to a virtual cube that the hand occludes) is generated based on a mask (e.g., occlusion masks identifying where the hand occludes the virtual cube). Changes in the contour image are used to determine how to adjust (e.g., drag) the depth values for the additional depth frames.
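The sketch below gives one hedged reading of the "dragging" idea: estimate how the occluding object's contour moved between two occlusion masks, then shift the prior depth values by that motion to synthesize an intermediate depth frame instead of repeating the old one. The centroid-based shift estimate and the np.roll warp are deliberate simplifications made for this example, not the patent's algorithm.

```python
# Rough sketch under stated assumptions: a single 2-D shift is estimated
# between two contour/occlusion masks and the prior depth values are
# "dragged" by that shift to create an additional depth frame.
import numpy as np

def contour_shift(prev_mask: np.ndarray, new_mask: np.ndarray) -> np.ndarray:
    """Estimate how the occluding object's contour moved between frames."""
    def centroid(mask):
        ys, xs = np.nonzero(mask)
        return np.array([ys.mean(), xs.mean()]) if ys.size else np.zeros(2)
    return centroid(new_mask) - centroid(prev_mask)

def synthesize_depth(prev_depth: np.ndarray, prev_mask, new_mask) -> np.ndarray:
    """Create an additional depth frame by dragging the prior depth values
    along the estimated contour motion instead of repeating the prior frame."""
    dy, dx = np.round(contour_shift(prev_mask, new_mask)).astype(int)
    dragged = np.roll(prev_depth, shift=(dy, dx), axis=(0, 1))
    # Keep the original depth outside the moving object's new silhouette.
    return np.where(new_mask, dragged, prev_depth)
```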
-
Publication No.: US20220335699A1
Publication Date: 2022-10-20
Application No.: US17856558
Filing Date: 2022-07-01
Applicant: APPLE INC.
Inventor: Nathan L. Fillhardt , Syed Mohsin Hasan , Adrian P. Lindberg
Abstract: To reduce the amount of bandwidth needed to share 3D map images between mobile devices, according to some embodiments, a user's mobile device (i.e., a host device) may identify its origin in a 3D map and a current virtual camera position relative to the origin based on the physical position of the mobile device. The mobile device may send both the origin and the virtual camera position to another mobile device (i.e., a client device) for use in rendering a corresponding image. Separately, the client device may download the 3D map images from a server, e.g., in preparation for a meeting. In this manner, the host device may send the origin to the client device once, as well as send a data stream of the current virtual camera position for use in accessing the corresponding 3D map images at the client device.
-
Publication No.: US10777007B2
Publication Date: 2020-09-15
Application No.: US15867351
Filing Date: 2018-01-10
Applicant: Apple Inc.
Inventor: Nathan L. Fillhardt , Syed Mohsin Hasan , Adrian P. Lindberg
Abstract: To reduce the amount of bandwidth needed to share 3D map images between mobile devices, according to some embodiments, a user's mobile device (i.e., a host device) may identify its origin in a 3D map and a current virtual camera position relative to the origin based on the physical position of the mobile device. The mobile device may send both the origin and the virtual camera position to another mobile device (i.e., a client device) for use in rendering a corresponding image. Separately, the client device may download the 3D map images from a server, e.g., in preparation for a meeting. In this manner, the host device may send the origin to the client device once, as well as send a data stream of the current virtual camera position for use in accessing the corresponding 3D map images at the client device.
-
Publication No.: US20190102943A1
Publication Date: 2019-04-04
Application No.: US15867351
Filing Date: 2018-01-10
Applicant: Apple Inc.
Inventor: Nathan L. Fillhardt , Syed Mohsin Hasan , Adrian P. Lindberg
Abstract: To reduce the amount of bandwidth needed to share 3D map images between mobile devices, according to some embodiments, a user's mobile device (i.e., a host device) may identify its origin in a 3D map and a current virtual camera position relative to the origin based on the physical position of the mobile device. The mobile device may send both the origin and the virtual camera position to another mobile device (i.e., a client device) for use in rendering a corresponding image. Separately, the client device may download the 3D map images from a server, e.g., in preparation for a meeting. In this manner, the host device may send the origin to the client device once, as well as send a data stream of the current virtual camera position for use in accessing the corresponding 3D map images at the client device.
-
Publication No.: US20240104686A1
Publication Date: 2024-03-28
Application No.: US18469984
Filing Date: 2023-09-19
Applicant: Apple Inc.
Inventor: Srinidhi Aravamudhan , Adrian P. Lindberg , Eshan Verma , Jaya Vijetha Gattupalli , Mingshan Wang , Ranjit Desai , Vinay Palakkode
CPC classification number: G06T1/20 , G06T3/40 , G06T7/11 , G06T2207/20081
Abstract: Techniques are disclosed herein for implementing a novel, low latency, guidance map-free video matting system, e.g., for use in extended reality (XR) platforms. The techniques may be designed to work with low resolution auxiliary inputs (e.g., binary segmentation masks) and to generate alpha mattes (e.g., alpha mattes configured to segment out any object(s) of interest, such as human hands, from a captured image) in near real-time and in a computationally efficient manner. Further, in a domain-specific setting, the system can function on a captured image stream alone, i.e., it would not require any auxiliary inputs, thereby reducing computational costs—without compromising on visual quality and user comfort. Once an alpha matte has been generated, various alpha-aware graphical processing operations may be performed on the captured images according to the generated alpha mattes (e.g., background replacement operations, synthetic shallow depth of field (SDOF) rendering operations, and/or various XR environment rendering operations).
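As an illustration of the downstream alpha-aware operations the abstract mentions, the sketch below upsamples a low-resolution binary segmentation mask, softens it into an approximate alpha matte, and uses it for background replacement. The matting network itself is not modeled; every function here is an assumption made for demonstration, not the disclosed system.

```python
# Sketch under stated assumptions: a low-resolution binary mask stands in for
# the predicted alpha matte, which is then used for background replacement.
import numpy as np

def upsample_nearest(mask: np.ndarray, shape) -> np.ndarray:
    """Bring the low-resolution mask up to the captured frame's resolution."""
    h, w = shape
    rows = np.arange(h) * mask.shape[0] // h
    cols = np.arange(w) * mask.shape[1] // w
    return mask[rows][:, cols].astype(float)

def soften(alpha: np.ndarray, k: int = 5) -> np.ndarray:
    """Box-blur the hard mask so edges blend smoothly (a crude matte proxy)."""
    pad = k // 2
    padded = np.pad(alpha, pad, mode="edge")
    out = np.zeros_like(alpha)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + alpha.shape[0], dx:dx + alpha.shape[1]]
    return out / (k * k)

def replace_background(frame, low_res_mask, new_background):
    """Alpha-aware background replacement using the approximate matte."""
    alpha = soften(upsample_nearest(low_res_mask, frame.shape[:2]))[..., None]
    return alpha * frame + (1.0 - alpha) * new_background
```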
-
Publication No.: US11636656B1
Publication Date: 2023-04-25
Application No.: US17491901
Filing Date: 2021-10-01
Applicant: Apple Inc.
Inventor: Daniele Casaburo , Adrian P. Lindberg
Abstract: Various implementations disclosed herein include devices, systems, and methods that create additional depth frames where a depth camera runs at a lower frame rate than a light intensity camera. Rather than upconverting the depth frames by simply repeating a previous depth camera frame, additional depth frames are created by adjusting some of the depth values of a prior frame based on the RGB camera data (e.g., by “dragging” depths from their positions in the prior depth frame to new positions for a new frame). Specifically, a contour image is generated, and changes in the contour image are used to determine how to adjust (e.g., drag) the depth values for the additional depth frames. The contour image may be based on a mask (e.g., occlusions masks identifying where the hand occludes the virtual cube).