-
1.
Publication Number: US12052430B2
Publication Date: 2024-07-30
Application Number: US17889277
Filing Date: 2022-08-16
Applicant: Apple Inc.
Inventor: Ranjit Desai, Adrian P. Lindberg, Kaushik Raghunath, Vinay Palakkode
Abstract: A method is provided that includes receiving content data captured by a sensor and receiving a context signal representing a user context. The received content data is scaled using a trained model, wherein the context signal is an input to the trained model, and the scaled content data is provided for presentation to a user.
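The abstract above describes scaling content with a trained model that takes a user-context signal as an additional input, but gives no architecture. Below is a minimal PyTorch sketch of that idea, assuming a feature-modulation style of conditioning; every name (ContextConditionedScaler, context_dim, the 8-dimensional context vector) is hypothetical and not taken from the patent.

```python
# Minimal sketch (not the patented implementation) of scaling sensor content
# with a trained model that also receives a user-context signal as input.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextConditionedScaler(nn.Module):
    """Upscales image-like content; the context signal modulates the features."""
    def __init__(self, channels: int = 3, context_dim: int = 8, hidden: int = 32):
        super().__init__()
        self.encode = nn.Conv2d(channels, hidden, kernel_size=3, padding=1)
        # The context signal is mapped to a per-channel scale/shift, a common
        # way to condition a CNN on side information (an assumption here).
        self.context_to_mod = nn.Linear(context_dim, 2 * hidden)
        self.decode = nn.Conv2d(hidden, channels, kernel_size=3, padding=1)

    def forward(self, content: torch.Tensor, context: torch.Tensor, scale: int = 2):
        feats = F.relu(self.encode(content))
        gamma, beta = self.context_to_mod(context).chunk(2, dim=-1)
        feats = feats * gamma[..., None, None] + beta[..., None, None]
        feats = F.interpolate(feats, scale_factor=scale, mode="bilinear",
                              align_corners=False)
        return self.decode(feats)

# Usage: a sensor frame plus an 8-dim context vector -> scaled frame for display.
frame = torch.rand(1, 3, 120, 160)     # content data captured by a sensor
context_signal = torch.rand(1, 8)      # signal representing a user context
scaled = ContextConditionedScaler()(frame, context_signal)
print(scaled.shape)                    # torch.Size([1, 3, 240, 320])
```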
-
2.
Publication Number: US20230319296A1
Publication Date: 2023-10-05
Application Number: US17889277
Filing Date: 2022-08-16
Applicant: Apple Inc.
Inventor: Ranjit Desai, Adrian P. Lindberg, Kaushik Raghunath, Vinay Palakkode
Abstract: A method is provided that includes receiving content data captured by a sensor and receiving a context signal representing a user context. The received content data is scaled using a trained model, wherein the context signal is an input to the trained model, and the scaled content data is provided for presentation to a user.
-
3.
Publication Number: US20240104693A1
Publication Date: 2024-03-28
Application Number: US18472710
Filing Date: 2023-09-22
Applicant: Apple Inc.
Inventor: Vinay Palakkode, Kaushik Raghunath, Venu M. Duggineni, Vivaan Bahl
CPC classification number: G06T3/4046, G06T7/60, G06T7/70, G06T15/005, G06T2207/20084, G06T2207/30196
Abstract: Generating synthesized data includes capturing one or more frames of a scene at a first frame rate by one or more cameras of a wearable device, determining body position parameters for the frames, and obtaining geometry data for the scene in accordance with the one or more frames. The frames, body position parameters, and geometry data are applied to a trained network which predicts one or more additional frames. With respect to virtual data, generating a synthesized frame includes determining current body position parameters in accordance with the one or more frames, predicting a future gaze position based on the current body position parameters, and rendering, at a first resolution, a gaze region of a frame in accordance with the future gaze position. A peripheral region is predicted for the frame at a second resolution, and the combined regions form a frame that is used to drive a display.
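As a rough illustration of the foveated-synthesis flow in the abstract (predict a future gaze position from body-position parameters, render the gaze region at a first resolution, fill the periphery at a second resolution, combine the regions into one frame), here is a hypothetical NumPy sketch; the gaze predictor and the scene buffer are stand-ins, not the trained network or renderer the application describes.

```python
# Hypothetical sketch of gaze-predicted foveated frame synthesis.
import numpy as np

H, W, FOVEA = 480, 640, 96   # frame size and half-size of the gaze region

def predict_future_gaze(body_params: np.ndarray) -> tuple[int, int]:
    """Stand-in predictor: maps head yaw/pitch to a pixel; a trained network
    would replace this in the described system."""
    yaw, pitch = body_params[0], body_params[1]
    x = int(np.clip((yaw + 1) / 2 * W, FOVEA, W - FOVEA))
    y = int(np.clip((pitch + 1) / 2 * H, FOVEA, H - FOVEA))
    return x, y

def render_gaze_region(scene: np.ndarray, gaze: tuple[int, int]) -> np.ndarray:
    x, y = gaze
    return scene[y - FOVEA:y + FOVEA, x - FOVEA:x + FOVEA]  # first (full) resolution

def predict_periphery(scene: np.ndarray, factor: int = 8) -> np.ndarray:
    """Cheap low-resolution periphery: downsample, then nearest-neighbour upsample."""
    low = scene[::factor, ::factor]
    return np.repeat(np.repeat(low, factor, axis=0), factor, axis=1)[:H, :W]

scene = np.random.rand(H, W, 3)        # stands in for geometry-driven rendering
body_params = np.array([0.1, -0.2])    # e.g., head yaw and pitch in [-1, 1]
gaze = predict_future_gaze(body_params)

frame = predict_periphery(scene)       # second (coarser) resolution
x, y = gaze
frame[y - FOVEA:y + FOVEA, x - FOVEA:x + FOVEA] = render_gaze_region(scene, gaze)
print(frame.shape, gaze)               # the combined regions drive the display
```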
-
4.
Publication Number: US20240104686A1
Publication Date: 2024-03-28
Application Number: US18469984
Filing Date: 2023-09-19
Applicant: Apple Inc.
Inventor: Srinidhi Aravamudhan, Adrian P. Lindberg, Eshan Verma, Jaya Vijetha Gattupalli, Mingshan Wang, Ranjit Desai, Vinay Palakkode
CPC classification number: G06T1/20, G06T3/40, G06T7/11, G06T2207/20081
Abstract: Techniques are disclosed herein for implementing a novel, low latency, guidance map-free video matting system, e.g., for use in extended reality (XR) platforms. The techniques may be designed to work with low resolution auxiliary inputs (e.g., binary segmentation masks) and to generate alpha mattes (e.g., alpha mattes configured to segment out any object(s) of interest, such as human hands, from a captured image) in near real-time and in a computationally efficient manner. Further, in a domain-specific setting, the system can function on a captured image stream alone, i.e., it would not require any auxiliary inputs, thereby reducing computational costs—without compromising on visual quality and user comfort. Once an alpha matte has been generated, various alpha-aware graphical processing operations may be performed on the captured images according to the generated alpha mattes (e.g., background replacement operations, synthetic shallow depth of field (SDOF) rendering operations, and/or various XR environment rendering operations).
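To make the alpha-aware processing step concrete, the sketch below shows standard alpha compositing (background replacement) driven by a matte derived from a low-resolution binary segmentation mask. The box-blur "matting" function is only a placeholder for the low-latency matting network the abstract describes, and all names and sizes are hypothetical.

```python
# Hypothetical sketch of alpha-matte-driven background replacement.
import numpy as np

def composite(foreground: np.ndarray, background: np.ndarray,
              alpha: np.ndarray) -> np.ndarray:
    """Standard alpha compositing: out = a * fg + (1 - a) * bg."""
    a = alpha[..., None]                  # HxW -> HxWx1 so it broadcasts
    return a * foreground + (1.0 - a) * background

def matte_from_mask(mask: np.ndarray, blur: int = 5) -> np.ndarray:
    """Stand-in for the matting network: soften a binary mask into [0, 1]
    with a box blur so edges are feathered rather than hard."""
    pad = blur // 2
    padded = np.pad(mask.astype(np.float32), pad, mode="edge")
    out = np.zeros_like(mask, dtype=np.float32)
    for dy in range(blur):
        for dx in range(blur):
            out += padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out / (blur * blur)

H, W = 240, 320
captured = np.random.rand(H, W, 3)        # captured camera image
new_background = np.zeros((H, W, 3))      # replacement background
mask = np.zeros((H, W), dtype=np.uint8)   # low-res binary segmentation mask
mask[80:160, 120:200] = 1                 # pretend this region is a hand

alpha = matte_from_mask(mask)             # alpha matte in [0, 1]
result = composite(captured, new_background, alpha)
print(result.shape)                       # (240, 320, 3)
```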
-