-
Publication No.: US09734587B2
Publication Date: 2017-08-15
Application No.: US14871955
Filing Date: 2015-09-30
Applicant: Apple Inc.
Inventor: Zehang Sun , Toshihiro Horie , Xin Tong , Peter Chou
CPC classification number: G06T7/248 , G06K9/4609 , G06K9/4671 , G06K9/6212 , G06K9/6215 , G06K9/6223 , G06T5/10 , G06T7/246 , G06T7/269 , G06T2207/10016 , G06T2207/10024 , G06T2207/20024 , G06T2207/20076 , G06T2207/20104 , G06T2207/30241
Abstract: In some implementations, a computing device can track an object from a first image frame to a second image frame using a self-correcting tracking method. The computing device can select points of interest in the first image frame. The computing device can track the selected points of interest from the first image frame to the second image frame using optical flow object tracking, producing matched pairs of points. The computing device can prune the matched pairs and generate a transform based on the remaining pairs to detect the selected object in the second image frame. The computing device can generate a tracking confidence metric based on a projection error for each point of interest tracked from the first frame to the second frame. The computing device can correct tracking errors by reacquiring the object when the tracking confidence metric is below a threshold value.
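The self-correcting tracking loop in this abstract (prune matched pairs, fit a transform, score confidence from per-point projection error, reacquire below a threshold) can be sketched in plain numpy. This is an illustrative approximation, not the patented implementation: the function names, the median-displacement pruning rule, and the exponential confidence mapping are all assumptions; a real tracker would obtain the matched pairs from optical flow (e.g. pyramidal Lucas-Kanade).

```python
import numpy as np

def estimate_transform(src, dst):
    """Least-squares 2D affine transform M (3x2) such that [x y 1] @ M ~ [x' y'].
    src, dst: (N, 2) arrays of matched point coordinates."""
    A = np.hstack([src, np.ones((len(src), 1))])
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M

def track_confidence(src, dst, prune_radius=5.0):
    """One self-correcting tracking step: prune outlier matches, fit a
    transform on the survivors, and derive a confidence metric from the
    per-point projection error. Returns (transform, confidence)."""
    # Prune matched pairs whose displacement strays far from the median motion
    disp = dst - src
    median_disp = np.median(disp, axis=0)
    keep = np.linalg.norm(disp - median_disp, axis=1) < prune_radius
    src_k, dst_k = src[keep], dst[keep]
    # Fit the transform on the remaining matched pairs
    M = estimate_transform(src_k, dst_k)
    projected = np.hstack([src_k, np.ones((len(src_k), 1))]) @ M
    errors = np.linalg.norm(projected - dst_k, axis=1)
    # Confidence decays with mean projection error; the caller reacquires
    # the object when this drops below a threshold value
    confidence = float(np.exp(-errors.mean()))
    return M, confidence
```

A caller would compare `confidence` against a threshold (say 0.5) and rerun object detection on the second frame when it falls below, rather than continuing to track a lost target.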
-
Publication No.: US20190080498A1
Publication Date: 2019-03-14
Application No.: US16177408
Filing Date: 2018-10-31
Applicant: Apple Inc.
Inventor: Toshihiro Horie , Kevin O'Neil , Zehang Sun , Xiaohuan Corina Wang , Joe Weil , Omid Khalili , Stuart Mark Pomerantz , Marc Robins , Eric Beale , Nathalie Castel , Jean-Michel Berthoud , Brian Walsh , Andy Harding , Greg Dudey
Abstract: Systems, methods, apparatuses and non-transitory, computer-readable storage media are disclosed for generating AR self-portraits or “AR selfies.” In an embodiment, a method comprises: capturing, by a first camera of a mobile device, image data, the image data including an image of a subject in a physical, real-world environment; receiving, by a depth sensor of the mobile device, depth data indicating a distance of the subject from the camera in the physical, real-world environment; receiving, by one or more motion sensors of the mobile device, motion data indicating at least an orientation of the first camera in the physical, real-world environment; generating a virtual camera transform based on the motion data, the camera transform for determining an orientation of a virtual camera in a virtual environment; and generating composite image data using the image data, a matte, and virtual background content selected based on the virtual camera orientation.
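The two core steps in this abstract, deriving a virtual-camera orientation from motion data and compositing the camera image over virtual background content through a matte, can be sketched with numpy. This is a minimal illustration under stated assumptions, not Apple's implementation: the Euler-angle parameterization of the motion data and the function names are hypothetical, and a real pipeline would derive the matte from the depth data rather than receive it directly.

```python
import numpy as np

def virtual_camera_transform(pitch, yaw, roll):
    """Build a 3x3 virtual-camera rotation from device orientation angles
    (radians), a stand-in for the transform derived from motion-sensor data."""
    cx, sx = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    cz, sz = np.cos(roll), np.sin(roll)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx  # orientation of the virtual camera

def composite(foreground, background, matte):
    """Alpha-blend the captured image over virtual background content.
    foreground, background: (H, W, 3) arrays; matte: (H, W) in [0, 1],
    where 1 marks the subject and 0 the replaceable background."""
    a = matte[..., None]  # broadcast the matte over the color channels
    return a * foreground + (1.0 - a) * background
```

In the described system the rotation would orient a virtual camera so the rendered background content stays consistent with how the user is holding the device, and the matte isolates the subject so only the surroundings are replaced.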
-
Publication No.: US20190082118A1
Publication Date: 2019-03-14
Application No.: US16124168
Filing Date: 2018-09-06
Applicant: Apple Inc.
Inventor: Xiaohuan Corina Wang , Zehang Sun , Joe Weil , Omid Khalili , Stuart Mark Pomerantz , Marc Robins , Toshihiro Horie , Eric Beale , Nathalie Castel , Jean-Michel Berthoud , Brian Walsh , Kevin O'Neil , Andy Harding , Greg Dudey
IPC: H04N5/265 , G06T13/80 , G06T7/50 , G06T17/20 , H04N5/247 , H04N5/272 , H04N5/445 , G06T7/194 , G06T7/13
Abstract: Systems, methods, apparatuses and non-transitory, computer-readable storage media are disclosed for generating AR self-portraits or “AR selfies.” In an embodiment, a method comprises: capturing, by a first camera of a mobile device, live image data, the live image data including an image of a subject in a physical, real-world environment; receiving, by a depth sensor of the mobile device, depth data indicating a distance of the subject from the camera in the physical, real-world environment; receiving, by one or more motion sensors of the mobile device, motion data indicating at least an orientation of the first camera in the physical, real-world environment; generating a virtual camera transform based on the motion data, the camera transform for determining an orientation of a virtual camera in a virtual environment; and generating composite image data using the image data, a matte, and virtual background content selected based on the virtual camera orientation.
-
Publication No.: US20220353432A1
Publication Date: 2022-11-03
Application No.: US17861086
Filing Date: 2022-07-08
Applicant: Apple Inc.
Inventor: Xiaohuan Corina Wang , Zehang Sun , Joe Weil , Omid Khalili , Stuart Mark Pomerantz , Marc Robins , Toshihiro Horie , Eric Beale , Nathalie Castel , Jean-Michel Berthoud , Brian Walsh , Kevin O'Neil , Andy Harding , Greg Dudey
IPC: H04N5/265 , H04N5/232 , G06T7/11 , H04N5/222 , G06T7/174 , G06T11/60 , G06T7/194 , G06T7/50 , G06T7/13 , G06T13/80 , G06T15/50 , G06T17/20 , H04N5/247 , H04N5/272 , H04N5/445
Abstract: Systems, methods, apparatuses and non-transitory, computer-readable storage media are disclosed for generating AR self-portraits or “AR selfies.” In an embodiment, a method comprises: capturing, by a first camera of a mobile device, live image data, the live image data including an image of a subject in a physical, real-world environment; receiving, by a depth sensor of the mobile device, depth data indicating a distance of the subject from the camera in the physical, real-world environment; receiving, by one or more motion sensors of the mobile device, motion data indicating at least an orientation of the first camera in the physical, real-world environment; generating a virtual camera transform based on the motion data, the camera transform for determining an orientation of a virtual camera in a virtual environment; and generating composite image data using the image data, a matte, and virtual background content selected based on the virtual camera orientation.
-
Publication No.: US20170091952A1
Publication Date: 2017-03-30
Application No.: US14871955
Filing Date: 2015-09-30
Applicant: Apple Inc.
Inventor: Zehang Sun , Toshihiro Horie , Xin Tong , Peter Chou
CPC classification number: G06T7/248 , G06K9/4609 , G06K9/4671 , G06K9/6212 , G06K9/6215 , G06K9/6223 , G06T5/10 , G06T7/246 , G06T7/269 , G06T2207/10016 , G06T2207/10024 , G06T2207/20024 , G06T2207/20076 , G06T2207/20104 , G06T2207/30241
Abstract: In some implementations, a computing device can track an object from a first image frame to a second image frame using a self-correcting tracking method. The computing device can select points of interest in the first image frame. The computing device can track the selected points of interest from the first image frame to the second image frame using optical flow object tracking, producing matched pairs of points. The computing device can prune the matched pairs and generate a transform based on the remaining pairs to detect the selected object in the second image frame. The computing device can generate a tracking confidence metric based on a projection error for each point of interest tracked from the first frame to the second frame. The computing device can correct tracking errors by reacquiring the object when the tracking confidence metric is below a threshold value.