-
Publication number: US11927874B2
Publication date: 2024-03-12
Application number: US14788386
Filing date: 2015-06-30
Applicant: Apple Inc.
Inventor: Claus Molgaard , Iain A. McAllister
IPC: H04N23/45 , G03B13/34 , H04N5/262 , H04N23/58 , H04N23/62 , H04N23/63 , H04N23/69 , H04N25/40 , H04N23/55
CPC classification number: G03B13/34 , H04N5/2628 , H04N23/45 , H04N23/58 , H04N23/62 , H04N23/635 , H04N23/69 , H04N25/41 , H04N23/55
Abstract: Some embodiments include methods and/or systems for using multiple cameras to provide optical zoom to a user. Some embodiments include a first camera unit of a multifunction device capturing a first image of a first visual field. A second camera unit of the multifunction device simultaneously captures a second image of a second visual field. In some embodiments, the first camera unit includes a first optical package with a first focal length. In some embodiments, the second camera unit includes a second optical package with a second focal length. In some embodiments, the first focal length is different from the second focal length, and the first visual field is a subset of the second visual field. In some embodiments, the first image and the second image are preserved to a storage medium as separate data structures.
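Illustrative sketch (not the patented implementation): a minimal Python rendering of the dual-capture flow described above, in which two camera units with different focal lengths capture the same scene simultaneously and each frame is preserved as its own data structure. The Capture class and the camera units' read() method are assumed names for illustration only.

from dataclasses import dataclass
import numpy as np

@dataclass
class Capture:
    image: np.ndarray         # pixel data from one camera unit
    focal_length_mm: float    # focal length of that unit's optical package
    field_of_view_deg: float  # horizontal field of view of the captured frame

def capture_pair(wide_unit, tele_unit):
    """Trigger both camera units for the same scene and keep the results separate.

    Each unit is assumed to expose a read() method returning
    (image, focal_length_mm, field_of_view_deg); the tele unit's visual field
    is a subset of the wide unit's.
    """
    wide = Capture(*wide_unit.read())
    tele = Capture(*tele_unit.read())
    # Preserve the two captures as separate data structures, not one fused image.
    return {"wide": wide, "tele": tele}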
-
Publication number: US10609348B2
Publication date: 2020-03-31
Application number: US15627409
Filing date: 2017-06-19
Applicant: Apple Inc.
Inventor: Gennadiy A. Agranov , Claus Molgaard , Ashirwad Bahukhandi , Chiajen Lee , Xiangli Li
IPC: H04N9/04 , H04N5/347 , H04N5/3745
Abstract: Pixel binning is performed by summing charge from some pixels positioned diagonally in a pixel array. Pixel signals output from pixels positioned diagonally in the pixel array may be combined on the output lines. A signal representing summed charge produces a binned 2×1 cluster. A signal representing combined voltage signals produces a binned 2×1 cluster. A signal representing summed charge and a signal representing combined pixel signals can be combined digitally to produce a binned 2×2 pixel. Orthogonal binning may be performed on other pixels in the pixel array by summing charge on respective common sense regions and then combining the voltage signals that represent the summed charge on respective output lines.
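Illustrative sketch (a software analogue only, not the sensor hardware): the binning arithmetic described above, where one diagonal pair in each 2x2 neighborhood stands in for charge summed on a shared sense region, the other diagonal pair stands in for voltage signals combined on the output lines, and the two 2x1 clusters are combined digitally into a single 2x2 binned value.

import numpy as np

def bin_2x2_via_diagonals(pixels: np.ndarray) -> np.ndarray:
    """Bin a single-channel image with even dimensions down by 2x in each axis."""
    h, w = pixels.shape
    assert h % 2 == 0 and w % 2 == 0, "expects even dimensions"
    p = pixels.astype(np.float64)
    # "Charge-summed" diagonal: top-left + bottom-right of each 2x2 block.
    diag_a = p[0::2, 0::2] + p[1::2, 1::2]
    # "Voltage-combined" diagonal: top-right + bottom-left of each 2x2 block.
    diag_b = p[0::2, 1::2] + p[1::2, 0::2]
    # Digital combination of the two 2x1 clusters into one 2x2 binned pixel
    # (averaged so the output stays in the input's value range).
    return (diag_a + diag_b) / 4.0

# Example: a 4x4 ramp bins down to a 2x2 result of per-block means.
demo = np.arange(16, dtype=np.float64).reshape(4, 4)
print(bin_2x2_via_diagonals(demo))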
-
Publication number: US20190208125A1
Publication date: 2019-07-04
Application number: US16298272
Filing date: 2019-03-11
Applicant: Apple Inc.
Inventor: Claus Molgaard , Thomas E. Bishop
IPC: H04N5/232 , G06T7/593 , H04N13/246 , H04N13/239
CPC classification number: H04N5/23238 , G06T7/593 , H04N13/239 , H04N13/246 , H04N2013/0081
Abstract: A method for generating a depth map is described. The method includes obtaining a first image of a scene from a first image capture unit, the first image having a first depth-of-field (DOF), and obtaining a second image of the scene from a second image capture unit, the second image having a second DOF that is different from the first DOF. Each pixel in the second image has a corresponding pixel in the first image. The method also includes generating a plurality of third images, each corresponding to a blurred version of the second image at each of a plurality of specified depths, generating a plurality of fourth images, each representing a difference between the first image and one of the plurality of third images, and generating a depth map where each pixel in the depth map is based on the pixels in one of the plurality of fourth images.
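Illustrative sketch (hypothetical helper names, not the patented method): one way to realize the abstract's pipeline in Python, generating blurred versions of the second image at candidate depths, differencing each against the first image, and taking the best-matching depth per pixel.

import numpy as np
from scipy.ndimage import gaussian_filter

def blur_sigma_for_depth(depth):
    # Placeholder mapping from scene depth to defocus blur; a real system would
    # derive this from the capture unit's optics and focus position.
    return 0.5 + 2.0 / max(depth, 1e-3)

def depth_map_from_defocus(img_first, img_second, candidate_depths):
    """img_first: first image (first DOF); img_second: second image (different DOF),
    pixel-aligned with the first. Returns a per-pixel depth map."""
    diffs = []
    for depth in candidate_depths:
        # "Third images": blurred versions of the second image at each specified depth.
        blurred = gaussian_filter(img_second, blur_sigma_for_depth(depth))
        # "Fourth images": difference between the first image and each third image.
        diffs.append(np.abs(img_first - blurred))
    diffs = np.stack(diffs)                      # shape: (num_depths, H, W)
    # Depth map: for each pixel, the candidate depth whose blur best matches.
    best = np.argmin(diffs, axis=0)
    return np.asarray(candidate_depths)[best]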
-
Publication number: US10237473B2
Publication date: 2019-03-19
Application number: US14864603
Filing date: 2015-09-24
Applicant: Apple Inc.
Inventor: Claus Molgaard , Thomas E. Bishop
Abstract: A method for generating a depth map is described. The method includes obtaining a first image of a scene from a first image capture unit, the first image having a first depth-of-field (DOF), and obtaining a second image of the scene from a second image capture unit, the second image having a second DOF that is different from the first DOF. Each pixel in the second image has a corresponding pixel in the first image. The method also includes generating a plurality of third images, each corresponding to a blurred version of the second image at each of a plurality of specified depths, generating a plurality of fourth images, each representing a difference between the first image and one of the plurality of third images, and generating a depth map where each pixel in the depth map is based on the pixels in one of the plurality of fourth images.
-
Publication number: US20180316864A1
Publication date: 2018-11-01
Application number: US16030632
Filing date: 2018-07-09
Applicant: Apple Inc.
Inventor: Claus Molgaard , Marius Tico , Rolf Toft , Paul M. Hubel
IPC: H04N5/232 , G06T5/00 , G06T5/50 , H04N5/235 , G06T7/20 , H04N5/91 , G06T7/254 , H04N5/355 , G06T11/60 , H04N5/359
CPC classification number: H04N5/23277 , G06T5/002 , G06T5/003 , G06T5/50 , G06T7/20 , G06T7/254 , G06T11/60 , G06T2207/10004 , G06T2207/10144 , G06T2207/20221 , H04N5/23229 , H04N5/23254 , H04N5/23267 , H04N5/2355 , H04N5/2356 , H04N5/35581 , H04N5/3597 , H04N5/91
Abstract: Techniques to capture and fuse short- and long-exposure images of a scene from a stabilized image capture device are disclosed. More particularly, the disclosed techniques use not only individual pixel differences between co-captured short- and long-exposure images, but also the spatial structure of occluded regions in the long-exposure images (e.g., areas of the long-exposure image(s) exhibiting blur due to scene object motion). A novel device used to represent this feature of the long-exposure image is a “spatial difference map.” Spatial difference maps may be used to identify pixels in the short- and long-exposure images for fusion and, in one embodiment, may be used to identify pixels from the short-exposure image(s) to filter post-fusion so as to reduce visual discontinuities in the output image.
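Illustrative sketch (thresholds and filter sizes are assumptions, not from the patent): one way a spatial difference map could be built and applied in Python, turning per-pixel short/long differences into a region-aware map that steers which exposure each output pixel comes from, with post-fusion smoothing where short-exposure pixels dominate.

import numpy as np
from scipy import ndimage

def spatial_difference_map(short_img, long_img, diff_thresh=0.08, grow_px=3):
    diff = np.abs(short_img - long_img)              # individual pixel differences
    moving = diff > diff_thresh                      # likely motion/occlusion pixels
    # Capture the spatial structure of occluded regions: grow the mask so blurred
    # areas are treated as regions rather than isolated pixels, then soften it.
    region = ndimage.binary_dilation(moving, iterations=grow_px)
    return ndimage.gaussian_filter(region.astype(np.float64), sigma=1.5)

def fuse_short_long(short_img, long_img):
    w = spatial_difference_map(short_img, long_img)  # near 1.0 where motion dominates
    fused = w * short_img + (1.0 - w) * long_img
    # Filter the short-exposure contributions post-fusion to reduce visual
    # discontinuities (noise seams) in the output image.
    smoothed_short = ndimage.gaussian_filter(short_img, sigma=1.0)
    return np.where(w > 0.5, 0.5 * (fused + smoothed_short), fused)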
-
Publication number: US20180070007A1
Publication date: 2018-03-08
Application number: US15697224
Filing date: 2017-09-06
Applicant: Apple Inc.
Inventor: Claus Molgaard , Paul M. Hubel , Ziv Attar , Ilana Volfin
CPC classification number: H04N5/23216 , G06K9/00201 , G06K9/00248 , G06K9/3233 , G06K9/38 , G06K9/46 , G06T7/50 , G06T2207/10028 , H04N5/23219
Abstract: Techniques are described for automated analysis and filtering of image data. Image data is analyzed to identify regions of interest (ROIs) within the image content. The image data also may have depth estimates applied to content therein. One or more of the ROIs may be designated to possess a base depth, representing a depth of image content against which depths of other content may be compared. Moreover, the depth of the image content within a spatial area of an ROI may be set to be a consistent value, regardless of depth estimates that may have been assigned from other sources. Thereafter, other elements of image content may be assigned content adjustment values in gradients based on their relative depth in image content as compared to the base depth and, optionally, based on their spatial distance from the designated ROI. Image content may be adjusted based on the content adjustment values.
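Illustrative sketch (the depth-dependent blur is just one possible content adjustment; all names and weights are assumptions): a designated ROI is given a base depth, depth inside the ROI is flattened to that consistent value, and adjustment values for other pixels grow in a gradient with their depth difference from the base depth and, optionally, their spatial distance from the ROI.

import numpy as np
from scipy import ndimage

def depth_based_adjustment(image, depth, roi_mask, max_sigma=4.0):
    """image, depth: HxW arrays; roi_mask: boolean HxW marking the designated ROI."""
    depth = depth.astype(np.float64).copy()
    base_depth = np.median(depth[roi_mask])      # base depth of the designated ROI
    depth[roi_mask] = base_depth                 # consistent depth across the ROI's area
    # Gradient of adjustment values from relative depth versus the base depth...
    rel = np.abs(depth - base_depth)
    rel /= max(rel.max(), 1e-6)
    # ...optionally scaled by spatial distance from the designated ROI.
    dist = ndimage.distance_transform_edt(~roi_mask)
    dist /= max(dist.max(), 1e-6)
    adjust = np.clip(0.7 * rel + 0.3 * dist, 0.0, 1.0)
    # Apply the content adjustment (here: depth-dependent blur, leaving the ROI sharp).
    blurred = ndimage.gaussian_filter(image.astype(np.float64), sigma=max_sigma)
    return adjust * blurred + (1.0 - adjust) * image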
-
Publication number: US20180063441A1
Publication date: 2018-03-01
Application number: US15797768
Filing date: 2017-10-30
Applicant: Apple Inc.
Inventor: Claus Molgaard , Marius Tico , Rolf Toft , Paul M. Hubel
CPC classification number: H04N5/23277 , G06T5/002 , G06T5/003 , G06T5/50 , G06T7/20 , G06T7/254 , G06T11/60 , G06T2207/10004 , G06T2207/10144 , G06T2207/20221 , H04N5/23229 , H04N5/23254 , H04N5/23267 , H04N5/2355 , H04N5/2356 , H04N5/35581 , H04N5/3597 , H04N5/91
Abstract: Techniques to capture and fuse short- and long-exposure images of a scene from a stabilized image capture device are disclosed. More particularly, the disclosed techniques use not only individual pixel differences between co-captured short- and long-exposure images, but also the spatial structure of occluded regions in the long-exposure images (e.g., areas of the long-exposure image(s) exhibiting blur due to scene object motion). A novel device used to represent this feature of the long-exposure image is a “spatial difference map.” Spatial difference maps may be used to identify pixels in the short- and long-exposure images for fusion and, in one embodiment, may be used to identify pixels from the short-exposure image(s) to filter post-fusion so as to reduce visual discontinuities in the output image.
-
Publication number: US20170237905A1
Publication date: 2017-08-17
Application number: US15587617
Filing date: 2017-05-05
Applicant: Apple Inc.
Inventor: Claus Molgaard , Marius Tico , Rolf Toft , Paul M. Hubel
IPC: H04N5/232
CPC classification number: H04N5/23277 , G06T5/002 , G06T5/003 , G06T5/50 , G06T7/20 , G06T7/254 , G06T11/60 , G06T2207/10004 , G06T2207/10144 , G06T2207/20221 , H04N5/23229 , H04N5/23254 , H04N5/23267 , H04N5/2355 , H04N5/2356 , H04N5/35581 , H04N5/3597 , H04N5/91
Abstract: Techniques to capture and fuse short- and long-exposure images of a scene from a stabilized image capture device are disclosed. More particularly, the disclosed techniques use not only individual pixel differences between co-captured short- and long-exposure images, but also the spatial structure of occluded regions in the long-exposure images (e.g., areas of the long-exposure image(s) exhibiting blur due to scene object motion). A novel device used to represent this feature of the long-exposure image is a “spatial difference map.” Spatial difference maps may be used to identify pixels in the short- and long-exposure images for fusion and, in one embodiment, may be used to identify pixels from the short-exposure image(s) to filter post-fusion so as to reduce visual discontinuities in the output image.
-
Publication number: US20170069097A1
Publication date: 2017-03-09
Application number: US14864603
Filing date: 2015-09-24
Applicant: Apple Inc.
Inventor: Claus Molgaard , Thomas E. Bishop
CPC classification number: H04N5/23238 , G06T7/593 , H04N13/239 , H04N13/246 , H04N2013/0081
Abstract: A method for generating a depth map is described. The method includes obtaining a first image of a scene from a first image capture unit, the first image having a first depth-of-field (DOF), and obtaining a second image of the scene from a second image capture unit, the second image having a second DOF that is different from the first DOF. Each pixel in the second image has a corresponding pixel in the first image. The method also includes generating a plurality of third images, each corresponding to a blurred version of the second image at each of a plurality of specified depths, generating a plurality of fourth images, each representing a difference between the first image and one of the plurality of third images, and generating a depth map where each pixel in the depth map is based on the pixels in one of the plurality of fourth images.
-
Publication number: US20170069060A1
Publication date: 2017-03-09
Application number: US14872104
Filing date: 2015-09-30
Applicant: Apple Inc.
Inventor: Farhan A. Baqai , Fabio Riccardi , Russell A. Pflughaupt , Claus Molgaard , Gijesh Varghese
CPC classification number: H04N9/77 , G06K9/40 , G06T5/002 , G06T5/009 , G06T5/20 , G06T5/50 , G06T2207/10024 , G06T2207/20016 , G06T2207/20182 , G06T2207/20208 , G06T2207/20221 , H04N5/208 , H04N5/217 , H04N9/646 , H04N9/67 , H04N9/69 , H04N9/73
Abstract: Systems, methods, and computer readable media to fuse digital images are described. In general, techniques are disclosed that use multi-band noise reduction techniques to represent input and reference images as pyramids. Once decomposed in this manner, images may be fused using novel low-level (noise dependent) similarity measures. In some implementations similarity measures may be based on intra-level comparisons between reference and input images. In other implementations, similarity measures may be based on inter-level comparisons. In still other implementations, mid-level semantic features such as black-level may be used to inform the similarity measure. In yet other implementations, high-level semantic features such as color or a specified type of region (e.g., moving, stationary, or having a face or other specified shape) may be used to inform the similarity measure.
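Illustrative sketch (band decomposition kept at full resolution for brevity; the noise model and weights are assumptions): pyramid/band-style fusion with an intra-level, noise-dependent similarity measure, where each band of the input frame is blended toward the reference only where the two agree to within the expected noise at that band.

import numpy as np
from scipy import ndimage

def bands(img, levels=4):
    """Decompose an image into band-pass layers plus a low-pass residual;
    the layers sum back to the original image."""
    out, current = [], img.astype(np.float64)
    for _ in range(levels - 1):
        low = ndimage.gaussian_filter(current, 2.0)
        out.append(current - low)                # band-pass detail at this scale
        current = low
    out.append(current)                          # low-pass residual
    return out

def fuse_with_reference(reference, frame, noise_sigma=0.02, levels=4):
    ref_bands, frm_bands = bands(reference, levels), bands(frame, levels)
    fused = np.zeros_like(ref_bands[0])
    for lvl, (r, f) in enumerate(zip(ref_bands, frm_bands)):
        # Intra-level, noise-dependent similarity: differences within the expected
        # noise are trusted (weight near 1); larger differences are rejected.
        sigma_lvl = noise_sigma / (lvl + 1)      # assume finer bands carry more noise
        similarity = np.exp(-((r - f) ** 2) / (2.0 * (3.0 * sigma_lvl) ** 2 + 1e-12))
        fused += r + similarity * 0.5 * (f - r)  # pull toward the frame only where similar
    return fused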