-
Publication Number: US20220284657A1
Publication Date: 2022-09-08
Application Number: US17340222
Filing Date: 2021-06-07
Applicant: NVIDIA Corporation
Inventor: Thomas Müller, Fabrice Pierre Armand Rousselle, Jan Novák, Alexander Georg Keller
Abstract: A real-time neural radiance caching technique for path-traced global illumination is implemented using a neural network for caching scattered radiance components of global illumination. The neural (network) radiance cache handles fully dynamic scenes and makes no assumptions about the camera, lighting, geometry, or materials. In contrast with conventional caching, the data-driven approach sidesteps many difficulties of caching algorithms, such as locating, interpolating, and updating cache points. The neural radiance cache is trained via online learning during rendering. Advantages of the neural radiance cache are noise reduction and real-time performance. Importantly, the runtime overhead and memory footprint of the neural radiance cache are stable and independent of scene complexity.
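As a rough illustration of how such a cache might sit inside a path tracer, the sketch below terminates ordinary paths early by querying the radiance cache, while a small fraction of paths continues so that their longer estimates can later serve as online-training targets. Every name here (query_radiance_cache, record_training_target, the PathVertex fields, the 3% training fraction) is a hypothetical placeholder, not the interface described in the patent.

```cuda
// Minimal sketch, assuming hypothetical cache interfaces. Ordinary paths stop
// at the first suitable vertex and read the remaining scattered radiance from
// the cache; a few "training" paths keep going to produce targets.

struct PathVertex {
    float3 position;   // hit point
    float3 direction;  // incoming ray direction
    float3 normal;     // shading normal
    float3 albedo;     // surface albedo
};

// float3 arithmetic (CUDA does not provide these operators by default).
__device__ float3 operator*(float3 a, float3 b) { return make_float3(a.x * b.x, a.y * b.y, a.z * b.z); }
__device__ float3& operator+=(float3& a, float3 b) { a.x += b.x; a.y += b.y; a.z += b.z; return a; }

// Assumed interface: one inference pass of the cache network for one query.
__device__ float3 query_radiance_cache(const PathVertex& v);

// Assumed interface: queues a (query, target) pair for the next training step.
__device__ void record_training_target(const PathVertex& v, float3 radiance);

// A small, fixed fraction of paths is traced further to generate targets.
__device__ bool is_training_path(float u) { return u < 0.03f; }

// Called at each non-specular path vertex. Returns true if the path ends here
// because the cache supplies the remaining scattered radiance.
__device__ bool try_terminate_into_cache(const PathVertex& v, int bounce,
                                         bool training, float3 throughput,
                                         float3& radiance)
{
    if (training || bounce < 1)       // training paths and primary hits continue
        return false;
    radiance += throughput * query_radiance_cache(v);
    return true;
}
```

A training path would instead call record_training_target at each of its vertices once the radiance accumulated beyond that vertex is known, and those pairs drive a few gradient steps on the cache network every frame.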
-
Publication Number: US20230230310A1
Publication Date: 2023-07-20
Application Number: US18184519
Filing Date: 2023-03-15
Applicant: NVIDIA Corporation
Inventor: Thomas Müller, Nikolaus Binder, Fabrice Pierre Armand Rousselle, Jan Novák, Alexander Georg Keller
CPC classification number: G06T15/005, G06T15/06, G06T15/506, G06N3/10, G06T2210/52
Abstract: A fully-connected neural network may be configured for execution by a processor as a fully-fused neural network by limiting slow global memory accesses to reading and writing inputs to and outputs from the fully-connected neural network. The computational cost of a fully-connected neural network scales quadratically with its width, whereas its memory traffic scales linearly. Modern graphics processing units typically have much greater computational throughput than memory bandwidth, so for narrow, fully-connected neural networks the linear memory traffic is the bottleneck. The key to improving performance of the fully-connected neural network is to minimize traffic to slow “global” memory (off-chip memory and high-level caches) and to fully utilize fast on-chip memory (low-level caches, “shared” memory, and registers), which is achieved by the fully-fused approach. A real-time neural radiance caching technique for path-traced global illumination is implemented using the fully-fused neural network for caching scattered radiance components of global illumination.
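The memory-traffic argument can be made concrete with a simplified kernel. The sketch below is not the patented implementation: it assumes a fixed width of 64 neurons, ReLU activations, plain FMA arithmetic instead of tensor cores, and full precision. What it does show is the fully-fused structure, where a single kernel evaluates all layers, activations never leave on-chip shared memory, and global memory is touched only for the network inputs, the outputs, and one staging read of each weight matrix per tile of samples.

```cuda
// Fully-fused evaluation of a narrow multilayer perceptron: a sketch under
// simplifying assumptions (hypothetical layout, no tensor cores).
#define WIDTH  64   // neurons per layer; also the thread block size
#define LAYERS 4    // number of weight matrices applied
#define TILE   64   // samples processed per thread block

__global__ void fused_mlp_forward(const float* __restrict__ weights, // LAYERS*WIDTH*WIDTH
                                  const float* __restrict__ inputs,  // batch*WIDTH
                                  float* __restrict__ outputs)       // batch*WIDTH
{
    __shared__ float w_s[WIDTH * WIDTH];   // current layer's weights
    __shared__ float act[TILE][WIDTH];     // all activations stay on chip

    const int tid  = threadIdx.x;          // one thread per neuron (column)
    const int base = blockIdx.x * TILE;    // first sample of this block's tile

    // Read the tile's inputs from global memory exactly once.
    for (int s = 0; s < TILE; ++s)
        act[s][tid] = inputs[(base + s) * WIDTH + tid];
    __syncthreads();

    for (int layer = 0; layer < LAYERS; ++layer) {
        // Stage this layer's weight matrix in shared memory, cooperatively.
        for (int i = tid; i < WIDTH * WIDTH; i += WIDTH)
            w_s[i] = weights[layer * WIDTH * WIDTH + i];
        __syncthreads();

        // Each thread produces one output neuron for every sample in the tile.
        for (int s = 0; s < TILE; ++s) {
            float sum = 0.0f;
            for (int i = 0; i < WIDTH; ++i)
                sum += w_s[tid * WIDTH + i] * act[s][i];
            __syncthreads();                   // everyone finished reading act[s]
            act[s][tid] = fmaxf(sum, 0.0f);    // ReLU, overwrite in place
        }
        __syncthreads();                       // done with w_s for this layer
    }

    // Write the tile's outputs back to global memory exactly once.
    for (int s = 0; s < TILE; ++s)
        outputs[(base + s) * WIDTH + tid] = act[s][tid];
}
```

A launch such as fused_mlp_forward<<<batch / TILE, WIDTH>>>(w, in, out) evaluates the whole network in one kernel, whereas an unfused implementation would write and re-read every intermediate activation through global memory once per layer.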
-
Publication Number: US20210294945A1
Publication Date: 2021-09-23
Application Number: US17083787
Filing Date: 2020-10-29
Applicant: NVIDIA Corporation
Inventor: Thomas Müller, Fabrice Pierre Armand Rousselle, Alexander Georg Keller, Jan Novák
Abstract: Monte Carlo and quasi-Monte Carlo integration are simple numerical recipes for solving complicated integration problems, such as valuing financial derivatives or synthesizing photorealistic images by light transport simulation. A drawback of a straightforward application of (quasi-)Monte Carlo integration is its relatively slow convergence rate, which manifests as high error of Monte Carlo estimators. Neural control variates may be used to reduce error in parametric (quasi-)Monte Carlo integration, providing more accurate solutions in less time. A neural network system has sufficient approximation power for estimating integrals and is efficient to evaluate. The efficiency results from using a first neural network to infer the integral of the control variate and using normalizing flows to model the shape of the control variate.
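The division of labor described above can be summarized with the standard control-variate identity; the notation below is generic rather than taken from the claims. If the control variate $g$ has a known integral $G = \int g(x)\,\mathrm{d}x$ (here, the value inferred by the first neural network), the target integral splits into that known part plus a residual that is estimated by (quasi-)Monte Carlo sampling:

$$ F = \int f(x)\,\mathrm{d}x = G + \int \bigl(f(x) - g(x)\bigr)\,\mathrm{d}x \;\approx\; G + \frac{1}{N}\sum_{k=1}^{N} \frac{f(x_k) - g(x_k)}{p(x_k)}, \qquad x_k \sim p. $$

The closer $g$ tracks the shape of $f$, the smaller the variance of the residual term, which is why the shape of the control variate is modeled with an expressive normalizing flow.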
-
Publication Number: US11055381B1
Publication Date: 2021-07-06
Application Number: US16900046
Filing Date: 2020-06-12
Applicant: NVIDIA Corporation
Inventor: David Augustus Hart, Matthew Milton Pharr, Thomas Müller, Ward Lopes, Morgan McGuire, Peter Schuyler Shirley
Abstract: Sampling a function is used in many applications, such as rendering images. The challenge is selecting the best samples so as to minimize computation while producing accurate results. An alternative is to use a larger number of less carefully selected samples in an attempt to increase accuracy. For a function that is an integral, such as the functions used to render images, a sample distribution may be computed by inverting the integral. Unfortunately, for many integrals it is neither easy nor practical to compute the inverted integral. Instead, warp functions may be combined to provide a sample distribution that accurately approximates the factors of the product being integrated. Each warp function approximates an inverted factor of the product while accounting for the effects of the warp functions approximating the other factors in the product. The selected warp functions are customized or “fitted” to implement importance sampling for the approximated product.
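The probability density induced by a chain of warps follows directly from the change-of-variables rule; the notation below is generic and only illustrates why composed per-factor warps still yield a valid sampling density. Pushing a uniform sample $u$ through warps $w_1$ and then $w_2$, each chosen to approximately invert the distribution of one factor of the product being integrated, gives

$$ x = w_2\bigl(w_1(u)\bigr), \qquad p(x) = \bigl|\det J_{w_1}(u)\bigr|^{-1}\,\bigl|\det J_{w_2}\bigl(w_1(u)\bigr)\bigr|^{-1}, $$

so the Monte Carlo weight $f(x)/p(x)$ stays unbiased even when the fitted warps only approximate the product's factors; fitting the warps to account for one another reduces variance rather than being required for correctness.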
-
Publication Number: US20220284658A1
Publication Date: 2022-09-08
Application Number: US17340283
Filing Date: 2021-06-07
Applicant: NVIDIA Corporation
Inventor: Thomas Müller, Nikolaus Binder, Fabrice Pierre Armand Rousselle, Jan Novák, Alexander Georg Keller
Abstract: A fully-connected neural network may be configured for execution by a processor as a fully-fused neural network by limiting slow global memory accesses to reading and writing inputs to and outputs from the fully-connected neural network. The computational cost of a fully-connected neural network scales quadratically with its width, whereas its memory traffic scales linearly. Modern graphics processing units typically have much greater computational throughput than memory bandwidth, so for narrow, fully-connected neural networks the linear memory traffic is the bottleneck. The key to improving performance of the fully-connected neural network is to minimize traffic to slow “global” memory (off-chip memory and high-level caches) and to fully utilize fast on-chip memory (low-level caches, “shared” memory, and registers), which is achieved by the fully-fused approach. A real-time neural radiance caching technique for path-traced global illumination is implemented using the fully-fused neural network for caching scattered radiance components of global illumination.
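The quadratic-compute versus linear-traffic claim can be made explicit with a simple operation count; the figures below are generic estimates, not measurements from the patent. For one layer of width $W$ evaluated over a batch of $B$ samples,

$$ \text{FLOPs} \approx 2\,B\,W^{2}, \qquad \text{activation traffic} \approx B\,W \ \text{values}, \qquad \text{arithmetic intensity} \approx 2W \ \text{FLOPs per value}. $$

For a narrow network, say $W = 64$, roughly 128 floating-point operations are available to hide each activation value that would cross the memory interface, which is typically well below what a modern GPU needs to remain compute-bound, especially with low-precision tensor-core arithmetic. Keeping the activations in on-chip memory, as the fully-fused approach does, is what removes that bottleneck.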
-
Publication Number: US11816404B2
Publication Date: 2023-11-14
Application Number: US17083787
Filing Date: 2020-10-29
Applicant: NVIDIA Corporation
Inventor: Thomas Müller, Fabrice Pierre Armand Rousselle, Alexander Georg Keller, Jan Novák
CPC classification number: G06F30/27, G06F17/11, G06N3/045, G06T15/06, G06T15/506, G06F2111/10
Abstract: Monte Carlo and quasi-Monte Carlo integration are simple numerical recipes for solving complicated integration problems, such as valuing financial derivatives or synthesizing photorealistic images by light transport simulation. A drawback of a straightforward application of (quasi-)Monte Carlo integration is its relatively slow convergence rate, which manifests as high error of Monte Carlo estimators. Neural control variates may be used to reduce error in parametric (quasi-)Monte Carlo integration, providing more accurate solutions in less time. A neural network system has sufficient approximation power for estimating integrals and is efficient to evaluate. The efficiency results from using a first neural network to infer the integral of the control variate and using normalizing flows to model the shape of the control variate.
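One way to read the split between the two components (generic notation, not the claim language): a normalizing flow supplies a density $q(x;\theta)$ that integrates to one by construction, so scaling it by a separately inferred scalar $G$ yields a control variate whose integral is known exactly:

$$ g(x) = G \cdot q(x;\theta), \qquad \int q(x;\theta)\,\mathrm{d}x = 1 \;\Rightarrow\; \int g(x)\,\mathrm{d}x = G. $$

The flow's invertible transform keeps $q$ cheap to evaluate while remaining expressive enough to follow the integrand's shape, and the first neural network only has to predict the scalar $G$.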
-
Publication Number: US11631210B2
Publication Date: 2023-04-18
Application Number: US17340283
Filing Date: 2021-06-07
Applicant: NVIDIA Corporation
Inventor: Thomas Müller, Nikolaus Binder, Fabrice Pierre Armand Rousselle, Jan Novák, Alexander Georg Keller
Abstract: A fully-connected neural network may be configured for execution by a processor as a fully-fused neural network by limiting slow global memory accesses to reading and writing inputs to and outputs from the fully-connected neural network. The computational cost of a fully-connected neural network scales quadratically with its width, whereas its memory traffic scales linearly. Modern graphics processing units typically have much greater computational throughput than memory bandwidth, so for narrow, fully-connected neural networks the linear memory traffic is the bottleneck. The key to improving performance of the fully-connected neural network is to minimize traffic to slow “global” memory (off-chip memory and high-level caches) and to fully utilize fast on-chip memory (low-level caches, “shared” memory, and registers), which is achieved by the fully-fused approach. A real-time neural radiance caching technique for path-traced global illumination is implemented using the fully-fused neural network for caching scattered radiance components of global illumination.
-
Publication Number: US11610360B2
Publication Date: 2023-03-21
Application Number: US17340222
Filing Date: 2021-06-07
Applicant: NVIDIA Corporation
Inventor: Thomas Müller, Fabrice Pierre Armand Rousselle, Jan Novák, Alexander Georg Keller
Abstract: A real-time neural radiance caching technique for path-traced global illumination is implemented using a neural network for caching scattered radiance components of global illumination. The neural (network) radiance cache handles fully dynamic scenes and makes no assumptions about the camera, lighting, geometry, or materials. In contrast with conventional caching, the data-driven approach sidesteps many difficulties of caching algorithms, such as locating, interpolating, and updating cache points. The neural radiance cache is trained via online learning during rendering. Advantages of the neural radiance cache are noise reduction and real-time performance. Importantly, the runtime overhead and memory footprint of the neural radiance cache are stable and independent of scene complexity.
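To illustrate why the runtime overhead can remain independent of scene complexity, the sketch below shows a per-frame loop in which the number of cache queries, training records, and gradient steps is bounded by the frame's sample budget and the fixed network size rather than by the scene. Every type and function here is a hypothetical placeholder, not the patented interface.

```cuda
#include <vector>

// Hypothetical placeholder types; only the control flow matters here.
struct Scene;
struct Camera;
struct RadianceCacheNet;

struct TrainingRecord { float3 query_position; float3 target_radiance; };

// Assumed: path traces one frame. Most paths terminate into the cache; a
// small, fixed fraction continues and appends (query, target) records.
void trace_paths(const Scene&, const Camera&, const RadianceCacheNet&,
                 std::vector<TrainingRecord>& records);

// Assumed: one stochastic-gradient step of the cache network on a batch.
void train_step(RadianceCacheNet&, const std::vector<TrainingRecord>&, int batch);

void render_frame(const Scene& scene, const Camera& camera, RadianceCacheNet& cache)
{
    std::vector<TrainingRecord> records;
    trace_paths(scene, camera, cache, records);

    // A few training steps per frame keep the cache tracking dynamic lighting
    // and geometry. Their cost depends on the record count and the network
    // size, both bounded per frame, not on how complex the scene is.
    for (int i = 0; i < 4; ++i)
        train_step(cache, records, /*batch=*/16384);
}
```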
-
Publication Number: US11935179B2
Publication Date: 2024-03-19
Application Number: US18184519
Filing Date: 2023-03-15
Applicant: NVIDIA Corporation
Inventor: Thomas Müller, Nikolaus Binder, Fabrice Pierre Armand Rousselle, Jan Novák, Alexander Georg Keller
CPC classification number: G06T15/06, G06N3/10, G06T15/005, G06T15/506, G06T2210/52
Abstract: A fully-connected neural network may be configured for execution by a processor as a fully-fused neural network by limiting slow global memory accesses to reading and writing inputs to and outputs from the fully-connected neural network. The computational cost of a fully-connected neural network scales quadratically with its width, whereas its memory traffic scales linearly. Modern graphics processing units typically have much greater computational throughput than memory bandwidth, so for narrow, fully-connected neural networks the linear memory traffic is the bottleneck. The key to improving performance of the fully-connected neural network is to minimize traffic to slow “global” memory (off-chip memory and high-level caches) and to fully utilize fast on-chip memory (low-level caches, “shared” memory, and registers), which is achieved by the fully-fused approach. A real-time neural radiance caching technique for path-traced global illumination is implemented using the fully-fused neural network for caching scattered radiance components of global illumination.
-
Publication Number: US20240020443A1
Publication Date: 2024-01-18
Application Number: US18478025
Filing Date: 2023-09-29
Applicant: NVIDIA Corporation
Inventor: Thomas Müller, Fabrice Pierre Armand Rousselle, Alexander Georg Keller, Jan Novák
CPC classification number: G06F30/27, G06T15/06, G06F17/11, G06T15/506, G06N3/045, G06F2111/10
Abstract: Monte Carlo and quasi-Monte Carlo integration are simple numerical recipes for solving complicated integration problems, such as valuing financial derivatives or synthesizing photorealistic images by light transport simulation. A drawback of a straightforward application of (quasi-)Monte Carlo integration is its relatively slow convergence rate, which manifests as high error of Monte Carlo estimators. Neural control variates may be used to reduce error in parametric (quasi-)Monte Carlo integration, providing more accurate solutions in less time. A neural network system has sufficient approximation power for estimating integrals and is efficient to evaluate. The efficiency results from using a first neural network to infer the integral of the control variate and using normalizing flows to model the shape of the control variate.