PyNeRF: Pyramidal Neural Radiance Fields
Turki et al., 2023 - Google Patents
- Document ID: 2727718855567085966
- Authors: Turki H, Zollhöfer M, Richardt C, Ramanan D
- Publication year: 2023
- Publication venue: Advances in Neural Information Processing Systems
Snippet
Abstract: Neural Radiance Fields (NeRFs) can be dramatically accelerated by spatial grid representations. However, they do not explicitly reason about scale and so introduce aliasing artifacts when reconstructing scenes captured at different camera distances. Mip …
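The truncated snippet states the core problem (grid-backed NeRFs are fast but do not reason about scale, so they alias across camera distances), and the title suggests a pyramid of representations as the remedy. Below is a minimal, hypothetical Python sketch of that general idea, assuming a mipmap-style rule that routes each sample to a grid level matching its footprint; the classes `GridLevel` and `PyramidField` and the `sample_radius` parameter are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch (not the authors' code): route each sample to a grid level
# whose voxel size matches the sample's footprint, mipmap-style.
import numpy as np

class GridLevel:
    """A single voxel grid of learned features at one resolution."""
    def __init__(self, resolution, feature_dim=8, seed=0):
        rng = np.random.default_rng(seed)
        self.resolution = resolution
        self.features = rng.normal(size=(resolution, resolution, resolution, feature_dim))

    def query(self, xyz):
        """Look up the feature of the voxel containing each point in [0, 1)^3."""
        idx = np.clip((xyz * self.resolution).astype(int), 0, self.resolution - 1)
        return self.features[idx[:, 0], idx[:, 1], idx[:, 2]]

class PyramidField:
    """A pyramid of grids, coarse to fine; samples are routed by footprint size."""
    def __init__(self, resolutions=(16, 32, 64, 128)):
        self.levels = [GridLevel(r, seed=i) for i, r in enumerate(resolutions)]
        self.resolutions = np.array(resolutions, dtype=float)

    def query(self, xyz, sample_radius):
        """sample_radius: per-sample footprint radius in scene units.
        Pick the coarsest level whose voxel edge is <= the footprint (clamped)."""
        voxel_sizes = 1.0 / self.resolutions               # decreasing with level
        level_idx = np.searchsorted(-voxel_sizes, -sample_radius)
        level_idx = np.clip(level_idx, 0, len(self.levels) - 1)
        out = np.empty((xyz.shape[0], self.levels[0].features.shape[-1]))
        for li in np.unique(level_idx):                    # batch queries per level
            mask = level_idx == li
            out[mask] = self.levels[li].query(xyz[mask])
        return out

# Usage: distant samples (large footprint) hit coarse grids, near samples hit fine grids.
field = PyramidField()
points = np.random.rand(4, 3)
radii = np.array([0.2, 0.05, 0.01, 0.002])   # footprint shrinks as the camera nears
features = field.query(points, radii)
print(features.shape)                         # (4, 8)
```

In this sketch a larger per-sample footprint selects a coarser grid, which is the usual way level-of-detail schemes suppress aliasing when the same scene is viewed from very different distances.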
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
      - G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
        - G06K9/36—Image preprocessing, i.e. processing the image information without deciding about the identity of the image
          - G06K9/46—Extraction of features or characteristics of the image
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T3/00—Geometric image transformation in the plane of the image, e.g. from bit-mapped to bit-mapped creating a different image
        - G06T3/40—Scaling the whole image or part thereof
      - G06T7/00—Image analysis
        - G06T7/40—Analysis of texture
      - G06T9/00—Image coding, e.g. from bit-mapped to non bit-mapped
        - G06T9/001—Model-based coding, e.g. wire frame
      - G06T11/00—2D [Two Dimensional] image generation
      - G06T13/00—Animation
      - G06T15/00—3D [Three Dimensional] image rendering
        - G06T15/04—Texture mapping
        - G06T15/06—Ray-tracing
        - G06T15/10—Geometric effects
          - G06T15/20—Perspective computation
            - G06T15/205—Image-based rendering
        - G06T15/50—Lighting effects
      - G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
        - G06T17/05—Geographic models
      - G06T2200/00—Indexing scheme for image data processing or generation, in general
      - G06T2207/00—Indexing scheme for image analysis or image enhancement
        - G06T2207/10—Image acquisition modality
          - G06T2207/10024—Color image
        - G06T2207/20—Special algorithmic details
      - G06T2210/00—Indexing scheme for image generation or computer graphics
        - G06T2210/32—Image data format
Similar Documents
| Publication | Title |
|---|---|
| Fridovich-Keil et al. | K-planes: Explicit radiance fields in space, time, and appearance |
| Turki et al. | PyNeRF: Pyramidal neural radiance fields |
| Turki et al. | Mega-NeRF: Scalable construction of large-scale NeRFs for virtual fly-throughs |
| Liu et al. | Neural sparse voxel fields |
| Jiang et al. | GaussianShader: 3D Gaussian splatting with shading functions for reflective surfaces |
| Kopf et al. | One shot 3D photography |
| Chen et al. | MobileNeRF: Exploiting the polygon rasterization pipeline for efficient neural field rendering on mobile architectures |
| Morgenstern et al. | Compact 3D scene representation via self-organizing Gaussian grids |
| Wang et al. | Adaptive O-CNN: A patch-based deep representation of 3D shapes |
| Kratimenos et al. | DynMF: Neural motion factorization for real-time dynamic view synthesis with 3D Gaussian splatting |
| Müller et al. | AutoRF: Learning 3D object radiance fields from single view observations |
| Wei et al. | Fast texture synthesis using tree-structured vector quantization |
| Guo et al. | VMesh: Hybrid volume-mesh representation for efficient view synthesis |
| WO2022198684A1 (en) | Methods and systems for training quantized neural radiance field |
| Wan et al. | Learning neural duplex radiance fields for real-time view synthesis |
| Fischer et al. | Dynamic 3D Gaussian fields for urban areas |
| Mihajlovic et al. | SplatFields: Neural Gaussian splats for sparse 3D and 4D reconstruction |
| Jang et al. | D-TensoRF: Tensorial radiance fields for dynamic scenes |
| Wan et al. | Superpoint Gaussian splatting for real-time high-fidelity dynamic scene reconstruction |
| Li et al. | HO-Gaussian: Hybrid optimization of 3D Gaussian splatting for urban scenes |
| Hwang et al. | VEGS: View extrapolation of urban scenes in 3D Gaussian splatting using learned priors |
| Ververas et al. | SAGS: Structure-aware 3D Gaussian splatting |
| Li et al. | DGNR: Density-guided neural point rendering of large driving scenes |
| Lai et al. | Fast radiance field reconstruction from sparse inputs |
| Zhao et al. | TCLC-GS: Tightly coupled LiDAR-camera Gaussian splatting for autonomous driving: Supplementary materials |