Hybrid geometry- and image-based modeling and rendering systems map photographs of a real-world environment onto the surfaces of a 3D model to achieve photorealism and visual complexity in synthetic images rendered from arbitrary viewpoints. A primary challenge in these systems is to develop algorithms that efficiently map the pixels of each photograph onto the appropriate surfaces of the 3D model, a classical visible surface determination problem. This paper describes an object-space algorithm for computing a visibility map for a set of polygons from a given camera viewpoint. The algorithm traces pyramidal beams from each camera viewpoint through a spatial data structure representing a polyhedral convex decomposition of space that stores cell, face, edge, and vertex adjacencies. Beam intersections are computed only for the polygonal faces on the boundary of each traversed cell, and thus the algorithm is output-sensitive. The algorithm also supports efficient determination of silhouette edges, which allows an image-based modeling and rendering system to avoid mapping pixels along edges whose colors are the result of averaging over several disjoint surfaces. Results reported for several 3D models indicate the method is well suited for large, densely occluded virtual environments, such as building interiors.
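The core traversal described above can be illustrated with a minimal sketch. This is not the paper's implementation: it reduces pyramidal beams to 2D angular intervals and hand-codes a two-cell decomposition, and all names (`Face`, `Cell`, `trace`, `clip`) are hypothetical. It only shows the key idea that a beam is clipped against each cell's boundary faces, recorded where a face is an opaque model surface, and pushed through the cell-adjacency structure where a face is a portal, so work is proportional to the visible complexity.

```python
# Simplified 2D sketch of output-sensitive beam traversal through a convex
# cell decomposition. Beams are angular intervals (lo, hi) as seen from the
# camera; this stands in for the paper's 3D pyramidal beams.
from dataclasses import dataclass, field

@dataclass
class Face:
    interval: tuple       # angular extent of the face from the viewpoint
    opaque: bool          # True: model surface; False: portal to a neighbor
    neighbor: int = -1    # adjacent cell id when this face is a portal
    name: str = ""

@dataclass
class Cell:
    faces: list = field(default_factory=list)

def clip(beam, interval):
    """Intersect the beam's angular interval with a face's interval."""
    lo, hi = max(beam[0], interval[0]), min(beam[1], interval[1])
    return (lo, hi) if lo < hi else None

def trace(cells, cell_id, beam, visible):
    """Push the beam through portals, recording clipped opaque faces.
    Intersections are computed only for faces of traversed cells, which is
    what makes the traversal output-sensitive."""
    for face in cells[cell_id].faces:
        part = clip(beam, face.interval)
        if part is None:
            continue                           # face outside the current beam
        if face.opaque:
            visible.append((face.name, part))  # fragment of the visibility map
        else:
            trace(cells, face.neighbor, part, visible)

# Two cells joined by a portal; the camera sits in cell 0.
cells = [
    Cell([Face((0.0, 0.4), True, name="wall-A"),
          Face((0.4, 0.7), False, neighbor=1),      # portal into cell 1
          Face((0.7, 1.0), True, name="wall-B")]),
    Cell([Face((0.3, 0.9), True, name="wall-C")]),  # partly hidden by the portal
]

visible = []
trace(cells, 0, (0.0, 1.0), visible)
print(visible)  # wall-C appears clipped to the portal's extent (0.4, 0.7)
```

Because each cell is convex, its boundary faces cannot occlude one another from an interior viewpoint, so no depth sorting is needed within a cell; occlusion between cells falls out of clipping the beam at each portal.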
All Science Journal Classification (ASJC) codes
- Human-Computer Interaction
- Computer Graphics and Computer-Aided Design
Keywords
- Beam tracing
- Image-based rendering
- Visibility map