Architectural walkthroughs and other visual simulation applications demand enormously powerful graphics hardware to achieve interactive frame rates. Standard computer graphics rendering schemes waste much computational effort processing objects that are not visible to the simulated observer. An alternative is to precompute superset visibility information about the model, by determining what portions of the model will definitely be invisible to an observer in certain locations. This information can then be used during the simulation phase to dramatically reduce the number of model entities that must be processed in each time frame.
The visibility precomputation phase first subdivides the model into cells by partitioning the space embedding the model along the planes of large opaque polygonal occluders, such as walls, floors, and ceilings. The remainder of the geometric data, for example furniture and wall trim, is considered to be non-occluding detail objects. For each cell, a coarse visibility determination is first made as to what other cells might be visible from it. The detail objects are then inserted into the subdivision, and a finer-grained visibility determination is made for these objects and stored with each cell.
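The subdivision step can be illustrated with a minimal sketch, not the paper's implementation: cells are recursively split, BSP-style, along the planes of axial occluders. The `Cell`, `subdivide`, and `leaves` names are illustrative, and the sketch assumes a 2-D axial scene.

```python
# Minimal sketch, assuming a 2-D axial scene: recursively split cells
# along occluder planes, BSP-style. Names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Cell:
    bounds: tuple                                  # axis-aligned box: (xmin, ymin, xmax, ymax)
    children: list = field(default_factory=list)   # empty for a leaf cell

def subdivide(cell, occluders):
    """Split `cell` along the first occluder plane crossing its interior.
    Each occluder is (axis, position): axis 0 splits in x, axis 1 in y."""
    for i, (axis, pos) in enumerate(occluders):
        xmin, ymin, xmax, ymax = cell.bounds
        if axis == 0 and xmin < pos < xmax:
            halves = [Cell((xmin, ymin, pos, ymax)), Cell((pos, ymin, xmax, ymax))]
        elif axis == 1 and ymin < pos < ymax:
            halves = [Cell((xmin, ymin, xmax, pos)), Cell((xmin, pos, xmax, ymax))]
        else:
            continue  # this occluder's plane does not cross the cell
        rest = occluders[:i] + occluders[i + 1:]
        cell.children = [subdivide(h, rest) for h in halves]
        return cell
    return cell  # leaf cell: no remaining occluder plane crosses it

def leaves(cell):
    """Collect the leaf cells of the finished subdivision."""
    return [cell] if not cell.children else [l for c in cell.children for l in leaves(c)]
```

Two perpendicular wall planes through the middle of a square room, for instance, yield four leaf cells, one per quadrant.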
The on-line culling phase dynamically tracks the position and field of view of the simulated observer through the cells of the spatial subdivision. The precomputed visibility information is subjected to further on-line culling operations that use the observer's exact position and field of view. The resulting reduced set of objects is issued to graphics hardware, where a discrete depth-buffer solves the hidden-surface problem in screen space.
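The on-line step can be sketched as follows, under simplifying assumptions (2-D scene, point objects, a conical field of view); `locate` and `frustum_cull` are hypothetical names, not the paper's API. The observer's cell is found, its precomputed potentially-visible set is fetched, and that set is culled against the exact field of view.

```python
# Hedged sketch of the on-line culling phase: locate the observer's cell,
# then cull its precomputed visible set against the exact field of view.
import math

def locate(leaf_cells, eye):
    """Return the potentially-visible set of the leaf cell containing the observer.
    leaf_cells: list of (bounds, pvs) pairs; bounds = (xmin, ymin, xmax, ymax)."""
    for (xmin, ymin, xmax, ymax), pvs in leaf_cells:
        if xmin <= eye[0] <= xmax and ymin <= eye[1] <= ymax:
            return pvs
    return []

def frustum_cull(pvs, eye, view_dir, half_fov_deg):
    """Keep only objects (2-D centre points) inside the observer's view cone."""
    vx, vy = view_dir
    vlen = math.hypot(vx, vy)
    cos_limit = math.cos(math.radians(half_fov_deg))
    visible = []
    for ox, oy in pvs:
        dx, dy = ox - eye[0], oy - eye[1]
        dlen = math.hypot(dx, dy)
        # Keep the object if the angle to it is within the half field of view.
        if dlen == 0 or (dx * vx + dy * vy) / (dlen * vlen) >= cos_limit:
            visible.append((ox, oy))
    return visible
```

The surviving objects would then be handed to the graphics hardware, whose depth buffer resolves the remaining hidden surfaces per pixel.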
The visibility framework is defined generally in terms of conforming spatial subdivisions that support a small number of abstract operations. All visibility determinations are proven to produce a superset of the objects actually visible to the observer. This is crucial, since omitting any visible object would cause an erroneous display. The generally small set of invisible objects that remains after the on-line culling operation is then eliminated by the depth-buffering graphics rendering hardware.
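The overestimation left after culling is resolved in screen space by the hardware depth buffer; a minimal software analogue of that per-pixel test, with names of my own choosing, is:

```python
# Minimal software depth-buffer sketch: for each pixel, the nearest
# fragment wins, which removes any invisible objects still in the set.
import math

def depth_buffer(fragments, width, height):
    """Resolve hidden surfaces per pixel.
    fragments: iterable of (x, y, depth, colour) samples."""
    depth = [[math.inf] * width for _ in range(height)]
    colour = [[None] * width for _ in range(height)]
    for x, y, z, c in fragments:
        if z < depth[y][x]:       # nearer fragment wins the pixel
            depth[y][x] = z
            colour[y][x] = c
    return colour
```

Because the test is per pixel, correctness never depends on how much the culled set overestimates visibility; overestimation costs only rendering time.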
We implemented these abstract notions for several interesting and realistic input classes, namely axial and non-axial scenes in two and three dimensions. We evaluated the usefulness of the precomputation and culling scheme using objective metrics of culling effectiveness, pixel depth complexity, and on-line culling and rendering time. The test data was a complex, three-dimensional architectural model comprising ten thousand detail objects and almost three-quarters of a million polygons. On-line frame times decreased from about ten seconds for the unprocessed model to a tenth of a second, thus accelerating frame rates by a factor of about one hundred.