Cloth is difficult to simulate because it forms small, complex folds, which make cloth configuration difficult to measure. Particular problems include fast motion (ruling out laser-ranging methods), the need for high-resolution measurement, the fact that no viewing direction can see into the folds, and the fact that many points are visible either with a small baseline or in only one view. We describe a method that recovers high-resolution measurements of the shape of real cloth. Our method uses multiple cameras, a special pattern printed on the cloth, and high shutter speeds to capture fast motion. Cameras are calibrated directly from the cloth pattern. Folds produce local occlusion effects that can make identifying feature correspondences very difficult. We build correspondences between image features and material coordinates using novel techniques that apply approximate inference to exploit both local neighborhood information and global strain information. These correspondences yield an initial reconstruction, which we polish by combining bundle adjustment with strain minimization to obtain very good 3D reconstructions of points seen in multiple views. Finally, we combine volume-occupancy cues derived from silhouettes with strain cues to obtain very good 3D reconstructions of points seen in only one view. Our observations provide both a 3D mesh and a parameterization of that mesh in material (or, equivalently, texture) coordinates. We demonstrate that our method can capture fast cloth motions and complicated configurations using a variety of natural examples, including: a view of a bent arm with extensive, complex folds at the elbow; a pair of pants moving very fast as the wearer jumps; and cloth shuddering when it is hit by dropped coins.
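The polishing step combines a standard bundle-adjustment reprojection error with a strain penalty that discourages mesh edges from stretching beyond their rest lengths. The sketch below is an illustrative objective only, not the authors' implementation: the function names, the squared-strain form of the penalty, and the weight `lam` are all assumptions.

```python
import numpy as np

def project(P, X):
    """Project 3D points X (N, 3) through a 3x4 camera matrix P,
    returning 2D image coordinates (N, 2)."""
    Xh = np.hstack([X, np.ones((len(X), 1))])  # homogeneous coordinates
    x = Xh @ P.T
    return x[:, :2] / x[:, 2:3]               # perspective divide

def strain_energy(X, edges, rest_lengths):
    """Penalize deviation of mesh edge lengths from rest lengths.
    Cloth resists stretching, so strain should stay near zero."""
    d = np.linalg.norm(X[edges[:, 0]] - X[edges[:, 1]], axis=1)
    strain = (d - rest_lengths) / rest_lengths
    return np.sum(strain ** 2)

def objective(X, cameras, observations, edges, rest_lengths, lam=10.0):
    """Sum of squared reprojection errors over all views, plus a
    weighted strain term (lam is a hypothetical trade-off weight)."""
    err = 0.0
    for P, (idx, uv) in zip(cameras, observations):
        # idx: indices of mesh points seen in this view; uv: their
        # observed 2D image positions.
        err += np.sum((project(P, X[idx]) - uv) ** 2)
    return err + lam * strain_energy(X, edges, rest_lengths)
```

In practice an objective of this shape would be minimized over the 3D point positions (and possibly the camera parameters) with a nonlinear least-squares solver; the strain term regularizes points that are weakly constrained by the image data.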



