Though the research was performed in a specialized, heavily instrumented video laboratory, it points toward a future in which large numbers of independent cameras, such as smartphones, could be used to reconstruct motion outside the lab.
In contrast to most previous work, which typically has involved just 10 to 20 video feeds, the CMU method can make use of hundreds of camera views simultaneously.
"At some point, extra camera views just become 'noise,'" said Hanbyul Joo, a Ph.D. student in the
The research team developed a technique for estimating visibility that uses motion as a cue. In contrast to motion capture systems that use balls or other markers, the researchers used established techniques for automatically identifying and tracking points based on appearance features -- in this case, distinctive patterns. For each point, the system then seeks to determine which cameras see motion that is consistent with that point.
For instance, if a point on a person's chest is being tracked and most cameras show that point is moving to the right, a camera that picks up motion in the opposite direction is probably seeing a person or object that is in between the target and the camera. Or it may indicate the person has turned and the chest is no longer visible to the camera. In either case, the system knows that camera cannot see the target point and that its video feed is not useful for 3D reconstruction involving that point.
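The motion-consistency check described above can be illustrated with a toy sketch. This is not the researchers' actual formulation; it simply compares each camera's observed 2D motion for a tracked point against the consensus direction across all cameras, and flags cameras whose observations disagree (the function name, threshold, and input layout are assumptions for illustration):

```python
import numpy as np

def estimate_visibility(flows, agreement_threshold=0.5):
    """Given per-camera 2D motion vectors for one tracked point,
    return a boolean mask of cameras whose observed motion is
    consistent with the consensus (i.e., cameras likely seeing
    the point rather than an occluder)."""
    flows = np.asarray(flows, dtype=float)
    # Normalize each camera's motion vector to a unit direction.
    norms = np.linalg.norm(flows, axis=1, keepdims=True)
    dirs = flows / np.where(norms == 0.0, 1.0, norms)
    # Consensus direction: normalized mean of the unit directions.
    mean_dir = dirs.mean(axis=0)
    mean_dir /= np.linalg.norm(mean_dir)
    # Cosine similarity of each camera's motion with the consensus;
    # a camera whose motion opposes the consensus is flagged as
    # not seeing the target point.
    agreement = dirs @ mean_dir
    return agreement > agreement_threshold

# Three cameras see the chest point moving right; a fourth sees
# leftward motion, so it is probably looking at an occluder.
flows = [[1.0, 0.0], [0.9, 0.1], [1.0, -0.1], [-1.0, 0.0]]
visible = estimate_visibility(flows)
```

In practice each camera would only "vote" if it observes motion at all, and the decision would be made jointly across many frames and points rather than from a single snapshot as shown here.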
Other researchers have been able to use images from a large number of cameras, such as smartphones, to create 3D reconstructions of still images, Joo noted. But without methods such as the visibility estimation technique, 3D motion reconstruction at such a large scale has not been possible.
In their laboratory, called the Panoptic Studio, the researchers mounted 480 video cameras on the inside of a geodesic dome.
Such a dense array of cameras enables the researchers to perform 3D motion reconstructions not previously possible. These include 3D reconstructions of a person tossing confetti into the air, with each piece of paper tracked until it reaches the floor. In another case, confetti is fed into a fan, enabling a motion capture of the air flow. "You couldn't put markers on the paper without changing the flow," Joo explained.
Likewise, such techniques might be used for reconstruction of the motion of animals, which typically can't be instrumented. The CMU researchers have used the Panoptic Studio for such experiments.
A video of the 3D reconstructions and links to the team's research paper are available on the project website, http://www.cs.cmu.edu/~hanbyulj/14/visibility.html.
The findings were presented at the Computer Vision and Pattern Recognition conference, held in June 2014 in Columbus, Ohio.
This research was supported by the National Science Foundation.