Patent number 8654130 is assigned to
The following quote was obtained by the news editors from the background information supplied by the inventors: "The present invention is generally directed to computer animation. Specifically, the present invention is directed to customizing a computer animation wireframe with three-dimensional range and color data or with a two-dimensional representation and a depth map.
"Known systems can provide generic computer animations that integrate audio and visual information. For example, these generic computer animations typically display a talking head of a human or of a cartoon animal. These generic computer animations can be used for a number of applications.
"For example, some known systems display the computer animation on a computer video monitor to interface with a human user. Other known systems can convert ASCII (American Standard Code for
"These known computer animations are based on generic animation wireframe models. Although these generic animation wireframe models are generic in the sense that the animations doe not represent a specific person; these generic models can be deformed according to a predefined set of parameters to vary the presentation from the one generic version. Deforming a generic animation wireframe model can be used to more closely resemble realistic and natural interactions, for example, human-to-human interactions. Deforming the generic model using a predefined set of parameters, however, cannot sufficiently modify the generic model to present actual people recognized by the viewer.
"To produce more realistic and natural displays for human interactions, animation wireframe models should incorporate real measurements of the structure of the desired face, as well as color, shape and size. Such information can be obtained by a three-dimensional laser scanner system that scan a person's head to produce very dense range data and color data of the head.
"Some known systems that incorporate measured three-dimensional information into generic animation wireframe models, however, suffer from several shortcomings. In general, accurately modifying generic animation wireframe models with measured three-dimensional range data requires extensive and expensive manual adjustments or automated computer-based adjustments. Manual adjustments of generic animation wireframe models can be time consuming and/or can require expensive human personal with specialized training. Automated adjustments of generic animation wireframe models can require expensive computer equipment that is generally cost-prohibitive for mass distribution and may require extensive maintenance performed by human personnel with specialized training."
In addition to the background information obtained for this patent, VerticalNews journalists also obtained the inventor's summary information for this patent: "The present invention modifies a generic animation wireframe model with measured three-dimensional range data to produce a customized animated wireframe. The present invention can also modify a generic animation wireframe model with a depth map and image to produce a customized animated wireframe. The image can be a conventional 2D image where each pel corresponds to the appropriate surface color of the face (color or black and white). The depth map is a 2D image where each pel represents the absolute or relative distance between the face model and the camera when the depth map was acquired. The present invention produces the customized animated wireframe without requiring extensive manual adjustments or elaborate computer equipment.
"The present invention modifies an animation wireframe having multiple points with three-dimensional (3D) range data. The 3D range data has a corresponding shape surface. The animation wireframe is vertically scaled based on the distances between consecutive features within the 3D range data. For each animation wireframe point, the location of the animation-wireframe point is horizontally adjusted to correspond to a point on the shape surface within a horizontal plane.
"The vertical scaling factors can be calculated based on the distances between certain points within the 3D range data. A primary point within the 3D range data corresponding to a first feature within the plurality of features can be obtained. A vertical alignment line based on the primary point can be obtained. Secondary points within the 3D range data corresponding to features that lie along the vertical alignment line can be obtained. Consequently, vertical scaling factors based on the distances between consecutive features can be calculated.
"A tertiary point within the 3D range data can be selected to define a vertical cut-off plane. For each animation wireframe point, the origin point within the horizontal plane can also be defined. The primary point, the secondary points and the tertiary point can be obtained manually or automatically.
"In another embodiment of the invention, a similar result can be accomplished by using a depth map acquired with a range finder. The scanner can consist of an active range finder using structured light or a laser measuring the time of flight. Alternatively, the scanner can be passive, using stereo or depth-from defocus. In any case, the scanner system will produce a depth map and the color data (image) showing the texture of the face. In this embodiment, the scanner can associate the depth map with the animation wireframe to ascertain the relative depth of each point on the shape surface. Since the depth map does not distinguish between parts that belong to the object of interest (here a face), the face needs to be segmented in the depth map. In one preferred embodiment, a depth map is created when the face is relatively far away from the range finder such that all points of a depth map beyond a given threshold are considered background and the remaining point are considered as the face. Since the depth map only defines the frontal distances for the animation wireframe, the scanner scales the back of the animation wireframe such that the outline of the face model as defined by the depth map is preserved. With a proper alignment, the above-outlined method can be implemented to determine, for example, the primary point, the secondary point, the vertical alignment line as well as the vertical scaling factor. In another embodiment of the present invention, horizontal scaling factor. In another embodiment of the present invention, horizontal scaling can be performed by defining a scaling line within the horizontal plane. The scaling line can be defined as containing the animation-wireframe point and an origin point. A shape-surface point is determined as the intersection of the scaling line and the shape surface. The location of the animation-wireframe point can be adjusted to correspond to the shape-surface point. This process can be repeated for each wireframe-animation point.
"In an alternative embodiment, horizontally scaling can be performed by selecting a pair of animation-wireframe points within the same horizontal plane and then defining two individual scaling lines. Two shape-surface points can be determined as the two scaling lines and the shape surface. The location of the two animation-wireframe points are then horizontally adjusted to correspond to the two shape-surface points.
"Two animation-wireframe points can be adjusted so that L.sub.W'/L.sub.W substantially equals L.sub.R'/L.sub.R, where L.sub.W' is the length of a first line connecting the first animation wireframe point and the second animation wireframe point along the animation wireframe within the horizontal plane, L.sub.W is the length of a second line along the animation wireframe within the horizontal plane, the second line being between the limit of the horizontal plane and a first intersection point where the animation wireframe within the horizontal plane intersects a perpendicular line containing the origin point and being perpendicular from the limit of the horizontal plane, L.sub.R' is the length of a third line connecting the first shape-surface point and the second shape-surface point along the shape surface within the horizontal plane, and L.sub.R is the length of a fourth line along the shape surface within the horizontal plane, the fourth line being between the limit of the horizontal plane and a second intersection point where the shape surface within the horizontal plane intersects the perpendicular line.
"Texture mapping can be provided to the animation wireframe based on color data corresponding to the 3D range data. In an alternative embodiment, an alignment point within the 3D range data can be obtained corresponding to an object within the animation wireframe that substantially moves during animation. The alignment point can be matched with the corresponding point within the animation wireframe."
For the URL and more information on this patent, see: Ostermann, Joern. Computer Readable Medium for Modifying an Animation Wire Frame. U.S. Patent Number 8654130, filed
Keywords for this news article include:
Our reports deliver fact-based news of research and discoveries from around the world. Copyright 2014, NewsRx LLC