News Column

Researchers Submit Patent Application, "3D Scene Scanner and Position and Orientation System", for Approval

July 15, 2014

By a News Reporter-Staff News Editor at Journal of Mathematics -- From Washington, D.C., VerticalNews journalists report that a patent application by the inventors Valkenburg, Robert Jan (Auckland, NZ); Penman, David William (Auckland, NZ); Schoonees, Johann August (Auckland, NZ); Alwesh, Nawar Sami (Auckland, NZ); Palmer, George Terry (Auckland, NZ), filed on December 13, 2013, was made available online on July 3, 2014.

The patent's assignee is Industrial Research Limited.

News editors obtained the following quote from the background information supplied by the inventors: "Various types of 3D scanners are available, each more suited to specific applications, e.g. for scanning small objects with high resolution or for scanning large objects with low resolution. To scan all around an object requires that either the object is moved past the scanner, e.g. on a turntable, or the scanner is moved around the object.

"Several types of known scanners are capable of capturing complete surface information of objects and scenes. Generally, these scanners can be separated into three categories: namely photogrammetric scanners, fixed station laser scanners and hand-held 3D shape scanners. The scanners generate data points or other structures representing the scene or object scanned and this data can be post-processed by software to allow visualisation and to generate 3D representations or 3D computer models of the scene or object.

"Photogrammetric systems reconstruct a 3D scene or object based on analysis of multiple overlapping 2D images. Provided common features are visible and identified in the images and camera calibration parameters are known or determined, it is possible to extract 3D metric scene or object information. In some cases, the cameras are pre-calibrated. In other cases, self-calibration is attempted based on the image matches.

"Fixed station scanners scan a scene from a fixed location. Typically, fixed station scanners are arranged to scan a modulated laser beam in two dimensions and acquire range information by measuring the phase-shift of the reflected modulated laser light or the time-of-flight of a reflected laser pulse. By panning the scanner through 360.degree., it is possible to produce a 360.degree. panoramic range map of the scene. To scan a complete scene often requires moving the fixed station scanner to a number of different scanning locations. Depending on the size of scene, scanning time is typically 10-30 minutes. Some fixed station scanners also comprise a digital camera that is arranged to capture colour information for each surface point in the scan of the scene so that dual colour and range images can be generated. Other fixed station scanners incorporate multiple lasers to allow acquisition of colour as well as range information.

"Hand-held 3D shape scanners comprise a hand-held mobile scanner-head that is commonly maneuvered by a user about the object being scanned. Typically, the scanner-head includes a range sensor for determining the local shape of the object by sensing the position in space of the surface points of the object relative to the scanner-head. For example, the range sensor may sense the position in space of the surface points via laser triangulation. The hand-held 3D shape scanners also comprise a position and orientation system that measures the position and orientation of the mobile scanner-head in space during the scan. The local shape information is then coupled with the scanner-head position and orientation information to enable a 3D computer model of the object to be constructed.

"In this specification where reference has been made to patent specifications, other external documents, or other sources of information, this is generally for the purpose of providing a context for discussing the features of the invention. Unless specifically stated otherwise, reference to such external documents is not to be construed as an admission that such documents, or such sources of information, in any jurisdiction, are prior art, or form part of the common general knowledge in the art.

"It is an object of the present invention to provide a flexible and portable 3D scene scanner that is capable of scanning wide-area scenes, or to provide a position and orientation system that is capable of sensing the pose of a mobile object in 6D, or to at least provide the public with a useful choice."

As a supplement to the background information on this patent application, VerticalNews correspondents also obtained the inventors' summary information for this patent application: "In a first aspect, the present invention broadly consists in a hand-held mobile 3D scanner for scanning a scene comprising: a range sensor that is arranged to sense the location of surface points in the scene relative to the scanner and generate representative location information; a texture sensor that is arranged to sense the texture of each surface point in the scan of the scene and generate representative texture information; a position and orientation sensor that is arranged to sense the position and orientation of the scanner during the scan of the scene by interacting with multiple reference targets located about the scene and generate representative position and orientation information; and a control system that is arranged to receive the information from each of the sensors and generate data representing the scan of the scene.

"Preferably, the data generated relates to scanned surface points in the scene and may comprise information on the 3D positions of those surface points in space and texture information in relation to those surface points. More preferably, the data may further comprise viewpoint information in relation to the viewpoint from which the surface points were scanned by the scanner.

"Preferably, the control system may be arranged to generate the texture information for the data based on texture values sensed by the texture sensor from multiple viewpoints during the scan.

"Preferably, the control system may be arranged to generate a texture model representing the scan of the scene.

"Preferably, the data generated by the control system may be in the form of rich-3D data.

"Preferably, the control system may be arranged to generate a 3D substantially photo-realistic representation of the scene from the data.

"Preferably, the control system may comprise a user interface that is operable by a user to control scanning parameters.

"Preferably, the control system may comprise an output display that is arranged to generate a progressive representation of the scene as it is being scanned.

"Preferably, the control system may be arranged to filter out data associated with scanned surface points in the scene that fall outside scanning zones that are selected by the user.

"Preferably, the control system may be arranged to increase or decrease the resolution of the range and texture sensors for particular scanning zones that are selected by the user.

"Preferably, the range sensor may comprise any one of the following: a light detection and ranging (LIDAR) device, triangulation-based device, or a non-scanning time-of-flight camera device.

"In one form, the texture sensor may comprise a colour camera that is arranged to capture digital images of the scene, each digital image comprising an array of pixels and each pixel or group of pixels corresponding to a surface point in the scan of the scene from which texture information can be extracted. In an alternative form, the texture sensor may comprise a multi-spectral laser imager that is arranged to sense texture information relating to the scanned surface points of the scene.

"In one form, the position and orientation sensor may comprise an optical tracking device that senses the position and orientation of the scanner by tracking visible reference targets located about the scene. Preferably, the optical tracking device may comprise one or more direction sensors that are arranged to detect visible reference targets and generate direction information relating to the direction of the visible reference targets relative to the scanner, the optical tracking device processing the direction information to determine the position and orientation of the scanner. More preferably, the direction sensors may be optical sensors that are each arranged to view outwardly relative to the scanner to provide direction information relating to any visible reference targets.

"The position and orientation sensor may additionally comprise an inertial sensor that is arranged to sense the position and orientation of the scanner and provide representative position and orientation information if the optical tracking device experiences target dropout.

"In a second aspect, the present invention broadly consists in a portable 3D scanning system for scanning a scene comprising: a hand-held mobile scanner comprising: a range sensor that is arranged to sense the location of surface points in the scene relative to the scanner and generate representative location information; a texture sensor that is arranged to sense the texture of each surface point in the scan of the scene and generate representative texture information; and a position and orientation sensor that is arranged to sense the position and orientation of the scanner during the scan of the scene and generate representative position and orientation information; multiple reference targets for placing randomly about the scene, the position and orientation sensor interacting with detectable reference targets to sense the position and orientation of the scanner; and a control system that is arranged to control the scanner and its sensors and the reference targets, receive the information from each of the sensors, and generate data representing the scan of the scene.

"Preferably, the data generated relates to scanned surface points in the scene and may comprise information on the 3D positions of those surface points in space and texture information in relation to those surface points. More preferably, the data may further comprise viewpoint information in relation to the viewpoint from which the surface points were scanned by the scanner.

"Preferably, the control system may be arranged to generate the texture information for the data based on texture values sensed by the texture sensor from multiple viewpoints during the scan.

"Preferably, the control system may be arranged to generate a texture model representing the scan of the scene.

"Preferably, the data generated by the control system may be in the form of rich-3D data.

"Preferably, the control system may be arranged to generate a 3D substantially photo-realistic representation of the scene from the data.

"Preferably, the control system may comprise a user interface that is operable by a user to control scanning parameters.

"Preferably, the control system may comprise an associated output display that is arranged to generate a progressive representation of the scene as it is being scanned.

"Preferably, the control system may be arranged to filter out data associated with scanned surface points in the scene that fall outside scanning zones that are selected by the user.

"Preferably, the control system may be arranged to increase or decrease the resolution of the range and texture sensors for particular scanning zones that are selected by the user.

"Preferably, the range sensor may comprise any one of the following: a light detection and ranging (LIDAR) device, triangulation-based device, or a non-scanning time-of-flight camera device.

"In one form, the texture sensor may comprise a colour camera that is arranged to capture digital images of the scene, each digital image comprising an array of pixels and each pixel or group of pixels corresponding to a surface point in the scan of the scene from which texture information can be extracted. In another form, the texture sensor may comprise a multi-spectral laser imager that is arranged to sense texture information relating to the scanned surface points of the scene.

"In one form, the position and orientation sensor may comprise an optical tracking device that senses the position and orientation of the scanner by tracking visible reference targets located about the scene. Preferably, the optical tracking device may comprise one or more direction sensors that are arranged to detect visible reference targets and generate direction information relating to the direction of the visible reference targets relative to the scanner, the optical tracking device processing the direction information to determine the position and orientation of the scanner. More preferably, the direction sensors may be optical sensors that are each arranged to view outwardly relative to the scanner to provide direction information relating to any visible reference targets.

"In one form, the position and orientation sensor may additionally comprise an inertial sensor that is arranged to sense the position and orientation of the scanner and provide representative position and orientation information if the optical tracking device experiences target dropout.

"In a third aspect, the present invention broadly consists in a method of scanning a scene comprising the steps of operating a hand-held mobile scanner to scan the scene, the scanner comprising: a range sensor that is arranged to sense the shape of the object(s) in the scene on a surface point-by-point basis and generate representative shape information; a texture sensor that is arranged to sense the texture of the objects) in the scene on a surface point-by-point basis and generate representative texture information; and a position and orientation sensor that is arranged to sense the position and orientation of the scanner in a local reference frame by interacting with multiple reference targets located about the scene and generate representative position and orientation information; obtaining the shape, texture, and position and orientation information from the sensors; processing the shape, texture, and position and orientation information; and generating data representing the scan of the scene.

"Preferably, the step of processing the shape, texture, and position and orientation information may comprise extracting information about each surface point of the surfaces and objects in the scan on a point-by-point basis by computing the 3D position of each surface point in the local reference frame from the shape information and the position and orientation information; generating the texture information around the region of each surface point from the texture information from the texture sensor and the position and orientation information; and extracting the viewpoint from which the surface point was scanned by the scanner from the position and orientation information.

"Preferably, the step of generating data representing the scene may comprise constructing rich-3D data.

"Preferably, the method may further comprise the step of processing the data to generate a 3D substantially photo-realistic representation of the scene for display.

"Preferably, the method may further comprise the step of placing reference targets about the scene and operating the position and orientation sensor of the scanner to interact with detectable reference targets to sense the position and orientation of the scanner in the local reference frame.

"Preferably, the step of operating the hand-held mobile scanner to scan the scene may comprise scanning the surfaces and objects of the scene from multiple viewpoints.

"Preferably, the step of operating the hand-held mobile scanner to scan the scene may comprise first initially setting scanning zones within the scene such that processing step discards any information relating to surface points of objects in the scene that fall outside the scanning zones.

"In a fourth aspect, the present invention broadly consists in a mobile 3D scene scanner for scanning a scene comprising: a range sensor arranged to sense the shape of the object(s) in the scene on a surface point-by-point basis and generate representative shape information; a texture sensor arranged to sense the texture of the object(s) in the scene on a surface point-by-point basis and generate representative texture information; a position and orientation sensor arranged to sense the position and orientation of the scanner in a local reference frame by interacting with multiple reference targets located about the scene and generate representative position and orientation information; and a control system arranged to control each of the sensors, receive the information from each of the sensors, and generate data representing the surface points of the object(s) scanned in the scene.

"Preferably, the data generated by the control system may be in the form of rich-3D data.

"Preferably, the range sensor may comprise any one of the following: a light detection and ranging (LIDAR) device, triangulation-based device, or a non-scanning time-of-flight camera device.

"In one form, the texture sensor may comprise a colour camera that is arranged to capture digital images of the scene, each digital image comprising an array of pixels and each pixel or group of pixels corresponding to a surface point of an object in the scan of the scene from which texture information can be extracted. In another form, the texture sensor may comprise a multi-spectral laser imager that is arranged to sense texture information relating to the object(s) in the scene on a surface point-by-point basis.

"In a fifth aspect, the present invention broadly consists in a position and orientation system for sensing the position and orientation of a mobile object that is moveable in an environment comprising: multiple reference targets locatable in random positions within the environment to define a local reference frame; an optical tracking device mounted to the mobile object comprising one or more direction sensors that are arranged to detect visible reference targets and generate direction information relating to the direction of the visible reference targets relative to the optical tracking device; and a control system arranged to operate the optical tracking device, receive the direction information, and process the direction information to initially determine the 3D positions of the reference targets and then generate position and orientation information relating to the position and orientation of the mobile object in the local reference frame during movement of the mobile object.

"Preferably, the direction sensors of the optical tracking device may be optical sensors that are each arranged to view outwardly relative to the scanner to provide direction information relating to any visible reference targets. More preferably, the optical sensors may comprise an arrangement of cameras that view outwardly relative to the mobile object to provide direction information relating to any visible reference targets.

"Preferably, the one or more direction sensors of the optical tracking device may be arranged to form an omnidirectional direction sensor.

"Preferably, the reference targets may each comprise a switchable light source that is arranged to emit light for sensing by the direction sensors of the optical tracking device.

"Preferably, the control system is arranged to auto-calibrate in operation by automatically determining the 3D position of visible reference targets in the environment by processing direction information from the optical tracking device. More preferably, the control system may be arranged to auto-calibrate at start-up and continue to periodically auto-calibrate during operation to register the movement, removal, and addition of reference targets to the environment.

"Preferably, the control system may be arranged to provide the user with feedback on the quality of the distribution of the reference targets within the environment after auto-calibration has taken place, the distribution of the reference targets affecting the accuracy of the position and orientation information generated.

"Preferably, the control system may comprise a user interface that is operable by a user to control the system and an associated output display for presenting the position and orientation information.

"Preferably, the position and orientation system may further comprise an inertial sensor that is arranged to sense the position and orientation of the mobile object and provide representative position and orientation information if the optical tracking device experiences target dropout.

"In a sixth aspect, the present invention broadly consists in a method of sensing the position and orientation of a mobile object that is moveable in an environment comprising the steps of: placing multiple reference targets at random positions within the environment to define a local reference frame; mounting an optical tracking device to the mobile object, the optical tracking device comprising one or more direction sensors that are arranged to detect visible reference targets and generate direction information relating to the direction of the visible reference targets relative to the optical tracking device; operating the optical tracking device to track and sense visible reference targets as it moves with the mobile object in the environment and generate direction information; and processing the direction information to initially determine the 3D positions of the reference targets and then generate position and orientation information relating to the position and orientation of the mobile object in the local reference frame during movement of the mobile object.

"Preferably, the step operating the optical tracking device to track and sense visible reference targets may further comprise the step of operating the optical tracking device to initially determine the 3D position of visible reference targets in the environment by auto-calibrating from the direction information.

"Preferably, the step of auto-calibrating may comprise: moving the mobile object into N locations in the environment and sensing direction information for visible reference targets at each location; calculating initial estimates of the position and orientation of the mobile object at the N locations; calculating accurate estimates of the position and orientation of the mobile object; and reconstructing the reference target 3D positions by triangulation using the direction information and the accurate estimates of the position and orientation of the mobile object. In one form, the step of calculating initial estimates may comprise executing a closed form algorithm. In one form, the step of calculating accurate estimates may comprise executing a non-linear minimisation algorithm.

"Preferably, the step of auto-calibrating occurs periodically to register the movement, removal, and addition of reference targets.

"Preferably, the method of sensing the position and orientation of a mobile object may further comprise the step of feeding back information on the quality of the distribution of the reference targets within the environment after auto-calibration step has taken place, the distribution of the reference targets affecting the accuracy of the position and orientation-information generated.

"Preferably, the step of processing the direction information to generate position and orientation information may comprise: calculating an initial estimate of the position and orientation of the mobile object using a boot-strapping process; predicting the current position and orientation of the mobile object based on previous position and orientation estimate; associating the sensed direction information with specific individual reference target 3D positions; and updating the current position and orientation prediction using the individual reference target 3D positions and direction information. In one form, the step of predicting the current position and orientation of the mobile object may comprise extrapolating from the previous position and orientation estimate. In one form, the step of updating the current position and orientation predication may comprise executing a non-linear algorithm.

"Preferably, the method of sensing the position and orientation of a mobile object may further comprise mounting an inertial sensor to the mobile object and operating the inertial sensor to sense the position and orientation of the mobile object and generate representative position and orientation information if the optical tracking device experiences target dropout.

"In this specification and the accompanying claims, the term 'texture' is intended to cover any information relating to the surface texture including, but not limited to, colour, such as hue, brightness and saturation, or grey-scale intensity information.

"In this specification and the accompanying claims, the term 'scene' is intended to cover any indoor or outdoor environment, surfaces and objects together within such environments, and also individual objects in isolation within such environments.

"In this specification and the accompanying claims, the term 'portable' in the context of a system is intended to cover any system that has components that may be packed into a carry-case or that are relatively easily transportable to different locations.

"Unless the context requires otherwise, the term 'targets' in this specification and the accompanying claims is intended to cover any powered or non-powered object, device, marker, landmark, beacon, pattern or the like.

"In this specification and the accompanying claims, the phrase 'visible reference targets' is intended to cover reference targets that are visible to the optical tracking device of the position and orientation system in that the targets are able to be sensed and not occluded.

"In this specification and the accompanying claims, the phrase 'surface points' is intended to refer to the points or patches on the surface of objects and surroundings within a scene being scanned.

"In this specification and the accompanying claims, the phrase 'rich-3D data' is intended to capture 3D point cloud data in which each surface point or patch has associated viewpoint information and texture values obtained from different viewpoints or an associated texture model constructed from texture information sensed from multiple viewpoints during the scan for that surface point or patch.

"The term 'comprising' as used in this specification and claims means 'consisting at least in part of', that is to say when interpreting statements in this specification and claims which include that term, the features, prefaced by that term in each statement, all need to be present but other features can also be present.

"The invention consists in the foregoing and also envisages constructions of which the following gives examples only.

BRIEF DESCRIPTION OF THE DRAWINGS

"Preferred embodiments of the 3D scene scanner and the position and orientation system will be described by way of example only and with reference to the drawings, in which:

"FIG. 1 shows a schematic diagram of the main modules of the preferred form 3D scene scanner;

"FIG. 2 shows a perspective view of a hand-held form of the 3D scene scanner;

"FIG. 3 shows a schematic diagram of the coordinate systems employed by the 3D scene scanner algorithms;

"FIG. 4 shows a schematic diagram of the functional architecture employed by the 3D scene scanner;

"FIG. 5 shows a schematic diagram of the 3D scene scanner in use at the scene of a car crash;

"FIG. 6 shows a perspective view of one form of optical tracking device of the position and orientation system;

"FIG. 7 shows a perspective view of another form of optical tracking device of the position and orientation system; and

"FIG. 8 shows a schematic diagram of the position and orientation system detecting visible reference targets in a scene in order to sense position and orientation."

For additional information on this patent application, see: Valkenburg, Robert Jan; Penman, David William; Schoonees, Johann August; Alwesh, Nawar Sami; Palmer, George Terry. 3D Scene Scanner and Position and Orientation System. Filed December 13, 2013 and posted July 3, 2014. Patent URL: http://appft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&u=%2Fnetahtml%2FPTO%2Fsearch-adv.html&r=5360&p=108&f=G&l=50&d=PG01&S1=20140626.PD.&OS=PD/20140626&RS=PD/20140626

Keywords for this news article include: Algorithms, Industrial Research Limited.

Our reports deliver fact-based news of research and discoveries from around the world. Copyright 2014, NewsRx LLC


Source: Journal of Mathematics

