Smart Point Cloud Technology Wins 2021 R&D 100 Award

The Ministry of Science and Technology (MOST) has always valued the integration of science and technology with the humanities. NARLabs' National Center for High-Performance Computing (NCHC), in collaboration with National Yang Ming Chiao Tung University and Tunghai University, recently announced that their jointly developed research product, "Cloud-based Smart Point Cloud¹ Processing", has received a 2021 R&D 100 Award, widely regarded as the Oscars of the tech industry. They were the only team from Taiwan in the IT/Electrical category to receive an award, joining other awardees including MIT, Northwestern University, and Oak Ridge National Laboratory (ORNL).

Founded in 1963, the R&D 100 Awards are now highly regarded in the world of science and technology. The Awards recognize the year's top innovations, with an emphasis on commercial products, technologies, and materials that can be directly marketed or technically licensed. Awards are presented in the following categories: Analytical/Test, IT/Electrical, Mechanical/Materials, Process/Prototyping, and Software/Services.

The NCHC's "Cloud-based Smart Point Cloud Processing" (CSPCP) product uses artificial intelligence to automatically correct color errors that are prone to occur with existing point cloud technology. The use of point cloud technology to build digital 3D models of real-world objects is already important in the field of information technology, and it can also be applied to a wide range of fields such as historic preservation, theatrical scene archiving, architectural engineering, autonomous driving, digital cities, and craniofacial reconstruction in medicine. However, existing point cloud technology sometimes produces incorrect color matches, residual shadows, and other errors. Apart from improving measurement technology and environmental conditions, the only way to fix these defects has been to manually recut and recolor the abnormal areas of the point cloud, which is not only laborious and time-consuming but can also reduce fidelity.

CSPCP was developed by NCHC researcher Chia-Chen Kuo, Distinguished Professor I-Chen Wu of National Yang Ming Chiao Tung University, and Associate Professor Lung-Pin Chen of Tunghai University. It automatically corrects color errors in 3D models of buildings or objects and can refine them to the millimeter level for a grain-free effect. In addition, because colors are corrected through automatic AI detection, the correction time has been reduced from roughly six months to only one month. Since the launch of consumer cell phones equipped with LiDAR, point cloud modeling technology has helped with video recording for social media, and CSPCP can help expand the range of its applications. For instance, it can accelerate the construction of more types of 3D digital models, such as those required for scenes in movies and animation, which can replace single-use physical set pieces.

CSPCP was built into the NCHC Render Farm as a value-added service for point cloud solutions, allowing clients with point cloud correction needs to use the platform's high-speed computing to automatically optimize the presentation of their point clouds. Patents for this technology are currently pending in Taiwan and the United States. The NCHC hopes that CSPCP can be applied to the 3D digital reconstruction of buildings and objects, and that it can be licensed for future use in point cloud software R&D, scanning and measurement, and other applications involving spatial information. CSPCP can help Taiwan advance its point cloud technology, build a digital nation, and develop varied applications in digital space.

"Cloud-based Smart Point Cloud Processing" (CSPCP) Introduction Video: https://youtu.be/0z9Vn8_j22M


¹ A "point cloud" is a digital record of the spatial dimensions of a real-world object, consisting of millions or billions of points, each of which is scanned and measured with LiDAR to quickly obtain precise 3D coordinates for the object. When these points are combined in the computer, a three-dimensional model of the object can be constructed. When combined with digital color images, the colors of the real object can be pasted to the correct locations on the digital 3D model, so the model will look like the real object.