The use of robotics on construction sites has not reached its potential, says University of Toronto researcher Daeho Kim, but with more research, a fully autonomous, higher-order mobile robot could be one more step towards realizing it.
Kim, an assistant professor in U of T’s Faculty of Applied Science and Engineering, says many of today’s so-called robots patrolling construction sites would be more accurately described as tools that repeat pre-programmed tasks.
A few success stories aside, what is missing is full robotic automation and digitization powered by human-level visual artificial intelligence (AI) that fully understands the construction sites where the robots are deployed.
Achieving the high level of visual AI needed to power robotics on sites requires millions of training images, but for various reasons collecting that many is impractical. Kim and his team are proposing two new techniques: synthesizing virtual building images and generating miniature-scale building images.
“As we develop new forms of construction robots, the hardware part has taken a big step forward, for example Boston Dynamics’ Spot, but software development, the artificial intelligence part, still has a long way to go,” Kim said.
“The problem is that we lack training data for construction scenes. The DNN, the deep neural network that is the central engine of visual AI, is a supervised model, which makes it inherently data-intensive. To develop a well-trained artificial intelligence … we need a gigantic number of well-diversified training images of construction scenes.”
Kim’s research program was one of 251 university-based initiatives that were announced as recipients of a total of $64 million in funding from the Canada Foundation for Innovation’s John R. Evans Leaders Fund in September.
The project summary submitted to the foundation stated: “Robotic solutions with enhanced AIs will safely collaborate with workers in the field, improving productivity and profitability while offsetting growing labor shortages. The proposed research project is key to realizing this vision, providing optimized, field-applicable DNN models, a critical next step in the development of autonomous construction robots.”
The robots will collect, analyze and document site information, enabling the creation of live digital twin models of ongoing construction sites.
Synthesizing the images used to develop the visual AI is necessary, Kim explained, first because it is difficult to collect the data in person.
Surveillance cameras and drones suffer from occlusions, are expensive (Kim cited $2 to $10 per frame) and have other problems.
Collecting a million images would take time, and there are various regulatory and privacy hurdles.
Marketing and sharing data in a competitive construction environment are other issues.
Work is progressing rapidly in Kim’s lab at the University of Toronto, with the team using five tensor processing units (TPUs) and Google Cloud software. More computing resources are needed.

“We have been fully focused on developing simulation software that can automatically synthesize non-real but real-looking construction images, and a few weeks ago we started actively generating one million images. This is exciting news for me because, to my knowledge, we have never before had the chance to use a million training images in construction DNN training,” he said.
Synthesis steps include creating a 3D human model and applying worker motion-capture data; creating a 3D construction worker avatar by mapping a 2D or 3D clothing map onto the 3D human model; randomizing imaging conditions, including camera distance and lighting; and synthesizing construction images or videos by overlaying the virtual construction worker avatar on 3D construction backgrounds.
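To give a sense of the general idea, the following is a minimal sketch in Python of that kind of synthesis-and-randomization pipeline; it is not Kim’s actual software, and the file names, scale ranges and directory layout are hypothetical. It composites a pre-rendered worker avatar onto a site background under randomized scale, position and brightness, and keeps the resulting bounding box as an automatically generated training label.

```python
# Hypothetical sketch of a synthetic-image pipeline: paste a worker avatar cutout
# onto a construction background under randomized imaging conditions and record
# the bounding box as a label. File paths and parameter ranges are placeholders.
import json
import random
from pathlib import Path

from PIL import Image, ImageEnhance  # pip install Pillow


def synthesize(background_path: Path, avatar_path: Path, out_dir: Path, index: int) -> dict:
    background = Image.open(background_path).convert("RGB")
    avatar = Image.open(avatar_path).convert("RGBA")  # RGBA keeps the cutout's transparency

    # Randomize "camera distance" by scaling the avatar, and lighting via brightness.
    scale = random.uniform(0.2, 0.8)
    w, h = avatar.size
    avatar = avatar.resize((int(w * scale), int(h * scale)))
    background = ImageEnhance.Brightness(background).enhance(random.uniform(0.5, 1.5))

    # Randomize the avatar's placement within the frame.
    x = random.randint(0, max(0, background.width - avatar.width))
    y = random.randint(0, max(0, background.height - avatar.height))
    background.paste(avatar, (x, y), mask=avatar)

    out_dir.mkdir(parents=True, exist_ok=True)
    image_path = out_dir / f"synthetic_{index:07d}.jpg"
    background.save(image_path)

    # Because we placed the avatar ourselves, the label comes for free and is exact.
    return {"image": image_path.name, "bbox": [x, y, x + avatar.width, y + avatar.height]}


if __name__ == "__main__":
    labels = [
        synthesize(Path("backgrounds/site_01.jpg"), Path("avatars/worker_01.png"),
                   Path("synthetic_dataset"), i)
        for i in range(10)
    ]
    Path("synthetic_dataset/labels.json").write_text(json.dumps(labels, indent=2))
```

The appeal of this approach, as the quote above suggests, is that every synthesized frame arrives already annotated, which is what makes a million-image training set feasible.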
Next comes the prototyping of a fully autonomous mobile robot for construction digital twinning that deploys the higher-order DNN models.
Construction robots will need to be able to monitor and analyze location, speed and direction of travel, pose, proximity and other factors that describe the presence of construction workers.
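To make that list concrete, here is a small sketch of the kind of per-worker record such a robot might stream into a live digital twin. The field names, units and values are illustrative assumptions, not details from Kim’s project.

```python
# Hypothetical per-worker observation record for digital twinning; fields and
# units are illustrative only.
from dataclasses import dataclass, asdict
import json
import time


@dataclass
class WorkerObservation:
    worker_id: int
    timestamp: float                        # seconds since epoch
    location_m: tuple[float, float, float]  # x, y, z in a site coordinate frame
    speed_mps: float                        # travel speed, metres per second
    heading_deg: float                      # direction of travel, degrees
    pose: str                               # e.g. "standing", "kneeling", "bending"
    nearest_hazard_m: float                 # proximity to the closest flagged hazard


if __name__ == "__main__":
    obs = WorkerObservation(
        worker_id=17,
        timestamp=time.time(),
        location_m=(12.4, 3.1, 0.0),
        speed_mps=1.2,
        heading_deg=270.0,
        pose="standing",
        nearest_hazard_m=4.5,
    )
    print(json.dumps(asdict(obs), indent=2))  # one record in a live digital twin feed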
“It is still unclear how effective synthetic images are in training visual AI models for a construction scene, which is very dynamic and unstructured,” said Kim. “We may or may not need our own unique solution.”
For the final stage, Kim will need private-sector partners; he is looking for an innovative construction company that would financially support the research.
Follow the author on Twitter @DonWall_DCN