Robot with vision and touch-sensing capabilities learns how to play the game of Jenga

The Jenga-playing robot. Image: MIT

The technology could be useful in production units where delicate touch and careful vision are needed

A team of researchers from the Massachusetts Institute of Technology (MIT) claims to have designed a robot that can teach itself the complex physics of Jenga.

Researchers say the manipulator arm of the robot relies on a machine learning algorithm, visual data, and tactile feedback to teach itself how to correctly move blocks in the game of Jenga.

Jenga is the classic block-stacking game that demands physical skill. Players remove one block at a time from a tower of 54 wooden blocks. Each removed block is gently placed on top of the tower, so with every move the tower becomes both taller and more unstable.

Playing the game requires a soft touch to prevent the tower from tumbling down. It also demands mastery of other skills, such as pushing, pulling, probing, placing, and aligning the blocks in the tower.


In the current research, the team equipped an industrial ABB IRB 120 robotic arm with a soft-pronged gripper, an external camera and a force-sensing wrist cuff. To train the machine, the researchers directed it to select a block in the Jenga tower at random, and then a location on that block, to push and move it.

Each time the arm pushed the block, a connected computer recorded the force and visual measurements, and compared those measurements to previous moves. The system also categorised the attempt as successful or unsuccessful.

Rather than performing thousands of such attempts, the robot was trained on only around 300. The team grouped attempts with similar measurements and outcomes into clusters, each representing a specific type of block behaviour. With this training, the robot learned to anticipate its moves, predicting which blocks would be harder to move than others and which might cause the tower to collapse.
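The clustering approach described above can be sketched in a few lines of Python. This is a minimal illustration, not the researchers' actual model: the feature values, the two-cluster setup and the `predict_safe` helper are all assumptions invented for the example. It groups past pushes by their force and displacement measurements, scores each cluster by its empirical success rate, and uses the nearest cluster to judge whether a new push looks safe.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data (not the MIT team's real measurements): each
# past push is summarised by two features -- peak push force and observed
# block displacement -- plus a success label. Loose blocks move easily
# under little force; load-bearing blocks resist and destabilise the tower.
loose = rng.normal([0.5, 8.0], [0.2, 1.5], size=(150, 2))
stuck = rng.normal([3.0, 1.0], [0.5, 0.5], size=(150, 2))
attempts = np.vstack([loose, stuck])
success = np.array([1] * 150 + [0] * 150)

def kmeans(X, init, iters=50):
    """Minimal k-means: group attempts with similar measurements."""
    centers = init.copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0)
                            for j in range(len(centers))])
    return centers, labels

# Seed the clusters with one example of each behaviour.
centers, labels = kmeans(attempts, init=attempts[[0, -1]])

# Each cluster summarises one type of block behaviour; its empirical
# success rate indicates how risky similar pushes are likely to be.
rates = np.array([success[labels == j].mean() for j in range(len(centers))])

def predict_safe(measurement, threshold=0.5):
    """Assign a new push to its nearest cluster and check the success rate."""
    j = np.argmin(((centers - measurement) ** 2).sum(-1))
    return rates[j] >= threshold

print(predict_safe(np.array([0.4, 7.5])))  # resembles a loose block
print(predict_safe(np.array([3.2, 0.8])))  # resembles a load-bearing block
```

Grouping a few hundred attempts this way, rather than learning from raw trial-and-error, is what lets the system generalise from so little data.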

The performance of the robotic arm was also compared with the performance of human players. The team found the success rate of the robot in keeping the tower upright while removing the wooden blocks was almost on a par with that of human players. 

The team believes this technology could be useful in production units where delicate touch and careful vision are needed: for example, assembling cellphones or other small parts, or separating recyclable items from waste.

"In a cellphone assembly line, in almost every single step, the feeling of a snap-fit, or a threaded screw, is coming from force and touch rather than vision," says Alberto Rodriguez, Assistant Professor in the Department of Mechanical Engineering at MIT, and the lead researcher of the study.

"Learning models for those actions is prime real-estate for this kind of technology."

The findings of the research are published in the journal Science Robotics.
