
Google DeepMind trained a robot to beat humans at table tennis


It was able to draw on vast amounts of data to refine its playing style and adjust its tactics as matches progressed.

Collage of nine scenes from video of human players matched against a robot in ping pong

Google DeepMind

Do you fancy your chances of beating a robot at a game of table tennis? Google DeepMind has trained a robot to play the game at amateur-level competitive performance, the company has announced. It claims this is the first time a robot has been taught to play a sport with humans at a human level.

Researchers managed to get a robotic arm wielding a 3D-printed paddle to win 13 of 29 games against human opponents of varying abilities in full games of competitive table tennis. The research was published in an arXiv paper.

The system is far from perfect. Although the table tennis bot was able to beat all the beginner-level human opponents it faced, and 55% of those playing at amateur level, it lost all the games against advanced players. Still, it's an impressive advance.

“Even a few months back, we projected that realistically the robot may not be able to win against people it had not played before. The system certainly exceeded our expectations,” says Pannag Sanketi, a senior staff software engineer at Google DeepMind who led the project. “The way the robot outmaneuvered even strong opponents was mind-blowing.”

And the research is not just all fun and games. In fact, it represents a step toward creating robots that can perform useful tasks skillfully and safely in real environments like homes and warehouses, a long-standing goal of the robotics community. Google DeepMind's approach to training machines is applicable to many other areas of the field, says Lerrel Pinto, a computer science researcher at New York University who did not work on the project.

“I am a big fan of seeing robot systems actually working with and around real humans, and this is a fantastic example of it,” he says. “It may not be a strong player, but the raw ingredients are there to keep improving and eventually get there.”

To become a proficient table tennis player, a human needs excellent hand-eye coordination, the ability to move rapidly, and the capacity to make quick decisions in reaction to an opponent, all of which are significant challenges for robots. Google DeepMind's researchers used a two-part approach to train the system to mimic these abilities: first they used computer simulations to teach it to master its hitting skills, then they fine-tuned it with real-world data, which allows it to keep improving over time.

The researchers compiled a dataset of table tennis ball states, including data on position, speed, and spin. The system drew from this library in a simulated environment designed to accurately model the physics of table tennis matches, learning skills such as returning a serve, hitting a forehand topspin, or playing a backhand shot. Because the robot's limitations mean it cannot serve the ball, the real-world games were modified to accommodate this.
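The paper itself does not publish its data format, but the pipeline described above can be sketched roughly as follows. This is a minimal illustration, assuming a simple record of the three quantities the article names (position, speed, and spin); the class and function names are hypothetical, not from DeepMind's codebase.

```python
from dataclasses import dataclass
import random

@dataclass
class BallState:
    """One recorded ball state: the three quantities the article mentions."""
    position: tuple[float, float, float]  # meters, relative to the table
    velocity: tuple[float, float, float]  # meters per second
    spin: tuple[float, float, float]      # radians per second about each axis

def sample_initial_state(dataset: list[BallState]) -> BallState:
    """Draw a recorded real-world ball state to seed one simulated rally."""
    return random.choice(dataset)

# Two illustrative (made-up) entries in the state library.
dataset = [
    BallState((0.0, -1.2, 0.30), (1.5, 4.0, 2.0), (0.0, 0.0, 50.0)),
    BallState((0.1, -1.3, 0.25), (1.2, 3.5, 1.8), (0.0, 10.0, 40.0)),
]
state = sample_initial_state(dataset)
```

Seeding the simulator from recorded states like this is what lets the simulated rallies stay close to the distribution of balls the robot will actually face.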

During its matches against humans, the robot collects data on its performance to help refine its skills. It tracks the ball's position using data captured by a pair of cameras, and follows its human opponent's playing style via a motion capture system that uses LEDs on the opponent's paddle. The ball data is fed back into the simulation for training, creating a continuous feedback loop.
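That feedback loop can be sketched in a few lines. This is an illustrative simplification under the assumptions stated in its comments; the camera format, class, and method names are hypothetical.

```python
# Sketch of the sim-to-real feedback loop the article describes: ball tracks
# captured by cameras during real matches are appended to a training pool,
# and the simulated policy is periodically retrained on it. All names here
# are illustrative, not from DeepMind's actual system.

def track_ball(camera_frames):
    """Stand-in for stereo-camera tracking: extract (x, y, z) per frame."""
    return [frame["ball_xyz"] for frame in camera_frames]

class TrainingPool:
    def __init__(self):
        self.trajectories = []

    def add_match_data(self, trajectory):
        """Feed one real-world ball trajectory back into the training set."""
        self.trajectories.append(trajectory)

    def retrain_step(self):
        # In the real system this would update the policy in simulation;
        # here we just report how much data the next round can draw on.
        return f"retraining on {len(self.trajectories)} trajectories"

pool = TrainingPool()
frames = [{"ball_xyz": (0.0, -1.2, 0.3)}, {"ball_xyz": (0.1, -0.8, 0.4)}]
pool.add_match_data(track_ball(frames))
status = pool.retrain_step()
```

The loop structure, real play producing data that improves the simulated training, is what lets the robot get better the more matches it plays.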

This feedback allows the robot to try out new skills in an attempt to beat its opponent, meaning it can adjust its tactics and behavior much as a human would. As a result, it gets progressively better, both over the course of a given match and over time as it plays more games.

The system struggled to hit the ball when it came in very fast, when it passed beyond its field of view (more than six feet above the table), or when it came in very low, because of a protocol that instructs the robot to avoid collisions that could damage its paddle. Spinning balls proved a challenge because it lacks the capacity to directly measure spin, a limitation that advanced players were quick to exploit.

Training a robot for all eventualities in a simulated environment is a real challenge, says Chris Walti, founder of robotics company Mytra and previously head of Tesla's robotics team, who was not involved in the project.

“It's very, very difficult to actually simulate the real world because there are so many variables, like a gust of wind, or even dirt [on the table],” he says. “Unless you have very realistic simulations, a robot's performance is going to be capped.”

Google DeepMind believes these limitations can be addressed in a number of ways, including by developing predictive AI models designed to anticipate the ball's trajectory, …
