UC Berkeley professors have previously used YouTube videos as a guide for robots to learn various motions such as jumping or dancing, while Google has trained robots to understand depth and motion. The team applied that knowledge to their latest project, Motion2Vec, in which videos of actual surgical procedures are used for instruction. In a recently released research paper, researchers outline how they used YouTube videos to train a two-armed da Vinci robot to insert needles and perform sutures on a cloth device.
The team relied on Siamese networks, a deep-learning setup in which two or more identical networks share the same weights, making them well suited to comparing inputs and assessing the relationships between them. Such networks have been used in the past for facial recognition, signature verification and language detection.
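The weight-sharing idea can be illustrated with a minimal sketch: both branches of the twin network run the same embedding function over a single shared weight matrix, and their outputs are compared with a distance metric. The shapes, values, and NumPy setup below are illustrative assumptions, not the Motion2Vec architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))  # one weight matrix shared by both branches

def embed(x):
    # Both branches call this same function, so they share W exactly.
    return np.tanh(x @ W)

def distance(a, b):
    # Euclidean distance between embeddings: small means similar inputs.
    return float(np.linalg.norm(embed(a) - embed(b)))

x = rng.standard_normal(8)
print(distance(x, x))   # identical inputs give zero distance
print(distance(x, -x))  # different inputs give a positive distance
```

Training such a network pushes embeddings of matching pairs (e.g. video frames of the same surgical gesture) together and non-matching pairs apart, which is what makes the setup useful for comparison tasks.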
Read the full article here: https://techxplore.com/news/2020-06-intel-google-uc-berekely-ai.html
Source: Peter Grad, Tech Xplore