A remarkable development in the field of robotics has enabled a "chef" robot to learn and recreate recipes simply by watching food videos.

A team of researchers from the University of Cambridge has successfully trained a robotic "chef" to learn from cooking videos and recreate the dishes shown. The study, published in the journal IEEE Access, demonstrates the potential of using video content as a valuable resource for automated food production, with implications for the development of more accessible and cost-effective robot chefs.


While robot chefs have long been a staple of science fiction, their real-world implementation poses significant challenges. Although prototype robot chefs have been developed by various commercial companies, they still lag behind human chefs in terms of skill and versatility. Teaching a robot to prepare a wide range of dishes traditionally requires complex and time-consuming programming.


To address this, the researchers aimed to train their robotic chef in an incremental manner, similar to how humans learn. They created a "cookbook" consisting of eight simple salad recipes and recorded themselves preparing these salads in videos. Utilizing a publicly available neural network already capable of object recognition, the researchers trained the robot to identify the ingredients and actions involved in the recipes.


The robot analyzed each frame of the videos using computer vision techniques, identifying objects such as fruits, vegetables, knives, and the human demonstrator's body parts. By encoding both the recipes and the videos as vectors and performing mathematical operations on them, the robot could measure how closely a demonstration matched each stored recipe. This enabled it to identify which recipe was being prepared based on the recognized ingredients and actions.
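The vector-matching idea can be illustrated with a minimal sketch. This is not the researchers' actual pipeline; the vocabulary, the count-vector encoding, and the use of cosine similarity are all illustrative assumptions standing in for the paper's method:

```python
import math
from collections import Counter

# Hypothetical vocabulary of ingredients and actions a detector might recognize.
VOCAB = ["tomato", "cucumber", "lettuce", "apple", "knife", "chop", "mix"]

def to_vector(terms):
    """Encode a list of detected ingredients/actions as a count vector over VOCAB."""
    counts = Counter(terms)
    return [counts[t] for t in VOCAB]

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# A toy "cookbook": each recipe is stored as a vector of its ingredients and actions.
cookbook = {
    "green_salad": to_vector(["lettuce", "cucumber", "chop", "mix"]),
    "fruit_salad": to_vector(["apple", "tomato", "chop", "mix"]),
}

# Objects and actions recognized while watching a demonstration video.
observed = to_vector(["lettuce", "chop", "cucumber", "mix"])

# The demonstration is attributed to the most similar recipe vector.
best = max(cookbook, key=lambda name: cosine_similarity(cookbook[name], observed))
print(best)  # → green_salad
```

The key point is that recipe identification reduces to a nearest-vector lookup: whatever encoding is used, the demonstration is assigned to the recipe whose vector it most resembles.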


Remarkably, the robot achieved 93% accuracy in identifying the correct recipe across the 16 videos it watched, even though it detected only 83% of the human demonstrator's actions. The robot also recognized variations in recipes, distinguishing a mere increase in portion size from a genuinely new recipe. It successfully learned a new, ninth salad recipe from a single demonstration and incorporated it into its repertoire.
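One plausible way to separate a portion variation from a new recipe is a similarity threshold: a demonstration close enough to a known recipe is treated as that recipe, while one below the threshold is learned as a new entry. The threshold value, helper names, and encoding below are illustrative assumptions, not details from the study:

```python
import math
from collections import Counter

VOCAB = ["lettuce", "tomato", "apple", "banana", "chop", "slice", "mix"]

def to_vector(terms):
    """Count-vector encoding of detected ingredients/actions over VOCAB."""
    counts = Counter(terms)
    return [counts[t] for t in VOCAB]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

cookbook = {"green_salad": to_vector(["lettuce", "tomato", "chop", "mix"])}

THRESHOLD = 0.8  # hypothetical cut-off separating a variation from a new recipe

def observe(name_hint, terms):
    """Match a demonstration against the cookbook, or learn it as a new recipe."""
    vec = to_vector(terms)
    scores = {name: cosine(vec, rv) for name, rv in cookbook.items()}
    best = max(scores, key=scores.get)
    if scores[best] >= THRESHOLD:
        return best                # recognized (e.g. merely a larger portion)
    cookbook[name_hint] = vec      # too dissimilar: store as a new recipe
    return name_hint

# Doubling the portions barely changes the vector's direction, so the
# demonstration still matches the known recipe...
print(observe("?", ["lettuce", "lettuce", "tomato", "tomato", "chop", "chop", "mix"]))
# ...while an unseen fruit salad falls below the threshold and is learned.
print(observe("fruit_salad", ["apple", "banana", "slice", "mix"]))
```

Because cosine similarity is insensitive to vector magnitude, scaling up every ingredient count (a bigger portion) leaves the match nearly unchanged, while a different set of ingredients points the vector in a new direction.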


The researchers acknowledged that the videos used in the training process were different from the fast-paced, visually dynamic food videos commonly found on social media platforms. The robot required clear visuals, such as unobstructed views of the ingredients, to accurately identify and replicate the steps. However, as robot chefs continue to improve in ingredient recognition from videos, platforms like YouTube could become valuable resources for expanding their repertoire by learning a wider range of recipes.


Supported by Beko plc and the Engineering and Physical Sciences Research Council (EPSRC), this research highlights the potential of using video content as a training tool for robot chefs. By bridging the gap between human culinary expertise and automated food production, such advancements have the potential to transform the future of cooking and culinary automation.
