The traditional game of rock, paper, scissors has been given a decidedly modern twist by Canadian aerospace engineer Julien de la Bruère-Terreault.
He has developed a computer-vision- and machine-learning-driven version that pits a human player against the computer.
“The idea for the project actually comes from my son,” reveals Julien. After coding a text-based rock, paper, scissors game in Python, Julien was experimenting with basic computer vision on his Raspberry Pi when his son asked if he could make a version of the game that uses the camera to detect the hand gestures. “Knowing it was feasible, I accepted the challenge, seeing this as an opportunity to learn a lot in the process.”
Over the course of a year, Julien developed his skills around machine learning and computer vision. “The challenge for me in this project is that I had to learn almost everything required to make it work,” he says. “Developing and testing the machine-learning algorithm was the most difficult part.” This algorithm, or ‘classifier’, teaches the computer to recognise a set of hand gestures: “[It] has been previously ‘trained’ on a bank of labelled images corresponding to the rock, paper, and scissors hand gestures.”
Training AI to play Rock, Paper, Scissors
Julien began the project by designing the 3D-printed fixture to hold the camera and LED lighting strips. He then developed a simple application to capture training images to develop the machine-learning algorithm.
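Julien's capture tool itself isn't reproduced in the article, but a labelling helper of the kind he describes might be organised along these lines. This is a minimal sketch under assumed conventions (one numbered image file per gesture); the actual frame grab, which on the Pi would go through the Picamera library, is stubbed out.

```python
from pathlib import Path

GESTURES = ("rock", "paper", "scissors")

def next_filename(label: str, directory: Path) -> Path:
    """Return the next free numbered filename for a gesture label,
    e.g. rock_0002.png if rock_0000 and rock_0001 already exist."""
    if label not in GESTURES:
        raise ValueError(f"unknown gesture label: {label}")
    count = len(list(directory.glob(f"{label}_*.png")))
    return directory / f"{label}_{count:04d}.png"

def capture_labelled_image(label: str, directory: Path) -> Path:
    """Save one training image under a labelled, numbered name.
    On the real device a Picamera call such as
    camera.capture(str(path)) would write the frame; here the
    file is simply created as a stand-in."""
    directory.mkdir(parents=True, exist_ok=True)
    path = next_filename(label, directory)
    path.touch()  # stand-in for the actual camera capture
    return path
```

Keeping the label in the filename means the training script can recover the ground truth for every image without a separate index file.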
“I started with an initial set of approximately 150 labelled images.” As this was a fairly small training set, “I had to select a relatively simple algorithm with few parameters to tune and perform dimensionality reduction on the images.”
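The article doesn't name the algorithm Julien settled on, but a dimensionality-reduction-plus-simple-classifier pipeline of the sort he describes can be sketched with scikit-learn (an assumption, not necessarily his toolchain), for instance PCA followed by k-nearest neighbours. Synthetic arrays stand in for the flattened training images here.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Stand-in for ~150 labelled training images, each flattened to a
# 1-D pixel vector (real data would come from the capture tool).
rng = np.random.default_rng(0)
n_per_class, n_pixels = 50, 64 * 64
labels = ["rock", "paper", "scissors"]
X = np.vstack([rng.normal(loc=i, size=(n_per_class, n_pixels))
               for i in range(3)])
y = np.repeat(labels, n_per_class)

# Reduce thousands of pixels to a handful of components, then
# classify with a simple model that has few parameters to tune.
model = make_pipeline(PCA(n_components=10),
                      KNeighborsClassifier(n_neighbors=5))
model.fit(X, y)
print(model.predict(X[:1])[0])  # → rock
```

With only 150 images, cutting the input down to a few PCA components before classifying helps a simple model avoid overfitting thousands of raw pixel values.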
Once happy with the algorithm, Julien began work on enabling the game to be played in real time, adding the game logic and graphical interface.
He says the Raspberry Pi was an integral part of the build: “With its Camera Module and great Picamera library, its portable size and sufficient computing power, the Pi was the ideal platform to develop this project.”