Gesture-Controlled Robotic Arm

Description
This activity introduces the integration of machine learning into robotics by developing a hand gesture recognition model in PictoBlox. Through systematic steps, you learn to train, test, and export the model to control a robotic arm using gestures. By combining gesture analysis with robotic arm settings, this project highlights the potential of machine learning in enabling intuitive and precise control in robotics, paving the way for innovative applications.

In this activity, we will learn how to add machine learning to robotics by building a machine learning model for hand gesture recognition. Once the model achieves the desired accuracy, we can embed it in a robotic arm.

Follow the steps below:

1. Open PictoBlox and create a new file.
2. Select Block Coding as the coding environment.

3. Select the “Open ML Environment” option under the “Files” tab to access the ML Environment.

4. Click on “Create New Project”.
5. A window will open. Type in a project name of your choice and select the “Hand Gesture Classifier” extension. Click the “Create Project” button to open the Hand Pose Classifier window.

6. You will see the Classifier workflow with two classes already made for you. Your environment is all set. Now it’s time to upload the data.

Class in Hand Gesture Classifier

There are two things that you have to provide in a class:
1. Class Name: The name by which the class will be referred to.

2. Hand Pose Data: This data can either be taken from the webcam or by uploading from local storage.

Note: The Add Class button will add more classes to the project.

Adding Data to Class

You can perform the following operations to manage the data in a class.

1. Naming the Class: You can rename the class by clicking on the edit button.

2. Adding Data to the Class: You can add data using the webcam or by uploading files from a local folder.

Note: You must add at least 20 samples to each of your classes for the model to train. More samples generally lead to better results.

Training the Model
After the data is added, the model can be trained. Training extracts meaningful information from the hand poses, which in turn updates the model’s weights. Once these weights are saved, the model can make predictions on previously unseen data.

The accuracy of the model should increase over time. The x-axis of the graph shows the epochs, and the y-axis represents the accuracy at the corresponding epoch. Remember, the higher the reading in the accuracy graph, the better the model. The range of the accuracy is 0 to 1.

Testing the Model

To test the model, simply enter the input values in the “Testing” panel and click on the “Predict” button. The model will return the probability of the input belonging to the classes.
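Under the hood, “Predict” returns one probability per class, and the probabilities sum to 1. A minimal sketch of that behavior, again using scikit-learn as a stand-in rather than the PictoBlox API, with synthetic data for two hypothetical gesture classes:

```python
# Illustrative only: PictoBlox's "Predict" button does this internally.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (40, 42)),   # class 0 samples
               rng.normal(2.0, 1.0, (40, 42))])  # class 1 samples
y = np.array([0] * 40 + [1] * 40)

clf = LogisticRegression(max_iter=1000).fit(X, y)

# One unseen input: the model returns a probability for each class.
sample = rng.normal(2.0, 1.0, (1, 42))
probs = clf.predict_proba(sample)[0]
print(f"class 0: {probs[0]:.2f}, class 1: {probs[1]:.2f}")
```

The class with the highest probability is the model’s prediction for that input.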

Export in Block Coding

Click on the “Export Model” button at the top right of the Testing box. PictoBlox will ask you to choose your coding preference: Block or Python. For this activity we are using the Block coding environment, so select Block, and PictoBlox will load the model as blocks.

Below are the gestures for this particular activity.

Let’s Code

1. Our first step will be to initialize the camera settings for gesture recognition.
2. The second step is to initialize the settings for the robotic arm, such as pin connections, gripper angle, and calibration.

Alert: Please adjust the gripper angles according to your assembly and motor alignment.

3. Now we will analyze the gestures from the webcam and set the conditions for the arm’s different actions using nested if-else blocks.

4. Continue in the same way and complete your script as shown below.

With this, your script is complete, and you can now use it to control your robotic arm.
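The nested if-else logic from the block script can be sketched in plain Python to show its shape. Everything below is hypothetical: the gesture names (`open_palm`, `closed_fist`, `thumbs_up`) and the arm functions are stand-ins for your trained classes and the PictoBlox robotic arm blocks, not a real API.

```python
# Sketch only: PictoBlox implements this with blocks. The gesture names
# and arm functions are hypothetical stand-ins, not the PictoBlox API.
def move_gripper(angle):
    # Stub: in the real script this would drive the gripper servo.
    return f"gripper -> {angle}"

def move_arm(direction):
    # Stub: in the real script this would drive the arm servos.
    return f"arm -> {direction}"

def act_on_gesture(gesture):
    # Nested if-else mirroring the structure of the block script.
    if gesture == "open_palm":
        return move_gripper(90)       # open the gripper
    else:
        if gesture == "closed_fist":
            return move_gripper(0)    # close the gripper
        else:
            if gesture == "thumbs_up":
                return move_arm("up")
            else:
                return move_arm("rest")  # no known gesture: hold position

print(act_on_gesture("open_palm"))
```

In the block script, this decision runs inside a forever loop: each camera frame is analyzed, the recognized class is compared in the nested if-else, and the matching arm action is triggered.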
