Table of Contents

say () for () seconds

Description

The block makes the sprite that runs it display a speech bubble with the specified text, which stays on the screen for the specified number of seconds.

Example

The example demonstrates the costume change in PictoBlox.

Script

Output

Read More
The example demonstrates how to use a repeat block to recite a table in PictoBlox.

Sprite

Output

Read More
Learn about noun detectors, tools or algorithms designed to identify and extract nouns from text or speech inputs.

Introduction

A noun detector is a tool or algorithm designed to identify and extract nouns from a given text or speech input. Nouns are a type of word that typically represent people, places, things, or ideas. In the context of chat-based applications, a noun detector can be useful for extracting key information or identifying specific entities mentioned in a conversation. It can help in tasks such as named entity recognition, information retrieval, sentiment analysis, and many more.
A noun detector serves as a valuable component in language processing systems, helping to extract and utilize meaningful information from text or speech inputs in chat-based interactions.
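
For readers curious about what a noun detector does under the hood, here is a minimal illustration in Python using the NLTK library. This is only an illustration and is not part of PictoBlox or the ChatGPT extension; NLTK resource names can vary between versions.

# Illustration only: a tiny noun detector built with NLTK.
import nltk

nltk.download("punkt", quiet=True)                       # tokenizer data
nltk.download("averaged_perceptron_tagger", quiet=True)  # POS tagger data

def extract_nouns(text):
    # Tag each word and keep the ones whose tag starts with "NN"
    # (NN, NNS, NNP, NNPS are the noun tags in the Penn Treebank tag set).
    tokens = nltk.word_tokenize(text)
    return [word for word, tag in nltk.pos_tag(tokens) if tag.startswith("NN")]

print(extract_nouns("The cat sat on the mat in Paris."))
# e.g. ['cat', 'mat', 'Paris']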

Logic

First, ChatGPT generates a random sentence, and we save this response in a variable. Then, the script asks the user to identify a noun from the given sentence. If the user’s answer matches the response generated by ChatGPT, it will say “Correct.” Otherwise, it will say “Incorrect answer.” A plain-Python sketch of this flow appears after the steps below.

  1. Open PictoBlox and create a new file.
  2. Select the Block Coding Environment as the coding environment.
  3. To add the ChatGPT extension, click on the extension button located as shown in the image. This will enable the ChatGPT extension, allowing you to incorporate its capabilities into your project.
  4. Drag and drop the “Ask (AI)” block from the ChatGPT extension and use it to ask ChatGPT for a random sentence.
  5. We create a new variable called sentence and assign the value of a random sentence generated by ChatGPT to it.
  6. Use the say() method to provide instructions for finding nouns in the given sentence.
  7. Drag and drop the “get () from ()” block from the ChatGPT extension to obtain information from the sentence.
  8. Using an if-else block, we prompt the user to identify a noun from the given sentence. If the user’s answer matches the response generated by ChatGPT, it will say “Correct answer” for 2 seconds.
  9. Otherwise, if the user’s answer does not match the response from ChatGPT, it will say “Answer is not a noun” for 2 seconds.
  10. To begin the script, simply click on the green flag button.
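
Below is a minimal plain-Python sketch of the same quiz flow. The ask_chatgpt() helper is a hypothetical stand-in for the ChatGPT extension’s blocks and returns canned demo values here; it is not a real PictoBlox API.

def ask_chatgpt(prompt):
    # Hypothetical stand-in for the ChatGPT extension's "ask ()" block.
    # Returns canned demo values so the sketch can be run as-is.
    if "sentence" in prompt:
        return "The dog chased the ball in the park."
    return "dog"

def noun_quiz():
    sentence = ask_chatgpt("Give me one random simple sentence.")  # saved in the "sentence" variable
    print(f"Find a noun in this sentence: {sentence}")             # "say ()" instruction
    noun = ask_chatgpt(f"Reply with one noun from this sentence: {sentence}")
    answer = input("Your answer: ").strip()
    if answer.lower() == noun.lower():       # "if () then, else" block
        print("Correct answer")              # shown for 2 seconds in the block script
    else:
        print("Answer is not a noun")

noun_quiz()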

Code

Output

Read More
Learn how to use the Object Detection extension of PictoBlox's Machine Learning Environment to count specific targets in images by writing Block code.

Introduction

In this example project, we are going to create a Machine Learning model that can count the number of nuts and bolts in the camera feed or images.

Object Detection in Machine Learning Environment

Object Detection is an extension of the ML environment that allows users to detect objects in images and mark them with bounding boxes belonging to different classes. This feature is available only in the desktop version of PictoBlox for Windows, macOS, or Linux. As part of the Object Detection workflow, users can add classes, upload data, train the model, test the model, and export the model to the Block Coding Environment.

Opening Object Detection Workflow

Alert: The Machine Learning Environment for model creation is available only in the desktop version of PictoBlox for Windows, macOS, or Linux. It is not available in the Web, Android, and iOS versions.

 

Follow the steps below:

  1. Open PictoBlox and create a new file.
  2.  Select the Block Coding Environment.
  3. Select the “Open ML Environment” option under the “Files” tab to access the ML Environment.
  4. You’ll be greeted with the following screen.
    Click on “Create New Project”.
  5. A window will open. Type in a project name of your choice and select the “Object Detection” extension. Click the “Create Project” button to open the Object Detection window.

You shall see the Object Detection workflow. Your environment is all set.
   

Collecting and Uploading the Data

Uploading images from your device’s hard drive

  1. Now it’s time to upload the images that you downloaded from another source or captured with your camera. Click on the “Select from device” option in the Import Images block.
  2. Now click on “Choose images from your computer” and go to the folder where you downloaded your images.
  3. Select all the images you want to upload, then click on the “Open” option.
  4. The PictoBlox page now looks like this:

Making Bounding Box – Labelling Images

  1. Labeling is essential for Object Detection. Click on the “Bbox” tab to make the labels.

    Notes: Notice how the targets are marked with a bounding box. The labels appear in the “Label List” column on the right.

  2. To create a bounding box in an image, click on the “Create Box” button and draw the box. After the box is drawn, go to the “Label List” column, click on the edit button, and type in a name for the object under the bounding box. This name will become a class. Once you’ve entered the name, click on the tick mark to label the object.
  3. File List: It shows the list of images available for labeling in the project.
  4. Label List: It shows the list of Labels created for the selected image.
  5. Class Info: It shows the summary of the classes with the total number of bounding boxes created for each class.
  6.   You can view all the images under the “Image” tab.

Training the Model

In Object Detection, the model must locate and identify all the targets in the given image. This makes Object Detection a complex task to execute. Hence, the hyperparameters work differently in the Object Detection Extension.

  1. Go to the “Train” tab. You should see the following screen:
  2. Click on the “Train New Model” button.
  3. Select all the classes, and click on “Generate Dataset”.
  4.  Once the dataset is generated, click “Next”. You shall see the training configurations.
  5. Specify your hyperparameters. If the numbers go out of range, PictoBlox will show a message.
  6. Click “Create”. This creates a new model with the hyperparameter values you entered.
  7. Click “Start Training”. If the desired performance is reached, click on the “Stop” button.

    Note: Training an Object Detection model is a time-consuming task. It might take a couple of hours to complete.

  8. After the training is completed, you’ll see four loss graphs:
    1. Total Loss
    2. Regularization Loss
    3. Localization Loss
    4. Classification Loss
  9. You’ll be able to see the graphs under the “Graphs” panel. Click on the buttons to view each graph.

    1. Graph between “Total loss” and “Number of steps”.
    2. Graph between “Regularization loss” and “Number of steps”.
    3. Graph between “Localization loss” and “Number of steps”.
    4. Graph between “Classification loss” and “Number of steps”.

Evaluating the Model

Now, let’s move to the “Evaluate” tab. You can view True Positives, False Negatives, and False Positives for each class here along with metrics like Precision and Recall.
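
Precision and Recall are computed from these counts using their standard definitions, shown here for reference:

def precision(tp, fp):
    # Of all predicted detections, how many were correct?
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(tp, fn):
    # Of all real objects, how many were detected?
    return tp / (tp + fn) if (tp + fn) else 0.0

# Example with made-up counts for one class:
print(precision(18, 2))  # 0.9
print(recall(18, 4))     # 0.818...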

Testing the Model

The model can be tested by uploading an image from your device:

 

Export in Block Coding

Click on the “PictoBlox” button, and PictoBlox will load your model into the Block Coding Environment, provided you opened the ML Environment from the Block Coding Environment.

 

 

Code

The idea is simple: we’ll add the image samples as backdrops in the “Backdrops” column, then keep cycling through the backdrops and predicting the image on the stage. A plain-Python sketch of this loop appears after the steps below.

  1. Add the testing images as backdrops and delete the default backdrop.
  2. Now, come back to the coding tab and select the Tobi sprite.
  3. We’ll start by adding a when flag clicked block from the Events palette.
  4. Add () bounding box block from the Machine Learning palette. Select the “hide” option.
  5. Follow it up with a set detection threshold to () block from the Machine Learning palette and set the drop-down to 0.5.
  6. Add switch backdrop to () block from the Looks palette. Select any image.
  7. Add a forever block from the Control palette.
  8. Add the analyse image from () block from the Machine Learning palette. Select the “stage” option.
  9. Add the () bounding box block from the Machine Learning palette. Select the “show” option.
  10. Add two say () for () seconds blocks from the Looks palette.
  11. Inside each say block, add a join () () block from the Operators palette.
  12. In the first empty slot of each join block, write a descriptive statement; in the second, add the get number of () detected? block from the Machine Learning palette.
  13. Select the “Nut” option for the first get number of () detected? block and the “Bolt” option for the second.
  14. Add the () bounding box block from the Machine Learning palette. Select the “hide” option.
  15. Finally, add the next backdrop block from the Looks palette below the () bounding box block.
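
For reference, here is a plain-Python sketch of the loop this block script builds. The helper functions are hypothetical stand-ins for the PictoBlox blocks (they return dummy values so the sketch can run on its own), not a real API.

import random

def analyse_image(backdrop, threshold):
    # Stand-in for the "analyse image from ()" block; does nothing in this sketch.
    pass

def count_detected(class_name):
    # Stand-in for the "get number of () detected?" reporter; returns a dummy count.
    return random.randint(0, 5)

def count_loop(backdrops, threshold=0.5):      # 0.5 mirrors "set detection threshold to ()"
    while True:                                # "forever" block
        for backdrop in backdrops:             # "switch backdrop to ()" / "next backdrop"
            analyse_image(backdrop, threshold)
            nuts = count_detected("Nut")
            bolts = count_detected("Bolt")
            print(f"{backdrop}: {nuts} nuts detected")    # first "say () for () seconds"
            print(f"{backdrop}: {bolts} bolts detected")  # second "say () for () seconds"

count_loop(["nuts_and_bolts_1.jpg", "nuts_and_bolts_2.jpg"])  # example backdrop names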

Final Result

 

 

 

Read More
Learn how to build a Machine Learning model which can identify the type of flower from the camera feed or images using PictoBlox.

Introduction

In this example project we are going to create a Machine Learning Model which can identify the type of flower from the camera feed or images.

 

Images Classifier in Machine Learning Environment

Image Classifier is an extension of the ML environment that allows users to classify images into different classes. This feature is available only in the desktop version of PictoBlox for Windows, macOS, or Linux. As part of the Image Classifier workflow, users can add classes, upload data, train the model, test the model, and export the model to the Block Coding Environment.

Let’s create the ML model.

Opening Image Classifier Workflow

Alert: The Machine Learning Environment for model creation is available only in the desktop version of PictoBlox for Windows, macOS, or Linux. It is not available in the Web, Android, and iOS versions.

Follow the steps below:

  1. Open PictoBlox and create a new file.
  2. Select the Block Coding Environment as the coding environment.
  3. Select the “Open ML Environment” option under the “Files” tab to access the ML Environment.
  4. You’ll be greeted with the following screen.
    Click on “Create New Project“.
  5. A window will open. Type in a project name of your choice and select the “Image Classifier” extension. Click the “Create Project” button to open the Image Classifier window.
  6. You shall see the Image Classifier workflow with two classes already made for you. Your environment is all set. Now it’s time to upload the data.

Class in Image Classifier

Class is the category in which the Machine Learning model classifies the images. Similar images are put in one class.

There are 2 things that you have to provide in a class:

  1. Class Name: The name by which the class will be referred to.
  2. Image Data: This data can either be taken from the webcam or by uploading from local storage.

Note: You can add more classes to the projects using the Add Class button.

Adding Data to Class

You can perform the following operations to manipulate the data into a class.

  1. Naming the Class: You can rename the class by clicking on the edit button.
  2. Adding Data to the Class: You can add data either by uploading files from a local folder or by capturing them with the webcam.

Training the Model

After data is added, it’s fit to be used in model training. In order to do this, we have to train the model. By training the model, we extract meaningful information from the images, and that in turn updates the weights. Once these weights are saved, we can use our model to make predictions on data previously unseen.

However, before training the model, there are a few hyperparameters that you should be aware of. Click on the “Advanced” tab to view them.


It’s a good idea to train an image classification model for a high number of epochs. The model can be trained in both JavaScript and Python. In order to choose between the two, click on the switch on top of the Training panel.

Note: These hyperparameters can affect the accuracy of your model to a great extent. Experiment with them to find what works best for your data.

Alert: Dependencies must be downloaded to train the model in Python; JavaScript is chosen by default.

The accuracy of the model should increase over time. The x-axis of the graph shows the epochs, and the y-axis represents the accuracy at the corresponding epoch. Remember, the higher the reading in the accuracy graph, the better the model. The range of the accuracy is 0 to 1.


Other evaluation parameters can be viewed by clicking on the Train Report.

Here we can see the confusion matrix and the training accuracy of individual classes after training.
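
As an illustration only (not part of PictoBlox), this is how a confusion matrix relates true labels to predicted labels, using scikit-learn and made-up flower classes:

from sklearn.metrics import confusion_matrix  # requires scikit-learn

# Made-up true and predicted labels for six test images:
y_true = ["rose", "rose", "tulip", "tulip", "tulip", "daisy"]
y_pred = ["rose", "tulip", "tulip", "tulip", "daisy", "daisy"]

labels = ["rose", "tulip", "daisy"]
print(confusion_matrix(y_true, y_pred, labels=labels))
# [[1 1 0]
#  [0 2 1]
#  [0 0 1]]
# Rows are the true classes, columns are the predicted classes;
# the diagonal counts the correctly classified samples.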

Testing the Model

To test the model, simply enter the input values in the “Testing” panel and click on the “Predict” button.

The model will return the probability of the input belonging to the classes.

Export in Block Coding

Click on the “Export Model” button on the top right of the Testing box, and PictoBlox will load your model into the Block Coding Environment, provided you opened the ML Environment from the Block Coding Environment.

Code

The idea is simple: we’ll add the image samples as backdrops in the “Backdrops” column, then keep cycling through the backdrops and classifying the image on the stage. A plain-Python sketch of this loop appears after the steps below.

  1. Add the testing images as backdrops and delete the default backdrop.
  2. Now, come back to the coding tab and select the Tobi sprite.
  3. We’ll start by adding a when flag clicked block from the Events palette.
  4. Add switch backdrop to () block from the Looks palette. Select any image.
  5. Add a forever block from the Control palette.
  6. Inside the forever block, add an analyze image from () block from the Machine Learning palette.
  7. Add two say () for () seconds blocks from the Looks palette.
  8. Inside each say block, add a join () () block from the Operators palette.
  9. In the first empty slot of each join block, write a descriptive statement; in the second, add the identified class block from the Machine Learning palette.
  10. Finally, add the next backdrop block from the Looks palette below the say blocks.
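
For reference, here is a plain-Python sketch of the classification loop this block script builds. The helper functions and class names are hypothetical stand-ins (the stubs return dummy values so the sketch can run on its own), not a real PictoBlox API.

import random

FLOWER_CLASSES = ["Rose", "Tulip", "Sunflower"]   # example class names

def analyse_image(backdrop):
    # Stand-in for the "analyse image from ()" block; does nothing in this sketch.
    pass

def identified_class():
    # Stand-in for the "identified class" reporter; returns a random class here.
    return random.choice(FLOWER_CLASSES)

def classify_loop(backdrops):
    while True:                       # "forever" block
        for backdrop in backdrops:    # "switch backdrop to ()" / "next backdrop"
            analyse_image(backdrop)
            # "say join () identified class for () seconds"
            print(f"{backdrop}: identified flower is {identified_class()}")

classify_loop(["flower_1.jpg", "flower_2.jpg"])   # example backdrop names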

Final Result

 

 

Read More
Learn how to build a Machine Learning model which can identify the type of waste from the camera feed or images using PictoBlox.

Introduction

In this example project we are going to create a Machine Learning Model which can identify the type of waste from the camera feed or images.

Images Classifier in Machine Learning Environment

Image Classifier is an extension of the ML environment that allows users to classify images into different classes. This feature is available only in the desktop version of PictoBlox for Windows, macOS, or Linux. As part of the Image Classifier workflow, users can add classes, upload data, train the model, test the model, and export the model to the Block Coding Environment.

Let’s create the ML model.

Opening Image Classifier Workflow

Alert: The Machine Learning Environment for model creation is available only in the desktop version of PictoBlox for Windows, macOS, or Linux. It is not available in the Web, Android, and iOS versions.

Follow the steps below:

  1. Open PictoBlox and create a new file.
  2. Select the Block Coding Environment as the coding environment.
  3. Select the “Open ML Environment” option under the “Files” tab to access the ML Environment.
  4. You’ll be greeted with the following screen.
    Click on “Create New Project“.
  5. A window will open. Type in a project name of your choice and select the “Image Classifier” extension. Click the “Create Project” button to open the Image Classifier window.
  6. You shall see the Image Classifier workflow with two classes already made for you. Your environment is all set. Now it’s time to upload the data.

Class in Image Classifier

Class is the category in which the Machine Learning model classifies the images. Similar images are put in one class.

There are 2 things that you have to provide in a class:

  1. Class Name: The name by which the class will be referred to.
  2. Image Data: This data can either be taken from the webcam or by uploading from local storage.

Note: You can add more classes to the projects using the Add Class button.

Adding Data to Class

You can perform the following operations to manipulate the data into a class.

  1. Naming the Class: You can rename the class by clicking on the edit button.
  2. Adding Data to the Class: You can add data either by uploading files from a local folder or by capturing them with the webcam.

Training the Model

After data is added, it’s fit to be used in model training. In order to do this, we have to train the model. By training the model, we extract meaningful information from the images, which in turn updates the weights. Once these weights are saved, we can use our model to make predictions on data previously unseen.

However, before training the model, there are a few hyperparameters that you should be aware of. Click on the “Advanced” tab to view them.


It’s a good idea to train an image classification model for a high number of epochs. The model can be trained in both JavaScript and Python. In order to choose between the two, click on the switch on top of the Training panel.

Note: These hyperparameters can affect the accuracy of your model to a great extent. Experiment with them to find what works best for your data.

Alert: Dependencies must be downloaded to train the model in Python; JavaScript is chosen by default.

The accuracy of the model should increase over time. The x-axis of the graph shows the epochs, and the y-axis represents the accuracy at the corresponding epoch. Remember, the higher the reading in the accuracy graph, the better the model. The range of accuracy is 0 to 1.


Other evaluation parameters can be viewed by clicking on the Train Report.

Here we can see the confusion matrix and training accuracy of individual classes after training.

Testing the Model

To test the model, simply enter the input values in the “Testing” panel and click on the “Predict” button.

The model will return the probability of the input belonging to the classes.

Export in Block Coding

Click on the “Export Model” button on the top right of the Testing box, and PictoBlox will load your model into the Block Coding Environment, provided you opened the ML Environment from the Block Coding Environment.

Code

The idea is simple: we’ll add the image samples as backdrops in the “Backdrops” column, then keep cycling through the backdrops and classifying the image on the stage.

  1. Add testing images in the backdrop and delete the default backdrop.
  2. Now, come back to the coding tab and select the Tobi sprite.
  3. We’ll start by adding a when flag clicked block from the Events palette.
  4. Add switch backdrop to () block from the Looks palette. Select any image.
  5. Add a forever block from the  Control palette.
  6. Inside the forever block add an analyze image from () block from the Machine Learning palette.
  7. Add two blocks of say () for () seconds from the Looks palette.
  8. Inside each say block, add a join () () block from the Operators palette.
  9. In the first empty slot of each join block, write a descriptive statement; in the second, add the identified class block from the Machine Learning palette.
  10. Finally, add the next backdrop block from the Looks palette below the say blocks.

Final Result

You can build more applications on top of this waste classifier.

Read More
The example shows how to use pose recognition in PictoBlox to maintain a yoga pose for a particular time interval.

Script

The idea is simple: we’ll create a new sprite and add one costume image for each class to it; this sprite will be displayed on the stage according to the user’s input. We’ll also rename each costume image according to its pose. A plain-Python sketch of the pose-hold countdown appears after the steps below.

  1. Add testing images to the backdrop and delete the default backdrop.
  2. Now, come back to the coding tab and select the Tobi sprite.
  3. We’ll start by adding a when flag clicked block from the Events palette.
  4. Make a new variable “count” by choosing the “Make a Variable” option from the Variables palette.
  5. Add the “hide variable ()” block from the Variables palette. Select count.
  6. Add the “turn () video on stage with () transparency” block from the Machine Learning palette. Select the off option at the first empty place, and for the second, write a 0 value.
  7. Add an “ask () and wait” block from the Sensing palette. Write an appropriate statement in an empty place.
  8. Add the “if () then” block from the control palette for checking the user’s input.
  9. In the empty place of the “if () then” block, add a condition checking block from the operators palette block. At the first empty place, put the answer block from the sensing palette, and at the second place, write an appropriate statement.
  10. Inside the “if () then” block, add a “broadcast ()” block from the Events palette block. Select the “New message” option and write an appropriate statement for broadcasting a message to another sprite.
  11. Add the “turn () video on stage with () transparency” block from the Machine Learning palette. Select the on option at the first empty place, and for the second, write a 0 value.
  12. Add the “() key points” block from the Machine Learning palette. Select the show option.
  13. Add the “Set () to ()” block from the Variables palette. Select the count option at the first empty place, and for the second, write a 30 value.
  14. Add the Show variable () block from the Variables palette. Select count.
  15. Add “forever” from the Control palette.
  16. Inside the “forever” block, add an “analyse image from ()” block from the Machine Learning palette. Select the Web camera option.
  17. Inside the “forever” block, add an “if () then” block from the Control palette.
  18. In the empty place of the “if () then” block, add an “is identified class ()” block from the Machine Learning palette. Select the appropriate class from the options.
  19. Inside the “if () then” block, add a “say ()” block from the Looks palette. Write an appropriate statement in the empty place.
  20. Add “change () by ()” from the Variables palette. Select the count option in the first empty place, and for the second, write a -1 value.

  21. Add the “if () then” block from the control palette for checking the user’s input.
  22. In the empty place of the “if () then” block, add a condition checking block from the Operators palette. In the first empty place, put the count variable from the Variables palette, and in the second place, write 0.
  23. Add the “Set () to ()” block from the Variables palette. Select the count option at the first empty place, and for the second, write a 30 value.
  24. Add the “turn () video on stage with () transparency” block from the Machine Learning palette. Select the off option at the first empty place, and for the second, write a 0 value.
  25. Inside the “if () then” block, add a “say ()” block from the Looks palette. Write an appropriate statement in the empty place.
  26. Add the “() key points” block from the Machine Learning palette. Select the hide option.
  27. Add the “stop ()” block from the Control palette. Select the “all” option.
  28. Repeat the “if () then” block code for the other classes, making the appropriate changes for each class, and add the copied code just below it.
  29. The final block code looks like this:
  30. Now click on another sprite and write code.
  31. We’ll start writing code for this sprite by adding a when flag clicked block from the Events palette.
  32. Add the “hide” block from the Looks palette.
  33. Write new code in the same sprite for each class: add the “when I receive ()” block from the Events palette and select the appropriate class from the options.
  34. Add the “show” block from the Looks palette.
  35. Add the “switch costume to ()” block from the Looks palette. Select the appropriate class from the options.
  36. Repeat the same code for the other classes and make changes according to the class.
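
For reference, here is a plain-Python sketch of the pose-hold countdown the blocks above implement. The helper functions and class names are hypothetical stand-ins for the Machine Learning blocks, with dummy return values so the sketch can run on its own.

import random
import time

POSES = ["Tree Pose", "Warrior Pose"]   # example class names

def analyse_camera():
    # Stand-in for the "analyse image from ()" block; does nothing in this sketch.
    pass

def identified_class():
    # Stand-in for the "is identified class ()" check; returns a random class here.
    return random.choice(POSES)

def hold_pose(target_pose, hold_seconds=30):
    count = hold_seconds                          # the "count" variable, set to 30
    while True:                                   # "forever" block
        analyse_camera()
        if identified_class() == target_pose:     # "is identified class ()"
            print(f"Keep holding... {count}")     # "say ()"
            count -= 1                            # "change count by -1"
        if count == 0:
            print(f"Done! You held {target_pose} for {hold_seconds} counts.")
            return                                # "stop all"
        time.sleep(1)                             # roughly one check per second

hold_pose("Tree Pose", hold_seconds=5)            # short hold for demonstration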

Final Result

Read More
The example shows how to use pose recognition in PictoBlox to make a jumping jack counter.

Introduction

In this example project, we are going to create a machine learning model that can count the number of jumping jack activities from the camera feed.

Pose Classifier in Machine Learning Environment

The pose Classifier is the extension of the ML Environment used for classifying different body poses into different classes.

The model works by analyzing your body position with the help of 17 data points.
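
These 17 data points correspond to the standard body key points used by common pose-estimation models (the COCO keypoint set); the list below assumes PictoBlox uses this standard set.

# The standard 17 COCO body key points (assumption: PictoBlox uses this set).
KEY_POINTS = [
    "nose",
    "left eye", "right eye",
    "left ear", "right ear",
    "left shoulder", "right shoulder",
    "left elbow", "right elbow",
    "left wrist", "right wrist",
    "left hip", "right hip",
    "left knee", "right knee",
    "left ankle", "right ankle",
]
assert len(KEY_POINTS) == 17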

Pose Classifier Workflow

  1. Open PictoBlox and create a new file.
  2. You can click on “Machine Learning Environment” to open it.
  3. Click on “Create New Project“.
  4. A window will open. Type in a project name of your choice and select the “Pose Classifier” extension. Click the “Create Project” button to open the Pose Classifier window.
  5. You shall see the Pose Classifier workflow with two classes already made for you. Your environment is all set. Now it’s time to upload the data.

Class in Pose Classifier

Class is the category in which the Machine Learning model classifies the poses. Similar poses are put in one class.

There are 2 things that you have to provide in a class:

  1. Class Name: The name to which the class will be referred.
  2. Pose Data: This data can be taken from the webcam or uploaded from local storage.

Adding Data to Class

You can perform the following operations to manipulate the data into a class.

  1. Naming the Class: You can rename the class by clicking on the edit button.
  2. Adding Data to the Class: You can add the data using the Webcam or by Uploading the files from the local folder.
    1. Webcam:

Training the Model

After data is added, it’s fit to be used in model training. To do this, we have to train the model. By training the model, we extract meaningful information from the body pose, and that in turn updates the weights. Once these weights are saved, we can use our model to make predictions on previously unseen data.

The accuracy of the model should increase over time. The x-axis of the graph shows the epochs, and the y-axis represents the accuracy at the corresponding epoch. Remember, the higher the reading in the accuracy graph, the better the model. The range of accuracy is 0 to 1.

Testing the Model

To test the model, simply enter the input values in the “Testing” panel and click on the “Predict” button.

The model will return the probability of the input belonging to the classes.

Export in Block Coding

Click on the “Export Model” button on the top right of the Testing box, and PictoBlox will load your model into the Block Coding Environment, provided you opened the ML Environment from the Block Coding Environment.

Script

The idea is simple: after running the code, we will do the jumping jack activity in front of the camera, and the Tobi sprite will say the count of jumping jacks. A plain-Python sketch of the counting logic appears after the steps below.

  1. Select the Tobi sprite.
  2. We’ll start by adding a when flag clicked block from the Events palette.
  3. Make a new variable “count” by choosing the “Make a Variable” option from the Variables palette.
  4. Also make a new variable “temp” by choosing the “Make a Variable” option from the Variables palette.
  5. Add “forever” from the Control palette.
  6. Inside the “forever” block, add an “analyse image from ()” block from the Machine Learning palette. Select the Web camera option.
  7. Inside the “forever” block, add an “if () then” block from the Control palette.
  8. In the empty place of the “if () then” block, add a “key () pressed?” block from the Sensing palette. Select the ‘q’ key from the options.
  9. Inside the “if () then” block, add the “Set () to ()” block from the Variables palette. Select the count option at the first empty place, and for the second, write a 0 value.
  10. Also add the “Set () to ()” block from the Variables palette. Select the temp option at the first empty place, and for the second, write a 0 value.
  11. Inside the “forever” block, add a new “if () then” block from the Control palette.
  12. In the empty place of the “if () then” block, add an “is identified class ()” block from the Machine Learning palette. Select the ‘Upper hand‘ option from the options.
  13. Inside the “if () then” block, add the “Set () to ()” block from the Variables palette. Select the temp option at the first empty place, and for the second, write a 1 value.
  14. Inside the “forever” block, add a new “if () then” block from the Control palette.
  15. In the empty place of the “if () then” block, add an “is identified class ()” block from the Machine Learning palette. Select the ‘Down hand‘ option from the options.
  16. Inside the “if () then” block, add another “if () then” block from the Control palette.
  17. In the empty place of the “if () then” block, add a condition checking block from the operators palette block. At the first empty place, put the temp variable from the variables palette, and at the second place, write a 1 value.
  18. Inside the “if () then” block, add the “Set () to ()” block from the Variables palette. Select the count option at the first empty place, and for the second, write a 1 value.
  19. Also add the “Set () to ()” block from the Variables palette. Select the temp option at the first empty place, and for the second, write a 0 value.
  20. Inside the “if () then” block, add a “say () for () seconds” block from the Looks palette. At the first empty place, add the “join () ()” block from the Operators palette, and at the second place, write a 2 value.
  21. Inside the “join () ()” block, write an appropriate statement at the first empty place, and at the second place, add the count variable from the Variables palette.
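
For reference, here is a plain-Python sketch of the counting state machine the blocks above describe, assuming the count increases by one on each completed up-down cycle. The helper functions are hypothetical stand-ins for the Machine Learning blocks, with dummy return values so the sketch can run on its own.

import random

def analyse_camera():
    # Stand-in for the "analyse image from ()" block; does nothing in this sketch.
    pass

def identified_class():
    # Stand-in for the "is identified class ()" check; returns a random class here.
    return random.choice(["Upper hand", "Down hand"])

def jumping_jack_counter(target=5):
    count = 0   # completed jumping jacks (the "count" variable)
    temp = 0    # set to 1 once the "Upper hand" pose has been seen (the "temp" variable)
    while count < target:                         # the block script uses a "forever" loop
        analyse_camera()
        pose = identified_class()
        if pose == "Upper hand":
            temp = 1
        elif pose == "Down hand" and temp == 1:   # one complete up-down cycle
            count += 1
            temp = 0
            print(f"Jumping jacks: {count}")      # "say join () count for 2 seconds"

jumping_jack_counter()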

Final Output

     

Read More
The example shows how to use an audio classifier in PictoBlox to make the Bird Audio Classifier Bot.

Introduction

In this example project, we are going to create a machine learning model that can classify different bird sounds from the computer’s microphone feed.

Audio Classifier in Machine Learning Environment

The Audio Classifier is an extension of the ML Environment used for classifying different bird voices.

Audio Classifier Workflow

Follow the steps below to create your own Audio Classifier Model:

  1. Open PictoBlox and create a new file.
  2. Select the Block Coding Environment as the coding environment.
  3. Select the “Open ML Environment” option under the “Files” tab to access the ML Environment.
  4. A new window will open. Type in an appropriate project name of your choice and select the “Audio Classifier” extension. Click the “Create Project” button to open the Audio Classifier Window.
  5. You shall see the Classifier workflow with two classes already made for you. Your environment is all set. Now it’s time to upload the data.
  6. As you can observe in the above image, we will add many classes for audio. We will be able to add audio samples with the help of the microphone.

Note: You can add more classes to the projects using the Add Class button.

Adding Data to Class

You can perform the following operations to manipulate the data into a class.

  1. Naming the Class: You can rename the class by clicking on the edit button.
  2. Adding Data to the Class: You can add the data using the Microphone.
  3. You can add audio samples to each class; make sure you add at least 20 samples per class for the model to run with good accuracy.

Training the Model

After data is added, it’s fit to be used in model training. To do this, we have to train the model. By training the model, we extract meaningful information from the audio samples, and that in turn updates the weights. Once these weights are saved, we can use our model to make predictions on data previously unseen.

The accuracy of the model should increase over time. The x-axis of the graph shows the epochs, and the y-axis represents the accuracy at the corresponding epoch. Remember, the higher the reading in the accuracy graph, the better the model. The range of accuracy is 0 to 1.

Testing the Model

To test the model, simply use the microphone directly and check the classes as shown in the image below:

You will be able to test the difference in audio samples recorded from the microphone as shown below:

Export in Block Coding

Click on the “Export Model” button on the top right of the Testing box, and PictoBlox will load your model into the Block Coding Environment, provided you opened the ML Environment from the Block Coding Environment.

Script

The idea is simple: we’ll create a new sprite and add one costume image for each class to it; this sprite will be displayed on the stage according to the predicted bird class. We’ll also rename each costume image according to its bird class. A plain-Python sketch of the class-to-costume dispatch appears after the steps below.

  1. Add a bird image as another sprite and upload at least one costume image for each bird class.
  2. Now, come back to the coding tab and select the Tobi sprite.
  3. We’ll start by adding a when flag clicked block from the Events palette.
  4. Add the “open recognition window” block from the Machine Learning palette.
  5. Add a “when () is predicted” block from the Machine Learning palette. Select the appropriate class from the options.
  6. Add a “say () for () seconds” block from the Looks palette. Write an appropriate statement in the empty place.
  7. Repeat the same code for other classes and make changes according to the class.
  8. For the “BackNoise” class, don’t add any statement in the empty place of the “say () for () seconds” block.
  9. The final code of the “Tobi” sprite is:
  10. Now click on another sprite and write code.
  11. We’ll start writing code for this sprite by adding a “when () is predicted” block from the Machine Learning palette.
  12. Add the “switch costume to ()” block from the Looks palette. Select the appropriate class from the options.
  13. Repeat the same code for other classes and make changes according to the class.
  14. The final code of the other sprite is:
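
For reference, here is a plain-Python sketch of the class-to-costume dispatch both sprites implement. The class names (other than “BackNoise”), messages, and helper functions are hypothetical examples, not a real PictoBlox API.

import random

BIRD_CLASSES = ["Sparrow", "Crow", "BackNoise"]   # example classes; "BackNoise" is from the tutorial

MESSAGES = {                    # Tobi's "say () for () seconds" text per class
    "Sparrow": "I can hear a sparrow!",
    "Crow": "I can hear a crow!",
    "BackNoise": "",            # background noise: say nothing
}

def recognise_audio():
    # Stand-in for the Audio Classifier's recognition window; returns a random class here.
    return random.choice(BIRD_CLASSES)

def switch_costume_to(name):
    # Stand-in for the second sprite's "switch costume to ()" block.
    print(f"[costume] switched to {name}")

for _ in range(5):              # in PictoBlox this runs whenever a class is predicted
    predicted = recognise_audio()
    if MESSAGES.get(predicted):
        print(MESSAGES[predicted])        # Tobi's say block
        switch_costume_to(predicted)      # other sprite's costume change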

Final Output

Read More