switchbackdrop()

Function Definition: switchbackdrop(backdrop_name = "backdrop1")

Parameters

Name            Type     Description                        Expected Values   Default Value
backdrop_name   string   The name of the target backdrop.   String            "backdrop1"

Description

The function changes the Stage’s backdrop to the specified one.
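A minimal sketch of how the call looks in a PictoBlox Python script. The `Sprite` class below is a hypothetical stand-in so the example runs outside PictoBlox; in PictoBlox itself, `Sprite` is provided by the environment.

```python
# Hypothetical stand-in for PictoBlox's Sprite class, for illustration only.
# In PictoBlox, Sprite is provided by the environment and controls real
# sprites and the Stage.
class Sprite:
    def __init__(self, name):
        self.name = name
        self.backdrop = "backdrop1"          # assumed default backdrop name

    def switchbackdrop(self, backdrop_name="backdrop1"):
        self.backdrop = backdrop_name        # the Stage now shows this backdrop

stage = Sprite('Stage')
stage.switchbackdrop("forest")               # switch to a backdrop named "forest"
print(stage.backdrop)                        # forest
```

The backdrop name passed in must match the name of a backdrop already added to the Stage, as done later in this example.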

Example

Learn how to create a Machine Learning Model to count nuts and bolts from the camera feed or images. See how to open the ML environment, collect and upload data, label images, train the model, and export the Python script.

Introduction

In this example project we are going to explore how to create a Machine Learning Model which can count the number of nuts and bolts from the camera feed or images. You will learn how to open the ML environment, collect and upload data, label images, train the model, and export the Python script.

Object Detection in Machine Learning Environment

Object Detection is an extension of the ML environment that allows users to detect images and make bounding boxes into different classes. This feature is available only in the desktop version of PictoBlox for Windows, macOS, or Linux. As part of the Object Detection workflow, users can add classes, upload data, train the model, test the model, and export the model to the Block Coding Environment.

Opening Image Detection Workflow

Alert: The Machine Learning Environment for model creation is available only in the desktop version of PictoBlox for Windows, macOS, or Linux. It is not available in the Web, Android, and iOS versions.

Follow the steps below:

  1. Open PictoBlox and create a new file.
  2. Select the Python Coding Environment.
  3. Select the “Open ML Environment” option under the “Files” tab to access the ML Environment.
  4. You’ll be greeted with the following screen. Click on “Create New Project”.
  5. A window will open. Type in a project name of your choice and select the “Object Detection” extension. Click the “Create Project” button to open the Object Detection window.

You shall see the Object Detection workflow. Your environment is all set.

Collecting and Uploading the Data

The left side panel will give you three options to gather images:

  1. Using the Webcam to capture images.
  2. Uploading images from your device’s hard drive.
  3. Downloading images from a repository of images.

Uploading images from your device’s hard drive

  1. Now it’s time to upload the images you downloaded from another source or captured with your camera. Click on the “Select from device” option in the Import Images block.
  2. Click on “Choose images from your computer” and go to the folder where you downloaded your images.
  3. Select all the images you want to upload, then click “Open”.
  4. The PictoBlox page now looks like this:

Making Bounding Box – Labelling Images

A bounding box is an imaginary rectangle that serves as a point of reference for object detection and creates a collision box for that object.

We draw these rectangles over images, outlining the object of interest within each image by defining its X and Y coordinates. This makes it easier for machine learning algorithms to find what they’re looking for, determine collision paths, and conserve valuable computing resources.
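The trained model used later in this project returns each bounding box as normalized `[ymin, xmin, ymax, xmax]` values between 0 and 1, which must be scaled by the image size to get pixel coordinates. A small sketch of that conversion (the sample box values are made up):

```python
# Convert a normalized bounding box to pixel coordinates.
# Box format assumed here: [ymin, xmin, ymax, xmax], each value in 0..1.
def box_to_pixels(box, width, height):
    ymin, xmin, ymax, xmax = box
    start = (int(xmin * width), int(ymin * height))   # top-left corner
    end = (int(xmax * width), int(ymax * height))     # bottom-right corner
    return start, end

# Example: a box covering the lower-right quarter-ish of a 320x320 image
start, end = box_to_pixels([0.25, 0.5, 0.75, 1.0], width=320, height=320)
print(start, end)   # (160, 80) (320, 240)
```

This is exactly the arithmetic the exported Python script performs before drawing rectangles with OpenCV.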

  1. Labeling is essential for Object Detection. Click on the “Bbox” tab to make the labels.

    Notes:

    1. Notice how the targets are marked with a bounding box. The labels appear in the “Label List” column on the right.
    2. A single image can have multiple targets. Every target must be enclosed in a bounding box.
    3. The bounding boxes are color coded.
  2. To create a bounding box in an image, click on the “Create Box” button. After the box is drawn, go to the “Label List” column, click on the edit button, and type in a name for the object under the bounding box. This name will become a class. Once you’ve entered the name, click on the tick mark to label the object.

  3. Options in Bounding Box:

    1. Auto Save: This option auto-saves the bounding boxes with their labels as they are created. You do not need to save the images manually while this option is enabled.
    2. Manual Save: This option disables the auto-saving of the bounding boxes. When this option is enabled you have to save the image before moving on to the next image for labeling.
    3. Create Box: This option activates the cursor on the image so you can draw a bounding box. When the box is created, you can label it in the Label List.
    4. Save Box: This option saves all the bounding boxes created under the Label List.
  4. File List: It shows the list of images available for labeling in the project.
  5. Label List: It shows the list of Labels created for the selected image.
  6. Class Info: It shows the summary of the classes with the total number of bounding boxes created for each class.
  7. You can view all the images under the “Image” tab.

Training the Model

In Object Detection, the model must locate and identify all the targets in the given image. This makes Object Detection a complex task to execute. Hence, the hyperparameters work differently in the Object Detection Extension.

  1. Go to the “Train” tab. You should see the following screen:
  2. Click on the “Train New Model” button.
  3. Select all the classes, and click on “Generate Dataset”.
  4. Once the dataset is generated, click “Next”. You shall see the training configurations.
  5. Observe the hyperparameters.
    1. Model name – The name of the model.
    2. Batch size – The number of training samples utilized in one iteration. The larger the batch size, the more RAM is required.
    3. Number of iterations – The number of times your model will iterate through a batch of images.
    4. Number of layers – The number of layers in your model. Use more layers for large models.
  6. Specify your hyperparameters. If the numbers go out of range, PictoBlox will show a message.
  7. Click “Create”. This creates a new model with the hyperparameter values you entered.
  8. Click “Start Training”. If the desired performance is reached before training finishes, click the “Stop” button.
  9. After the training is completed, you’ll see four loss graphs:
    1. Total Loss
    2. Regularization Loss
    3. Localization Loss
    4. Classification Loss
    Note: Training an Object Detection model is a time-consuming task. It might take a couple of hours to complete the training.
  10. You’ll be able to see the graphs under the “Graphs” panel. Click on the buttons to view the graph.

    1. The graph between “Total loss” and “Number of steps”.
    2. The graph between “Regularization loss” and “Number of steps”.
    3. The graph between “Localization loss” and “Number of steps”.
    4. The graph between “Classification loss” and “Number of steps”.
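The four losses are related: in the TensorFlow Object Detection API, which this workflow appears to build on (given the `saved_model` export used later), the total loss is the sum of the localization, classification, and regularization terms. A toy illustration with made-up values:

```python
# Toy illustration of how the loss graphs relate to each other.
# The values below are made up; real values come from the training graphs.
localization_loss = 0.42     # how far predicted boxes are from the labeled boxes
classification_loss = 0.31   # how often the predicted class is wrong
regularization_loss = 0.12   # penalty that discourages overly large weights

total_loss = localization_loss + classification_loss + regularization_loss
print(round(total_loss, 2))  # 0.85
```

Watching all four curves trend downward together is a good sign that training is progressing.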

Evaluating the Model

Now, let’s move to the “Evaluate” tab. You can view True Positives, False Negatives, and False Positives for each class here along with metrics like Precision and Recall.
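Precision and Recall are computed from those per-class counts. A quick sketch with hypothetical numbers for one class:

```python
# Hypothetical evaluation counts for one class (e.g. "Bolt")
true_positives = 18    # bolts correctly detected
false_positives = 2    # detections that were not actually bolts
false_negatives = 4    # bolts the model missed

# Precision: what fraction of the model's detections were correct
precision = true_positives / (true_positives + false_positives)
# Recall: what fraction of the real objects the model found
recall = true_positives / (true_positives + false_negatives)

print(round(precision, 2))   # 0.9
print(round(recall, 2))      # 0.82
```

High precision with low recall means the model is cautious but misses objects; the reverse means it finds most objects but also raises false alarms.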

Testing the Model

Test the model by uploading an image from your device:

Export in Python Coding

    1. Click on the “PictoBlox” button, and PictoBlox will load your model into the Python Coding Environment (provided you opened the ML Environment from the Python Coding Environment).
    2. Make a new folder (named “img”) in the Python script’s project files and add a number of images for testing.
    3. Also add the same images as backdrops and delete the default backdrop.
    4. Modify the Python script according to your requirements.

Code

####################imports####################
# Do not change
sprite = Sprite('Tobi')
sprite1 = Sprite('Stage')
import cv2
import numpy as np
import tensorflow.compat.v2 as tf
import os
import time
# Do not change
####################imports####################

#Following are the model and video capture configurations
# Do not change

detect_fn = tf.saved_model.load(
    "saved_model")

# cap = cv2.VideoCapture(0)                                          # Using device's camera to capture video
# if (cap.isOpened()==False):
#   print("Please change default value of VideoCapture(k) (k = 0, 1, 2, 3, etc). Or no webcam device found")

folder = "img"
for filename in os.listdir(folder):
  img = cv2.imread(os.path.join(folder,filename))
  if img is not None:
    x = filename
    a = 0                                                          # Bolt count
    b = 0                                                          # Nut count
    y = x.split('.')                                               # Image name without the extension, used as the backdrop name

    font = cv2.FONT_HERSHEY_SIMPLEX
    fontScale = 1
    color_box = (50,50,255)
    color_text = (255,255,255)
    thickness = 2

    class_list = ['Bolt','Nut']                                    # List of all the classes

    # The per-image computations happen here
    image_np = img
    height, width, channels = image_np.shape                       # Get height, width
    image_resized = cv2.resize(image_np,(320,320))                 # Resize image to model input size
    image_resized = cv2.cvtColor(image_resized, cv2.COLOR_BGR2RGB) # Convert BGR image array to RGB image array
    input_tensor = tf.convert_to_tensor(image_resized)             # Convert image to tensor
    input_tensor = input_tensor[tf.newaxis, ...]                   # Expanding the tensor dimensions
    
    detections = detect_fn(input_tensor)                           #Pass image to model
    
    num_detections = int(detections.pop('num_detections'))         #Postprocessing
    detections = {key: value[0, :num_detections].numpy() for key, value in detections.items()}
    detections['num_detections'] = num_detections
    detections['detection_classes'] = detections['detection_classes'].astype(np.int64)
    width = 320
    height = 320
    # Draw rectangle around each detected object
    for j in range(len(detections['detection_boxes'])):
      # Set minimum threshold to 0.3
      if(detections['detection_scores'][j] > 0.3):
        # Starting and end point of detected object
        starting_point = (int(detections['detection_boxes'][j][1]*width),int(detections['detection_boxes'][j][0]*height))
        end_point = (int(detections['detection_boxes'][j][3]*width),int(detections['detection_boxes'][j][2]*height))
        # Class name of detected object
        className = class_list[detections['detection_classes'][j]-1]
        # Starting point of text
        print(className)
        if(className=="Bolt"):
          a=a+1
        elif(className=="Nut"):
          b=b+1
        starting_point_text = (int(detections['detection_boxes'][j][1]*width),int(detections['detection_boxes'][j][0]*height)-5)
        # Draw rectangle and put text
        image_resized = cv2.rectangle(image_resized, starting_point, end_point,color_box, thickness)
        image_resized = cv2.putText(image_resized,className, starting_point_text, font,fontScale, color_text, thickness, cv2.LINE_AA)
        # Show image in new window
    cv2.imshow("Detection Window",image_resized)
    sprite1.switchbackdrop(y[0])
    print(a)
    print(b)

    if cv2.waitKey(25) & 0xFF == ord('q'):                          # Press 'q' to close the classification window
      break
    time.sleep(2)
    sprite.say("Total number of bolts is "+str(a))
    time.sleep(1)
    sprite.say("Total number of Nuts is "+str(b))
    time.sleep(5)
    sprite.say("")
cv2.waitKey(5)                                               
cv2.destroyAllWindows()                                             # Closes input window

Logic

The example demonstrates how to count nuts and bolts in images displayed on the Stage. The following are the key steps:

  1. Creates a sprite object named “Tobi”. A sprite is a graphical object that can be animated or displayed on the screen.
  2. Creates another sprite object named ‘Stage’. It represents the backdrop.
  3. Imports the ‘time’ module, which provides functions to work with time-related operations using import time.
  4. Imports the ‘os’ module, which provides functions for working with folders and the files inside them.
  5. Make a new folder in the project files and upload testing images from your computer, as well as the same images in the backdrop.
  6. One by one, go through all uploaded images using the ‘for’ loop.
  7. Store images in the variable ‘img’ and their labels in the variable ‘filename’.
  8. Initialize two count variables named ‘a’ and ‘b’ with a value of 0; they store the counts of bolts and nuts in the image.
  9. Split the label of the image and store only the name of the image in a variable (name y), which is used to change the backdrop.
  10. Define the new height (which is equal to 320) and width (which is equal to 320) of the detection window so we can easily see bounding boxes on the detected image.
  11. Write a condition in the code to count the number of nuts and bolts.
  12. If a bolt is detected, it increases the bolt count variable (a), and if a nut is detected, it increases the nut count variable (b).
  13. Switch backdrop according to image name using a predefined function (sprite1.switchbackdrop()) for changing backdrop in PictoBlox.
  14. Show an updated detection window in the output, which contains bounding boxes for detected objects.
  15. Call a predefined function (sprite.say()) so that ‘Tobi’ says the number of nuts and bolts.
  16. Also adds waiting time so we can clearly see the output of the model.
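Step 9 above derives the backdrop name by splitting the file name at the dot. A small sketch of that step, plus a more robust alternative (`os.path.splitext`) for file names that may contain extra dots; the example file name is made up:

```python
import os

filename = "bolts_and_nuts.test.png"   # hypothetical image file name

# The approach used in the script: split at every dot and keep the first part.
# Note this truncates names that contain extra dots.
y = filename.split('.')
print(y[0])                             # bolts_and_nuts

# A more robust alternative: strip only the extension
name, ext = os.path.splitext(filename)
print(name)                             # bolts_and_nuts.test
print(ext)                              # .png
```

Whichever form you use, the resulting name must match the backdrop name added in the Export step, since it is passed to sprite1.switchbackdrop().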

Final Result

Conclusion

Creating a Machine Learning Model to count nuts and bolts can be both complex and time-consuming. Through the steps demonstrated in this project, you can create your own Machine Learning Model that can detect and count nuts and bolts in an image. Once trained, you can export the model into the Python Coding Environment, where you can tweak it further to give you the desired output. Try creating a Machine Learning Model of your own today and explore the possibilities of Object Detection in PictoBlox!
