Table of Contents

switch backdrop to ()

Description

The block changes the Stage’s backdrop to the specified one.

Example

Learn how to explore the fundamentals of digital colours using PictoBlox’s Image Processing extension. This project demonstrates how to break down any image into its primary colour components by converting the stage image into its red, green, and blue channels in a continuous loop!

Introduction

This project teaches students how to break down a full-colour image into its three primary colour channels — red, green, and blue — using PictoBlox’s AI-powered Image Processing extension. Each channel reveals different information about the original image, laying the groundwork for understanding how neural networks and machine learning models process visual data.

The block-coded script runs automatically when the green flag is clicked, cycling through three stages:

  •       Stage 1 — Red Channel: Displays how the red colour intensity is distributed across the image
  •       Stage 2 — Green Channel: Reveals the green colour information within the scene
  •       Stage 3 — Blue Channel: Shows the blue component that completes the full RGB model

Prerequisites

Step 1: Open PictoBlox

Step 2: Add the Image Processing Extension

  1.   Click the Add Extension button (purple icon) at the bottom left of the screen.
  2.   Search for ‘Image Processing’ in the extension library.
  3.   Click Add. The Image Processing extension will now appear in your block palette.

Step 3: Prepare Your Backdrops

This project uses three custom backdrops labelled:

  •       Image Processing Red Channel
  •       Image Processing Green Channel
  •       Image Processing Blue Channel

To add these backdrops, click the Choose a Backdrop button and upload or create each one. These backdrops act as contextual labels that tell the audience which channel is currently being displayed.

STEM Concepts Behind RGB Image Processing

What is the RGB colour model?

Every digital image on a screen is made up of millions of tiny pixels. Each pixel contains three values – one for red, one for green, and one for blue – each ranging from 0 to 255. By combining these three channels, screens produce over 16 million colours. This is the foundation of all digital imaging systems, from smartphone cameras to satellite imagery used in AI-based remote sensing.
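Outside PictoBlox, the same idea can be sketched in a few lines of NumPy (a minimal sketch; the 2×2 image below is made up for illustration):

```python
import numpy as np

# A tiny 2x2 "image": each pixel stores three values (R, G, B) from 0-255.
img = np.array([
    [[255, 0, 0], [0, 255, 0]],      # a pure red and a pure green pixel
    [[0, 0, 255], [255, 255, 255]],  # a pure blue and a white pixel
], dtype=np.uint8)

# Extracting a channel is just indexing the last axis.
red, green, blue = img[..., 0], img[..., 1], img[..., 2]

print(img.shape)  # (2, 2, 3): height x width x 3 channels
print(red)        # the red intensity of every pixel
```

Combining the three channels back together reproduces the original image, which is exactly what the project's ‘Reset All’ step relies on.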

How AI Uses Colour Channels in Computer Vision

Modern AI and machine learning models, particularly Convolutional Neural Networks (CNNs), process images channel by channel. When a model identifies a face, detects a tumour in an X-ray, or classifies a plant species, it analyses the R, G, and B pixel values independently before combining that information. Understanding this concept is the first step toward building your own image classification models using AI.

RED: The red channel highlights warm tones and high-contrast edges in an image. It is particularly important in face recognition and skin-tone detection algorithms.

GREEN: The green channel carries the most luminance (brightness) information, which is why it is weighted most heavily when images are converted to grayscale. It also feeds green-based vegetation indices used in environmental AI.

BLUE: The blue channel captures sky, water, and cool-tone details. It plays a key role in atmospheric correction in satellite imagery and underwater computer vision applications.

Step-by-Step Code Walkthrough

The complete script runs inside a ‘forever’ loop triggered by the ‘When Green Flag Clicked’ event. Here is how to structure each section:

Red Channel Block Sequence

  1. Place a ‘Wait 1 Seconds’ block from the Control palette to give the project a startup pause.
  2. Add a ‘Switch Backdrop to “Image Processing Red Channel”’ block from the Looks palette.
  3. Add a ‘Convert Stage Image to Red Channel’ block from the Image Processing palette.
  4. Place a ‘Wait 3 Seconds’ block to let you observe the result.
  5. Add a ‘Reset All’ block to restore the original image.
  6. Place another ‘Wait 1 Seconds’ block before transitioning to the next channel.

Green Channel Block Sequence

Repeat the same pattern as the Red Channel, but swap in the Green Channel backdrop and conversion block inside the forever loop:

  1. Add a ‘Switch Backdrop to “Image Processing Green Channel”’ block.
  2. Place a ‘Wait 1 Seconds’ block from the Control palette.
  3. Add a ‘Convert Stage Image to Green Channel’ block.
  4. Add ‘Wait 3 Seconds’, then ‘Reset All’, then ‘Wait 1 Seconds’.

Blue Channel Block Sequence

Complete the cycle with the Blue Channel:

  1. Add a ‘Switch Backdrop to “Image Processing Blue Channel”’ block.
  2. Place a ‘Wait 1 Seconds’ block from the Control palette.
  3. Add a ‘Convert Stage Image to Blue Channel’ block.
  4. Add ‘Wait 3 Seconds’, then ‘Reset All’, then ‘Wait 1 Seconds’.
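For readers who prefer text code, the whole three-channel cycle above can be sketched in Python. This is a minimal sketch: switch_backdrop, convert_stage_to, and reset_all are hypothetical stand-ins for the Looks, Image Processing, and Reset All blocks, not a real PictoBlox API.

```python
import time

log = []  # records the order of actions, mirroring the block script

# Hypothetical stand-ins for the PictoBlox blocks described above.
def switch_backdrop(name):
    log.append("backdrop:" + name)

def convert_stage_to(channel):
    log.append("convert:" + channel)

def reset_all():
    log.append("reset")

def run_one_cycle(pause=0.0, observe=0.0):  # 1 s and 3 s in the project
    for channel in ("Red", "Green", "Blue"):
        time.sleep(pause)                                   # 'Wait 1 Seconds'
        switch_backdrop("Image Processing %s Channel" % channel)
        convert_stage_to(channel)         # 'Convert Stage Image to ... Channel'
        time.sleep(observe)                                 # 'Wait 3 Seconds'
        reset_all()                                         # 'Reset All'

run_one_cycle()  # in PictoBlox, the 'forever' block repeats this cycle
```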

Understanding the Output: What Does Each Channel Show?

When you run the project, you will observe three distinct visual outputs. Here is how to interpret what you see through an AI and computer vision lens:

  • Red Channel Output: Areas that appear bright white have high red intensity (values close to 255). Dark areas contain little to no red. This is similar to how thermal imaging highlights heat signatures.
  • Green Channel Output: The green channel often appears brightest in natural scenes because human eyes are most sensitive to green light. AI-based plant health monitoring systems exploit this channel heavily.
  • Blue Channel Output: Blue dominates in sky, water, and artificially lit scenes. Security cameras and face-detection systems often rely on blue-channel data in low-light conditions.
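A quick way to check these observations numerically is to compare the mean value of each channel (a sketch using a synthetic "sky" patch; in practice the pixel values would come from a real image file):

```python
import numpy as np

# A synthetic 4x4 "sky" patch: strong blue, moderate green, little red.
sky = np.zeros((4, 4, 3), dtype=np.uint8)
sky[..., 0] = 40    # red channel
sky[..., 1] = 120   # green channel
sky[..., 2] = 220   # blue channel

# The channel with the highest mean is the one that appears brightest
# when displayed on its own, as in the project's output.
means = sky.reshape(-1, 3).mean(axis=0)
print(dict(zip("RGB", means.round(1))))
```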

Real-World AI and Machine Learning Connections

This project is not just a visual exercise; it directly connects to how professional AI systems operate:
  • Medical Imaging AI: MRI and CT scan analysis software separates image channels to detect anomalies in tissue that are invisible to the naked eye.
  • Autonomous Vehicles: Self-driving car AI processes RGB frames from cameras at 30+ frames per second, extracting lane markings, traffic signs, and obstacles from each colour channel.
  • Agricultural Drones: Precision farming drones use colour channel separation to assess crop health using the Normalised Difference Vegetation Index (NDVI), a key metric derived from red and near-infrared image data.
  • Facial Recognition Systems: Modern biometric systems use colour channel decomposition as a preprocessing step before running face-matching algorithms.
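As a concrete example of channel arithmetic, NDVI is computed per pixel from the near-infrared and red bands (the reflectance values below are illustrative; real values come from multispectral sensors):

```python
import numpy as np

# NDVI = (NIR - Red) / (NIR + Red); values near +1 indicate healthy
# vegetation, values near 0 indicate bare soil, water, or stressed plants.
nir = np.array([0.60, 0.55, 0.20])  # made-up near-infrared reflectance
red = np.array([0.10, 0.12, 0.18])  # made-up red reflectance

ndvi = (nir - red) / (nir + red)
print(ndvi.round(3))  # high NDVI for the first two pixels, low for the third
```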

Output

 

Conclusion

Congratulations! You have successfully built an RGB image processing project in PictoBlox that demonstrates one of the most fundamental concepts in artificial intelligence and machine learning — colour channel analysis. In this project, you learned how digital images are structured using the RGB colour model, how AI systems process image data channel by channel, and how to use block coding to automate a sequential image analysis workflow. These skills form the foundation of computer vision — one of the fastest-growing areas of AI and technology today.

Read More
Learn how to use the Object Detection extension of PictoBlox's Machine Learning Environment to count specific targets in images by writing Block code.

Introduction

In this example project, we are going to create a Machine Learning model that can count the number of nuts and bolts from the camera feed or images.

Object Detection in Machine Learning Environment

Object Detection is an extension of the ML environment that allows users to detect objects in images and group them into different classes using bounding boxes. This feature is available only in the desktop version of PictoBlox for Windows, macOS, or Linux. As part of the Object Detection workflow, users can add classes, upload data, train the model, test the model, and export the model to the Block Coding Environment.

Opening Object Detection Workflow

Alert: The Machine Learning Environment for model creation is available only in the desktop version of PictoBlox for Windows, macOS, or Linux. It is not available in the Web, Android, and iOS versions.

 

Follow the steps below:

  1. Open PictoBlox and create a new file.
  2.  Select the Block Coding Environment.
  3. Select the “Open ML Environment” option under the “Files” tab to access the ML Environment.
  4. You’ll be greeted with the following screen.
    Click on “Create New Project”.
  5. A window will open. Type in a project name of your choice and select the “Object Detection” extension. Click the “Create Project” button to open the Object Detection window.

You shall see the Object Detection workflow. Your environment is all set.
   

Collecting and Uploading the Data

Uploading images from your device’s hard drive

  1. Now it’s time to upload the images that you downloaded from another source or captured with your camera. Click on the “Select from device” option in the Import Images block.
  2. Now click on “Choose images from your computer” and go to the folder where you downloaded your images.
  3. Select all the images you want to upload, then click on the “Open” option.
  4. The PictoBlox page now looks like this:

Making Bounding Box – Labelling Images

  1. Labelling is essential for Object Detection. Click on the “Bbox” tab to create the labels.

    Note: Notice how the targets are marked with a bounding box. The labels appear in the “Label List” column on the right.

  2. To create a bounding box in an image, click on the “Create Box” button and draw the box. After the box is drawn, go to the “Label List” column, click on the edit button, and type in a name for the object under the bounding box. This name will become a class. Once you’ve entered the name, click on the tick mark to label the object.
  3. File List: It shows the list of images available for labeling in the project.
  4. Label List: It shows the list of Labels created for the selected image.
  5. Class Info: It shows the summary of the classes with the total number of bounding boxes created for each class.
  6.   You can view all the images under the “Image” tab.

Training the Model

In Object Detection, the model must locate and identify all the targets in the given image. This makes Object Detection a complex task to execute. Hence, the hyperparameters work differently in the Object Detection Extension.

  1. Go to the “Train” tab. You should see the following screen:
  2. Click on the “Train New Model” button.
  3. Select all the classes, and click on “Generate Dataset”.
  4.  Once the dataset is generated, click “Next”. You shall see the training configurations.
  5. Specify your hyperparameters. If the numbers go out of range, PictoBlox will show a message.
  6. Click “Create”. A new model is created according to the hyperparameter values you entered.
  7. Click “Start Training”. If the desired performance is reached, click on the “Stop” button.
  8. After the training is completed, you’ll see four loss graphs:
    1. Total Loss
    2. Regularization Loss
    3. Localization Loss
    4. Classification Loss

    Note: Training an Object Detection model is a time-consuming task. It might take a couple of hours to complete.

  9. You’ll be able to see the graphs under the “Graphs” panel. Click on the buttons to view the graph.

    1. Graph between “Total loss” and “Number of steps”.
    2. Graph between “Regularization loss” and “Number of steps”.
    3. Graph between “Localization loss” and “Number of steps”.
    4. Graph between “Classification loss” and “Number of steps”.
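In detectors of this kind, the Total Loss curve is typically just the sum of the other three (the numbers below are illustrative, not from a real training run):

```python
# Illustrative loss values at one training step.
localization_loss   = 0.42  # how far predicted boxes are from the true boxes
classification_loss = 0.31  # how often boxes receive the wrong class
regularization_loss = 0.05  # penalty that keeps the model's weights small

total_loss = localization_loss + classification_loss + regularization_loss
print(round(total_loss, 2))  # all four curves should trend downward
```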

Evaluating the Model

Now, let’s move to the “Evaluate” tab. You can view True Positives, False Negatives, and False Positives for each class here along with metrics like Precision and Recall.
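Precision and Recall are derived directly from those counts. For one class, with illustrative counts:

```python
# Illustrative counts for a single class from the Evaluate tab.
tp = 18  # true positives: correct detections
fp = 2   # false positives: detections with no matching target
fn = 4   # false negatives: targets the model missed

precision = tp / (tp + fp)  # of everything detected, how much was correct
recall    = tp / (tp + fn)  # of everything present, how much was found
print(round(precision, 2), round(recall, 2))
```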

Testing the Model

The model can be tested by uploading an image from your device:

 

Export in Block Coding

Click on the “PictoBlox” button, and PictoBlox will load your model into the Block Coding Environment (provided you opened the ML Environment from the Block Coding Environment).

 

 

Code

The idea is simple: we’ll add image samples in the “Backdrops” column. We’ll keep cycling through the backdrops and keep predicting the image on the stage.

  1. Add testing images as backdrops and delete the default backdrop.
  2. Now, come back to the coding tab and select the Tobi sprite.
  3. We’ll start by adding a when flag clicked block from the Events palette.
  4. Add () bounding box block from the Machine Learning palette. Select the “hide” option.
  5. Follow it up with a set detection threshold to () block from the Machine Learning palette and set the drop-down to 0.5.
  6. Add switch backdrop to () block from the Looks palette. Select any image.
  7. Add a forever block from the Control palette.
  8. Add the analyse image from () block from the Machine Learning palette. Select the “stage” option.
  9. Add the () bounding box block from the Machine Learning palette. Select the “show” option.
  10. Add two say () for () seconds blocks from the Looks palette.
  11. Inside each say block, add a join () () block from the Operators palette.
  12. In the first empty slot of each join block, write a label; in the second slot, add the get number of () detected? block from the Machine Learning palette.
  13. Select the “Nut” option for the first get number of () detected? block and the “Bolt” option for the second.
  14. Add the () bounding box block from the Machine Learning palette. Select the “hide” option.
  15. Finally, add the next backdrop block from the Looks palette below the () bounding box block.
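The counting logic of steps 10–13 can be sketched in plain Python. Here get_number_detected and say are hypothetical stand-ins for the Machine Learning and Looks blocks; in PictoBlox the trained model supplies the detections.

```python
# One entry per bounding box found by the (hypothetical) detector.
detections = ["Nut", "Bolt", "Nut", "Nut"]

def get_number_detected(class_name):
    # Stand-in for the 'get number of () detected?' block.
    return detections.count(class_name)

def say(message):
    # Stand-in for the 'say () for () seconds' block.
    print(message)

# The two say blocks inside the forever loop, for one backdrop:
say("Nuts detected: " + str(get_number_detected("Nut")))
say("Bolts detected: " + str(get_number_detected("Bolt")))
```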

Final Result

 

 

 

Read More
Learn how to build a Machine Learning model which can identify the type of flower from the camera feed or images using PictoBlox.

Introduction

In this example project we are going to create a Machine Learning Model which can identify the type of flower from the camera feed or images.

 

Image Classifier in Machine Learning Environment

Image Classifier is an extension of the ML environment that allows users to classify images into different classes. This feature is available only in the desktop version of PictoBlox for Windows, macOS, or Linux. As part of the Image Classifier workflow, users can add classes, upload data, train the model, test the model, and export the model to the Block Coding Environment.

Let’s create the ML model.

Opening Image Classifier Workflow

Alert: The Machine Learning Environment for model creation is available only in the desktop version of PictoBlox for Windows, macOS, or Linux. It is not available in the Web, Android, and iOS versions.

Follow the steps below:

  1. Open PictoBlox and create a new file.
  2. Select the coding environment as Block Coding Environment.
  3. Select the “Open ML Environment” option under the “Files” tab to access the ML Environment.
  4. You’ll be greeted with the following screen.
    Click on “Create New Project”.
  5. A window will open. Type in a project name of your choice and select the “Image Classifier” extension. Click the “Create Project” button to open the Image Classifier window.
  6. You shall see the Image Classifier workflow with two classes already made for you. Your environment is all set. Now it’s time to upload the data.

Class in Image Classifier

Class is the category in which the Machine Learning model classifies the images. Similar images are put in one class.

There are 2 things that you have to provide in a class:

  1. Class Name: It’s the name by which the class will be referred.
  2. Image Data: This data can either be taken from the webcam or by uploading from local storage.

Note: You can add more classes to the projects using the Add Class button.

Adding Data to Class

You can perform the following operations to manipulate the data into a class.

  1. Naming the Class: You can rename the class by clicking on the edit button.
  2. Adding Data to the Class: You can add data either by uploading files from a local folder or by capturing them from the webcam.

Training the Model

After data is added, it’s fit to be used in model training. In order to do this, we have to train the model. By training the model, we extract meaningful information from the images, and that in turn updates the weights. Once these weights are saved, we can use our model to make predictions on data previously unseen.
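At its core, "updating the weights" means nudging each weight against a gradient computed from the training images. A one-weight sketch with made-up numbers:

```python
# One gradient-descent step for a single weight (illustrative numbers).
weight = 0.8
gradient = 0.5        # direction in which the loss increases
learning_rate = 0.1   # a hyperparameter, like those in the Advanced tab

weight = weight - learning_rate * gradient  # step against the gradient
print(weight)  # the saved weights are what the exported model uses
```

Repeating this step over many epochs is what drives the accuracy graph upward.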

However, before training the model, there are a few hyperparameters that you should be aware of. Click on the “Advanced” tab to view them.


It’s a good idea to train an image classification model for a high number of epochs. The model can be trained in both JavaScript and Python. In order to choose between the two, click on the switch on top of the Training panel.

Note: These hyperparameters can affect the accuracy of your model to a great extent. Experiment with them to find what works best for your data.

Alert: Dependencies must be downloaded to train the model in Python; JavaScript will be chosen by default.

The accuracy of the model should increase over time. The x-axis of the graph shows the epochs, and the y-axis represents the accuracy at the corresponding epoch. Remember, the higher the reading in the accuracy graph, the better the model. Accuracy ranges from 0 to 1.


Other evaluation parameters can be seen by clicking on Train Report.

Here we can see the confusion matrix and the training accuracy of individual classes after training.

Testing the Model

To test the model, simply enter the input values in the “Testing” panel and click on the “Predict” button.

The model will return the probability of the input belonging to the classes.
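For example, with three illustrative class names, the reported probabilities sum to 1, and the class with the highest probability becomes the prediction:

```python
# Illustrative class probabilities returned for one test image.
probabilities = {"Rose": 0.08, "Sunflower": 0.87, "Tulip": 0.05}

# The 'identified class' block reports the most probable class.
identified = max(probabilities, key=probabilities.get)
print(identified, probabilities[identified])
```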

Export in Block Coding

Click on the “Export Model” button on the top right of the Testing box, and PictoBlox will load your model into the Block Coding Environment (provided you opened the ML Environment from the Block Coding Environment).

Code

The idea is simple: we’ll add image samples in the “Backdrops” column. We’ll keep cycling through the backdrops and keep classifying the image on the stage.

  1. Add testing images as backdrops and delete the default backdrop.
  2. Now, come back to the coding tab and select the Tobi sprite.
  3. We’ll start by adding a when flag clicked block from the Events palette.
  4. Add switch backdrop to () block from the Looks palette. Select any image.
  5. Add a forever block from the Control palette.
  6. Inside the forever block, add an analyze image from () block from the Machine Learning palette.
  7. Add two say () for () seconds blocks from the Looks palette.
  8. Inside each say block, add a join () () block from the Operators palette.
  9. In the first empty slot of each join block, write a label; in the second slot, add the identified class block from the Machine Learning palette.
  10. Finally, add the next backdrop block from the Looks palette below the say blocks.

Final Result

 

 

Read More
Learn how to build a Machine Learning model which can identify the type of waste from the camera feed or images using PictoBlox.

Introduction

In this example project we are going to create a Machine Learning Model which can identify the type of waste from the camera feed or images.

Image Classifier in Machine Learning Environment

Image Classifier is an extension of the ML environment that allows users to classify images into different classes. This feature is available only in the desktop version of PictoBlox for Windows, macOS, or Linux. As part of the Image Classifier workflow, users can add classes, upload data, train the model, test the model, and export the model to the Block Coding Environment.

Let’s create the ML model.

Opening Image Classifier Workflow

Alert: The Machine Learning Environment for model creation is available only in the desktop version of PictoBlox for Windows, macOS, or Linux. It is not available in the Web, Android, and iOS versions.

Follow the steps below:

  1. Open PictoBlox and create a new file.
  2. Select the coding environment as Block Coding Environment.
  3. Select the “Open ML Environment” option under the “Files” tab to access the ML Environment.
  4. You’ll be greeted with the following screen.
    Click on “Create New Project”.
  5. A window will open. Type in a project name of your choice and select the “Image Classifier” extension. Click the “Create Project” button to open the Image Classifier window.
  6. You shall see the Image Classifier workflow with two classes already made for you. Your environment is all set. Now it’s time to upload the data.

Class in Image Classifier

Class is the category in which the Machine Learning model classifies the images. Similar images are put in one class.

There are 2 things that you have to provide in a class:

  1. Class Name: It’s the name by which the class will be referred.
  2. Image Data: This data can either be taken from the webcam or by uploading from local storage.

Note: You can add more classes to the projects using the Add Class button.

Adding Data to Class

You can perform the following operations to manipulate the data into a class.

  1. Naming the Class: You can rename the class by clicking on the edit button.
  2. Adding Data to the Class: You can add data either by uploading files from a local folder or by capturing them from the webcam.

Training the Model

After data is added, it’s fit to be used in model training. In order to do this, we have to train the model. By training the model, we extract meaningful information from the images, which in turn updates the weights. Once these weights are saved, we can use our model to make predictions on data previously unseen.

However, before training the model, there are a few hyperparameters that you should be aware of. Click on the “Advanced” tab to view them.


It’s a good idea to train an image classification model for a high number of epochs. The model can be trained in both JavaScript and Python. In order to choose between the two, click on the switch on top of the Training panel.

Note: These hyperparameters can affect the accuracy of your model to a great extent. Experiment with them to find what works best for your data.

Alert: Dependencies must be downloaded to train the model in Python; JavaScript will be chosen by default.

The accuracy of the model should increase over time. The x-axis of the graph shows the epochs, and the y-axis represents the accuracy at the corresponding epoch. Remember, the higher the reading in the accuracy graph, the better the model. Accuracy ranges from 0 to 1.


Other evaluation parameters can be seen by clicking on Train Report.

Here we can see the confusion matrix and training accuracy of individual classes after training.

Testing the Model

To test the model, simply enter the input values in the “Testing” panel and click on the “Predict” button.

The model will return the probability of the input belonging to the classes.

Export in Block Coding

Click on the “Export Model” button on the top right of the Testing box, and PictoBlox will load your model into the Block Coding Environment (provided you opened the ML Environment from the Block Coding Environment).

Code

The idea is simple: we’ll add image samples in the “Backdrops” column. We’ll keep cycling through the backdrops and keep classifying the image on the stage.

  1. Add testing images as backdrops and delete the default backdrop.
  2. Now, come back to the coding tab and select the Tobi sprite.
  3. We’ll start by adding a when flag clicked block from the Events palette.
  4. Add switch backdrop to () block from the Looks palette. Select any image.
  5. Add a forever block from the Control palette.
  6. Inside the forever block, add an analyze image from () block from the Machine Learning palette.
  7. Add two say () for () seconds blocks from the Looks palette.
  8. Inside each say block, add a join () () block from the Operators palette.
  9. In the first empty slot of each join block, write a label; in the second slot, add the identified class block from the Machine Learning palette.
  10. Finally, add the next backdrop block from the Looks palette below the say blocks.

Final Result

You can build more applications on top of this waste classifier.
