Function Definition: analysestage()

Parameters

This function does not take any parameters.

Description

This function analyzes the image received as input from the stage for the selected image feature.

By analyzing images for image features, you can recognize the following:

  1. Brands: Brand detection uses a database of thousands of global logos to identify commercial brands in images. The Computer Vision service detects if there are brand logos in a given image; if so, it returns the brand name; else, it returns NULL.
  2. Celebrity: Celebrity detection uses a database to identify celebrities in images. The Computer Vision service detects if there is a celebrity in a given image; if so, it returns their name; else, it returns NULL.
  3. Objects: Computer Vision detects if there are objects in a given image; if so, it returns their names; else, it returns NULL.
  4. Landmarks: Landmark detection uses a database of thousands of global landmarks to identify them in images, e.g., the Taj Mahal.
  5. Image Tags: Computer Vision returns the taxonomy-based categories detected in an image. It can categorize an image broadly or specifically according to an 86-category taxonomy.
  6. Image Description: A human-readable sentence that describes the contents of the image.

Alert: This block processes the image input and updates the values returned by the other face-detection functions, so it must be placed inside a loop when building projects (see the sketch below).
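
As a minimal sketch of that loop pattern, the snippet below repeatedly calls analysestage() and then reads the freshly updated detection values. It only uses the FaceDetection calls that also appear in the full example later in this article; the print statement and the 0.2-second delay are illustrative additions, not part of the original project.

sprite = Sprite('Tobi')
import time

# Set up face detection on the stage camera
fd = FaceDetection()
fd.video("on", 0)      # turn the camera feed on
fd.enablebox()         # draw a box around each detected face
fd.setthreshold(0.5)   # detection confidence threshold

while True:
    fd.analysestage()            # analyze the current stage image
    for i in range(fd.count()):  # count/x/y/width are only updated by analysestage()
        print(fd.x(i + 1), fd.y(i + 1), fd.width(i + 1))
    time.sleep(0.2)              # small pause before the next analysis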

Example

Learn how to use face detection to control humanoid robot movements for interactive and responsive robotics applications. Get started now!

Introduction

One of the most fascinating activities is face tracking, in which Quarky detects a face and moves in the same direction as it. It sounds intriguing, so let's get started with the code for a face-tracking robot.

Logic

  1. If the face is tracked at the center of the stage, the humanoid should stay straight in its home position.
  2. As the face moves to the left side, the humanoid will also move to the left side.
  3. As the face moves to the right side, the humanoid will also move to the right side.

Code

sprite = Sprite('Tobi')
quarky = Quarky()
import time

humanoid = Humanoid(7, 2, 6, 3, 8, 1)

# Set up face detection on the stage camera
fd = FaceDetection()
fd.video("on", 0)      # turn the camera feed on
fd.enablebox()         # draw a box around each detected face
fd.setthreshold(0.5)   # detection confidence threshold
time.sleep(1)

while True:
    fd.analysestage()  # analyze the current stage image for faces
    for i in range(fd.count()):
        # Move the sprite to the detected face and scale it to the face width
        sprite.setx(fd.x(i + 1))
        sprite.sety(fd.y(i + 1))
        sprite.setsize(fd.width(i + 1))
        # The detected face width is used as the steering value; about 90 is treated as "centred"
        angle = int(fd.width(i + 1))
        if angle > 90:
            humanoid.move("left", 1000, 3)
        elif angle < 90:
            humanoid.move("right", 1000, 3)
            time.sleep(1)
        else:
            humanoid.home()

Code Explanation

  1. First, we import libraries and create objects for the robot.
  2. Next, we set up the camera and enable face detection with a 0.5 threshold.
  3. We use a loop to continuously analyze the camera feed for faces and control the humanoid’s movement based on this information.
  4. When a face is detected, the humanoid sprite moves to the face’s location, and the angle of the face is used to determine the direction of movement.
  5. If the angle is greater than 90, the humanoid moves to the left; if it is less than 90, it moves to the right; and if it is exactly 90, it returns to its home position (a variant with a small tolerance band around 90 is sketched after this list).
  6. This code demonstrates how to use face detection to control the movement of a humanoid robot and how to incorporate external inputs into a program to create more interactive and responsive robotics applications.
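
One design note on step 5: because the measured value is rarely exactly 90, the home() case above will almost never trigger. A common refinement, sketched below using the same objects and calls as the code above, is to treat a small band around 90 as "centred" so the humanoid holds its position instead of constantly stepping left and right. The ±10 tolerance is an illustrative value, not part of the original project.

DEADBAND = 10  # illustrative tolerance around the "centred" value of 90

while True:
    fd.analysestage()
    for i in range(fd.count()):
        angle = int(fd.width(i + 1))
        if angle > 90 + DEADBAND:
            humanoid.move("left", 1000, 3)
        elif angle < 90 - DEADBAND:
            humanoid.move("right", 1000, 3)
        else:
            humanoid.home()  # face roughly centred: hold the home position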

Output
