Time for which the message is displayed. If the value is 0, the speech bubble stays until another speech or thought block is activated or the stop sign is pressed.
Data type: Integer
Default value: 0
Description
The function gives its sprite a speech bubble with the specified text — the speech bubble stays until another speech or thought block is activated, or the stop sign is pressed.
sprite = Sprite('Tobi')
sprite.input("Which table you want to recite")
number = int(sprite.answer())
for i in range(1, 10):
    sprite.say(str(number) + " x " + str(i) + " = " + str(number*i), 1)
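For quick reference, here is a minimal sketch contrasting a timed message with a persistent one, assuming the duration argument behaves as described above (a value of 0 keeps the speech bubble on screen until another speech or thought block runs or the stop sign is pressed):
sprite = Sprite('Tobi')
# The bubble disappears after 2 seconds
sprite.say("Hello!", 2)
# With a duration of 0 the bubble stays until another speech or thought
# block is activated or the stop sign is pressed (per the parameter description above)
sprite.say("I will stay on screen", 0)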
Learn about AI-based face expression detection, a technology that uses artificial intelligence algorithms and computer vision techniques to analyze images or videos of human faces and recognize emotions or expressions.
AI-based face expression detection refers to the use of artificial intelligence algorithms and computer vision techniques to analyze images or videos of human faces and recognize the emotions or expressions being displayed. The technology can detect and analyze subtle changes in facial features, such as eye movement, mouth shape, and eyebrow position, to determine whether a person is happy, sad, angry, surprised, or expressing other emotions.
Discover the various fields that utilize this technology, including psychology, marketing, and human-computer interaction. Additionally, read about the logic and code behind face detection with a camera feed, including the initialization of parameters, face detection library, loop execution, and if-else conditions. Explore how the technology continuously analyzes emotions, and how the humanoid responds with different facial expressions and movements.
Code
sprite = Sprite('Tobi')
fd = FaceDetection()
quarky = Quarky()
import time
humanoid = Humanoid(7, 2, 6, 3, 8, 1)
# Turn the video ON with 0% transparency
fd.video("ON", 0)
fd.enablebox()
# Run this script forever
while 1:
    fd.analysecamera()           # Analyse image from camera
    sprite.say(fd.expression())  # Say the face expressions
    if fd.isexpression(1, "happy"):    # if face expression is happy
        quarky.showemotion("happy")    # show happy emotion on Quarky
        humanoid.action("dance2", 1000, 1)
    elif fd.isexpression(1, 'sad'):
        quarky.showemotion("crying")
        humanoid.action("updown", 1000, 1)
    elif fd.isexpression(1, 'surprise'):
        quarky.showemotion('surprise')
        humanoid.action("moonwalker", 1000, 1)
    elif fd.isexpression(1, 'angry'):
        quarky.showemotion('angry')
        humanoid.action("flapping2", 1000, 1)
    else:
        humanoid.home()

# Comment the above script, uncomment the below script, and
# run this script to clear the stage and the Quarky display
# fd.disablebox()
# fd.video("off")
# quarky.cleardisplay()
Logic
The example demonstrates how to use face detection with a camera feed. The key steps are as follows:
The code uses face detection to recognize facial expressions and control a humanoid and a display device called Quarky accordingly.
The program first creates the sprite, face detection, Quarky, and humanoid objects, then turns on the video with 0% transparency and enables the bounding box for face detection.
The code then enters an infinite loop where it continuously analyzes the image from the camera using face detection and says the detected facial expressions.
The code then checks whether the expression is happy, sad, surprised, or angry using if statements. If the expression is happy, the Quarky device displays a happy emotion and the humanoid performs the “dance2” action for a specified time. Similarly, for sad, surprised, and angry expressions, Quarky displays the respective emotion and the humanoid performs the associated action.
If no facial expression is detected, the humanoid is set to its “home” position. Finally, if the program needs to be stopped, the commented-out lines at the end can be run to disable the bounding box, turn off the video, and clear the Quarky display.
The Language Translator with ChatGPT is a powerful system that enables real-time translation and conversation support, facilitating multilingual communication.
The Language Translator with ChatGPT and Speech Recognition is a system that helps people communicate across languages by providing real-time translation and conversation support. It combines language translation, chatbot capabilities, and speech recognition to facilitate multilingual communication.
Language Translator Using ChatGPT is a project that trains the ChatGPT language model with multilingual data to enable it to understand and translate text between different languages. It utilizes ChatGPT’s natural language processing abilities to generate human-like responses, making it ideal for building a language translation system. The training data includes sentence pairs in different languages and their corresponding translations.
Logic
The code represents a conversation between the sprite character “Tobi” and the AI models. The sprite asks the user for a sentence, the user responds, the AI translates the sentence into Hindi, and the sprite says and speaks the result.
Follow the steps below:
Open PictoBlox and create a new file.
Choose a suitable coding environment for block-based coding.
Add the ChatGPT and Text to Speech extensions to your project from the extension palette located at the bottom right corner of PictoBlox.
We create an instance of the Text to Speech class called speech. This class allows us to convert text into spoken audio.
Next, we create an instance of the ChatGPT model called gpt. ChatGPT is a language model that can generate human-like text responses based on the input it receives.
The sprite asks the user to provide a sentence by using the input() function and stores the input in the variable ‘l‘.
The translatelanguage() method of the ‘gpt‘ object is then used to translate the sentence stored in ‘l’ into Hindi. The translated sentence is stored in the variable ‘result‘.
The sprite uses the say() method to display the translated sentence stored in ‘result‘.
The speak() method of the speech object is called to convert the translated sentence into audio and play it.
Code
sprite = Sprite('Tobi')
gpt = ChatGPT()
speech = TexttoSpeech()
sprite.input("Provide a Sentece i will traslated into Hindi ")
l = str(sprite.answer())
gpt.translatelanguage(l,"hindi")
result=gpt.chatGPTresult()
sprite.say(result,2)
speech.speak(result)
Learn about AI-based face expression detection, which uses artificial intelligence algorithms and computer vision techniques to analyze images or videos of human faces and recognize emotions or expressions.
AI-based face expression detection refers to the use of artificial intelligence algorithms and computer vision techniques to analyze images or videos of human faces and recognize the emotions or expressions being displayed. The technology can detect and analyze subtle changes in facial features, such as eye movement, mouth shape, and eyebrow position, to determine whether a person is happy, sad, angry, surprised, or expressing other emotions.
Discover the various fields that utilize this technology, including psychology, marketing, and human-computer interaction. Additionally, read about the logic and code behind face detection with a camera feed, including the initialization of parameters, face detection library, loop execution, and if-else conditions. Explore how the technology continuously analyzes emotions, and how the Humanoid responds with different facial expressions and movements.
Code
sprite = Sprite('Tobi')
fd = FaceDetection()
quarky = Quarky()
import time
humanoid = Humanoid(7, 2, 6, 3, 8, 1)
# Turn the video ON with 0% transparency
fd.video("ON", 0)
fd.enablebox()
# Run this script forever
while 1:
    fd.analysecamera()           # Analyse image from camera
    sprite.say(fd.expression())  # Say the face expressions
    if fd.isexpression(1, "happy"):    # if face expression is happy
        quarky.showemotion("happy")    # show happy emotion on Quarky
        humanoid.action("dance2", 1000, 1)
    elif fd.isexpression(1, 'sad'):
        quarky.showemotion("crying")
        humanoid.action("updown", 1000, 1)
    elif fd.isexpression(1, 'surprise'):
        quarky.showemotion('surprise')
        humanoid.action("moonwalker", 1000, 1)
    elif fd.isexpression(1, 'angry'):
        quarky.showemotion('angry')
        humanoid.action("flapping2", 1000, 1)
    else:
        humanoid.home()

# Comment the above script, uncomment the below script, and
# run this script to clear the stage and the Quarky display
# fd.disablebox()
# fd.video("off")
# quarky.cleardisplay()
Logic
The example demonstrates how to use face detection with a camera feed. The key steps are as follows:
Creates a sprite object named ‘Tobi’. A sprite is typically a graphical element that can be animated or displayed on a screen. It also creates a Quarky object.
Creates a face detection object named ‘fd’. This object is responsible for detecting faces in images or video using fd = FaceDetection()
Imports the ‘time’ module, which provides functions to work with time-related operations using import time.
Creates a humanoid object with specific pins assigned to control various actions of the humanoid robot.
Turns on the video display with 0% transparency for the face detection module using fd.video(“ON”, 0).
Enables the face detection module to draw boxes around detected faces using fd.enablebox().
The code enters an infinite loop using while 1, which means it will keep running indefinitely until interrupted.
Analyzes the image from the camera for face detection using fd.analysecamera().
The sprite says the detected face expressions obtained from the face detection module using sprite.say(fd.expression()).
The code checks for different face expressions using if statements and performs corresponding actions.
For example, if the face expression is determined to be “happy“, the Quarky device shows a “happy” emotion, and the humanoid performs a dance action.
Similarly, other face expressions like “sad”, “surprised”, and “angry” trigger specific emotional displays on Quarky and corresponding actions on the humanoid.
If none of the predefined face expressions match, the humanoid goes back to its default or “home” position.
Are you looking to add some fun and expressiveness to your conversations? Look no further! I’m here to help you convert any word or phrase into a colorful array of emojis. Whether you want to spice up your messages or social media posts, or simply bring a smile to someone’s face, I’ve got you covered.
Just type in the word or phrase you want to transform, and I’ll generate a delightful sequence of emojis that capture the essence of your text. Emojis are a universal language that transcends words, from happy faces to animals, objects, and everything in between.
So, let’s get started and infuse your text with a touch of emoji magic! 🎉🔥
Logic
This code allows the user to interact with the sprite and provide emojis, which are then transformed into a response using the ChatGPT model. The sprite then speaks the generated response using the provided emojis.
Open PictoBlox and create a new file.
Choose a suitable coding environment for python-based coding.
Define a sprite, Tobi.
Then, we create an instance of the ChatGPT model using the ChatGPT class.
The sprite asks the user to input the emojis they want to use by calling the input method.
The sprite uses its answer method to get the user’s response, which is then converted to a string using str().
The movieToemoji method of the ChatGPT model converts the user’s response into emojis.
The chatGPTresult method retrieves the result of the ChatGPT conversation.
Finally, the sprite says the result for 5 seconds using the say() method.
Code
sprite = Sprite('Tobi')
gpt = ChatGPT()
sprite.input("Please let me know which emojis you'd like me to use by typing them here.")
answer = str(sprite.answer())
gpt.movieToemoji(answer)
result = gpt.chatGPTresult()
sprite.say(result, 5)
Are you looking to add some fun and expressiveness to your conversations? Look no further! I’m here to help you convert any word or phrase into a colorful array of emojis. Whether you want to spice up your messages or social media posts, or simply bring a smile to someone’s face, I’ve got you covered.
Just type in the word or phrase you want to transform, and I’ll generate a delightful sequence of emojis that capture the essence of your text. Emojis are a universal language that transcends words, from happy faces to animals, objects, and everything in between.
So, let’s get started and infuse your text with a touch of emoji magic! 🎉🔥
Logic
This code allows the user to interact with the sprite and provide emojis, which are then transformed into a response using the ChatGPT model. The sprite then speaks the generated response using the provided emojis.
Open PictoBlox and create a new file.
Choose a suitable coding environment for Block-based coding.
Define a sprite, Tobi.
Then, we create an instance of the ChatGPT model using the ChatGPT class.
The sprite, named Tobi, asks for a word that can be converted into emojis by using the command sprite.input(“Please provide a word that I can convert into emojis”).
After receiving the input, Tobi uses the answer() function to fetch the user’s response.
Next, the language model, ChatGPT, is involved. Its movieToemoji() function takes the response from Tobi and converts the text into emojis.
Finally, the result of the operation performed by ChatGPT is stored in the variable result. Tobi then uses the command sprite.say(result, 5) to display the result for 5 seconds.
In summary, the code represents a scenario where Tobi the sprite asks for a word, ChatGPT converts the input into emojis, and Tobi displays the result.
Code
sprite = Sprite('Tobi')
gpt = ChatGPT()
sprite.input("Please provide a world that i can convert into an emojis")
answer=sprite.answer()
gpt.movieToemoji(answer)
result=gpt.chatGPTresult()
sprite.say(result,5)
Welcome to the Noun Detector! This powerful tool utilizes the capabilities of ChatGPT and leverages the spaCy library to identify and extract nouns from text. By employing advanced natural language processing techniques, the Noun Detector analyzes sentences and highlights the essential elements that represent people, places, objects, or concepts.
Noun Detector is designed to excel at identifying and extracting nouns from text. Experience the Noun Detector’s capabilities firsthand and unlock the power of noun extraction in your language-processing endeavors. Try it out and witness the precision and efficiency of this invaluable tool!
Code
sprite = Sprite('Tobi')
quarky = Quarky()
gpt = ChatGPT()
gpt.askOnChatGPT("AIAssistant", "Generate a simple random sentence for me")
result = gpt.chatGPTresult()
gpt.getgrammerfromtext("GrammerNoun", result)
noun = gpt.chatGPTresult()
sprite.say(result, 5)
print(result)
print(noun)
sprite.input("Identify and write the noun in the sentence")
answer = str(sprite.answer())
if answer in noun:
    sprite.say("You have a strong understanding of noun concepts. Well done!", 5)
else:
    sprite.say("Please check the terminal for the correct answer as your response is incorrect", 5)
Logic
Open PictoBlox and create a new file.
Choose a suitable coding environment for block-based coding.
We have a sprite character named Tobi.
Add the ChatGPT extensions to your project from the extension palette located at the bottom right corner of PictoBlox.
We will ask the AI assistant to create a random sentence for us.
The AI assistant will generate the sentence and identify the nouns in it.
Tobi will then say the generated sentence out loud for 5 seconds. The sentence and the identified nouns will be displayed on the screen.
Next, Tobi will ask you to identify and write the noun in the sentence. You need to type your answer.
If your answer matches any of the identified nouns, Tobi will appreciate you.
But if your answer is incorrect, Tobi will say to check the terminal.
So, give it a try and see if you can identify the noun correctly!
The Synonyms and Antonyms Word Converter is a powerful tool powered by the ChatGPT extension that allows users to effortlessly find synonyms and antonyms for words. It harnesses the capabilities of the advanced language model to provide accurate and contextually relevant word alternatives.
With the Synonyms and Antonyms Word Converter, you can expand your vocabulary, enhance your writing, and improve your communication skills. Whether you’re a writer seeking more expressive language or a student looking to diversify your word choices, this tool is designed to assist you in finding suitable alternatives quickly and easily.
Using the ChatGPT extension, the Synonyms and Antonyms Word Converter engages in interactive conversations, making it an intuitive and user-friendly tool. By providing a word as input, you can receive a list of synonyms or antonyms, depending on your preference, helping you diversify your language and convey your ideas with precision.
Code
sprite = Sprite('Tobi')
gpt = ChatGPT()
str2 = ""
var2=""
sprite.input("Please provide a word for which you would like to find synonyms and antonyms")
answer= str(sprite.answer())
gpt.getsynonymsAntonymsfromText("Synonyms",answer)
str1=gpt.chatGPTresult()
for i in str1:
if not i.isdigit():
str2 += i
print("Synonyms words are:", str2)
gpt.getsynonymsAntonymsfromText("Antonyms",answer)
var1=gpt.chatGPTresult()
for j in var1:
if not j.isdigit():
var2 += j
print("Antonyms words are:", var2)
Logic
Open PictoBlox and select the appropriate Python Coding Environment.
Create a new file.
First, an instance of the Sprite class is created, with the name “Tobi”.
To add the ChatGPT extension, click on the extension button located as shown in the image. This will enable the ChatGPT extension, allowing you to incorporate its capabilities into your project.
Two empty strings, str2 and var2, are declared to store the resulting synonyms and antonyms, respectively.
The user is prompted to provide a word for which they want to find synonyms and antonyms using the input() method from the Sprite library.
The user’s input is stored in the answer variable as a string.
The getsynonymsAntonymsfromText() method is called on the gpt object to find synonyms for the provided word. The category “Synonyms” is specified.
The resulting synonyms are obtained from gpt.chatGPTresult() and stored in the str1 variable.
The code then iterates over each character in str1 and appends non-digit characters to str2, filtering out any numerical values.
Finally, the code prints the extracted synonyms stored in str2.
The process is repeated for finding antonyms, where the getsynonymsAntonymsfromText() method is called with the category “Antonyms“, and the resulting antonyms are stored in the var1 variable.
Non-digit characters are extracted and stored in var2, which contains the antonyms.
The code concludes by printing the extracted antonyms stored in var2.
Press Run to run the code.
Sprite Tobi asks for the word you want synonyms/antonyms for.
Go to the terminal; it will display the synonyms and antonyms for the word.
The Chatbox with ChatGPT Extension is a versatile tool that enables developers to integrate AI-driven conversations into their applications. It leverages the power of the ChatGPT model to create interactive and intelligent chat experiences. With this extension, you can build chatbots, virtual assistants, or conversational agents that understand and respond to user inputs naturally.
The code creates a character named “Tobi” and uses speech recognition to understand spoken commands. It then asks a question to the AI assistant (ChatGPT) and displays the response on the screen, converts it into speech, and makes the character “Tobi” speak the response.
Choose a suitable coding environment for Block-based coding.
We create an instance of Speech recognition. This class allows us to convert spoken audio into text.
Next, we create an instance of the ChatGPT model called gpt. ChatGPT is a language model that can generate human-like text responses based on the input it receives.
Recognize speech for 5 seconds using recognize speech for ()s in the () block.
The sprite object has already been initialized.
We analyze the spoken speech for 4 seconds and assume it is in the English language using analysespeech().
We store the recognized speech as text in the variable command using speechresult().
We convert the recognized speech to lowercase and store it in the variable answer.
We ask the AI model ChatGPT (acting as an AI assistant) a question based on the user’s input stored in the variable answer.
We retrieve the response from the AI model and store it in the variable result using chatGPTresult().
We display the response on the screen using print(result).
We convert the response into speech and play it aloud using the speech object using speak().
The character “Tobi” speaks the response for 5 seconds using say().
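For reference, a rough Python-style sketch of the same flow is given below. The SpeechRecognition class name, the analysespeech() duration/language arguments, and the "en-US" language code are assumptions based on the steps above and the other examples on this page, so the exact names may differ in your PictoBlox version:
sprite = Sprite('Tobi')
gpt = ChatGPT()
speech = TexttoSpeech()
sr = SpeechRecognition()  # assumed class name for the speech recognition extension

# Listen and convert the spoken audio to text (duration in seconds, English assumed)
sr.analysespeech(4, "en-US")
command = sr.speechresult()
answer = command.lower()

# Ask ChatGPT (acting as an AI assistant) the recognized question
gpt.askOnChatGPT("AIAssistant", answer)
result = gpt.chatGPTresult()

# Show the response, speak it aloud, and have Tobi say it for 5 seconds
print(result)
speech.speak(result)
sprite.say(result, 5)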
Press the Run button to run the code.
Output
I asked ChatGPT for a joke, and it came back with an interesting response.
Sign detection is being performed using a camera and a RecognitionCards object. The RecognitionCards object is set up with a threshold value and is enabled to draw a box around the detected object. The robot uses sensors, cameras, and machine learning algorithms to detect and understand the sign, and then performs a corresponding action based on the signal detected.
These robots are often used in manufacturing, healthcare, and customer service industries to assist with tasks that require human-like interaction and decision-making.
Code
sprite = Sprite('Tobi')
quarky = Quarky()
import time
quad = Quadruped(4, 1, 8, 5, 3, 2, 7, 6)
recocards = RecognitionCards()
recocards.video("on flipped")
recocards.enablebox()
recocards.setthreshold(0.6)
quad.home()
while True:
    recocards.analysecamera()
    sign = recocards.classname()
    sprite.say(sign + ' detected')
    if recocards.count() > 0:
        if 'Go' in sign:
            quarky.drawpattern("jjjijjjjjiiijjjiiiiijjjjijjjjjjijjj")
            quad.move("forward", 1000, 1)
        elif 'Turn Left' in sign:
            quarky.drawpattern("jjjddjjjjjdddjdddddddjjjdddjjjjddjj")
            quad.move("lateral right", 1000, 1)
        elif 'Turn Right' in sign:
            quarky.drawpattern("jjggjjjjgggjjjgggggggjgggjjjjjggjjj")
            quad.move("lateral left", 1000, 1)
        elif 'U Turn' in sign:
            quarky.drawpattern("jjjbjjjjjjbjjjjbbbbbjjjbbbjjjjjbjjj")
            quad.move("backward", 1000, 1)
        else:
            quad.home()
Logic
This code is using several objects to detect and respond to certain signs or images captured by a camera.
First, it creates a Sprite object with the name ‘Tobi’, and a Quarky object. It also imports a time module.
Next, a Quadruped object is created with some parameters. Then, a RecognitionCards object is created to analyze the camera input. The object is set to enable a box around the detected object and to set the threshold of detection to 0.6.
The code then puts the Quadruped object in its home position and enters an infinite loop.
Within the loop, the code captures the camera input and uses the RecognitionCards object to analyze it. If an object is detected, the object’s class name is retrieved and used by the Sprite object to say that the object was detected.
If the count of detected objects is greater than zero, the code checks if the detected object is a specific sign.
If the object is a ‘Go‘ sign, the Quarky object will draw a specific pattern, and the Quadruped object will move forward.
If the object is a ‘Turn Left‘ sign, the Quarky object will draw a different pattern and the Quadruped object will move to the right.
If the object is a ‘Turn Right‘ sign, the Quarky object will draw another pattern, and the Quadruped object will move to the left.
Finally, if the object is a ‘U Turn‘ sign, the Quarky object will draw a fourth pattern, and the Quadruped object will move backward.
If the detected object is not any of the specific signs, the Quadruped object will return to its home position.
So, this code helps a robot understand hand signs and move in response to them!