Make sure you have Color Signatures and Color Codes configured with your AI Vision Sensor so they can be used in your project. To learn more about how to configure them, you can read the articles below:
- Configuring Color Signatures with the AI Vision Utility in VEXcode EXP
- Configuring Color Codes with the AI Vision Utility in VEXcode EXP
The AI Vision Sensor can also detect AI Classifications and AprilTags. To learn how to enable these detection modes, see the articles below:
- AI Classifications with the AI Vision Sensor in VEXcode EXP
- AprilTags with the AI Vision Sensor in VEXcode EXP
Obtain Visual Data with the AI Vision Sensor
Every AI Vision Sensor command will start with the name of the configured AI Vision Sensor. For all the examples in this article, the name of the AI Vision Sensor used will be ai_vision_1.
take_snapshot
The take_snapshot method takes a picture of what the AI Vision Sensor is currently seeing and pulls data from that snapshot that can then be used in a project. When a snapshot is taken, you need to specify what type of object the AI Vision Sensor should collect data on:
- A Color Signature or Color Code - these Visual Signatures start with the name of the AI Vision Sensor, a double underscore, and then the name of the Visual Signature, for example: ai_vision_1__Blue.
- AI Classifications - AiVision.ALL_AIOBJS
- AprilTags - AiVision.ALL_TAGS
Taking a snapshot will create a tuple of all of the detected objects that you specified. For instance, if you wanted to detect a "Blue" Color Signature, and the AI Vision Sensor detected 3 different blue objects, data from all three would be put in the tuple.
In this example, the variable vision_objects stores a tuple containing the detected "Purple" Color Signatures from the AI Vision Sensor named ai_vision_1. It displays the number of objects detected and captures a new snapshot every 0.5 seconds.
while True:
    # Get a snapshot of all Purple Color Signatures and store it in vision_objects.
    vision_objects = ai_vision_1.take_snapshot(ai_vision_1__Purple)
    # Check to make sure an object was detected in the snapshot before pulling data.
    if vision_objects[0].exists:
        brain.screen.clear_screen()
        brain.screen.set_cursor(1, 1)
        brain.screen.print("Object count:", len(vision_objects))
    wait(0.5, SECONDS)
Object Properties
Every object from a snapshot has different properties that can be used to report information about that object. The available properties are as follows:
- id
- centerX and centerY
- originX and originY
- width
- height
- angle
- score
- exists
To access an object's property, use the variable name storing the tuple, followed by the object index.
The object index indicates which specific object's property you want to retrieve. After taking a snapshot, the AI Vision Sensor automatically sorts objects by size. The largest object is assigned index 0, with smaller objects receiving higher index numbers.
For example, calling the largest object's width inside the vision_objects variable would be: vision_objects[0].width.
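As a quick sketch (assuming a Color Signature named ai_vision_1__Blue is configured), you could print the width of every object in a snapshot, from largest to smallest:
vision_objects = ai_vision_1.take_snapshot(ai_vision_1__Blue)
# Check to make sure an object was detected in the snapshot before pulling data.
if vision_objects[0].exists:
    for index in range(len(vision_objects)):
        # Objects are sorted by size, so index 0 is the largest object.
        brain.screen.set_cursor(index + 1, 1)
        brain.screen.print("Object", index, "width:", vision_objects[index].width)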
id
The id property is only available for AprilTags and AI Classifications.
For an AprilTag, the id property represents the detected AprilTag's ID number.
Identifying specific AprilTags allows for selective navigation. You can program your robot to move towards certain tags while ignoring others, effectively using them as signposts for automated navigation.
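As a minimal sketch of this idea (assuming AprilTag detection is enabled in the AI Vision Utility), the robot below drives forward only while the largest detected AprilTag has the ID 1, ignoring every other tag:
while True:
    # Get a snapshot of all detected AprilTags.
    vision_objects = ai_vision_1.take_snapshot(AiVision.ALL_TAGS)
    # Only drive forward when the detected AprilTag has the ID 1.
    if vision_objects[0].exists and vision_objects[0].id == 1:
        drivetrain.drive(FORWARD)
    else:
        drivetrain.stop()
    wait(5, MSEC)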
For AI Classifications, the id property represents the specific type of AI Classification detected.
Identifying specific AI Classifications allows the robot to only focus on specific objects, such as only wanting to navigate towards a red Buckyball, not a blue one.
Go to these articles for more information on AprilTags and AI Classifications and how to enable their detection in the AI Vision Utility.
centerX and centerY
These are the center coordinates of the detected object, in pixels.
CenterX and CenterY coordinates help with navigation and positioning. The AI Vision Sensor has a resolution of 320 x 240 pixels.
An object closer to the AI Vision Sensor will appear lower in the view and have a higher CenterY coordinate than an object that is farther away.
In this example, because the center of the AI Vision Sensor's view is (160, 120), the robot will turn right until a detected object's centerX coordinate is greater than 150 pixels, but less than 170 pixels.
while True:
    # Get a snapshot of all Blue Color Signatures and store it in vision_objects.
    vision_objects = ai_vision_1.take_snapshot(ai_vision_1__Blue)
    # Check to make sure an object was detected in the snapshot before pulling data.
    if vision_objects[0].exists:
        # Check if the object isn't in the center of the AI Vision Sensor's view.
        if vision_objects[0].centerX < 150 or vision_objects[0].centerX > 170:
            # Keep turning right until the object is in the center of the view.
            drivetrain.turn(RIGHT)
        else:
            drivetrain.stop()
    wait(5, MSEC)
originX and originY
OriginX and OriginY are the coordinates of the top-left corner of the detected object, in pixels.
OriginX and OriginY coordinates help with navigation and positioning. By combining this coordinate with the object's Width and Height, you can determine the size of the object's bounding box. This can help with tracking moving objects or navigating between objects.
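As a minimal sketch (assuming a Color Signature named ai_vision_1__Blue is configured), the corners of the largest object's bounding box can be computed from these properties:
while True:
    # Get a snapshot of all Blue Color Signatures and store it in vision_objects.
    vision_objects = ai_vision_1.take_snapshot(ai_vision_1__Blue)
    # Check to make sure an object was detected in the snapshot before pulling data.
    if vision_objects[0].exists:
        # The bounding box spans from (originX, originY) at the top left to
        # (originX + width, originY + height) at the bottom right.
        right_edge = vision_objects[0].originX + vision_objects[0].width
        bottom_edge = vision_objects[0].originY + vision_objects[0].height
        brain.screen.clear_screen()
        brain.screen.set_cursor(1, 1)
        brain.screen.print("Top left:", vision_objects[0].originX, vision_objects[0].originY)
        brain.screen.set_cursor(2, 1)
        brain.screen.print("Bottom right:", right_edge, bottom_edge)
    wait(0.5, SECONDS)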
width and height
This is the width or height of the detected object in pixels.
The width and height measurements help identify different objects. For example, a Buckyball will have a larger height than a Ring.
Width and height also indicate an object's distance from the AI Vision Sensor. Smaller measurements usually mean the object is farther away, while larger measurements suggest it's closer.
In this example, the width of the object is used for navigation. The robot will approach the object until the width has reached a specific size before stopping.
while True:
    # Get a snapshot of all Blue Color Signatures and store it in vision_objects.
    vision_objects = ai_vision_1.take_snapshot(ai_vision_1__Blue)
    # Check to make sure an object was detected in the snapshot before pulling data.
    if vision_objects[0].exists:
        # Check if the largest object is still far away by measuring its width.
        if vision_objects[0].width < 250:
            # Drive closer to the object until it's wider than 250 pixels.
            drivetrain.drive(FORWARD)
        else:
            drivetrain.stop()
    wait(5, MSEC)
angle
The angle property is only available for Color Codes and AprilTags.
This represents the orientation of the detected Color Code or AprilTag.
You can check how the robot is oriented relative to the Color Code or AprilTag and make navigation decisions accordingly.
For instance, if a Color Code is detected at an unexpected angle, the robot may not be able to pick up the object it represents properly.
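As a minimal sketch (using a hypothetical Color Code named ai_vision_1__BlueGreen), the detected angle can be displayed so you can decide when the robot is aligned with the object:
while True:
    # Get a snapshot of a hypothetical "BlueGreen" Color Code.
    vision_objects = ai_vision_1.take_snapshot(ai_vision_1__BlueGreen)
    # Check to make sure an object was detected in the snapshot before pulling data.
    if vision_objects[0].exists:
        brain.screen.clear_screen()
        brain.screen.set_cursor(1, 1)
        # Report the rotation of the detected Color Code.
        brain.screen.print("Angle:", vision_objects[0].angle)
    wait(0.5, SECONDS)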
score
The score property is used when detecting AI Classifications with the AI Vision Sensor.
The confidence score indicates how certain the AI Vision Sensor is about its detection, reported as a percentage. You can use this score to ensure your robot only focuses on highly confident detections.
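As a minimal sketch (assuming AI Classification detection is enabled in the AI Vision Utility), detections below a chosen confidence score can simply be ignored:
while True:
    # Get a snapshot of all detected AI Classifications.
    vision_objects = ai_vision_1.take_snapshot(AiVision.ALL_AIOBJS)
    # Only report detections the AI Vision Sensor is highly confident about.
    if vision_objects[0].exists and vision_objects[0].score > 90:
        brain.screen.clear_screen()
        brain.screen.set_cursor(1, 1)
        brain.screen.print("Score:", vision_objects[0].score)
    wait(0.5, SECONDS)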
exists
The exists property reports whether a specified Visual Signature was detected in the last taken snapshot.
This lets you check whether any of the specified objects appeared in the previous snapshot. The property returns True when an object exists and False when it does not.
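As a minimal sketch (assuming a Color Signature named ai_vision_1__Blue is configured), the exists property can drive a simple search behavior where the robot turns right until the object appears and then stops:
while True:
    # Get a snapshot of all Blue Color Signatures and store it in vision_objects.
    vision_objects = ai_vision_1.take_snapshot(ai_vision_1__Blue)
    if vision_objects[0].exists:
        # An object was detected in the last snapshot, so stop searching.
        drivetrain.stop()
    else:
        # Nothing was detected, so keep turning to search for the object.
        drivetrain.turn(RIGHT)
    wait(5, MSEC)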