Make sure you have Color Signatures and Color Codes configured with your AI Vision Sensor so they can be used in your project. To learn more about how to configure them, you can read the articles below:
- Configuring Color Signatures with the AI Vision Sensor in VEXcode EXP
- Configuring Color Codes with the AI Vision Sensor in VEXcode EXP
The AI Vision Sensor can also detect AI Classifications and AprilTags. To learn how to enable these detection modes, go here:
- AI Classifications with the AI Vision Sensor in VEXcode EXP
- AprilTags with the AI Vision Sensor in VEXcode EXP
Obtain Visual Data with the AI Vision Sensor
Every AI Vision Sensor command will start with the name of the configured AI Vision Sensor. For all the examples in this article, the name of the AI Vision Sensor used will be AIVision1.
takeSnapshot
The takeSnapshot method takes a picture of what the AI Vision Sensor is currently seeing and pulls data from that snapshot that can then be used in a project. When a snapshot is taken, you need to specify what type of object the AI Vision Sensor should collect data on:
- A Color Signature or Color Code - These Visual Signatures start with the name of the AI Vision Sensor, a double underscore, and then the name of the Visual Signature, for example: AIVision1__Blue.
- AI Classifications - aivision::ALL_AIOBJS
- AprilTags - aivision::ALL_TAGS
Taking a snapshot will create an array of all of the detected objects that you specified. For instance, if you wanted to detect a "Blue" Color Signature and the AI Vision Sensor detected three different blue objects, data from all three would be put in the array.
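For example, a snapshot that collects AprilTag or AI Classification data instead would use one of the arguments listed above. A minimal sketch, assuming both detection modes have been enabled in the AI Vision Utility:
// Collect data on every AprilTag currently in view.
AIVision1.takeSnapshot(aivision::ALL_TAGS);
// Collect data on every AI Classification currently in view.
AIVision1.takeSnapshot(aivision::ALL_AIOBJS);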
In the following example, a snapshot is taken of the "Blue" Color Signature from the AI Vision Sensor named AIVision1. It displays the number of objects detected in the array and captures a new snapshot every 0.5 seconds.
while (true) {
  // Get a snapshot of all Blue Color objects.
  AIVision1.takeSnapshot(AIVision1__Blue);
  // Check to make sure an object was detected in the snapshot before pulling data.
  if (AIVision1.objectCount > 0) {
    Brain.Screen.clearScreen();
    Brain.Screen.setCursor(1, 1);
    Brain.Screen.print(AIVision1.objectCount);
  }
  // Capture a new snapshot every 0.5 seconds.
  wait(0.5, seconds);
}
objects
Every object from a snapshot has different properties that can be used to report information about that object. The objects array allows you to access these properties.
The available properties are as follows:
- id
- centerX and centerY
- originX and originY
- width
- height
- angle
- exists
- score
To access an object's property, use the name of the AI Vision Sensor, followed by the objects array, and then the object's index.
The object index indicates which specific object's property you want to retrieve. After taking a snapshot, the AI Vision Sensor automatically sorts objects by size. The largest object is assigned index 0, with smaller objects receiving higher index numbers.
For example, calling the largest object's width would be AIVision1.objects[0].width.
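As a sketch of how indexing works, the snippet below prints the width of every object from the most recent snapshot, largest first. It assumes the AIVision1 sensor and Blue Color Signature used throughout this article:
// Print the width of every detected Blue object, largest first.
AIVision1.takeSnapshot(AIVision1__Blue);
Brain.Screen.clearScreen();
for (int i = 0; i < AIVision1.objectCount; i++) {
  // Index 0 is the largest object; higher indexes are smaller objects.
  Brain.Screen.setCursor(i + 1, 1);
  Brain.Screen.print(AIVision1.objects[i].width);
}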
id
The id property is only available for AprilTags and AI Classifications.
For an AprilTag, the id property represents the detected AprilTag's ID number.
Identifying specific AprilTags allows for selective navigation. You can program your robot to move towards certain tags while ignoring others, effectively using them as signposts for automated navigation.
For AI Classifications, the id property represents the specific type of AI Classification detected.
Identifying specific AI Classifications allows the robot to only focus on specific objects, such as only wanting to navigate towards a red Buckyball, not a blue one.
Go to these articles for more information on AprilTags and AI Classifications and how to enable their detection in the AI Vision Utility.
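As a sketch of selective navigation, the snippet below approaches only one specific AprilTag. It assumes AprilTag detection has been enabled in the AI Vision Utility; the tag ID 3 is a hypothetical value chosen for illustration:
// Approach only the AprilTag with ID 3, ignoring all others.
AIVision1.takeSnapshot(aivision::ALL_TAGS);
if (AIVision1.objectCount > 0) {
  if (AIVision1.objects[0].id == 3) {
    Drivetrain.drive(forward);
  } else {
    Drivetrain.stop();
  }
}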
centerX and centerY
These are the center coordinates of the detected object, in pixels.
CenterX and CenterY coordinates help with navigation and positioning. The AI Vision Sensor has a resolution of 320 x 240 pixels.
An object closer to the AI Vision Sensor will have a higher CenterY coordinate than an object that is farther away.
In this example, because the center of the AI Vision Sensor's view is (160, 120), the robot will turn right until a detected object's centerX coordinate is greater than 150 pixels, but less than 170 pixels.
while (true) {
  // Get a snapshot of all Blue Color objects.
  AIVision1.takeSnapshot(AIVision1__Blue);
  // Check to make sure an object was detected in the snapshot before pulling data.
  if (AIVision1.objectCount > 0) {
    // Stop once the object is roughly centered; otherwise keep turning right.
    if (AIVision1.objects[0].centerX > 150.0 && AIVision1.objects[0].centerX < 170.0) {
      Drivetrain.stop();
    } else {
      Drivetrain.turn(right);
    }
  }
  wait(5, msec);
}
originX and originY
OriginX and OriginY are the coordinates of the top-left corner of the detected object, in pixels.
OriginX and OriginY coordinates help with navigation and positioning. By combining these coordinates with the object's Width and Height, you can determine the size of the object's bounding box. This can help with tracking moving objects or navigating between objects.
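As a sketch, the snippet below prints the bounding box of the largest detected object: its top-left corner comes straight from OriginX and OriginY, and its bottom-right corner is the origin plus the width and height. It assumes the Blue Color Signature used throughout this article and that these properties report integer pixel values:
// Report the bounding box of the largest detected Blue object.
AIVision1.takeSnapshot(AIVision1__Blue);
if (AIVision1.objectCount > 0) {
  Brain.Screen.clearScreen();
  Brain.Screen.setCursor(1, 1);
  // Top-left corner of the bounding box.
  Brain.Screen.print("Origin: %d, %d", AIVision1.objects[0].originX, AIVision1.objects[0].originY);
  Brain.Screen.setCursor(2, 1);
  // Bottom-right corner: origin plus width and height.
  Brain.Screen.print("Corner: %d, %d", AIVision1.objects[0].originX + AIVision1.objects[0].width, AIVision1.objects[0].originY + AIVision1.objects[0].height);
}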
width and height
These are the width and height of the detected object, in pixels.
The width and height measurements help identify different objects. For example, a Buckyball will have a larger height than a Ring.
Width and height also indicate an object's distance from the AI Vision Sensor. Smaller measurements usually mean the object is farther away, while larger measurements suggest it's closer.
In this example, the width of the object is used for navigation. The robot will approach the object until the width has reached a specific size before stopping.
while (true) {
  // Get a snapshot of all Blue objects.
  AIVision1.takeSnapshot(AIVision1__Blue);
  // Check to make sure an object was detected in the snapshot before pulling data.
  if (AIVision1.objectCount > 0) {
    // Drive forward until the object appears at least 250 pixels wide.
    if (AIVision1.objects[0].width < 250.0) {
      Drivetrain.drive(forward);
    } else {
      Drivetrain.stop();
    }
  }
  wait(5, msec);
}
angle
The angle property is only available for Color Codes and AprilTags.
This represents the orientation of the detected Color Code or AprilTag.
You can see how the robot is oriented relative to the Color Code or AprilTag and make navigation decisions accordingly.
For instance, if a Color Code isn't detected at a proper angle, the robot may not be able to properly pick up the object it represents.
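As a sketch, the snippet below only drives toward a Color Code when it appears close to level. It assumes a configured Color Code named AIVision1__BlueGreen (a hypothetical name), that angle is reported in degrees with 0 meaning level, and a 10-degree tolerance chosen for illustration:
// Only approach the Color Code when it is within 10 degrees of level.
AIVision1.takeSnapshot(AIVision1__BlueGreen);
if (AIVision1.objectCount > 0) {
  if (AIVision1.objects[0].angle < 10.0 || AIVision1.objects[0].angle > 350.0) {
    Drivetrain.drive(forward);
  } else {
    Drivetrain.stop();
  }
}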
score
The score property is used when detecting AI Classifications with the AI Vision Sensor.
The confidence score indicates how certain the AI Vision Sensor is about its detection. For example, a score of 99% means the sensor is nearly certain of an object's AI Classification. You can use this score to ensure your robot only focuses on highly confident detections.
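As a sketch, the snippet below ignores any detection the sensor is not highly confident about. It assumes AI Classification detection has been enabled in the AI Vision Utility, that score is reported as a percentage, and a 95% threshold chosen for illustration:
// Only react to AI Classifications with a confidence score of at least 95%.
AIVision1.takeSnapshot(aivision::ALL_AIOBJS);
if (AIVision1.objectCount > 0) {
  if (AIVision1.objects[0].score >= 95) {
    Drivetrain.drive(forward);
  } else {
    Drivetrain.stop();
  }
}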
exists
The exists property reports whether a specified Visual Signature was detected in the last taken snapshot.
This lets you check whether an object was present in the most recent snapshot. The property returns true when the object exists and false when it does not.
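As a sketch, the snippet below reports whether a Blue object was present in the most recent snapshot, using the Blue Color Signature from earlier examples:
// Report whether a Blue object was present in the last snapshot.
AIVision1.takeSnapshot(AIVision1__Blue);
Brain.Screen.clearScreen();
Brain.Screen.setCursor(1, 1);
if (AIVision1.objects[0].exists) {
  Brain.Screen.print("Blue object detected");
} else {
  Brain.Screen.print("No Blue object detected");
}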
objectCount
The objectCount property returns the number of detected objects in the last snapshot.
In this example, two objects have been detected with the "Blue" Color Signature, so both will be put in the array when the takeSnapshot method is used.
This code snippet continuously updates the EXP Brain's screen with the number of detected objects. Based on the example above, it will repeatedly print 2, indicating that two objects have been detected.
while (true) {
  // Get a snapshot of all Blue objects.
  AIVision1.takeSnapshot(AIVision1__Blue);
  Brain.Screen.clearScreen();
  Brain.Screen.setCursor(1, 1);
  // Check to make sure an object was detected in the snapshot before pulling data.
  if (AIVision1.objectCount > 0) {
    Brain.Screen.print(AIVision1.objectCount);
  }
  wait(5, msec);
}