Make sure you have Color Signatures and Color Codes configured with your AI Vision Sensor so they can be used with your blocks. To learn more about how to configure them, you can read the articles below:
- Configuring Color Signatures with the AI Vision Sensor in VEXcode V5
- Configuring Color Codes with the AI Vision Sensor in VEXcode V5
The AI Vision Sensor can also detect AI Classifications and AprilTags. To learn how to enable these detection modes, read the articles below:
- AI Classifications with the AI Vision Sensor in VEXcode V5
- AprilTags with the AI Vision Sensor in VEXcode V5
Take Snapshot
The Take Snapshot block takes a picture of what the AI Vision Sensor is currently seeing and pulls data from that snapshot that can then be used in a project. When a snapshot is taken, you must specify what type of object the AI Vision Sensor should collect data on:
- Color Signature
- Color Code
- AI Classifications
- AprilTags
Taking a snapshot will create an array of all of the detected objects that you specified. For instance, if you wanted to detect a "Red" Color Signature, and the AI Vision Sensor detected 3 different red objects, data from all three would be put in the array.
For more information on how to specify between different objects, go to the "Set Object Item" section in this article.
In this example, the AI Vision Sensor will only detect objects that match its configured “Blue” Color Signature and nothing else.
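The snapshot behavior described above can be modeled in a few lines of Python. This is an illustrative sketch only; the DetectedObject fields and the take_snapshot helper are assumptions for demonstration, not the actual VEXcode V5 API.

```python
# Illustrative model of taking a snapshot: every object matching the
# requested Color Signature is collected into an array (Python list).
from dataclasses import dataclass

@dataclass
class DetectedObject:
    signature: str
    width: int
    height: int

def take_snapshot(seen_objects, signature):
    """Return every detected object that matches the requested signature."""
    return [obj for obj in seen_objects if obj.signature == signature]

# Three red objects and one blue object are in view:
seen = [
    DetectedObject("Red", 40, 40),
    DetectedObject("Red", 25, 25),
    DetectedObject("Red", 10, 10),
    DetectedObject("Blue", 30, 30),
]
red_objects = take_snapshot(seen, "Red")
print(len(red_objects))  # all three red objects end up in the array
```

As in the article's example, requesting "Red" puts data from all three red objects into the array, while the blue object is ignored.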
Data Taken From a Snapshot
Keep in mind that the AI Vision Sensor will use its last taken snapshot for any Blocks that come after. To make sure you're always getting the most up-to-date information from your AI Vision Sensor, retake your snapshot every time you want to pull data from it.
Width and Height
This is the width or height of the detected object in pixels.
The width and height measurements help identify different objects. For example, a Buckyball will have a larger height than a Ring.
Width and height also indicate an object's distance from the AI Vision Sensor. Smaller measurements usually mean the object is farther away, while larger measurements suggest it's closer.
In this example, the width of the object is used for navigation. The robot will approach the object until the width has reached a specific size before stopping.
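The approach-until-wide-enough logic above can be sketched as a simple loop. The width readings and the 100-pixel target are hypothetical values for illustration, not the real VEXcode API.

```python
# Illustrative sketch of width-based navigation: the robot keeps driving
# (one snapshot per step) until the object appears wide enough, meaning
# it is close enough to stop.
def drive_until_wide_enough(width_readings, target_width):
    """Return how many drive steps were taken before the detected
    object's width reached target_width pixels."""
    steps = 0
    for width in width_readings:      # each reading = one new snapshot
        if width >= target_width:     # object close enough: stop driving
            return steps
        steps += 1                    # otherwise, keep driving forward
    return steps

# The object's width grows as the robot approaches it:
readings = [40, 55, 80, 110, 150]
print(drive_until_wide_enough(readings, 100))  # stops after 3 drive steps
```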
CenterX and CenterY
These are the center coordinates of the detected object in pixels.
CenterX and CenterY coordinates help with navigation and positioning. The AI Vision Sensor has a resolution of 320 x 240 pixels.
You can see that an object closer to the AI Vision Sensor will have a higher CenterY coordinate than an object that is farther away, since the Y axis starts at 0 at the top of the image.
In this example, because the center of the AI Vision Sensor's view is (160, 120), the robot will turn right until a detected object's centerX coordinate is greater than 150 pixels, but less than 170 pixels.
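The centering check in the example above can be sketched as follows. The 150–170 pixel window comes from the example; everything else is an illustrative assumption, not the real VEXcode API.

```python
# Illustrative sketch: turn until the detected object's centerX falls
# inside a window around the sensor's horizontal center (160 of 320 px).
def is_centered(center_x, low=150, high=170):
    """True when centerX is greater than low and less than high."""
    return low < center_x < high

def turn_until_centered(center_x_readings):
    """Count turn steps (one snapshot each) until a reading is centered."""
    for steps, cx in enumerate(center_x_readings):
        if is_centered(cx):
            return steps
    return len(center_x_readings)

# The object's centerX increases as the robot turns right toward it:
print(turn_until_centered([40, 90, 130, 160]))  # centered after 3 turn steps
```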
Angle
Angle is a property only available for Color Codes and AprilTags. It reports how the detected Color Code or AprilTag is rotated relative to the AI Vision Sensor.
You can tell whether the robot is oriented differently in relation to the Color Code or AprilTag and make navigation decisions accordingly.
For instance, if a Color Code is detected at too steep an angle, the robot may not be able to properly pick up the object it represents.
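A simple angle check like the one described above might look like this. The 10-degree tolerance and the function name are assumptions for illustration, not part of the VEXcode API.

```python
# Illustrative sketch: only attempt a pickup when the detected Color
# Code is close enough to level (within a tolerance, in degrees).
def safe_to_pick_up(angle_degrees, tolerance=10):
    """True when the detected Color Code is within tolerance of 0 degrees."""
    return abs(angle_degrees) <= tolerance

print(safe_to_pick_up(4))   # nearly level: safe to pick up
print(safe_to_pick_up(25))  # tilted too far: reposition first
```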
OriginX and OriginY
OriginX and OriginY are the coordinates of the top-left corner of the detected object in pixels.
OriginX and OriginY coordinates help with navigation and positioning. By combining these coordinates with the object's Width and Height, you can determine the full bounding box of the detected object. This can help with tracking moving objects or navigating between objects.
In this example, a rectangle will be drawn on the Brain using the exact coordinates of its origin, width, and height.
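Combining the origin with width and height gives the object's bounding box, as sketched below. The coordinate values are hypothetical; this is an illustration of the geometry, not the VEXcode drawing API.

```python
# Illustrative sketch: the detection's bounding box runs from the
# top-left origin to (origin + width, origin + height).
def bounding_box(origin_x, origin_y, width, height):
    """Return (left, top, right, bottom) corner coordinates in pixels."""
    return (origin_x, origin_y, origin_x + width, origin_y + height)

# An object whose top-left corner is (60, 80), 100 px wide, 40 px tall:
print(bounding_box(60, 80, 100, 40))  # → (60, 80, 160, 120)
```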
tagID
The tagID is only available for AprilTags. This is the ID number for the specified AprilTag.
Identifying specific AprilTags allows for selective navigation. You can program your robot to move towards certain tags while ignoring others, effectively using them as signposts for automated navigation.
Score
The score property is used when detecting AI Classifications with the AI Vision Sensor.
The confidence score indicates how certain the AI Vision Sensor is about its detection. In this image, it's 99% confident in identifying these four objects' AI Classifications. You can use this score to ensure your robot only focuses on highly confident detections.
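Filtering detections by confidence score, as suggested above, can be sketched like this. The dictionary layout, labels, and 95% threshold are assumptions for illustration only.

```python
# Illustrative sketch: keep only AI Classification detections whose
# confidence score meets a minimum threshold.
def confident_detections(detections, min_score=95):
    """Return the detections at or above min_score percent confidence."""
    return [d for d in detections if d["score"] >= min_score]

seen = [
    {"label": "BuckyBall", "score": 99},
    {"label": "Ring", "score": 62},
]
print(confident_detections(seen))  # only the 99%-confident detection survives
```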
Set Object Item
When an object is detected by the AI Vision Sensor, it's put into an array. By default, the AI Vision Sensor will pull data from the first object in the array, or the object with the index of 1. If your AI Vision Sensor has only detected one object, then that object will be selected by default.
When your AI Vision Sensor has detected multiple objects at once, however, you'll need to use the Set Object Item block to specify which object you want to pull data from.
When multiple objects are detected by the AI Vision Sensor, they are arranged in the array from largest to smallest. That means the largest detected object will always be set to object index 1, and the smallest object will always be set to the highest index.
In this example, two objects have been detected with the Color Signature "Blue". They both will be put in the array when the Take Snapshot block is used.
Here, the object in the front would become object index 1, since it is the largest object, and the smallest object would become object index 2.
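The largest-to-smallest ordering described above can be modeled by sorting detections by area. The object fields are illustrative; note that Blocks index from 1, so index 1 in VEXcode corresponds to index 0 in this Python list.

```python
# Illustrative sketch: detections sorted from largest area to smallest,
# mirroring how the AI Vision Sensor orders its object array.
def order_objects(objects):
    """Sort detections by pixel area, largest first."""
    return sorted(objects, key=lambda o: o["width"] * o["height"], reverse=True)

detected = [
    {"name": "back object", "width": 20, "height": 20},
    {"name": "front object", "width": 60, "height": 60},
]
ordered = order_objects(detected)
print(ordered[0]["name"])  # the larger, closer object becomes object 1
```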
Object Exists
Before pulling any data from a snapshot, it's important to first check that the AI Vision Sensor detected any objects in that snapshot. This is where the Object Exists block comes into play.
This block reports a True or False value depending on whether the last taken snapshot has any objects detected in it.
Always use this block to ensure you're not trying to pull data from a potentially empty snapshot.
For instance, here the robot constantly takes snapshots with the AI Vision Sensor. If it identifies any object with the “Blue” Color Signature, it will drive forward.
If any snapshot does not have the “Blue” Color Signature, the robot will stop moving.
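The drive-or-stop decision above reduces to an emptiness check on the snapshot. In this sketch a snapshot is modeled as a plain list of detections; the names are illustrative, not the real VEXcode API.

```python
# Illustrative sketch of the Object Exists check: drive forward only
# when the last snapshot contains at least one detected object.
def drive_command(snapshot):
    """Return the drivetrain command for the current snapshot."""
    return "drive forward" if len(snapshot) > 0 else "stop"

print(drive_command([{"signature": "Blue"}]))  # object exists: drive forward
print(drive_command([]))                       # empty snapshot: stop
```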
Object Count
Using the Object Count block will allow you to see how many objects of a specific Color Signature the AI Vision Sensor detected in its last snapshot.
Here, we see the AI Vision Sensor has the configured Color Signature “Blue”, and is detecting two objects.
In this code, the AI Vision Sensor would take a snapshot and print “2” on the VEXcode console, since it detects only two objects with the “Blue” Color Signature.
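Counting objects of a given signature, as in the example above, can be sketched as follows. The snapshot layout is an illustrative assumption.

```python
# Illustrative sketch of the Object Count idea: count how many
# detections in the snapshot match a given Color Signature.
def object_count(snapshot, signature):
    """Return the number of detected objects matching the signature."""
    return sum(1 for obj in snapshot if obj["signature"] == signature)

snapshot = [{"signature": "Blue"}, {"signature": "Blue"}, {"signature": "Red"}]
print(object_count(snapshot, "Blue"))  # → 2
```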
Object
The Object block allows you to report a property of your specified object. This lets you use any of the available data pulled from the most recently taken snapshot.
Object properties that can be pulled from taken snapshots are:
- width
- height
- centerX
- centerY
- angle
- originX
- originY
- tagID
- score
Read the "Data Taken From a Snapshot" section of this article for more information on these properties.
Detected AprilTag is
The Detected AprilTag is block is only available when the AprilTag Detection Mode is turned on.
This block will report True or False depending on if the specified object is a certain AprilTag.
When multiple AprilTags are detected in a single snapshot, they are arranged in the array based on their identified ID, not by size.
In this image, three AprilTags are detected with IDs 0, 3, and 9. They will be organized in ascending order of their ID in the array. The object at index 1 would correspond to the AprilTag with ID 0, at index 2 to the AprilTag with ID 3, and at index 3 to the AprilTag with ID 9.
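The ID-based ordering described above can be modeled by a simple sort. This is an illustrative sketch; again, Blocks index from 1 while the Python list below indexes from 0.

```python
# Illustrative sketch: AprilTags are arranged in the array by ascending
# ID, not by size, unlike Color Signature detections.
def order_apriltags(tag_ids):
    """Return the detected AprilTag IDs in the order the array uses."""
    return sorted(tag_ids)

ordered = order_apriltags([9, 0, 3])
print(ordered)  # index 1 → ID 0, index 2 → ID 3, index 3 → ID 9
```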
AI Classification is
The AI Classification is block is only available when the AI Classification Detection Mode is turned on.
This block will report True or False depending on if the specified object is a certain AI Classification.
What AI Classifications can be detected by the AI Vision Sensor varies depending on what model you are using. For more information on what AI Classifications are available and how to enable their detection with the AI Vision Sensor, read this article.