Coding with the AI Vision Sensor in VEXcode EXP Blocks

Make sure you have Color Signatures and Color Codes configured with your AI Vision Sensor so they can be used with your blocks. To learn more about how to configure them, you can read the articles below:

The AI Vision Sensor can also detect AI Classifications and AprilTags. To learn how to enable these detection modes, go here:

To learn more detail about these individual Blocks and how to use them in VEXcode, go to the API site.


Take Snapshot

VEXcode EXP Take Snapshot block that reads Take a AIVision1 snapshot of COL1. There are two dropdowns, one to select the AI Vision Sensor and the second to select the Color Code.

The Take Snapshot block takes a picture of what the AI Vision Sensor is currently seeing and pulls data from that snapshot that can then be used in a project. When a snapshot is taken, you need to specify what type of object the AI Vision Sensor should collect data on:

  • Color Signature
  • Color Code
  • AI Classifications
  • AprilTags

Taking a snapshot will create an array of all of the detected objects that you specified. For instance, if you wanted to detect a "Red" Color Signature, and the AI Vision Sensor detected 3 different red objects, data from all three would be put in the array.

For more information on how to specify between different objects, go to the "Set Object Item" section in this article.
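The snapshot-and-array behavior described above can be sketched in plain Python. This is an illustrative stand-in, not the VEXcode Blocks API; the `Detection` class and `take_snapshot` function are hypothetical names used only to model the idea that every matching object lands in the array.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    signature: str   # the configured Color Signature, e.g. "Red"
    width: int       # bounding-box width in pixels
    height: int      # bounding-box height in pixels

def take_snapshot(scene, signature):
    """Return every detected object that matches the requested signature."""
    return [obj for obj in scene if obj.signature == signature]

# Three objects in view: two "Red" and one "Blue".
scene = [Detection("Red", 80, 78), Detection("Blue", 60, 61), Detection("Red", 40, 39)]

# Asking for "Red" puts both red objects in the array and ignores the blue one.
red_objects = take_snapshot(scene, "Red")
```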

VEXcode EXP Take Snapshot block that reads Take a AIVision2 snapshot of Blue.

In this example, it will only detect objects that match its configured “Blue” Color Signature and nothing else.

Data Taken From a Snapshot

Keep in mind that the AI Vision Sensor will use its last taken snapshot for any Blocks that come after. To make sure you're always getting the most up-to-date information from your AI Vision Sensor, retake your snapshot every time you want to pull data from it. 

Resolution

Diagram of the AI Vision Sensor's resolution. The top left corner is labeled 0, 0, the top right corner is labeled 320, 0, and the bottom left corner is labeled 0, 240. The center of the screen is labeled 160, 120.

Understanding the AI Vision Sensor's resolution is crucial for accurate data interpretation. The sensor has a resolution of 320x240 pixels, with the exact center at coordinates (160, 120).

X-coordinates less than 160 correspond to the left half of the sensor's field of view, while those greater than 160 represent the right half. Similarly, Y-coordinates less than 120 indicate the upper half of the view, and those greater than 120 represent the lower half.
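The coordinate ranges above can be summarized with a small helper. This is a plain-Python sketch for illustration (the function name is hypothetical), using the sensor's 320x240 resolution and its center point at (160, 120).

```python
def describe_position(center_x, center_y):
    """Report which half of the 320x240 view a coordinate falls in."""
    horizontal = "left" if center_x < 160 else "right" if center_x > 160 else "center"
    vertical = "upper" if center_y < 120 else "lower" if center_y > 120 else "center"
    return horizontal, vertical
```

For example, an object at (40, 200) sits in the left, lower part of the view, while one at (300, 30) sits in the right, upper part.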

Go to Understanding the Data in the AI Vision Utility in VEXcode EXP for more information about how objects are measured with the AI Vision Sensor.

Width and Height

This is the width or height of the detected object in pixels.

AI Vision Sensor is shown tracking a Blue Buckyball. The Buckyball has a tracking rectangle around it, and the label above shows that it has a width of 80 pixels and a height of 78 pixels. Red arrows are highlighting the tracking rectangle to demonstrate its width and height.

The width and height measurements help identify different objects. For example, a Buckyball will have a larger height than a Ring.

AI Vision Sensor is shown tracking two Blue Cubes. The Cubes have tracking rectangles around them, and one is much closer to the camera. The closer one has a width of 144 and a height of 113, and the farther one has a width of 73 and a height of 84.

Width and height also indicate an object's distance from the AI Vision Sensor. Smaller measurements usually mean the object is farther away, while larger measurements suggest it's closer.

VEXcode Blocks project in which the robot will approach the object until the width has reached a specific size before stopping. The project begins with a When started block and a Forever loop. The rest of the project is inside of the Forever loop. First, take a AIVision1 snapshot of Blue, then the rest of the project is inside an If block that reads if AIVision1 object exists? Inside this If block there is an If Else block that reads if AIVision1 object width is less than 250 then drive forward, else stop driving.

In this example, the width of the object is used for navigation. The robot will approach the object until the width has reached a specific size before stopping.
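The decision inside that loop can be sketched in plain Python. The function name and the list of widths are illustrative stand-ins: the widths model successive snapshots as the robot nears the object, and 250 pixels is the threshold used in the example project.

```python
def drive_decision(width, threshold=250):
    """Return the drive command the Blocks project would issue for one snapshot."""
    return "drive forward" if width < threshold else "stop"

# Widths grow as the robot approaches; the robot stops once width reaches 250.
snapshot_widths = [80, 140, 210, 260]
commands = [drive_decision(w) for w in snapshot_widths]
```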

CenterX and CenterY

These are the center coordinates of the detected object in pixels.

AI Vision Sensor is shown tracking a Blue Buckyball. The Buckyball has a tracking rectangle around it, and the label above shows that it has an X position of 176 and a Y position of 117. The tracking rectangle's center is highlighted to demonstrate that the position is measured from the center.

CenterX and CenterY coordinates help with navigation and positioning. The AI Vision Sensor has a resolution of 320 x 240 pixels.

AI Vision Sensor is shown tracking two Blue Cubes. The Cubes have tracking rectangles around them, and one is much closer to the camera. The closer one has a Y position of 184, and the farther one has a Y position of 70.

You can see that an object closer to the AI Vision Sensor will have a higher CenterY coordinate (it appears lower in the view) than an object that is farther away.

VEXcode Blocks project in which the robot will turn towards a detected object until it is in the center of the AI Vision Sensor's view. The project begins with a When started block and a Forever loop. The rest of the project is inside of the Forever loop. First, take a AIVision1 snapshot of Blue, then the rest of the project is inside an If block that reads if AIVision1 object exists? Inside this If block there is an If Else block that reads if AIVision1 object centerX greater than 150 and AIVision1 object centerX less than 170, then stop driving, else turn right.

In this example, because the center of the AI Vision Sensor's view is (160, 120), the robot will turn right until a detected object's centerX coordinate is greater than 150 pixels, but less than 170 pixels.
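The centering check can be sketched in plain Python (the function name is an illustrative stand-in): the robot keeps turning right until the object's centerX lands inside the 150-170 pixel window around the 160-pixel center of the view.

```python
def turn_decision(center_x):
    """Stop once the object is roughly centered; otherwise keep turning right."""
    return "stop" if 150 < center_x < 170 else "turn right"
```

So an object at centerX 40 or 300 keeps the robot turning, while one at centerX 160 stops it.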

Angle

Animation of a red square and a green square being rotated together to demonstrate the 360 degrees of an angle value.

Angle is a property only available for Color Codes and AprilTags. It reports how the detected Color Code or AprilTag is oriented.

AI Vision Sensor is shown tracking a Color Code of Green then Blue. The video feed shows a Green Cube stacked on top of a Blue Cube. The Color Code's angle value is highlighted and reads 87 degrees, which indicates that the Color Code is oriented vertically.

You can see if the robot is oriented differently in relation to the Color Code or AprilTag and make navigation decisions accordingly.

AI Vision Sensor is shown tracking a Color Code of Green then Blue. The video feed shows a Green Cube sitting next to a Blue Cube, but they are at an awkward angle compared to the sensor. The Color Code's angle value is highlighted and reads 0 degrees, which indicates that the Color Code's angle can't be read.

For instance, if a Color Code isn't detected at a proper angle, then the object it represents may not be able to be picked up properly by the robot.

OriginX and OriginY

OriginX and OriginY are the coordinates of the top-left corner of the detected object in pixels.

AI Vision Sensor is shown tracking a Blue Buckyball. The Buckyball has a tracking rectangle around it, and the label above shows that it has an X position of 176 and a Y position of 117. The tracking rectangle's upper left corner is highlighted to demonstrate that the origin position is measured from its top left corner.

OriginX and OriginY coordinates help with navigation and positioning. By combining this coordinate with the object's Width and Height, you can determine the size of the object's bounding box. This can help with tracking moving objects or navigating between objects.

VEXcode Blocks project in which the robot will draw a detected object onto its screen as a rectangle. The project begins with a When started block and a Forever loop. The rest of the project is inside of the Forever loop. First, take a AIVision1 snapshot of Blue, then the rest of the project is inside an If block that reads if AIVision1 object exists? Inside this If block there is a Draw rectangle block that reads draw rectangle AIVision1 object originX, AIVision1 object originY, AIVision1 object width, AIVision1 object height on Brain.

In this example, a rectangle will be drawn on the Brain using the exact coordinates of its origin, width, and height.
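The bounding-box math behind that rectangle can be sketched in plain Python (the function name is an illustrative stand-in): the origin is the top-left corner, and adding the width and height gives the remaining corners.

```python
def bounding_box(origin_x, origin_y, width, height):
    """Return the four corner coordinates of the object's bounding box."""
    return {
        "top_left": (origin_x, origin_y),
        "top_right": (origin_x + width, origin_y),
        "bottom_left": (origin_x, origin_y + height),
        "bottom_right": (origin_x + width, origin_y + height),
    }

# Example values only: an 80x78 pixel object whose origin is at (136, 78).
box = bounding_box(136, 78, 80, 78)
```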

tagID

The tagID is only available for AprilTags. This is the ID number for the specified AprilTag.

Three AprilTags are being tracked by the AI Vision Utility. Each tag is identified, located, and outlined, indicating its tracking by the system. The AprilTag IDs in this example read 0, 3, and 9.

Identifying specific AprilTags allows for selective navigation. You can program your robot to move towards certain tags while ignoring others, effectively using them as signposts for automated navigation.

Score

The score property is used when detecting AI Classifications with the AI Vision Sensor.

Four objects are being tracked by the AI Vision utility, two BuckyBalls and two Rings. Each object is identified, located, and outlined, indicating its tracking by the system. The utility also lists each object's AI Classification score, in this example each score reads 99%.

The confidence score indicates how certain the AI Vision Sensor is about its detection. In this image, it's 99% confident in identifying these four objects' AI Classifications. You can use this score to ensure your robot only focuses on highly confident detections.
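Filtering on the score can be sketched in plain Python. This is an illustrative stand-in, not the VEXcode API, and the 95 threshold is an example value, not a library default.

```python
def confident_detections(detections, minimum_score=95):
    """Keep only detections whose confidence score meets the threshold."""
    return [d for d in detections if d["score"] >= minimum_score]

# Example detections with hypothetical classification names and scores.
detections = [
    {"classification": "BuckyBall", "score": 99},
    {"classification": "Ring", "score": 99},
    {"classification": "Ring", "score": 62},
]

# Only the two high-confidence detections survive the filter.
trusted = confident_detections(detections)
```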


Set Object Item

When an object is detected by the AI Vision Sensor, it's put into an array. By default, the AI Vision Sensor will pull data from the first object in the array, or the object with the index of 1. If your AI Vision Sensor has only detected one object, then that object will be selected by default.

When your AI Vision Sensor has detected multiple objects at once, however, you'll need to use the Set Object Item block to specify which object you want to pull data from.

VEXcode EXP Set object item block that reads Set AIVision1 object item to 1. There is a dropdown to select the AI Vision Sensor, and a text field to enter the object index.

When multiple objects are detected by the AI Vision Sensor, they are arranged in the array from largest to smallest. That means that the largest detected object will always be set to object index 1, and the smallest object will always be set to the highest number.

AI Vision Sensor is shown tracking two Blue Cubes. The Cubes have tracking rectangles around them, and one is much closer to the camera. The closer one has a width of 136, and the farther one has a width of 78.

In this example, two objects have been detected with the Color Signature "Blue". They both will be put in the array when the Take Snapshot block is used.

AI Vision Sensor is shown tracking two Blue Cubes. The Cubes have tracking rectangles around them, and one is much closer to the camera. The closer cube is labeled 1 and the farther cube is labeled 2.

Here, the object in the front would become object index 1, since it is the largest object, and the smallest object would become object index 2.
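The ordering rule can be sketched in plain Python. This is an illustrative stand-in for the behavior, not the VEXcode API; the heights below are example values (the image only gives the widths, 136 and 78), and the 1-based index mirrors the Set Object Item block.

```python
def order_detections(detections):
    """Sort (width, height) pairs by bounding-box area, largest first."""
    return sorted(detections, key=lambda d: d[0] * d[1], reverse=True)

def get_object_item(detections, index):
    """Mimic Set Object Item's 1-based indexing into the sorted array."""
    return order_detections(detections)[index - 1]

# Farther cube listed first here, but sorting still puts the closer,
# larger cube at object item 1.
cubes = [(78, 84), (136, 113)]
```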


Object Exists

Before pulling any data from a snapshot, always check that the AI Vision Sensor has actually detected objects in that snapshot. This is where the Object Exists block comes into play.

VEXcode EXP Object exists block that reads AIVision1 object exists? There is a dropdown to select the AI Vision Sensor.

This block reports True or False depending on whether the last taken snapshot has any objects detected in it.

This block should always be used to ensure you're not trying to pull any data from a potentially empty snapshot.

VEXcode Blocks project in which the robot will drive towards a detected Blue object. The project begins with a When started block and a Forever loop. The rest of the project is inside of the Forever loop. First, take a AIVision2 snapshot of Blue, then an If Else block that reads if AIVision2 object exists then drive forward, else stop driving.

For instance, here the robot will be constantly taking snapshots with the AI Vision Sensor. If it identifies any object with the “Blue” Color Signature, it will drive forward.


If any snapshot does not have the “Blue” Color Signature, the robot will stop moving.
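That guard pattern can be sketched in plain Python (the function name is an illustrative stand-in): the snapshot is modeled as the list of detections, and the exists check is simply whether that list is non-empty.

```python
def drive_command(snapshot):
    """snapshot is the list of detections from the last Take Snapshot."""
    object_exists = len(snapshot) > 0
    return "drive forward" if object_exists else "stop driving"
```

A snapshot with at least one "Blue" detection drives the robot forward; an empty snapshot stops it.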


Object Count

VEXcode EXP Object count block that reads AIVision1 object count. There is a dropdown to select the AI Vision Sensor.

Using the Object count block allows you to see how many objects of a specific Color Signature the AI Vision Sensor detected in its last snapshot.

AI Vision Sensor is shown tracking two Blue Cubes. The Cubes have tracking rectangles around them, and one is much closer to the camera.

Here, we see the AI Vision Sensor has the configured Color Signature “Blue”, and is detecting two objects.

VEXcode Blocks project in which the robot will print the number of detected Blue objects to the Print Console. The project begins with a When started block and a Forever loop. The rest of the project is inside of the Forever loop. First, take a AIVision2 snapshot of Blue, clear all rows on Console, and then set cursor to next row on Console. Next is an If block that reads if AIVision2 object exists then print AIVision2 object count on Console and set cursor to next row. Outside of the If block, there is a Wait block set to wait for 2 seconds.

The Print Console output of the previous VEXcode Blocks project with a printed message reading 2.

In this code, the AI Vision Sensor would take a snapshot and print “2” on the VEXcode console, since it only detects two “Blue” Color Signatures.
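In plain-Python terms (an illustrative stand-in, not the VEXcode API), the object count is just the length of the snapshot's detection array:

```python
def object_count(snapshot):
    """Number of detections in the last snapshot for one Color Signature."""
    return len(snapshot)

# Two "Blue" detections in the snapshot, so the count printed is 2.
blue_snapshot = ["cube_near", "cube_far"]
```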


Object

VEXcode EXP AI Vision object block that reads AIVision1 object width. There is a dropdown to select the AI Vision Sensor, and an opened dropdown menu to select the object's attribute for sensing. The list of options reads width, height, centerX, centerY, angle, originX, originY, tagID, and score.

The Object block allows you to report the property of your specified object. This lets you use any of the available data pulled from the most recently taken snapshot.

Object properties that can be pulled from taken snapshots are:

  • width
  • height
  • centerX
  • centerY
  • angle
  • originX
  • originY
  • tagID
  • score

Read the "Data Taken From a Snapshot" section of this article for more information on these properties.


Detected AprilTag is

VEXcode EXP Detected AprilTag is block that reads AIVision1 detected AprilTag is 1? There is a dropdown to select the AI Vision Sensor.

The Detected AprilTag is block is only available when the AprilTag Detection Mode is turned on.

This block reports True or False depending on whether the specified object is a certain AprilTag.

Three AprilTags are being tracked by the AI Vision Utility. Each tag is identified, located, and outlined, indicating its tracking by the system. The AprilTag IDs in this example read 0, 3, and 9.

When multiple AprilTags are detected in a single snapshot, they are arranged in the array based on their identified ID, not by size.

In this image, three AprilTags are detected with IDs 0, 3, and 9. They will be organized in ascending order of their ID in the array. The object at index 1 would correspond to the AprilTag with ID 0, at index 2 to the AprilTag with ID 3, and at index 3 to the AprilTag with ID 9.
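The ID-based ordering can be sketched in plain Python (the function names are illustrative stand-ins): detected tags are sorted by ID in ascending order, and the 1-based index mirrors the Set Object Item block.

```python
def order_apriltags(tag_ids):
    """AprilTags are ordered by ID ascending, not by size."""
    return sorted(tag_ids)

def tag_at_item(tag_ids, index):
    """1-based index into the ID-sorted array, matching Set Object Item."""
    return order_apriltags(tag_ids)[index - 1]

# The three tags from the image, in whatever order they were detected.
detected_ids = [9, 0, 3]
```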

For more information on what AprilTags are and how to enable their detection with the AI Vision Sensor, read this article.


AI Classification is

VEXcode EXP AI Classification is block that reads AIVision1 AI classification is BlueBall? There is a dropdown to select the AI Vision Sensor, and another dropdown menu to select the target AI Classification object.

The AI Classification is block is only available when the AI Classification Detection Mode is turned on.


This block reports True or False depending on whether the specified object is a certain AI Classification.

What AI Classifications can be detected by the AI Vision Sensor varies depending on what model you are using. For more information on what AI Classifications are available and how to enable their detection with the AI Vision Sensor, read this article.

For more information, help, and tips, check out the many resources at VEX Professional Development Plus
