You can use the AI Vision Sensor's AI Classifications to identify game objects (Blocks) on the V5RC 25-26 Push Back Playground in VEXcode VR.
If you are familiar with the physical AI Vision Sensor, you may know that it can also report information about AprilTags and configured Color Signatures. Because VEXcode VR requires no robot configuration and the V5RC 25-26 Push Back Field contains no AprilTags, the virtual sensor reports information only on the pre-configured Game Elements: Red Blocks and Blue Blocks.
How the AI Vision Sensor Works in V5RC Push Back in VEXcode VR
The AI Vision Sensor is a camera that can automatically identify and differentiate between Game Elements, allowing your robot to autonomously orient itself toward specific objects. The sensor comes pre-trained to recognize this year's V5RC Push Back Game Elements, so it will automatically detect Blocks.
To detect these objects, the AI Vision Sensor is mounted on the front of the robot (as shown here).
Gathering Data from the AI Vision Sensor
You can view data being reported by the AI Vision Sensor through the Snapshot Window, Monitor Console, or Print Console in VEXcode VR.
To view the Snapshot Window and see the data the AI Vision Sensor is reporting, select the AI Vision Sensor button.
Select the AI Vision Sensor button again to hide the Snapshot Window.
The Snapshot Window appears in the upper left corner of the Playground Window and identifies all Game Elements within the AI Vision Sensor's field of view.
For each detected object, it displays key data including classification, Center X and Center Y coordinates, and width and height.
Explanations of the types of data reported by the AI Vision Sensor, including their related VEXcode commands, can be found in the Blocks and Python VEX API.
Those commands can be used with the Monitor and Print Consoles to help visualize the data from each snapshot taken while your project runs. Learn more about using the Monitor and Print Consoles with Blocks, with Python, or using the Print Console.
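For example, a short Python sketch like the one below could take a snapshot and print each detected object's data to the Print Console. The command names are modeled on the V5 AiVision Python API; the device name `ai_vision` and the constant `AiVision.ALL_AIOBJS` are assumptions, not confirmed VEXcode VR names.

```python
# Sketch: print AI Vision Sensor data to the Print Console.
# Assumption: the virtual sensor mirrors the V5 AiVision Python API,
# where take_snapshot() returns a list of detected objects; the device
# name "ai_vision" and AiVision.ALL_AIOBJS are assumed names.
objects = ai_vision.take_snapshot(AiVision.ALL_AIOBJS)
for obj in objects:
    # Print the same fields the Snapshot Window displays.
    brain.print("Center X: %d, Center Y: %d" % (obj.centerX, obj.centerY))
    brain.new_line()
    brain.print("Width: %d, Height: %d" % (obj.width, obj.height))
    brain.new_line()
```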
Using the AI Vision Sensor to Help Dex Identify Objects
You can use the AI Vision Sensor to help Dex navigate to specific objects by interpreting the sensor's data. With this technology, Dex can target and drive to Game Elements to pick them up.
The AI Vision Sensor only reports data from its most recent snapshot, so Dex needs to take new snapshots continually while driving to keep that data current.
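A minimal sketch of that idea in Python appears below: the loop takes a fresh snapshot on every pass and steers based on the most recent Center X value. The device name `ai_vision`, the constant `AiVision.BLUE_BLOCK`, and the 320-pixel image width (center near x = 160) are assumptions modeled on the V5 AiVision API.

```python
# Sketch: a snapshot-update loop that keeps Dex pointed at a Blue Block.
# Assumptions (modeled on the V5 AiVision Python API): device name
# "ai_vision", constant AiVision.BLUE_BLOCK, and a 320-pixel-wide
# image, so the horizontal center is near x = 160.
drivetrain.set_turn_velocity(30, PERCENT)
drivetrain.set_drive_velocity(30, PERCENT)

while True:
    # Take a fresh snapshot on every pass so the data stays current.
    objects = ai_vision.take_snapshot(AiVision.BLUE_BLOCK)
    if len(objects) > 0:
        block = objects[0]  # first reported Block
        if block.centerX < 140:
            drivetrain.turn(LEFT)      # Block is left of center
        elif block.centerX > 180:
            drivetrain.turn(RIGHT)     # Block is right of center
        else:
            drivetrain.drive(FORWARD)  # centered, so drive toward it
    else:
        drivetrain.stop()              # nothing detected this snapshot
    wait(20, MSEC)
```

Widening the 140-180 dead band reduces oscillation while turning; narrowing it makes the approach more precise.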
This example project shows how Dex can use the AI Vision Sensor to autonomously orient to a Block and pick it up.
Learn about accessing and running example projects with Blocks or with Python.
Using Dex's Sensors Together
The AI Vision Sensor can be combined with other sensors on the robot to complete tasks around the Field. A full list of the sensors on the virtual version of Dex can be found on this page of the VEX API. These are just a few ideas to help you get started with your code:
- Use the AI Vision Sensor to find and target a Game Element, then use the GPS Sensor to drive to a goal (see the sketch after this list).
- Use the AI Vision Sensor to find and target multiple Game Elements, then use the Optical Sensor to determine the color of the Block in the conveyor before releasing it.
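The first idea could look something like the sketch below. The GPS command names (`gps.x_position(MM)`, `gps.y_position(MM)`) follow the V5 GPS Python API, and the goal coordinates and heading convention (0 degrees along +Y, clockwise positive) are assumptions for illustration, not actual Push Back Field values.

```python
import math

# Hypothetical goal location on the Field, in mm (placeholder values).
GOAL_X = 900
GOAL_Y = 0

def drive_to_goal():
    # Vector from Dex's current GPS position to the goal.
    dx = GOAL_X - gps.x_position(MM)
    dy = GOAL_Y - gps.y_position(MM)
    # Heading Dex should face, assuming 0 degrees = +Y, clockwise positive.
    heading_to_goal = math.degrees(math.atan2(dx, dy)) % 360
    drivetrain.turn_to_heading(heading_to_goal, DEGREES)
    # Drive the straight-line distance to the goal.
    drivetrain.drive_for(FORWARD, math.sqrt(dx * dx + dy * dy), MM)
```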
Remember that additional information about specific commands, the V5RC Push Back Field, and the Hero Bot, Dex, can be found in the VEX API and in the built-in Help in VEXcode VR (Blocks and Python).