Using the AI Vision Sensor in the V5RC High Stakes Playground

You can use the AI Vision Sensor to help you identify game objects (Rings and Mobile Goals) on the VEX V5 Robotics Competition (V5RC) High Stakes Playground in VEXcode VR using AI Classifications.

If you are familiar with the physical version of the AI Vision Sensor, you will know that it can also report information about AprilTags and configured Color Signatures. Because no robot configuration is needed in VEXcode VR and no AprilTags are present on the V5RC High Stakes Field, the virtual sensor reports information only on the pre-configured Game Elements: Red Rings, Blue Rings, and Mobile Goals.


How the AI Vision Sensor Works in V5RC High Stakes in VEXcode VR

[Image: AI Vision Sensor mounted on Axel]

The AI Vision Sensor is a camera that can automatically differentiate between Game Elements, allowing the robot to orient itself towards specific Game Elements autonomously. The camera has been trained on the Game Elements for this year's V5RC game, High Stakes, so Rings and Mobile Goals are automatically detected.

To detect these objects, the AI Vision Sensor is mounted on the front of the robot (as shown here).


Gathering Data from the AI Vision Sensor

You can view data being reported by the AI Vision Sensor through the Snapshot Window, Monitor Console, or Print Console in VEXcode VR.

Note: The Arm of Axel must be raised to clear the field of view of the AI Vision Sensor. If the Arm is not raised, it will take up a large section of the center of the camera.

[Image: AI Vision Sensor button in the Playground Window]

To view the Snapshot Window and see the data the AI Vision Sensor is reporting, select the AI Vision Sensor button. 

Select the AI Vision Sensor button again to hide the Snapshot Window. 

[Image: Snapshot Window]

The Snapshot Window will appear in the upper left-hand corner of the Playground Window. It identifies every Game Element in the AI Vision Sensor's field of view, along with the data reported for each.

Data printed in the Snapshot Window for each object includes the Center X, Center Y, Width, and Height, as well as the Classification of the object.

Explanations of the types of data reported by the AI Vision Sensor, including their related VEXcode commands, can be found in the VEX API. Both Blocks-specific and Python-specific pages are available for reference. 

Those commands can be used in the Monitor and/or Print Consoles to help visualize the data from each snapshot that is taken while your project is running. Learn more about using the Monitor and Print Consoles with these articles.
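To make the reported fields concrete, here is a minimal sketch of formatting snapshot data the way the Print Console might display it. The field names, classifications, and values below are illustrative stand-ins (the real sensor and its commands exist only inside VEXcode VR, so a plain Python mock is used here):

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    # The fields the Snapshot Window reports for each detected object
    classification: str   # e.g. "Red Ring", "Blue Ring", "Mobile Goal"
    center_x: int         # horizontal center of the bounding box, in pixels
    center_y: int         # vertical center of the bounding box, in pixels
    width: int            # bounding-box width, in pixels
    height: int           # bounding-box height, in pixels

def format_snapshot(objects):
    """Format one line per detected object, similar to a Print Console readout."""
    return [
        f"{obj.classification}: centerX={obj.center_x} centerY={obj.center_y} "
        f"width={obj.width} height={obj.height}"
        for obj in objects
    ]

# Example snapshot with two detected Game Elements (made-up values)
snapshot = [
    DetectedObject("Red Ring", 160, 120, 52, 30),
    DetectedObject("Mobile Goal", 90, 140, 80, 95),
]
for line in format_snapshot(snapshot):
    print(line)
```

In a real project, the same idea applies: take a snapshot, then print each detected object's fields so you can watch the values change as the robot moves.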


Using the AI Vision Sensor to Help Axel Identify Objects

You can use the AI Vision Sensor to help Axel navigate to specific objects by using your understanding of the data reported by the sensor. Using the AI Vision Sensor, Axel can target and drive to a Game Element in order to pick the object up.

The AI Vision Sensor only reports data from its most recent snapshot, so Axel must take new snapshots continuously while driving.

[Image: Example project icon]

In this example project, Axel will use the AI Vision Sensor to determine if a Red Ring is in front of it, turn until the Red Ring's Center X is less than 150, then drive forward to the ring. To know when to stop driving, the project uses the AI Vision Sensor to measure the width of the object in the snapshot: once the width is large enough, the robot is within range to pick up the Red Ring.
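The turn-then-drive logic described above can be sketched in plain Python. The drivetrain and snapshot commands are simulated here (each snapshot is reduced to the Red Ring's Center X and Width), and the threshold of 150 comes from the example project, while the pickup width of 90 is an assumed illustrative value:

```python
TARGET_CENTER_X = 150   # turn until the Red Ring's Center X drops below this
PICKUP_WIDTH = 90       # assume the ring is in range once its width reaches this

def aim_and_approach(snapshots):
    """Walk through a sequence of snapshots and return the actions taken.

    Each snapshot is the (center_x, width) the sensor would report for the
    Red Ring after the previous action, mirroring how a real project must
    take a fresh snapshot on every loop iteration.
    """
    actions = []
    for center_x, width in snapshots:
        if center_x >= TARGET_CENTER_X:
            actions.append("turn")       # ring not yet centered: keep turning
        elif width < PICKUP_WIDTH:
            actions.append("drive")      # centered but far: drive closer
        else:
            actions.append("pick up")    # wide enough: within pickup range
            break
    return actions

# Simulated run: the ring drifts toward center, then grows as Axel approaches
run = [(240, 40), (190, 40), (140, 40), (140, 65), (140, 92)]
print(aim_and_approach(run))   # ['turn', 'turn', 'drive', 'drive', 'pick up']
```

The key pattern is that every decision uses a fresh snapshot, matching the note above that the sensor only reports its most recent one.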

Learn about accessing and running example projects with these articles:


Using Axel's Sensors Together

The AI Vision Sensor can be combined with other sensors on the robot to complete tasks around the field. A full list of the sensors on the virtual version of Axel can be found on this page of the VEX API. These are just a few ideas to help you get started with your code.

  • Use the AI Vision Sensor to find and target a Game Element, then use the Front Distance Sensor to drive until the object is close to the robot.
  • Use the AI Vision Sensor to find and navigate to a Mobile Goal, then use the GPS Sensor to move the Mobile Goal into the corners of the Field.
  • Use the AI Vision Sensor to find and navigate to a Red Ring and Mobile Goal, then use the Rotation Sensor to position the Pusher and place the Ring on the Goal. 
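The first idea above can be sketched as a simple loop: aim with the AI Vision Sensor, then drive until the Front Distance Sensor reports the object is close. The sensor calls are simulated with plain values, and the 100 mm stopping distance is an assumed example threshold, not a value from the article:

```python
STOP_DISTANCE_MM = 100  # assumed threshold: stop once the object is this close

def approach_with_distance(distance_readings):
    """Consume simulated Front Distance Sensor readings, one per drive step,
    until the object is closer than STOP_DISTANCE_MM; return steps driven."""
    steps = 0
    for distance_mm in distance_readings:
        if distance_mm < STOP_DISTANCE_MM:
            break           # close enough: stop driving
        steps += 1          # otherwise, drive forward one more step
    return steps

# Object closing in as Axel drives toward it (made-up readings, in mm)
readings = [600, 420, 250, 130, 80]
print(approach_with_distance(readings))
```

The same structure extends to the other ideas: one sensor handles targeting, while a second sensor (Distance, GPS, or Rotation) decides when the task step is complete.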

Remember that additional information about specific commands, the V5RC High Stakes Field, and the Hero Bot, Axel, can be found in the VEX API and in the built-in Help in VEXcode VR (Blocks and Python).

 

For more information, help, and tips, check out the many resources at VEX Professional Development Plus.
