Accessing the Dashboard of the VEX AI Intel Camera

Before beginning these steps, ensure that your setup meets the following criteria:

  • The VEX AI Intel Camera is plugged into the Jetson.
  • The VEX GPS Sensor is plugged into the Jetson and the Brain.
  • The SD card is inserted in the Jetson.
  • The Jetson was rebooted after all devices were plugged in (if the Jetson was powered on while devices were being connected).
  • The Intel Wi-Fi antennas are installed.

Accessing the Dashboard of the VEX AI Intel Camera gives you insight into where the robot is on the field as well as what game pieces are in front of it.

Wi-Fi settings screen with the ‘VEX_AI’ network selected.

To begin, open the Wi-Fi settings on your computer and select ‘VEX_AI.’

Computer prompt asking for a password to join the ‘VEX_AI’ Wi-Fi network.

A password will be required.

Wi-Fi connection window showing the password ‘vexrobotics’ entered for the ‘VEX_AI’ network.

The password is ‘vexrobotics.’

Web browser address bar showing the VEX AI dashboard URL 10.42.0.1:3000.

Once connected, open your preferred internet browser and go to the URL 10.42.0.1:3000. This takes you to the web server hosted at the Jetson’s IP address on port 3000.
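If you prefer to verify the connection from a script before opening a browser, a minimal sketch like the following can build and probe the dashboard URL. The IP address and port come from this article; the function name `dashboard_url` is illustrative, not part of any VEX software.

```python
# Minimal reachability check for the VEX AI dashboard.
# Assumed defaults from this article: Jetson hotspot IP 10.42.0.1, port 3000.
from urllib.request import urlopen
from urllib.error import URLError

JETSON_IP = "10.42.0.1"
DASHBOARD_PORT = 3000

def dashboard_url(ip: str = JETSON_IP, port: int = DASHBOARD_PORT) -> str:
    """Build the dashboard URL that the browser should open."""
    return f"http://{ip}:{port}"

if __name__ == "__main__":
    url = dashboard_url()
    try:
        # A short timeout keeps the check quick if the VEX_AI hotspot
        # has not been joined yet.
        with urlopen(url, timeout=2) as resp:
            print(f"Dashboard reachable at {url} (HTTP {resp.status})")
    except (URLError, OSError):
        print(f"Could not reach {url} -- check the VEX_AI Wi-Fi connection")
```

Running this while connected to the ‘VEX_AI’ network should report the dashboard as reachable; otherwise it prints a hint to check the Wi-Fi connection.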

VEX AI dashboard showing a field map with robot position and a live camera view detecting game objects.

On the web page, you will see the camera view from the Intel camera and information about the position of the robot. As the robot moves, the feed from the camera will update in real time. The robot’s position on the field will also reflect the GPS position and heading.

Field diagram showing X and Y coordinate axes and reference position of X0 Y0 at the center of the field.

Field map with compass overlay showing robot heading in degrees around the field.

On the Map view, the robot’s position and heading are shown along with an estimated Field of View from the camera’s perspective. A compass shows the heading of the robot. The images above show how the X and Y grid is laid out on the field and how the heading is displayed (in degrees) relative to the field setup.

Dashboard settings panel with options for compass, field of view, position tracking, and camera and GPS offsets.

You can adjust the settings of the VEX AI Web Dashboard by clicking the gear icon in the bottom left corner. This brings up the settings panel, where you can change the visual renderings on the Dashboard as well as the offsets for the GPS and VEX AI Intel Camera.

  • “Show Compass” toggles the compass overlay that displays the robot’s heading in degrees.
  • “Show Fog” toggles the FOV (Field of View) overlay, which highlights the estimated view from the VEX AI Intel Camera and shades the remainder of the field.
  • “Show X Y Position Tracks” toggles the X and Y lines that intersect at the robot’s position (off by default).

The Camera Offset and GPS Offset are the offsets of those devices from the center of your robot, as you define it. They affect where the Jetson believes it is positioned on the field and where it estimates detected objects are. NOTE: These offsets only take effect on the Jetson. They will not automatically update on the VEX V5 Brain.
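To see why these offsets matter, consider how a sensor reading can be translated to the robot’s center: the mounting offset, expressed in the robot’s own frame, must be rotated by the robot’s heading before it is removed from the reported position. The sketch below is illustrative only; the function name and the heading convention (degrees, measured as in the field diagram above) are assumptions, not the Jetson’s actual implementation.

```python
import math

def apply_offset(gps_x, gps_y, heading_deg, offset_x, offset_y):
    """Translate a sensor reading to the robot's center (illustrative sketch).

    offset_x / offset_y are the sensor's position relative to the robot
    center, in the robot's own frame (meters). The offset is rotated into
    field coordinates using the robot's heading, then subtracted from the
    reported position.
    """
    h = math.radians(heading_deg)
    # Rotate the body-frame offset into the field frame.
    field_dx = offset_x * math.cos(h) - offset_y * math.sin(h)
    field_dy = offset_x * math.sin(h) + offset_y * math.cos(h)
    return gps_x - field_dx, gps_y - field_dy

# Example: a sensor mounted 0.1 m ahead of the robot center, heading 0 deg.
# The robot center sits 0.1 m behind the reported position along X.
center = apply_offset(1.0, 0.5, 0.0, 0.1, 0.0)
```

The same rotation applies to the camera offset, which is why object position estimates also depend on these values.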

The Socket IP and Socket Port default to the correct values when your Jetson is running its default hotspot. If you want to connect your Jetson to an external Wi-Fi network, however, these fields let you change the IP address of the websocket where the Jetson camera stream is hosted. Be careful when changing this setting and make sure you understand what it does.


Camera view with detected game objects labeled by color, position coordinates, and distance.

Close up of a detected Green Triball game object. Labels highlight the object’s classification, location, and distance data reported by the video feed.

On the video feed, the detected objects’ information is overlaid on the video. Each object that is detected by the system will have the following information:

  1. Field Map location of the center of the object (X,Y). The location is referenced from the center of the bounding box of the object and is the estimated position of the object on the field. It accounts for the offsets from both your GPS Sensor and VEX AI Intel Camera.
  2. Distance from the robot to the object in meters. This distance is from the AI Depth camera to the object.
  3. Width and Height of the detected object in pixels. This is displayed as the box that is drawn over the object on the image.
  4. Classification of the object. The system will classify detected objects as a Green Triball, a Blue Triball, or a Red Triball. The classification is displayed above the bounding box drawn over the object, and by the color of the bounding box itself.

Note: The VEX AI model may mistake the bases of the Red and Blue Alliance Goals and the Match Load Zones for Triballs.
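The distance and position data above can be combined with the robot’s pose to place a detection on the field. The sketch below makes a simplifying assumption, labeled in the code, that the object lies directly along the robot’s heading; the dashboard itself reports a per-object (X, Y) that also accounts for the bounding-box bearing and the sensor offsets, so treat this as an illustration of the geometry rather than the system’s actual calculation.

```python
import math

def object_field_position(robot_x, robot_y, heading_deg, distance_m):
    """Estimate a detected object's field position (illustrative sketch).

    Simplifying assumption: the object lies directly along the robot's
    heading at the reported depth-camera distance. Heading is in degrees,
    using the field axes shown in the diagrams above.
    """
    h = math.radians(heading_deg)
    return (robot_x + distance_m * math.cos(h),
            robot_y + distance_m * math.sin(h))

# Example: robot at field center, heading 0 deg, object 1 m ahead.
pos = object_field_position(0.0, 0.0, 0.0, 1.0)
```

An object detected off-center in the image would additionally require the bearing from the bounding-box position, which is why the dashboard’s reported (X, Y) is the value to trust.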


For more information, help, and tips, check out the many resources at VEX Professional Development Plus.

Last Updated: