A competitive evaluation platform for multi-agent reinforcement learning.
TL;DR
- FastAPI-based evaluation server for multi-agent racing on the MetaDrive simulator
- Students upload trained agents and compete against all opponents, with a live leaderboard and match replays.
- Built for UCLA CS260R (Reinforcement Learning) — Winter 2026 competition completed
Features
⚡ Automated Matching
Each uploaded agent automatically races against all existing opponents; results refresh on the leaderboard.
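The fan-out behind automated matching can be sketched as one evaluation job per (opponent, track) pair. The function and job-record fields below are hypothetical, not the platform's actual API:

```python
def schedule_matches(new_agent_id, existing_agent_ids, tracks):
    """Fan a new upload out into one job per opponent per track.

    Hypothetical sketch: the real server presumably enqueues richer
    job records (model paths, seeds, replay settings, ...)."""
    return [
        {"agents": (new_agent_id, opponent), "track": track}
        for opponent in existing_agent_ids
        for track in tracks
    ]
```

With three existing opponents and the platform's four tracks, one upload would yield twelve matches.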
🏆 Live Leaderboard
Real-time rankings by win rate, Elo rating, and route-completion statistics.
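For reference, a standard Elo update after a decisive match looks like the sketch below. This is the generic formula, not the platform's exact implementation; its K-factor and tie handling are not specified here.

```python
def elo_update(winner_rating, loser_rating, k=32):
    """Standard Elo: shift both ratings by K times the upset probability.

    `expected` is the winner's pre-match win probability; a favored
    winner therefore gains less than an underdog winner."""
    expected = 1.0 / (1.0 + 10 ** ((loser_rating - winner_rating) / 400.0))
    delta = k * (1.0 - expected)
    return winner_rating + delta, loser_rating - delta
```

Two evenly matched 1200-rated agents exchange exactly K/2 = 16 points.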
🎥 Match Replays
Bird's-eye-view and 3D camera replay videos for every episode.
🚀 GPU-Accelerated Evaluation
Runs model inference on the GPU, with concurrent evaluation across multiple GPUs.
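One common way to spread evaluation jobs over several devices is a fixed pool of workers with jobs assigned round-robin to GPUs. The device list and `evaluate` body below are placeholders, not the platform's code:

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import cycle

GPUS = ["cuda:0", "cuda:1"]  # hypothetical device list

def evaluate(job, device):
    """Placeholder: a real worker would run model inference on `device`."""
    return (job, device)

def evaluate_all(jobs):
    """One worker per GPU; jobs are pinned to devices round-robin."""
    with ThreadPoolExecutor(max_workers=len(GPUS)) as pool:
        futures = [pool.submit(evaluate, job, dev)
                   for job, dev in zip(jobs, cycle(GPUS))]
        return [f.result() for f in futures]
```

Keeping one worker per device avoids oversubscribing GPU memory when several evaluations run concurrently.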
🔒 Token Authentication
Students use unique tokens to upload agents and review their own match history.
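The token check can be sketched in plain Python; the token table and names here are made up, and the real server presumably validates tokens against its database inside a FastAPI dependency:

```python
import hmac

# Hypothetical token table; a real server would store tokens (ideally
# hashed) in its database, keyed per student.
TOKENS = {"tok-student-42": "student_42"}

def authenticate(token: str):
    """Return the student who owns `token`, or None if it is unknown.

    hmac.compare_digest keeps the comparison constant-time, so timing
    does not leak how much of a guessed token matched."""
    for known, student in TOKENS.items():
        if hmac.compare_digest(known, token):
            return student
    return None
```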
🛠 Admin Dashboard
Tools for user management, round-robin scheduling, and grade exports.
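A round-robin over all registered agents is just every unordered pair exactly once, n·(n−1)/2 matches in total. A minimal sketch (the function name is hypothetical):

```python
from itertools import combinations

def round_robin(agent_ids):
    """Pair every agent against every other agent exactly once."""
    return list(combinations(agent_ids, 2))
```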
Race Tracks
Four custom-designed maps, each testing different driving skills:
- Circuit
- Oval
- Serpentine
- Hairpin
Architecture
1. Agent Upload
2. Job Queue
3. GPU Worker
4. MetaDrive Sim
5. Leaderboard
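The five steps above can be sketched as a producer/consumer loop: the upload endpoint enqueues jobs, a worker drains the queue, runs the simulation, and publishes results. Everything below is a toy stand-in (including `run_match`, which fakes the MetaDrive rollout):

```python
import queue
import threading

jobs = queue.Queue()   # Step 2: job queue fed by the upload endpoint
leaderboard = {}       # Step 5: agent_id -> win count

def run_match(job):
    """Step 4 placeholder: a real worker would roll the episode out in
    MetaDrive; here the lexicographically larger agent id "wins"."""
    return max(job["agents"])

def worker():
    """Step 3: drain the queue and publish results until a sentinel."""
    while True:
        job = jobs.get()
        if job is None:        # sentinel: shut the worker down
            break
        winner = run_match(job)
        leaderboard[winner] = leaderboard.get(winner, 0) + 1
        jobs.task_done()

# Step 1: an upload enqueues one job per opponent, then we stop the worker.
jobs.put({"agents": ("agent_b", "agent_a")})
jobs.put(None)
t = threading.Thread(target=worker)
t.start()
t.join()
```

A sentinel `None` is a simple shutdown signal for a single worker; with several GPU workers, one sentinel per worker would be enqueued.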
Competition Results
CS260R Reinforcement Learning — Winter 2026
Acknowledgments
Designed and developed by Matthew Leng and Haoyuan Cai, VAIL @ UCLA.
Built on the MetaDrive simulator.