Activity
Ratings Progression
Challenge Categories
Challenges Entered
Trick Large Language Models
Latest submissions
Small Object Detection and Classification
Latest submissions
Understand semantic segmentation and monocular depth estimation from downward-facing drone images
Latest submissions
Status | Submission ID
---|---
graded | 218908
failed | 218906
graded | 218905
Identify user photos in the marketplace
Latest submissions
Status | Submission ID
---|---
graded | 209696
failed | 209694
graded | 209581
A benchmark for image-based food recognition
Latest submissions
Status | Submission ID
---|---
graded | 181675
graded | 181661
graded | 181634
Machine Learning for detection of early onset of Alzheimers
Latest submissions
The first, open autonomous racing challenge.
Latest submissions
Self-driving RL on DeepRacer cars - From simulation to real world
Latest submissions
Status | Submission ID
---|---
graded | 166423
graded | 166370
graded | 166369
A benchmark for image-based food recognition
Latest submissions
5 Puzzles, 3 Weeks. Can you solve them all?
Latest submissions
Multi Agent Reinforcement Learning on Trains.
Latest submissions
Status | Submission ID
---|---
graded | 75509
graded | 75508
graded | 75058
Perform semantic segmentation on aerial images from monocular downward-facing drone
Latest submissions
Status | Submission ID
---|---
graded | 218908
failed | 218906
graded | 218905
Commonsense Dialogue Response Generation
Latest submissions
Status | Submission ID
---|---
graded | 250402
graded | 250400
graded | 250205
Participant | Rating
---|---
singstad90 | 0
themaroonknight | 0
jyotish | 0
krishna_kaushik | 0
bartosz_ludwiczuk | 0
Participant | Rating
---|---
jyotish | 0
nivedita_rufus | 261
- Gaussian_Estimator (QM energy challenge)
- neutral_gear (ECCV 2020 Commands 4 Autonomous Vehicles)
- OLAV (NeurIPS 2021 AWS DeepRacer AI Driving Olympics Challenge)
- OLAV (Learn-to-Race: Autonomous Racing Virtual Challenge)
- gear_fifth (Food Recognition Benchmark 2022)
- seg-dep (Scene Understanding for Autonomous Drone Delivery (SUADD'23))
- rank-re-rank-re-rank (Visual Product Recognition Challenge 2023)
- agi_is_gonna_take_my_job (Commonsense Persona-Grounded Dialogue Challenge 2023)
Task 1: Commonsense Dialogue Response Generation
Semantic Segmentation
Same submissions with different weights failing
Over 1 year ago
Many of our submissions that differ only in model weights (and have margins of at least 1.6 seconds) are failing. Usually a resubmit fixes this, but many of the submissions stay pending for 6 to 8 hours or more.
Please advise. @dipam
SUADD'23 - Scene Understanding for Autonomous Drone Delivery
Semantic Segmentation Validation passes but Semantic Segmentation fails
Over 1 year ago
Could you please tell us why it failed, @dipam? TIA.
Semantic Segmentation Validation passes but Semantic Segmentation fails
Over 1 year ago
Link to the submission: AIcrowd
Visual Product Recognition Challenge 2023
Inconsistencies in submission timings
Over 1 year ago
This happened multiple times, so we stopped trying to submit this model.
Inconsistencies in submission timings
Over 1 year ago
submission_nick_submission_v101_6
submission_nick_submission_v101_5
Inconsistencies in submission timings
Over 1 year ago
So we have two submissions, one with model x and one with model y. Model x runs inference much faster than model y on a T4 GPU in the Kaggle workspace.
Still, the model x submission fails (times out) while the model y submission succeeds.
Everything else in the submissions is the same: preprocessing, postprocessing, etc.
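One way to sanity-check such a discrepancy locally is to time each model's inference under identical conditions. A minimal sketch follows; `model_fn` and `batch` here are hypothetical placeholders, not part of the challenge API:

```python
import time

def time_inference(model_fn, batch, n_warmup=3, n_runs=10):
    """Return the mean per-call latency (seconds) of model_fn on batch."""
    # Warm-up calls so one-off costs (caching, kernel launches) don't skew the timing
    for _ in range(n_warmup):
        model_fn(batch)
    start = time.perf_counter()
    for _ in range(n_runs):
        model_fn(batch)
    return (time.perf_counter() - start) / n_runs
```

Running this on both models with the same batch would show whether the timeout reflects the models themselves or something else in the evaluation pipeline.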
Food Recognition Benchmark 2022
Error when building Docker image in active submission
Over 2 years ago
Thanks for the fix @shivam. Another issue I am facing is that on an active submission I get the message "Submission failed: The participant has no submission slots remaining for today. Please wait until 2022-05-03 06:17:25 UTC to make your next submission." But I see that I have 6 submissions remaining today.
Error when building Docker image in active submission
Over 2 years ago
Since 29th April I don't see any active submissions happening. @shivam @jyotish
Error when building Docker image in active submission
Over 2 years ago
Yes, I get the same error when I run the evaluation with the debug flag set to false in aicrowd.json.
Build fails on setting debug = false in aicrowd.json
Over 2 years ago
By default, aicrowd.json has the debug flag set to "true".
On evaluating, I get the message: "Warning: The evaluation is running in debug mode. You can set "debug": false in your aicrowd.json for a full evaluation."
However, on setting debug to false, the Docker image build fails.
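For reference, a minimal aicrowd.json sketch with the debug flag disabled might look like the following. The exact set of fields varies per challenge starter kit, so the field names other than "debug" (e.g. "challenge_id", "authors", "gpu") are illustrative assumptions:

```json
{
  "challenge_id": "food-recognition-benchmark-2022",
  "authors": ["your-aicrowd-username"],
  "gpu": true,
  "debug": false
}
```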
NeurIPS 2021 AWS DeepRacer AI Driving Olympics Challenge
Regarding evaluation criteria for online submissions
Almost 3 years ago
Hi everyone!
According to the competition page, we are evaluated on the following metrics:
- Number of laps
- Lap Time
- Number of resets to the start line
- Number of objects avoided
But I have found that, among the agents I have trained, the faster agents (following the racing line) score much lower than the agents that go as slowly as possible on the centerline. This observation has been consistent across all my evaluations.
Also, the numbers I see on the scoreboard are very close to the mean rewards my agents accumulate across multiple runs. Is the scoreboard currently reflecting the mean reward our agents accumulate across multiple runs? Could the exact formula for calculating the score be revealed?
Is this a bug or am I missing something here?
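The hypothesis above (scoreboard value as the mean of per-run total rewards) can be stated as a small sketch. This is the poster's guess at the aggregation, not the organizers' confirmed formula:

```python
def mean_total_reward(runs):
    """Mean of per-run total rewards; runs is a list of per-step reward lists."""
    totals = [sum(r) for r in runs]
    return sum(totals) / len(totals)

# Example: two evaluation runs with per-step rewards
score = mean_total_reward([[1, 2], [3, 4]])  # totals 3 and 7, mean 5.0
```

If this matches the scoreboard, it would explain why slow-but-steady centerline agents outscore faster ones: reward accumulated per step, not lap time, would dominate.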
[Giveaway Alert] Make your first submission in the AWS DeepRacer Challenge to get free AWS Credits!
Almost 3 years ago
Submission ID #165110
ECCV 2020 Commands 4 Autonomous Vehicles
Submissions being stopped before the deadline
About 4 years ago
No. There was one submission remaining for my teammate; at around 11:45 this submission "disappeared", leaving the message "Submissions will be possible as of 2020-08-01 17:40:22 UTC." Anyway, the competition appears to have completed, so it doesn't matter now.
Submissions being stopped before the deadline
About 4 years ago
It's 11:45 pm UTC at the time of posting; the competition is supposed to last until 12:01. All submissions seem to have been stopped as of now.
Updates to Task 1 Metrics
7 months ago
We were also seeing almost zero correlation between leaderboard and offline scores when looking at F1 or BLEU, so this is a welcome change; thanks for this.
However, since the API track uses GPT-3.5, won't the GPT-3.5 scores be naturally higher for that track? At this point it is clearly visible on the leaderboard as well: all entries with the GPU track set to false are high scoring, and those with it set to true are low scoring.