Activity
Challenges Entered
A benchmark for image-based food recognition
Latest submissions
3D Seismic Image Interpretation by Machine Learning
Play in a realistic insurance market, compete for profit!
A benchmark for image-based food recognition
Predicting smell of molecular compounds
Classify images of snake species from around the world
Latest submissions
Status | Submission ID
---|---
graded | 107480
failed | 107478
Robots that learn to interact with the environment autonomously
5 Puzzles, 3 Weeks | Can you solve them all?
Latest submissions
Status | Submission ID
---|---
graded | 116256
graded | 116254
graded | 116253
Latest submissions
Status | Submission ID
---|---
graded | 107590
Grouping/Sorting players into their respective teams
Latest submissions
Status | Submission ID
---|---
graded | 84933
graded | 84922
failed | 84921
Hockey Puck Tracking Challenge
Data Labels & metric
About 4 years ago

- Are there any labeled positions for the puck? As far as I understand, there are no labels in the dataset.
- Can we hand-label the provided dataset?
- Is external data allowed?
- What is the evaluation metric? Mean squared error?
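If the metric is indeed mean squared error on predicted per-frame puck coordinates (an assumption; the organizers had not confirmed either the metric or the label format), it could be computed along these lines:

```python
import numpy as np

def puck_mse(pred, true):
    """Mean squared error over per-frame (x, y) puck positions.

    pred, true: array-likes of shape (n_frames, 2). This format is
    hypothetical; the challenge had not published a label schema.
    """
    pred = np.asarray(pred, dtype=float)
    true = np.asarray(true, dtype=float)
    return float(np.mean((pred - true) ** 2))

# Example: two frames, predictions off by (3, 4) and (0, 0) pixels
print(puck_mse([[103.0, 204.0], [50.0, 60.0]],
               [[100.0, 200.0], [50.0, 60.0]]))  # → 6.25
```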
Hockey Team Classification
Clarifications about datasets
Over 4 years ago

Thank you for the challenge; I have a few questions:

- Is it OK to manually hand-label the examples from the provided dataset (with 2,200 groups) if a trained model generalizes to the dataset with unknown teams added (22,000 groups)?
- What do the "score" and "secondary score" columns on the leaderboard mean? According to the description, "score" is the quality on the dataset with 2,200 groups and "secondary score" is the quality on the dataset with 22,000 groups, but the rules state: "The results achieved must not deviate by more than 5% when run against datasets with different team images." Does that mean the gap between the score and secondary score columns on the leaderboard must be no more than 5%?
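One plausible reading of that rule (an assumption on my part; the organizers define the exact check) is a relative-deviation test between the two leaderboard columns:

```python
def within_deviation(score, secondary_score, max_rel_dev=0.05):
    """Check whether secondary_score deviates from score by at most
    max_rel_dev, relative to score. This mirrors one reading of the
    5% rule; the organizers' exact formula was not published."""
    if score == 0:
        return secondary_score == 0
    return abs(score - secondary_score) / abs(score) <= max_rel_dev

print(within_deviation(0.90, 0.87))  # |0.90-0.87|/0.90 ≈ 0.033 → True
print(within_deviation(0.90, 0.80))  # ≈ 0.111 → False
```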
About Submission
About 4 years ago

@jason_brumwell If the list of test images is known in advance and hand-labeling is allowed, what prevents us from hand-labeling the test images and getting a perfect score (by training a model on those labels)? That approach does not seem to be prohibited by the rules, but it is presumably not the solution you are looking for (as far as I can understand).