Activity
Ratings Progression
Challenge Categories
Challenges Entered
Revolutionise E-Commerce with LLM!
Latest submissions
Trick Large Language Models
Latest submissions
Identify user photos in the marketplace
Latest submissions
Status | Submission ID |
---|---|
failed | 208688 |
failed | 208665 |
failed | 208663 |
Airborne Object Tracking Challenge
Latest submissions
Machine Learning for detection of early onset of Alzheimers
Latest submissions
3D Seismic Image Interpretation by Machine Learning
Latest submissions
Status | Submission ID |
---|---|
graded | 86014 |
graded | 82969 |
graded | 82879 |
A benchmark for image-based food recognition
Latest submissions
Status | Submission ID |
---|---|
graded | 115895 |
failed | 113237 |
failed | 113221 |
Predicting smell of molecular compounds
Latest submissions
Status | Submission ID |
---|---|
graded | 93231 |
graded | 93227 |
graded | 93225 |
Classify images of snake species from around the world
Latest submissions
Find all the aircraft!
Latest submissions
Grouping/Sorting players into their respective teams
Latest submissions
Status | Submission ID |
---|---|
graded | 85322 |
graded | 85294 |
graded | 84890 |
5 Problems 15 Days. Can you solve it all?
Latest submissions
Sample-efficient reinforcement learning in Minecraft
Latest submissions
Multi Agent Reinforcement Learning on Trains.
Latest submissions
Status | Submission ID |
---|---|
graded | 60300 |
5 Problems 15 Days. Can you solve it all?
Latest submissions
Status | Submission ID |
---|---|
graded | 67394 |
failed | 67393 |
graded | 67389 |
Project 2: Road extraction from satellite images
Latest submissions
Project 2: build our own text classifier system, and test its performance.
Latest submissions
Help improve humanitarian crisis response through better NLP modeling
Latest submissions
Status | Submission ID |
---|---|
graded | 58201 |
graded | 58181 |
graded | 58179 |
Reincarnation of personal data entities in unstructured data sets
Latest submissions
Robots that learn to interact with the environment autonomously
Latest submissions
Imitation Learning for Autonomous Driving
Latest submissions
Status | Submission ID |
---|---|
graded | 67545 |
graded | 66085 |
failed | 66070 |
Visual SLAM in challenging environments
Latest submissions
Participant | Rating |
---|---|
nimishsantosh107 | 151 |
shraddhaa_mohan | 272 |
shivam | 136 |
jyoti_yadav2 | 0 |
vrv | 0 |
- rss_fete NeurIPS 2019: Learn to Move - Walk Around
- rssfete Food Recognition Challenge
- rssfete AMLD 2020 - Transfer Learning for International Crisis Response
- rssfete ORIENTME
- rssfete AIcrowd Blitz - May 2020
- rssfete ECCV 2020 Commands 4 Autonomous Vehicles
- rssfete Hockey Team Classification
- rssfete Seismic Facies Identification Challenge
- rssfete Learning to Smell
- rssfete Hockey: Player localization
- rssfete Hockey Puck Tracking Challenge
- rssfete Multi-Agent Behavior: Representation, Modeling, Measurement, and Applications
- rssfete ADDI Alzheimers Detection Challenge
- rssfete Visual Product Recognition Challenge 2023
- rssfete HackAPrompt 2023
- ImperialBois Amazon KDD Cup 2024: Multi-Task Online Shopping Challenge for LLMs
Seismic Facies Identification Challenge
Hockey Team Classification
Is this a fully unsupervised clustering challenge?
About 4 years ago
@jason_brumwell just so I've understood your reply clearly: we can use external datasets and/or create a dataset on our own for training a supervised model, so long as we don't hand-label the current dataset provided by you?
Secondly, is there a private test set in the challenge, since you've mentioned "when additional teams are added"? If there isn't a private test set, can you explain what you mean by this, or is it just a general statement?
Any clarity on this would be greatly appreciated.
Thanking you,
Rohit
FOODC
FOODC Editorial
Over 4 years ago
The Challenge
Maintaining a healthy diet is difficult. As the saying goes, the best way to escape a problem is to solve it. So why not leverage the power of deep learning and computer vision to build the foundation of a semi-automated food tracking application?
With over 9300 hand-annotated images spanning 61 classes, the challenge is to train accurate models that can look at images of food and detect the food items present in them.
It's time to unleash the food (data) scientist in you! Given any image, identify the food item present in it.
Downloads and Installs
!wget -q https://s3.eu-central-1.wasabisys.com/aicrowd-practice-challenges/public/foodc/v0.1/train_images.zip
!wget -q https://s3.eu-central-1.wasabisys.com/aicrowd-practice-challenges/public/foodc/v0.1/test_images.zip
!wget -q https://s3.eu-central-1.wasabisys.com/aicrowd-practice-challenges/public/foodc/v0.1/train.csv
!wget -q https://s3.eu-central-1.wasabisys.com/aicrowd-practice-challenges/public/foodc/v0.1/test.csv
!mkdir data
!mkdir data/test
!mkdir data/train
!unzip train_images -d data/train
!unzip test_images -d data/test
!mkdir models
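Once the files are in place, a quick sanity check on the downloaded metadata can confirm the class count; a minimal sketch, assuming train.csv exposes the ImageId and ClassName columns used later in the notebook:
import pandas as pd

# Sketch: verify the download before training.
train_df = pd.read_csv("train.csv")
print("Training rows:", len(train_df))
print("Classes:", train_df.ClassName.nunique())      # expected: 61
print(train_df.ClassName.value_counts().head())      # most frequent food classes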
Imports
import sys
import os
import gc
import warnings
import torch
import torch.nn as nn
import numpy as np
import pandas as pd
import torch.nn.functional as F
from fastai.script import *
from fastai.vision import *
from fastai.callbacks import *
from fastai.distributed import *
from fastprogress import fastprogress
from torchvision.models import *
np.random.seed(23)
torch.cuda.set_device(0)  # select GPU 0 (torch.cuda.device(0) on its own is only a context manager)
warnings.filterwarnings("ignore")
torch.multiprocessing.freeze_support()
print("[INFO] GPU:", torch.cuda.get_device_name())
DataBunch and Model
Here we use a technique called progressive resizing: at each step, the model is initialized with the weights it learned on smaller images and then fine-tuned on larger ones.
def get_data(size, batch_size):
    """
    function that returns a DataBunch as needed for the Learner
    """
    train = pd.read_csv("train.csv")
    src = (ImageList.from_df(train, path="data/", folder="train/train_images/")
           .split_by_rand_pct(0.1)
           .label_from_df())
    src.add_test_folder("test/test_images/")
    tfms = get_transforms(do_flip=True, flip_vert=False, max_rotate=10.0,
                          max_zoom=1.1, max_lighting=0.2, max_warp=0.2,
                          p_affine=0.75, p_lighting=0.75)
    data = (src.transform(tfms, size=size, resize_method=ResizeMethod.SQUISH)
            .databunch(bs=batch_size)
            .normalize(imagenet_stats))
    assert sorted(set(train.ClassName.unique())) == sorted(data.classes), "Class Mismatch"
    print("[INFO] Number of Classes: ", data.c)
    data.num_workers = 4
    return data
sample_data = get_data(32, (2048//32))
sample_data.show_batch(3, 3)
As you can see, the transforms have been applied and the image is normalized as well!
We first create a Learner at each image size and save its weights, so that every progressive-resizing step has a checkpoint to load from.
learn = create_cnn(get_data(32, (2048//32)), models.densenet161,
                   metrics=[accuracy, FBeta(beta=1, average='macro')])
learn.model_dir = "models/"
learn.save("densenet_32")

learn = create_cnn(get_data(64, (2048//64)), models.densenet161,
                   metrics=[accuracy, FBeta(beta=1, average='macro')]).load("densenet_32")
learn.model_dir = "models/"
learn.save("densenet_64")

learn = create_cnn(get_data(128, (2048//128)), models.densenet161,
                   metrics=[accuracy, FBeta(beta=1, average='macro')]).load("densenet_64")
learn.model_dir = "models/"
learn.save("densenet_128")

learn = create_cnn(get_data(256, (2048//256)), models.densenet161,
                   metrics=[accuracy, FBeta(beta=1, average='macro')]).load("densenet_128")
learn.model_dir = "models/"
learn.save("densenet_256")
def train_model(size, iter1, iter2, mixup=False):
    """
    function to quickly train a model for a certain number of iterations.
    """
    size_match = {"256": "128", "128": "64", "64": "32"}
    learn = create_cnn(get_data(size, (2048//size)), models.densenet161,
                       metrics=[accuracy, FBeta(beta=1, average='macro')])
    learn.model_dir = "models/"
    if mixup:
        learn.mixup()
    if str(size) != str(32):
        learn.load("densenet_" + str(size_match[str(size)]))
    name = "densenet_" + str(size)
    print("[INFO] Training for : ", name)
    learn.fit_one_cycle(iter1, 1e-4, callbacks=[ShowGraph(learn),
                        SaveModelCallback(learn, monitor='f_beta', mode='max', name=name)])
    learn.unfreeze()
    learn.fit_one_cycle(iter2, 5e-5, callbacks=[ShowGraph(learn),
                        SaveModelCallback(learn, monitor='f_beta', mode='max', name=name)])
Here you might notice the use of a function called mixup. mixup is a callback in fastai that is extremely effective at regularizing computer-vision models.
Instead of feeding the model the raw images, we take two images (not necessarily from the same class) and make a linear combination of them. In terms of tensors:
new_image = t * image1 + (1-t) * image2
where t is a float between 0 and 1. The target we assign to that new image is the same combination of the original targets:
new_target = t * target1 + (1-t) * target2
assuming the targets are one-hot encoded (which usually isn't the case in PyTorch). And it's as simple as that.
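As a rough illustration of what the callback does under the hood, here is a minimal PyTorch sketch (images and one_hot_targets are hypothetical batch tensors; fastai's real implementation differs in details such as per-sample t values):
import torch

# Sketch of the mixup idea: blend each batch with a shuffled copy of itself,
# and blend the one-hot targets with the same coefficient.
def mixup_batch(images, one_hot_targets, alpha=0.4):
    t = torch.distributions.Beta(alpha, alpha).sample()   # blending coefficient in (0, 1)
    perm = torch.randperm(images.size(0))                 # random pairing within the batch
    mixed_images = t * images + (1 - t) * images[perm]
    mixed_targets = t * one_hot_targets + (1 - t) * one_hot_targets[perm]
    return mixed_images, mixed_targets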
For example (blended image omitted): Dog or cat? The right answer here is 70% dog and 30% cat!
train_model(32, 5, 3)
train_model(64, 5, 4)
train_model(128, 7, 4, mixup=True)
train_model(256, 7, 5, mixup=True)
learn = create_cnn(get_data(300, (2048//300)), models.densenet161,
                   metrics=[accuracy, FBeta(beta=1, average='macro')]).load("densenet_256")
learn.model_dir = "models/"
learn.mixup()
size = 300
name = "densenet_" + str(size)
print("[INFO] Training for : ", name)
learn.fit_one_cycle(5, 1e-4, callbacks=[ShowGraph(learn),
                    SaveModelCallback(learn, monitor='f_beta', mode='max', name=name)])
learn.load("densenet_300")
interp = ClassificationInterpretation.from_learner(learn)
losses, idxs = interp.top_losses()
display(interp.plot_top_losses(9, figsize=(15,11)))
display(interp.plot_confusion_matrix(figsize=(12,12), dpi=100))
print("[INFO] MOST CONFUSED:")
interp.most_confused(min_val=5)
The model is getting confused between some very similar categories like coffee-with-caffeine and espresso-with-caffeine.
To make the model more robust to these, stronger, better-targeted augmentations could help.
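One possible direction is fastai v1's xtra_tfms hook (a hedged sketch; the specific values are illustrative, not tuned):
# Sketch: a stronger augmentation pipeline that could replace `tfms` inside get_data().
# cutout masks random patches so the model relies less on a single distinguishing region.
strong_tfms = get_transforms(do_flip=True, flip_vert=False, max_rotate=15.0,
                             max_zoom=1.2, max_lighting=0.3, max_warp=0.2,
                             p_affine=0.75, p_lighting=0.75,
                             xtra_tfms=[cutout(n_holes=(1, 4), length=(10, 40), p=0.5)])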
def make_submission(learn, name):
    images = []
    prediction = []
    probability = []
    test_path = "data/test/test_images/"
    test = pd.read_csv("test.csv")
    files = test.ImageId
    for i in files:
        images.append(i)
        img = open_image(os.path.join(test_path, i))
        pred_class, pred_idx, outputs = learn.predict(img)
        prediction.append(pred_class.obj)
        probability.append(outputs.abs().max().item())
    answer = pd.DataFrame({'ImageId': images, 'ClassName': prediction, 'probability': probability})
    display(answer.head())
    answer[["ImageId", "ClassName"]].to_csv(name, index=False)
make_submission(learn, name="submission_size300.csv")
Improving Further
- Appropriate augmentations
- Different models, like densenet201 or resnet50
- Mixed-precision training (i.e. to_fp16() in fastai)
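For instance, the last two ideas could be combined roughly like this (a hedged sketch, not part of the original notebook; assumes a GPU with FP16 support):
# Sketch: swap the architecture to densenet201 and train in mixed precision.
learn = create_cnn(get_data(256, (2048 // 256)), models.densenet201,
                   metrics=[accuracy, FBeta(beta=1, average='macro')]).to_fp16()
learn.model_dir = "models/"
learn.fit_one_cycle(5, 1e-4)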
Food Recognition Challenge
Can I submit code in PyTorch?
Over 4 years ago
Yes, absolutely.
The libraries installed on the machine that the evaluation runs on are defined by you in the Dockerfile. As long as you make the respective changes there, you can use any library you want!
Regards,
AIcrowd Team
Dataset on Kaggle
Over 4 years ago
Please note that the dataset is now available on Kaggle as well. This is to allow the problem statement, the dataset, and the starter notebooks to be accessible to Kaggle's vast data-science community.
Please find the dataset here: https://www.kaggle.com/rohitmidha23/food-recognition-challenge/
Do let us know if you face any problems accessing the data.
Regards
AIcrowd Team
Kaggle Dataset Related
Over 4 years ago
(topic withdrawn by author, will be automatically deleted in 24 hours unless flagged)
There's a Round 2?!
Over 4 years ago
@HarryWalters we'd love for you to participate.
We've added a few more starter notebooks and updated the prizes for Round 2 as well. Do take a look.
Is the graded test set similar to the already uploaded one?
Over 4 years ago
Hey @hannan4252, when you submit, your model is made to predict on a private test set, which is different from the released val/test set.
I hope this clears your doubt.
Regards,
Rohit
Submissions failing, no reason given
Almost 5 years ago
We also tagged aicrowd-bot, but no information/logs were provided because the submission randomly restarted evaluation and then failed.
Weird submission pattern
Almost 5 years ago
(topic withdrawn by author, will be automatically deleted in 24 hours unless flagged)
Submission confusion. Am I dumb?
Almost 5 years ago
Not sure what your problem could be, but we wrote code to check whether the GPU was even there, and it raised an error. So if your code uses the GPU, you have your answer.
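For reference, a minimal version of that kind of check (a sketch, not the exact code from our submission):
import torch

# Fail fast if no CUDA device is visible inside the evaluation environment.
assert torch.cuda.is_available(), "No CUDA device visible to PyTorch"
print("Using GPU:", torch.cuda.get_device_name(0))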
Issue with aicrowd_helpers.py
Almost 5 years ago
@nikhil_rayaprolu then our code seems to exit local evaluation properly and gives proper outputs, but when we submit to AIcrowd, it neither fails nor succeeds.
Issue with aicrowd_helpers.py
Almost 5 years ago
In particular,
aicrowd_events.register_event(
    event_type=aicrowd_events.AICROWD_EVENT_SUCCESS,
    message="execution_success",
    payload={  # Arbitrary Payload
        "event_type": "food_recognition_challenge:execution_success",
        "predictions_output_path": predictions_output_path
    },
    blocking=True
)
this is the part of the code that doesn't seem to be working.
Further, one thing I noticed while running ./debug.sh was that even when an error occurred, the command didn't stop.
A suggestion would be to add a check for that, or maybe even a timer, since our submissions are getting delayed.
Issues with submitting
Almost 5 years ago
@shivam I seem to be getting an HTTPS error. Can you check?
Issues with submitting
Almost 5 years ago
@shivam @nikhil_rayaprolu my submission has been in the "submitted" phase for more than a day now. Can you check up on it?
Or at least cancel it so I can submit other stuff?
Issues with submitting
Almost 5 years ago
@shivam I made a submission at 10:45 am IST and it still hasn't finished evaluating. Is there a problem on the server side?
Issues with submitting
Almost 5 years ago
@shivam is the test set on the server different? When running local evaluation we got a different mAP and recall, hence the question.
ImageCLEF 2020 VQA-Med - VQA
ImageCLEF 2020 Caption - Concept Detection
Possibility of mixed teams
Over 4 years ago
Hey,
As per the rules, we need to have an affiliation with an organization. Is it possible to form teams across organizations?
Say between two independent researchers and two researchers from a company?
@mohanty can you clarify?
Thanks!
AMLD 2020 - Transfer Learning for International...
Rssfete and tearth: Thank you so much
Almost 5 years ago
@student same here. We did this competition more as a way to get started with NLP. So if you don't mind, could you give us a brief overview of your solution?
Congrats on winning!
Clarification: Submission Count
About 4 years ago
On the submissions page it says "5 submissions remaining". Is this on a per-day basis or across the whole challenge?
Can this also be made clear on the challenge page?
Thank You,
Rohit Midha