nilabha (Nilabha Bhattacharya)
1 Follower · 1 Following

Organization: Home
Location: Bangalore, IN

Badges: 5 / 3 / 2


Activity


Ratings Progression


Challenge Categories


Challenges Entered

Latest submissions: No submissions made in this challenge.

ASCII-rendered single-player dungeon crawl game
Latest submissions: No submissions made in this challenge.

Improving the HTR output of Greek papyri and Byzantine manuscripts
Latest submissions: No submissions made in this challenge.

Latest submissions: failed 156823, graded 154860, graded 154780

Machine Learning for detection of early onset of Alzheimers
Latest submissions: graded 140046, graded 137355, graded 137339

Latest submissions: graded 174025

Measure sample efficiency and generalization in reinforcement learning using procedurally generated environments
Latest submissions: No submissions made in this challenge.

Self-driving RL on DeepRacer cars - From simulation to real world
Latest submissions: No submissions made in this challenge.

Robustness and teamwork in a massively multiagent environment
Latest submissions: No submissions made in this challenge.

Latest submissions: failed 144468, graded 144444, graded 144327

Multi-Agent Reinforcement Learning on Trains
Latest submissions: failed 93962, graded 89220, graded 89210

Latest submissions: graded 144152

Latest submissions: graded 9963, failed 9876, failed 9775

Latest submissions: graded 9744, graded 9743

5 Problems 15 Days. Can you solve it all?
Latest submissions: No submissions made in this challenge.

Latest submissions: No submissions made in this challenge.

Sample-efficient reinforcement learning in Minecraft
Latest submissions: No submissions made in this challenge.

Multi Agent Reinforcement Learning on Trains.
Latest submissions: failed 32805, failed 32778, failed 32758

Recognise Handwritten Digits
Latest submissions: graded 60255, failed 60250

Online News Prediction
Latest submissions: failed 60273, graded 60271

Crowdsourced Map Land Cover Prediction
Latest submissions: graded 60285

Project 2: Road extraction from satellite images
Latest submissions: No submissions made in this challenge.

Project 2: build our own text classifier system, and test its performance.
Latest submissions: No submissions made in this challenge.

Robots that learn to interact with the environment autonomously
Latest submissions: No submissions made in this challenge.

Multi-Agent Reinforcement Learning on Trains
Latest submissions: No submissions made in this challenge.
Participant Rating
hagrid67 103

Learn-to-Race: Autonomous Racing Virtual Challenge

πŸ—οΈ Claim Your Training Credits

Over 2 years ago

Submission Id : 174025
I work in the Finance domain as a day job and am experienced in RL across a range of projects and competitions.

Flatland

Current status of imitation agent in baseline repository

About 4 years ago

The above script was for running PPO and IL alternately…

If you want pure IL, you can try:
train.py -ef baselines/custom_imitation_learning_rllib_tree_obs/pure_imitation_tree_obs.yaml --eager --trace

Current status of imitation agent in baseline repository

About 4 years ago

The imitation trainer works; we have generated results with it. You can run training and simultaneous evaluation using the script
train.py -ief baselines/custom_imitation_learning_rllib_tree_obs/ppo_imitation_tree_obs.yaml --eager --trace
(drop the -e flag if you don’t want to run evaluation)
The only catch is that the OR expert solution it uses was built for an older flatland version where the malfunction rate was defined differently. So if you are training with malfunctions, you can work around it by making the change below in the flatland source code.

Change the line below in the method malfunction_from_file in the file flatland/envs/malfunction_generators.py:

mean_malfunction_rate = 1/oMPD.malfunction_rate
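To make the intended edit explicit, here is a before/after sketch; the commented-out line is only an assumption of what the newer flatland versions contain, so verify it against your installed copy of malfunction_from_file first:

# inside malfunction_from_file in flatland/envs/malfunction_generators.py
# before (assumed line in the newer flatland versions; verify locally):
# mean_malfunction_rate = oMPD.malfunction_rate
# after (workaround so the rate matches the older version the expert demonstrations were generated with):
mean_malfunction_rate = 1 / oMPD.malfunction_rate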

The documentation at https://flatland.aicrowd.com/research/baselines/imitation_learning.html is a bit old; we will update it soon.
You can also refer to this Google Colab notebook, which has the details along with the results: https://colab.research.google.com/drive/1oK8yaTSVYH4Av_NwmhEC9ZNBS_Wwhi18#scrollTo=P_IMrdL27Ii7
Let me know if you are facing any issues.

RLLib Baselines on Colab!

About 4 years ago

We have taken the repo from https://gitlab.aicrowd.com/flatland/neurips2020-flatland-baselines
and turned it into a simple Colab notebook:

Open In Colab

All training scripts are provided, so one can modify the configs and do runs of their own. Evaluation is also run, and a script to calculate scores on an independent test set is included.

Using a trained agent in RLlib

About 4 years ago

Your approach seems correct in principle… I am not sure why the trainer cannot restore from the checkpoint. You could compare with the example provided.
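In case it helps, the usual restore pattern with the RLlib versions from that period looks roughly like the sketch below; CartPole stands in for your registered flatland environment, and the checkpoint path is illustrative:

import gym
import ray
from ray.rllib.agents.ppo import PPOTrainer

ray.init()
config = {"num_workers": 0}  # model-related config should match the one used during training
trainer = PPOTrainer(config=config, env="CartPole-v0")  # substitute your registered flatland env id
trainer.restore("/path/to/checkpoint/checkpoint-100")   # illustrative checkpoint path

env = gym.make("CartPole-v0")
obs = env.reset()
action = trainer.compute_action(obs)  # query the restored policy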

Using a trained agent in RLlib

About 4 years ago

You can refer to the rollout.py script in the AIcrowd baselines for flatland.

And the corresponding script

Note that this runs small environments with a custom seed. You will have to change the environment logic for your purpose.

Expert demonstrations for Imitation Learning: Recreating Malfunctions

About 4 years ago

The flatland-rl version has been updated to 2.2.2. (Upgrade it using the command pip install -U flatland-rl). Can you check if the malfunctions are replicable with the same seed? Let us know if you are facing any issues.

Expert demonstrations for Imitation Learning: Recreating Malfunctions

About 4 years ago

It does this slightly differently from the MARWIL/Ape-X DQfD versions in that it runs episodes alternately via IL and RL (the ratio defaults to 50%, but it can be changed, and also decayed over time, by changing the configs).
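As a rough illustration of that alternation logic (just a sketch with made-up names, not the actual trainer or config code from the repo):

import random

def use_imitation(episode: int, start_ratio: float = 0.5, decay: float = 0.999) -> bool:
    # Probability of running this episode via IL starts at start_ratio and
    # decays multiplicatively with the episode count; otherwise the episode runs RL.
    ratio = start_ratio * (decay ** episode)
    return random.random() < ratio

# e.g. episode 0 is IL roughly 50% of the time; by episode 2000 the ratio has
# decayed to about 0.5 * 0.999**2000 ≈ 0.07, so most episodes run plain RL.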

Expert demonstrations for Imitation Learning: Recreating Malfunctions

About 4 years ago

You could also try our online RL solution, which does not require any of these intermediate steps like generating experiences; it runs everything on the fly…
You can find a pure IL and an IL + PPO solution here


We haven’t documented it yet, but we will soon. It uses last year’s 2nd place solution from CkUA. Unfortunately, that was built for an earlier flatland version, so as of now you have to change the malfunction behaviour to match the previous versions as follows:

Change the line below in the method malfunction_from_file in the file flatland/envs/malfunction_generators.py:

mean_malfunction_rate = 1/oMPD.malfunction_rate

Expert demonstrations for Imitation Learning: Recreating Malfunctions

About 4 years ago

Are you using the same flatland version for both creating and loading the environments? The solutions for creating the experiences in the AIcrowd baselines for MARWIL and Ape-X DQfD were mostly used in environments without malfunctions, and they used a seed value of 1001 (https://flatland.aicrowd.com/research/baselines/imitation_learning.html).

Flatland Challenge

Solution Codes and Approaches

Almost 5 years ago

I haven’t submitted it yet, but I can share the results of a few envs from the local evaluation:

Evaluation  Env Path                        Reward   Steps  Env Creation (s)  Controller Time mean/std (s)  Time per Step mean/std (s)
1           ./test-envs/Test_5/Level_1.pkl  -43.83   1120   0.42              0.0190 / 0.0018               0.1481 / 0.0096
2           ./test-envs/Test_3/Level_0.pkl  -43.42    960   0.30              0.0232 / 0.0025               0.1483 / 0.0129
3           ./test-envs/Test_6/Level_0.pkl  -52.00   1760   1.55              0.0244 / 0.0039               0.1994 / 0.0243

Solution Codes and Approaches

Almost 5 years ago

I have added another code file with a different approach that does not use a model.

The code can be found in the local GitHub location

For a simple demonstration of how we solve a dense railway network, simply run the file
MultipleAgentNavigationObsConflict.py.
This file does not use any packages beyond the ones required for flatland, and it can be run with the latest flatland-rl version, 2.1.10.

Solution Codes and Approaches

Almost 5 years ago

I have put up some code here


This includes the actorcritictrainer.py file, which implements an actor-critic approach, and ESStrategyTraining.py, which implements an evolution strategies (ES) approach.
The results seem to be similar to the Dueling Double DQN approach. I have saved sample results and pre-trained weights.
This has all been done using the stock observations.
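To give a rough idea of the setup, the core network looks roughly like the sketch below; the layer sizes and names are illustrative, not necessarily exactly what actorcritictrainer.py uses:

import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    # Shared torso feeding a policy head over the 5 flatland actions and a value head.
    def __init__(self, obs_size: int, n_actions: int = 5, hidden: int = 128):
        super().__init__()
        self.torso = nn.Sequential(
            nn.Linear(obs_size, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.policy_head = nn.Linear(hidden, n_actions)
        self.value_head = nn.Linear(hidden, 1)

    def forward(self, obs: torch.Tensor):
        h = self.torso(obs)
        # log-probabilities over actions and a scalar state-value estimate
        return torch.log_softmax(self.policy_head(h), dim=-1), self.value_head(h)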

Adding to Erik’s comments, my observations are

  • These models do not show improvement even after training for longer periods and show comparable performance, suggesting that we need to do better feature engineering.

As of now, I next plan to do some visualizations and add documentation to make the code better.

Any comments/suggestions are most welcome.

Submission Errors Flatland

Over 5 years ago

Hi @mohanty @ashivani

I am getting an error on evaluation
https://gitlab.aicrowd.com/nilabha/flatland-challenge-starter-kit/issues/6

Can you help with the logs?

Submission Errors Flatland

Over 5 years ago

@ashivani

I am not able to see the logs, but I can see comments on the issue, though only the first line:

2019-08-03T15:15:13.396985671Z Traceback (most recent call last):…

Submission Errors Flatland

Over 5 years ago

@mohanty

I am getting an error in evaluation
https://gitlab.aicrowd.com/nilabha/flatland-challenge-starter-kit/issues/3

Somehow I cannot see any error logs even though debug=True.
I have tested this in the local environment (using the redis server etc…) and it is working.
Can you please help with the error logs?

Thanks,
Nilabha

SnakeCLEF2021 - Snake Species Identification Chall

Submission Errors

Over 5 years ago

Thanks kongas
I used the code below to remove the images:

# keep only test images whose paths are not in the removal list lsRemove
filter_func = lambda x: str(x) not in lsRemove
test_img = ImageList.from_folder(path).filter_by_func(filter_func)

Though there is another error…
https://gitlab.aicrowd.com/nilabha/snake-species-identification-challenge/issues/33

@mohanty
Has the competition ended, or will it restart again? I would have liked to get a score, as my validation results were good.
Is it possible to get the error logs?

Evaluation stuck [Edit : Evaluation took a long time]

Over 5 years ago

@mohanty

All submissions seem to be queued for a long time.
Is there some problem?

Thanks

Submission Errors

Over 5 years ago

@mohanty
I have put in a workaround to find the images that fail to load and delete them, and then later add probabilities for these images, all equal to 1/45.
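Roughly, the workaround looks like the sketch below (the folder name and helper are illustrative, not the exact submission code): detect the images PIL cannot open, drop them from the test set, and later append a uniform row of 1/45 for each dropped image.

from pathlib import Path
from PIL import Image

def find_unreadable_images(folder):
    # Collect paths of images that fail PIL's integrity check.
    bad = []
    for p in Path(folder).rglob("*.jpg"):
        try:
            Image.open(p).verify()
        except Exception:
            bad.append(str(p))
    return bad

lsRemove = find_unreadable_images("test_images")  # illustrative folder name
uniform_row = [1.0 / 45] * 45                     # appended for each removed image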
However, I still get an error:
https://gitlab.aicrowd.com/nilabha/snake-species-identification-challenge/issues/32

Can you please help with the error?

Thanks,
Nilabha

nilabha has not provided any information yet.

Notebooks
