REAL 2020 - Robot open-Ended Autonomous Learning
Round 2 has started!
About 4 years ago
Dear participants,
today Round 2 of the competition starts!
The second round is open to everyone, even those who did not participate in Round 1.
As announced, we have updated both the environment and the starter kit for Round 2.
real_robots package updated to v.0.1.21
(New feature: depth camera!): While we have removed the additional observations of Round 1 (object position and segmented image), we have decided to add a depth observation.
This has been a long-requested feature since REAL 2019. Many participants observed that, since the environment involves a shelf and the camera has a top view, it can be hard to judge depth based only on the RGB input. Given that depth cameras are nowadays a common sensor for robots and that returning the depth has no performance impact (PyBullet already calculated it behind the scenes), we have decided to add it. The observations dictionary now has an additional depth entry with a 320x240 depth image.
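Reading the new entry is a one-liner; here is a minimal sketch, assuming env is an already-created real_robots environment and action a valid action for it:
```python
# Minimal sketch: reading the new depth observation.
observation, reward, done, info = env.step(action)
depth = observation['depth']     # new 320x240 depth image (v0.1.21)
rgb = observation['retina']      # the usual RGB camera image
```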
(Fixed - missing objects): we have improved the reset mechanisms for objects that fall off the table.
In some cases it was previously possible for objects to get stuck inside the table, below the shelf: this has been fixed.
(Fixed and improved - Cartesian control): we have fixed Cartesian control, as the gripper_command was not being performed. We have also added the option to send "None" as a cartesian_command to have the robot go to the "Home" position.
Finally, we have improved the speed of Cartesian control by adding a cache mechanism (thanks @ermekaitygulov for the suggestion).
If you repeat the same cartesian_command for more than one timestep, it will use the cache instead of calculating the inverse kinematics again.
This makes Cartesian control much faster, doubling its speed if you repeat the same cartesian_command for 4 timesteps.
(Improved - Joints control): we have added the possibility to send "None" as a joint_command, which is equivalent to sending the "Home" position (all joints to zero). This makes it easier to switch between different types of control, since "Home" is always the None command.
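For example, both control types now share the same shortcut; this is a sketch only, where the exact action-dictionary format is an assumption and only the command names come from this announcement:
```python
# Hypothetical sketch: both control types share the same Home shortcut.
action = {'joint_command': None}       # Home position (all joints to zero)
observation, reward, done, info = env.step(action)

action = {'cartesian_command': None}   # same Home position via Cartesian control
observation, reward, done, info = env.step(action)
```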
(New feature: Videos!!)
We have added the ability to record videos during the intrinsic and extrinsic phases!
You will find in the local_evaluation.py file of the starter kit a line with video = (True, True, True):
- Intrinsic phase recording: the first True means that the intrinsic phase will be recorded. It will automatically record 3 minutes of the intrinsic phase: the first minute (12000 timesteps), then one minute starting at the middle of the phase, and then the last minute of the phase. You can set this to False to have no video of the intrinsic phase, or you can set a different interval of frames to be recorded, e.g. video = (interval([0, 50000], [70000, 200000]), True, True) will record the first 50k frames and then from frame 70000 to frame 200000.
- Extrinsic phase recording: the second True means that the extrinsic phase will be recorded. It will automatically record 5 trials, chosen at random. You can set this to False to have no video of the extrinsic phase, or you can set which trials should be recorded, e.g. video = (True, interval(7, [20-30]), True) will record trial number 7 and also all the trials from 20 to 30.
- Debug info: the third True means that debug info will be added to the videos, such as current timesteps and scores. This can be set either to True or False (no debug info printed on the videos).
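Putting the three elements together, some valid configurations following the description above would be:
```python
# Configurations of the video tuple described above.
video = (True, True, True)     # record both phases, with debug info
video = (False, True, False)   # extrinsic phase only, no debug info
video = (True, False, True)    # intrinsic phase only, with debug info
```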
REAL2020_starter_kit repository updated!
(Improvement - Baseline for Round 2) As the macro_action control is now forbidden for Round 2, we have updated the baseline to use joints control.
The baseline now produces a variable-length list of joint positions to go to, periodically returning to the Home position to check what the effects of those actions were.
It scores lower than using the pre-defined macro_action, but it is still able to learn and move the cube in a variety of positions consistently.
It is also fun to watch as it twists and finds weird strategies to move the cube with all the parts of the robotic arm!
(Improvement - pre-trained VAE) The baseline now automatically saves its Variational Autoencoder after training in two folders (trained_encoder and trained_decoder).
We have added a new parameter in baseline/config.yaml: it is now possible to set pre_trained_vae: true and the previously trained VAE will be loaded in subsequent runs.
This is especially beneficial on computers without a GPU, where training the VAE takes very long.
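The relevant line in baseline/config.yaml then looks like this (an excerpt only; the file contains other parameters as well):
```yaml
# baseline/config.yaml (excerpt)
pre_trained_vae: true   # reload the VAE saved in trained_encoder/trained_decoder
```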
As always, feel free to use the baseline as a starting point to develop your submissions, and enjoy the competition.
We look forward to your submissions!
Round 2 starts on November 16th!
About 4 years ago
Dear participants,
after the warm-up Round 1, the challenge now moves on to Round 2!
Round 2 will officially start on Monday, November 16th.
On October 28th, we presented the REAL competition and Round 2 at ICDL 2020; you can see the video here.
Rules
We have updated the Rules as we decided to keep most of the simplifications of Round 1 for Round 2 as well.
In Round 2 it will still be possible to elect not to use the gripper and to keep its orientation fixed.
It will no longer be possible to use macro actions or the additional observations, so the robot has to be controlled directly in either joint or Cartesian space, and the object positions and the segmented image will no longer be available.
Fixes and improvements - real_robots update
Before Round 2 starts we will deploy a new version of the real_robots package, which will include a number of fixes and improvements (e.g. see here).
We will also later update the Starter Kit package, since the baseline it contains is no longer valid for Round 2 (macro_action is not allowed). Other parts of the baseline are still valid and can be used as a source of inspiration.
Prizes
- Top 3 teams will be invited to co-author a paper.
- Top 3 teams will also receive free registrations to ICDL-2021 (hoping it will be an in-person conference!) - 1 free registration and a 50% discounted registration for the first team, 1 free registration for the second team, 1 discounted registration for the third team.
Feel free to post here any questions you have, and I'm looking forward to all your Round 2 submissions!
Score on 2D?
About 4 years ago
Dear @nguyen_thanh_tin,
the score is set to 0 because the submission was set to "debug".
Debug submissions only run 5 extrinsic trials and the final score is always 0 (even if the individual trials were better).
To run a normal submission go into the aicrowd.json file in the starter kit and set
"debug": false
Cartesian space question
About 4 years ago
Dear @ermekaitygulov,
thanks for the bug report.
A new release of the real_robots repository will be available before Round 2, which will include a fix so that the gripper_command is not ignored.
We've found Cartesian space slows down fps. For example, on my PC using the "macro_action" and "joints" action spaces the environment could make around 1000 steps per second, but "cartesian" slows down to 100 steps per second.
The reason is the inverse kinematics calculation. Every environment step is a simulation step, so to change the arm pose in the "joints" or "cartesian" spaces you need to send the same action for 100-500 steps, and the same inverse kinematics calculations are performed 100-500 times. To speed up actions in the "cartesian" space, action caching can be used (as in the "macro" space).
Good idea!
I will add a caching mechanism so that if you send the same command as the previous one it won't perform the inverse kinematics again (i.e. it keeps the latest inverse kinematics result saved).
This will also be included in the release before Round 2.
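For intuition, the caching mechanism can be as simple as this sketch; all names here are hypothetical, not the actual real_robots internals:
```python
import numpy as np

# Hypothetical sketch of the caching idea: recompute the inverse
# kinematics only when the cartesian command actually changes.
_cached_command = None
_cached_joints = None

def cartesian_to_joints(command):
    global _cached_command, _cached_joints
    if _cached_command is not None and np.array_equal(command, _cached_command):
        return _cached_joints                              # reuse the cached IK result
    _cached_joints = compute_inverse_kinematics(command)   # hypothetical helper
    _cached_command = np.array(command)
    return _cached_joints
```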
The start of Round 2
About 4 years ago
Dear @nguyen_thanh_tin,
we will make an announcement on Round 2 later this week.
Round 2 start date will probably be between Monday 9th and Monday 16th.
Round 1 has ended!
About 4 years ago
Dear all,
Round 1 has come to a close, thanks to all those who participated.
Congratulations to ermekaitygulov and the AiCrows team, who made valid submissions and entered the Leaderboard, winning free registrations for the ICDL conference!
We will now have a brief pause until November, when Round 2 will start.
Requesting logs
About 4 years ago
For the 3.5GB file, have a look at How to upload large files (size) to your submission.
Large files can be uploaded to the AICrowd repository using git-lfs.
Requesting logs
About 4 years ago
Dear @boussif_oussama,
if it says 5 trials and 0 score, then it must have been a debug submission.
In aicrowd.json set debug to false and it will run the whole 50 extrinsic trials (and you will get a score > 0 on the leaderboard).
I will have a look at the Out of Memory error.
Requesting logs
About 4 years ago
Looking at the error, I would guess that the variable actions passed to the DynamicAbstractor is empty.
This may happen because the intrinsic phase was not run and no action was loaded from a file, so the variable actions stayed empty.
During Round 1 and Round 2, only the extrinsic phase is evaluated for the submissions - the intrinsic phase must be run beforehand on your own computer and the results saved, so that your algorithm can reload and use them when the extrinsic phase is run.
As an example, the Baseline algorithm saves a transitions file at the end of the intrinsic phase and then uses it for the extrinsic phase if baseline/config.yaml is configured to do so (use_experience_data = True and the name of the transitions file set in experience_data).
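In practice the pattern looks like this sketch; the file name and the toy data are assumptions, only the save-then-reload idea comes from the baseline:
```python
import numpy as np

# On your own computer, at the end of the intrinsic phase
# ("transitions" stands in for whatever your algorithm collected):
transitions = [{'action': i, 'effect': i * 2} for i in range(3)]
np.save('experience_data.npy', np.array(transitions, dtype=object))

# In the submitted controller, before the extrinsic phase starts:
transitions = np.load('experience_data.npy', allow_pickle=True)
```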
Requesting logs
About 4 years ago
Dear @boussif_oussama,
here is the agent log:
2020-10-13T20:52:13.794930498Z 2020-10-13 20:52:13.794721: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Tesla V100-PCIE-32GB, Compute Capability 7.0
2020-10-13T20:52:14.180614548Z [REALRobot] Copying over data into pybullet_data_path.This is a one time operation.
2020-10-13T20:52:14.180641021Z 1 Physical GPUs
2020-10-13T20:52:14.180644362Z 1 Logical GPUs
2020-10-13T20:52:14.1806469Z INFO:matplotlib.font_manager:generated new fontManager
2020-10-13T20:52:14.582229999Z /srv/conda/envs/notebook/lib/python3.6/site-packages/gym/logger.py:30: UserWarning: e[33mWARN: Box bound precision lowered by casting to float32e[0m
2020-10-13T20:52:14.582253826Z warnings.warn(colorize('%s: %s'%('WARN', msg % args), 'yellow'))
2020-10-13T20:52:14.582698571Z [WARNING] Skipping Intrinsic Phase as intrinsic_timesteps = 0 or False
2020-10-13T20:52:14.582704243Z ######################################################
2020-10-13T20:52:14.582706577Z # Extrinsic Phase Initiated
2020-10-13T20:52:14.582709014Z ######################################################
2020-10-13T20:52:14.582711317Z pybullet build time: Oct 8 2020 00:10:04
2020-10-13T20:52:14.584499436Z
Extrinsic Phase: 0%| | 0/5 [00:00<?, ?trials /s]
Extrinsic Phase: 0%| | 0/5 [00:00<?, ?trials /s]
Extrinsic Phase: 0%| | 0/5 [00:00<?, ?trials /s]
Extrinsic Phase: 0%| | 0/5 [00:00<?, ?trials /s]Traceback (most recent call last):
2020-10-13T20:52:14.584520268Z File "1050662482_evaluation.py", line 26, in <module>
2020-10-13T20:52:14.584524556Z goals_dataset_path=DATASET_PATH
2020-10-13T20:52:14.584526997Z File "/srv/conda/envs/notebook/lib/python3.6/site-packages/real_robots/evaluate.py", line 424, in evaluate
2020-10-13T20:52:14.584529811Z evaluation_service.run_extrinsic_phase()
2020-10-13T20:52:14.584532269Z File "/srv/conda/envs/notebook/lib/python3.6/site-packages/real_robots/evaluate.py", line 320, in run_extrinsic_phase
2020-10-13T20:52:14.584534824Z raise e
2020-10-13T20:52:14.584553011Z File "/srv/conda/envs/notebook/lib/python3.6/site-packages/real_robots/evaluate.py", line 314, in run_extrinsic_phase
2020-10-13T20:52:14.584555699Z self._run_extrinsic_phase()
2020-10-13T20:52:14.584557957Z File "/srv/conda/envs/notebook/lib/python3.6/site-packages/real_robots/evaluate.py", line 340, in _run_extrinsic_phase
2020-10-13T20:52:14.584560498Z self.controller.start_extrinsic_phase()
2020-10-13T20:52:14.584562701Z File "/home/aicrowd/baseline/policy.py", line 530, in start_extrinsic_phase
2020-10-13T20:52:14.584565042Z self.planner = Planner(allAbstractedActions)
2020-10-13T20:52:14.584567343Z File "/home/aicrowd/baseline/planner.py", line 45, in __init__
2020-10-13T20:52:14.584569781Z self.abstractor = abstr.DynamicAbstractor(actions)
2020-10-13T20:52:14.584571977Z File "/home/aicrowd/baseline/abstractor.py", line 256, in __init__
2020-10-13T20:52:14.584574454Z if len(actions[0]) != 3:
2020-10-13T20:52:14.584576742Z IndexError: list index out of range
2020-10-13T20:52:14.809836903Z
Extrinsic Phase: 0%| | 0/5 [00:00<?, ?trials /s]
Round 1 ends soon! Submission deadline 15 October 23:59 (UTC-12)
About 4 years ago
Dear all,
Round 1 is coming to a close. You have time until 15 October 23:59 Anywhere on Earth (UTC-12) to send your submissions.
Round 1 was just a warm-up, so regardless of the achieved performance, everyone will be free to participate in Round 2, even those who did not participate in Round 1.
However, I encourage you all to send submissions of your current work before Round 1 deadline, since there is still a high chance to win a free registration to the ICDL conference!
Top 3 participants will be awarded a free registration.
Competition organizers are not eligible, so do not count ec_ai and shivam submissions in the Leaderboard.
The ICDL conference focuses on understanding how biological agents take advantage of interaction with social and physical environments to develop their cognitive capabilities, and on how this knowledge can be used to improve future computing and robotic systems; the ICDL topics are a perfect fit for the REAL challenge.
During the ICDL conference we will present Round 2 of the competition so we expect many more participants to join us in November!
Cropping images rule exception and other rules clarifications
About 4 years ago
Dear all,
I have received some questions from the participants via mail that I am now reposting here for everyone.
Round 1 is closing, but these clarifications will apply to Round 2 as well.
While discussing these, we have also decided to grant explicit permission regarding cropping the observation image;
it is allowed to crop the observation image in the following manner:
cropped_observation = observation['retina'][0:180,70:250,:]
See questions below for the rationale.
Questions
Can we use computer vision techniques (such as using OpenCV) to generate some internal rewards such as the distance between the gripper and the cube in the intrinsic phase or this is not allowed?
No, it is not allowed to make a reward explicitly tied to the distance between the gripper and the cube, because it would be a way to give an information about the extrinsic task.
The robot does not know that the cube (or moving the cube) will be a "goal" later.
On the other hand, it is allowed to build intrinsic rewards that are general and would apply to any task.
For example, Pathak's Curiosity-driven Exploration by Self-supervised Prediction gives rewards for unpredicted events; we would expect such an algorithm to give a reward to the robot the first time it hits (and moves) the cube, as that would be an unpredicted event, and to keep giving rewards whenever it moves the cube in new, unexpected directions.
In general, it is difficult to create a reward that moves the robot immediately to the cube without giving it "forbidden" information - it has to hit the cube by chance at least once before intrinsic rewards can help it reach it again and more often.
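As a rough illustration of such a general, task-agnostic reward (the forward model and all names here are hypothetical, in the spirit of Pathak's approach, not part of the competition code):
```python
import numpy as np

def curiosity_reward(forward_model, observation, action, next_observation):
    """Reward = prediction error of a learned forward model.

    Unpredicted events (e.g. the arm hitting an object for the first
    time) produce large errors and therefore large intrinsic rewards.
    """
    predicted = forward_model(observation, action)   # hypothetical learned model
    return float(np.mean((predicted - next_observation) ** 2))
```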
Is it possible to use OpenCV to locate the red cube?
No, using OpenCV to directly locate the red cube is forbidden.
However, using OpenCV is not forbidden per se, as long as one does not put specific knowledge about the task in it (like: locate the red cube because we need to touch it).
As an example, the baseline does use OpenCV. We used it to subtract the background of the collected images so that the VAE only processed the images of what had changed before and after the action was performed.
Using a VAE also, in a way, locates the red cube, since the latent space usually correlates with the x,y position of the cube - however, this is allowed because it does so in a general manner (i.e. the VAE would also work for other objects, and even if the task were different, such as rotating the cube or putting the arm in a specific position).
Is it okay to crop the observation image?
It is difficult to crop the image in a "general way" without introducing some knowledge about the task - the temptation would be to just zoom in on the table, since that is where the real action is. To avoid breaking the rules, one would have to invent something clever so that the robot itself learns where it should focus its attention and then does the crop by itself.
On the other hand, the provided 320x240 image observation is mostly blank. We used a really big field of view, which goes well beyond the table (especially at the bottom of the image where the arm would rarely go).
So, if it is needed for performance reasons we allow as a special exception to crop the image to 180x180 by doing this:
image = observation['retina'][0:180,70:250,:]
This crops all the extra "white" on the sides and at the bottom. It also crops some information about the arm, since the arm can easily go beyond the table.
Notice that resizing the whole image (or the 180x180 crop) is always allowed, since it alters the whole image at once, without bias.
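Putting the two allowed operations together (the crop coordinates come from the exception above; the resize uses OpenCV, which is permitted for general-purpose processing, and the 64x64 target size is just an example):
```python
import cv2

# "observation" is assumed to come from env.step() as in the snippets above.
cropped = observation['retina'][0:180, 70:250, :]   # the allowed 180x180 crop
resized = cv2.resize(cropped, (64, 64))             # resizing is always allowed
```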
Is it possible to reset the environment?
It is not allowed to reset the environment - however, the environment will automatically reset the position of any objects that fall off the table.
Requesting logs
About 4 years ago
Dear @anass_elidrissi,
here is the final part of the agent log:
2020-10-11T20:01:57.289113523Z 2020-10-11 20:01:57.288914: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Tesla V100-PCIE-32GB, Compute Capability 7.0
2020-10-11T20:01:57.667584462Z [REALRobot] Copying over data into pybullet_data_path.This is a one time operation.
2020-10-11T20:01:57.667603822Z 1 Physical GPUs
2020-10-11T20:01:57.667607494Z 1 Logical GPUs
2020-10-11T20:01:57.667610593Z INFO:matplotlib.font_manager:generated new fontManager
2020-10-11T20:01:57.818557511Z pybullet build time: Oct 8 2020 00:10:04
2020-10-11T20:01:57.81892566Z Traceback (most recent call last):
2020-10-11T20:01:57.818944719Z File "1050662482_evaluation.py", line 4, in <module>
2020-10-11T20:01:57.818948903Z from my_controller import SubmittedPolicy
2020-10-11T20:01:57.818951424Z File "/home/aicrowd/my_controller.py", line 2, in <module>
2020-10-11T20:01:57.81895421Z from baseline.policy import Baseline
2020-10-11T20:01:57.818956597Z File "/home/aicrowd/baseline/policy.py", line 6, in <module>
2020-10-11T20:01:57.818959236Z import baseline.explorer as exp
2020-10-11T20:01:57.818961451Z File "/home/aicrowd/baseline/explorer.py", line 1, in <module>
2020-10-11T20:01:57.818963879Z from baseline.curiosity import Curiosity
2020-10-11T20:01:57.818966122Z File "/home/aicrowd/baseline/curiosity.py", line 1, in <module>
2020-10-11T20:01:57.818968519Z import torch
2020-10-11T20:01:57.818970835Z ModuleNotFoundError: No module named 'torch'
It seems torch is not installed when the agent is evaluated.
You have to modify environment.yml so that all the requirements of your code are included.
It is suggested to work in a Conda environment and then export it to environment.yml, so that the submission always has all the modules it needs.
See Setup and How do I specify my software runtime? sections in https://github.com/AIcrowd/REAL2020_starter_kit
Let me know if you need further assistance.
Wrappers using / observation space access
Over 4 years ago
We have released an update to fix these issues.
See here.
Update on real-robots and REAL2020_starter_kit
Over 4 years ago
Dear participants,
we have updated the real-robots package to v0.1.20 to fix some issues discussed here.
This update includes the following changes:
- BasePolicy now includes both the action_space and the observation_space in its constructor. This way, the participants' controller can be made aware of both spaces without directly accessing the environment or coding them by hand.
- The observation space now includes the "goal_positions" extended observation.
- Both "object_positions" and "goal_positions" are now defined and returned as a dictionary of arrays of shape (3,) containing the objects' x,y,z positions.
Note that during the intrinsic phase there is no goal, so "goal_positions" is None, just like the "goal_mask" observation.
The REAL2020_starter_kit repository has also been updated to reflect the changes.
When making new submissions, ensure that your real-robots copy and starter kit are updated and that your controller also accepts the new observation_space parameter.
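A controller updated for the new constructor could start like this sketch; only the two constructor parameters come from this announcement, the rest is illustrative:
```python
import real_robots

class MyController(real_robots.BasePolicy):
    def __init__(self, action_space, observation_space):
        # Both spaces are now passed in directly, with no need to
        # access the environment or hard-code them.
        self.action_space = action_space
        self.observation_space = observation_space

    def step(self, observation, reward, done):
        return self.action_space.sample()  # placeholder behaviour for the sketch
```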
Looking forward to more submissions from you all!
Wrappers using / observation space access
Over 4 years ago
About "object_position": I mean the "object_position" space shape vs the "object_position" observation shape.
The environment observation space is taken from the "robot" attribute - the Kuka class. Kuka's observation space is a Dict space. There is a key "object_position" which corresponds to a Dict space with keys ["tomato", ...]. These spaces (the "tomato" space etc.) are Box spaces with shape (7,) (real_robots/envs/robot.py, line 75). But the environment's step and reset methods return an observation where observation["object_position"]["tomato"].shape is (3,), because get_position() is called instead of get_pose() (real_robots/envs/env.py, line 234).
You are right @ermekaitygulov, the observation space definition should also be amended to reflect what step and reset currently return!
Thanks for pointing it out.
Team Creation
Over 4 years ago
Dear @nashikun,
welcome to REAL 2020!
The challenge is open to both individuals and teams.
I have removed the incorrect team freeze, so you can create teams now.
Wrappers using / observation space access
Over 4 years ago
Hi @ermekaitygulov,
Is there any way to use wrappers? There are None values (for the "goal_mask" and "goal_positions" keys) in the observation dict in the R1 environment. It can be solved by adding zero values for these keys at line 93 in real_robots/env.py:
Mmm, I am not sure I understand what you mean by using wrappers.
As you have noticed, during the intrinsic phase, the goal is an all zeros image, while the additional observations goal_mask and goal_positions are set to None.
Your controller can be made aware of when the intrinsic phase starts or ends since it should extend the real_robots.BasePolicy class:
this means that you can implement the following methods:
def start_intrinsic_phase(self):
def end_intrinsic_phase(self, observation, reward, done):
and they will be automatically called when the intrinsic phase starts and ends.
If necessary, when your controller receives the observation through its step method, you can change the None values of goal_mask and goal_positions to anything you like (e.g. all zeros, like the goal).
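Concretely, such a replacement inside your BasePolicy subclass could look like this sketch; the zero-filled shape for goal_mask is an assumption:
```python
import numpy as np

# Inside your BasePolicy subclass:
def step(self, observation, reward, done):
    # During the intrinsic phase the extended observations are None;
    # substitute neutral values so downstream code sees fixed types.
    if observation['goal_mask'] is None:
        observation['goal_mask'] = np.zeros_like(observation['retina'])
    if observation['goal_positions'] is None:
        observation['goal_positions'] = {}
    return self.action_space.sample()  # placeholder action for the sketch
```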
Also, it could be useful if the observation_space was also provided to the controller (for defining NN models, etc.). In my code I got the observation_space information from the Kuka class, but it is not the most elegant way.
Yes, correct, it would be better to pass the observation space too, since you shouldn't access the environment directly.
We will change that.
Also, the environment's "object_positions" space shape differs from the corresponding shape in the observation: (7,) vs (3,). I guess the problem is that the get_position() method is called (it returns only coordinates) instead of get_pose() (which returns coordinates and orientation).
Do you mean βobject_positionsβ vs βgoal_positionsβ observations?
Yes, they are different - it is actually the goal_positions that returns more than we intended.
Baseline question
Over 4 years ago
Dear @ermekaitygulov,
that code divides all the (ordered) differences into 200 abstraction levels, ignoring some of the differences at both the extremes.
We no longer use that "percentage_of_actions_ignored_at_the_extremes" parameter in the current baseline (it is set to 0).
However, we found it to be useful in previous versions of the baseline, when we used the object positions instead of the images+VAE for planning.
Empirically, we found that the smallest differences were due to environment noise (i.e. object position values changed very slightly between two observations even if the robot missed them), and the largest differences were not that useful as abstraction levels (i.e. if you conflate positions that differ too much from each other, the planned actions no longer work) - so it paid off to remove both.
PS: Welcome to the competition!
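A simplified sketch of that scheme (hypothetical, not the actual baseline code): sort the observed differences, drop a fraction at both extremes, and split the rest into 200 levels.
```python
import numpy as np

def abstraction_levels(diffs, ignored_fraction=0.0, n_levels=200):
    # Sort the observed differences, drop a fraction at both extremes
    # (noise at the low end, over-conflation at the high end), then
    # split the remainder into the requested number of levels.
    diffs = np.sort(np.asarray(diffs))
    k = int(len(diffs) * ignored_fraction)
    kept = diffs[k:len(diffs) - k] if k > 0 else diffs
    return np.array_split(kept, n_levels)
```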
REAL2020 starter kit - December update!
About 4 years ago
Dear participants,
we have just released a new version of the REAL2020_starter_kit repository.
This new version significantly improves the baseline's computational performance.
We have also released a transitions file using joints and the cube (see the bottom of the page here).
The current performance of the baseline is 0.150 on average on Round 2 - here's a video of how it performs:
Do you think you can do better than that? Show us!
We look forward to all your submissions!