NeurIPS 2019: MineRL Competition
Challenge Rules
- Entries to the MineRL challenge can be either “open” or “closed.” Teams submitting “open” entries are expected to reveal most details of their method, including source code (special exceptions may be made for pending publications). Teams may instead submit a “closed” entry and are then not required to provide any details beyond an abstract. This division is intended to allow greater participation from industrial teams that may be unable to reveal algorithmic details, while allocating more workshop time to teams that can give more detailed presentations. Participants are strongly encouraged to submit “open” entries if possible.
- For a team to be eligible to move to Round 2, each member must satisfy the following: (1) be at least 18 years old and at least the age of majority in their place of residence; (2) not reside in any region or country subject to U.S. Export Regulations; and (3) not be an organizer of this competition or a family member of a competition organizer. In addition, to receive any awards from our sponsors, competition winners must attend the NeurIPS workshop.
- The submission must train a machine learning model without relying heavily on human domain knowledge. A manually specified policy may not be used as a component of this model. Likewise, the reward function may not be changed (shaped) based on manually engineered, hard-coded functions of the state. For example, a learned hierarchical controller is permitted, but its meta-controller may not choose between two sub-policies based on a manually specified function of the state, such as whether the agent has a certain item in its inventory. Similarly, additional rewards for approaching tree-like objects are not permitted, but rewards for encountering novel states (“curiosity rewards”) are permitted.
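As an illustration only, the sketch below contrasts the two cases as Gym-style wrappers, assuming a MineRL-like environment whose observations are dicts with a “pov” image and an “inventory” dict; the wrapper names and the bonus scale are hypothetical, not part of the rules.

    import gym
    import numpy as np

    class HandShapedRewardWrapper(gym.Wrapper):
        """NOT permitted: reward shaping from a hard-coded, domain-specific state check."""
        def step(self, action):
            obs, reward, done, info = self.env.step(action)
            if obs["inventory"].get("log", 0) > 0:  # manual inventory check -> rule violation
                reward += 1.0
            return obs, reward, done, info

    class CountBasedCuriosityWrapper(gym.Wrapper):
        """Permitted: a generic novelty ("curiosity") bonus that encodes no domain knowledge."""
        def __init__(self, env, bonus_scale=0.01):
            super().__init__(env)
            self.bonus_scale = bonus_scale
            self.visit_counts = {}
        def step(self, action):
            obs, reward, done, info = self.env.step(action)
            # Coarsely downsample the image observation and count visits to each hashed state.
            key = hash(np.asarray(obs["pov"])[::16, ::16].tobytes())
            self.visit_counts[key] = self.visit_counts.get(key, 0) + 1
            reward += self.bonus_scale / np.sqrt(self.visit_counts[key])
            return obs, reward, done, info

The first wrapper injects domain knowledge through a hand-picked inventory check and would be disqualifying; the second rewards novel states in general, which is the kind of intrinsic bonus the rule allows.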
- Participants may only use the provided dataset; no additional datasets may be included in the source-file submissions, nor may any be downloaded during training or evaluation. During evaluation of submitted code, the individual containers will not have access to any external network, to avoid any information leak. All submitted code repositories will be scrubbed to remove any files larger than 30 MB, to ensure participants are not checking in model weights pre-trained on the released training dataset. While the container running the submitted code will not have external network access, relevant exceptions are added to ensure participants can download and use the pre-trained models included in popular frameworks such as PyTorch and TensorFlow. Participants can request network exceptions for other publicly available pre-trained models, which will be validated by AICrowd on a case-by-case basis.
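For example, usage of the following kind falls under the framework exception (a minimal sketch; the checkpoint filename is hypothetical):

    import torch
    import torchvision.models as models

    # Permitted: generic pre-trained weights shipped with a popular framework,
    # fetched through a whitelisted network exception during training/evaluation.
    backbone = models.resnet18(pretrained=True)

    # Not permitted: shipping or loading weights pre-trained on the MineRL dataset,
    # e.g. a large checkpoint checked into the repository (files over 30 MB are scrubbed).
    # policy = torch.load("minerl_pretrained_policy.pt")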
- During Round 1, participants may submit their trained models for evaluation at most 20 times and will receive the performance of each submission. At the end of Round 1, participants must submit the source code used to train their models. This code must terminate within four days on the provided platform. For the participants with the highest evaluation scores, this code will be used to re-train their models from scratch (i.e., no information may be carried over from previous training through saved model weights or otherwise), and performance after re-training will determine standings at the end of Round 1. The top 10 participants will progress to Round 2.
- During Round 2, participants may submit their source code at most four times. After each submission, the model will be trained for four days on a re-rendered, private dataset and domain, and the participant will receive the final performance of their model. Final standings are based on each participant's best performance during the round.