3 Followers
0 Following
matthew_howe

Location: AU

Badges: 0 / 0 / 0

Activity

[activity calendar heatmap, Mar through the following Mar]

Ratings Progression


Challenge Categories


Challenges Entered

Latest submissions

  • failed: 181166
  • graded: 178752
  • failed: 178750

Participant / Rating

  • stefan_podgorski: 0
  • james_bockman: 0
  • SamBahrami: 0

Teams: matthew_howe has not joined any teams yet.

Learn-to-Race: Autonomous Racing Virtual Challenge

Ground truth segmentation image in training phase seems to be invalid

Almost 2 years ago

Unfortunately, having the agent run locally doesn’t mean it will run on the challenge server (see my 20+ failed debugging submissions over the last month). Keep in mind that a significant number of files are overwritten for the test phase, so changes to some files don’t take effect during testing. I’m not sure what issue you are encountering; sorry I can’t be of any more help.

Ground truth segmentation image in training phase seems to be invalid

Almost 2 years ago

Check your active sensors in config.py. As far as I know the order matters, but I would double-check that assumption, since the observations come back as a list. Because we aren’t given a dictionary it’s difficult to be certain, but when I previously checked the input sizes they seemed fine.
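
A rough sketch of the kind of check I mean (the camera names and the obs layout here are assumptions for illustration, not the actual Learn-to-Race config or API):

```python
# Print shape/dtype of each entry in a list-style observation so it can be
# matched back to the sensor order in config.py.
# NOTE: ACTIVE_CAMERAS and the obs layout are assumptions, not the real config.
import numpy as np

ACTIVE_CAMERAS = ["CameraFront", "CameraLeft", "CameraRight"]  # hypothetical names

def inspect_observation(obs):
    for i, item in enumerate(obs):
        name = ACTIVE_CAMERAS[i] if i < len(ACTIVE_CAMERAS) else f"unknown_{i}"
        arr = np.asarray(item)
        print(f"obs[{i}] -> {name}: shape={arr.shape}, dtype={arr.dtype}")

# example: inspect_observation([np.zeros((384, 512, 3)), np.zeros((800, 800, 3))])
```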

Success rate is always 0.0 during evaluation

Almost 2 years ago

I’m not sure what your problem is, but I’ll add that the unmodified SAC agent doesn’t get around any of the tracks as far as I know; in our experiments it doesn’t work at all. I would say the issue is the actual agent you’re submitting. To debug my own issues I print a lot from inside the debugger to check what’s going on in the sim, e.g. whether the car is even moving and whether images are being processed.
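
A rough sketch of the kind of per-step printing I mean (the arguments are placeholders; wire them up to whatever state and frames your agent actually receives):

```python
# Per-step debug logging: is the car moving, and are camera frames non-empty?
# The arguments are placeholders, not the official agent interface.
import numpy as np

def debug_step(step_idx, frame, pose_prev, pose_curr):
    moved = float(np.linalg.norm(np.asarray(pose_curr) - np.asarray(pose_prev)))
    frame = np.asarray(frame)
    print(f"step {step_idx}: moved {moved:.3f} m, "
          f"frame shape {frame.shape}, frame mean {frame.mean():.1f}")

# example: debug_step(0, np.zeros((384, 512, 3)), (0.0, 0.0), (0.1, 0.0))
```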

Multi-camera evaluation has shape 800x800

About 2 years ago

I am trying to submit a multi-camera evaluation and get an error when I stack my images to run them through my model. The images I get at test time for the left and right cameras are 800x800, while the front camera is 512x384. As far as I know we do not set these resolutions; they are set on the simulation side.
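
In case it helps anyone hitting the same thing, one possible workaround is to resize the side views to the front camera’s resolution before stacking. This is purely a sketch under that assumption; whether downscaling is acceptable for your model is another question.

```python
# Resize the 800x800 side views to the front camera's spatial size before stacking.
# Sketch only: assumes HxWxC uint8 images; adjust to your own preprocessing.
import cv2
import numpy as np

def stack_cameras(front, left, right):
    h, w = front.shape[:2]
    left_r = cv2.resize(left, (w, h))    # cv2.resize expects (width, height)
    right_r = cv2.resize(right, (w, h))
    return np.concatenate([front, left_r, right_r], axis=-1)

# example:
# stack_cameras(np.zeros((384, 512, 3), np.uint8),
#               np.zeros((800, 800, 3), np.uint8),
#               np.zeros((800, 800, 3), np.uint8)).shape  # -> (384, 512, 9)
```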

[Round 2] Launch - Expected Date

About 2 years ago

I think we were perhaps not meant to be able to submit before, but could, so they have blocked submissions for now. If your submission says "Submission failed: Failed to communicate with the grader", then the problem isn’t on your side.

Information available during training in round 2

About 2 years ago

Sorry, I meant during the training step (the 1 hr practice session).

Information available during training in round 2

About 2 years ago

What ground truth information is available in round two?

  • Full ground truth segmentation?
  • Additional cameras?
  • All multimodal information?

Observation delay limits processing to maximum of 10FPS

About 2 years ago

Are we meant to be able to change this? I can run my code locally at around 30 FPS, while on the server it runs at around 5-6 FPS because the obs_delay parameter in config.py is overwritten to 0.1 s. This 0.1 s delay effectively imposes a hard limit: no one’s code can run faster than 10 FPS.

Is this an intentional limitation, to keep the available processing budget similar to what the real-world car can actually achieve?
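
The arithmetic behind that 10 FPS figure, plus a trivial way to see the effect locally (the timing loop is illustrative, not the actual env interface):

```python
# With obs_delay = 0.1 s, each step blocks for at least 0.1 s,
# so the step loop cannot exceed 1 / 0.1 = 10 FPS.
import time

OBS_DELAY_S = 0.1   # value config.py is overwritten to on the server, per this post
print(f"max achievable rate: {1.0 / OBS_DELAY_S:.0f} FPS")   # -> 10 FPS

# illustrative measurement of an obs_delay-bound loop
n_steps = 50
t0 = time.perf_counter()
for _ in range(n_steps):
    time.sleep(OBS_DELAY_S)       # stand-in for a step blocked by obs_delay
print(f"measured: {n_steps / (time.perf_counter() - t0):.1f} FPS")
```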

KeyError: 'success_rate'

About 2 years ago

@jyotish I don’t think ignoring the absence of the key is going to do any good.

Let’s say the success rate is meant to be 100%, which is the case for my agent and shows in my own logs; if the key is missing and gets replaced with 0, the run is reported with an incorrect score.

This issue is likely why not a single vehicle shows 100% track completion on the leaderboard.

I’m currently trying to figure out what the issue could be but I’m afraid I don’t have access to the files responsible for this.
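
To illustrate what I mean about the missing key being masked (the grader code isn’t visible to us, so this is a guess at the pattern rather than the actual implementation):

```python
# Sketch of the failure mode: silently defaulting a missing metric to 0
# produces a wrong score instead of surfacing the bug.
metrics = {"pct_complete": 100.0}          # 'success_rate' absent for whatever reason

success = metrics.get("success_rate", 0)   # -> 0: looks like the agent failed

# Failing loudly would make the real problem visible instead:
try:
    success = metrics["success_rate"]
except KeyError as err:
    raise RuntimeError("grader metrics are missing 'success_rate'") from err
```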
