2 Followers
0 Following
lachlan_mares

Organization

University of Adelaide

Location

AU


Activity

[Contribution heatmap, April to April]

Ratings Progression

[Chart not loaded]

Challenge Categories

[Chart not loaded]

Challenges Entered

Understand semantic segmentation and monocular depth estimation from downward-facing drone images

Latest submissions

No submissions made in this challenge.

Using AI For Building’s Energy Management

Latest submissions

No submissions made in this challenge.

Learn-to-Race: Autonomous Racing Virtual Challenge

Latest submissions

graded 178630
graded 178623
graded 178619

Perform semantic segmentation on aerial images from a monocular downward-facing drone

Latest submissions

No submissions made in this challenge.
Participant          Rating
stefan_podgorski     0
james_bockman        0

lachlan_mares has not joined any teams yet...

Learn-to-Race: Autonomous Racing Virtual Challenge

Ground truth segmentation image in training phase seems to be invalid

About 2 years ago

I would also check the number of images. For a single camera it should be 2, and you would expect the segmentation mask to be at index 1; it might pay to check whether 6 exist, as with the multi-camera setup.
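A minimal sanity check along these lines (a sketch only; the observation structure and names here are assumptions, not the actual Learn-to-Race API):

import numpy as np

def check_observation(images):
    # Single-camera setup: expect 2 entries, RGB at index 0 and the
    # segmentation mask at index 1. Six entries would suggest the
    # multi-camera configuration is active.
    print(f"number of images: {len(images)}")
    mask = np.asarray(images[1])
    # A valid mask should hold a small set of discrete class labels.
    print(f"mask shape: {mask.shape}, unique values: {np.unique(mask)}")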

Clarification on input sensors during evaluation

About 2 years ago

Any updates here? It would be good to know before round 2 starts.

Clarification on input sensors during evaluation

About 2 years ago

After reading this thread I am still unclear about the availability of the ground truth segmentation masks during the "1 Hour" training period for round 2. It is clear they will not be available during the evaluation period.

After the code change for using multiple cameras, this line in evaluator.py

self.check_for_allowed_sensors()

throws an exception when trying to add them to the sim environment.

Access to these masks is important for anyone using a segmentation model.

Clarification on input sensors during evaluation

About 2 years ago

Check that the sensors you want are enabled in the config.py file. See active_sensors; add the ones you want from the cameras dict in the Envconfig class.

class SimulatorConfig(object):
    racetrack = "Thruxton"
    active_sensors = [
        "CameraFrontRGB",
    ]
    driver_params = {
        "DriverAPIClass": "VApiUdp",
        "DriverAPI_UDP_SendAddress": "0.0.0.0",
    }
    camera_params = {
        "Format": "ColorBGR8",
        "FOVAngle": 90,
        "Width": 512,
        "Height": 384,
        "bAutoAdvertise": True,
    }
    vehicle_params = False
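For example, to also request a segmentation camera you would append its key to active_sensors. The name "CameraFrontSegm" below is illustrative only; use whichever key actually appears in the cameras dict of the Envconfig class:

active_sensors = [
    "CameraFrontRGB",
    "CameraFrontSegm",  # hypothetical key, check the cameras dict for the real name
]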

Hope this is helpful

Need your Inputs for improving competition

About 2 years ago

Is there a way to view or play back submitted evaluations? It would be a great asset to be able to view these so that irregular behavior can be diagnosed. I understand it cannot be done for round 2. I have noticed a large discrepancy between scores, performance, and agent behavior in a local simulator versus the evaluation results used for grading, even when reducing the frame rate to match the evaluation server.
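For anyone trying the frame-rate matching locally, one generic way to cap a simulation loop at a fixed rate (a sketch against a gym-style API; the 20 Hz target is an assumption, not a figure from this post):

import time

TARGET_HZ = 20  # assumed server rate, substitute the real value
STEP_BUDGET = 1.0 / TARGET_HZ

def run_throttled(env, agent, steps=1000):
    obs = env.reset()
    for _ in range(steps):
        start = time.perf_counter()
        obs, reward, done, info = env.step(agent.act(obs))
        if done:
            obs = env.reset()
        # Sleep off whatever remains of this step's time budget.
        time.sleep(max(0.0, STEP_BUDGET - (time.perf_counter() - start)))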

KeyError: 'success_rate'

About 2 years ago

Hi @jyotish,

Could you please have a look at this problem? This error occurred today for my submission. The agent most likely completed the entire course.

2022-02-04 07:46:05.823 | INFO | __main__:run_evaluation:81 - Starting evaluation on Thruxton racetrack
2022-02-04 07:46:09.866 | INFO | aicrowd_gym.clients.base_oracle_client:register_agent:210 - Registering agent with oracle…
2022-02-04 07:46:09.868 | SUCCESS | aicrowd_gym.clients.base_oracle_client:register_agent:226 - Registered agent with oracle
/home/miniconda/lib/python3.9/site-packages/numpy/core/fromnumeric.py:3440: RuntimeWarning: Mean of empty slice.
  return _methods._mean(a, axis=axis, dtype=dtype,
/home/miniconda/lib/python3.9/site-packages/numpy/core/_methods.py:189: RuntimeWarning: invalid value encountered in double_scalars
  ret = ret.dtype.type(ret / rcount)
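For context, the two numpy warnings above are exactly what np.mean emits when given an empty array, and the result is nan; whether that nan is what ultimately caused the missing success_rate key is an assumption, but the warnings can be reproduced in isolation:

import numpy as np

# An empty slice produces "Mean of empty slice" plus the invalid-value
# warning (0/0), and the result is nan rather than a number.
result = np.mean(np.array([]))
print(result)  # nan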
