0 Followers
0 Following
OG_SouL
Swayambodha Mohapatra

Organization: PwC US Advisory

Location: Mumbai, IN

Badges: 1 / 0 / 0

Activity

[Activity heatmap: Dec–Dec calendar, Mon/Wed/Fri rows]

Ratings Progression

[Chart not loaded]

Challenge Categories

[Chart not loaded]

Challenges Entered

Predicting smell of molecular compounds

Latest submissions

No submissions made in this challenge.


Latest submissions

graded 67712
failed 67711
graded 67699
OG_SouL has not joined any teams yet...

ImageCLEF 2020 DrawnUI

Exploit like score - 0.998!?

Over 4 years ago

Thanks @dimitri.fichou for the clarification. Since we have not tagged any of our submissions as a primary run, can you please confirm which one would be considered for deciding the leaderboard (according to the new evaluation script)?

Also, it would be great if we could see the scores of other participants’ submissions; at the moment we’re only able to see our own. @picekl, can you help us out with this?

Thanks.

Exploit like score - 0.998!?

Over 4 years ago

Hi @picekl, if you look at our first submission, we got a good overall_precision score but a lower overall_recall score according to the evaluation metric. We therefore improved our model to get a better overall_recall score, which we achieved with our subsequent submissions.

But by the time we submitted, the ‘overall_recall’ column had been removed. After getting confirmation from @dimitri.fichou via mail that the overall_precision score is going to be the sole evaluation metric for the competition, we re-trained our model to improve our precision scores. This is reflected in our last two submissions.

We believe that even if the evaluation metric is modified to consider either the F1 score or mAP (over IoU > 0.5), two of our submissions would excel there, as they were trained specifically to improve those measures.
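For concreteness, here is a rough sketch of how an F1 score over an IoU > 0.5 matching criterion might be computed. This is only an illustration, not the official evaluation script; the (w, h, x, y) box layout is read off the WxH+X+Y sample format, and the greedy one-to-one matching is our own assumption:

```python
# Illustrative only (not the official evaluation script).
# Boxes are (w, h, x, y), read off the WxH+X+Y sample format.

def iou(a, b):
    """Intersection-over-union of two (w, h, x, y) boxes."""
    wa, ha, xa, ya = a
    wb, hb, xb, yb = b
    iw = max(0, min(xa + wa, xb + wb) - max(xa, xb))
    ih = max(0, min(ya + ha, yb + hb) - max(ya, yb))
    inter = iw * ih
    union = wa * ha + wb * hb - inter
    return inter / union if union else 0.0

def f1_at_iou(preds, gts, thresh=0.5):
    """F1 with greedy one-to-one matching of predictions to ground truth."""
    used, tp = set(), 0
    for p in preds:
        for i, g in enumerate(gts):
            if i not in used and iou(p, g) > thresh:
                used.add(i)
                tp += 1
                break
    precision = tp / len(preds) if preds else 0.0
    recall = tp / len(gts) if gts else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```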

@dimitri.fichou, it would be great if you could clarify what exactly the final evaluation metric will be. We’ll make another submission and tag it as the ‘Primary Run’.

No overall_recall in evaluation metric?

Over 4 years ago

It seems the overall_recall score has been removed from the evaluation metric. Is this an error, or is overall_accuracy going to be the sole metric used to determine the results?
With just a few days remaining before the challenge ends, could you please confirm exactly what the metric is going to be?

Apart from that, it would be great if we could have a leaderboard of sorts, so that we know exactly where we stand and can modify our approach accordingly.

Getting incomplete error message after submission

Over 4 years ago

Thank you.

I have a few other queries related to the submission format.

  1. In the Submission Instructions section, it’s mentioned that we should upload a plain ASCII text file, but the evaluation script seems to read a CSV file. Can you please confirm whether we need to upload a txt file or a csv file?
  2. In the sample submission row, the format given is: 1c3e1163fa864f9c.jpg;paragraph 0.8:190x135+410+474;button 0.95:99x60+265+745,0.6:85x50+434+819,0.6:89x50+614+739;image 0.7:259x135+379+305;container 0.8:614x925+179+95,0.8:549x229+219+689;
    Can you confirm whether a ‘;’ needs to be added at the end of every row? The error we’re getting seems related to that (see the sketch after this list).
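For reference, here is a rough sketch of a row writer that reproduces the sample format above, trailing ‘;’ included. It reflects only our reading of the sample row; format_row is a hypothetical helper, not something from the task’s starter kit:

```python
# Rough sketch, assuming the row layout from the sample above:
# <image>;<label> <conf>:<W>x<H>+<X>+<Y>[,<conf>:<W>x<H>+<X>+<Y>...];...;
def format_row(image_name, detections):
    """detections: {label: [(conf, w, h, x, y), ...]} -- hypothetical input shape."""
    parts = [image_name]
    for label, boxes in detections.items():
        # The label is written once, followed by comma-separated boxes,
        # exactly as in "button 0.95:...,0.6:...,0.6:..." in the sample.
        boxes_str = ",".join(f"{conf}:{w}x{h}+{x}+{y}" for conf, w, h, x, y in boxes)
        parts.append(f"{label} {boxes_str}")
    return ";".join(parts) + ";"  # trailing ';' because the sample row ends with one

print(format_row("1c3e1163fa864f9c.jpg",
                 {"paragraph": [(0.8, 190, 135, 410, 474)],
                  "image": [(0.7, 259, 135, 379, 305)]}))
# -> 1c3e1163fa864f9c.jpg;paragraph 0.8:190x135+410+474;image 0.7:259x135+379+305;
```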

Getting incomplete error message after submission

Over 4 years ago

Made the submission in plain ASCII text format and got this error: “Error: Incorrect localisation format (Line nbr 1). The format should be …”.
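In case it helps anyone else, below is a quick local sanity check we could run before submitting. The pattern is inferred from the sample row, not taken from the official checker, so treat it as an assumption (submission.txt is a placeholder filename):

```python
import re

# Pattern inferred from the sample row -- an assumption, not the official
# checker: <image>.jpg;<label> <conf>:<W>x<H>+<X>+<Y>[,<conf>:...];...;
LOC = r"\d*\.?\d+:\d+x\d+\+\d+\+\d+"           # e.g. 0.8:190x135+410+474
GROUP = rf"[a-z]+ {LOC}(?:,{LOC})*"            # e.g. button 0.95:...,0.6:...
ROW = re.compile(rf"\S+\.jpg(?:;{GROUP})+;?")  # trailing ';' treated as optional

with open("submission.txt") as f:              # placeholder filename
    for nbr, line in enumerate(f, 1):
        if not ROW.fullmatch(line.strip()):
            print(f"Line nbr {nbr}: incorrect localisation format")
```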

OG_SouL has not provided any information yet.