1 Follower
0 Following
sigma_g

Organization

IIIT Hyderabad

Location

IN

Badges

2 / 0 / 0


Activity

[Activity heatmap for the past 12 months]

Ratings Progression


Challenge Categories


Challenges Entered

5 Problems 21 Days. Can you solve it all?

Latest submissions

graded 124481
graded 122093
graded 122080

5 Puzzles, 3 Weeks | Can you solve them all?

Latest submissions

No submissions made in this challenge.

Recognise Handwritten Digits

Latest submissions

graded 67522
graded 67521
failed 67519

5 Problems 15 Days. Can you solve it all?

Latest submissions

graded 66745
graded 65264

Latest submissions

No submissions made in this challenge.

5 Problems 3 Weeks. Can you solve them all?

Latest submissions

graded 80476
graded 80473
graded 80472

Latest submissions

No submissions made in this challenge.
Participant Rating
bhuvanesh_sridharan 0

AI Blitz #6

About the new datasets for WinPrediction

Over 3 years ago

Hi, I think it does not matter whether these game positions come from real human players, grandmasters, or even the TCEC. Given a board position and which side is to move, Stockfish 12+ gives a single, unambiguous evaluation: the evaluation assuming best play from both sides.
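
To make this concrete, here is a minimal sketch of how such an evaluation could be obtained with the python-chess library and a local Stockfish binary (the FEN string, engine path, and search depth are illustrative placeholders, not anything prescribed by the challenge):

```python
import chess
import chess.engine

# Placeholder position (after 1.e4 e5 2.Nf3 Nc6); in the challenge the FEN
# would come from the given board image instead.
board = chess.Board(
    "r1bqkbnr/pppp1ppp/2n5/4p3/4P3/5N2/PPPP1PPP/RNBQKB1R w KQkq - 2 3"
)

# "stockfish" should point at a local Stockfish binary; adjust as needed.
with chess.engine.SimpleEngine.popen_uci("stockfish") as engine:
    info = engine.analyse(board, chess.engine.Limit(depth=20))
    # info["score"] is a PovScore; .white() gives the evaluation from
    # White's point of view, either a centipawn score or a mate score.
    print(info["score"].white())
```

For a fixed position and a sufficiently deep search this evaluation is effectively deterministic, which is the unique answer I refer to below.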

Now, in such positions, when giving a win prediction, we have to assume best play from both sides. We cannot assume human play, because it is irregular: the same move could come from a 1200 Elo player or a 2100 Elo player, and we have no way to account for that. Even a 2100 Elo player can have a bad day and play 100 points below their usual performance rating.

Now that we have established that there is one unique answer, we come back to the position pictured above - and, similarly, to another position linked in this post - to point out that the dataset contains information that contradicts what Stockfish gives when evaluating the same position. And this is not rare: among the first 100 training samples we observed 20 with the opposite win prediction. Even if we assume our OCR is wrong on half of them, that still leaves a 10% error rate in the training dataset.

Moreover, another issue is that not all positions are a few moves before checkmate, as the problem statement on the main page claims. Several positions are already checkmated, where it makes no sense to specify whose turn it is. On the other hand, several positions are far from mate: as you can see in the linked post, the evaluation is a meagre ~+3. Yet any position genuinely near checkmate will certainly receive a ±Mx evaluation from Stockfish, meaning a forced mate in x moves for either White or Black.
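
To illustrate the point about ±Mx scores, here is a hedged sketch of how one could map a Stockfish score to a White-win prediction; the logistic scaling constant is a common rule of thumb, not something specified by the challenge:

```python
import math
import chess.engine

def white_win_probability(score: chess.engine.Score) -> float:
    """Map a Stockfish score (from White's point of view) to P(White wins)."""
    if score.is_mate():
        # +-Mx: forced mate for White if positive, for Black if negative.
        # Positions that are already checkmated (mate in 0) would need
        # separate handling, which is exactly the issue raised above.
        return 1.0 if score.mate() > 0 else 0.0
    cp = score.score()  # centipawn evaluation
    # Common logistic heuristic: about +300 cp maps to roughly 77%.
    return 1.0 / (1.0 + math.exp(-0.004 * cp))
```

Fed the score from the previous snippet, this makes the mismatch explicit: positions with a forced mate give predictions of essentially 0 or 1, while a position evaluated around +3 is a clear advantage but by no means a certain win.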

Let me know if any part is unclear and I will re-explain. But I hope that, if the dataset is revised once again, these issues are taken care of, because as it stands it is almost impossible to submit a better score if we follow standard chess evaluation methods.
