
MASKD

Real Time Mask Detection


🛠 Contribute: Found a typo? Or any other change in the description that you would like to see? Please consider sending us a pull request in the public repo of the challenge here.

🕵️ Introduction

The goal of this challenge is to train object detection models capable of locating both masked and unmasked faces in an image. These detectors should be robust to noise and leave as little room as possible for false positives on masks, given the potentially dire consequences such errors could lead to. Ideally they should be fast enough to work well in real-world applications, something we hope to focus on in future rounds of the competition.

[Side-by-side video: Original Video vs. Sample Model Output]
Getting Started Code using MaskRCNN is here! 😄

💾 Dataset

The dataset is a combination of many sources and has been hand-labelled. It contains annotations for masked and unmasked faces.

Note: If you wish to contribute to the dataset, please follow these instructions or feel free to contact the challenge organizers.

 

Bonus

Check out this repository, which shows a live webcam demo (a real-life application) of a sample model in action!

๐Ÿ“ Files

This dataset contains:

  • train_images.zip: the Training Set of 679 RGB images.
  • train.json: the training annotations in MS COCO format.
  • val_images.zip: the suggested Validation Set of 120 RGB images.
  • val.json: the validation annotations in MS COCO format.
  • test_images.zip: the Test Set of 774 RGB images.
  • test.json: the metadata of the test images, including their filename, width, height, and image_id.
  • mask_video.mp4: the original video shown in the "Introduction" section of this page. Participants can use it to create their own sample output (refer to the code in the MaskRCNN baseline).

The two annotation files, train.json and val.json, contain annotations for images in the train_images/ and val_images/ folders, and follow the MS COCO annotation format:

annotation{
  'id': int,
  'image_id': int,
  'category_id': int,
  'segmentation': RLE or [polygon],
  'area': float,
  'bbox': [x, y, width, height],
  'iscrowd': 0 or 1,
}

categories[{
  'id': int,
  'name': str,
  'supercategory': str,
}]
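For reference, these annotation files can be loaded and inspected with the pycocotools COCO API. This is a minimal sketch, assuming train.json sits in your working directory (adjust paths to wherever you unzipped the data):

```python
from pycocotools.coco import COCO

# Load the training annotations (path is an assumption; adjust as needed).
coco = COCO("train.json")

# List the categories, e.g. masked / unmasked faces.
cats = coco.loadCats(coco.getCatIds())
print([c["name"] for c in cats])

# Fetch all annotations for the first training image.
img_id = coco.getImgIds()[0]
img = coco.loadImgs(img_id)[0]
anns = coco.loadAnns(coco.getAnnIds(imgIds=img_id))
print(img["file_name"], "has", len(anns), "annotated faces")
for ann in anns:
    x, y, w, h = ann["bbox"]  # COCO boxes are [x, y, width, height]
    print(ann["category_id"], (x, y, w, h))
```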

🚀 Submission

  • Submission instructions: You will be required to submit your predictions as a JSON file that is in accordance with the MS COCO annotation format.

For detection with bounding boxes, please use the following format:

[{
  'image_id': int,
  'category_id': int,
  'bbox': [x, y, width, height],
  'score': float,
}]

Example:

[
  {'image_id': int, 'bbox': [float, float, float, float], 'score': float, 'category_id': int},
  {'image_id': int, 'bbox': [float, float, float, float], 'score': float, 'category_id': int},
  ...
]
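As an illustration, a submission file could be assembled like this. The sketch assumes a hypothetical model_predictions list with corner-format boxes, a stand-in for whatever your detector actually outputs:

```python
import json

# Hypothetical detector output: one entry per detection, boxes as [x1, y1, x2, y2].
model_predictions = [
    {"image_id": 1, "box_xyxy": [10.0, 20.0, 110.0, 220.0], "score": 0.97, "category_id": 1},
    {"image_id": 1, "box_xyxy": [50.0, 60.0, 90.0, 140.0], "score": 0.83, "category_id": 2},
]

submission = []
for p in model_predictions:
    x1, y1, x2, y2 = p["box_xyxy"]
    submission.append({
        "image_id": p["image_id"],
        "category_id": p["category_id"],
        # COCO expects [x, y, width, height], not corner coordinates.
        "bbox": [x1, y1, x2 - x1, y2 - y1],
        "score": p["score"],
    })

with open("predictions.json", "w") as f:
    json.dump(submission, f)
```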

🖊 Evaluation Criteria

IoU (Intersection Over Union)

IoU measures the overlap between the true region and the proposed region, as the ratio of the area of their intersection to the area of their union. We consider a proposal a true detection when there is at least half an overlap, i.e. when IoU > 0.5.
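Concretely, for COCO-style [x, y, width, height] boxes, IoU can be computed as follows. This is a minimal sketch for intuition, not the official evaluation code:

```python
def iou_xywh(box_a, box_b):
    """IoU of two boxes given as [x, y, width, height]."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Intersection rectangle (clamped to zero if the boxes do not overlap).
    ix1 = max(ax, bx)
    iy1 = max(ay, by)
    ix2 = min(ax + aw, bx + bw)
    iy2 = min(ay + ah, by + bh)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

# Example: two unit squares offset by half a width give IoU = 1/3.
print(iou_xywh([0, 0, 1, 1], [0.5, 0, 1, 1]))  # 0.333...
```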

Then we can define the following parameters:

Precision (IoU > 0.5) = TP / (TP + FP)

Recall (IoU > 0.5) = TP / (TP + FN)

where TP counts detections that match a ground-truth box with IoU > 0.5, FP counts detections with no such match, and FN counts ground-truth boxes left undetected.

The final scoring parameters, AP_{IoU > 0.5} and AR_{IoU > 0.5}, are computed by averaging the precision and recall values over all known annotations in the ground truth.
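These metrics can be reproduced locally with pycocotools. A minimal sketch, assuming val.json and predictions.json follow the annotation and submission formats above:

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("val.json")                     # ground-truth annotations
coco_dt = coco_gt.loadRes("predictions.json")  # predictions in the submission format

ev = COCOeval(coco_gt, coco_dt, iouType="bbox")
ev.evaluate()
ev.accumulate()
ev.summarize()  # prints the standard COCO table; stats[1] is AP at IoU = 0.50
print("AP@0.5 =", ev.stats[1])
```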

A further discussion about the evaluation metric can be found here.

The evaluation code can be found here.
