
Face Recognition

Solution for submission 175683

A detailed solution for submission 175683 submitted for challenge Face Recognition

jakub_bartczuk

Starter Code for Face Recognition

The original baseline compares the missing person image to each of the target faces using basic Mean Squared Error and takes the closest match as the prediction. This solution keeps that idea but swaps the raw pixel comparison for embeddings from a pretrained face recognition model.
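As a rough sketch of that pixel-level baseline, here is a hypothetical helper pair (the names mse and closest_face are for illustration only and are not used elsewhere in this notebook):

In [ ]:
import numpy as np

def mse(a, b):
    # Mean squared error between two images of the same shape
    return float(np.mean((a.astype(np.float32) - b.astype(np.float32)) ** 2))

def closest_face(missing, faces):
    # Index of the candidate face whose pixels are closest to the missing-person image;
    # `missing` is an HxWx3 array and `faces` a list of arrays of the same shape
    return int(np.argmin([mse(missing, face) for face in faces]))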

Downloading Dataset

Installing aicrowd-cli to download the puzzle dataset

In [1]:
!pip install aicrowd-cli

# Make sure to re-run the code below whenever you restart the Colab notebook
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: aicrowd-cli in /home/kuba/.local/lib/python3.8/site-packages (0.1.7)
Requirement already satisfied: click<8,>=7.1.2 in /home/kuba/.local/lib/python3.8/site-packages (from aicrowd-cli) (7.1.2)
Requirement already satisfied: tqdm<5,>=4.56.0 in /home/kuba/.local/lib/python3.8/site-packages (from aicrowd-cli) (4.56.0)
Requirement already satisfied: toml<1,>=0.10.2 in /home/kuba/.local/lib/python3.8/site-packages (from aicrowd-cli) (0.10.2)
Requirement already satisfied: rich<11,>=10.0.0 in /home/kuba/.local/lib/python3.8/site-packages (from aicrowd-cli) (10.16.2)
Requirement already satisfied: requests-toolbelt<1,>=0.9.1 in /home/kuba/.local/lib/python3.8/site-packages (from aicrowd-cli) (0.9.1)
Requirement already satisfied: requests<3,>=2.25.1 in /home/kuba/.local/lib/python3.8/site-packages (from aicrowd-cli) (2.25.1)
Requirement already satisfied: gitpython<4,>=3.1.12 in /home/kuba/.local/lib/python3.8/site-packages (from aicrowd-cli) (3.1.12)
Requirement already satisfied: commonmark<0.10.0,>=0.9.0 in /home/kuba/.local/lib/python3.8/site-packages (from rich<11,>=10.0.0->aicrowd-cli) (0.9.1)
Requirement already satisfied: colorama<0.5.0,>=0.4.0 in /usr/lib/python3/dist-packages (from rich<11,>=10.0.0->aicrowd-cli) (0.4.3)
Requirement already satisfied: pygments<3.0.0,>=2.6.0 in /home/kuba/.local/lib/python3.8/site-packages (from rich<11,>=10.0.0->aicrowd-cli) (2.7.3)
Requirement already satisfied: certifi>=2017.4.17 in /usr/lib/python3/dist-packages (from requests<3,>=2.25.1->aicrowd-cli) (2019.11.28)
Requirement already satisfied: chardet<5,>=3.0.2 in /usr/lib/python3/dist-packages (from requests<3,>=2.25.1->aicrowd-cli) (3.0.4)
Requirement already satisfied: idna<3,>=2.5 in /usr/lib/python3/dist-packages (from requests<3,>=2.25.1->aicrowd-cli) (2.8)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/lib/python3/dist-packages (from requests<3,>=2.25.1->aicrowd-cli) (1.25.8)
Requirement already satisfied: gitdb<5,>=4.0.1 in /home/kuba/.local/lib/python3.8/site-packages (from gitpython<4,>=3.1.12->aicrowd-cli) (4.0.5)
Requirement already satisfied: smmap<4,>=3.0.1 in /home/kuba/.local/lib/python3.8/site-packages (from gitdb<5,>=4.0.1->gitpython<4,>=3.1.12->aicrowd-cli) (3.0.4)

Creating a new data directory and downloading the dataset

!rm -rf data
!mkdir data
%aicrowd ds dl -c face-recognition -o data

%%bash
cd data
unzip data.zip

In [2]:
!ls data/
data.zip  missing  sample_submission.csv  target

Unzipping the data

!unzip data/data.zip -d data > /dev/null

Importing Libraries

In [3]:
import pandas as pd
import os
import numpy as np
import random
from tqdm.notebook import tqdm
import cv2
import torch

random.seed(42)

# iresnet backbone used by the (commented-out) ElasticFace experiments below
from backbones import iresnet
import torch

In [4]:
#elasticface_model = iresnet.iresnet100()
In [5]:
#elasticface_model.load_state_dict(torch.load("elasticface_cos.pth"))
In [6]:
!mv /home/kuba/Downloads/elasticface_cos.pth .
mv: cannot stat '/home/kuba/Downloads/elasticface_cos.pth': No such file or directory

Reading Dataset

In [7]:
# Getting all image ids from a folder

image_ids = os.listdir("data/missing")
len(image_ids)
Out[7]:
1000
In [8]:
!ls data/
data.zip  missing  sample_submission.csv  target
In [9]:
import matplotlib.pyplot as plt
In [10]:
import pickle
In [11]:
# Reading a sample missing person image


sample_image_id = random.choice(image_ids)
sample_image_id = '7kuz4.jpg'

sample_missing = cv2.imread(os.path.join("data/missing", sample_image_id))[:,:,[2,1,0]]
plt.imshow(sample_missing)
Out[11]:
<matplotlib.image.AxesImage at 0x7fe5ec8e7cd0>
In [12]:
from skimage import filters
In [13]:
sample_image_id = '7kuz4.jpg'

sample_missing = cv2.imread(os.path.join("data/missing", image_ids[20]))[:,:,[2,1,0]]
plt.imshow(sample_missing)
Out[13]:
<matplotlib.image.AxesImage at 0x7fe43ca84ca0>
In [14]:
def unwatermark_image(img, threshold=0.6):
    # Work on a float copy in [0, 1]
    img = img.copy() / 255
    # Watermark pixels: channel 0 (red) above the threshold, other channels below the image mean
    pos_mask = img[:,:,0] > threshold
    neg_mask = (img[:,:,1:] < img.mean()).min(axis=-1)
    # Replace the watermark pixels with a heavily blurred version of the image
    blurred_img = filters.gaussian(img, 50)
    img[pos_mask & neg_mask] = blurred_img[pos_mask & neg_mask]
    return img
In [15]:
plt.imshow(unwatermark_image(sample_missing))
<ipython-input-14-64ae217bb57a>:5: RuntimeWarning: Images with dimensions (M, N, 3) are interpreted as 2D+RGB by default. Use `multichannel=False` to interpret as 3D image with last dimension of length 3.
  blurred_img = filters.gaussian(img, 50)
Out[15]:
<matplotlib.image.AxesImage at 0x7fe43c3e7b20>
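The RuntimeWarning above comes from applying filters.gaussian to an RGB image without saying how the colour axis should be treated. It can be silenced by making the blur call inside unwatermark_image explicit; the exact argument depends on your scikit-image version (multichannel=True in older releases, channel_axis=-1 in newer ones), so treat the snippet below as a sketch to check against your installed version.

In [ ]:
import numpy as np
from skimage import filters

img = np.random.rand(216, 216, 3)  # placeholder image, just for illustration
# Blur each colour channel independently; newer scikit-image uses channel_axis=-1 instead
blurred = filters.gaussian(img, sigma=50, multichannel=True)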
In [16]:
# Reading the corresponding target faces

sample_target = cv2.imread(os.path.join("data/target", sample_image_id))[:,:,[2,1,0]]
plt.imshow(cv2.resize(sample_target, (512, 512)))
Out[16]:
<matplotlib.image.AxesImage at 0x7fe43ca04970>
In [17]:
# We can also split all the faces in the target image to convert them into individual faces images

sample_target_faces = []


def get_target_face(face_no, target_image):
  # The two digits of face_no give the row and column of the face in the 10x10 grid,
  # i.e. the top-left coordinates of that face in the target image
  x, y = int(face_no[0])*216, int(face_no[1])*216
  target_face = target_image[x:x+216, y:y+216]

  return target_face
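So face "96" is the 216×216 crop at grid row 9, column 6. A quick check, assuming sample_target from the cells above is loaded:

In [ ]:
import numpy as np

# Face "96" should equal the tile at grid row 9, column 6 of the target image
assert np.array_equal(get_target_face("96", sample_target),
                      sample_target[9 * 216:10 * 216, 6 * 216:7 * 216])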
In [18]:
# Showing a sample face from a sample target image 

sample_target_face = get_target_face("96", sample_target)
plt.imshow(sample_target_face)
Out[18]:
<matplotlib.image.AxesImage at 0x7fe43c9650d0>

Generating Predictions

In [19]:
from PIL import Image
In [20]:
type(Image.fromarray(sample_target_face))
Out[20]:
PIL.Image.Image
In [21]:
from mlutil.feature_extraction import images as image_feature_extraction
In [22]:
from torchvision import models
import torch
from torch import nn
from torchvision import transforms
from facenet_pytorch import MTCNN, InceptionResnetV1
from PIL import Image
from torchvision import datasets
from sklearn import metrics
from skimage import util
import torch.nn.functional as F
from skimage import data
from skimage.feature import Cascade

from torch.utils import data

import matplotlib.pyplot as plt
from matplotlib import patches

import skimage.io

resnet = InceptionResnetV1(pretrained='vggface2').eval()
resnet_children = list(resnet.children())

In [23]:
# For a model pretrained on VGGFace2
model = InceptionResnetV1(pretrained='vggface2').eval()
In [24]:
truncated_model = nn.Sequential(*list(model.children())[:-4])
In [25]:
truncated_model.eval();
In [26]:
truncated_model
Out[26]:
Sequential(
  (0): BasicConv2d(
    (conv): Conv2d(3, 32, kernel_size=(3, 3), stride=(2, 2), bias=False)
    (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU()
  )
  (1): BasicConv2d(
    (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU()
  )
  (2): BasicConv2d(
    (conv): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU()
  )
  (3): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
  (4): BasicConv2d(
    (conv): Conv2d(64, 80, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(80, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU()
  )
  (5): BasicConv2d(
    (conv): Conv2d(80, 192, kernel_size=(3, 3), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU()
  )
  (6): BasicConv2d(
    (conv): Conv2d(192, 256, kernel_size=(3, 3), stride=(2, 2), bias=False)
    (bn): BatchNorm2d(256, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU()
  )
  (7): Sequential(
    (0): Block35(
      (branch0): BasicConv2d(
        (conv): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU()
      )
      (branch1): Sequential(
        (0): BasicConv2d(
          (conv): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
        (1): BasicConv2d(
          (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
      )
      (branch2): Sequential(
        (0): BasicConv2d(
          (conv): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
        (1): BasicConv2d(
          (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
        (2): BasicConv2d(
          (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
      )
      (conv2d): Conv2d(96, 256, kernel_size=(1, 1), stride=(1, 1))
      (relu): ReLU()
    )
    (1): Block35(
      (branch0): BasicConv2d(
        (conv): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU()
      )
      (branch1): Sequential(
        (0): BasicConv2d(
          (conv): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
        (1): BasicConv2d(
          (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
      )
      (branch2): Sequential(
        (0): BasicConv2d(
          (conv): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
        (1): BasicConv2d(
          (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
        (2): BasicConv2d(
          (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
      )
      (conv2d): Conv2d(96, 256, kernel_size=(1, 1), stride=(1, 1))
      (relu): ReLU()
    )
    (2): Block35(
      (branch0): BasicConv2d(
        (conv): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU()
      )
      (branch1): Sequential(
        (0): BasicConv2d(
          (conv): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
        (1): BasicConv2d(
          (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
      )
      (branch2): Sequential(
        (0): BasicConv2d(
          (conv): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
        (1): BasicConv2d(
          (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
        (2): BasicConv2d(
          (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
      )
      (conv2d): Conv2d(96, 256, kernel_size=(1, 1), stride=(1, 1))
      (relu): ReLU()
    )
    (3): Block35(
      (branch0): BasicConv2d(
        (conv): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU()
      )
      (branch1): Sequential(
        (0): BasicConv2d(
          (conv): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
        (1): BasicConv2d(
          (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
      )
      (branch2): Sequential(
        (0): BasicConv2d(
          (conv): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
        (1): BasicConv2d(
          (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
        (2): BasicConv2d(
          (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
      )
      (conv2d): Conv2d(96, 256, kernel_size=(1, 1), stride=(1, 1))
      (relu): ReLU()
    )
    (4): Block35(
      (branch0): BasicConv2d(
        (conv): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU()
      )
      (branch1): Sequential(
        (0): BasicConv2d(
          (conv): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
        (1): BasicConv2d(
          (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
      )
      (branch2): Sequential(
        (0): BasicConv2d(
          (conv): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
        (1): BasicConv2d(
          (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
        (2): BasicConv2d(
          (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
      )
      (conv2d): Conv2d(96, 256, kernel_size=(1, 1), stride=(1, 1))
      (relu): ReLU()
    )
  )
  (8): Mixed_6a(
    (branch0): BasicConv2d(
      (conv): Conv2d(256, 384, kernel_size=(3, 3), stride=(2, 2), bias=False)
      (bn): BatchNorm2d(384, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
    )
    (branch1): Sequential(
      (0): BasicConv2d(
        (conv): Conv2d(256, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU()
      )
      (1): BasicConv2d(
        (conv): Conv2d(192, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU()
      )
      (2): BasicConv2d(
        (conv): Conv2d(192, 256, kernel_size=(3, 3), stride=(2, 2), bias=False)
        (bn): BatchNorm2d(256, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU()
      )
    )
    (branch2): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (9): Sequential(
    (0): Block17(
      (branch0): BasicConv2d(
        (conv): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU()
      )
      (branch1): Sequential(
        (0): BasicConv2d(
          (conv): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
        (1): BasicConv2d(
          (conv): Conv2d(128, 128, kernel_size=(1, 7), stride=(1, 1), padding=(0, 3), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
        (2): BasicConv2d(
          (conv): Conv2d(128, 128, kernel_size=(7, 1), stride=(1, 1), padding=(3, 0), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
      )
      (conv2d): Conv2d(256, 896, kernel_size=(1, 1), stride=(1, 1))
      (relu): ReLU()
    )
    (1): Block17(
      (branch0): BasicConv2d(
        (conv): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU()
      )
      (branch1): Sequential(
        (0): BasicConv2d(
          (conv): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
        (1): BasicConv2d(
          (conv): Conv2d(128, 128, kernel_size=(1, 7), stride=(1, 1), padding=(0, 3), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
        (2): BasicConv2d(
          (conv): Conv2d(128, 128, kernel_size=(7, 1), stride=(1, 1), padding=(3, 0), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
      )
      (conv2d): Conv2d(256, 896, kernel_size=(1, 1), stride=(1, 1))
      (relu): ReLU()
    )
    (2): Block17(
      (branch0): BasicConv2d(
        (conv): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU()
      )
      (branch1): Sequential(
        (0): BasicConv2d(
          (conv): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
        (1): BasicConv2d(
          (conv): Conv2d(128, 128, kernel_size=(1, 7), stride=(1, 1), padding=(0, 3), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
        (2): BasicConv2d(
          (conv): Conv2d(128, 128, kernel_size=(7, 1), stride=(1, 1), padding=(3, 0), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
      )
      (conv2d): Conv2d(256, 896, kernel_size=(1, 1), stride=(1, 1))
      (relu): ReLU()
    )
    (3): Block17(
      (branch0): BasicConv2d(
        (conv): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU()
      )
      (branch1): Sequential(
        (0): BasicConv2d(
          (conv): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
        (1): BasicConv2d(
          (conv): Conv2d(128, 128, kernel_size=(1, 7), stride=(1, 1), padding=(0, 3), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
        (2): BasicConv2d(
          (conv): Conv2d(128, 128, kernel_size=(7, 1), stride=(1, 1), padding=(3, 0), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
      )
      (conv2d): Conv2d(256, 896, kernel_size=(1, 1), stride=(1, 1))
      (relu): ReLU()
    )
    (4): Block17(
      (branch0): BasicConv2d(
        (conv): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU()
      )
      (branch1): Sequential(
        (0): BasicConv2d(
          (conv): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
        (1): BasicConv2d(
          (conv): Conv2d(128, 128, kernel_size=(1, 7), stride=(1, 1), padding=(0, 3), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
        (2): BasicConv2d(
          (conv): Conv2d(128, 128, kernel_size=(7, 1), stride=(1, 1), padding=(3, 0), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
      )
      (conv2d): Conv2d(256, 896, kernel_size=(1, 1), stride=(1, 1))
      (relu): ReLU()
    )
    (5): Block17(
      (branch0): BasicConv2d(
        (conv): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU()
      )
      (branch1): Sequential(
        (0): BasicConv2d(
          (conv): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
        (1): BasicConv2d(
          (conv): Conv2d(128, 128, kernel_size=(1, 7), stride=(1, 1), padding=(0, 3), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
        (2): BasicConv2d(
          (conv): Conv2d(128, 128, kernel_size=(7, 1), stride=(1, 1), padding=(3, 0), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
      )
      (conv2d): Conv2d(256, 896, kernel_size=(1, 1), stride=(1, 1))
      (relu): ReLU()
    )
    (6): Block17(
      (branch0): BasicConv2d(
        (conv): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU()
      )
      (branch1): Sequential(
        (0): BasicConv2d(
          (conv): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
        (1): BasicConv2d(
          (conv): Conv2d(128, 128, kernel_size=(1, 7), stride=(1, 1), padding=(0, 3), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
        (2): BasicConv2d(
          (conv): Conv2d(128, 128, kernel_size=(7, 1), stride=(1, 1), padding=(3, 0), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
      )
      (conv2d): Conv2d(256, 896, kernel_size=(1, 1), stride=(1, 1))
      (relu): ReLU()
    )
    (7): Block17(
      (branch0): BasicConv2d(
        (conv): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU()
      )
      (branch1): Sequential(
        (0): BasicConv2d(
          (conv): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
        (1): BasicConv2d(
          (conv): Conv2d(128, 128, kernel_size=(1, 7), stride=(1, 1), padding=(0, 3), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
        (2): BasicConv2d(
          (conv): Conv2d(128, 128, kernel_size=(7, 1), stride=(1, 1), padding=(3, 0), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
      )
      (conv2d): Conv2d(256, 896, kernel_size=(1, 1), stride=(1, 1))
      (relu): ReLU()
    )
    (8): Block17(
      (branch0): BasicConv2d(
        (conv): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU()
      )
      (branch1): Sequential(
        (0): BasicConv2d(
          (conv): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
        (1): BasicConv2d(
          (conv): Conv2d(128, 128, kernel_size=(1, 7), stride=(1, 1), padding=(0, 3), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
        (2): BasicConv2d(
          (conv): Conv2d(128, 128, kernel_size=(7, 1), stride=(1, 1), padding=(3, 0), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
      )
      (conv2d): Conv2d(256, 896, kernel_size=(1, 1), stride=(1, 1))
      (relu): ReLU()
    )
    (9): Block17(
      (branch0): BasicConv2d(
        (conv): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU()
      )
      (branch1): Sequential(
        (0): BasicConv2d(
          (conv): Conv2d(896, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
        (1): BasicConv2d(
          (conv): Conv2d(128, 128, kernel_size=(1, 7), stride=(1, 1), padding=(0, 3), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
        (2): BasicConv2d(
          (conv): Conv2d(128, 128, kernel_size=(7, 1), stride=(1, 1), padding=(3, 0), bias=False)
          (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
      )
      (conv2d): Conv2d(256, 896, kernel_size=(1, 1), stride=(1, 1))
      (relu): ReLU()
    )
  )
  (10): Mixed_7a(
    (branch0): Sequential(
      (0): BasicConv2d(
        (conv): Conv2d(896, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(256, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU()
      )
      (1): BasicConv2d(
        (conv): Conv2d(256, 384, kernel_size=(3, 3), stride=(2, 2), bias=False)
        (bn): BatchNorm2d(384, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU()
      )
    )
    (branch1): Sequential(
      (0): BasicConv2d(
        (conv): Conv2d(896, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(256, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU()
      )
      (1): BasicConv2d(
        (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), bias=False)
        (bn): BatchNorm2d(256, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU()
      )
    )
    (branch2): Sequential(
      (0): BasicConv2d(
        (conv): Conv2d(896, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(256, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU()
      )
      (1): BasicConv2d(
        (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn): BatchNorm2d(256, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU()
      )
      (2): BasicConv2d(
        (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), bias=False)
        (bn): BatchNorm2d(256, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU()
      )
    )
    (branch3): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (11): Sequential(
    (0): Block8(
      (branch0): BasicConv2d(
        (conv): Conv2d(1792, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU()
      )
      (branch1): Sequential(
        (0): BasicConv2d(
          (conv): Conv2d(1792, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
        (1): BasicConv2d(
          (conv): Conv2d(192, 192, kernel_size=(1, 3), stride=(1, 1), padding=(0, 1), bias=False)
          (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
        (2): BasicConv2d(
          (conv): Conv2d(192, 192, kernel_size=(3, 1), stride=(1, 1), padding=(1, 0), bias=False)
          (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
      )
      (conv2d): Conv2d(384, 1792, kernel_size=(1, 1), stride=(1, 1))
      (relu): ReLU()
    )
    (1): Block8(
      (branch0): BasicConv2d(
        (conv): Conv2d(1792, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU()
      )
      (branch1): Sequential(
        (0): BasicConv2d(
          (conv): Conv2d(1792, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
        (1): BasicConv2d(
          (conv): Conv2d(192, 192, kernel_size=(1, 3), stride=(1, 1), padding=(0, 1), bias=False)
          (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
        (2): BasicConv2d(
          (conv): Conv2d(192, 192, kernel_size=(3, 1), stride=(1, 1), padding=(1, 0), bias=False)
          (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
      )
      (conv2d): Conv2d(384, 1792, kernel_size=(1, 1), stride=(1, 1))
      (relu): ReLU()
    )
    (2): Block8(
      (branch0): BasicConv2d(
        (conv): Conv2d(1792, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU()
      )
      (branch1): Sequential(
        (0): BasicConv2d(
          (conv): Conv2d(1792, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
        (1): BasicConv2d(
          (conv): Conv2d(192, 192, kernel_size=(1, 3), stride=(1, 1), padding=(0, 1), bias=False)
          (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
        (2): BasicConv2d(
          (conv): Conv2d(192, 192, kernel_size=(3, 1), stride=(1, 1), padding=(1, 0), bias=False)
          (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
      )
      (conv2d): Conv2d(384, 1792, kernel_size=(1, 1), stride=(1, 1))
      (relu): ReLU()
    )
    (3): Block8(
      (branch0): BasicConv2d(
        (conv): Conv2d(1792, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU()
      )
      (branch1): Sequential(
        (0): BasicConv2d(
          (conv): Conv2d(1792, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
        (1): BasicConv2d(
          (conv): Conv2d(192, 192, kernel_size=(1, 3), stride=(1, 1), padding=(0, 1), bias=False)
          (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
        (2): BasicConv2d(
          (conv): Conv2d(192, 192, kernel_size=(3, 1), stride=(1, 1), padding=(1, 0), bias=False)
          (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
      )
      (conv2d): Conv2d(384, 1792, kernel_size=(1, 1), stride=(1, 1))
      (relu): ReLU()
    )
    (4): Block8(
      (branch0): BasicConv2d(
        (conv): Conv2d(1792, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU()
      )
      (branch1): Sequential(
        (0): BasicConv2d(
          (conv): Conv2d(1792, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
        (1): BasicConv2d(
          (conv): Conv2d(192, 192, kernel_size=(1, 3), stride=(1, 1), padding=(0, 1), bias=False)
          (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
        (2): BasicConv2d(
          (conv): Conv2d(192, 192, kernel_size=(3, 1), stride=(1, 1), padding=(1, 0), bias=False)
          (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
          (relu): ReLU()
        )
      )
      (conv2d): Conv2d(384, 1792, kernel_size=(1, 1), stride=(1, 1))
      (relu): ReLU()
    )
  )
  (12): Block8(
    (branch0): BasicConv2d(
      (conv): Conv2d(1792, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU()
    )
    (branch1): Sequential(
      (0): BasicConv2d(
        (conv): Conv2d(1792, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU()
      )
      (1): BasicConv2d(
        (conv): Conv2d(192, 192, kernel_size=(1, 3), stride=(1, 1), padding=(0, 1), bias=False)
        (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU()
      )
      (2): BasicConv2d(
        (conv): Conv2d(192, 192, kernel_size=(3, 1), stride=(1, 1), padding=(1, 0), bias=False)
        (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU()
      )
    )
    (conv2d): Conv2d(384, 1792, kernel_size=(1, 1), stride=(1, 1))
  )
  (13): AdaptiveAvgPool2d(output_size=1)
)
In [27]:
image_vectorizer = image_feature_extraction.TorchFeatureExtractor(
    truncated_model,
    appended_modules=[nn.Flatten()],
    normalize=transforms.Normalize(mean=[0.0, 0.0, 0.0], std=[1.0, 1.0, 1.0]),
    last_layer_index=None, use_gpu=True, last_nested_layer_index=None, img_size=160)
/home/kuba/.local/lib/python3.8/site-packages/torchvision/transforms/transforms.py:285: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
  warnings.warn("The use of the transforms.Scale transform is deprecated, " +
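If you don't have the mlutil package installed, a rough stand-in with the same interface might look like the sketch below. The class name and preprocessing details are assumptions inferred from how image_vectorizer is used in this notebook (process_img returns a [1, 3, 160, 160] tensor and .model maps it to a flat feature vector); mlutil's actual resizing and normalization may differ.

In [ ]:
import numpy as np
import torch
from torch import nn
from torchvision import transforms

class SimpleFeatureExtractor:
    # Hypothetical drop-in for mlutil's TorchFeatureExtractor, for illustration only
    def __init__(self, backbone, img_size=160, use_gpu=True):
        self.device = torch.device("cuda" if use_gpu and torch.cuda.is_available() else "cpu")
        # Backbone followed by a Flatten so outputs are [batch, features] matrices
        self.model = nn.Sequential(backbone, nn.Flatten()).to(self.device).eval()
        for p in self.model.parameters():
            p.requires_grad_(False)  # allows .cpu().numpy() on outputs without detach()
        self.transform = transforms.Compose([
            transforms.ToPILImage(),
            transforms.Resize((img_size, img_size)),
            transforms.ToTensor(),  # scales uint8 pixels to [0, 1]
        ])

    def process_img(self, img):
        # Accept float images in [0, 1] as well as uint8 arrays
        if img.dtype != np.uint8:
            img = (np.clip(img, 0, 1) * 255).astype(np.uint8)
        return self.transform(img).unsqueeze(0).to(self.device)

# image_vectorizer = SimpleFeatureExtractor(truncated_model)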
In [28]:
img_id = image_ids[0]
missing_image = cv2.imread(os.path.join("data/missing", img_id))[:,:,[2,1,0]]
In [29]:
img_id = image_ids[0]
target_image = cv2.imread(os.path.join("data/target", img_id))[:,:,[2,1,0]]
In [30]:
def get_imgs(target_image, preprocess=True):
    # Split the target image into its 10x10 grid of 216x216 face crops
    imgs = [
        target_image[i*216:(i+1)*216, j*216:(j+1)*216]
        for i in range(10)
        for j in range(10)
    ]
    if preprocess:
        return torch.vstack([image_vectorizer.process_img(img) for img in imgs])
    else:
        return imgs
In [31]:
imgs = get_imgs(target_image)
In [32]:
used_model = image_vectorizer.model
In [33]:
target_image = skimage.io.imread(os.path.join("data/target", image_ids[0]))
In [34]:
missing_image = unwatermark_image(skimage.io.imread(os.path.join("data/missing", img_id)))
<ipython-input-14-64ae217bb57a>:5: RuntimeWarning: Images with dimensions (M, N, 3) are interpreted as 2D+RGB by default. Use `multichannel=False` to interpret as 3D image with last dimension of length 3.
  blurred_img = filters.gaussian(img, 50)
In [35]:
missing_image = (unwatermark_image(skimage.io.imread(os.path.join("data/missing", img_id))) * 255).astype('uint8')
<ipython-input-14-64ae217bb57a>:5: RuntimeWarning: Images with dimensions (M, N, 3) are interpreted as 2D+RGB by default. Use `multichannel=False` to interpret as 3D image with last dimension of length 3.
  blurred_img = filters.gaussian(img, 50)
In [36]:
image_vectorizer.process_img(missing_image).shape
Out[36]:
torch.Size([1, 3, 160, 160])
In [37]:
used_model(image_vectorizer.process_img(missing_image)).shape
Out[37]:
torch.Size([1, 1792])

predictions = {"ImageID":[], "target":[]}

for img_id in tqdm(image_ids):

missing_image = (unwatermark_image(skimage.io.imread(os.path.join("data/missing", img_id))) * 255).astype('uint8')

missing_image_vector = used_model(image_vectorizer.process_img(missing_image)).cpu().numpy()
target_image = skimage.io.imread(os.path.join("data/target", img_id))
target_images_path = os.path.splitext(os.path.join("data/target", img_id))[0]
imgs = util.view_as_blocks(target_image, (216, 216, 3)).reshape(100, 216, 216, 3)
!mkdir -p $target_images_path
for i, img in enumerate(imgs):
    skimage.io.imsave(os.path.join(target_images_path, str(i) + '.jpg'), img)##np.array(img).transpose([1,2,0]))

target_vectors = used_model(torch_imgs).cpu().numpy()
# Face no with minimum MSE
#imgs = get_imgs(target_image)

#similarities = metrics.pairwise.pairwise_distances(missing_image_vector, target_vectors)
#closest_face_no = similarities[0].argmin()

#predictions['ImageID'].append(img_id.replace(".jpg", ""))
#predictions['target'].append(closest_face_no)
In [38]:
from skimage import data
# Load the trained file from the module root.
trained_file = data.lbp_frontal_face_cascade_filename()
In [39]:
import matplotlib.pyplot as plt
In [40]:
detector = Cascade(trained_file)
In [41]:
def crop_detected_face(img):
    # Search for faces between 30% and 90% of the image height
    min_size = int(img.shape[0] * 0.3)
    max_size = int(img.shape[0] * 0.9)
    detected = detector.detect_multi_scale(img=img,
                                           scale_factor=1.1,
                                           step_ratio=1,
                                           min_size=(min_size, min_size),
                                           max_size=(max_size, max_size))
    if len(detected) > 0:
        # Crop to the first detected face (rows span the height, columns the width)
        patch = detected[0]
        cropped = img[patch['r']:patch['r'] + patch['height'], patch['c']:patch['c'] + patch['width']]
        return cropped
    else:
        # No face detected: return the image unchanged
        return img
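A quick sanity check of the detector, assuming the cells above have been run so sample_missing, unwatermark_image, and detector are all in scope:

In [ ]:
# Detect and crop the face in the de-watermarked sample image, then display it
cropped_sample = crop_detected_face(unwatermark_image(sample_missing))
plt.imshow(cropped_sample)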
In [ ]:

In [42]:
def get_missing_and_target(image_ids, do_crop=True):
    missing_images_cropped = []
    target_images_cropped = []
    for img_id in tqdm(image_ids):
        raw_missing_image = unwatermark_image(skimage.io.imread(os.path.join("data/missing", img_id)))
        if do_crop:
            missing_image = crop_detected_face(raw_missing_image)
            # Only crop the target faces if the missing image was actually cropped
            image_is_cropped = missing_image.mean() != raw_missing_image.mean()
        else:
            missing_image = raw_missing_image
            image_is_cropped = False

        target_image = skimage.io.imread(os.path.join("data/target", img_id))
        imgs = [
            crop_detected_face(img) if image_is_cropped else img
            for img in 
            util.view_as_blocks(target_image, (216, 216, 3)).reshape(100, 216, 216, 3)
        ]
        torch_imgs = torch.cat([image_vectorizer.process_img(img) for img in imgs])

        missing_images_cropped.append(missing_image)
        target_images_cropped.append(torch_imgs.cpu())
    return missing_images_cropped, target_images_cropped
In [43]:
missing_images_cropped, target_images_cropped = get_missing_and_target(image_ids)
<ipython-input-14-64ae217bb57a>:5: RuntimeWarning: Images with dimensions (M, N, 3) are interpreted as 2D+RGB by default. Use `multichannel=False` to interpret as 3D image with last dimension of length 3.
  blurred_img = filters.gaussian(img, 50)
In [44]:
len(image_ids)
Out[44]:
1000
In [45]:
len(target_images_cropped)
Out[45]:
1000
In [46]:
def get_predictions(image_ids, missing_images_cropped, target_images_cropped, metric='cosine'):
    predictions = {"ImageID":[], "target":[]}

    for img_id, missing_image, torch_imgs in tqdm(zip(image_ids, missing_images_cropped, target_images_cropped), total=len(missing_images_cropped)): 
        missing_torch_img = image_vectorizer.process_img(missing_image)
        missing_image_vector = used_model(missing_torch_img).cpu().numpy()

        target_vectors = used_model(torch_imgs.cuda()).cpu().numpy()
        similarities = metrics.pairwise_distances(missing_image_vector, target_vectors, metric=metric)[0]
        closest_face_no = similarities.argmin()

        predictions['ImageID'].append(img_id.replace(".jpg", ""))
        predictions['target'].append(closest_face_no)
    return pd.DataFrame(predictions)
In [ ]:

In [47]:
i = 23
In [50]:
plt.imshow(missing_images_cropped[i])
Out[50]:
<matplotlib.image.AxesImage at 0x7fe46304ee80>
In [ ]:
submission = get_predictions(image_ids, missing_images_cropped, target_images_cropped, metric='cosine')
In [ ]:
found_target_img = target_images_cropped[i][submission['target'][i]].numpy().transpose([1,2,0])
In [ ]:
plt.imshow(found_target_img.astype('float'))
In [ ]:
submission.head()

Segmentation

In [70]:
from deepsense_vision.models.keypointrcnn.keypointrcnn_model import KeypointRCNNModel
from deepsense_vision.models.fasterrcnn.fasterrcnn_model import FasterRCNNModel
from deepsense_vision.models.maskrcnn.maskrcnn_model import MaskRCNNModel
import deepsense_vision
In [71]:
model = MaskRCNNModel()
model.load_from_zoo("coco")
model.to_gpu()
Loading from /home/kuba/.cache/deepsense_vision/maskrcnn-COCO.pt...
MaskRCNN loaded from /home/kuba/.cache/deepsense_vision/maskrcnn-COCO.pt successfully!
In [ ]:

In [72]:
prediction = model.predict_from_array((missing_images_cropped[i] * 255).astype("uint8"))
prediction.visualize()
Out[72]:

Saving the Predictions

Logging in to our AIcrowd account. Make sure you have accepted the puzzle rules before logging in!

%load_ext aicrowd.magic
%aicrowd login

In [ ]:
# Saving the predictions
!rm -rf submission/assets
!mkdir -p submission/assets
submission.to_csv(os.path.join("submission", "assets", "submission.csv"), index=False)
In [ ]:
%%bash
pushd submission
rm submission.zip
cp ../baseline-face-recognition-dc9e45f8-3e3f-4040-886b-b6482fa98245.ipynb notebook.ipynb
zip -r submission.zip *
popd
In [ ]:
!aicrowd submission create -c face-recognition -f submission/submission.zip

Submitting our Predictions

In [56]:
%aicrowd notebook submit -c face-recognition -a assets --no-verify
UsageError: Line magic function `%aicrowd` not found.

Congratulations on making your first submission to the puzzle 🎉. Let's continue the journey by improving the baseline and making more submissions! Don't be shy about asking questions on the discussion forum or the AIcrowd Discord server about any errors you hit or doubts about any part of this notebook; the AIcrew will be happy to help you :)

Have a cool new idea that you want to see in the next blitz? Let us know!

In [ ]: