AI Blitz #8

F1 Speed Recognition using OCR+Regression Ensemble

g_mothy

Steps

  1. Use clustering to identify the type of speedometer.
  2. Whiten the pixels around the speed readout to localize the text of interest.
  3. Apply OCR to detect the text in the images.
  4. Train an image-regression model.
  5. Ensemble the OCR output with the image-regression output.

Getting Started Code for F1 Speed Recognition Challenge on AIcrowd

Step 1: Download Packages & Data

Download Necessary Packages 📚

In [1]:
!pip install --upgrade fastai
Collecting fastai
  Downloading fastai-2.3.1-py3-none-any.whl (194kB)
Collecting fastcore<1.4,>=1.3.8
  Downloading fastcore-1.3.20-py3-none-any.whl (53kB)
Installing collected packages: fastcore, fastai
  Found existing installation: fastai 1.0.61
    Uninstalling fastai-1.0.61:
      Successfully uninstalled fastai-1.0.61
Successfully installed fastai-2.3.1 fastcore-1.3.20
In [2]:
!pip install -U aicrowd-cli
Collecting aicrowd-cli
  Downloading aicrowd_cli-0.1.6-py3-none-any.whl (51kB)
ERROR: google-colab 1.0.0 has requirement requests~=2.23.0, but you'll have requests 2.25.1 which is incompatible.
ERROR: datascience 0.10.6 has requirement folium==0.2.1, but you'll have folium 0.8.3 which is incompatible.
Installing collected packages: smmap, gitdb, gitpython, requests, tqdm, click, requests-toolbelt, commonmark, colorama, rich, aicrowd-cli
Successfully installed aicrowd-cli-0.1.6 click-7.1.2 colorama-0.4.4 commonmark-0.9.1 gitdb-4.0.7 gitpython-3.1.17 requests-2.25.1 requests-toolbelt-0.9.1 rich-10.2.2 smmap-4.0.0 tqdm-4.61.0

Download Data ⏬

The first step is to download our train and test data. We will train a model on the train data, make predictions on the test data, and submit those predictions.

In [4]:
API_KEY = '9cce69d6577e95bdcfaf107bb38f8ff2'  # Get your API key from https://www.aicrowd.com/participants/me
!aicrowd login --api-key $API_KEY
API Key valid
Saved API Key successfully!
In [5]:
!aicrowd dataset download --challenge f1-speed-recognition
sample_submission.csv: 100% 97.8k/97.8k [00:00<00:00, 661kB/s]
test.zip: 100% 96.9M/96.9M [00:03<00:00, 30.1MB/s]
train.csv: 100% 407k/407k [00:00<00:00, 1.33MB/s]
train.zip: 100% 385M/385M [00:11<00:00, 34.5MB/s]
val.csv: 100% 36.7k/36.7k [00:00<00:00, 539kB/s]
val.zip: 100% 37.8M/37.8M [00:17<00:00, 2.16MB/s]

Below, we create a new directory to put our downloaded data! 🏎

We unzip the ZIP files and move the CSVs.

In [6]:
!rm -rf data
!mkdir data

!unzip -q train.zip  -d data/train
!unzip -q val.zip -d data/val
!unzip -q test.zip  -d data/test

!mv train.csv data/train.csv
!mv val.csv data/val.csv
!mv sample_submission.csv data/sample_submission.csv
In [7]:
!rm -rf ./test.zip ./val.zip ./train.zip

Step 2: Perform Clustering to Get the Labels of the Two Speedometer Types

In [17]:
import pandas as pd
import os
import cv2
import matplotlib.pyplot as plt
from PIL import Image
from tqdm import tqdm
import numpy as np
In [9]:
!git clone https://github.com/zegami/image-similarity-clustering.git
Cloning into 'image-similarity-clustering'...
remote: Enumerating objects: 154, done.
remote: Counting objects: 100% (134/134), done.
remote: Compressing objects: 100% (100/100), done.
remote: Total 154 (delta 76), reused 79 (delta 34), pack-reused 20
Receiving objects: 100% (154/154), 42.29 KiB | 3.84 MiB/s, done.
Resolving deltas: 100% (83/83), done.
In [10]:
%cd ./image-similarity-clustering
/content/image-similarity-clustering
In [12]:
from features import extract_features
from tsne_reducer import tsne
from parse_data import parse_data
from umap_reducer import umap
from sklearn.cluster import KMeans
In [13]:
def get_labels(directory):
    # Extract CNN features for every image in the directory
    features = extract_features(directory)
    features.to_csv("features.csv", index=False)

    data = parse_data('features.csv', feature_cols='all', unique_col='A')
    # Reduce the features to 2 dimensions with UMAP
    reduced_2 = umap(data, write_to='umap_features.csv')

    # Cluster the 2-D embeddings into the two speedometer types
    model = KMeans(n_clusters=2, n_jobs=-1, random_state=728)
    model.fit(reduced_2[[0, 1]])
    predictions = model.predict(reduced_2[[0, 1]])

    image_cluster = pd.DataFrame()
    image_cluster['ImageID'] = reduced_2['ID']
    image_cluster['label'] = pd.DataFrame(predictions)[0]
    image_cluster['ImageID'] = image_cluster['ImageID'].apply(lambda x: int(x.split(".")[0]))
    image_cluster = image_cluster.sort_values(['ImageID']).reset_index(drop=True)
    return image_cluster
In [ ]:
#Cluster Test Data
image_cluster = get_labels('../data/test')
image_cluster.to_csv('../test_label.csv',index=False)

#Cluster Validation Data
image_cluster = get_labels('../data/val')
image_cluster.to_csv('../val_label.csv',index=False)

#Cluster Train Data
image_cluster = get_labels('../data/train')
image_cluster.to_csv('../train_label.csv',index=False)
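As a quick sanity check (not part of the original notebook), it is worth confirming that the two clusters are roughly balanced before trusting the labels downstream:

In [ ]:
# Hypothetical sanity check: each speedometer style should cover a large share
# of the images; a heavily skewed split would suggest the clustering failed.
print(image_cluster['label'].value_counts())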

Step 3: Localize the text by converting the surrounding pixels to white

In [15]:
%cd /content
/content
In [ ]:
test_1 = pd.read_csv("./test_label.csv")
val_1 = pd.read_csv("./val_label.csv")
train_1 = pd.read_csv("./train_label.csv")
In [ ]:
!mkdir dataset
!mkdir dataset/train
!mkdir dataset/test
!mkdir dataset/val


def crop_images(data_dir, target_dir, label_file_dir):

    label_df = pd.read_csv(label_file_dir)

    for image in tqdm(os.listdir(data_dir)):

        img = Image.open(os.path.join(data_dir, image)).convert('RGB')

        # Extract the image data into a numpy array
        img_arr = np.array(img)

        label = label_df[label_df['ImageID'] == int(image.split(".")[0])]['label'].values[0]
        if label == 0:
            # Whiten everything in the 256x256 image except the window
            # around the speed readout for this speedometer type
            img_arr[0:256, 0:100] = (255, 255, 255)
            img_arr[0:256, 160:256] = (255, 255, 255)
            img_arr[0:130, 80:180] = (255, 255, 255)
            img_arr[170:256, 80:180] = (255, 255, 255)
        else:
            # The second speedometer type shows the speed in a slightly
            # different position, so the kept window is shifted
            img_arr[0:256, 0:80] = (255, 255, 255)
            img_arr[0:256, 180:256] = (255, 255, 255)
            img_arr[0:100, 80:180] = (255, 255, 255)
            img_arr[155:256, 80:180] = (255, 255, 255)

        img = Image.fromarray(img_arr)
        img.save(os.path.join(target_dir, image))

data_dir = "./data/train"
target_dir = "./dataset/train"
label_file_dir = "./train_label.csv"
crop_images(data_dir,target_dir,label_file_dir)

data_dir = "./data/test"
target_dir = "./dataset/test"
label_file_dir = "./test_label.csv"
crop_images(data_dir,target_dir,label_file_dir)


data_dir = "./data/val"
target_dir = "./dataset/val"
label_file_dir = "./val_label.csv"
crop_images(data_dir,target_dir,label_file_dir)
In [24]:
plt.imshow(cv2.imread("./dataset/test/0.jpg"))
Out[24]:
<matplotlib.image.AxesImage at 0x7ffaf4b4ea50>
In [22]:
plt.imshow(cv2.imread("./dataset/test/1.jpg"))
Out[22]:
<matplotlib.image.AxesImage at 0x7ffaf4bdb850>
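Note that cv2.imread returns images in BGR channel order while matplotlib expects RGB, so the colours in the plots above may look inverted. A small fix, not in the original notebook:

In [ ]:
# Convert BGR -> RGB before plotting so the colours render correctly
img = cv2.imread("./dataset/test/0.jpg")
plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))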

Import packages

In [ ]:
import pandas as pd
import numpy as np
import os
import cv2
import random
import torch
import matplotlib.pyplot as plt
from fastai.vision.all import *
from fastai.data.core import *
from PIL import Image
from tqdm import tqdm

seed_value = 2021
use_cuda = True

def random_seed(seed_value, use_cuda):
    np.random.seed(seed_value)  # numpy
    torch.manual_seed(seed_value)  # torch CPU
    random.seed(seed_value)  # Python
    if use_cuda:
        torch.cuda.manual_seed(seed_value)
        torch.cuda.manual_seed_all(seed_value)  # torch GPU
        torch.backends.cudnn.deterministic = True  # needed for reproducibility
        torch.backends.cudnn.benchmark = False

random_seed(seed_value, use_cuda)

Load Data

  • We use the pandas 🐼 library to load our data.
  • Pandas loads the data into dataframes and makes it easy to analyse.
  • Learn more about it here 🤓
In [25]:
data_folder = "data"
In [26]:
train_df = pd.read_csv(os.path.join(data_folder, "train.csv"))
val_df = pd.read_csv(os.path.join(data_folder, "val.csv"))

Visualize the data 👀

Using Pandas and Matplotlib in Python, we will view the images in our datasets.

In [27]:
train_df.head()
Out[27]:
   ImageID  label
0        0   1528
1        1    929
2        2   1504
3        3    938
4        4   1736

We append .jpg to every entry in the ImageID column so the IDs match the image filenames on disk. We also offset the validation ImageIDs by 40000 (and rename the files to match) so that the train and validation images can share one folder without clashing.

In [ ]:
train_df['ImageID'] = train_df['ImageID'].astype(str)+".jpg"

# Offset validation IDs by 40000 so they don't clash with the train IDs
val_df['ImageID'] = val_df['ImageID']+40000
val_df['ImageID'] = val_df['ImageID'].astype(str)+".jpg"

# Rename the validation images on disk to match the offset IDs
dir_ = "./dataset/val"
for i in tqdm(os.listdir(dir_)):
    os.rename(os.path.join(dir_,i),os.path.join(dir_,str(int(i.split(".")[0])+40000))+".jpg")

val_df['is_valid'] = True
train_df['is_valid'] = False
df = pd.concat([train_df,val_df])

# Copy the renamed validation images into the train folder
!cp -r ./dataset/val/. ./dataset/train/.

len(os.listdir("./dataset/train")) # 44000 images after merging

Step 4: Training Phase [Image Regression]

Now that the dataset is ready, it's time to create a model that we will train on our data!

In [ ]:
data_folder = "./dataset"
dls = ImageDataLoaders.from_df(df, path=os.path.join(data_folder, "train"), bs=16, y_block=RegressionBlock,valid_col='is_valid')
dls.show_batch()

[Image: show_batch output — a grid of masked speedometer images with their speed labels]
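For readers less familiar with fastai's factory methods, the from_df call above is roughly equivalent to the following explicit DataBlock; this is a sketch, assuming the fastai v2 API installed earlier:

In [ ]:
# Sketch: an explicit DataBlock equivalent of ImageDataLoaders.from_df above
dblock = DataBlock(
    blocks=(ImageBlock, RegressionBlock),   # image input, continuous target
    get_x=ColReader('ImageID', pref=f'{os.path.join(data_folder, "train")}/'),
    get_y=ColReader('label'),
    splitter=ColSplitter('is_valid'))       # rows with is_valid=True become the validation set
dls = dblock.dataloaders(df, bs=16)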

Models Used for Ensemble

Only the resnet18 run is shown below; the other architectures can be trained the same way, and the per-model predictions are combined into the submission_regression_ensemble.csv used in Step 6 (see the sketch after this list).

  1. resnet18
  2. vgg19
  3. resnet50
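The notebook does not show how submission_regression_ensemble.csv is produced. A minimal sketch, assuming each model's test predictions were saved to its own CSV and then averaged (the per-model filenames here are hypothetical):

In [ ]:
# Hypothetical averaging of the three regression models' test predictions
parts = ["submission_resnet18.csv", "submission_vgg19.csv", "submission_resnet50.csv"]
dfs = [pd.read_csv(p).sort_values('ImageID').reset_index(drop=True) for p in parts]

ensemble = dfs[0][['ImageID']].copy()
ensemble['label'] = sum(d['label'] for d in dfs) / len(dfs)  # simple mean of predicted speeds
ensemble.to_csv("submission_regression_ensemble.csv", index=False)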
In [ ]:
learn = cnn_learner(dls, models.resnet18, metrics=mse)
In [ ]:
learn.fine_tune(20)
In [ ]:
learn.fine_tune(3,base_lr=0.05)
In [ ]:
learn.fine_tune(4,base_lr=0.001)

[Image: fastai training log — per-epoch train loss, valid loss, and MSE]

Testing Phase 😅

We are almost done. We trained and validated on the training data. Now it's time to predict on the test set and make a submission.

Load Test Set

Load the test data on which the final submission is to be made.

In [ ]:
test_imgs_name = get_image_files(os.path.join(data_folder, "test"))
test_dls = dls.test_dl(test_imgs_name)

# Keep only the digits of each file path to recover the numeric ImageID
test_img_ids = [re.sub(r"\D", "", str(img_name)) for img_name in test_imgs_name]
In [ ]:
test_dls.show_batch()

[Image: show_batch output for the test set]

Predict Test Set

Predict on the test set and you are all set to make the submission!

In [ ]:
_, _, results = learn.get_preds(dl=test_dls, with_decoded=True)

# get_preds returns an (N, 1) tensor of decoded predictions; flatten it to a list
results = [i[0] for i in results.numpy()]

Save the predictions to CSV

In [ ]:
submission = pd.DataFrame({"ImageID":test_img_ids, "label":results})
submission

[Image: preview of the submission dataframe (ImageID, label)]

In [ ]:
submission.to_csv("submission_regression.csv", index=False)

Step 5: Apply OCR on Test Set

In [ ]:
!pip install easyocr
In [ ]:
import easyocr
reader = easyocr.Reader(['en'],gpu = True) # need to run only once to load model into memory
In [ ]:
def Sort(sub_li):
    # Sort OCR detections by confidence score (highest first)
    sub_li.sort(key=lambda x: x[2], reverse=True)
    return sub_li

def get_predictions(img_path):

    img = cv2.imread(img_path)
    median_blur = cv2.medianBlur(img, 3)

    # Sharpening kernel to make the digits crisper for OCR
    kernel = np.array([[-1,-1,-1],
                       [-1, 9,-1],
                       [-1,-1,-1]])

    sharpened = cv2.filter2D(median_blur, -1, kernel)

    result = reader.readtext(sharpened)
    result = Sort(result)

    # Return the highest-confidence detection that is purely numeric,
    # together with its confidence; returns None if nothing numeric is found
    for i in result:
        if i[1].replace(',', '').isdigit():
            return i[1].replace(',', ''), i[2]


def ocr_predictions_images(data_dir):
    
    predictions =  {}
    for image in tqdm(os.listdir(data_dir)):
        
        img_path = os.path.join(data_dir,image)
        predictions[int(image.split(".")[0])] = get_predictions(img_path)

    return predictions

data_dir = "./dataset/test"
predictions = ocr_predictions_images(data_dir)
In [ ]:
data = pd.DataFrame.from_dict(predictions, orient="index", columns=['label']).reset_index()
data = data.rename(columns={"index": "ImageID"})

# Images where OCR found no numeric text have label None; keep them as NaN
data['label'] = data['label'].fillna(np.nan)
In [ ]:
data.to_csv("ocr_result.csv",index=False)

Step 6: Ensemble

Note: Most of the differences between the regression and OCR predictions fell between 0 and 158, so when the difference is below 158 the regression result is replaced with the OCR prediction.

In [ ]:
def process(x):
    # x = (ocr_label, ocr_confidence, regression_label, abs_difference)
    if np.isnan(x[1]):
        # OCR found nothing: fall back to the regression prediction
        return x[2]
    elif (x[1] >= 0) and (x[3] < 158):
        # OCR and regression roughly agree: trust the OCR reading
        return x[0]
    else:
        return x[2]
In [ ]:
# submission_regression_ensemble.csv holds the combined regression predictions
df = pd.read_csv("submission_regression_ensemble.csv").sort_values('ImageID').reset_index(drop=True)
data = pd.read_csv("ocr_result.csv").sort_values('ImageID').reset_index(drop=True)

# The OCR labels were saved as "(text, confidence)" tuples; parse them back
data['label1'] = data['label'].apply(lambda x: x if x is np.nan else float(x.split(",")[0].split('(')[-1]))
data['acc'] = data['label'].apply(lambda x: x if x is np.nan else float(x.split(",")[1].split(')')[0]))

data['label_df'] = df['label']
data['diff'] = abs(data['label_df'] - data['label1'])
In [ ]:
df['label'] = data[['label1','acc','label_df','diff']].apply(lambda x: process(x),axis=1)
In [ ]:
df.to_csv("submission.csv", index=False)
In [ ]:
!aicrowd submission create -c f1-speed-recognition -f submission.csv

Observations

  1. An ensemble of multiple models was used for image regression.
  2. OCR gives accurate predictions when it can read the digits.
  3. Ensembling both approaches provided the best result.
