ML Battleground
Submitted getting-started code for the Hey Barrels challenge: image regression for predicting numerical targets from photos with ktrain.
ML Battleground Hey-Barrels¶
See the ktrain example notebook on image regression for predicting numerical targets from photos (e.g., age prediction).
Author: Victor Krasilnikov
Score = 0.156 / 0.096
STEP 0: Install packages¶
In [1]:
!pip install -q ktrain
In [2]:
!pip install git+https://gitlab.aicrowd.com/aicrowd/aicrowd-cli.git >/dev/null
%load_ext aicrowd.magic
STEP 1: Download the Data¶
In [3]:
API_KEY = '' # Please enter your API Key [https://www.aicrowd.com/participants/me]
%aicrowd login --api-key $API_KEY
In [4]:
%aicrowd dataset list -c hey-barrels
%aicrowd dataset download -c hey-barrels -j 3
!unzip train.zip >/dev/null
!unzip test.zip >/dev/null
In [6]:
!ls
In [7]:
%reload_ext autoreload
%autoreload 2
%matplotlib inline
import os
import pandas as pd
import numpy as np
import ktrain
from ktrain import vision as vis
print(ktrain.__version__)
In [8]:
TRAIN_DATA_DIR = "train/images"
TEST_DATA_DIR = "test"
In [9]:
submission = pd.read_csv("example_submission.csv")
train_df = pd.read_csv("train/meta-data.csv")
train_df
Out[9]:
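Before training, it helps to sanity-check the metadata: how many rows there are, what the label distributions look like, and whether every listed image exists on disk. A minimal sketch, assuming the label columns are the barrels_count and pigs_count columns used later in this notebook:
In [ ]:
# Quick look at the metadata: size, columns, and label statistics.
print(train_df.shape)
print(train_df.columns.tolist())
print(train_df[['barrels_count', 'pigs_count']].describe())
# Verify that every image referenced in the metadata exists on disk.
missing = [f for f in train_df['filename'] if not os.path.exists(os.path.join(TRAIN_DATA_DIR, f))]
print(f"missing image files: {len(missing)}")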
In [11]:
# Models for image regression
vis.print_image_regression_models()
STEP 2: Train and Predict for 'barrels_count'¶
In [18]:
LABEL1 = 'barrels_count'
NET = 'pretrained_resnet50'
FREEZE = 15
EPOCHS = 5
SIZE = (224,224)
Create train and val data¶
In [13]:
# Build train/validation sets for LABEL1 with horizontal-flip augmentation
data_aug = vis.get_data_aug(horizontal_flip=True)
(train_data, val_data, preproc) = vis.data.images_from_df(
    train_df,
    data_aug=data_aug,
    image_column="filename",
    label_columns=[LABEL1],
    directory=TRAIN_DATA_DIR,
    is_regression=True,
    target_size=SIZE,
    color_mode='rgb',
    random_state=42)
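Before building the model, it can be worth previewing what the augmentation does to a training image. A minimal sketch using ktrain's preview_data_aug helper (available in recent ktrain versions; the sample path is simply the first file listed in the metadata):
In [ ]:
# Show a few augmented variants (horizontal flips) of one training image.
sample_path = os.path.join(TRAIN_DATA_DIR, train_df['filename'].iloc[0])
vis.preview_data_aug(sample_path, data_aug, rows=1, n=4)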
Create a Model and Wrap in Learner¶
We use the image_regression_model function to create a ResNet50 model.
By default, the model freezes all layers except the final randomly-initialized dense layer.
In [14]:
model = vis.image_regression_model(NET, train_data, val_data)
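To confirm how much of the network is actually trainable at this point, the underlying Keras model can be inspected directly; a minimal sketch:
In [ ]:
# Count trainable vs. total layers in the underlying Keras model.
trainable = sum(1 for layer in model.layers if layer.trainable)
print(f"{trainable} trainable layers out of {len(model.layers)} total")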
Estimate Learning Rate¶
We will select a learning rate associated with falling loss from the plot displayed.
In [15]:
# wrap model and data in Learner object
learner = ktrain.get_learner(model=model, train_data=train_data, val_data=val_data,
                             workers=8, use_multiprocessing=False, batch_size=64)
In [16]:
learner.lr_find(max_epochs=10)
learner.lr_plot()
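If reading the plot by eye is inconvenient, newer ktrain versions can also suggest learning rates numerically; a hedged sketch (lr_estimate may not exist in older releases):
In [ ]:
# Numerical learning-rate suggestions derived from the lr_find run (newer ktrain versions only).
print(learner.lr_estimate())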
In [19]:
learner.freeze(FREEZE) # unfreeze all but the first FREEZE layers
In [20]:
learner.fit_onecycle(1e-4, EPOCHS)
Out[20]:
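After training, plotting the loss history is a quick way to check convergence and overfitting; a minimal sketch:
In [ ]:
# Training vs. validation loss across the onecycle run.
learner.plot('loss')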
Make Predictions¶
In [23]:
# get a Predictor instance that wraps model and Preprocessor object
predictor = ktrain.get_predictor(learner.model, preproc)
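The predictor bundles the model with its preprocessing, so it can be saved and reloaded later without retraining; a minimal sketch (the path is illustrative):
In [ ]:
# Persist the predictor (model + preprocessing) for later reuse.
predictor.save('barrels_count_predictor')
# predictor = ktrain.load_predictor('barrels_count_predictor')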
In [24]:
# Predict 'barrels_count'
preds = predictor.predict_folder(TEST_DATA_DIR)
preds[0]
Out[24]:
In [25]:
submission[LABEL1] = [round(pred[1]) for pred in preds]
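Two things worth noting here: counts cannot be negative, and predict_folder returns (filepath, prediction) pairs, so keying predictions by filename is safer than relying on row order. A minimal sketch, assuming example_submission.csv has a 'filename' column (adjust the column name if the actual submission format differs):
In [ ]:
# Map each test image's basename to its clipped, rounded predicted count.
pred_by_file = {os.path.basename(fp): max(0, round(float(p))) for fp, p in preds}
# Fill the submission by filename rather than by row order.
# NOTE: 'filename' is an assumed column name in example_submission.csv.
submission[LABEL1] = submission['filename'].map(pred_by_file)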
STEP 3: Train and Predict for 'pigs_count'¶
In [26]:
LABEL2 = 'pigs_count'
NET = 'pretrained_resnet50'
FREEZE = 15
EPOCHS = 5
SIZE = (224,224)
In [27]:
# 1. Build train/validation sets for LABEL2 with horizontal-flip augmentation
data_aug = vis.get_data_aug(horizontal_flip=True)
(train_data, val_data, preproc) = vis.data.images_from_df(
    train_df,
    data_aug=data_aug,
    image_column="filename",
    label_columns=[LABEL2],
    directory=TRAIN_DATA_DIR,
    is_regression=True,
    target_size=SIZE,
    color_mode='rgb',
    random_state=42)
# 2. Create the regression model
model = vis.image_regression_model(NET, train_data, val_data)
# 3. Wrap model and data in a Learner object
learner = ktrain.get_learner(model=model, train_data=train_data, val_data=val_data,
                             workers=8, use_multiprocessing=False, batch_size=64)
# 4. Freeze the first FREEZE layers and train with the onecycle policy
learner.freeze(FREEZE)
learner.fit_onecycle(1e-4, EPOCHS)
# 5. Predict 'pigs_count' for the test images
predictor = ktrain.get_predictor(learner.model, preproc)
preds = predictor.predict_folder(TEST_DATA_DIR)
preds[0]
# 6. Write the rounded predictions into the submission
submission[LABEL2] = [round(pred[1]) for pred in preds]
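Since both targets share the exact same pipeline, the six steps above could also be wrapped in a helper function; a sketch under the same settings as the cells above (train_and_predict is a hypothetical name, not part of ktrain):
In [ ]:
# Hypothetical helper: one full train-and-predict pass for a single target column.
def train_and_predict(label, lr=1e-4, epochs=EPOCHS):
    data_aug = vis.get_data_aug(horizontal_flip=True)
    (train_data, val_data, preproc) = vis.data.images_from_df(
        train_df,
        data_aug=data_aug,
        image_column="filename",
        label_columns=[label],
        directory=TRAIN_DATA_DIR,
        is_regression=True,
        target_size=SIZE,
        color_mode='rgb',
        random_state=42)
    model = vis.image_regression_model(NET, train_data, val_data)
    learner = ktrain.get_learner(model=model, train_data=train_data, val_data=val_data,
                                 workers=8, use_multiprocessing=False, batch_size=64)
    learner.freeze(FREEZE)
    learner.fit_onecycle(lr, epochs)
    predictor = ktrain.get_predictor(learner.model, preproc)
    return predictor.predict_folder(TEST_DATA_DIR)

# Example usage: submission[LABEL2] = [round(p[1]) for p in train_and_predict(LABEL2)]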
STEP 4: Submit¶
In [28]:
submission.to_csv("KT_Reg_submission.csv", index=False)
submission
Out[28]:
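Before uploading, a quick sanity check on the written file can catch missing values or negative counts; a minimal sketch:
In [ ]:
# Re-read the written submission and run basic checks before uploading.
check = pd.read_csv("KT_Reg_submission.csv")
print(check.shape)
assert check.isnull().sum().sum() == 0, "submission contains missing values"
assert (check[[LABEL1, LABEL2]] >= 0).all().all(), "counts should be non-negative"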
In [29]:
%aicrowd submission create -c hey-barrels -f KT_Reg_submission.csv
In [30]:
try:
    from google.colab import files
    files.download('KT_Reg_submission.csv')
except ImportError:
    print("File download is only available in Colab.")