AI Blitz #8

F1 Car Rotation using EfficientNet

EfficientNet for a classification task

Denis_tsaregorodtsev
In [2]:
import torch
workDir='/usr/data/'
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
In [1]:
# This mounts your Google Drive to the Colab VM.
from google.colab import drive
drive.mount('/content/drive', force_remount=True)


%cd '/usr'
!mkdir 'data'
%cd '/usr/data'
Mounted at /content/drive
/usr
/usr/data
In [3]:
!pip install --upgrade fastai
!pip install -U aicrowd-cli
Collecting fastai
Collecting fastcore<1.4,>=1.3.8
Installing collected packages: fastcore, fastai
  Found existing installation: fastai 1.0.61
    Uninstalling fastai-1.0.61:
      Successfully uninstalled fastai-1.0.61
Successfully installed fastai-2.3.1 fastcore-1.3.20
Collecting aicrowd-cli
Collecting requests<3,>=2.25.1
Collecting gitpython<4,>=3.1.12
Collecting requests-toolbelt<1,>=0.9.1
Collecting rich<11,>=10.0.0
Collecting tqdm<5,>=4.56.0
Collecting click<8,>=7.1.2
Collecting gitdb<5,>=4.0.1
Collecting colorama<0.5.0,>=0.4.0
Collecting commonmark<0.10.0,>=0.9.0
Collecting smmap<5,>=3.0.1
ERROR: google-colab 1.0.0 has requirement requests~=2.23.0, but you'll have requests 2.25.1 which is incompatible.
ERROR: datascience 0.10.6 has requirement folium==0.2.1, but you'll have folium 0.8.3 which is incompatible.
Installing collected packages: requests, smmap, gitdb, gitpython, requests-toolbelt, colorama, commonmark, rich, tqdm, click, aicrowd-cli
  Found existing installation: requests 2.23.0
    Uninstalling requests-2.23.0:
      Successfully uninstalled requests-2.23.0
  Found existing installation: tqdm 4.41.1
    Uninstalling tqdm-4.41.1:
      Successfully uninstalled tqdm-4.41.1
  Found existing installation: click 8.0.0
    Uninstalling click-8.0.0:
      Successfully uninstalled click-8.0.0
Successfully installed aicrowd-cli-0.1.6 click-7.1.2 colorama-0.4.4 commonmark-0.9.1 gitdb-4.0.7 gitpython-3.1.17 requests-2.25.1 requests-toolbelt-0.9.1 rich-10.2.2 smmap-4.0.0 tqdm-4.60.0
In [4]:
API_KEY = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'  # Enter your API key from https://www.aicrowd.com/participants/me
!aicrowd login --api-key $API_KEY
API Key valid
Saved API Key successfully!
In [5]:
!aicrowd dataset download --challenge f1-car-rotation -j 3
sample_submission.csv: 100% 104k/104k [00:00<00:00, 335kB/s]
train.csv: 100% 449k/449k [00:00<00:00, 690kB/s]
val.csv: 100% 40.9k/40.9k [00:00<00:00, 278kB/s]
val.zip: 100% 44.4M/44.4M [00:06<00:00, 6.93MB/s]
test.zip: 100% 111M/111M [00:24<00:00, 4.62MB/s]
train.zip: 100% 444M/444M [01:05<00:00, 6.78MB/s]
In [6]:
!rm -rf data
!mkdir data

!unzip -q train.zip  -d data/train
!unzip -q val.zip -d data/val
!unzip -q test.zip  -d data/test

!mv train.csv data/train.csv
!mv val.csv data/val.csv
!mv sample_submission.csv data/sample_submission.csv
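
As a quick sanity check after extraction, it can help to count the files in each split (a minimal sketch; the folder layout is the one created by the unzip commands above):

import os

# Count the extracted images in each split; exact counts depend on the dataset version.
for split in ('train', 'val', 'test'):
    print(split, len(os.listdir('data/' + split)), 'files')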
In [7]:
import torch
from torch.utils.data import Dataset,DataLoader,RandomSampler
from torchvision import transforms as T
import pandas as pd
from PIL import Image

class ImageDataset(Dataset):
  def __init__(self,ImageFold,lblDict,df,transforms):
    self.ImageFold=ImageFold   # folder holding the .jpg files
    self.df=df                 # CSV with columns [ImageID, label]
    self.trans=transforms
    self.lblDict=lblDict       # maps label string -> class index

  def __len__(self):
    return len(self.df)

  def __getitem__(self,ind):
    im=self.load_image(self.df.iloc[ind][0])
    im=self.trans(im)
    return im, self.lblDict[self.df.iloc[ind][1]]


  def load_image(self,imageId):
    # Build the file path directly from the image ID (first CSV column)
    return Image.open(self.ImageFold+str(imageId)+'.jpg')
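
Before wiring this dataset into loaders, a one-sample smoke test catches path or label problems early (a minimal sketch; it assumes data/train.csv and the data/train/ folder created above):

# Hedged smoke test: fetch one (image, label) pair and inspect it.
_df = pd.read_csv('data/train.csv')
_ds = ImageDataset('data/train/', {'front':0,'back':1,'right':2,'left':3}, _df, T.ToTensor())
x, y = _ds[0]
print(x.shape, y)  # e.g. torch.Size([3, 256, 256]) and an integer class index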
In [13]:
trainResnet=T.Compose([
#        T.Resize(imSize),
#        T.RandomHorizontalFlip(),
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406],
                    std=[0.229, 0.224, 0.225])
])

lblDict={'front':0,'back':1,'right':2,'left':3}
df_train=pd.read_csv('data/train.csv')
ds_train_resnet=ImageDataset(workDir+'data/train/',lblDict,df_train,trainResnet)
dl_train_resnet=DataLoader(ds_train_resnet,batch_size=64,shuffle=True,num_workers=2)

df_val=pd.read_csv('data/val.csv')
ds_val_resnet=ImageDataset(workDir+'data/val/',lblDict,df_val,trainResnet)
dl_val_resnet=DataLoader(ds_val_resnet,batch_size=64,shuffle=False,num_workers=2)

dataloaders_dict={'train':dl_train_resnet,'val':dl_val_resnet}
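
Pulling a single batch confirms the loaders produce what the model expects (a sketch using the names defined above; the 256x256 image size is an assumption based on the challenge data):

xb, yb = next(iter(dataloaders_dict['train']))
print(xb.shape, yb.shape)                # expected: torch.Size([64, 3, 256, 256]) and torch.Size([64])
print(yb.min().item(), yb.max().item())  # class indices should lie in 0..3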
In [14]:
def train_model(model, dataloaders, criterion, optimizer, num_epochs=25):
    since = time.time()
    val_acc_history = []
    best_acc = 0
    best_model_wts = copy.deepcopy(model.state_dict())

    for epoch in range(num_epochs):
        print('Epoch {}/{}'.format(epoch, num_epochs - 1))
        print('-' * 10)

        # Each epoch has a training and a validation phase
        for phase in ['train', 'val']:
            if phase == 'train':
                model.train()  # set model to training mode
            else:
                model.eval()   # set model to evaluation mode

            running_loss = 0.0
            running_corrects = 0
            i = 0  # crude step counter, used only to throttle loss printing

            # Iterate over data.
            for inputs, labels in dataloaders[phase]:
                inputs = inputs.to(device)
                labels = labels.to(device)
                # zero the parameter gradients
                optimizer.zero_grad()

                # forward; track gradient history only in the training phase
                with torch.set_grad_enabled(phase == 'train'):
                    i += 128

                    outputs = model(inputs)
                    loss = criterion(outputs, labels)
                    _, preds = torch.max(outputs, 1)
                    if i % 8192 == 0:
                        print(loss)
                    # backward + optimize only in the training phase
                    if phase == 'train':
                        loss.backward()
                        optimizer.step()

                # statistics
                running_corrects += torch.sum(preds == labels.data)
                running_loss += loss.detach().item() * len(labels)
            epoch_loss = running_loss / len(dataloaders[phase].dataset)
            epoch_acc = running_corrects / len(dataloaders[phase].dataset)
            if phase == 'val' and epoch_acc > best_acc:
                best_acc = epoch_acc
                best_model_wts = copy.deepcopy(model.state_dict())
            if phase == 'val':
                val_acc_history.append(epoch_acc)
            print('{} Loss: {:.4f}, acc:  {:.4f}'.format(phase, epoch_loss, epoch_acc))

        print()

    time_elapsed = time.time() - since
    print('Training complete in {:.0f}m {:.0f}s'.format(time_elapsed // 60, time_elapsed % 60))
    # load the best (highest validation accuracy) weights before returning
    model.load_state_dict(best_model_wts)
    return model
In [15]:
from __future__ import print_function
from __future__ import division

import copy
import os
import random
import time

import matplotlib.pyplot as plt
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
from torchvision import datasets, models, transforms

EfficientNet

In [18]:
import torch
!pip install efficientnet_pytorch
from efficientnet_pytorch import EfficientNet
Requirement already satisfied: efficientnet_pytorch in /usr/local/lib/python3.7/dist-packages (0.7.1)
Requirement already satisfied: torch in /usr/local/lib/python3.7/dist-packages (from efficientnet_pytorch) (1.8.1+cu101)
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from torch->efficientnet_pytorch) (1.19.5)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch->efficientnet_pytorch) (3.7.4.3)
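
Passing num_classes=4 to from_pretrained keeps the pretrained backbone but replaces the ImageNet classifier head with a freshly initialized 4-way linear layer. A quick way to see this (a hedged sketch; model._fc is the head attribute in the lukemelas implementation, and from_name builds the architecture without downloading weights):

# Inspect the classifier head of an EfficientNet-B3 configured for 4 classes.
m = EfficientNet.from_name('efficientnet-b3', num_classes=4)  # random init, no download
print(m._fc)  # expected: Linear(in_features=1536, out_features=4, bias=True)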
In [19]:
model = EfficientNet.from_pretrained('efficientnet-b3', num_classes=4)

model.to(device)
criterion = nn.CrossEntropyLoss()
num_epochs = 7
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
model_ft = train_model(model, dataloaders_dict, criterion, optimizer, num_epochs=num_epochs)
torch.save(model.state_dict(), '/content/drive/MyDrive/weights_4_chel_ef_adam1_crossentropy.txt')
Downloading: "https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b3-5fb5a3c3.pth" to /root/.cache/torch/hub/checkpoints/efficientnet-b3-5fb5a3c3.pth
Loaded pretrained weights for efficientnet-b3
Epoch 0/6
----------
tensor(0.0409, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0460, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0010, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0103, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0542, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0092, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0218, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0013, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0871, device='cuda:0', grad_fn=<NllLossBackward>)
train Loss: 0.0549, acc:  0.9815
val Loss: 0.1860, acc:  0.9635

Epoch 1/6
----------
tensor(0.0022, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0267, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0635, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0018, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0005, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0229, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0003, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0132, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0003, device='cuda:0', grad_fn=<NllLossBackward>)
train Loss: 0.0199, acc:  0.9927
val Loss: 0.0191, acc:  0.9925

Epoch 2/6
----------
tensor(0.1361, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0004, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0006, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0140, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0015, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0084, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0028, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0031, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0081, device='cuda:0', grad_fn=<NllLossBackward>)
train Loss: 0.0149, acc:  0.9949
val Loss: 0.0403, acc:  0.9905

Epoch 3/6
----------
tensor(0.0011, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(7.8772e-05, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0351, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0028, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0131, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0055, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0255, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0641, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0003, device='cuda:0', grad_fn=<NllLossBackward>)
train Loss: 0.0111, acc:  0.9963
val Loss: 0.0167, acc:  0.9925

Epoch 4/6
----------
tensor(0.0125, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(2.3813e-05, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0442, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0013, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0034, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0898, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0042, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0114, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0124, device='cuda:0', grad_fn=<NllLossBackward>)
train Loss: 0.0103, acc:  0.9965
val Loss: 0.0147, acc:  0.9955

Epoch 5/6
----------
tensor(0.0058, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0333, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0010, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0011, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0025, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0011, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0445, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0030, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0006, device='cuda:0', grad_fn=<NllLossBackward>)
train Loss: 0.0152, acc:  0.9954
val Loss: 0.0203, acc:  0.9938

Epoch 6/6
----------
tensor(0.0027, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0002, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0358, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0004, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0201, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0493, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0008, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0187, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0345, device='cuda:0', grad_fn=<NllLossBackward>)
train Loss: 0.0065, acc:  0.9977
val Loss: 0.0130, acc:  0.9953

Training complete in 28m 57s
In [20]:
optimizer = torch.optim.Adam(model.parameters(), lr=0.0003)
model_ft = train_model(model, dataloaders_dict, criterion, optimizer, num_epochs=5)
torch.save(model.state_dict(), '/content/drive/MyDrive/weights_4_chel_ef_adam2_entropy.txt')
Epoch 0/4
----------
tensor(9.8228e-05, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(1.1138e-05, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0001, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(3.2901e-05, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0003, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0002, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(1.0628e-05, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0039, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(1.9372e-05, device='cuda:0', grad_fn=<NllLossBackward>)
train Loss: 0.0041, acc:  0.9985
val Loss: 0.0201, acc:  0.9948

Epoch 1/4
----------
tensor(4.8660e-05, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(1.6810e-05, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0193, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(1.8564e-05, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0002, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0007, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(5.5613e-05, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0006, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(5.3508e-05, device='cuda:0', grad_fn=<NllLossBackward>)
train Loss: 0.0031, acc:  0.9989
val Loss: 0.0196, acc:  0.9950

Epoch 2/4
----------
tensor(0.0001, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0098, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(1.3334e-05, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(2.4100e-05, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(9.3794e-05, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(2.3768e-05, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0004, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0004, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(7.1337e-05, device='cuda:0', grad_fn=<NllLossBackward>)
train Loss: 0.0018, acc:  0.9995
val Loss: 0.0218, acc:  0.9955

Epoch 3/4
----------
tensor(0.0240, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(7.7461e-06, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(2.3492e-05, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(3.5408e-06, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(1.0037e-05, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0002, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(7.1075e-06, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0013, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(8.1546e-05, device='cuda:0', grad_fn=<NllLossBackward>)
train Loss: 0.0028, acc:  0.9991
val Loss: 0.0258, acc:  0.9945

Epoch 4/4
----------
tensor(2.2910e-06, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0007, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0001, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(0.0001, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(1.4044e-06, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(1.0400e-05, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(1.1405e-05, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(1.1733e-05, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(7.2879e-06, device='cuda:0', grad_fn=<NllLossBackward>)
train Loss: 0.0010, acc:  0.9997
val Loss: 0.0207, acc:  0.9950

Training complete in 20m 40s
In [21]:
optimizer = torch.optim.Adam(model.parameters(), lr=0.00004)
model_ft = train_model(model, dataloaders_dict, criterion, optimizer, num_epochs=4)
torch.save(model.state_dict(), '/content/drive/MyDrive/weights_4_chel_ef_adam3_entropy.txt')
Epoch 0/3
----------
tensor(8.3481e-06, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(2.3540e-05, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(2.5127e-06, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(1.3579e-06, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(1.0290e-05, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(2.2432e-05, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(6.6159e-06, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(9.1828e-07, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(6.7259e-06, device='cuda:0', grad_fn=<NllLossBackward>)
train Loss: 0.0004, acc:  0.9998
val Loss: 0.0205, acc:  0.9948

Epoch 1/3
----------
tensor(8.8103e-07, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(2.4997e-06, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(1.2089e-06, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(6.8479e-05, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(5.0362e-05, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(9.7801e-06, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(8.2699e-05, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(2.0694e-06, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(1.1559e-05, device='cuda:0', grad_fn=<NllLossBackward>)
train Loss: 0.0001, acc:  1.0000
val Loss: 0.0240, acc:  0.9948

Epoch 2/3
----------
tensor(5.0566e-05, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(5.2749e-06, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(3.6022e-06, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(2.3637e-06, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(4.9919e-07, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(6.2055e-06, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(5.9006e-06, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(4.6566e-07, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(7.7672e-07, device='cuda:0', grad_fn=<NllLossBackward>)
train Loss: 0.0000, acc:  1.0000
val Loss: 0.0240, acc:  0.9955

Epoch 3/3
----------
tensor(6.5379e-07, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(1.2312e-06, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(1.0013e-05, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(1.9762e-06, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(9.8161e-07, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(1.0468e-06, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(1.5404e-06, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(3.7812e-07, device='cuda:0', grad_fn=<NllLossBackward>)
tensor(1.2440e-05, device='cuda:0', grad_fn=<NllLossBackward>)
train Loss: 0.0001, acc:  0.9999
val Loss: 0.0300, acc:  0.9945

Training complete in 16m 31s
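
The three training cells above implement a manual learning-rate decay (1e-3, then 3e-4, then 4e-5) by re-running train_model with a fresh Adam optimizer each time. The same idea can be expressed with a built-in scheduler; a minimal sketch (the step_size and gamma values here are illustrative, not the ones used above):

# Hypothetical alternative: decay the LR in place instead of restarting training.
from torch.optim.lr_scheduler import StepLR

optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
scheduler = StepLR(optimizer, step_size=5, gamma=0.3)  # multiply the LR by 0.3 every 5 epochs
for epoch in range(16):
    # ... one epoch of the train/val loop from train_model goes here ...
    scheduler.step()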
In [23]:
model.eval()
clsDict={0:'front',1:'back',2:'right',3:'left'}

# Build the submission frame: one row per test image ID.
A=[[i for i in range(10000)],['']*10000]
df=pd.DataFrame(A).transpose()
df.columns=['ImageID','label']

with torch.no_grad():  # inference only, no gradients needed
  for f in os.listdir('data/test/'):
    im=Image.open('data/test/'+f)
    tens=trainResnet(im).unsqueeze(0)  # add a batch dimension
    inputs = tens.to(device)
    outputs = np.argmax(model(inputs).cpu().numpy())
    df.iloc[int(f.split('.')[0]),1]=clsDict[int(outputs)]

df.to_csv('/content/drive/MyDrive/submission.csv',index=False)
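
Per-image inference leaves the GPU underutilized; batching the test set through a DataLoader is usually much faster. A hedged sketch (it reuses trainResnet, clsDict, df, model, and device from above, and assumes the test files are named <ImageID>.jpg):

# Hypothetical batched inference over the test folder.
class TestDataset(Dataset):
    def __init__(self, folder, transform):
        self.folder = folder
        self.files = os.listdir(folder)
        self.transform = transform
    def __len__(self):
        return len(self.files)
    def __getitem__(self, i):
        f = self.files[i]
        return self.transform(Image.open(self.folder + f)), int(f.split('.')[0])

loader = DataLoader(TestDataset('data/test/', trainResnet), batch_size=64, num_workers=2)
with torch.no_grad():
    for xb, ids in loader:
        preds = model(xb.to(device)).argmax(dim=1).cpu()
        for img_id, p in zip(ids.tolist(), preds.tolist()):
            df.iloc[img_id, 1] = clsDict[p]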
In [24]:
!aicrowd submission create -c f1-car-rotation -f '/content/drive/MyDrive/submission.csv'
submission.csv ━━━━━━━━━━━━━━━━━━━━ 100.0% • 105.6/103.9 KB • 3.3 MB/s • 0:00:00
                                                 ╭─────────────────────────╮                                                 
                                                 │ Successfully submitted! │                                                 
                                                 ╰─────────────────────────╯                                                 
                                                       Important links                                                       
┌──────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│  This submission │ https://www.aicrowd.com/challenges/ai-blitz-8/problems/f1-car-rotation/submissions/140275              │
│                  │                                                                                                        │
│  All submissions │ https://www.aicrowd.com/challenges/ai-blitz-8/problems/f1-car-rotation/submissions?my_submissions=true │
│                  │                                                                                                        │
│      Leaderboard │ https://www.aicrowd.com/challenges/ai-blitz-8/problems/f1-car-rotation/leaderboards                    │
│                  │                                                                                                        │
│ Discussion forum │ https://discourse.aicrowd.com/c/ai-blitz-8                                                             │
│                  │                                                                                                        │
│   Challenge page │ https://www.aicrowd.com/challenges/ai-blitz-8/problems/f1-car-rotation                                 │
└──────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────┘
{'submission_id': 140275, 'created_at': '2021-05-23T10:57:57.523Z'}
