
Iceberg Detection

Solution for submission 152223

A detailed solution for submission 152223 submitted for challenge Iceberg Detection

AkashPB

Sample Submission for Iceberg Detection


Note: Create a copy of the notebook and use the copy for submission. Go to File > Save a Copy in Drive to create a new copy.

Setup AIcrowd Utilities 🛠

In this section, we install the AIcrowd CLI and set up some environment variables that will be provided during evaluation of the notebook on the cloud servers. So you will need to keep this header as it is!

In [1]:
!pip install aicrowd-cli

%load_ext aicrowd.magic
Requirement already satisfied: aicrowd-cli in /usr/local/lib/python3.7/dist-packages (0.1.9)
Requirement already satisfied: click<8,>=7.1.2 in /usr/local/lib/python3.7/dist-packages (from aicrowd-cli) (7.1.2)
Requirement already satisfied: rich<11,>=10.0.0 in /usr/local/lib/python3.7/dist-packages (from aicrowd-cli) (10.6.0)
Requirement already satisfied: requests-toolbelt<1,>=0.9.1 in /usr/local/lib/python3.7/dist-packages (from aicrowd-cli) (0.9.1)
Requirement already satisfied: toml<1,>=0.10.2 in /usr/local/lib/python3.7/dist-packages (from aicrowd-cli) (0.10.2)
Requirement already satisfied: requests<3,>=2.25.1 in /usr/local/lib/python3.7/dist-packages (from aicrowd-cli) (2.26.0)
Requirement already satisfied: GitPython==3.1.18 in /usr/local/lib/python3.7/dist-packages (from aicrowd-cli) (3.1.18)
Requirement already satisfied: tqdm<5,>=4.56.0 in /usr/local/lib/python3.7/dist-packages (from aicrowd-cli) (4.62.0)
Requirement already satisfied: typing-extensions>=3.7.4.0 in /usr/local/lib/python3.7/dist-packages (from GitPython==3.1.18->aicrowd-cli) (3.7.4.3)
Requirement already satisfied: gitdb<5,>=4.0.1 in /usr/local/lib/python3.7/dist-packages (from GitPython==3.1.18->aicrowd-cli) (4.0.7)
Requirement already satisfied: smmap<5,>=3.0.1 in /usr/local/lib/python3.7/dist-packages (from gitdb<5,>=4.0.1->GitPython==3.1.18->aicrowd-cli) (4.0.0)
Requirement already satisfied: charset-normalizer~=2.0.0 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.25.1->aicrowd-cli) (2.0.2)
Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.25.1->aicrowd-cli) (2.10)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.25.1->aicrowd-cli) (2021.5.30)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.25.1->aicrowd-cli) (1.24.3)
Requirement already satisfied: commonmark<0.10.0,>=0.9.0 in /usr/local/lib/python3.7/dist-packages (from rich<11,>=10.0.0->aicrowd-cli) (0.9.1)
Requirement already satisfied: pygments<3.0.0,>=2.6.0 in /usr/local/lib/python3.7/dist-packages (from rich<11,>=10.0.0->aicrowd-cli) (2.6.1)
Requirement already satisfied: colorama<0.5.0,>=0.4.0 in /usr/local/lib/python3.7/dist-packages (from rich<11,>=10.0.0->aicrowd-cli) (0.4.4)

How to use this notebook? 📝

notebook overview

  • Update the config parameters. You can define the common variables here.

AICROWD_DATASET_PATH: Path to the directory containing the test data (the data will be available at /data/ on the Aridhia workspace). This should be an absolute path.
AICROWD_OUTPUTS_PATH: Path to write the output to.
AICROWD_ASSETS_DIR: In case your notebook needs additional files (like model weights, etc.), you can add them to a directory and specify the path to the directory here (please specify a relative path). The contents of this directory will be sent to AIcrowd for evaluation.
AICROWD_API_KEY: In order to submit your code to AIcrowd, you need to provide your account's API key. This key is available at https://www.aicrowd.com/participants/me

  • Installing packages. Please use the Install packages 🗃 section to install the packages.
  • Training your models. All the code within the Training phase ⚙️ section will be skipped during evaluation. Please make sure to save your model weights in the assets directory and load them in the prediction phase section.
In [1]:
import os

# Please use an absolute path for the location of the dataset,
# or build one relative to the working directory, e.g. os.getcwd() + "/data/test"
AICROWD_DATASET_PATH = os.getenv("DATASET_PATH", os.getcwd()+"/data/test")

# The output directory is where you save your prediction videos
AICROWD_OUTPUTS_PATH = os.getenv("OUTPUTS_DIR", "")

# The assets directory is where you can save your models to & read them back during evaluation
AICROWD_ASSETS_DIR = os.getenv("ASSETS_DIR", "assets")
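
When testing locally, OUTPUTS_DIR is typically unset, so AICROWD_OUTPUTS_PATH falls back to the empty string (i.e., the working directory). A small optional sketch to make sure the output location exists before the prediction phase writes to it:

import os

# Fall back to the current directory when OUTPUTS_DIR is unset (local runs)
output_dir = AICROWD_OUTPUTS_PATH or "."
os.makedirs(output_dir, exist_ok=True)
print("Videos will be written to:", os.path.abspath(output_dir))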

Install packages 🗃

Here we install all the libraries that we need in this notebook. This is the only section where an internet connection is provided.

Note that none of the other sections have internet access, so you won't be able to install any libraries outside this section.

In [ ]:
# INSTALL YOUR PACKAGES

!pip install git+https://github.com/qubvel/segmentation_models.pytorch pytorch-argus scikit-video natsort
!pip install sk-video p-tqdm
!pip install -U git+https://github.com/albu/albumentations --no-cache-dir
!pip install -U segmentation-models-pytorch albumentations --user

!pip install git+https://github.com/qubvel/segmentation_models.pytorch

# Installing Scikit Video & FFMPEG
!pip install scikit-video
!pip install ffmpeg
!apt-get install ffmpeg gstreamer1.0-libav vlc
!apt-get update -qq && sudo apt-get -y install \
  autoconf \
  automake \
  build-essential \
  cmake \
  git-core \
  libass-dev \
  libfreetype6-dev \
  libgnutls28-dev \
  libsdl2-dev \
  libtool \
  libva-dev \
  libvdpau-dev \
  libvorbis-dev \
  libxcb1-dev \
  libxcb-shm0-dev \
  libxcb-xfixes0-dev \
  meson \
  ninja-build \
  pkg-config \
  texinfo \
  wget \
  yasm \
  zlib1g-dev
!apt-get install ffmpeg libsm6  libxext6 -y
!pip install sk-video p-tqdm
!apt-get update
!apt-get install -y python3-opencv
!pip install --upgrade opencv
!pip install --upgrade torch torchvision
Collecting git+https://github.com/qubvel/segmentation_models.pytorch
  Cloning https://github.com/qubvel/segmentation_models.pytorch to /tmp/pip-req-build-zjjsl6bj
  Running command git clone -q https://github.com/qubvel/segmentation_models.pytorch /tmp/pip-req-build-zjjsl6bj
Requirement already satisfied: pytorch-argus in /usr/local/lib/python3.7/dist-packages (0.2.1)
Requirement already satisfied: scikit-video in /usr/local/lib/python3.7/dist-packages (1.1.11)
Requirement already satisfied: natsort in /usr/local/lib/python3.7/dist-packages (5.5.0)
Requirement already satisfied: torchvision>=0.5.0 in /usr/local/lib/python3.7/dist-packages (from segmentation-models-pytorch==0.2.0) (0.10.0+cu102)
Requirement already satisfied: pretrainedmodels==0.7.4 in /usr/local/lib/python3.7/dist-packages (from segmentation-models-pytorch==0.2.0) (0.7.4)
Requirement already satisfied: efficientnet-pytorch==0.6.3 in /usr/local/lib/python3.7/dist-packages (from segmentation-models-pytorch==0.2.0) (0.6.3)
Requirement already satisfied: timm==0.4.12 in /usr/local/lib/python3.7/dist-packages (from segmentation-models-pytorch==0.2.0) (0.4.12)
Requirement already satisfied: torch in /usr/local/lib/python3.7/dist-packages (from efficientnet-pytorch==0.6.3->segmentation-models-pytorch==0.2.0) (1.9.0+cu102)
Requirement already satisfied: munch in /usr/local/lib/python3.7/dist-packages (from pretrainedmodels==0.7.4->segmentation-models-pytorch==0.2.0) (2.5.0)
Requirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from pretrainedmodels==0.7.4->segmentation-models-pytorch==0.2.0) (4.62.0)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch->efficientnet-pytorch==0.6.3->segmentation-models-pytorch==0.2.0) (3.7.4.3)
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from torchvision>=0.5.0->segmentation-models-pytorch==0.2.0) (1.19.5)
Requirement already satisfied: pillow>=5.3.0 in /usr/local/lib/python3.7/dist-packages (from torchvision>=0.5.0->segmentation-models-pytorch==0.2.0) (7.1.2)
Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from scikit-video) (1.4.1)
Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from munch->pretrainedmodels==0.7.4->segmentation-models-pytorch==0.2.0) (1.15.0)
Requirement already satisfied: sk-video in /usr/local/lib/python3.7/dist-packages (1.1.10)
Requirement already satisfied: p-tqdm in /usr/local/lib/python3.7/dist-packages (1.3.3)
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from sk-video) (1.19.5)
Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from sk-video) (1.4.1)
Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from p-tqdm) (1.15.0)
Requirement already satisfied: pathos in /usr/local/lib/python3.7/dist-packages (from p-tqdm) (0.2.8)
Requirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from p-tqdm) (4.62.0)
Requirement already satisfied: pox>=0.3.0 in /usr/local/lib/python3.7/dist-packages (from pathos->p-tqdm) (0.3.0)
Requirement already satisfied: dill>=0.3.4 in /usr/local/lib/python3.7/dist-packages (from pathos->p-tqdm) (0.3.4)
Requirement already satisfied: multiprocess>=0.70.12 in /usr/local/lib/python3.7/dist-packages (from pathos->p-tqdm) (0.70.12.2)
Requirement already satisfied: ppft>=1.6.6.4 in /usr/local/lib/python3.7/dist-packages (from pathos->p-tqdm) (1.6.6.4)
Collecting git+https://github.com/albu/albumentations
  Cloning https://github.com/albu/albumentations to /tmp/pip-req-build-irjt8m3z
  Running command git clone -q https://github.com/albu/albumentations /tmp/pip-req-build-irjt8m3z
Requirement already satisfied: numpy>=1.11.1 in /usr/local/lib/python3.7/dist-packages (from albumentations==1.0.3) (1.19.5)
Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from albumentations==1.0.3) (1.4.1)
Requirement already satisfied: scikit-image>=0.16.1 in /usr/local/lib/python3.7/dist-packages (from albumentations==1.0.3) (0.16.2)
Requirement already satisfied: PyYAML in /usr/local/lib/python3.7/dist-packages (from albumentations==1.0.3) (3.13)
Requirement already satisfied: opencv-python>=4.1.1 in /usr/local/lib/python3.7/dist-packages (from albumentations==1.0.3) (4.1.2.30)
Requirement already satisfied: PyWavelets>=0.4.0 in /usr/local/lib/python3.7/dist-packages (from scikit-image>=0.16.1->albumentations==1.0.3) (1.1.1)
Requirement already satisfied: networkx>=2.0 in /usr/local/lib/python3.7/dist-packages (from scikit-image>=0.16.1->albumentations==1.0.3) (2.5.1)
Requirement already satisfied: matplotlib!=3.0.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from scikit-image>=0.16.1->albumentations==1.0.3) (3.2.2)
Requirement already satisfied: pillow>=4.3.0 in /usr/local/lib/python3.7/dist-packages (from scikit-image>=0.16.1->albumentations==1.0.3) (7.1.2)
Requirement already satisfied: imageio>=2.3.0 in /usr/local/lib/python3.7/dist-packages (from scikit-image>=0.16.1->albumentations==1.0.3) (2.4.1)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image>=0.16.1->albumentations==1.0.3) (0.10.0)
Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image>=0.16.1->albumentations==1.0.3) (2.8.1)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image>=0.16.1->albumentations==1.0.3) (1.3.1)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image>=0.16.1->albumentations==1.0.3) (2.4.7)
Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from cycler>=0.10->matplotlib!=3.0.0,>=2.0.0->scikit-image>=0.16.1->albumentations==1.0.3) (1.15.0)
Requirement already satisfied: decorator<5,>=4.3 in /usr/local/lib/python3.7/dist-packages (from networkx>=2.0->scikit-image>=0.16.1->albumentations==1.0.3) (4.4.2)
Requirement already satisfied: segmentation-models-pytorch in /usr/local/lib/python3.7/dist-packages (0.2.0)
Requirement already satisfied: albumentations in /usr/local/lib/python3.7/dist-packages (1.0.3)
Requirement already satisfied: torchvision>=0.5.0 in /usr/local/lib/python3.7/dist-packages (from segmentation-models-pytorch) (0.10.0+cu102)
Requirement already satisfied: timm==0.4.12 in /usr/local/lib/python3.7/dist-packages (from segmentation-models-pytorch) (0.4.12)
Requirement already satisfied: pretrainedmodels==0.7.4 in /usr/local/lib/python3.7/dist-packages (from segmentation-models-pytorch) (0.7.4)
Requirement already satisfied: efficientnet-pytorch==0.6.3 in /usr/local/lib/python3.7/dist-packages (from segmentation-models-pytorch) (0.6.3)
Requirement already satisfied: torch in /usr/local/lib/python3.7/dist-packages (from efficientnet-pytorch==0.6.3->segmentation-models-pytorch) (1.9.0+cu102)
Requirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from pretrainedmodels==0.7.4->segmentation-models-pytorch) (4.62.0)
Requirement already satisfied: munch in /usr/local/lib/python3.7/dist-packages (from pretrainedmodels==0.7.4->segmentation-models-pytorch) (2.5.0)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch->efficientnet-pytorch==0.6.3->segmentation-models-pytorch) (3.7.4.3)
Requirement already satisfied: pillow>=5.3.0 in /usr/local/lib/python3.7/dist-packages (from torchvision>=0.5.0->segmentation-models-pytorch) (7.1.2)
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from torchvision>=0.5.0->segmentation-models-pytorch) (1.19.5)
Requirement already satisfied: scikit-image>=0.16.1 in /usr/local/lib/python3.7/dist-packages (from albumentations) (0.16.2)
Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from albumentations) (1.4.1)
Requirement already satisfied: PyYAML in /usr/local/lib/python3.7/dist-packages (from albumentations) (3.13)
Requirement already satisfied: opencv-python>=4.1.1 in /usr/local/lib/python3.7/dist-packages (from albumentations) (4.1.2.30)
Requirement already satisfied: imageio>=2.3.0 in /usr/local/lib/python3.7/dist-packages (from scikit-image>=0.16.1->albumentations) (2.4.1)
Requirement already satisfied: networkx>=2.0 in /usr/local/lib/python3.7/dist-packages (from scikit-image>=0.16.1->albumentations) (2.5.1)
Requirement already satisfied: PyWavelets>=0.4.0 in /usr/local/lib/python3.7/dist-packages (from scikit-image>=0.16.1->albumentations) (1.1.1)
Requirement already satisfied: matplotlib!=3.0.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from scikit-image>=0.16.1->albumentations) (3.2.2)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image>=0.16.1->albumentations) (1.3.1)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image>=0.16.1->albumentations) (0.10.0)
Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image>=0.16.1->albumentations) (2.8.1)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image>=0.16.1->albumentations) (2.4.7)
Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from cycler>=0.10->matplotlib!=3.0.0,>=2.0.0->scikit-image>=0.16.1->albumentations) (1.15.0)
Requirement already satisfied: decorator<5,>=4.3 in /usr/local/lib/python3.7/dist-packages (from networkx>=2.0->scikit-image>=0.16.1->albumentations) (4.4.2)
Collecting git+https://github.com/qubvel/segmentation_models.pytorch
  Cloning https://github.com/qubvel/segmentation_models.pytorch to /tmp/pip-req-build-y34f66cr
  Running command git clone -q https://github.com/qubvel/segmentation_models.pytorch /tmp/pip-req-build-y34f66cr
Requirement already satisfied: torchvision>=0.5.0 in /usr/local/lib/python3.7/dist-packages (from segmentation-models-pytorch==0.2.0) (0.10.0+cu102)
Requirement already satisfied: pretrainedmodels==0.7.4 in /usr/local/lib/python3.7/dist-packages (from segmentation-models-pytorch==0.2.0) (0.7.4)
Requirement already satisfied: efficientnet-pytorch==0.6.3 in /usr/local/lib/python3.7/dist-packages (from segmentation-models-pytorch==0.2.0) (0.6.3)
Requirement already satisfied: timm==0.4.12 in /usr/local/lib/python3.7/dist-packages (from segmentation-models-pytorch==0.2.0) (0.4.12)
Requirement already satisfied: torch in /usr/local/lib/python3.7/dist-packages (from efficientnet-pytorch==0.6.3->segmentation-models-pytorch==0.2.0) (1.9.0+cu102)
Requirement already satisfied: munch in /usr/local/lib/python3.7/dist-packages (from pretrainedmodels==0.7.4->segmentation-models-pytorch==0.2.0) (2.5.0)
Requirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from pretrainedmodels==0.7.4->segmentation-models-pytorch==0.2.0) (4.62.0)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch->efficientnet-pytorch==0.6.3->segmentation-models-pytorch==0.2.0) (3.7.4.3)
Requirement already satisfied: pillow>=5.3.0 in /usr/local/lib/python3.7/dist-packages (from torchvision>=0.5.0->segmentation-models-pytorch==0.2.0) (7.1.2)
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from torchvision>=0.5.0->segmentation-models-pytorch==0.2.0) (1.19.5)
Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from munch->pretrainedmodels==0.7.4->segmentation-models-pytorch==0.2.0) (1.15.0)
Requirement already satisfied: scikit-video in /usr/local/lib/python3.7/dist-packages (1.1.11)
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from scikit-video) (1.19.5)
Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from scikit-video) (1.4.1)
Requirement already satisfied: pillow in /usr/local/lib/python3.7/dist-packages (from scikit-video) (7.1.2)
Requirement already satisfied: ffmpeg in /usr/local/lib/python3.7/dist-packages (1.4)
Reading package lists... Done
Building dependency tree       
Reading state information... Done
ffmpeg is already the newest version (7:3.4.8-0ubuntu0.2).
gstreamer1.0-libav is already the newest version (1.14.5-0ubuntu1~18.04.1).
vlc is already the newest version (3.0.8-0ubuntu18.04.1).
0 upgraded, 0 newly installed, 0 to remove and 87 not upgraded.
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Note, selecting 'git' instead of 'git-core'
autoconf is already the newest version (2.69-11).
automake is already the newest version (1:1.15.1-3ubuntu2).
build-essential is already the newest version (12.4ubuntu1).
libtool is already the newest version (2.4.6-2).
libvorbis-dev is already the newest version (1.3.5-4.2).
pkg-config is already the newest version (0.29.1-0ubuntu2).
zlib1g-dev is already the newest version (1:1.2.11.dfsg-0ubuntu2).
libass-dev is already the newest version (1:0.14.0-1).
libva-dev is already the newest version (2.1.0-3).
ninja-build is already the newest version (1.8.2-1).
texinfo is already the newest version (6.5.0.dfsg.1-2).
yasm is already the newest version (1.3.0-2build1).
cmake is already the newest version (3.10.2-1ubuntu2.18.04.2).
git is already the newest version (1:2.17.1-1ubuntu0.8).
libfreetype6-dev is already the newest version (2.8.1-2ubuntu2.1).
libgnutls28-dev is already the newest version (3.5.18-1ubuntu1.4).
libxcb-shm0-dev is already the newest version (1.13-2~ubuntu18.04).
libxcb-xfixes0-dev is already the newest version (1.13-2~ubuntu18.04).
libxcb1-dev is already the newest version (1.13-2~ubuntu18.04).
wget is already the newest version (1.19.4-1ubuntu2.2).
libsdl2-dev is already the newest version (2.0.8+dfsg1-1ubuntu1.18.04.4).
meson is already the newest version (0.45.1-2ubuntu0.18.04.2).
libvdpau-dev is already the newest version (1.3-0ubuntu0~gpu18.04.2).
0 upgraded, 0 newly installed, 0 to remove and 87 not upgraded.
Reading package lists... Done
Building dependency tree       
Reading state information... Done
libsm6 is already the newest version (2:1.2.2-1).
libxext6 is already the newest version (2:1.3.3-1).
ffmpeg is already the newest version (7:3.4.8-0ubuntu0.2).
0 upgraded, 0 newly installed, 0 to remove and 87 not upgraded.
Requirement already satisfied: sk-video in /usr/local/lib/python3.7/dist-packages (1.1.10)
Requirement already satisfied: p-tqdm in /usr/local/lib/python3.7/dist-packages (1.3.3)
Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from sk-video) (1.4.1)
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from sk-video) (1.19.5)
Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from p-tqdm) (1.15.0)
Requirement already satisfied: pathos in /usr/local/lib/python3.7/dist-packages (from p-tqdm) (0.2.8)
Requirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from p-tqdm) (4.62.0)
Requirement already satisfied: ppft>=1.6.6.4 in /usr/local/lib/python3.7/dist-packages (from pathos->p-tqdm) (1.6.6.4)
Requirement already satisfied: pox>=0.3.0 in /usr/local/lib/python3.7/dist-packages (from pathos->p-tqdm) (0.3.0)
Requirement already satisfied: dill>=0.3.4 in /usr/local/lib/python3.7/dist-packages (from pathos->p-tqdm) (0.3.4)
Requirement already satisfied: multiprocess>=0.70.12 in /usr/local/lib/python3.7/dist-packages (from pathos->p-tqdm) (0.70.12.2)
Hit:1 http://security.ubuntu.com/ubuntu bionic-security InRelease
Ign:2 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64  InRelease
Hit:3 http://ppa.launchpad.net/c2d4u.team/c2d4u4.0+/ubuntu bionic InRelease
Hit:4 http://archive.ubuntu.com/ubuntu bionic InRelease
Ign:5 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64  InRelease
Hit:6 https://cloud.r-project.org/bin/linux/ubuntu bionic-cran40/ InRelease
Hit:7 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64  Release
Hit:8 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64  Release
Hit:9 http://archive.ubuntu.com/ubuntu bionic-updates InRelease
Hit:10 http://ppa.launchpad.net/cran/libgit2/ubuntu bionic InRelease
Hit:11 http://archive.ubuntu.com/ubuntu bionic-backports InRelease
Hit:12 http://ppa.launchpad.net/deadsnakes/ppa/ubuntu bionic InRelease
Hit:13 http://ppa.launchpad.net/graphics-drivers/ppa/ubuntu bionic InRelease
Reading package lists... Done
Reading package lists... Done
Building dependency tree       
Reading state information... Done
python3-opencv is already the newest version (3.2.0+dfsg-4ubuntu0.1).
0 upgraded, 0 newly installed, 0 to remove and 87 not upgraded.
ERROR: Could not find a version that satisfies the requirement opencv (from versions: none)
ERROR: No matching distribution found for opencv
Requirement already satisfied: torch in /usr/local/lib/python3.7/dist-packages (1.9.0+cu102)
Requirement already satisfied: torchvision in /usr/local/lib/python3.7/dist-packages (0.10.0+cu102)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch) (3.7.4.3)
Requirement already satisfied: pillow>=5.3.0 in /usr/local/lib/python3.7/dist-packages (from torchvision) (7.1.2)
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from torchvision) (1.19.5)
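
Several of these packages are resolved by the Colab environment rather than pinned here, so it can help to record the versions that were actually installed. A small optional check:

import torch
import torchvision
import cv2
import albumentations
import segmentation_models_pytorch as smp

# Log the resolved versions alongside the install output above
print("torch          ", torch.__version__)
print("torchvision    ", torchvision.__version__)
print("opencv-python  ", cv2.__version__)
print("albumentations ", albumentations.__version__)
print("smp            ", smp.__version__)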

Define preprocessing code 💻

Code that is shared between the training and prediction sections should be defined here. During evaluation, the training section is skipped entirely, so make sure any common logic lives in this section.

In [2]:
import os
# os.environ['CUDA_VISIBLE_DEVICES'] = '0'
import copy
import random
import shutil
from glob import glob

import cv2
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
from tqdm.notebook import tqdm
import albumentations as albu
from natsort import natsorted
import skvideo.io

import torch
from torch.utils.data import Dataset, DataLoader
import segmentation_models_pytorch as smp

device = torch.device('cpu')
cv2.useOptimized()
Out[2]:
True

Training phase ⚙️

You can define your training code here. This section will be skipped during evaluation.

So, to read your model back in the Prediction phase, save it in the assets directory during this training phase, as sketched below.
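
A minimal sketch of that save/load round trip with the U-Net used later in this notebook (the file name best_model_param.pt mirrors the convention used below):

import os
import torch
import segmentation_models_pytorch as smp

# Persist only the weights (state_dict) into the assets directory...
model = smp.Unet(encoder_name='mobilenet_v2', encoder_weights='imagenet',
                 classes=1, activation='sigmoid')
os.makedirs('assets', exist_ok=True)
torch.save(model.state_dict(), os.path.join('assets', 'best_model_param.pt'))

# ...then rebuild the same architecture in the prediction phase and load them back
restored = smp.Unet(encoder_name='mobilenet_v2', encoder_weights=None,
                    classes=1, activation='sigmoid')
restored.load_state_dict(torch.load(os.path.join('assets', 'best_model_param.pt'),
                                    map_location='cpu'))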

Downloading Dataset

Here we download the challenge dataset using the AIcrowd CLI.

In [ ]:
%aicrowd login
Please login here: https://api.aicrowd.com/auth/h8GWk3B9oRFqoKInKZN6oNhHMAvtCc1BJN9NKgTyLuA
API Key valid
Saved API Key successfully!
In [ ]:
# Downloading the Dataset
!rm -rf data
!mkdir data
%aicrowd ds dl -c iceberg-detection -o data

# Unzipping the files
!unzip data/train.zip -d data/train > /dev/null
!unzip data/test.zip -d data/test > /dev/null
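
As a quick sanity check of the extracted layout (a sketch; the image_<i>.mp4 naming matches what the prediction phase expects):

from glob import glob
from natsort import natsorted

# Count and peek at the extracted test clips
test_clips = natsorted(glob('data/test/*.mp4'))
print(len(test_clips), 'test clips, e.g.', test_clips[:3])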
In [ ]:
# # YOUR TRAINING CODE
# def load_image_n_transform(img,upper_bright,lower_bright,upper_dark,lower_dark):
#   np_img              = np.array(img)
#   if np_img.mean()>200:
#     gray                = cv2.cvtColor(np_img, cv2.COLOR_BGR2GRAY)
#     canny               = cv2.Canny(gray, upper_bright,lower_bright)
#   else:
#     gray                = cv2.cvtColor(np_img, cv2.COLOR_BGR2GRAY)
#     canny               = cv2.Canny(gray, upper_dark,lower_dark)
#   contours, hierarchy = cv2.findContours(canny,cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
#   contours            = sorted(contours, key=cv2.contourArea)
#   return contours,np_img

# def get_parameters(contours,extent):
#   '''
#       Input the contours and the extent till which we need to extract bounding-boxes
#   '''

#   bounds = min(len(contours),extent)
#   x_axis = []
#   y_axis = []
#   width  = []
#   height = []
#   for i in range(1,bounds+1,1):
#     x,y,w,h = list(cv2.boundingRect(contours[-1*i]))
#     x_axis.append(x)
#     y_axis.append(y)
#     width.append(w)
#     height.append(h)
#   return (x_axis,y_axis,width,height)

# def get_optimal_parameters(contours,extent):
  
#   parameters   = get_parameters(contours,extent)
#   x_parameter  = np.array(parameters[0])
#   y_parameter  = np.array(parameters[1])
#   width_param  = np.array(parameters[2])
#   height_param = np.array(parameters[3])

#   length       = len(x_parameter)

#   x            = np.min(x_parameter)

#   y            = np.min(y_parameter)

#   width        = np.max(x_parameter+width_param)-x

#   height       = np.max(y_parameter+height_param)-y

#   return [x,y,x+width,y+height]

# def get_glaciers(image):
#   image             = cv2.resize(image,(200,200), interpolation = cv2.INTER_AREA)
#   contours,np_img   = load_image_n_transform(image,10,15,10,150)
#   if len(contours)>0:
    
#     parameters        = get_optimal_parameters(contours,6)

#     mask = np.zeros(image.shape[:2],np.uint8)

#     bgdModel = np.zeros((1,65),np.float64)

#     fgdModel = np.zeros((1,65),np.float64)

#     rect = (parameters[0],parameters[1],parameters[2]-parameters[0],parameters[3]-parameters[1])

#     cv2.grabCut(image,mask,rect,bgdModel,fgdModel,5,cv2.GC_INIT_WITH_RECT)
#     # cv2.grabCut(image,mask,None,bgdModel,fgdModel,5,cv2.GC_INIT_WITH_MASK)

#     mask2 = np.where((mask==2)|(mask==0),0,1).astype('uint8')

#     img = image*mask2[:,:,np.newaxis]
#     gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
#   else:
#     gray              = np.zeros(image.shape[:2],np.uint8)
#   gray = cv2.resize(gray,(512,512), interpolation = cv2.INTER_AREA)
#   return gray
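
The commented-out pipeline above is a classical recipe: Canny edges, external contours sorted by area, a bounding rectangle over the largest ones, and finally GrabCut seeded with that rectangle. A self-contained sketch of the seeding step on a synthetic frame (illustrative thresholds only, not the tuned values above):

import cv2
import numpy as np

# Synthetic frame: one bright blob on a dark background
img = np.zeros((200, 200, 3), np.uint8)
cv2.circle(img, (100, 100), 40, (255, 255, 255), -1)

# Edges -> external contours -> bounding box of the largest contour
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
canny = cv2.Canny(gray, 10, 150)
contours, _ = cv2.findContours(canny, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
x, y, w, h = cv2.boundingRect(sorted(contours, key=cv2.contourArea)[-1])
print('GrabCut seed rect:', (x, y, w, h))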
In [3]:
from google.colab import drive
drive.mount('/content/drive')
!rm -rf assets
!mkdir assets
# To save your model, you can simply write any files in the assets directory and read them back in the Prediction phase.
# file = open(os.path.join("assets", 'model.h5'), 'w')
# file.write('my model')
# file.close()


ENCODER         = 'mobilenet_v2'
ENCODER_WEIGHTS = 'imagenet'

ACTIVATION      = 'sigmoid' 
DEVICE          = 'cpu'

# create segmentation model with pretrained encoder
model = smp.Unet(
    encoder_name=ENCODER, 
    encoder_weights=ENCODER_WEIGHTS, 
    classes=1, 
    activation=ACTIVATION,
    # encoder_depth=5
)



preprocessing_fn = smp.encoders.get_preprocessing_fn(ENCODER, ENCODER_WEIGHTS)

PATH = '/content/drive/MyDrive/best_model.pt'
model.load_state_dict(torch.load(PATH, map_location=DEVICE))

PATH2SAVE = '/content/assets/best_model_param.pt'
torch.save(model.state_dict(), PATH2SAVE)
torch.save(model,'/content/assets/best_model.pt')
Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).
Downloading: "https://download.pytorch.org/models/mobilenet_v2-b0353104.pth" to /root/.cache/torch/hub/checkpoints/mobilenet_v2-b0353104.pth

Prediction phase 🔎

Generating the videos for the test data.

Make sure you save each video in the AICROWD_OUTPUTS_PATH.

In [3]:
# For example, now you can read your model back
# file = open(os.path.join("assets", 'model.h5'), 'r')
# file.read()
# PATH = os.path.join("assets",'best_model.pt')
PATH_model  = AICROWD_ASSETS_DIR + '/best_model.pt'
PATH_params = AICROWD_ASSETS_DIR + '/best_model_param.pt'
h = 256
w = 256

ENCODER         = 'mobilenet_v2'
ENCODER_WEIGHTS = 'imagenet'

ACTIVATION      = 'sigmoid' 
DEVICE          = 'cpu'

model = torch.load(PATH_model)

preprocessing_fn = smp.encoders.get_preprocessing_fn(ENCODER, ENCODER_WEIGHTS)


model.load_state_dict(copy.deepcopy(torch.load(PATH_params, map_location=DEVICE)))
Out[3]:
<All keys matched successfully>

Generating segmentation videos

Here we generate a segmentation video for each test clip, with the specifications needed for the submission.

In [4]:
def preprocess(image):
  
  image_needed             = image
  image_needed             = cv2.resize(image_needed,(h,w), interpolation = cv2.INTER_AREA)
  image                    = cv2.cvtColor(image_needed, cv2.COLOR_BGR2RGB)
    
  mask                     = np.zeros((h,w,1))
  sample                   = get_preprocessing(preprocessing_fn)(image = image,mask=mask)
  image,mask               = sample['image'],sample['mask']
  return image
  
  
def get_preprocessing(preprocessing_fn):
    """Construct preprocessing transform
    
    Args:
        preprocessing_fn (callable): data normalization function 
            (can be specific for each pretrained neural network)
    Return:
        transform: albumentations.Compose
    
    """
    
    _transform = [
        albu.Lambda(image=preprocessing_fn),
        
        albu.Lambda(image=to_tensor, mask=to_tensor),
    ]
    return albu.Compose(_transform)

def to_tensor(x, **kwargs):
    return x.transpose(2, 0, 1).astype('float32')





def get_glaciers(image):
  
  image    = preprocess(image)
  x_tensor = torch.from_numpy(image).to(DEVICE).unsqueeze(0)
  pr_mask  = model.predict(x_tensor)
  pr_mask  = (pr_mask.squeeze().cpu().numpy().round())
  pr_mask  = cv2.resize(pr_mask,(512,512), interpolation = cv2.INTER_AREA)
  pr_mask  = pr_mask*255
  _,img    = cv2.threshold(pr_mask, 128, 255, cv2.THRESH_BINARY)
  img      = Image.fromarray(img.astype(np.uint8))
  return img  

def gen_video(i):
    image_file_clear = AICROWD_DATASET_PATH+f'/image_{str(i)}.mp4'

    ######### Reading Images and performing preprocessing 
    clear_video_frame = {}
    image_video_clear = cv2.VideoCapture(image_file_clear)
    
    ret               = True
    count             = 1
    while ret:
        ret,frame     = image_video_clear.read()
        if ret ==True:
            clear_video_frame[count] = get_glaciers(frame)
        count = count+1

    ###### Writing masks to the given location 
    writer = skvideo.io.FFmpegWriter(os.path.join(AICROWD_OUTPUTS_PATH,  f"segmentation_{i}.mp4"), outputdict={
        '-vcodec': 'libx264',   # H.264 encoding
        '-crf': '0',            # constant rate factor 0 = lossless
        '-preset':'veryslow'    # slowest preset for the best compression
        }) 
    # Write every frame captured above, in order
    for n in range(1, len(clear_video_frame) + 1):
      image = clear_video_frame[n]
      writer.writeFrame(image)

    writer.close()
    # return clear_video_frame
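
Before looping over every clip, the pipeline can be sanity-checked on a single dummy frame (a sketch; real frames come from cv2.VideoCapture):

import numpy as np

# Random dummy frame with the same shape as a real 512x512 BGR frame
dummy = (np.random.rand(512, 512, 3) * 255).astype(np.uint8)
x = preprocess(dummy)
print(x.shape)             # channels-first float32, e.g. (3, 256, 256)
mask = get_glaciers(dummy)
print(mask.size)           # PIL size, (512, 512)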
In [ ]:
# Generating the samples

[gen_video(i) for i in  range(len(os.listdir(AICROWD_DATASET_PATH)))]
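
Since p-tqdm was installed earlier, the per-clip loop can optionally be parallelized across CPU workers (a sketch, assuming the model and helper functions pickle cleanly into the worker processes):

from p_tqdm import p_map

# Run gen_video over all clips with a progress bar, two workers at a time
n_clips = len(os.listdir(AICROWD_DATASET_PATH))
p_map(gen_video, list(range(n_clips)), num_cpus=2)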

Submit to AIcrowd 🚀

Submitting the notebook to the AIcrowd challenge.

In [6]:

Using notebook: /content/drive/MyDrive/Colab Notebooks/model_ready_problem4_aiblitz10_submission for submission...
Removing existing files from submission directory...
Scrubbing API keys from the notebook...
Collecting notebook...
ERROR    Error while reading the git config,                                    
submission.zip ━━━━━━━━━━━━━━━━━━━━ 100.0% • 112.7/112.7 MB • 2.5 MB/s • 0:00:00
                                                  ╭─────────────────────────╮                                                  
                                                  │ Successfully submitted! │                                                  
                                                  ╰─────────────────────────╯                                                  
                                                        Important links                                                        
┌──────────────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│  This submission │ https://www.aicrowd.com/challenges/ai-blitz-x/problems/iceberg-detection/submissions/152218              │
│                  │                                                                                                          │
│  All submissions │ https://www.aicrowd.com/challenges/ai-blitz-x/problems/iceberg-detection/submissions?my_submissions=true │
│                  │                                                                                                          │
│      Leaderboard │ https://www.aicrowd.com/challenges/ai-blitz-x/problems/iceberg-detection/leaderboards                    │
│                  │                                                                                                          │
│ Discussion forum │ https://discourse.aicrowd.com/c/ai-blitz-x                                                               │
│                  │                                                                                                          │
│   Challenge page │ https://www.aicrowd.com/challenges/ai-blitz-x/problems/iceberg-detection                                 │
└──────────────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────┘