Loading

Seismic Facies Identification Challenge

[Explainer] Detectron2 & COCO Dataset 🔥 • Web Application & Visualizations • End-to-End Baseline & Tensorflow

Detectron2 & COCO Dataset 🔥 • Web Application & Visualizations • End-to-End Baseline & Tensorflow

Shubhamai

So, me Shubhamai and I have come up with these 3 things -

COCO Dataset & using Detectron2, MMDetection

YES! I have converted this dataset into COCO Dataset and which we train Mask-RCNN using Detectron2.

There we go boys - Colab Link

More things will be added so like this post RIGHT NOW :smile:

Web Application & Visualisation

https://seismic-facies-identification.herokuapp.com/

But this time, I found that a great preprocessing pipeline can help to model to find accurate features and increasing overall accuracy. But it kinda isn’t that easy as it looks —

So I made a Web Application based on that which allows you to play/experiment with many of the image preprocessing functions/methods, changing parameters or writing custom image preprocessing functions to experiment.

And it also contains all the visualizations from the colab notebook .

I hope that it will help you in making the perfect preprocessing pipelines :grin:.

End-to-End Baseline & Tensorflow

https://colab.research.google.com/drive/1t1hF_Vs4xIyLGMw_B9l1G6qzLBxLB5eG?usp=sharing

I have made a complete colab notebook from Data Exploration to Submitting Predictions. Here are some of the glimpse of the image visualization section!

And this 3D Plot!

Tables of Content -

  1. Setting our Workspace :briefcase:
  2. Data Exploration :face_with_monocle:
  3. Image Preprocessing Techniqes :broom:
  4. Creating our Dataset :hammer:
  5. Creating our Model :factory:
  6. Training the Model :steam_locomotive:
  7. Evaluating the model :test_tube:
  8. Testing on test Data :100:
  9. Generate More Data + Some tips & tricks :bulb:

The main libraries covered in this notebook is —

  • Tensorflow 2.0 & Keras
  • Plotly
  • cv2
    and much more…

The model that i am using is UNet, pretty much standard in image segmentation. More is in the colab notebook!

I hope the colab notebook will help you get started in this competition or learning something new :slightly_smiling_face:. If the notebook did help you, make sure to like the post. lol.

https://colab.research.google.com/drive/1t1hF_Vs4xIyLGMw_B9l1G6qzLBxLB5eG?usp=sharing

:red_circle: Please like the topic if this helps in any way possible :slight_smile: . I really appreciate that :smiley:

🌎 Facies Identification Challenge: 3D image interpretation by Machine Learning

In this challange we need to identify facies as an image, from 3D seismic image using Deep Learing with various tools like tensorflow, keras, numpy, pandas, matplotlib, plotly and much much more..

Problem

Segmentating the 3D seismic image into an image with each pixel can be classfied into 6 labels based on patterns in the image.

newplot (4).png

https://www.aicrowd.com/challenges/seismic-facies-identification-challenge#introduction

Dataset

We have 3D datasets both ( features X, and labels Y ) with shape for X in 1006 × 782 × 590, in axis corresponding Z, X, Y and Y in 1006 × 782 × 590 in also axis corresponsing Z, X, Y.

https://www.aicrowd.com/challenges/seismic-facies-identification-challenge/dataset_files

We can say that we have total of 2,378 trainig images with their corresponsing labels and we also have same number of 2,378 testing images which we will predict labels for.

https://www.aicrowd.com/challenges/seismic-facies-identification-challenge#dataset

Evaluation

The evaluation metrics are the F1 score and accuracy.

https://www.aicrowd.com/challenges/seismic-facies-identification-challenge#evaluation-criteria

Tables of Content

  1. Setting our Workspace 💼
    • Downloading our Dataset
    • Importing Necessary Libraries
  1. Data Exploration 🧐

    • Reading our Dataset
    • Image Visualisations
  2. Image Preprocessing Techniqes 🧹

    • Image preprocessing
  3. Creating our Dataset 🔨

    • Loading data into memory
    • Making 2D Images
  4. Creating our Model 🏭

    • Creating Unet Model
    • Setting up hyperparameters
  5. Training the Model 🚂

    • Setting up Tensorboard
    • Start Training!
  6. Evaluating the model 🧪

    • Evaluating our Model
  7. Testing on test Data 💯

  8. Generate More Data + Some tips & tricks 💡

Setting our Workspace 💼

In this section we are going to download our dataset & also downloading some libraries, and then importing up all libraries to get ready!

Downloading our Dataset

In [ ]:
# Downloading training data ( Seismic Images | X )
!wget https://datasets.aicrowd.com/default/aicrowd-public-datasets/seamai-facies-challenge/v0.1/public/data_train.npz

# Downloading training data ( Labels | Y )
!wget https://datasets.aicrowd.com/default/aicrowd-public-datasets/seamai-facies-challenge/v0.1/public/labels_train.npz

# Downloading Testing Dataset 
!wget https://datasets.aicrowd.com/default/aicrowd-public-datasets/seamai-facies-challenge/v0.1/public/data_test_1.npz
--2020-10-17 12:30:32--  https://datasets.aicrowd.com/default/aicrowd-public-datasets/seamai-facies-challenge/v0.1/public/data_train.npz
Resolving datasets.aicrowd.com (datasets.aicrowd.com)... 35.189.208.115
Connecting to datasets.aicrowd.com (datasets.aicrowd.com)|35.189.208.115|:443... connected.
HTTP request sent, awaiting response... 302 FOUND
Location: https://s3.us-west-002.backblazeb2.com/aicrowd-public-datasets/seamai-facies-challenge/v0.1/public/data_train.npz?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=002ae2491b744be0000000002%2F20201017%2Fus-west-002%2Fs3%2Faws4_request&X-Amz-Date=20201017T123038Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature=81146a2ec9aeba19548ac23abf4872f0d522419b687b1c19d40b96ec81651020 [following]
--2020-10-17 12:30:38--  https://s3.us-west-002.backblazeb2.com/aicrowd-public-datasets/seamai-facies-challenge/v0.1/public/data_train.npz?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=002ae2491b744be0000000002%2F20201017%2Fus-west-002%2Fs3%2Faws4_request&X-Amz-Date=20201017T123038Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature=81146a2ec9aeba19548ac23abf4872f0d522419b687b1c19d40b96ec81651020
Resolving s3.us-west-002.backblazeb2.com (s3.us-west-002.backblazeb2.com)... 206.190.215.254
Connecting to s3.us-west-002.backblazeb2.com (s3.us-west-002.backblazeb2.com)|206.190.215.254|:443... connected.
HTTP request sent, awaiting response... 200 
Length: 1715555445 (1.6G) [application/octet-stream]
Saving to: ‘data_train.npz’

data_train.npz      100%[===================>]   1.60G  17.8MB/s    in 89s     

2020-10-17 12:32:17 (18.5 MB/s) - ‘data_train.npz’ saved [1715555445/1715555445]

--2020-10-17 12:32:17--  https://datasets.aicrowd.com/default/aicrowd-public-datasets/seamai-facies-challenge/v0.1/public/labels_train.npz
Resolving datasets.aicrowd.com (datasets.aicrowd.com)... 35.189.208.115
Connecting to datasets.aicrowd.com (datasets.aicrowd.com)|35.189.208.115|:443... connected.
HTTP request sent, awaiting response... 302 FOUND
Location: https://s3.us-west-002.backblazeb2.com/aicrowd-public-datasets/seamai-facies-challenge/v0.1/public/labels_train.npz?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=002ae2491b744be0000000002%2F20201017%2Fus-west-002%2Fs3%2Faws4_request&X-Amz-Date=20201017T123258Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature=070a2785f94809c1d6ed9f69d002047d1ff579a0c943fef35e0cb1a0bcee2cd2 [following]
--2020-10-17 12:32:58--  https://s3.us-west-002.backblazeb2.com/aicrowd-public-datasets/seamai-facies-challenge/v0.1/public/labels_train.npz?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=002ae2491b744be0000000002%2F20201017%2Fus-west-002%2Fs3%2Faws4_request&X-Amz-Date=20201017T123258Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature=070a2785f94809c1d6ed9f69d002047d1ff579a0c943fef35e0cb1a0bcee2cd2
Resolving s3.us-west-002.backblazeb2.com (s3.us-west-002.backblazeb2.com)... 206.190.215.254
Connecting to s3.us-west-002.backblazeb2.com (s3.us-west-002.backblazeb2.com)|206.190.215.254|:443... connected.
HTTP request sent, awaiting response... 200 
Length: 7160425 (6.8M) [application/octet-stream]
Saving to: ‘labels_train.npz’

labels_train.npz    100%[===================>]   6.83M  5.49MB/s    in 1.2s    

2020-10-17 12:33:08 (5.49 MB/s) - ‘labels_train.npz’ saved [7160425/7160425]

--2020-10-17 12:33:08--  https://datasets.aicrowd.com/default/aicrowd-public-datasets/seamai-facies-challenge/v0.1/public/data_test_1.npz
Resolving datasets.aicrowd.com (datasets.aicrowd.com)... 35.189.208.115
Connecting to datasets.aicrowd.com (datasets.aicrowd.com)|35.189.208.115|:443... connected.
HTTP request sent, awaiting response... 302 FOUND
Location: https://s3.us-west-002.backblazeb2.com/aicrowd-public-datasets/seamai-facies-challenge/v0.1/public/data_test_1.npz?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=002ae2491b744be0000000002%2F20201017%2Fus-west-002%2Fs3%2Faws4_request&X-Amz-Date=20201017T123312Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature=3e419cd13d54249c50952d1062203a197c2a40d60a6ba01a675b7ff417ec4385 [following]
--2020-10-17 12:33:12--  https://s3.us-west-002.backblazeb2.com/aicrowd-public-datasets/seamai-facies-challenge/v0.1/public/data_test_1.npz?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=002ae2491b744be0000000002%2F20201017%2Fus-west-002%2Fs3%2Faws4_request&X-Amz-Date=20201017T123312Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature=3e419cd13d54249c50952d1062203a197c2a40d60a6ba01a675b7ff417ec4385
Resolving s3.us-west-002.backblazeb2.com (s3.us-west-002.backblazeb2.com)... 206.190.215.254
Connecting to s3.us-west-002.backblazeb2.com (s3.us-west-002.backblazeb2.com)|206.190.215.254|:443... connected.
HTTP request sent, awaiting response... 200 
Length: 731382806 (698M) [application/octet-stream]
Saving to: ‘data_test_1.npz’

data_test_1.npz     100%[===================>] 697.50M  17.5MB/s    in 39s     

2020-10-17 12:33:56 (17.9 MB/s) - ‘data_test_1.npz’ saved [731382806/731382806]

Importing Necessary Libraries

In [ ]:
!pip install git+https://github.com/tensorflow/examples.git
!pip install git+https://github.com/karolzak/keras-unet

# # install dependencies: (use cu101 because colab has CUDA 10.1)
!pip install -U torch==1.5 torchvision==0.6 -f https://download.pytorch.org/whl/cu101/torch_stable.html 
!pip install cython pyyaml==5.1
!pip install -U 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'
import torch, torchvision
print(torch.__version__, torch.cuda.is_available())
!gcc --version

# install detectron2:
!pip install detectron2==0.1.2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/index.html

!pip install imantics
Collecting git+https://github.com/tensorflow/examples.git
  Cloning https://github.com/tensorflow/examples.git to /tmp/pip-req-build-frvmgcpl
  Running command git clone -q https://github.com/tensorflow/examples.git /tmp/pip-req-build-frvmgcpl
Requirement already satisfied: absl-py in /usr/local/lib/python3.6/dist-packages (from tensorflow-examples===35f4ae1e805c97aa63da565f61e4b81f66da1422-) (0.10.0)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from tensorflow-examples===35f4ae1e805c97aa63da565f61e4b81f66da1422-) (1.15.0)
Building wheels for collected packages: tensorflow-examples
  Building wheel for tensorflow-examples (setup.py) ... done
  Created wheel for tensorflow-examples: filename=tensorflow_examples-35f4ae1e805c97aa63da565f61e4b81f66da1422_-cp36-none-any.whl size=137927 sha256=1b0c01fb7ad460af04327ba8b88e6957b61ab64c27cf980127f2148069013090
  Stored in directory: /tmp/pip-ephem-wheel-cache-1m5fanzw/wheels/83/64/b3/4cfa02dc6f9d16bf7257892c6a7ec602cd7e0ff6ec4d7d714d
Successfully built tensorflow-examples
Installing collected packages: tensorflow-examples
Successfully installed tensorflow-examples-35f4ae1e805c97aa63da565f61e4b81f66da1422-
Collecting git+https://github.com/karolzak/keras-unet
  Cloning https://github.com/karolzak/keras-unet to /tmp/pip-req-build-uml7m95l
  Running command git clone -q https://github.com/karolzak/keras-unet /tmp/pip-req-build-uml7m95l
Building wheels for collected packages: keras-unet
  Building wheel for keras-unet (setup.py) ... done
  Created wheel for keras-unet: filename=keras_unet-0.1.2-cp36-none-any.whl size=16995 sha256=0e316062b26f2d7af94b337efbb1e07400bad12389e25f60de004c51e5522712
  Stored in directory: /tmp/pip-ephem-wheel-cache-2pgkknt2/wheels/b3/3a/85/c3df1c96b5d83dcd2c09b634e72a98cafcf12a52501ac5cd77
Successfully built keras-unet
Installing collected packages: keras-unet
Successfully installed keras-unet-0.1.2
Looking in links: https://download.pytorch.org/whl/cu101/torch_stable.html
Collecting torch==1.5
  Downloading https://download.pytorch.org/whl/cu101/torch-1.5.0%2Bcu101-cp36-cp36m-linux_x86_64.whl (703.8MB)
     |████████████████████████████████| 703.8MB 27kB/s 
Collecting torchvision==0.6
  Downloading https://download.pytorch.org/whl/cu101/torchvision-0.6.0%2Bcu101-cp36-cp36m-linux_x86_64.whl (6.6MB)
     |████████████████████████████████| 6.6MB 46kB/s 
Requirement already satisfied, skipping upgrade: numpy in /usr/local/lib/python3.6/dist-packages (from torch==1.5) (1.18.5)
Requirement already satisfied, skipping upgrade: future in /usr/local/lib/python3.6/dist-packages (from torch==1.5) (0.16.0)
Requirement already satisfied, skipping upgrade: pillow>=4.1.1 in /usr/local/lib/python3.6/dist-packages (from torchvision==0.6) (7.0.0)
Installing collected packages: torch, torchvision
  Found existing installation: torch 1.6.0+cu101
    Uninstalling torch-1.6.0+cu101:
      Successfully uninstalled torch-1.6.0+cu101
  Found existing installation: torchvision 0.7.0+cu101
    Uninstalling torchvision-0.7.0+cu101:
      Successfully uninstalled torchvision-0.7.0+cu101
Successfully installed torch-1.5.0+cu101 torchvision-0.6.0+cu101
Requirement already satisfied: cython in /usr/local/lib/python3.6/dist-packages (0.29.21)
Collecting pyyaml==5.1
  Downloading https://files.pythonhosted.org/packages/9f/2c/9417b5c774792634834e730932745bc09a7d36754ca00acf1ccd1ac2594d/PyYAML-5.1.tar.gz (274kB)
     |████████████████████████████████| 276kB 2.8MB/s 
Building wheels for collected packages: pyyaml
  Building wheel for pyyaml (setup.py) ... done
  Created wheel for pyyaml: filename=PyYAML-5.1-cp36-cp36m-linux_x86_64.whl size=44075 sha256=3e2c9f335c5ccc60e69e701b66bb535aa70e96ef852b7138ee26056643ddc940
  Stored in directory: /root/.cache/pip/wheels/ad/56/bc/1522f864feb2a358ea6f1a92b4798d69ac783a28e80567a18b
Successfully built pyyaml
Installing collected packages: pyyaml
  Found existing installation: PyYAML 3.13
    Uninstalling PyYAML-3.13:
      Successfully uninstalled PyYAML-3.13
Successfully installed pyyaml-5.1
Collecting git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI
  Cloning https://github.com/cocodataset/cocoapi.git to /tmp/pip-req-build-kpgx5uyk
  Running command git clone -q https://github.com/cocodataset/cocoapi.git /tmp/pip-req-build-kpgx5uyk
Requirement already satisfied, skipping upgrade: setuptools>=18.0 in /usr/local/lib/python3.6/dist-packages (from pycocotools==2.0) (50.3.0)
Requirement already satisfied, skipping upgrade: cython>=0.27.3 in /usr/local/lib/python3.6/dist-packages (from pycocotools==2.0) (0.29.21)
Requirement already satisfied, skipping upgrade: matplotlib>=2.1.0 in /usr/local/lib/python3.6/dist-packages (from pycocotools==2.0) (3.2.2)
Requirement already satisfied, skipping upgrade: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=2.1.0->pycocotools==2.0) (0.10.0)
Requirement already satisfied, skipping upgrade: python-dateutil>=2.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=2.1.0->pycocotools==2.0) (2.8.1)
Requirement already satisfied, skipping upgrade: numpy>=1.11 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=2.1.0->pycocotools==2.0) (1.18.5)
Requirement already satisfied, skipping upgrade: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=2.1.0->pycocotools==2.0) (2.4.7)
Requirement already satisfied, skipping upgrade: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib>=2.1.0->pycocotools==2.0) (1.2.0)
Requirement already satisfied, skipping upgrade: six in /usr/local/lib/python3.6/dist-packages (from cycler>=0.10->matplotlib>=2.1.0->pycocotools==2.0) (1.15.0)
Building wheels for collected packages: pycocotools
  Building wheel for pycocotools (setup.py) ... done
  Created wheel for pycocotools: filename=pycocotools-2.0-cp36-cp36m-linux_x86_64.whl size=266458 sha256=f9380190c48084dd7af6b4016246105ed6ac36c7f2d9bb62901d4019a3cf2689
  Stored in directory: /tmp/pip-ephem-wheel-cache-t7pvade2/wheels/90/51/41/646daf401c3bc408ff10de34ec76587a9b3ebfac8d21ca5c3a
Successfully built pycocotools
Installing collected packages: pycocotools
  Found existing installation: pycocotools 2.0.2
    Uninstalling pycocotools-2.0.2:
      Successfully uninstalled pycocotools-2.0.2
Successfully installed pycocotools-2.0
1.5.0+cu101 True
gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Copyright (C) 2017 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Looking in links: https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/index.html
Collecting detectron2==0.1.2
  Downloading https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/torch1.5/detectron2-0.1.2%2Bcu101-cp36-cp36m-linux_x86_64.whl (6.2MB)
     |████████████████████████████████| 6.2MB 411kB/s 
Requirement already satisfied: Pillow in /usr/local/lib/python3.6/dist-packages (from detectron2==0.1.2) (7.0.0)
Requirement already satisfied: cloudpickle in /usr/local/lib/python3.6/dist-packages (from detectron2==0.1.2) (1.3.0)
Requirement already satisfied: tensorboard in /usr/local/lib/python3.6/dist-packages (from detectron2==0.1.2) (2.3.0)
Collecting yacs>=0.1.6
  Downloading https://files.pythonhosted.org/packages/38/4f/fe9a4d472aa867878ce3bb7efb16654c5d63672b86dc0e6e953a67018433/yacs-0.1.8-py3-none-any.whl
Collecting mock
  Downloading https://files.pythonhosted.org/packages/cd/74/d72daf8dff5b6566db857cfd088907bb0355f5dd2914c4b3ef065c790735/mock-4.0.2-py3-none-any.whl
Requirement already satisfied: pydot in /usr/local/lib/python3.6/dist-packages (from detectron2==0.1.2) (1.3.0)
Requirement already satisfied: tabulate in /usr/local/lib/python3.6/dist-packages (from detectron2==0.1.2) (0.8.7)
Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from detectron2==0.1.2) (0.16.0)
Requirement already satisfied: matplotlib in /usr/local/lib/python3.6/dist-packages (from detectron2==0.1.2) (3.2.2)
Collecting fvcore
  Downloading https://files.pythonhosted.org/packages/8f/14/3d359bd5526262b15dfbb471cc1680a6aa384ed5883f0455c859f9b4224e/fvcore-0.1.2.post20201016.tar.gz
Requirement already satisfied: tqdm>4.29.0 in /usr/local/lib/python3.6/dist-packages (from detectron2==0.1.2) (4.41.1)
Requirement already satisfied: termcolor>=1.1 in /usr/local/lib/python3.6/dist-packages (from detectron2==0.1.2) (1.1.0)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.6/dist-packages (from tensorboard->detectron2==0.1.2) (0.4.1)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.6/dist-packages (from tensorboard->detectron2==0.1.2) (3.2.2)
Requirement already satisfied: protobuf>=3.6.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard->detectron2==0.1.2) (3.12.4)
Requirement already satisfied: setuptools>=41.0.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard->detectron2==0.1.2) (50.3.0)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard->detectron2==0.1.2) (1.7.0)
Requirement already satisfied: google-auth<2,>=1.6.3 in /usr/local/lib/python3.6/dist-packages (from tensorboard->detectron2==0.1.2) (1.17.2)
Requirement already satisfied: requests<3,>=2.21.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard->detectron2==0.1.2) (2.23.0)
Requirement already satisfied: grpcio>=1.24.3 in /usr/local/lib/python3.6/dist-packages (from tensorboard->detectron2==0.1.2) (1.32.0)
Requirement already satisfied: numpy>=1.12.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard->detectron2==0.1.2) (1.18.5)
Requirement already satisfied: six>=1.10.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard->detectron2==0.1.2) (1.15.0)
Requirement already satisfied: wheel>=0.26; python_version >= "3" in /usr/local/lib/python3.6/dist-packages (from tensorboard->detectron2==0.1.2) (0.35.1)
Requirement already satisfied: absl-py>=0.4 in /usr/local/lib/python3.6/dist-packages (from tensorboard->detectron2==0.1.2) (0.10.0)
Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.6/dist-packages (from tensorboard->detectron2==0.1.2) (1.0.1)
Requirement already satisfied: PyYAML in /usr/local/lib/python3.6/dist-packages (from yacs>=0.1.6->detectron2==0.1.2) (5.1)
Requirement already satisfied: pyparsing>=2.1.4 in /usr/local/lib/python3.6/dist-packages (from pydot->detectron2==0.1.2) (2.4.7)
Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->detectron2==0.1.2) (2.8.1)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->detectron2==0.1.2) (1.2.0)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib->detectron2==0.1.2) (0.10.0)
Collecting portalocker
  Downloading https://files.pythonhosted.org/packages/89/a6/3814b7107e0788040870e8825eebf214d72166adf656ba7d4bf14759a06a/portalocker-2.0.0-py2.py3-none-any.whl
Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.6/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard->detectron2==0.1.2) (1.3.0)
Requirement already satisfied: importlib-metadata; python_version < "3.8" in /usr/local/lib/python3.6/dist-packages (from markdown>=2.6.8->tensorboard->detectron2==0.1.2) (2.0.0)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard->detectron2==0.1.2) (4.1.1)
Requirement already satisfied: rsa<5,>=3.1.4; python_version >= "3" in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard->detectron2==0.1.2) (4.6)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard->detectron2==0.1.2) (0.2.8)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.21.0->tensorboard->detectron2==0.1.2) (2.10)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.21.0->tensorboard->detectron2==0.1.2) (3.0.4)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.21.0->tensorboard->detectron2==0.1.2) (1.24.3)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.21.0->tensorboard->detectron2==0.1.2) (2020.6.20)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.6/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard->detectron2==0.1.2) (3.1.0)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.6/dist-packages (from importlib-metadata; python_version < "3.8"->markdown>=2.6.8->tensorboard->detectron2==0.1.2) (3.2.0)
Requirement already satisfied: pyasn1>=0.1.3 in /usr/local/lib/python3.6/dist-packages (from rsa<5,>=3.1.4; python_version >= "3"->google-auth<2,>=1.6.3->tensorboard->detectron2==0.1.2) (0.4.8)
Building wheels for collected packages: fvcore
  Building wheel for fvcore (setup.py) ... done
  Created wheel for fvcore: filename=fvcore-0.1.2.post20201016-cp36-none-any.whl size=44196 sha256=515c7e811e56805981547d30fec694ff443cac59cb9ffd84100179768527f2e9
  Stored in directory: /root/.cache/pip/wheels/f3/3f/35/86873c1ddea45a9fb1ba7921232ea15c570165a9d4f4d831c7
Successfully built fvcore
Installing collected packages: yacs, mock, portalocker, fvcore, detectron2
Successfully installed detectron2-0.1.2+cu101 fvcore-0.1.2.post20201016 mock-4.0.2 portalocker-2.0.0 yacs-0.1.8
Collecting imantics
  Downloading https://files.pythonhosted.org/packages/1a/ff/8f92fa03b42f14860bc882d08187b359d3b8f9ef670d4efbed090d451c58/imantics-0.1.12.tar.gz
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from imantics) (1.18.5)
Requirement already satisfied: opencv-python>=3 in /usr/local/lib/python3.6/dist-packages (from imantics) (4.1.2.30)
Requirement already satisfied: lxml in /usr/local/lib/python3.6/dist-packages (from imantics) (4.2.6)
Collecting xmljson
  Downloading https://files.pythonhosted.org/packages/91/2d/7191efe15406b8b99e2b5905ca676a8a3dc2936416ade7ed17752902c250/xmljson-0.2.1-py2.py3-none-any.whl
Building wheels for collected packages: imantics
  Building wheel for imantics (setup.py) ... done
  Created wheel for imantics: filename=imantics-0.1.12-cp36-none-any.whl size=16034 sha256=f5724970536ff60df0f5669aef09a9bcd471861fbe8d1d7d5fdce02c34ee4815
  Stored in directory: /root/.cache/pip/wheels/73/93/1c/9e2fc52eb74441941bc76cac441ddcc2c7ad67b18e1849e62a
Successfully built imantics
Installing collected packages: xmljson, imantics
Successfully installed imantics-0.1.12 xmljson-0.2.1
In [ ]:
# For data preprocessing & manipulation
import numpy as np
import pandas as pd

# FOr data visualisations & graphs
import matplotlib.pyplot as plt
import plotly.graph_objects as go
import plotly.express as px
from plotly.subplots import make_subplots

# utilities
from tqdm.notebook import tqdm
import datetime 
from IPython.display import HTML
import os

# For Deep learning
import tensorflow as tf
from tensorflow_examples.models.pix2pix import pix2pix
import tensorflow_datasets as tfds
import tensorflow_addons as tfa

# For Image Preprocessing
import cv2

# Detectron2


import detectron2
from detectron2.utils.logger import setup_logger
from imantics import Polygons, Mask
setup_logger()

import random

# import some common detectron2 utilities
from detectron2 import model_zoo

from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg
from detectron2.utils.visualizer import Visualizer
from detectron2.data import MetadataCatalog
from detectron2.structures import BoxMode

from pycocotools import mask
from skimage import measure

from detectron2.data import DatasetCatalog, MetadataCatalog

from detectron2.engine import DefaultTrainer
from detectron2.config import get_cfg

# Setting a bigger figure size
plt.rcParams["figure.figsize"] = (20, 15)

Data Exploration 🧐

In this section we are going to explore our dataset, firstly load it and seeing some array, categories and then image visualisations

Reading Our Dataset

In [ ]:
# Reading our Training dataset ( Seismic Images | X )
data = np.load("/content/data_train.npz", 
               allow_pickle=True, mmap_mode = 'r')

# Reading our Traning Dataset ( Labels | Y)
labels = np.load("/content/labels_train.npz", 
                 allow_pickle=True, mmap_mode = 'r')

# Picking the actual data
X = data['data']
Y = labels['labels']
In [ ]:
# Dimensions of features & labels 

X.shape, Y.shape
In [ ]:
# Showing the data

X[:, 6, :], Y[:, 6, :]

Here we are making a 2D array of image, so we are picking the 6th index of X axis and seing the Z and Y axis values!

Also it looks like that we have got negative values also in X, but the Y looks good!

In [ ]:
np.unique(Y)

Ther are 6 different unique values in labels, as said before, each pixel can be classified into 6 different labels

Image Visualisations

In [ ]:
# Making a subplot with 1 row and 2 column
fig = make_subplots(1, 2, subplot_titles=("Image", "Label"))

# Visualising a section of the 3D array
fig.add_trace(go.Heatmap(z=X[:, :, 70][:300, :300]), 1, 1)

fig.add_trace(go.Heatmap(z=Y[:, :, 70][:300, :300]), 1, 2)

fig.update_layout(height=600, width=1100, title_text="Seismic Image & Label")

HTML(fig.to_html())
Output hidden; open in https://colab.research.google.com to view.
In [ ]:
# Making a subplot with 1 row and 2 column
fig = make_subplots(1, 2, subplot_titles=("Image", "Label"), specs=[[{"type": "Surface"}, {"type": "Surface"}]])

# Making a 3D Surphace graph with image and corresponsing label
fig.add_trace(go.Surface(z=X[:,75, :][:300, :300]), 1, 1)
fig.add_trace(go.Surface(z=Y[:,75, :][:300, :300]), 1, 2)

fig.update_layout(height=600, width=1100, title_text="Seismic Image & Label in 3D!")

HTML(fig.to_html())
Output hidden; open in https://colab.research.google.com to view.
In [ ]:
# Making a subplot with 1 row and 2 column
fig = make_subplots(1, 2, subplot_titles=("Image", "Label"))

# Making a contour graph
fig.add_trace(go.Contour(
        z=X[:,34, :][:300, :300]), 1, 1)

fig.add_trace(go.Contour(
        z=Y[:,34, :][:300, :300]
    ), 1, 2)


fig.update_layout(height=600, width=1100, title_text="Seismic Image & Label in with contours")

HTML(fig.to_html())
Output hidden; open in https://colab.research.google.com to view.
In [ ]:
# Making a subplot with 2 row and 2 column
fig = make_subplots(2, 2, subplot_titles=("Image", "Label", "Label Histogram"))

# Making a contour graph
fig.add_trace(go.Contour(
        z=X[:,34, :][:300, :300], contours_coloring='lines',
        line_width=2,), 1, 1)

# Showing the label ( also the contour )
fig.add_trace(go.Contour(
        z=Y[:,34, :][:300, :300]
    ), 1, 2)

# Showing histogram for the label column
fig.add_trace(go.Histogram(x=Y[:,34, :][:300, :300].ravel()), 2, 1)


fig.update_layout(height=800, width=1100, title_text="Seismic Image & Label in with contours ( only line )")

HTML(fig.to_html())
Output hidden; open in https://colab.research.google.com to view.
In [ ]:
# Making a subplot with 2 row and 1 column
fig = make_subplots(2, 1, subplot_titles=("Image", "label"))

# Making a contour graph
fig.add_trace(
    go.Contour(
        z=X[:,:, 56][:200, :200]
    ), 1, 1)

fig.add_trace(go.Contour(
        z=Y[:,:, 56][:200, :200]
    ), 2, 1)

fig.update_layout(height=1000, width=1100, title_text="Seismic Image & Label in with contours ( More Closer Look )")

HTML(fig.to_html())
Output hidden; open in https://colab.research.google.com to view.

Image Preprocessing Techniqes 🧹

In this section we are going to take a look at some image processing technique to see how we can improve the features so that our model and give more accuracy!

In [ ]:
# Reading a sample seismic image with label
img = X[:,:, 56]
label = Y[:, :, 56]

plt.imshow(img, cmap='gray')
plt.show()
plt.imshow(label)
Out[ ]:
<matplotlib.image.AxesImage at 0x7fa9b9630128>
In [ ]:
# Image Thresholding
ret,thresh1 = cv2.threshold(img,0,255,cv2.THRESH_TOZERO)
plt.imshow(thresh1, cmap='gray')
Out[ ]:
<matplotlib.image.AxesImage at 0x7f9be3d7de10>
In [ ]:
# Sobel Y
sobely = cv2.Sobel(img,cv2.CV_64F, 0, 4,ksize=5)
plt.imshow(sobely, cmap='gray')
Out[ ]:
<matplotlib.image.AxesImage at 0x7f9be6bef080>
In [ ]:
# Erosion

kernel = np.ones((5,5),np.uint8)
erosion = cv2.erode(img,kernel,iterations = 1)
plt.imshow(erosion, cmap='gray')
Out[ ]:
<matplotlib.image.AxesImage at 0x7f9be6e8b978>
In [ ]:
# Dialation

dilation = cv2.dilate(img,kernel,iterations = 1)
plt.imshow(dilation, cmap='gray')
Out[ ]:
<matplotlib.image.AxesImage at 0x7f9be3492f98>
In [ ]:
# Sharping Image

kernel = np.array([[0, -1, -1],[2, -1, 2],[-1, 2, -1]], np.float32) 

sharp = cv2.filter2D(thresh1, -1, kernel)

plt.imshow(sharp, cmap='gray')
Out[ ]:
<matplotlib.image.AxesImage at 0x7f9be5d56358>
In [ ]:
# Making a subplot containing all image preprocessing

fig,a =  plt.subplots(4,2)

x = np.arange(1,5)

plt.title("All Image Processing")

a[0][0].imshow(img , cmap='gray')
a[0][0].set_title('Original')

a[0][1].imshow(label)
a[0][1].set_title('Label')

a[1][0].imshow(thresh1, cmap='gray')
a[1][0].set_title('Threshold')

a[1][1].imshow(sobely, cmap='gray')
a[1][1].set_title('Sobel Y')

a[2][0].imshow(erosion, cmap='gray')
a[2][0].set_title('Erosion')

a[2][1].imshow(dilation, cmap='gray')
a[2][1].set_title('Dialation')

a[3][0].imshow(sharp, cmap='gray')
a[3][0].set_title('Sharpen')

fig.delaxes(a[3,1])



plt.show()

Creating our Model 🏭

IN this section we are going to create a UNet model from scratch using keras & tensorflow!

Creating UNet Model

The tensorflow guide on image segmentation https://www.tensorflow.org/tutorials/images/segmentation helped me a lot of implment UNet Model. Really Recommended to take a look at that before continuing forward!

In [ ]:
# Making the Base Model First

# Using transfer learning ( MobileNetV2 Model )
base_model = tf.keras.applications.MobileNetV2(input_shape=[128, 128, 3], include_top=False)

# Use the activations of these layers
layer_names = [
    'block_1_expand_relu',   # 64x64
    'block_3_expand_relu',   # 32x32
    'block_6_expand_relu',   # 16x16
    'block_13_expand_relu',  # 8x8
    'block_16_project',      # 4x4
]
layers = [base_model.get_layer(name).output for name in layer_names]

# Creating thr base Model
down_stack = tf.keras.Model(inputs=base_model.input, outputs=layers)

# Setting base model trainable to false
down_stack.trainable = False
Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/mobilenet_v2/mobilenet_v2_weights_tf_dim_ordering_tf_kernels_1.0_128_no_top.h5
9412608/9406464 [==============================] - 0s 0us/step
In [ ]:
up_stack = [
    pix2pix.upsample(512, 3), 
    pix2pix.upsample(256, 3), 
    pix2pix.upsample(128, 3), 
    pix2pix.upsample(64, 3),
]
In [ ]:
# Making the unet model

def unet_model():
  inputs = tf.keras.layers.Input(shape=[128, 128, 3])
  x = inputs

  # Downsampling through the model
  skips = down_stack(x)
  x = skips[-1]
  skips = reversed(skips[:-1])

  # Upsampling and establishing the skip connections
  for up, skip in zip(up_stack, skips):
    x = up(x)
    concat = tf.keras.layers.Concatenate()
    x = concat([x, skip])

  # This is the last layer of the model
  last = tf.keras.layers.Conv2DTranspose(
      1, 3, strides=2,
      padding='same') 

  x = last(x)

  return tf.keras.Model(inputs=inputs, outputs=x)
In [ ]:
# Creating the Model
model = unet_model()
In [ ]:
tf.keras.utils.plot_model(model, show_shapes=True)
Out[ ]:

Setting up hyperparameters & Callbacks

Dice Loss

In [ ]:
# Here https://stackoverflow.com/questions/49012025/generalized-dice-loss-for-multi-class-segmentation-keras-implementation

def gen_dice(y_true, y_pred, eps=1e-6):
    """both tensors are [b, h, w, classes] and y_pred is in logit form"""

    # [b, h, w, classes]
    pred_tensor = tf.nn.softmax(y_pred)
    y_true_shape = tf.shape(y_true)

    # [b, h*w, classes]
    y_true = tf.reshape(y_true, [-1, y_true_shape[1]*y_true_shape[2], y_true_shape[3]])
    y_pred = tf.reshape(pred_tensor, [-1, y_true_shape[1]*y_true_shape[2], y_true_shape[3]])

    # [b, classes]
    # count how many of each class are present in 
    # each image, if there are zero, then assign
    # them a fixed weight of eps
    counts = tf.reduce_sum(y_true, axis=1)
    weights = 1. / (counts ** 2)
    weights = tf.where(tf.math.is_finite(weights), weights, eps)

    multed = tf.reduce_sum(y_true * y_pred, axis=1)
    summed = tf.reduce_sum(y_true + y_pred, axis=1)

    # [b]
    numerators = tf.reduce_sum(weights*multed, axis=-1)
    denom = tf.reduce_sum(weights*summed, axis=-1)
    dices = 1. - 2. * numerators / denom
    dices = tf.where(tf.math.is_finite(dices), dices, tf.zeros_like(dices))
    return tf.reduce_mean(dices)
In [ ]:
log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1, write_images=True, update_freq='batch')

model.compile(optimizer='adam',
              loss='mean_squared_error', 
              metrics=['accuracy'])

Creating our Dataset 🔨

In this section we are going to load all the dataset into memory/RAM ( because memory/RAM is faster then hard drive/SSD 😄 ) and then getting into right shape

Loading data into memory

In [ ]:
with np.load('/content/data_train.npz') as dataset:
        train_dataset = dataset['data']

with np.load('/content/labels_train.npz') as labels:
        train_labels = labels['labels']
In [ ]:
train_dataset.shape
Out[ ]:
(1006, 782, 590)

Making 2D Images

In [ ]:
training_img_data = []
training_label_data = []

for i in tqdm(range(0, 580)):
  img = train_dataset[:, :, i]
  label = train_labels[:, :, i]

  img = np.expand_dims(img, axis=2).astype('float32')
  label = np.expand_dims(label, axis=2).astype('float32')

  img = cv2.resize(img, (128, 128))
  label = cv2.resize(label, (128, 128))

  img = img/np.amax(img)
  img = np.clip(img, 0, 255)
  img = (img*255).astype(int)

  img = img/255.

  img = cv2.merge([img,img,img])

  training_img_data.append(img) 
  training_label_data.append(label)

In [ ]:
# Changing it into a numpy array

training_img_data = np.asarray(training_img_data)
training_label_data = np.asarray(training_label_data)
training_img_data.shape, training_label_data.shape
Out[ ]:
((580, 128, 128, 3), (580, 128, 128))
In [ ]:
training_img_data[0, :, :, 0]
Out[ ]:
array([[0.        , 0.        , 0.        , ..., 0.        , 0.        ,
        0.00392157],
       [0.        , 0.        , 0.        , ..., 0.        , 0.        ,
        0.        ],
       [0.14509804, 0.18823529, 0.17647059, ..., 0.        , 0.        ,
        0.02352941],
       ...,
       [0.04313725, 0.        , 0.08627451, ..., 0.00392157, 0.        ,
        0.        ],
       [0.        , 0.01176471, 0.10980392, ..., 0.        , 0.        ,
        0.05490196],
       [0.        , 0.        , 0.00784314, ..., 0.        , 0.        ,
        0.05098039]])
In [ ]:
plt.imshow(training_img_data[0, :, :])
Out[ ]:
<matplotlib.image.AxesImage at 0x7f4a05b41f60>

Training the Model 🚂

Setting up Tensorboard

In [ ]:
%load_ext tensorboard
In [ ]:
%tensorboard --logdir logs

Start Training!

In [ ]:
model_history = model.fit(training_img_data, training_label_data, 
                          validation_split=0.1,
                          epochs=20,
                          callbacks=[tensorboard_callback])
Epoch 1/20
17/17 [==============================] - 4s 238ms/step - loss: 0.4915 - accuracy: 0.1291 - val_loss: 0.7476 - val_accuracy: 0.2153
Epoch 2/20
17/17 [==============================] - 4s 227ms/step - loss: 0.4688 - accuracy: 0.1297 - val_loss: 0.6843 - val_accuracy: 0.2156
Epoch 3/20
17/17 [==============================] - 4s 229ms/step - loss: 0.4519 - accuracy: 0.1306 - val_loss: 0.6528 - val_accuracy: 0.2170
Epoch 4/20
17/17 [==============================] - 4s 228ms/step - loss: 0.4139 - accuracy: 0.1310 - val_loss: 0.6454 - val_accuracy: 0.2189
Epoch 5/20
17/17 [==============================] - 4s 229ms/step - loss: 0.3846 - accuracy: 0.1320 - val_loss: 0.6539 - val_accuracy: 0.2202
Epoch 6/20
17/17 [==============================] - 4s 227ms/step - loss: 0.3673 - accuracy: 0.1324 - val_loss: 0.6464 - val_accuracy: 0.2198
Epoch 7/20
17/17 [==============================] - 4s 227ms/step - loss: 0.3555 - accuracy: 0.1329 - val_loss: 0.6542 - val_accuracy: 0.2211
Epoch 8/20
17/17 [==============================] - 4s 228ms/step - loss: 0.3348 - accuracy: 0.1338 - val_loss: 0.6279 - val_accuracy: 0.2204
Epoch 9/20
17/17 [==============================] - 4s 232ms/step - loss: 0.3249 - accuracy: 0.1338 - val_loss: 0.6186 - val_accuracy: 0.2213
Epoch 10/20
17/17 [==============================] - 4s 237ms/step - loss: 0.3117 - accuracy: 0.1347 - val_loss: 0.5655 - val_accuracy: 0.2209
Epoch 11/20
17/17 [==============================] - 4s 234ms/step - loss: 0.2918 - accuracy: 0.1348 - val_loss: 0.5657 - val_accuracy: 0.2216
Epoch 12/20
17/17 [==============================] - 4s 235ms/step - loss: 0.2778 - accuracy: 0.1356 - val_loss: 0.5547 - val_accuracy: 0.2215
Epoch 13/20
17/17 [==============================] - 4s 231ms/step - loss: 0.2623 - accuracy: 0.1357 - val_loss: 0.5934 - val_accuracy: 0.2218
Epoch 14/20
17/17 [==============================] - 4s 228ms/step - loss: 0.2556 - accuracy: 0.1360 - val_loss: 0.5359 - val_accuracy: 0.2217
Epoch 15/20
17/17 [==============================] - 4s 230ms/step - loss: 0.2467 - accuracy: 0.1365 - val_loss: 0.4920 - val_accuracy: 0.2215
Epoch 16/20
17/17 [==============================] - 4s 234ms/step - loss: 0.2324 - accuracy: 0.1364 - val_loss: 0.5006 - val_accuracy: 0.2217
Epoch 17/20
17/17 [==============================] - 4s 235ms/step - loss: 0.2231 - accuracy: 0.1370 - val_loss: 0.5232 - val_accuracy: 0.2219
Epoch 18/20
17/17 [==============================] - 4s 228ms/step - loss: 0.2186 - accuracy: 0.1371 - val_loss: 0.4991 - val_accuracy: 0.2216
Epoch 19/20
17/17 [==============================] - 4s 228ms/step - loss: 0.2037 - accuracy: 0.1375 - val_loss: 0.4891 - val_accuracy: 0.2217
Epoch 20/20
17/17 [==============================] - 4s 228ms/step - loss: 0.1930 - accuracy: 0.1375 - val_loss: 0.5006 - val_accuracy: 0.2222

We have got overfitting, but that i will leave that up to you how you improve that 🙂

Evaluating the model 🧪

In [ ]:
pred_mask = model.predict(training_img_data)
In [ ]:
plt.imshow(pred_mask[0, :, :, 0])
Out[ ]:
<matplotlib.image.AxesImage at 0x7f01fbd95208>
In [ ]:
plt.imshow(training_label_data[0, :, :])
Out[ ]:
<matplotlib.image.AxesImage at 0x7f01fbd73ba8>

Testing on test Data 💯

In this section we are going to test the model using testing set and then saving all our predictions

In [ ]:
# Reading the test data
test = np.load("/content/data_test_1.npz", 
                 allow_pickle=True, mmap_mode = 'r')

test_data = test['data']
In [ ]:
# Function to Preprocessing the inputs to match the model input

def preprocess_input(data, axis):
  for i in range(0, axis):
    img = test_data[i, :, :]
    img = np.expand_dims(img, axis=2).astype('float32')
    

    img = cv2.resize(img, (128, 128))

    img = img/np.amax(img)
    img = np.clip(img, 0, 255)
    img = (img*255).astype(int)
    img = cv2.merge([img,img,img])

    data.append(img)

  return data
In [ ]:
test_image = []
# Preprocessing the inputs
test_image = preprocess_input(test_image, 1006)

# Converting it into a numpy array
test_image = np.asarray(test_image)
In [ ]:
test_image.shape
Out[ ]:
((1006, 128, 128, 3),)
In [ ]:
# Predicting all images and converting it to each pixel to integers
test_predictions = model.predict(test_image).astype(int)
In [ ]:
np.unique(test_predictions), test_predictions.shape
Out[ ]:
(array([0, 1, 2, 3, 4, 5, 6, 7, 8]), (1006, 128, 128, 1))
In [ ]:
# Making the pixel values in range 1 - 6
test_predictions[test_predictions > 6] = 6
test_predictions[test_predictions < 1] = 1

np.unique(test_predictions)
Out[ ]:
array([1, 2, 3, 4, 5, 6])
In [ ]:
# Function Resizing the images to match the output

def resize_img(data, shape):
  local_data = []
  for i in data:
    img = i[:, :, 0].astype('float32')
    img = cv2.resize(img, shape)
    local_data.append(img)
  return np.asarray(local_data)
In [ ]:
# Resizing the image

test_predictions = resize_img(test_predictions, (251, 782))
In [ ]:
# Making sure that the output matches

test_predictions.shape, test_predictions.dtype
Out[ ]:
((1006, 782, 251), dtype('float32'))
In [ ]:
# Converting into intergers

test_predictions = test_predictions.astype(int)
test_predictions.dtype, test_predictions
Out[ ]:
(dtype('int64'), array([[[3, 3, 3, ..., 4, 4, 4],
         [3, 3, 3, ..., 4, 4, 4],
         [3, 3, 3, ..., 4, 4, 4],
         ...,
         [1, 1, 1, ..., 2, 2, 2],
         [1, 1, 1, ..., 2, 2, 2],
         [1, 1, 1, ..., 2, 2, 2]],
 
        [[3, 3, 3, ..., 3, 3, 3],
         [3, 3, 3, ..., 3, 3, 3],
         [3, 3, 3, ..., 3, 3, 3],
         ...,
         [1, 1, 1, ..., 1, 1, 2],
         [1, 1, 1, ..., 1, 1, 2],
         [1, 1, 1, ..., 1, 1, 2]],
 
        [[3, 3, 3, ..., 4, 4, 4],
         [3, 3, 3, ..., 4, 4, 4],
         [3, 3, 3, ..., 4, 4, 4],
         ...,
         [1, 1, 1, ..., 2, 2, 3],
         [1, 1, 1, ..., 2, 2, 3],
         [1, 1, 1, ..., 2, 2, 3]],
 
        ...,
 
        [[3, 3, 3, ..., 4, 4, 4],
         [3, 3, 3, ..., 4, 4, 4],
         [3, 3, 3, ..., 4, 4, 4],
         ...,
         [1, 1, 1, ..., 1, 1, 1],
         [1, 1, 1, ..., 1, 1, 1],
         [1, 1, 1, ..., 1, 1, 1]],
 
        [[3, 3, 3, ..., 3, 3, 4],
         [3, 3, 3, ..., 3, 3, 4],
         [3, 3, 3, ..., 3, 3, 4],
         ...,
         [1, 1, 1, ..., 1, 1, 1],
         [1, 1, 1, ..., 1, 1, 1],
         [1, 1, 1, ..., 1, 1, 1]],
 
        [[3, 3, 3, ..., 3, 3, 4],
         [3, 3, 3, ..., 3, 3, 4],
         [3, 3, 3, ..., 3, 3, 4],
         ...,
         [1, 1, 1, ..., 1, 1, 1],
         [1, 1, 1, ..., 1, 1, 1],
         [1, 1, 1, ..., 1, 1, 1]]]))
In [ ]:
# Saving the Predictions

np.savez_compressed(
    "prediction.npz",
    prediction=test_predictions
)

Generate More Data + Some tips & tricks 💡

  • MSE loss in not normally used in Image Segmentation , try different!
  • Data Argumentation isn't done, we can try that, it will improve significantly
  • I didn't train the model on complete dataset, this also can be done ( X and y indices can also be trained ! )
  • Accuracy is not a metrics for image segmentation, try different like Dice or something similar

Keras U-NET

In [ ]:
one_hot_train_label_data = []
for img in training_label_data:

  img = img.astype(int)
  
  one_hot_train_label_data.append(np.eye(img.max()+1)[img])  

one_hot_train_label_data = np.array(one_hot_train_label_data)

one_hot_train_label_data.shape
Out[ ]:
(580, 128, 128, 7)
In [ ]:
one_hot_train_label_data[0, :, :, 1]
Out[ ]:
array([[0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.],
       ...,
       [1., 1., 1., ..., 1., 1., 1.],
       [1., 1., 1., ..., 1., 1., 1.],
       [1., 1., 1., ..., 1., 1., 1.]])
In [ ]:
from keras_unet.utils import plot_imgs

plot_imgs(org_imgs=training_img_data, mask_imgs=one_hot_train_label_data[:, :, :, 1], nm_img_to_plot=10, figsize=6)
In [ ]:
from keras_unet.utils import get_augmented

#.reshape(training_label_data.shape[0], training_label_data.shape[1], training_label_data.shape[2], 1)

train_gen = get_augmented(
    training_img_data, one_hot_train_label_data[:,:,:,1].reshape(training_label_data.shape[0], training_label_data.shape[1], training_label_data.shape[2], 1), batch_size=2,
    data_gen_args = dict(
        rotation_range=15.,
        width_shift_range=0.05,
        height_shift_range=0.05,
        shear_range=50,
        zoom_range=0.2,
        horizontal_flip=True,
        vertical_flip=True,
        fill_mode='constant'
    ))
In [ ]:
sample_batch = next(train_gen)
xx, yy = sample_batch
print(xx.shape, yy.shape)
from keras_unet.utils import plot_imgs

plot_imgs(org_imgs=xx, mask_imgs=yy, nm_img_to_plot=2, figsize=6)
(2, 128, 128, 3) (2, 128, 128, 1)
In [ ]:
from keras_unet.models import custom_unet

input_shape = training_img_data[0].shape

model = custom_unet(
    input_shape,
    use_batch_norm=True,
    num_classes=1,
    filters=64,
    dropout=0.2,
    output_activation='sigmoid'
)
In [ ]:
model.summary()
Model: "functional_1"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            [(None, 128, 128, 3) 0                                            
__________________________________________________________________________________________________
conv2d (Conv2D)                 (None, 128, 128, 64) 1728        input_1[0][0]                    
__________________________________________________________________________________________________
batch_normalization (BatchNorma (None, 128, 128, 64) 256         conv2d[0][0]                     
__________________________________________________________________________________________________
spatial_dropout2d (SpatialDropo (None, 128, 128, 64) 0           batch_normalization[0][0]        
__________________________________________________________________________________________________
conv2d_1 (Conv2D)               (None, 128, 128, 64) 36864       spatial_dropout2d[0][0]          
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 128, 128, 64) 256         conv2d_1[0][0]                   
__________________________________________________________________________________________________
max_pooling2d (MaxPooling2D)    (None, 64, 64, 64)   0           batch_normalization_1[0][0]      
__________________________________________________________________________________________________
conv2d_2 (Conv2D)               (None, 64, 64, 128)  73728       max_pooling2d[0][0]              
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 64, 64, 128)  512         conv2d_2[0][0]                   
__________________________________________________________________________________________________
spatial_dropout2d_1 (SpatialDro (None, 64, 64, 128)  0           batch_normalization_2[0][0]      
__________________________________________________________________________________________________
conv2d_3 (Conv2D)               (None, 64, 64, 128)  147456      spatial_dropout2d_1[0][0]        
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 64, 64, 128)  512         conv2d_3[0][0]                   
__________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D)  (None, 32, 32, 128)  0           batch_normalization_3[0][0]      
__________________________________________________________________________________________________
conv2d_4 (Conv2D)               (None, 32, 32, 256)  294912      max_pooling2d_1[0][0]            
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 32, 32, 256)  1024        conv2d_4[0][0]                   
__________________________________________________________________________________________________
spatial_dropout2d_2 (SpatialDro (None, 32, 32, 256)  0           batch_normalization_4[0][0]      
__________________________________________________________________________________________________
conv2d_5 (Conv2D)               (None, 32, 32, 256)  589824      spatial_dropout2d_2[0][0]        
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, 32, 32, 256)  1024        conv2d_5[0][0]                   
__________________________________________________________________________________________________
max_pooling2d_2 (MaxPooling2D)  (None, 16, 16, 256)  0           batch_normalization_5[0][0]      
__________________________________________________________________________________________________
conv2d_6 (Conv2D)               (None, 16, 16, 512)  1179648     max_pooling2d_2[0][0]            
__________________________________________________________________________________________________
batch_normalization_6 (BatchNor (None, 16, 16, 512)  2048        conv2d_6[0][0]                   
__________________________________________________________________________________________________
spatial_dropout2d_3 (SpatialDro (None, 16, 16, 512)  0           batch_normalization_6[0][0]      
__________________________________________________________________________________________________
conv2d_7 (Conv2D)               (None, 16, 16, 512)  2359296     spatial_dropout2d_3[0][0]        
__________________________________________________________________________________________________
batch_normalization_7 (BatchNor (None, 16, 16, 512)  2048        conv2d_7[0][0]                   
__________________________________________________________________________________________________
max_pooling2d_3 (MaxPooling2D)  (None, 8, 8, 512)    0           batch_normalization_7[0][0]      
__________________________________________________________________________________________________
conv2d_8 (Conv2D)               (None, 8, 8, 1024)   4718592     max_pooling2d_3[0][0]            
__________________________________________________________________________________________________
batch_normalization_8 (BatchNor (None, 8, 8, 1024)   4096        conv2d_8[0][0]                   
__________________________________________________________________________________________________
spatial_dropout2d_4 (SpatialDro (None, 8, 8, 1024)   0           batch_normalization_8[0][0]      
__________________________________________________________________________________________________
conv2d_9 (Conv2D)               (None, 8, 8, 1024)   9437184     spatial_dropout2d_4[0][0]        
__________________________________________________________________________________________________
batch_normalization_9 (BatchNor (None, 8, 8, 1024)   4096        conv2d_9[0][0]                   
__________________________________________________________________________________________________
conv2d_transpose (Conv2DTranspo (None, 16, 16, 512)  2097664     batch_normalization_9[0][0]      
__________________________________________________________________________________________________
concatenate (Concatenate)       (None, 16, 16, 1024) 0           conv2d_transpose[0][0]           
                                                                 batch_normalization_7[0][0]      
__________________________________________________________________________________________________
conv2d_10 (Conv2D)              (None, 16, 16, 512)  4718592     concatenate[0][0]                
__________________________________________________________________________________________________
batch_normalization_10 (BatchNo (None, 16, 16, 512)  2048        conv2d_10[0][0]                  
__________________________________________________________________________________________________
conv2d_11 (Conv2D)              (None, 16, 16, 512)  2359296     batch_normalization_10[0][0]     
__________________________________________________________________________________________________
batch_normalization_11 (BatchNo (None, 16, 16, 512)  2048        conv2d_11[0][0]                  
__________________________________________________________________________________________________
conv2d_transpose_1 (Conv2DTrans (None, 32, 32, 256)  524544      batch_normalization_11[0][0]     
__________________________________________________________________________________________________
concatenate_1 (Concatenate)     (None, 32, 32, 512)  0           conv2d_transpose_1[0][0]         
                                                                 batch_normalization_5[0][0]      
__________________________________________________________________________________________________
conv2d_12 (Conv2D)              (None, 32, 32, 256)  1179648     concatenate_1[0][0]              
__________________________________________________________________________________________________
batch_normalization_12 (BatchNo (None, 32, 32, 256)  1024        conv2d_12[0][0]                  
__________________________________________________________________________________________________
conv2d_13 (Conv2D)              (None, 32, 32, 256)  589824      batch_normalization_12[0][0]     
__________________________________________________________________________________________________
batch_normalization_13 (BatchNo (None, 32, 32, 256)  1024        conv2d_13[0][0]                  
__________________________________________________________________________________________________
conv2d_transpose_2 (Conv2DTrans (None, 64, 64, 128)  131200      batch_normalization_13[0][0]     
__________________________________________________________________________________________________
concatenate_2 (Concatenate)     (None, 64, 64, 256)  0           conv2d_transpose_2[0][0]         
                                                                 batch_normalization_3[0][0]      
__________________________________________________________________________________________________
conv2d_14 (Conv2D)              (None, 64, 64, 128)  294912      concatenate_2[0][0]              
__________________________________________________________________________________________________
batch_normalization_14 (BatchNo (None, 64, 64, 128)  512         conv2d_14[0][0]                  
__________________________________________________________________________________________________
conv2d_15 (Conv2D)              (None, 64, 64, 128)  147456      batch_normalization_14[0][0]     
__________________________________________________________________________________________________
batch_normalization_15 (BatchNo (None, 64, 64, 128)  512         conv2d_15[0][0]                  
__________________________________________________________________________________________________
conv2d_transpose_3 (Conv2DTrans (None, 128, 128, 64) 32832       batch_normalization_15[0][0]     
__________________________________________________________________________________________________
concatenate_3 (Concatenate)     (None, 128, 128, 128 0           conv2d_transpose_3[0][0]         
                                                                 batch_normalization_1[0][0]      
__________________________________________________________________________________________________
conv2d_16 (Conv2D)              (None, 128, 128, 64) 73728       concatenate_3[0][0]              
__________________________________________________________________________________________________
batch_normalization_16 (BatchNo (None, 128, 128, 64) 256         conv2d_16[0][0]                  
__________________________________________________________________________________________________
conv2d_17 (Conv2D)              (None, 128, 128, 64) 36864       batch_normalization_16[0][0]     
__________________________________________________________________________________________________
batch_normalization_17 (BatchNo (None, 128, 128, 64) 256         conv2d_17[0][0]                  
__________________________________________________________________________________________________
conv2d_18 (Conv2D)              (None, 128, 128, 1)  65          batch_normalization_17[0][0]     
==================================================================================================
Total params: 31,049,409
Trainable params: 31,037,633
Non-trainable params: 11,776
__________________________________________________________________________________________________
In [ ]:
from tensorflow.keras.callbacks import ModelCheckpoint


model_filename = 'segm_model_v0.h5'
callback_checkpoint = ModelCheckpoint(
    model_filename, 
    verbose=1, 
    monitor='val_loss', 
    save_best_only=True,
)
In [ ]:
from keras.optimizers import Adam, SGD
from keras_unet.metrics import iou, iou_thresholded
from keras_unet.losses import jaccard_distance

model.compile(
    #optimizer=Adam(), 
    optimizer=SGD(lr=0.01, momentum=0.99),
    loss='binary_crossentropy',
    #loss=jaccard_distance,
    metrics=[iou, iou_thresholded]
)
In [ ]:
history = model.fit_generator(
    train_gen,
    steps_per_epoch=100,
    epochs=10,
    
    #validation_data=(x_val, y_val),
    callbacks=[callback_checkpoint]
)
WARNING:tensorflow:From <ipython-input-28-5f8247f1ec0f>:7: Model.fit_generator (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.
Instructions for updating:
Please use Model.fit, which supports generators.
Epoch 1/10
 99/100 [============================>.] - ETA: 0s - loss: 0.3179 - iou: 0.1798 - iou_thresholded: 0.1769WARNING:tensorflow:Can save best model only with val_loss available, skipping.
100/100 [==============================] - 4s 36ms/step - loss: 0.3170 - iou: 0.1808 - iou_thresholded: 0.1792
Epoch 2/10
 99/100 [============================>.] - ETA: 0s - loss: 0.2129 - iou: 0.2553 - iou_thresholded: 0.3197WARNING:tensorflow:Can save best model only with val_loss available, skipping.
100/100 [==============================] - 4s 36ms/step - loss: 0.2129 - iou: 0.2549 - iou_thresholded: 0.3198
Epoch 3/10
 99/100 [============================>.] - ETA: 0s - loss: 0.1847 - iou: 0.3012 - iou_thresholded: 0.3614WARNING:tensorflow:Can save best model only with val_loss available, skipping.
100/100 [==============================] - 4s 36ms/step - loss: 0.1844 - iou: 0.3014 - iou_thresholded: 0.3621
Epoch 4/10
 99/100 [============================>.] - ETA: 0s - loss: 0.1448 - iou: 0.3994 - iou_thresholded: 0.4964WARNING:tensorflow:Can save best model only with val_loss available, skipping.
100/100 [==============================] - 4s 36ms/step - loss: 0.1458 - iou: 0.3987 - iou_thresholded: 0.4952
Epoch 5/10
 99/100 [============================>.] - ETA: 0s - loss: 0.1389 - iou: 0.4160 - iou_thresholded: 0.5294WARNING:tensorflow:Can save best model only with val_loss available, skipping.
100/100 [==============================] - 4s 36ms/step - loss: 0.1397 - iou: 0.4156 - iou_thresholded: 0.5287
Epoch 6/10
 99/100 [============================>.] - ETA: 0s - loss: 0.1223 - iou: 0.4383 - iou_thresholded: 0.5331WARNING:tensorflow:Can save best model only with val_loss available, skipping.
100/100 [==============================] - 4s 36ms/step - loss: 0.1219 - iou: 0.4408 - iou_thresholded: 0.5357
Epoch 7/10
 99/100 [============================>.] - ETA: 0s - loss: 0.1022 - iou: 0.5253 - iou_thresholded: 0.6189WARNING:tensorflow:Can save best model only with val_loss available, skipping.
100/100 [==============================] - 4s 36ms/step - loss: 0.1020 - iou: 0.5263 - iou_thresholded: 0.6203
Epoch 8/10
 99/100 [============================>.] - ETA: 0s - loss: 0.1027 - iou: 0.5380 - iou_thresholded: 0.6494WARNING:tensorflow:Can save best model only with val_loss available, skipping.
100/100 [==============================] - 4s 36ms/step - loss: 0.1033 - iou: 0.5357 - iou_thresholded: 0.6463
Epoch 9/10
 99/100 [============================>.] - ETA: 0s - loss: 0.0899 - iou: 0.5801 - iou_thresholded: 0.6687WARNING:tensorflow:Can save best model only with val_loss available, skipping.
100/100 [==============================] - 4s 36ms/step - loss: 0.0897 - iou: 0.5793 - iou_thresholded: 0.6681
Epoch 10/10
 99/100 [============================>.] - ETA: 0s - loss: 0.0747 - iou: 0.6282 - iou_thresholded: 0.7151WARNING:tensorflow:Can save best model only with val_loss available, skipping.
100/100 [==============================] - 4s 36ms/step - loss: 0.0751 - iou: 0.6288 - iou_thresholded: 0.7156
In [ ]:


Comments

You must login before you can post a comment.

Execute