CERVC

Baseline for CERVC Challenge

Getting-started code with a simple SVM model for the challenge.

Getting Started Code for the CERVC Educational Challenge

Author: Ayush Shivani

Download Necessary Packages

In [1]:
!pip install aicrowd-cli
Collecting aicrowd-cli
  Downloading https://files.pythonhosted.org/packages/29/18/2dcc043573e489f6134e4a76644f640874d3fa4d8f3e0593bf54a7c8b53a/aicrowd_cli-0.1.2-py3-none-any.whl (42kB)
     |████████████████████████████████| 51kB 4.1MB/s 
Collecting requests-toolbelt<1,>=0.9.1
  Downloading https://files.pythonhosted.org/packages/60/ef/7681134338fc097acef8d9b2f8abe0458e4d87559c689a8c306d0957ece5/requests_toolbelt-0.9.1-py2.py3-none-any.whl (54kB)
     |████████████████████████████████| 61kB 4.5MB/s 
Collecting gitpython<4,>=3.1.12
  Downloading https://files.pythonhosted.org/packages/a6/99/98019716955ba243657daedd1de8f3a88ca1f5b75057c38e959db22fb87b/GitPython-3.1.14-py3-none-any.whl (159kB)
     |████████████████████████████████| 163kB 11.6MB/s 
Collecting requests<3,>=2.25.1
  Downloading https://files.pythonhosted.org/packages/29/c1/24814557f1d22c56d50280771a17307e6bf87b70727d975fd6b2ce6b014a/requests-2.25.1-py2.py3-none-any.whl (61kB)
     |████████████████████████████████| 61kB 6.5MB/s 
Collecting rich<11,>=10.0.0
  Downloading https://files.pythonhosted.org/packages/1a/da/2a1f064dc620ab47f3f826ae085384084b71ea05c8c21d67f1dfc29189ab/rich-10.1.0-py3-none-any.whl (201kB)
     |████████████████████████████████| 204kB 12.5MB/s 
Collecting tqdm<5,>=4.56.0
  Downloading https://files.pythonhosted.org/packages/72/8a/34efae5cf9924328a8f34eeb2fdaae14c011462d9f0e3fcded48e1266d1c/tqdm-4.60.0-py2.py3-none-any.whl (75kB)
     |████████████████████████████████| 81kB 8.1MB/s 
Requirement already satisfied: click<8,>=7.1.2 in /usr/local/lib/python3.7/dist-packages (from aicrowd-cli) (7.1.2)
Requirement already satisfied: toml<1,>=0.10.2 in /usr/local/lib/python3.7/dist-packages (from aicrowd-cli) (0.10.2)
Collecting gitdb<5,>=4.0.1
  Downloading https://files.pythonhosted.org/packages/ea/e8/f414d1a4f0bbc668ed441f74f44c116d9816833a48bf81d22b697090dba8/gitdb-4.0.7-py3-none-any.whl (63kB)
     |████████████████████████████████| 71kB 7.6MB/s 
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.25.1->aicrowd-cli) (1.24.3)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.25.1->aicrowd-cli) (2020.12.5)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.25.1->aicrowd-cli) (2.10)
Requirement already satisfied: chardet<5,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.25.1->aicrowd-cli) (3.0.4)
Collecting commonmark<0.10.0,>=0.9.0
  Downloading https://files.pythonhosted.org/packages/b1/92/dfd892312d822f36c55366118b95d914e5f16de11044a27cf10a7d71bbbf/commonmark-0.9.1-py2.py3-none-any.whl (51kB)
     |████████████████████████████████| 51kB 6.8MB/s 
Requirement already satisfied: pygments<3.0.0,>=2.6.0 in /usr/local/lib/python3.7/dist-packages (from rich<11,>=10.0.0->aicrowd-cli) (2.6.1)
Collecting colorama<0.5.0,>=0.4.0
  Downloading https://files.pythonhosted.org/packages/44/98/5b86278fbbf250d239ae0ecb724f8572af1c91f4a11edf4d36a206189440/colorama-0.4.4-py2.py3-none-any.whl
Requirement already satisfied: typing-extensions<4.0.0,>=3.7.4 in /usr/local/lib/python3.7/dist-packages (from rich<11,>=10.0.0->aicrowd-cli) (3.7.4.3)
Collecting smmap<5,>=3.0.1
  Downloading https://files.pythonhosted.org/packages/68/ee/d540eb5e5996eb81c26ceffac6ee49041d473bc5125f2aa995cf51ec1cf1/smmap-4.0.0-py2.py3-none-any.whl
ERROR: google-colab 1.0.0 has requirement requests~=2.23.0, but you'll have requests 2.25.1 which is incompatible.
ERROR: datascience 0.10.6 has requirement folium==0.2.1, but you'll have folium 0.8.3 which is incompatible.
Installing collected packages: requests, requests-toolbelt, smmap, gitdb, gitpython, commonmark, colorama, rich, tqdm, aicrowd-cli
  Found existing installation: requests 2.23.0
    Uninstalling requests-2.23.0:
      Successfully uninstalled requests-2.23.0
  Found existing installation: tqdm 4.41.1
    Uninstalling tqdm-4.41.1:
      Successfully uninstalled tqdm-4.41.1
Successfully installed aicrowd-cli-0.1.2 colorama-0.4.4 commonmark-0.9.1 gitdb-4.0.7 gitpython-3.1.14 requests-2.25.1 requests-toolbelt-0.9.1 rich-10.1.0 smmap-4.0.0 tqdm-4.60.0
In [ ]:

Download Data

The first step is to download our train and test data. We will train a model on the train data, make predictions on the test data, and then submit those predictions.

In [2]:
API_KEY = "cf6330dec358de63587e9c9a3e7201e1" #Please enter your API Key from [https://www.aicrowd.com/participants/me]
!aicrowd login --api-key $API_KEY
API Key valid
Saved API Key successfully!
In [3]:
!aicrowd dataset download --challenge cervc
test.csv: 100% 6.90k/6.90k [00:00<00:00, 314kB/s]
train.csv: 100% 28.0k/28.0k [00:00<00:00, 1.06MB/s]
In [4]:
!rm -rf data
!mkdir data
!mv train.csv data/train.csv
!mv test.csv data/test.csv

Import packages

In [5]:
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.metrics import f1_score,precision_score,recall_score,accuracy_score

Load Data

  • We use the pandas 🐼 library to load our data.
  • Pandas loads the data into dataframes and makes it easy to analyse.
  • Learn more about it here 🤓
In [6]:
all_data_path = "data/train.csv" #path where data is stored
In [7]:
all_data = pd.read_csv(all_data_path) #load data in dataframe using pandas

Visualize the data 👀

In [8]:
all_data.head()
Out[8]:
Age Number.of.sexual.partners First.sexual.intercourse Num.of.pregnancies Smokes Smokes..years. Hormonal.Contraceptives Hormonal.Contraceptives..years. IUD IUD..years. STDs STDs..number. STDs..Number.of.diagnosis STDs..Time.since.first.diagnosis STDs..Time.since.last.diagnosis Biopsy
0 36 3 20 2 0 0.000000 1 6.00 0 0.0 1 1 1 16 16 0
1 29 2 20 1 0 0.000000 1 0.50 0 0.0 0 0 0 1 1 0
2 36 3 18 3 1 1.266973 1 9.00 0 0.0 0 0 0 1 1 0
3 20 3 17 2 0 0.000000 1 0.25 0 0.0 0 0 0 1 1 1
4 29 5 17 5 0 0.000000 1 0.58 0 0.0 0 0 0 1 1 0
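head() only shows the first few rows. Before modelling it usually pays to also look at summary statistics, column types and missing values. A minimal optional sketch, assuming the all_data dataframe loaded above:

# Summary statistics for every numeric column
print(all_data.describe())

# Column dtypes and non-null counts
all_data.info()

# Missing values per column
print(all_data.isna().sum())

# Class balance of the target column (Biopsy)
print(all_data["Biopsy"].value_counts(normalize=True))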

Split Data into Train and Validation 🔪

  • The next step is to think of a way to test how well our model is performing. We cannot use the given test data, as it does not contain the labels we would need to verify our predictions.
  • The workaround is to split the given training data into training and validation sets. A validation set gives us an idea of how our model will perform on unseen data: we hold back a chunk of the data while training and use it purely for evaluation. It is also the standard way to fine-tune hyperparameters.
  • There are multiple ways to split a dataset into training and validation sets; two popular ones are k-fold cross-validation and leave-one-out (see the sketch after this list). 🧐
  • Validation sets also help you detect whether your model is overfitting the training data.
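As an illustration of the k-fold idea, here is a minimal optional sketch using sklearn's cross_val_score. It is not part of the baseline, which uses the simple split below:

from sklearn.model_selection import cross_val_score, StratifiedKFold

# 5-fold stratified cross-validation: each fold serves once as validation data
features, labels = all_data.iloc[:, :-1], all_data.iloc[:, -1]
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(SVC(gamma='auto'), features, labels, cv=cv, scoring='f1_macro')
print("Macro F1 per fold:", scores)
print("Mean macro F1:", scores.mean())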
In [9]:
X_train, X_val = train_test_split(all_data, test_size=0.2, random_state=42)
  • We have decided to split the data with 20% for validation and 80% for training.
  • To learn more about the train_test_split function, click here. 🧐
  • This is of course the simplest way to validate your model: take a random chunk of the train set and set it aside solely for testing the trained model on unseen data. As mentioned in the previous block, you can experiment 🔬 with more sophisticated techniques to make your model better.
  • Now that the data is split into train and validation sets, we need to separate the labels from the features.
  • With this step we are all set to move on with a prepared dataset.
In [10]:
X_train,y_train = X_train.iloc[:,:-1],X_train.iloc[:,-1]
X_val,y_val = X_val.iloc[:,:-1],X_val.iloc[:,-1]

TRAINING PHASE 🏋️

Define the Model

  • We have prepared our data and are now ready to train our model.

  • There are a ton of classifiers to choose from, such as Logistic Regression, SVM, Random Forests, Decision Trees, etc. 🧐

  • Remember that there are no hard and fast rules here. You can mix and match classifiers; it is advisable to read up on the various techniques and choose the best fit for your solution. Experimentation is the key.

  • A good model does not depend solely on the classifier but also on the features you choose. So make sure to analyse and understand your data well and move forward with a clear view of the problem at hand. You can gain important insight from here. 🧐

In [11]:
classifier = SVC(gamma='auto')

#from sklearn.linear_model import LogisticRegression
# classifier = LogisticRegression()
  • To start you off, we have used a basic Support Vector Machine classifier here.
  • But you can tune its parameters to increase the performance. To see the list of parameters, visit here.
  • Do keep in mind there exist sophisticated techniques for everything; the key, as noted earlier, is to look them up and experiment to fit your implementation.

To read more about other sklearn classifiers, visit here 🧐. Try other classifiers, for example Logistic Regression or MLP, and compare how the performance changes; a sketch of such a comparison follows.
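A minimal sketch of such a comparison, using the train/validation split created above. The StandardScaler pipeline is an extra step that often helps SVMs and MLPs but is not part of the baseline:

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Candidate models, each wrapped with feature scaling
candidates = {
    "SVC": make_pipeline(StandardScaler(), SVC(gamma='auto')),
    "LogisticRegression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=500)),
    "MLP": make_pipeline(StandardScaler(), MLPClassifier(max_iter=500, random_state=42)),
}

# Fit each candidate on the training split and score it on the validation split
for name, model in candidates.items():
    model.fit(X_train, y_train)
    preds = model.predict(X_val)
    print(name, "macro F1:", f1_score(y_val, preds, average='macro'))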

Train the Model

In [12]:
classifier.fit(X_train, y_train)
Out[12]:
SVC(C=1.0, break_ties=False, cache_size=200, class_weight=None, coef0=0.0,
    decision_function_shape='ovr', degree=3, gamma='auto', kernel='rbf',
    max_iter=-1, probability=False, random_state=None, shrinking=True,
    tol=0.001, verbose=False)

Got a warning? Don't worry, it's just because the number of iterations allowed (defined in the classifier in the cell above) is low. Increase the number of iterations and see whether the warning vanishes, and also how the performance changes. Do remember that increasing iterations also increases the running time. (Hint: max_iter=500)

Validation Phase 🤔

Wondering how well your model has learned? Let's check.

Predict on Validation

Now we use our trained model to predict on the validation set we created, evaluating it on unseen data.

In [13]:
y_pred = classifier.predict(X_val)

Evaluate the Performance

  • We have used basic metrics to quantify the performance of our model.
  • This is a crucial step: you should reason about the metrics and take hints from them to improve aspects of your model.
  • Do read up on the meaning and use of different metrics. There exist more metrics and measures; you should learn to use them correctly with respect to the solution, dataset and other factors.
  • F1 score and Log Loss are the metrics for this challenge (a log-loss sketch appears after the printout below).
In [14]:
precision = precision_score(y_val,y_pred,average='micro')
recall = recall_score(y_val,y_pred,average='micro')
accuracy = accuracy_score(y_val,y_pred)
f1 = f1_score(y_val,y_pred,average='macro')
In [15]:
print("Accuracy of the model is :" ,accuracy)
print("Recall of the model is :" ,recall)
print("Precision of the model is :" ,precision)
print("F1 score of the model is :" ,f1)
Accuracy of the model is : 0.9492753623188406
Recall of the model is : 0.9492753623188406
Precision of the model is : 0.9492753623188406
F1 score of the model is : 0.48698884758364314
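Note that the macro F1 (about 0.49) is much lower than the accuracy (about 0.95): a hint that the model is mostly predicting the majority class. The challenge also scores Log Loss, which needs class probabilities; SVC only provides them when probability=True is set. A minimal optional sketch (prob_classifier is an illustrative name, not part of the baseline):

from sklearn.metrics import log_loss

# Refit an SVC that can output class probabilities (probability=True is slower to train)
prob_classifier = SVC(gamma='auto', probability=True)
prob_classifier.fit(X_train, y_train)

# Log loss is computed from predicted probabilities, not hard labels
val_probs = prob_classifier.predict_proba(X_val)
print("Log loss on validation:", log_loss(y_val, val_probs))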

Testing Phase 😅

We are almost done. We have trained and validated on the training data. Now it's time to predict on the test set and make a submission.

Load Test Set

Load the test data on which the final submission is to be made.

In [16]:
final_test_path = "data/test.csv"
final_test = pd.read_csv(final_test_path)

Predict Test Set

Predict on the test set and you are all set to make the submission!

In [17]:
submission = classifier.predict(final_test)

Save the predictions to CSV

In [18]:
submission = pd.DataFrame(submission)
submission.to_csv('submission.csv',header=['Biopsy'],index=False)
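Before submitting, a quick sanity check that the file has one Biopsy prediction per test row can save a rejected submission. A small optional sketch:

# The submission should contain exactly one prediction per test-set row
check = pd.read_csv('submission.csv')
assert len(check) == len(final_test), "Row count does not match test.csv"
assert list(check.columns) == ['Biopsy'], "Unexpected column name"
print(check.head())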

Making a Direct Submission through the AIcrowd CLI

In [26]:
!aicrowd submission create -c cervc -f submission.csv
submission.csv ━━━━━━━━━━━━━━━━━━━━━━━━━━ 100.0%1,996/351 bytes?0:00:00
                                                   ╭─────────────────────────╮                                                   
                                                   │ Successfully submitted! │                                                   
                                                   ╰─────────────────────────╯                                                   
                                                         Important links                                                         
┌──────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│  This submission │ https://www.aicrowd.com/challenges/nit-kurukshetra-ai-blitz/problems/cervc/submissions/128426              │
│                  │                                                                                                            │
│  All submissions │ https://www.aicrowd.com/challenges/nit-kurukshetra-ai-blitz/problems/cervc/submissions?my_submissions=true │
│                  │                                                                                                            │
│      Leaderboard │ https://www.aicrowd.com/challenges/nit-kurukshetra-ai-blitz/problems/cervc/leaderboards                    │
│                  │                                                                                                            │
│ Discussion forum │ https://discourse.aicrowd.com/c/nit-kurukshetra-ai-blitz                                                   │
│                  │                                                                                                            │
│   Challenge page │ https://www.aicrowd.com/challenges/nit-kurukshetra-ai-blitz/problems/cervc                                 │
└──────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
{'submission_id': 128426, 'created_at': '2021-04-06T10:00:35.487Z'}
In [ ]:

In [ ]:

