Getting Started Code for Scrambled Challenge

Authors: Gauransh Kumar, Shraddhaa Mohan, Rohit Midha

This is baseline code to get you started with the challenge.

Download Necessary Packages

In [1]:
import sys
!{sys.executable} -m pip install numpy
!{sys.executable} -m pip install pandas
!{sys.executable} -m pip install scikit-learn
!{sys.executable} -m pip install aicrowd-cli
%load_ext aicrowd.magic
Requirement already satisfied: numpy in /home/gauransh/anaconda3/lib/python3.8/site-packages (1.18.5)
Requirement already satisfied: pandas in /home/gauransh/anaconda3/lib/python3.8/site-packages (1.3.2)
Requirement already satisfied: numpy>=1.17.3 in /home/gauransh/anaconda3/lib/python3.8/site-packages (from pandas) (1.18.5)
Requirement already satisfied: python-dateutil>=2.7.3 in /home/gauransh/anaconda3/lib/python3.8/site-packages (from pandas) (2.8.2)
Requirement already satisfied: pytz>=2017.3 in /home/gauransh/anaconda3/lib/python3.8/site-packages (from pandas) (2021.1)
Requirement already satisfied: six>=1.5 in /home/gauransh/anaconda3/lib/python3.8/site-packages (from python-dateutil>=2.7.3->pandas) (1.15.0)
Requirement already satisfied: scikit-learn in /home/gauransh/anaconda3/lib/python3.8/site-packages (0.24.2)
Requirement already satisfied: threadpoolctl>=2.0.0 in /home/gauransh/anaconda3/lib/python3.8/site-packages (from scikit-learn) (2.2.0)
Requirement already satisfied: scipy>=0.19.1 in /home/gauransh/anaconda3/lib/python3.8/site-packages (from scikit-learn) (1.6.2)
Requirement already satisfied: joblib>=0.11 in /home/gauransh/anaconda3/lib/python3.8/site-packages (from scikit-learn) (1.0.1)
Requirement already satisfied: numpy>=1.13.3 in /home/gauransh/anaconda3/lib/python3.8/site-packages (from scikit-learn) (1.18.5)
Requirement already satisfied: aicrowd-cli in /home/gauransh/anaconda3/lib/python3.8/site-packages (0.1.10)
Requirement already satisfied: GitPython==3.1.18 in /home/gauransh/anaconda3/lib/python3.8/site-packages (from aicrowd-cli) (3.1.18)
Requirement already satisfied: rich<11,>=10.0.0 in /home/gauransh/anaconda3/lib/python3.8/site-packages (from aicrowd-cli) (10.15.2)
Requirement already satisfied: tqdm<5,>=4.56.0 in /home/gauransh/anaconda3/lib/python3.8/site-packages (from aicrowd-cli) (4.60.0)
Requirement already satisfied: requests-toolbelt<1,>=0.9.1 in /home/gauransh/anaconda3/lib/python3.8/site-packages (from aicrowd-cli) (0.9.1)
Requirement already satisfied: click<8,>=7.1.2 in /home/gauransh/anaconda3/lib/python3.8/site-packages (from aicrowd-cli) (7.1.2)
Requirement already satisfied: toml<1,>=0.10.2 in /home/gauransh/anaconda3/lib/python3.8/site-packages (from aicrowd-cli) (0.10.2)
Requirement already satisfied: requests<3,>=2.25.1 in /home/gauransh/anaconda3/lib/python3.8/site-packages (from aicrowd-cli) (2.26.0)
Requirement already satisfied: pyzmq==22.1.0 in /home/gauransh/anaconda3/lib/python3.8/site-packages (from aicrowd-cli) (22.1.0)
Requirement already satisfied: gitdb<5,>=4.0.1 in /home/gauransh/anaconda3/lib/python3.8/site-packages (from GitPython==3.1.18->aicrowd-cli) (4.0.9)
Requirement already satisfied: smmap<6,>=3.0.1 in /home/gauransh/anaconda3/lib/python3.8/site-packages (from gitdb<5,>=4.0.1->GitPython==3.1.18->aicrowd-cli) (5.0.0)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /home/gauransh/anaconda3/lib/python3.8/site-packages (from requests<3,>=2.25.1->aicrowd-cli) (1.26.6)
Requirement already satisfied: idna<4,>=2.5 in /home/gauransh/anaconda3/lib/python3.8/site-packages (from requests<3,>=2.25.1->aicrowd-cli) (3.1)
Requirement already satisfied: charset-normalizer~=2.0.0 in /home/gauransh/anaconda3/lib/python3.8/site-packages (from requests<3,>=2.25.1->aicrowd-cli) (2.0.0)
Requirement already satisfied: certifi>=2017.4.17 in /home/gauransh/anaconda3/lib/python3.8/site-packages (from requests<3,>=2.25.1->aicrowd-cli) (2021.10.8)
Requirement already satisfied: commonmark<0.10.0,>=0.9.0 in /home/gauransh/anaconda3/lib/python3.8/site-packages (from rich<11,>=10.0.0->aicrowd-cli) (0.9.1)
Requirement already satisfied: pygments<3.0.0,>=2.6.0 in /home/gauransh/anaconda3/lib/python3.8/site-packages (from rich<11,>=10.0.0->aicrowd-cli) (2.10.0)
Requirement already satisfied: colorama<0.5.0,>=0.4.0 in /home/gauransh/anaconda3/lib/python3.8/site-packages (from rich<11,>=10.0.0->aicrowd-cli) (0.4.4)

Download data

The first step is to download our train, val, and test data. We will train a classifier on the train data, make predictions on the val and test data, and submit our predictions.

In [7]:
# Download the datasets
!rm -rf data
!mkdir data
%aicrowd ds dl -c scrbl -o data
In [8]:
!unzip data/train.zip -d data/
!unzip data/test.zip -d data/
!unzip data/val.zip -d data/
Archive:  data/train.zip
  inflating: data/train.csv          
Archive:  data/test.zip
  inflating: data/test.csv           
Archive:  data/val.zip
  inflating: data/val.csv            

Import packages

In [9]:
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.metrics import f1_score,precision_score,recall_score,accuracy_score,log_loss

Load Data

  • We use the pandas 🐼 library to load our data.
  • Pandas loads the data into dataframes, which makes it easy to analyse.
  • Learn more about it here 🤓
In [10]:
train_path = "data/train.csv" #path where train data is stored
val_path = "data/val.csv" #path where val data is stored
In [11]:
train_data = pd.read_csv(train_path) #load data in dataframe using pandas
val_data = pd.read_csv(val_path)

Visualize the data 👀

In [12]:
train_data.head()
Out[12]:
text label
0 A captive portal is a web page accessed with a... unscrambled
1 Honeymoon Ahead is a 1945 American comedy film... unscrambled
2 Pass Creek Bridge is a covered bridge in the c... unscrambled
3 A machine-readable passport (MRP) is a machine... unscrambled
4 Three Jane's 1997 and by Kevin Addiction direc... scrambled
In [13]:
val_data.head()
Out[13]:
text label
0 Lewellyn Anthony Gonsalvez (born 11 September ... unscrambled
1 Paul D. Thacker, sometimes bylined as Paul Tha... unscrambled
2 A Lego clone is a line or brand of children's ... scrambled
3 An enhancer trap is a method in molecular biol... unscrambled
4 Henry de Botebrigge or Henry of Budbridge (die... scrambled

The dataset contains texts along with labels marking each text as unscrambled or scrambled.

In [14]:
X_train,y_train = train_data['text'],train_data['label']
X_val,y_val = val_data['text'],val_data['label']
print(X_train)
0         A captive portal is a web page accessed with a...
1         Honeymoon Ahead is a 1945 American comedy film...
2         Pass Creek Bridge is a covered bridge in the c...
3         A machine-readable passport (MRP) is a machine...
4         Three Jane's 1997 and by Kevin Addiction direc...
                                ...                        
599997    A gas-filled tube, also known as a discharge t...
599998    M-68 is an east west state trunkline highway l...
599999    Brian E. Mueller is an American academic and u...
600000    The Zagreb Indoors (currently sponsored by PBZ...
600001    Cryptostylis ovata, commonly known as the slip...
Name: text, Length: 600002, dtype: object

TRAINING PHASE 🏋️

Preprocessing

Text files are series of ordered words. In order to run machine learning algorithms we need to convert the text files into numerical feature vectors. We will use the bag-of-words model for our example. Briefly, we segment each text file into words (for English, splitting by space), count the number of times each word occurs in each document, and finally assign each word an integer id. Each unique word in our dictionary will correspond to a (descriptive) feature.

Scikit-learn has a high-level component, CountVectorizer, which will create the feature vectors for us. More about it here.

In [15]:
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(X_train)
X_train_counts.shape
Out[15]:
(600002, 558044)

Here, by doing count_vect.fit_transform(X_train), we learn the vocabulary dictionary and get back a document-term matrix of shape [n_samples, n_features].

TF: Just counting the number of words in each document has one issue: it gives more weight to longer documents than to shorter ones. To avoid this, we can use term frequencies (TF), i.e. count(word) / total words, in each document.

TF-IDF: Finally, we can reduce the weight of common words (the, is, an, etc.) which occur across all documents. This is called TF-IDF, i.e. Term Frequency times Inverse Document Frequency.

In [16]:
tfidf_transformer = TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)
X_train_tfidf.shape
Out[16]:
(600002, 558044)

Define the Model

  • We have fixed our data and now we are ready to train our model.

  • There are a ton of classifiers to choose from, some being Naive Bayes, Logistic Regression, SVM, Random Forests, Decision Trees, etc. 🧐

  • Remember that there are no hard-laid rules here. You can mix and match classifiers; it is advisable to read up on the numerous techniques and choose the best fit for your solution. Experimentation is the key.

  • A good model does not depend solely on the classifier but also on the features you choose. So make sure to analyse and understand your data well and move forward with a clear view of the problem at hand. You can gain important insight from here. 🧐

In [17]:
# classifier = SVC(gamma='auto')

classifier = MultinomialNB()

# from sklearn.linear_model import LogisticRegression
# classifier = LogisticRegression()
  • To start you off, we have used a basic Naive Bayes classifier here.
  • But you can tune parameters and increase the performance. To see the list of parameters visit here.
  • Do keep in mind there exist sophisticated techniques for everything; the key, as noted earlier, is to seek them out and experiment to fit your implementation.

To read more about other sklearn classifiers visit here 🧐. Try other classifiers, such as Logistic Regression or SVM, and compare how the performance changes; a couple of drop-in alternatives are sketched below.

Train the Model

Building a pipeline: We can write less code and do all of the above, by building a pipeline as follows:

In [18]:
text_clf = Pipeline([('vect', CountVectorizer(stop_words='english')),
                      ('tfidf', TfidfTransformer()),
                      ('clf', classifier)])
text_clf = text_clf.fit(X_train, y_train)

Tip: To improve your accuracy you can do something called stemming. Stemming is the process of reducing inflected (or sometimes derived) words to their word stem, base or root form. E.g. a stemming algorithm reduces the words "fishing", "fished", and "fisher" to the root word "fish".

We need NLTK, which can be installed from here. NLTK comes with various stemmers which can help reduce words to their root form. Below we have used the Snowball stemmer, which works very well for the English language.

In [ ]:
"""import nltk
# Download the correct package
nltk.download('stopwords')
from nltk.stem.snowball import SnowballStemmer
stemmer = SnowballStemmer("english", ignore_stopwords=True)

# Creating a new Count Vectorizer
class StemmedCountVectorizer(CountVectorizer):
    def build_analyzer(self):
        analyzer = super(StemmedCountVectorizer, self).build_analyzer()
        return lambda doc: ([stemmer.stem(w) for w in analyzer(doc)])
stemmed_count_vect = StemmedCountVectorizer(stop_words='english')

text_clf = Pipeline([('vect', stemmed_count_vect),
                      ('tfidf', TfidfTransformer()),
                      ('clf', classifier)])
text_clf = text_clf.fit(X_train, y_train)"""

Validation Phase 🤔

Wondering how well your model learned? Let's check.

Predict on Validation

Now we predict with our trained model on the validation set, evaluating it on data it has not seen.

In [19]:
y_pred = text_clf.predict(X_val)
print(y_pred)
['unscrambled' 'unscrambled' 'unscrambled' ... 'scrambled' 'unscrambled'
 'scrambled']

Evaluate the Performance

  • We have used basic metrics to quantify the performance of our model.
  • This is a crucial step: reason about the metrics and use them as hints to improve aspects of your model.
  • Do read up on the meaning and use of different metrics; there exist many more metrics and measures, and you should learn to use them correctly with respect to the solution, dataset and other factors.
  • F1 score and Log Loss are the metrics for this challenge.
In [20]:
precision = precision_score(y_val,y_pred,average='micro')
recall = recall_score(y_val,y_pred,average='micro')
accuracy = accuracy_score(y_val,y_pred)
f1 = f1_score(y_val,y_pred,average='macro')
In [21]:
print("Accuracy of the model is :" ,accuracy)
print("Recall of the model is :" ,recall)
print("Precision of the model is :" ,precision)
print("F1 score of the model is :" ,f1)
Accuracy of the model is : 0.55085
Recall of the model is : 0.55085
Precision of the model is : 0.55085
F1 score of the model is : 0.5506074930182191

Note that micro-averaged precision and recall reduce to accuracy for single-label classification, which is why the first three numbers above are identical.

Testing Phase 😅

We are almost done. We trained on the train set and validated on the val set. Now it's time to predict on the test set and make a submission.

Load Test Set

Load the test data on which final submission is to be made.

In [22]:
final_test_path = "data/test.csv"
final_test = pd.read_csv(final_test_path)
len(final_test)
Out[22]:
300000

Predict Test Set

Predict on the test set and you are all set to make the submission !

In [23]:
submission = text_clf.predict(final_test['text'])

Save the prediction to CSV

In [24]:
# Save the predictions as a CSV in the assets directory
!rm -rf assets
!mkdir assets
submission = pd.DataFrame(submission)
submission.to_csv('assets/submission.csv',header=['label'],index=False)

🚧 Note:

  • Do take a look at the submission format.
  • The submission file should contain a header.
  • Follow all submission guidelines strictly to avoid inconvenience.

Make a submission using the aicrowd-cli

In [25]:
!!aicrowd submission create -c scrbl -f assets/submission.csv
Out[25]:
['submission.csv ━━━━━━━━━━━━━━━━━━━━━━ 100.0% • 3.3/3.3 MB • 569.6 kB/s • 0:00:00',
 'Successfully submitted!',
 'Important links',
 'This submission:  https://www.aicrowd.com/challenges/aicrowd-blitz-2/problems/scrbl/submissions/175257',
 'All submissions:  https://www.aicrowd.com/challenges/aicrowd-blitz-2/problems/scrbl/submissions?my_submissions=true',
 'Leaderboard:      https://www.aicrowd.com/challenges/aicrowd-blitz-2/problems/scrbl/leaderboards',
 'Discussion forum: https://discourse.aicrowd.com/c/aicrowd-blitz-2',
 'Challenge page:   https://www.aicrowd.com/challenges/aicrowd-blitz-2/problems/scrbl',
 "{'submission_id': 175257, 'created_at': '2022-02-23T12:43:40.681Z'}"]
