ADDI Alzheimers Detection Challenge
FastAI Tabular Starter
Minimal submission making predictions using fastai's tabular learner
A short demo notebook that gets an entry in with FastAI. It doesn't do very well - there is no adjusting for class balance, no feature engineering, no dealing with missing values... just a basic starting point.
FastAI Tabular Starter Notebook by @johnowhitaker
What is the notebook about?
The challenge is to use the features extracted from the Clock Drawing Test to build an automated algorithm to predict whether each participant is in one of three phases:
1) Pre-Alzheimer’s (Early Warning)
2) Post-Alzheimer’s (Detection)
3) Normal (Not an Alzheimer’s patient)
In this starter notebook we will solve this task using fastai. Make sure you don't edit out the section headings, as AIcrowd uses these to split this notebook up for submission. All code that is needed for both train and test goes in the preprocessing section, for example. I've tried to highlight where I've added code that isn't in the original template.
- Installing packages. Please use the Install packages 🗃 section to install the packages
- Training your models. All the code within the Training phase ⚙️ section will be skipped during evaluation. Please make sure to save your model weights in the assets directory and load them in the Prediction phase section.
Setup AIcrowd Utilities 🛠
We use this to bundle the files for submission and create a submission on AIcrowd. Do not edit this block.
!pip install -q -U aicrowd-cli
%load_ext aicrowd.magic
AIcrowd Runtime Configuration 🧷
Define configuration parameters. Please include any files needed for the notebook to run under ASSETS_DIR. We will copy the contents of this directory to your final submission file 🙂
The dataset is available under /ds_shared_drive on the workspace.
import os
# Please use the absolute path for the location of the dataset,
# or a relative path, e.g. os.path.join(os.getcwd(), "test_data/validation.csv")
AICROWD_DATASET_PATH = os.getenv("DATASET_PATH", "/ds_shared_drive/validation.csv")
AICROWD_PREDICTIONS_PATH = os.getenv("PREDICTIONS_PATH", "predictions.csv")
AICROWD_ASSETS_DIR = "assets"
AICROWD_API_KEY = "" # Get your key from https://www.aicrowd.com/participants/me
Install packages 🗃
Please add all package installations in this section.
!pip install -q numpy pandas scikit-learn
!pip install -q -U fastcore fastai # Need -U, otherwise we're stuck with an old version on their Docker image
Define preprocessing code 💻
The code that is common between the training and the prediction sections should be defined here. During evaluation, we completely skip the training section. Please make sure to add any common logic between the training and prediction sections here.
Import common packages
Please import packages that are common for training and prediction phases here.
import numpy as np
import pandas as pd
from sklearn.metrics import f1_score, log_loss
from fastai.tabular.all import *
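The only shared step this starter really has is the fillna(0) applied to both the train and test frames further down. The cells below inline it, but since this is the section that runs in both phases, it could be factored into a small helper here (the function name is my own, not part of the template):
def preprocess(frame):
    # Crude missing-value handling: replace NaNs with 0 so the dataloaders
    # never see missing values. Matches the inline fillna(0) used below.
    return frame.fillna(0)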
Training phase ⚙️
You can define your training code here. This section will be skipped during evaluation.
# Loading the training data
df = pd.read_csv(os.getenv("DATASET_PATH", "/ds_shared_drive/train.csv"))
print(df.shape)
df.head()
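Since we're doing nothing about class balance, it's worth at least seeing how skewed the labels are before training. A quick look at the diagnosis column:
# Absolute counts and proportions of each diagnosis class
print(df['diagnosis'].value_counts())
print(df['diagnosis'].value_counts(normalize=True).round(3))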
Following the example in the fastai docs, we construct our dataloaders:
splits = RandomSplitter(valid_pct=0.2)(range_of(df))
to = TabularPandas(df.fillna(0), procs=[Categorify, FillMissing, Normalize],
                   cat_names=['intersection_pos_rel_centre'],
                   cont_names=list(df.drop(['row_id', 'intersection_pos_rel_centre', 'diagnosis'], axis=1).columns),
                   y_names='diagnosis',
                   splits=splits)
dls = to.dataloaders(bs=64)
dls.show_batch()
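TabularPandas keeps the processed table around, so we can also peek at exactly what Categorify, FillMissing and Normalize produced - i.e. what the model will actually see:
# Processed features: categories integer-encoded, continuous columns normalized
to.train.xs.head()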
With these done, we can create a learner and train it for a short while.
learn = tabular_learner(dls, metrics=accuracy)
learn.fit_one_cycle(5)
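The valid_loss printed during training is the cross-entropy on our random 20% split, which for this single-label problem is the log loss quoted below. You can also read it back after training (this unpacking assumes the single accuracy metric we passed in):
# validate() returns [valid_loss, accuracy] for the learner's validation set
loss, acc = learn.validate()
print(f'valid log loss: {loss:.3f}, accuracy: {acc:.3f}')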
We're getting a log loss of 0.17 on our validation set, which is a random sub-sample of the training data. However, we should really look at the score on the provided validation set, which more closely matches the test set:
val = pd.read_csv(os.getenv("DATASET_PATH", "/ds_shared_drive/validation.csv"))
# The ground truth path is hard-coded here: reusing DATASET_PATH for it would
# point at validation.csv whenever that variable is set.
val = pd.merge(val, pd.read_csv("/ds_shared_drive/validation_ground_truth.csv"),
               how='left', on='row_id')
print(val.shape)
test_dl = learn.dls.test_dl(val.fillna(0))
probs, y = learn.get_preds(dl=test_dl)
print('Log Loss on provided validation set:', log_loss(val['diagnosis'].values, probs.numpy()))
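We imported f1_score earlier but never used it; as a quick extra check, here's a macro F1 on the same validation frame, taking each row's arg-max class via the learner's vocab:
# Map each row's highest-probability column back to its class label
pred_labels = [learn.dls.vocab[int(i)] for i in probs.argmax(dim=1)]
print('Macro F1 on provided validation set:',
      f1_score(val['diagnosis'].values, pred_labels, average='macro'))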
Saving the trained model is as easy as:
os.makedirs(AICROWD_ASSETS_DIR, exist_ok=True)  # create the assets dir if it doesn't exist yet
learn.export(AICROWD_ASSETS_DIR + '/fai_model1.mdl')
Prediction phase 🔎
Please make sure to save the weights from the training section in your assets directory and load them in this section
learn = load_learner(AICROWD_ASSETS_DIR + '/fai_model1.mdl') # load the model
Load test data
test_data = pd.read_csv(AICROWD_DATASET_PATH)
test_data.head()
Generate predictions
test_dl = learn.dls.test_dl(test_data.fillna(0)) # New dataloader with the test data
preds, _ = learn.get_preds(dl=test_dl)
preds = preds.numpy() # Convert to numpy array
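The columns of preds follow learn.dls.vocab, which Categorify builds in sorted order, so the positional indexing below assumes the classes come out as normal, post_alzheimer, pre_alzheimer. Worth printing to confirm before trusting the submission columns:
# If this prints a different order, index preds by vocab position rather
# than assuming columns 0/1/2 are normal/post/pre
print(learn.dls.vocab)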
predictions = {
    "row_id": test_data["row_id"].values,
    "normal_diagnosis_probability": [p[0] for p in preds],
    "post_alzheimer_diagnosis_probability": [p[1] for p in preds],
    "pre_alzheimer_diagnosis_probability": [p[2] for p in preds],
}
predictions_df = pd.DataFrame.from_dict(predictions)
predictions_df.head(3)
Save predictions 📨
predictions_df.to_csv(AICROWD_PREDICTIONS_PATH, index=False)
Submit to AIcrowd 🚀
NOTE: PLEASE SAVE THE NOTEBOOK BEFORE SUBMITTING IT (Ctrl + S)
!aicrowd login --api-key $AICROWD_API_KEY
!DATASET_PATH=$AICROWD_DATASET_PATH \
aicrowd notebook submit \
--assets-dir $AICROWD_ASSETS_DIR \
--challenge addi-alzheimers-detection-challenge