
F1 SMOKE ELIMINATION

Smoke Elimination using the FFA-Net model (MSE: 12.538)

FFA-Net: Feature Fusion Attention Network for Single Image Dehazing

g_mothy

Top Scoring Solution Notebook 

F1 Smoke Elimination

Step 1: Download the data

Download Packages

In [1]:
!pip install --upgrade aicrowd-cli
Collecting aicrowd-cli
  Downloading https://files.pythonhosted.org/packages/a5/8a/fca67e8c1cb1501a9653cd653232bf6fdebbb2393e3de861aad3636a1136/aicrowd_cli-0.1.6-py3-none-any.whl (51kB)
     |████████████████████████████████| 61kB 5.4MB/s 
Collecting rich<11,>=10.0.0
  Downloading https://files.pythonhosted.org/packages/6b/39/fbe8d15f0b017d63701f2a42e4ccb9a73cd4175e5c56214c1b5685e3dd79/rich-10.2.2-py3-none-any.whl (203kB)
     |████████████████████████████████| 204kB 11.8MB/s 
Collecting tqdm<5,>=4.56.0
  Downloading https://files.pythonhosted.org/packages/72/8a/34efae5cf9924328a8f34eeb2fdaae14c011462d9f0e3fcded48e1266d1c/tqdm-4.60.0-py2.py3-none-any.whl (75kB)
     |████████████████████████████████| 81kB 8.4MB/s 
Collecting requests-toolbelt<1,>=0.9.1
  Downloading https://files.pythonhosted.org/packages/60/ef/7681134338fc097acef8d9b2f8abe0458e4d87559c689a8c306d0957ece5/requests_toolbelt-0.9.1-py2.py3-none-any.whl (54kB)
     |████████████████████████████████| 61kB 7.4MB/s 
Collecting click<8,>=7.1.2
  Downloading https://files.pythonhosted.org/packages/d2/3d/fa76db83bf75c4f8d338c2fd15c8d33fdd7ad23a9b5e57eb6c5de26b430e/click-7.1.2-py2.py3-none-any.whl (82kB)
     |████████████████████████████████| 92kB 7.9MB/s 
Collecting gitpython<4,>=3.1.12
  Downloading https://files.pythonhosted.org/packages/27/da/6f6224fdfc47dab57881fe20c0d1bc3122be290198ba0bf26a953a045d92/GitPython-3.1.17-py3-none-any.whl (166kB)
     |████████████████████████████████| 174kB 14.1MB/s 
Requirement already satisfied, skipping upgrade: toml<1,>=0.10.2 in /usr/local/lib/python3.7/dist-packages (from aicrowd-cli) (0.10.2)
Collecting requests<3,>=2.25.1
  Downloading https://files.pythonhosted.org/packages/29/c1/24814557f1d22c56d50280771a17307e6bf87b70727d975fd6b2ce6b014a/requests-2.25.1-py2.py3-none-any.whl (61kB)
     |████████████████████████████████| 61kB 8.2MB/s 
Collecting commonmark<0.10.0,>=0.9.0
  Downloading https://files.pythonhosted.org/packages/b1/92/dfd892312d822f36c55366118b95d914e5f16de11044a27cf10a7d71bbbf/commonmark-0.9.1-py2.py3-none-any.whl (51kB)
     |████████████████████████████████| 51kB 7.3MB/s 
Requirement already satisfied, skipping upgrade: typing-extensions<4.0.0,>=3.7.4; python_version < "3.8" in /usr/local/lib/python3.7/dist-packages (from rich<11,>=10.0.0->aicrowd-cli) (3.7.4.3)
Requirement already satisfied, skipping upgrade: pygments<3.0.0,>=2.6.0 in /usr/local/lib/python3.7/dist-packages (from rich<11,>=10.0.0->aicrowd-cli) (2.6.1)
Collecting colorama<0.5.0,>=0.4.0
  Downloading https://files.pythonhosted.org/packages/44/98/5b86278fbbf250d239ae0ecb724f8572af1c91f4a11edf4d36a206189440/colorama-0.4.4-py2.py3-none-any.whl
Collecting gitdb<5,>=4.0.1
  Downloading https://files.pythonhosted.org/packages/ea/e8/f414d1a4f0bbc668ed441f74f44c116d9816833a48bf81d22b697090dba8/gitdb-4.0.7-py3-none-any.whl (63kB)
     |████████████████████████████████| 71kB 8.7MB/s 
Requirement already satisfied, skipping upgrade: urllib3<1.27,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.25.1->aicrowd-cli) (1.24.3)
Requirement already satisfied, skipping upgrade: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.25.1->aicrowd-cli) (2.10)
Requirement already satisfied, skipping upgrade: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.25.1->aicrowd-cli) (2020.12.5)
Requirement already satisfied, skipping upgrade: chardet<5,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.25.1->aicrowd-cli) (3.0.4)
Collecting smmap<5,>=3.0.1
  Downloading https://files.pythonhosted.org/packages/68/ee/d540eb5e5996eb81c26ceffac6ee49041d473bc5125f2aa995cf51ec1cf1/smmap-4.0.0-py2.py3-none-any.whl
ERROR: google-colab 1.0.0 has requirement requests~=2.23.0, but you'll have requests 2.25.1 which is incompatible.
ERROR: datascience 0.10.6 has requirement folium==0.2.1, but you'll have folium 0.8.3 which is incompatible.
Installing collected packages: commonmark, colorama, rich, tqdm, requests, requests-toolbelt, click, smmap, gitdb, gitpython, aicrowd-cli
  Found existing installation: tqdm 4.41.1
    Uninstalling tqdm-4.41.1:
      Successfully uninstalled tqdm-4.41.1
  Found existing installation: requests 2.23.0
    Uninstalling requests-2.23.0:
      Successfully uninstalled requests-2.23.0
  Found existing installation: click 8.0.0
    Uninstalling click-8.0.0:
      Successfully uninstalled click-8.0.0
Successfully installed aicrowd-cli-0.1.6 click-7.1.2 colorama-0.4.4 commonmark-0.9.1 gitdb-4.0.7 gitpython-3.1.17 requests-2.25.1 requests-toolbelt-0.9.1 rich-10.2.2 smmap-4.0.0 tqdm-4.60.0
In [2]:
API_KEY = "9cce69d6577e95bdcfaf107bb38f8ff2"
!aicrowd login --api-key $API_KEY
API Key valid
Saved API Key successfully!
In [3]:
!aicrowd dataset download --challenge f1-smoke-elimination -j 3
test.zip: 100% 38.6M/38.6M [00:04<00:00, 9.10MB/s]
sample_submission.zip: 100% 38.8M/38.8M [00:05<00:00, 7.41MB/s]
val.zip: 100% 32.8M/32.8M [00:03<00:00, 9.22MB/s]
train.zip: 100% 328M/328M [00:17<00:00, 18.2MB/s]
In [4]:
!rm -rf data
!mkdir data


!unzip train.zip -d data/train >/dev/null
!unzip val.zip -d data/val >/dev/null
!unzip test.zip -d data/test >/dev/null
In [5]:
!rm -rf ./test.zip ./train.zip ./val.zip ./sample_submission.zip
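
Before moving on, it can help to confirm the extraction by counting the images in each split. A minimal check, assuming the smoke/ and clear/ folder layout that the dataset loader and test script rely on later:

# Count the extracted images per split as a quick sanity check
# (smoke/ holds the hazy inputs, clear/ the ground truth).
import os

for split in ("train", "val", "test"):
    for sub in ("smoke", "clear"):
        d = os.path.join("data", split, sub)
        if os.path.isdir(d):
            print(f"{split}/{sub}: {len(os.listdir(d))} files")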

Step 2: Download the code

In [6]:
!git clone https://github.com/zhilin007/FFA-Net
Cloning into 'FFA-Net'...
remote: Enumerating objects: 182, done.
remote: Total 182 (delta 0), reused 0 (delta 0), pack-reused 182
Receiving objects: 100% (182/182), 19.04 MiB | 33.22 MiB/s, done.
Resolving deltas: 100% (80/80), done.

Trained models are available on Google Drive: https://drive.google.com/file/d/1QoFKNohireWa0bu_izB1v4Bc3HMaaREr/view?usp=sharing. Put the models in the net/trained_models/ folder.

In [7]:
from google_drive_downloader import GoogleDriveDownloader as gdd

gdd.download_file_from_google_drive(file_id='1QoFKNohireWa0bu_izB1v4Bc3HMaaREr',
                                    dest_path='./pretrainedModels.zip',
                                    unzip=True)
Downloading 1QoFKNohireWa0bu_izB1v4Bc3HMaaREr into ./pretrainedModels.zip... Done.
Unzipping...Done.
In [8]:
!rm -rf ./pretrainedModels.zip

!cp ./its_train_ffa_3_19.pk ./FFA-Net/net/trained_models/.
!cp ./ots_train_ffa_3_19.pk ./FFA-Net/net/trained_models/.

!rm ./its_train_ffa_3_19.pk
!rm ./ots_train_ffa_3_19.pk
In [9]:
%%writefile ./FFA-Net/net/data_utils.py

import torch.utils.data as data
import torchvision.transforms as tfs
from torchvision.transforms import functional as FF
import os,sys
sys.path.append('.')
sys.path.append('..')
import numpy as np
import torch
import random
from PIL import Image
from torch.utils.data import DataLoader
from matplotlib import pyplot as plt
from torchvision.utils import make_grid
from metrics import *
from option import opt
BS=opt.bs
print(BS)
crop_size='whole_img'
if opt.crop:
    crop_size=opt.crop_size

def tensorShow(tensors,titles=None):
        '''
        t:BCWH
        '''
        fig=plt.figure()
        for tensor,tit,i in zip(tensors,titles,range(len(tensors))):
            img = make_grid(tensor)
            npimg = img.numpy()
            ax = fig.add_subplot(211+i)
            ax.imshow(np.transpose(npimg, (1, 2, 0)))
            ax.set_title(tit)
        plt.show()

class RESIDE_Dataset(data.Dataset):
    def __init__(self,path,train,size=crop_size,format='.jpg'):
        super(RESIDE_Dataset,self).__init__()
        self.size=size
        print('crop size',size)
        self.train=train
        self.format=format
        # expects <path>/smoke/ for hazy inputs and <path>/clear/ for ground truth
        self.haze_imgs_dir=os.listdir(os.path.join(path,'smoke'))
        self.haze_imgs=[os.path.join(path,'smoke',img) for img in self.haze_imgs_dir]
        self.clear_dir=os.path.join(path,'clear')
    def __getitem__(self, index):
        haze=Image.open(self.haze_imgs[index])
        if isinstance(self.size,int):
            while haze.size[0]<self.size or haze.size[1]<self.size :
                index=random.randint(0,20000)
                haze=Image.open(self.haze_imgs[index])
        img=self.haze_imgs[index]
        id=img.split('/')[-1].split('_')[0]
        clear_name=id#+self.format
        clear=Image.open(os.path.join(self.clear_dir,clear_name))
        clear=tfs.CenterCrop(haze.size[::-1])(clear)
        if not isinstance(self.size,str):
            i,j,h,w=tfs.RandomCrop.get_params(haze,output_size=(self.size,self.size))
            haze=FF.crop(haze,i,j,h,w)
            clear=FF.crop(clear,i,j,h,w)
        haze,clear=self.augData(haze.convert("RGB") ,clear.convert("RGB") )
        return haze,clear
    def augData(self,data,target):
        if self.train:
            rand_hor=random.randint(0,1)
            rand_rot=random.randint(0,3)
            data=tfs.RandomHorizontalFlip(rand_hor)(data)
            target=tfs.RandomHorizontalFlip(rand_hor)(target)
            if rand_rot:
                data=FF.rotate(data,90*rand_rot)
                target=FF.rotate(target,90*rand_rot)
        data=tfs.ToTensor()(data)
        data=tfs.Normalize(mean=[0.64, 0.6, 0.58],std=[0.14,0.15, 0.152])(data)
        target=tfs.ToTensor()(target)
        return  data ,target
    def __len__(self):
        return len(self.haze_imgs)

import os
pwd=os.getcwd()
print(pwd)
path='../../data'#path to your 'data' folder

ITS_train_loader=DataLoader(dataset=RESIDE_Dataset(path+'/train',train=True,size=crop_size),batch_size=BS,shuffle=True)
ITS_test_loader=DataLoader(dataset=RESIDE_Dataset(path+'/val',train=False,size='whole img'),batch_size=1,shuffle=False)

OTS_train_loader=DataLoader(dataset=RESIDE_Dataset(path+'/train',train=True,format='.jpg'),batch_size=BS,shuffle=True)
OTS_test_loader=DataLoader(dataset=RESIDE_Dataset(path+'/val',train=False,size='whole img',format='.jpg'),batch_size=1,shuffle=False)

if __name__ == "__main__":
    pass
Overwriting ./FFA-Net/net/data_utils.py
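
The dataset class above normalizes the hazy input with fixed channel statistics and leaves the clear target in [0, 1]. Below is a standalone sketch of that transform on a single training pair, written to stand alone rather than importing data_utils (which pulls in the repo's command-line option parsing); the clear-image pairing via split('_')[0] is carried over from __getitem__ as an assumption:

# Mirror RESIDE_Dataset.augData's input normalization on one training pair.
import os
from PIL import Image
import torchvision.transforms as tfs

smoke_dir, clear_dir = "data/train/smoke", "data/train/clear"
name = os.listdir(smoke_dir)[0]
haze = Image.open(os.path.join(smoke_dir, name)).convert("RGB")
clear = Image.open(os.path.join(clear_dir, name.split("_")[0])).convert("RGB")

to_input = tfs.Compose([
    tfs.ToTensor(),  # scales to [0, 1], C x H x W
    tfs.Normalize(mean=[0.64, 0.6, 0.58], std=[0.14, 0.15, 0.152]),
])
print(to_input(haze).shape, tfs.ToTensor()(clear).shape)
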
In [10]:
%%writefile ./FFA-Net/net/main.py

import torch,os,sys,torchvision,argparse
import torchvision.transforms as tfs
from metrics import psnr,ssim
from models import *
import time,math
import numpy as np
from torch.backends import cudnn
from torch import optim
import torch,warnings
from torch import nn
#from tensorboardX import SummaryWriter
import torchvision.utils as vutils
warnings.filterwarnings('ignore')
from option import opt,model_name,log_dir
from data_utils import *
from torchvision.models import vgg16
print('log_dir :',log_dir)
print('model_name:',model_name)

models_={
	'ffa':FFA(gps=opt.gps,blocks=opt.blocks),
}
loaders_={
	'its_train':ITS_train_loader,
	'its_test':ITS_test_loader,
	'ots_train':OTS_train_loader,
	'ots_test':OTS_test_loader
}
start_time=time.time()
T=opt.steps	
def lr_schedule_cosdecay(t,T,init_lr=opt.lr):
	lr=0.5*(1+math.cos(t*math.pi/T))*init_lr
	return lr

def train(net,loader_train,loader_test,optim,criterion):
	losses=[]
	start_step=0
	max_ssim=0
	max_psnr=0
	ssims=[]
	psnrs=[]
	if opt.resume and os.path.exists(opt.model_dir):
		print(f'resume from {opt.model_dir}')
		ckp=torch.load(opt.model_dir)
		losses=ckp['losses']
		net.load_state_dict(ckp['model'])
		start_step=ckp['step']
		max_ssim=ckp['max_ssim']
		max_psnr=ckp['max_psnr']
		psnrs=ckp['psnrs']
		ssims=ckp['ssims']
		with torch.no_grad():
			max_ssim,max_psnr=test(net,loader_test, max_psnr,max_ssim,start_step)
		print(f'start_step:{start_step} start training --- max_ssim: {max_ssim:.4f} max_psnr: {max_psnr:.4f}')
	else :
		print('train from scratch *** ')
	for step in range(start_step+1,opt.steps+1):
		net.train()
		lr=opt.lr
		if not opt.no_lr_sche:
			lr=lr_schedule_cosdecay(step,T)
			for param_group in optim.param_groups:
				param_group["lr"] = lr  
		x,y=next(iter(loader_train))
		x=x.to(opt.device);y=y.to(opt.device)
		out=net(x)
		loss=criterion[0](out,y)
		if opt.perloss:
			loss2=criterion[1](out,y)
			loss=loss+0.04*loss2
		
		loss.backward()
		
		optim.step()
		optim.zero_grad()
		losses.append(loss.item())
		print(f'\rtrain loss : {loss.item():.5f}| step :{step}/{opt.steps}|lr :{lr :.7f} |time_used :{(time.time()-start_time)/60 :.1f}',end='',flush=True)

		#with SummaryWriter(logdir=log_dir,comment=log_dir) as writer:
		#	writer.add_scalar('data/loss',loss,step)

		if step % opt.eval_step ==0 :
			with torch.no_grad():
				ssim_eval,psnr_eval=test(net,loader_test, max_psnr,max_ssim,step)

			print(f'\nstep :{step} |ssim:{ssim_eval:.4f}| psnr:{psnr_eval:.4f}')

			# with SummaryWriter(logdir=log_dir,comment=log_dir) as writer:
			# 	writer.add_scalar('data/ssim',ssim_eval,step)
			# 	writer.add_scalar('data/psnr',psnr_eval,step)
			# 	writer.add_scalars('group',{
			# 		'ssim':ssim_eval,
			# 		'psnr':psnr_eval,
			# 		'loss':loss
			# 	},step)
			ssims.append(ssim_eval)
			psnrs.append(psnr_eval)
			if ssim_eval > max_ssim and psnr_eval > max_psnr :
				max_ssim=max(max_ssim,ssim_eval)
				max_psnr=max(max_psnr,psnr_eval)
				torch.save({
							'step':step,
							'max_psnr':max_psnr,
							'max_ssim':max_ssim,
							'ssims':ssims,
							'psnrs':psnrs,
							'losses':losses,
							'model':net.state_dict()
				},opt.model_dir)
				print(f'\n model saved at step :{step}| max_psnr:{max_psnr:.4f}|max_ssim:{max_ssim:.4f}')

	np.save(f'./numpy_files/{model_name}_{opt.steps}_losses.npy',losses)
	np.save(f'./numpy_files/{model_name}_{opt.steps}_ssims.npy',ssims)
	np.save(f'./numpy_files/{model_name}_{opt.steps}_psnrs.npy',psnrs)

def test(net,loader_test,max_psnr,max_ssim,step):
	net.eval()
	torch.cuda.empty_cache()
	ssims=[]
	psnrs=[]
	#s=True
	for i ,(inputs,targets) in enumerate(loader_test):
		inputs=inputs.to(opt.device);targets=targets.to(opt.device)
		pred=net(inputs)
		# # print(pred)
		# tfs.ToPILImage()(torch.squeeze(targets.cpu())).save('111.png')
		# vutils.save_image(targets.cpu(),'target.png')
		# vutils.save_image(pred.cpu(),'pred.png')
		ssim1=ssim(pred,targets).item()
		psnr1=psnr(pred,targets)
		ssims.append(ssim1)
		psnrs.append(psnr1)
		#if (psnr1>max_psnr or ssim1 > max_ssim) and s :
		#		ts=vutils.make_grid([torch.squeeze(inputs.cpu()),torch.squeeze(targets.cpu()),torch.squeeze(pred.clamp(0,1).cpu())])
		#		vutils.save_image(ts,f'samples/{model_name}/{step}_{psnr1:.4}_{ssim1:.4}.png')
		#		s=False
	return np.mean(ssims) ,np.mean(psnrs)


if __name__ == "__main__":
	loader_train=loaders_[opt.trainset]
	loader_test=loaders_[opt.testset]
	net=models_[opt.net]
	net=net.to(opt.device)
	if opt.device=='cuda':
		net=torch.nn.DataParallel(net)
		cudnn.benchmark=True
	criterion = []
	criterion.append(nn.L1Loss().to(opt.device))
	if opt.perloss:
			vgg_model = vgg16(pretrained=True).features[:16]
			vgg_model = vgg_model.to(opt.device)
			for param in vgg_model.parameters():
				param.requires_grad = False
			criterion.append(PerLoss(vgg_model).to(opt.device))
	optimizer = optim.Adam(params=filter(lambda x: x.requires_grad, net.parameters()),lr=opt.lr, betas = (0.9, 0.999), eps=1e-08)
	optimizer.zero_grad()
	train(net,loader_train,loader_test,optimizer,criterion)
Overwriting ./FFA-Net/net/main.py
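
main.py decays the learning rate with a cosine schedule over opt.steps. A standalone copy of lr_schedule_cosdecay shows the values it produces for the 500k-step run used below (init_lr matches --lr=0.0001):

# Standalone copy of lr_schedule_cosdecay from main.py, to see how the
# learning rate falls over a 500k-step run.
import math

def lr_schedule_cosdecay(t, T, init_lr=1e-4):
    return 0.5 * (1 + math.cos(t * math.pi / T)) * init_lr

T = 500_000
for t in (1, 125_000, 250_000, 375_000, 475_000, 500_000):
    print(f"step {t:>7}: lr = {lr_schedule_cosdecay(t, T):.7f}")

Near step 475k this gives the lr of roughly 0.0000006 seen in the resumed run below.
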
In [11]:
%%writefile ./FFA-Net/net/test.py

import os,argparse
import numpy as np
from PIL import Image
from models import *
import torch
import torch.nn as nn
import torchvision.transforms as tfs 
import torchvision.utils as vutils
import matplotlib.pyplot as plt
from torchvision.utils import make_grid
abs=os.getcwd()+'/'
def tensorShow(tensors,titles=['smoke']):
        fig=plt.figure()
        for tensor,tit,i in zip(tensors,titles,range(len(tensors))):
            img = make_grid(tensor)
            npimg = img.numpy()
            ax = fig.add_subplot(221+i)
            ax.imshow(np.transpose(npimg, (1, 2, 0)))
            ax.set_title(tit)
        plt.show()

parser=argparse.ArgumentParser()
parser.add_argument('--task',type=str,default='its',help='its or ots')
parser.add_argument('--test_imgs',type=str,default='test_imgs',help='Test imgs folder')
opt=parser.parse_args()
dataset=opt.task
gps=3
blocks=19
img_dir=abs+opt.test_imgs+'/'
output_dir=abs+f'clear/'
print("pred_dir:",output_dir)
if not os.path.exists(output_dir):
    os.mkdir(output_dir)
model_dir=abs+f'trained_models/{dataset}_train_ffa_{gps}_{blocks}.pk'
device='cuda' if torch.cuda.is_available() else 'cpu'
ckp=torch.load(model_dir,map_location=device)
net=FFA(gps=gps,blocks=blocks)
net=nn.DataParallel(net)
net.load_state_dict(ckp['model'])
net.eval()
for im in os.listdir(img_dir):
    print(f'\r {im}',end='',flush=True)
    haze = Image.open(img_dir+im)
    haze1= tfs.Compose([
        tfs.ToTensor(),
        tfs.Normalize(mean=[0.64, 0.6, 0.58],std=[0.14,0.15, 0.152])
    ])(haze)[None,::]
    haze_no=tfs.ToTensor()(haze)[None,::]
    with torch.no_grad():
        pred = net(haze1)
    ts=torch.squeeze(pred.clamp(0,1).cpu())
    #tensorShow([haze_no,pred.clamp(0,1).cpu()],['smoke','pred'])
    vutils.save_image(ts,output_dir+im.split('.')[0]+'.jpg')
Overwriting ./FFA-Net/net/test.py

Change the Working Directory

In [12]:
%cd ./FFA-Net/net
/content/FFA-Net/net

Fine-tune the model on the custom dataset

Fine-tune the ITS Pretrained Model

In [13]:
!python main.py --net='ffa' --blocks=19 --gps=3 --bs=2 --lr=0.0001 --trainset='its_train' --testset='its_test' --steps=500000 --eval_step=5000
Namespace(blocks=19, bs=2, crop=False, crop_size=240, device='cuda', eval_step=5000, gps=3, lr=0.0001, model_dir='./trained_models/its_train_ffa_3_19.pk', net='ffa', no_lr_sche=False, perloss=False, resume=True, steps=500000, testset='its_test', trainset='its_train')
model_dir: ./trained_models/its_train_ffa_3_19.pk
2
/content/FFA-Net/net
crop size whole_img
crop size whole img
crop size whole_img
crop size whole img
log_dir : logs/its_train_ffa_3_19
model_name: its_train_ffa_3_19
resume from ./trained_models/its_train_ffa_3_19.pk
start_step:475000 start training --- max_ssim: 0.8306 max_psnr: 19.5928
train loss : 0.07586| step :475057/500000|lr :0.0000006 |time_used :7.9
Traceback (most recent call last):
  File "main.py", line 159, in <module>
    train(net,loader_train,loader_test,optimizer,criterion)
  File "main.py", line 75, in train
    optim.step()
  File "/usr/local/lib/python3.7/dist-packages/torch/optim/optimizer.py", line 89, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/torch/optim/adam.py", line 119, in step
    group['eps'])
  File "/usr/local/lib/python3.7/dist-packages/torch/optim/_functional.py", line 92, in adam
    denom = (exp_avg_sq.sqrt() / math.sqrt(bias_correction2)).add_(eps)
KeyboardInterrupt
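
Besides the weights, the checkpoint written by main.py stores the loss, SSIM, and PSNR history, so progress can be inspected even after an interrupted run. A minimal sketch, assuming the ITS checkpoint path above and the FFA-Net/net working directory:

# Load the checkpoint saved by main.py and inspect the stored history
# (keys match the torch.save dict in main.py above).
import torch
import matplotlib.pyplot as plt

ckp = torch.load("./trained_models/its_train_ffa_3_19.pk", map_location="cpu")
print("step:", ckp["step"], "| max_psnr:", ckp["max_psnr"], "| max_ssim:", ckp["max_ssim"])

plt.plot(ckp["losses"])
plt.xlabel("step")
plt.ylabel("L1 loss")
plt.title("training loss")
plt.show()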

Fine-tune the OTS Pretrained Model

In [14]:
#!python main.py --net='ffa' --crop --crop_size=240 --blocks=19 --gps=3 --bs=2 --lr=0.0001 --trainset='ots_train' --testset='ots_test' --steps=1000000 --eval_step=5000

Test the Model

In [15]:
!python test.py --task='its' --test_imgs='../../data/test/smoke'

#!python test.py --task='ots' --test_imgs='../../data/test/smoke'
pred_dir: /content/FFA-Net/net/clear/
 3906.jpg
Traceback (most recent call last):
  File "test.py", line 52, in <module>
    ts=torch.squeeze(pred.clamp(0,1).cpu())
KeyboardInterrupt
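
The test run above stopped early, so before packaging the results it is worth checking that every test image has a prediction in clear/. A minimal check under the same paths; re-run test.py if anything is missing:

# Compare test inputs against generated predictions (run from FFA-Net/net).
import os

test_ids = {os.path.splitext(f)[0] for f in os.listdir("../../data/test/smoke")}
pred_ids = {os.path.splitext(f)[0] for f in os.listdir("clear")}
missing = sorted(test_ids - pred_ids)
print(f"{len(pred_ids)}/{len(test_ids)} predictions present; missing ids: {missing[:10]}")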

Visualize a Prediction

In [16]:
import matplotlib.pyplot as plt

img = plt.imread("clear/927.jpg")
plt.imshow(img)
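
To judge the dehazing visually, it helps to place the hazy input next to the prediction. A minimal sketch, reusing image 927 from above (glob resolves the test input's file extension, which may differ from the .jpg written by test.py):

# Show the hazy test input and its dehazed prediction side by side.
import glob
import matplotlib.pyplot as plt

smoke = plt.imread(glob.glob("../../data/test/smoke/927.*")[0])
pred = plt.imread("clear/927.jpg")

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
axes[0].imshow(smoke)
axes[0].set_title("smoke")
axes[0].axis("off")
axes[1].imshow(pred)
axes[1].set_title("prediction")
axes[1].axis("off")
plt.show()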

Save the predictions to a zip file

In [ ]:
!zip submission.zip -r clear/ > /dev/null

Making a direct submission through aicrowd-cli

In [ ]:
!aicrowd submission create -c f1-smoke-elimination -f submission.zip