IIT-M RL-ASSIGNMENT-2-TAXI

Solution for submission 132354

A detailed solution for submission 132354 submitted for challenge IIT-M RL-ASSIGNMENT-2-TAXI

nischith_shadagopan_m_n_cs18b102

What is the notebook about?

Problem - Taxi Environment Algorithms

This problem deals with a taxi environment with stochastic actions. The tasks you have to do are:

  • Implement Policy Iteration
  • Implement Modified Policy Iteration
  • Implement Value Iteration
  • Implement Gauss Seidel Value Iteration
  • Visualize the results
  • Explain the results

How to use this notebook? 📝

  • This is a shared template and any edits you make here will not be saved. You should make a copy in your own drive. Click the "File" menu (top-left), then "Save a Copy in Drive". You can then work in your copy however you like.

  • Update the config parameters. You can define the common variables here:

Variable Description
AICROWD_DATASET_PATH: Path to the file containing test data. This should be an absolute path.
AICROWD_RESULTS_DIR: Path to write the output to.
AICROWD_ASSETS_DIR: In case your notebook needs additional files (like model weights, etc.), you can add them to a directory and specify the path to the directory here (please specify a relative path). The contents of this directory will be sent to AIcrowd for evaluation.
AICROWD_API_KEY: To submit your code to AIcrowd, you need to provide your account's API key. This key is available at https://www.aicrowd.com/participants/me

Setup AIcrowd Utilities 🛠

We use this to bundle the files for submission and create a submission on AIcrowd. Do not edit this block.

In [33]:
!pip install aicrowd-cli > /dev/null

AIcrowd Runtime Configuration 🧷

Get login API key from https://www.aicrowd.com/participants/me

In [34]:
import os

AICROWD_DATASET_PATH = os.getenv("DATASET_PATH", os.getcwd()+"/13d77bb0-b325-4e95-a03b-833eb6694acd_a2_taxi_inputs.zip")
AICROWD_RESULTS_DIR = os.getenv("OUTPUTS_DIR", "results")
In [ ]:

API Key valid
Saved API Key successfully!
In [ ]:
!unzip $AICROWD_DATASET_PATH
In [ ]:
DATASET_DIR = 'inputs/'

Taxi Environment

Read the environment code to understand the helper functions, but do not edit anything

In [ ]:
import numpy as np

class TaxiEnv_HW2:
    def __init__(self, states, actions, probabilities, rewards, initial_policy):
        self.possible_states = states
        self._possible_actions = {st: ac for st, ac in zip(states, actions)}
        self._ride_probabilities = {st: pr for st, pr in zip(states, probabilities)}
        self._ride_rewards = {st: rw for st, rw in zip(states, rewards)}
        self.initial_policy = initial_policy
        self._verify()

    def _check_state(self, state):
        assert state in self.possible_states, "State %s is not a valid state" % state

    def _verify(self):
        """ 
        Verify that data conditions are met:
        Number of actions matches shape of next state and actions
        Every probability distribution adds up to 1 
        """
        ns = len(self.possible_states)
        for state in self.possible_states:
            ac = self._possible_actions[state]
            na = len(ac)

            rp = self._ride_probabilities[state]
            assert np.all(rp.shape == (na, ns)), "Probabilities shape mismatch"
        
            rr = self._ride_rewards[state]
            assert np.all(rr.shape == (na, ns)), "Rewards shape mismatch"

            assert np.allclose(rp.sum(axis=1), 1), "Probabilities don't add up to 1"

    def possible_actions(self, state):
        """ Return all possible actions from a given state """
        self._check_state(state)
        return self._possible_actions[state]

    def ride_probabilities(self, state, action):
        """ 
        Returns all possible ride probabilities from a state for a given action
        For every action a list with the returned with values in the same order as self.possible_states
        """
        actions = self.possible_actions(state)
        ac_idx = actions.index(action)
        return self._ride_probabilities[state][ac_idx]

    def ride_rewards(self, state, action):
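        """ Returns the ride rewards from a state for the given action,
        in the same order as self.possible_states """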
        actions = self.possible_actions(state)
        ac_idx = actions.index(action)
        return self._ride_rewards[state][ac_idx]

Example of Environment usage

In [ ]:
def check_taxienv():
    # These are the values as used in the pdf, but they may be changed during submission, so do not hardcode anything

    states = ['A', 'B', 'C']

    actions = [['1','2','3'], ['1','2'], ['1','2','3']]

    probs = [np.array([[1/2,  1/4,  1/4],
                    [1/16, 3/4,  3/16],
                    [1/4,  1/8,  5/8]]),

            np.array([[1/2,   0,     1/2],
                    [1/16,  7/8,  1/16]]),

            np.array([[1/4,  1/4,  1/2],
                    [1/8,  3/4,  1/8],
                    [3/4,  1/16, 3/16]]),]

    rewards = [np.array([[10,  4,  8],
                        [ 8,  2,  4],
                        [ 4,  6,  4]]),

            np.array([[14,  0, 18],
                        [ 8, 16,  8]]),

            np.array([[10,  2,  8],
                        [6,   4,  2],
                        [4,   0,  8]]),]
    initial_policy = {'A': '1', 'B': '1', 'C': '1'}

    env = TaxiEnv_HW2(states, actions, probs, rewards, initial_policy)
    print("All possible states", env.possible_states)
    print("All possible actions from state B", env.possible_actions('B'))
    print("Ride probabilities from state A with action 2", env.ride_probabilities('A', '2'))
    print("Ride rewards from state C with action 3", env.ride_rewards('C', '3'))

    base_kwargs = {"states": states, "actions": actions, 
                "probabilities": probs, "rewards": rewards,
                "initial_policy": initial_policy}
    return base_kwargs

base_kwargs = check_taxienv()
env = TaxiEnv_HW2(**base_kwargs)

Task 1 - Policy Iteration

Run policy iteration on the environment and generate the policy and expected reward
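
For reference, the two alternating steps implemented below, in standard MDP notation (here $P(s' \mid s, a)$ and $r(s, s')$ correspond to taxienv.ride_probabilities and taxienv.ride_rewards):

Policy evaluation, iterated until the values stop changing:

$$J^{\pi}(s) = \sum_{s'} P(s' \mid s, \pi(s))\left[r(s, s') + \gamma J^{\pi}(s')\right]$$

Policy improvement, which makes the policy greedy with respect to the current values:

$$\pi'(s) = \arg\max_{a} \sum_{s'} P(s' \mid s, a)\left[r(s, s') + \gamma J^{\pi}(s')\right]$$

The two steps alternate until the policy no longer changes.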

In [ ]:
# 1.1 Policy Iteration
def policy_iteration(taxienv, gamma):
    # A list of all the states
    states = taxienv.possible_states
    # Initial values
    values = {s: 0 for s in states}

    # This is a dictionary of states to policies -> e.g {'A': '1', 'B': '2', 'C': '1'}
    policy = taxienv.initial_policy.copy()

    ## Begin code here
    while True:
        # Policy evaluation: iterate the Bellman equation for the current
        # policy until the values stop changing
        while True:
            delta = 0
            for s in states:
                old_value = values[s]
                a = policy[s]
                P = taxienv.ride_probabilities(s, a)
                r = taxienv.ride_rewards(s, a)
                total = 0
                for i in range(len(states)):
                    total += P[i] * (r[i] + gamma * values[states[i]])
                values[s] = total
                delta = max(delta, abs(old_value - total))
            if delta < 1e-8:
                break
        # Policy improvement: make the policy greedy with respect to the
        # current values
        stable = True
        for s in states:
            old_action = policy[s]
            candidates = []
            for a in taxienv.possible_actions(s):
                P = taxienv.ride_probabilities(s, a)
                r = taxienv.ride_rewards(s, a)
                total = 0
                for i in range(len(states)):
                    total += P[i] * (r[i] + gamma * values[states[i]])
                candidates.append(total)
            policy[s] = taxienv.possible_actions(s)[np.argmax(candidates)]
            if old_action != policy[s]:
                stable = False
        if stable:
            break

    # Hints - 
    # Do not hardcode anything
    # Only the final result is required for the results
    # Put any extra data in the "extra_info" dictionary for any plots etc.
    # Use the helper functions taxienv.ride_rewards, taxienv.ride_probabilities, taxienv.possible_actions
    # For the terminating condition, use the condition exactly as mentioned in the pdf

    
    # Put your extra information needed for plots etc in this dictionary
    extra_info = {}

    ## Do not edit below this line

    # Final results
    return {"Expected Reward": values, "Policy": policy}, extra_info

Task 2 - Policy Iteration for multiple values of gamma

Ideally this code should run as is

In [ ]:
# 1.2 Policy Iteration with different values of gamma
def run_policy_iteration(env):
    gamma_values = np.arange(5, 100, 5)/100
    results, extra_info = {}, {}
    for gamma in gamma_values:
        results[gamma], extra_info[gamma] = policy_iteration(env, gamma)
    return results, extra_info

results, extra_info = run_policy_iteration(env)

Task 3 - Modified Policy Iteration

Implement modified policy iteration (where policy evaluation is truncated to a fixed number m of value-iteration style steps)
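
In standard notation, the truncated evaluation step applies the Bellman operator of the current policy $\pi$ for m sweeps instead of solving for $J^{\pi}$ exactly:

$$J_{k+1}(s) = \sum_{s'} P(s' \mid s, \pi(s))\left[r(s, s') + \gamma J_k(s')\right], \qquad k = 0, 1, \ldots, m-1$$

The improvement step is the same greedy update as in policy iteration.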

In [ ]:
# 1.3 Modified Policy Iteration
def modified_policy_iteration(taxienv, gamma, m):
    # A list of all the states
    states = taxienv.possible_states
    # Initial values
    values = {s: 0 for s in states}

    # This is a dictionary of states to policies -> e.g {'A': '1', 'B': '2', 'C': '1'}
    policy = taxienv.initial_policy.copy()

    ## Begin code here
    while True:
        # Truncated policy evaluation: m synchronous sweeps of the Bellman
        # equation for the current policy
        for _ in range(m):
            temp = values.copy()
            for s in states:
                a = policy[s]
                P = np.array(taxienv.ride_probabilities(s, a))
                r = np.array(taxienv.ride_rewards(s, a))
                total = 0
                for i in range(len(states)):
                    total += P[i] * (r[i] + gamma * temp[states[i]])
                values[s] = total
        # Policy improvement: make the policy greedy with respect to the
        # current values
        stable = True
        for s in states:
            old_action = policy[s]
            max_reward = -np.inf
            for a in taxienv.possible_actions(s):
                P = taxienv.ride_probabilities(s, a)
                r = taxienv.ride_rewards(s, a)
                total = 0
                for i in range(len(states)):
                    total += P[i] * (r[i] + gamma * values[states[i]])
                if total > max_reward:
                    max_reward = total
                    policy[s] = a
            if old_action != policy[s]:
                stable = False
        if stable:
            break
    # Hints - 
    # Do not hardcode anything
    # Only the final result is required for the results
    # Put any extra data in the "extra_info" dictionary for any plots etc.
    # Use the helper functions taxienv.ride_rewards, taxienv.ride_probabilities, taxienv.possible_actions
    # For the terminating condition, use the condition exactly as mentioned in the pdf

    
    # Put your extra information needed for plots etc in this dictionary
    extra_info = {}

    ## Do not edit below this line


    # Final results
    return {"Expected Reward": values, "Policy": policy}, extra_info

Task 4 - Modified Policy Iteration for multiple values of m

Ideally this code should run as is

In [ ]:
def run_modified_policy_iteration(env):
    m_values = np.arange(1, 15)
    gamma = 0.9
    results, extra_info = {}, {}
    for m in m_values:
        results[m], extra_info[m] = modified_policy_iteration(env, gamma, m)
    return results, extra_info

results, extra_info = run_modified_policy_iteration(env)

Task 5 - Value Iteration

Implement value iteration and find the policy and expected rewards
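
The backup implemented below is the Bellman optimality update, applied synchronously to all states until the largest change in a sweep falls below the threshold:

$$J_{k+1}(s) = \max_{a} \sum_{s'} P(s' \mid s, a)\left[r(s, s') + \gamma J_k(s')\right]$$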

In [ ]:
# 1.4 Value Iteration
def value_iteration(taxienv, gamma):
    # A list of all the states
    states = taxienv.possible_states
    # Initial values
    values = {s: 0 for s in states}

    # This is a dictionary of states to policies -> e.g {'A': '1', 'B': '2', 'C': '1'}
    policy = taxienv.initial_policy.copy()

    ## Begin code here
    while True:
        delta = 0
        # Synchronous sweep: every update in this sweep reads from the
        # previous iterate stored in temp
        temp = values.copy()
        for s in states:
            max_total = -np.inf
            for a in taxienv.possible_actions(s):
                P = np.array(taxienv.ride_probabilities(s, a))
                r = np.array(taxienv.ride_rewards(s, a))
                total = 0
                for i in range(len(states)):
                    total += P[i] * (r[i] + gamma * temp[states[i]])
                if total > max_total:
                    max_total = total
                    policy[s] = a
            values[s] = max_total
            delta = max(delta, abs(temp[s] - values[s]))
        if delta < 1e-8:
            break
    # Hints - 
    # Do not hardcode anything
    # Only the final result is required for the results
    # Put any extra data in the "extra_info" dictionary for any plots etc.
    # Use the helper functions taxienv.ride_rewards, taxienv.ride_probabilities, taxienv.possible_actions
    # For the terminating condition, use the condition exactly as mentioned in the pdf


    # Put your extra information needed for plots etc in this dictionary
    extra_info = {}

    ## Do not edit below this line

    # Final results
    return {"Expected Reward": values, "Policy": policy}, extra_info

Task 6 - Value Iteration with multiple values of gamma

Ideally this code should run as is

In [ ]:
def run_value_iteration(env):
    gamma_values = np.arange(5, 100, 5)/100
    results, extra_info = {}, {}
    for gamma in gamma_values:
        results[gamma], extra_info[gamma] = value_iteration(env, gamma)
    return results, extra_info
  
results, extra_info = run_value_iteration(env)

Task 7 - Gauss Seidel Value Iteration

Implement Gauss Seidel Value Iteration
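
The backup is the same as in value iteration, but applied in place: when a state is updated, states updated earlier in the same sweep already contribute their new values (sweeping in the order of taxienv.possible_states):

$$J(s) \leftarrow \max_{a} \sum_{s'} P(s' \mid s, a)\left[r(s, s') + \gamma J(s')\right]$$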

In [ ]:
# 1.5 Gauss Seidel Value Iteration
def gauss_seidel_value_iteration(taxienv, gamma):
    # A list of all the states
    # For Gauss Seidel Value Iteration - iterate through the values in the same order
    states = taxienv.possible_states

    # Initial values
    values = {s: 0 for s in states}

    # This is a dictionary of states to policies -> e.g {'A': '1', 'B': '2', 'C': '1'}
    policy = taxienv.initial_policy.copy()

    # Hints - 
    # Do not hardcode anything
    # For Gauss Seidel Value Iteration - iterate through the values in the same order as taxienv.possible_states
    # Only the final result is required for the results
    # Put any extra data in the "extra_info" dictionary for any plots etc.
    # Use the helper functions taxienv.ride_rewards, taxienv.ride_probabilities, taxienv.possible_actions
    # For the terminating condition, use the condition exactly as mentioned in the pdf

    ## Begin code here
    while True:
        delta = 0
        # temp only remembers the values from the start of the sweep for the
        # convergence check; the backups read from `values`, which is updated
        # in place (Gauss-Seidel)
        temp = values.copy()
        for s in states:
            max_total = -np.inf
            for a in taxienv.possible_actions(s):
                P = np.array(taxienv.ride_probabilities(s, a))
                r = np.array(taxienv.ride_rewards(s, a))
                total = 0
                for i in range(len(states)):
                    total += P[i] * (r[i] + gamma * values[states[i]])
                if total > max_total:
                    max_total = total
                    policy[s] = a
            values[s] = max_total
            delta = max(delta, abs(temp[s] - values[s]))
        if delta < 1e-8:
            break
    # Put your extra information needed for plots etc in this dictionary
    extra_info = {}

    ## Do not edit below this line

    # Final results
    return {"Expected Reward": values, "Policy": policy}, extra_info

Task 8 - Gauss Seidel Value Iteration with multiple values of gamma

Ideally this code should run as is

In [ ]:
def run_gauss_seidel_value_iteration(env):
    gamma_values = np.arange(5, 100, 5)/100
    results, extra_info = {}, {}
    for gamma in gamma_values:
        results[gamma], extra_info[gamma] = gauss_seidel_value_iteration(env, gamma)
    return results, extra_info

results, extra_info = run_gauss_seidel_value_iteration(env)

Generate Results ✅

In [ ]:
# Do not edit this cell
def get_results(kwargs):

    taxienv = TaxiEnv_HW2(**kwargs)

    policy_iteration_results = run_policy_iteration(taxienv)[0]
    modified_policy_iteration_results = run_modified_policy_iteration(taxienv)[0]
    value_iteration_results = run_value_iteration(taxienv)[0]
    gs_vi_results = run_gauss_seidel_value_iteration(taxienv)[0]

    final_results = {}
    final_results["policy_iteration"] = policy_iteration_results
    final_results["modifed_policy_iteration"] = modified_policy_iteration_results
    final_results["value_iteration"] = value_iteration_results
    final_results["gauss_seidel_iteration"] = gs_vi_results

    return final_results
In [ ]:
# Do not edit this cell, generate results with it as is
if not os.path.exists(AICROWD_RESULTS_DIR):
    os.mkdir(AICROWD_RESULTS_DIR)

for params_file in os.listdir(DATASET_DIR):
  kwargs = np.load(os.path.join(DATASET_DIR, params_file), allow_pickle=True).item()
  results = get_results(kwargs)
  idx = params_file.split('_')[-1][:-4]
  np.save(os.path.join(AICROWD_RESULTS_DIR, 'results_' + idx), results)

Check your local score

This score is not your final score, and it doesn't use the marks weightages. This is only for your reference of how arrays are matched and with what tolerance.
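
For intuition, a minimal sketch of the matching rule (the arrays below are made-up values, not actual targets): np.allclose(result, target, rtol=3) passes when, element-wise, $|result - target| \le atol + 3\,|target|$, with atol defaulting to 1e-8, so rtol=3 is a deliberately loose check.

In [ ]:
import numpy as np

# Made-up example values, only to illustrate the tolerance semantics
target = np.array([10.0, 20.0, 30.0])
result = np.array([12.0, 25.0, 90.0])
print(np.allclose(result, target, rtol=3))    # True: every |diff| <= 3 * |target|
print(np.allclose(result, target, rtol=0.1))  # False: e.g. |12 - 10| > 0.1 * 10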

In [ ]:
# Check your score on the given test cases (There are more private test cases not provided)
target_folder = 'targets'
result_folder = AICROWD_RESULTS_DIR

def check_algo_match(results, targets):
    param_matches = []
    for k in results:
        param_results = results[k]
        param_targets = targets[k]
        policy_match = param_results['Policy'] == param_targets['Policy']
        rv = [v for k, v in param_results['Expected Reward'].items()]
        tv = [v for k, v in param_targets['Expected Reward'].items()]
        rewards_match = np.allclose(rv, tv, rtol=3)
        equal = rewards_match and policy_match
        param_matches.append(equal)
    return np.mean(param_matches)

def check_score(target_folder, result_folder):
    match = []
    for out_file in os.listdir(result_folder):
        res_file = os.path.join(result_folder, out_file)
        results = np.load(res_file, allow_pickle=True).item()
        idx = out_file.split('_')[-1][:-4]  # Extract the file number
        target_file = os.path.join(target_folder, f"targets_{idx}.npy")
        targets = np.load(target_file, allow_pickle=True).item()
        algo_match = []
        for k in targets:
            algo_results = results[k]
            algo_targets = targets[k]
            algo_match.append(check_algo_match(algo_results, algo_targets))
        match.append(np.mean(algo_match))
    print(match)
    return np.mean(match)

if os.path.exists(target_folder):
    print("Shared data Score (normalized to 1):", check_score(target_folder, result_folder))

Visualize results of Policy Iteration with multiple values of gamma

Add code to visualize the results

Optimal value for each state for different values of gamma

In [ ]:
## Visualize policy iteration with multiple values of gamma
import pandas as pd
gamma_values = np.arange(5, 100, 5)/100
results, extra_info = run_policy_iteration(env)
temp1 = [results[gamma]["Expected Reward"] for gamma in gamma_values]
temp2 = [results[gamma]["Policy"] for gamma in gamma_values]
Jopt = pd.DataFrame(temp1)
policyopt = pd.DataFrame(temp2)
Jopt['gamma'] = gamma_values
policyopt['gamma'] = gamma_values
pd.set_option("display.precision", 3)
print(Jopt)
print(policyopt)

Subjective questions

1.a How do values of $\gamma$ affect the results of policy iteration?

  • With smaller $\gamma$, future rewards are less important, so the optimal policy is driven mostly by the immediate rewards; with higher $\gamma$, future rewards play a more important role.

  • For smaller $\gamma$, the optimal policy is action 1 in every state, because the one-stage reward of action 1 is the highest among the actions for most transitions.

  • For higher $\gamma$, future rewards also matter, and we recover the same optimal policy we obtained in Assignment 1, where we solved the problem with DP. See the quick check below.
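
A quick check, reusing `results` from run_policy_iteration above (its gamma grid np.arange(5, 100, 5)/100 contains both 0.05 and 0.95):

In [ ]:
results, _ = run_policy_iteration(env)
print("Policy at gamma = 0.05:", results[0.05]["Policy"])
print("Policy at gamma = 0.95:", results[0.95]["Policy"])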

1.b For modified policy iteration, do you find any improvement if you choose m=10?

In [ ]:
import matplotlib.pyplot as plt

results, extra_info = run_modified_policy_iteration(env)
m_values = np.arange(1, 15)
rewards = {}
for s in env.possible_states:
    rewards[s] = []
for m in m_values:
    rew = results[m]["Expected Reward"]
    for s in env.possible_states:
        rewards[s].append(rew[s])
for s in env.possible_states:
    plt.plot(m_values, rewards[s])
    plt.title('J('+str(s)+') vs m')
    plt.xlabel('m')
    plt.ylabel('J('+str(s)+')')    
    plt.show()

[Plots generated above: J(s) vs m for each state s]

As the plots show, higher m yields better expected rewards, because more evaluation sweeps per improvement step approximate the policy's value more accurately than a smaller m does.

1.c Compare and contrast the behavior of Value Iteration and Gauss Seidel Value Iteration

Gauss Seidel value iteration converges faster than value iteration, because each sweep reuses the values already updated earlier in the same sweep instead of the previous iterate. Hence, at any given iteration, Gauss Seidel value iteration typically gives a better (closer to optimal) estimate of the reward. The sketch below counts the sweeps each method needs.
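
A minimal sketch to substantiate this (not part of the graded solution: it re-implements the two backups side by side and counts full sweeps until the same stopping condition, delta < 1e-8, is met):

In [ ]:
def count_sweeps(taxienv, gamma, in_place):
    # Counts sweeps until convergence, with synchronous backups (standard
    # value iteration) or in-place backups (Gauss Seidel value iteration)
    states = taxienv.possible_states
    values = {s: 0 for s in states}
    sweeps = 0
    while True:
        snapshot = values.copy()
        # In-place backups read from `values`, which changes during the sweep;
        # synchronous backups read from the frozen snapshot
        source = values if in_place else snapshot
        delta = 0
        for s in states:
            best = -np.inf
            for a in taxienv.possible_actions(s):
                P = taxienv.ride_probabilities(s, a)
                r = taxienv.ride_rewards(s, a)
                total = sum(P[i] * (r[i] + gamma * source[states[i]])
                            for i in range(len(states)))
                best = max(best, total)
            values[s] = best
            delta = max(delta, abs(snapshot[s] - values[s]))
        sweeps += 1
        if delta < 1e-8:
            break
    return sweeps

print("Value iteration sweeps:             ", count_sweeps(env, 0.9, in_place=False))
print("Gauss Seidel value iteration sweeps:", count_sweeps(env, 0.9, in_place=True))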

Submit to AIcrowd 🚀

In [ ]:
!DATASET_PATH=$AICROWD_DATASET_PATH aicrowd notebook submit -c iit-m-rl-assignment-2-taxi -a assets