From db5253595eb216eafd16281b2a5df8053e2b9657 Mon Sep 17 00:00:00 2001 From: "eldokarim.rk@gmail.com" Date: Tue, 19 Nov 2024 12:05:55 -0500 Subject: [PATCH] Updated --- 02_activities/assignments/assignment_2.ipynb | 117 +- 02_activities/assignments/assignment_2a.ipynb | 1089 ----------------- 2 files changed, 47 insertions(+), 1159 deletions(-) delete mode 100644 02_activities/assignments/assignment_2a.ipynb diff --git a/02_activities/assignments/assignment_2.ipynb b/02_activities/assignments/assignment_2.ipynb index ca2cdf9a9..49b476b2f 100644 --- a/02_activities/assignments/assignment_2.ipynb +++ b/02_activities/assignments/assignment_2.ipynb @@ -25,7 +25,7 @@ }, { "cell_type": "code", - "execution_count": 2, + "execution_count": 14, "metadata": {}, "outputs": [], "source": [ @@ -43,7 +43,7 @@ }, { "cell_type": "code", - "execution_count": 5, + "execution_count": 15, "metadata": {}, "outputs": [], "source": [ @@ -54,7 +54,7 @@ " 'native-country', 'income'\n", "]\n", "adult_dt = (pd.read_csv(r\"C:\\Users\\ibast\\Downloads\\adult\\adult.data\", header = None, names = columns)\n", - " .assign(income = lambda x: (x.income.str.strip() == '>50K')*1))\n" + " .assign(income = lambda x: (x.income.str.strip() == '>50K')*1))" ] }, { @@ -75,7 +75,7 @@ }, { "cell_type": "code", - "execution_count": 10, + "execution_count": 16, "metadata": {}, "outputs": [ { @@ -90,7 +90,6 @@ } ], "source": [ - "\n", "X = adult_dt.drop(columns=['income'])\n", "Y = adult_dt['income']\n", "\n", @@ -100,7 +99,7 @@ "print(f'X_train shape: {X_train.shape}')\n", "print(f'X_test shape: {X_test.shape}')\n", "print(f'Y_train shape: {Y_train.shape}')\n", - "print(f'Y_test shape: {Y_test.shape}')\n" + "print(f'Y_test shape: {Y_test.shape}')" ] }, { @@ -119,18 +118,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "train_test_split, the random_state ensures that the random shuffling of data during splitting happens in a consistent way every time you run the code. This parameter can be set to any integer, and as long as the value remains the same, you will get the same split every time.\n", - "Useful for Reproducibility:\n", - "Consistency in Results: When working with machine learning models, it’s essential to obtain consistent results. By setting a fixed random_state, we ensure that any analysis or model we develop on the split data remains reproducible by others.\n", - "Comparison Across Models: If you compare multiple models or modify your approach, having the same train-test split allows you to attribute differences in results to model changes, not data variations.\n", - "Ease of Collaboration: For shared projects or published work, reproducibility ensures that collaborators or reviewers can replicate your results, strengthening the credibility and reliability of findings." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "*(Comment here.)*" + "train_test_split, the random_state ensures that the random shuffling of data during splitting happens in a consistent way every time you run the code. This parameter can be set to any integer, and as long as the value remains the same, you will get the same split every time. Useful for Reproducibility: Consistency in Results: When working with machine learning models, it’s essential to obtain consistent results. By setting a fixed random_state, we ensure that any analysis or model we develop on the split data remains reproducible by others. 
Comparison Across Models: If you compare multiple models or modify your approach, having the same train-test split allows you to attribute differences in results to model changes, not data variations. Ease of Collaboration: For shared projects or published work, reproducibility ensures that collaborators or reviewers can replicate your results, strengthening the credibility and reliability of findings." ] }, { @@ -168,11 +156,10 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 7, "metadata": {}, "outputs": [], "source": [ - "\n", "from sklearn.compose import ColumnTransformer\n", "from sklearn.impute import KNNImputer, SimpleImputer\n", "from sklearn.preprocessing import RobustScaler, OneHotEncoder\n", @@ -198,7 +185,7 @@ " ('num', numerical_transformer, numerical_features),\n", " ('cat', categorical_transformer, categorical_features)\n", " ]\n", - ")\n" + ")" ] }, { @@ -219,7 +206,7 @@ }, { "cell_type": "code", - "execution_count": 19, + "execution_count": null, "metadata": {}, "outputs": [ { @@ -720,14 +707,12 @@ " ('classifier', RandomForestClassifier())])" ] }, - "execution_count": 19, + "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ - "\n", - "\n", "preprocessing = ColumnTransformer(\n", " transformers=[\n", " ('num', Pipeline([\n", @@ -747,7 +732,8 @@ " ('classifier', RandomForestClassifier())\n", "])\n", "\n", - "pipe.fit(X_train, y_train)\n" + "# pipeline fitting\n", + "pipe.fit(X_train, Y_train) \n" ] }, { @@ -765,18 +751,15 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 17, "metadata": {}, "outputs": [], "source": [ - "\n", - "\n", "# model pipeline\n", "model_pipeline = Pipeline(steps=[\n", " ('preprocessing', preprocessor), # Add the ColumnTransformer for preprocessing\n", " ('classifier', RandomForestClassifier(random_state=42)) # Add the Random Forest classifier\n", - "])\n", - "\n" + "])" ] }, { @@ -788,19 +771,26 @@ }, { "cell_type": "code", - "execution_count": 12, + "execution_count": 18, "metadata": {}, "outputs": [], "source": [ "from sklearn.model_selection import train_test_split\n", "\n", "# Assuming `X` is the features DataFrame and `Y` is the target DataFrame\n", - "X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.3, random_state=42)\n" + "X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.3, random_state=42)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Calculate the mean of each metric. 
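As a sketch of what "the mean of each metric" could look like once all four requested scorers are wired in (the notebook's `cross_validate` cell below scores only `neg_log_loss`; `model_pipeline`, `X_train`, and `y_train` are assumed from the surrounding cells):

```python
from sklearn.model_selection import cross_validate
import pandas as pd

# The four metrics the assignment asks for, via sklearn's predefined scorer names.
scoring = ['neg_log_loss', 'roc_auc', 'accuracy', 'balanced_accuracy']

cv_results = cross_validate(model_pipeline, X_train, y_train,
                            cv=5, scoring=scoring, return_train_score=True)

# Keep the validation-fold columns and average each metric over the five folds.
fold_means = pd.DataFrame(cv_results).filter(regex='^test_').mean()
print(fold_means)
```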
" ] }, { "cell_type": "code", - "execution_count": null, + "execution_count": 19, "metadata": {}, "outputs": [ { @@ -841,36 +831,36 @@ " \n", " \n", " 3\n", - " 11.685792\n", - " 0.115046\n", + " 12.708740\n", + " 0.113908\n", " -0.356791\n", " -0.082511\n", " \n", " \n", " 0\n", - " 11.893299\n", - " 0.141734\n", + " 12.483332\n", + " 0.128106\n", " -0.357675\n", " -0.081189\n", " \n", " \n", " 1\n", - " 12.135140\n", - " 0.110448\n", + " 15.979565\n", + " 0.129162\n", " -0.369239\n", " -0.081516\n", " \n", " \n", " 2\n", - " 12.090408\n", - " 0.109544\n", + " 13.308411\n", + " 0.123250\n", " -0.375988\n", " -0.081469\n", " \n", " \n", " 4\n", - " 11.872093\n", - " 0.122813\n", + " 11.884736\n", + " 0.111188\n", " -0.380379\n", " -0.081368\n", " \n", @@ -880,14 +870,14 @@ ], "text/plain": [ " fit_time score_time test_neg_log_loss train_neg_log_loss\n", - "3 11.685792 0.115046 -0.356791 -0.082511\n", - "0 11.893299 0.141734 -0.357675 -0.081189\n", - "1 12.135140 0.110448 -0.369239 -0.081516\n", - "2 12.090408 0.109544 -0.375988 -0.081469\n", - "4 11.872093 0.122813 -0.380379 -0.081368" + "3 12.708740 0.113908 -0.356791 -0.082511\n", + "0 12.483332 0.128106 -0.357675 -0.081189\n", + "1 15.979565 0.129162 -0.369239 -0.081516\n", + "2 13.308411 0.123250 -0.375988 -0.081469\n", + "4 11.884736 0.111188 -0.380379 -0.081368" ] }, - "execution_count": 13, + "execution_count": 19, "metadata": {}, "output_type": "execute_result" } @@ -905,19 +895,12 @@ "cv_df_sorted = cv_df.sort_values(by='test_neg_log_loss', ascending=False)\n", "\n", "# results\n", - "cv_df_sorted\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Calculate the mean of each metric. " + "cv_df_sorted" ] }, { "cell_type": "code", - "execution_count": 18, + "execution_count": 20, "metadata": {}, "outputs": [ { @@ -931,7 +914,6 @@ } ], "source": [ - "\n", "cv_df = pd.DataFrame(cv_results)\n", "\n", "test_metrics = cv_df.filter(regex='test_')\n", @@ -939,7 +921,7 @@ "mean_metrics = test_metrics.mean()\n", "\n", "print(\"Mean of cross-validation folds:\")\n", - "print(mean_metrics)\n" + "print(mean_metrics)" ] }, { @@ -953,7 +935,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 21, "metadata": {}, "outputs": [ { @@ -961,7 +943,7 @@ "output_type": "stream", "text": [ "Performance metrics on the testing data:\n", - "{'negative_log_loss': -0.378541864258079, 'roc_auc': np.float64(0.9016127012724575), 'accuracy': 0.8548469648889344, 'balanced_accuracy': np.float64(0.7752599723955951)}\n" + "{'negative_log_loss': -0.37940273142901965, 'roc_auc': np.float64(0.8994978803967568), 'accuracy': 0.8540280479066434, 'balanced_accuracy': np.float64(0.7732333499701753)}\n" ] } ], @@ -983,7 +965,7 @@ "}\n", "\n", "print(\"Performance metrics on the testing data:\")\n", - "print(performance_metrics)\n" + "print(performance_metrics)" ] }, { @@ -1013,12 +995,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Binary Classification Compatibility:\n", - "Most machine learning models, including the RandomForestClassifier, expect numerical labels for classification tasks. By converting the income column into binary values (0 and 1), where 0 represents an income of <=50K and 1 represents >50K, you make the target variable suitable for classification.\n", - "Simplification:\n", - "Instead of dealing with string labels like \">50K\" and \"<50K\", the target variable is converted into numeric form directly within the data loading process. 
This simplifies data handling and makes it easier to apply statistical methods.\n", - "Improved Model Performance:\n", - "Some machine learning algorithms, such as gradient boosting and logistic regression, perform better with numerical labels for binary classification, as they can compute loss functions more efficiently. By recoding the target, these algorithms can better compute the decision boundary.\n" + "Binary Classification Compatibility: Most machine learning models, including the RandomForestClassifier, expect numerical labels for classification tasks. By converting the income column into binary values (0 and 1), where 0 represents an income of <=50K and 1 represents >50K, you make the target variable suitable for classification. Simplification: Instead of dealing with string labels like \">50K\" and \"<50K\", the target variable is converted into numeric form directly within the data loading process. This simplifies data handling and makes it easier to apply statistical methods. Improved Model Performance: Some machine learning algorithms, such as gradient boosting and logistic regression, perform better with numerical labels for binary classification, as they can compute loss functions more efficiently. By recoding the target, these algorithms can better compute the decision boundary." ] }, { diff --git a/02_activities/assignments/assignment_2a.ipynb b/02_activities/assignments/assignment_2a.ipynb deleted file mode 100644 index ca2cdf9a9..000000000 --- a/02_activities/assignments/assignment_2a.ipynb +++ /dev/null @@ -1,1089 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Assignment 2" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "In this assigment, we will work with the *Adult* data set. Please download the data from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/dataset/2/adult). Extract the data files into the subdirectory: `../05_src/data/adult/` (relative to `./05_src/`)." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Load the data\n", - "\n", - "Assuming that the files `adult.data` and `adult.test` are in `../05_src/data/adult/`, then you can use the code below to load them." 
- ] - }, - { - "cell_type": "code", - "execution_count": 2, - "metadata": {}, - "outputs": [], - "source": [ - "# import prerequisite python modules\n", - "import pandas as pd\n", - "import numpy as np\n", - "from sklearn.model_selection import train_test_split, cross_validate\n", - "from sklearn.pipeline import Pipeline\n", - "from sklearn.compose import ColumnTransformer\n", - "from sklearn.preprocessing import OneHotEncoder, RobustScaler\n", - "from sklearn.impute import SimpleImputer, KNNImputer\n", - "from sklearn.ensemble import RandomForestClassifier\n", - "from sklearn.metrics import accuracy_score, balanced_accuracy_score, log_loss, roc_auc_score" - ] - }, - { - "cell_type": "code", - "execution_count": 5, - "metadata": {}, - "outputs": [], - "source": [ - "import pandas as pd\n", - "columns = [\n", - " 'age', 'workclass', 'fnlwgt', 'education', 'education-num', 'marital-status',\n", - " 'occupation', 'relationship', 'race', 'sex', 'capital-gain', 'capital-loss', 'hours-per-week',\n", - " 'native-country', 'income'\n", - "]\n", - "adult_dt = (pd.read_csv(r\"C:\\Users\\ibast\\Downloads\\adult\\adult.data\", header = None, names = columns)\n", - " .assign(income = lambda x: (x.income.str.strip() == '>50K')*1))\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Get X and Y\n", - "\n", - "Create the features data frame and target data:\n", - "\n", - "+ Create a dataframe `X` that holds the features (all columns that are not `income`).\n", - "+ Create a dataframe `Y` that holds the target data (`income`).\n", - "+ From `X` and `Y`, obtain the training and testing data sets:\n", - "\n", - " - Use a train-test split of 70-30%. \n", - " - Set the random state of the splitting function to 42." - ] - }, - { - "cell_type": "code", - "execution_count": 10, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "X_train shape: (22792, 14)\n", - "X_test shape: (9769, 14)\n", - "Y_train shape: (22792,)\n", - "Y_test shape: (9769,)\n" - ] - } - ], - "source": [ - "\n", - "X = adult_dt.drop(columns=['income'])\n", - "Y = adult_dt['income']\n", - "\n", - "X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.3, random_state=42)\n", - "\n", - "# shapes \n", - "print(f'X_train shape: {X_train.shape}')\n", - "print(f'X_test shape: {X_test.shape}')\n", - "print(f'Y_train shape: {Y_train.shape}')\n", - "print(f'Y_test shape: {Y_test.shape}')\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Random States\n", - "\n", - "Please comment: \n", - "\n", - "+ What is the [random state](https://scikit-learn.org/stable/glossary.html#term-random_state) of the [splitting function](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html)? \n", - "+ Why is it [useful](https://en.wikipedia.org/wiki/Reproducibility)?" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "train_test_split, the random_state ensures that the random shuffling of data during splitting happens in a consistent way every time you run the code. This parameter can be set to any integer, and as long as the value remains the same, you will get the same split every time.\n", - "Useful for Reproducibility:\n", - "Consistency in Results: When working with machine learning models, it’s essential to obtain consistent results. 
By setting a fixed random_state, we ensure that any analysis or model we develop on the split data remains reproducible by others.\n", - "Comparison Across Models: If you compare multiple models or modify your approach, having the same train-test split allows you to attribute differences in results to model changes, not data variations.\n", - "Ease of Collaboration: For shared projects or published work, reproducibility ensures that collaborators or reviewers can replicate your results, strengthening the credibility and reliability of findings." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "*(Comment here.)*" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Preprocessing\n", - "\n", - "Create a [Column Transformer](https://scikit-learn.org/stable/modules/generated/sklearn.compose.ColumnTransformer.html) that treats the features as follows:\n", - "\n", - "- Numerical variables\n", - "\n", - " * Apply [KNN-based imputation for completing missing values](https://scikit-learn.org/stable/modules/generated/sklearn.impute.KNNImputer.html):\n", - " \n", - " + Consider the 7 nearest neighbours.\n", - " + Weight each neighbour by the inverse of its distance, causing closer neigbours to have more influence than more distant ones.\n", - " * [Scale features using statistics that are robust to outliers](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.RobustScaler.html#sklearn.preprocessing.RobustScaler).\n", - "\n", - "- Categorical variables: \n", - " \n", - " * Apply a [simple imputation strategy](https://scikit-learn.org/stable/modules/generated/sklearn.impute.SimpleImputer.html#sklearn.impute.SimpleImputer):\n", - "\n", - " + Use the most frequent value to complete missing values, also called the *mode*.\n", - "\n", - " * Apply [one-hot encoding](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html):\n", - " \n", - " + Handle unknown labels if they exist.\n", - " + Drop one column for binary variables.\n", - " \n", - " \n", - "The column transformer should look like this:\n", - "\n", - "![](./images/assignment_2__column_transformer.png)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "\n", - "from sklearn.compose import ColumnTransformer\n", - "from sklearn.impute import KNNImputer, SimpleImputer\n", - "from sklearn.preprocessing import RobustScaler, OneHotEncoder\n", - "from sklearn.pipeline import Pipeline\n", - "\n", - "numerical_features = X.select_dtypes(include=[np.number]).columns.tolist()\n", - "categorical_features = X.select_dtypes(exclude=[np.number]).columns.tolist()\n", - "\n", - "# Pipeline numerical \n", - "numerical_transformer = Pipeline(steps=[\n", - " ('knn_imputer', KNNImputer(n_neighbors=7, weights='distance')), # Impute using KNN with distance weighting\n", - " ('robust_scaler', RobustScaler()) # Robust scaling for outlier resistance\n", - "])\n", - "\n", - "# Pipeline categorical \n", - "categorical_transformer = Pipeline(steps=[\n", - " ('simple_imputer', SimpleImputer(strategy='most_frequent')), # Impute using the mode\n", - " ('one_hot_encoder', OneHotEncoder(handle_unknown='ignore', drop='if_binary')) # One-hot encode, drop for binary\n", - "])\n", - "\n", - "preprocessor = ColumnTransformer(\n", - " transformers=[\n", - " ('num', numerical_transformer, numerical_features),\n", - " ('cat', categorical_transformer, categorical_features)\n", - " ]\n", - ")\n" - ] - }, - { - "cell_type": 
"markdown", - "metadata": {}, - "source": [ - "## Model Pipeline\n", - "\n", - "Create a [model pipeline](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html): \n", - "\n", - "+ Add a step labelled `preprocessing` and assign the Column Transformer from the previous section.\n", - "+ Add a step labelled `classifier` and assign a [`RandomForestClassifier()`](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html) to it.\n", - "\n", - "The pipeline looks like this:\n", - "\n", - "![](./images/assignment_2__pipeline.png)" - ] - }, - { - "cell_type": "code", - "execution_count": 19, - "metadata": {}, - "outputs": [ - { - "data": { - "text/html": [ - "
" - ], - "text/plain": [ - "Pipeline(steps=[('preprocessing',\n", - " ColumnTransformer(transformers=[('num',\n", - " Pipeline(steps=[('imputer',\n", - " KNNImputer(n_neighbors=7,\n", - " weights='distance')),\n", - " ('scaler',\n", - " RobustScaler())]),\n", - " ['age', 'fnlwgt',\n", - " 'education-num',\n", - " 'capital-gain',\n", - " 'capital-loss',\n", - " 'hours-per-week']),\n", - " ('cat',\n", - " Pipeline(steps=[('imputer',\n", - " SimpleImputer(strategy='most_frequent')),\n", - " ('onehot',\n", - " OneHotEncoder(drop='if_binary',\n", - " handle_unknown='ignore'))]),\n", - " ['workclass', 'education',\n", - " 'marital-status',\n", - " 'occupation', 'relationship',\n", - " 'race', 'sex',\n", - " 'native-country'])])),\n", - " ('classifier', RandomForestClassifier())])" - ] - }, - "execution_count": 19, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "\n", - "\n", - "preprocessing = ColumnTransformer(\n", - " transformers=[\n", - " ('num', Pipeline([\n", - " ('imputer', KNNImputer(n_neighbors=7, weights=\"distance\")),\n", - " ('scaler', RobustScaler())\n", - " ]), numerical_features),\n", - " ('cat', Pipeline([\n", - " ('imputer', SimpleImputer(strategy='most_frequent')),\n", - " ('onehot', OneHotEncoder(handle_unknown='ignore', drop='if_binary'))\n", - " ]), categorical_features)\n", - " ]\n", - ")\n", - "\n", - "# Define the pipeline\n", - "pipe = Pipeline(steps=[\n", - " ('preprocessing', preprocessing),\n", - " ('classifier', RandomForestClassifier())\n", - "])\n", - "\n", - "pipe.fit(X_train, y_train)\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Cross-Validation\n", - "\n", - "Evaluate the model pipeline using [`cross_validate()`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_validate.html):\n", - "\n", - "+ Measure the following [preformance metrics](https://scikit-learn.org/stable/modules/model_evaluation.html#common-cases-predefined-values): negative log loss, ROC AUC, accuracy, and balanced accuracy.\n", - "+ Report the training and validation results. \n", - "+ Use five folds.\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "\n", - "\n", - "# model pipeline\n", - "model_pipeline = Pipeline(steps=[\n", - " ('preprocessing', preprocessor), # Add the ColumnTransformer for preprocessing\n", - " ('classifier', RandomForestClassifier(random_state=42)) # Add the Random Forest classifier\n", - "])\n", - "\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Display the fold-level results as a pandas data frame and sorted by negative log loss of the test (validation) set." - ] - }, - { - "cell_type": "code", - "execution_count": 12, - "metadata": {}, - "outputs": [], - "source": [ - "from sklearn.model_selection import train_test_split\n", - "\n", - "# Assuming `X` is the features DataFrame and `Y` is the target DataFrame\n", - "X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.3, random_state=42)\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [ - { - "name": "stderr", - "output_type": "stream", - "text": [ - "c:\\Users\\ibast\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\sklearn\\preprocessing\\_encoders.py:242: UserWarning: Found unknown categories in columns [7] during transform. These unknown categories will be encoded as all zeros\n", - " warnings.warn(\n" - ] - }, - { - "data": { - "text/html": [ - "
\n", - "\n", - "\n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - " \n", - "
fit_timescore_timetest_neg_log_losstrain_neg_log_loss
311.6857920.115046-0.356791-0.082511
011.8932990.141734-0.357675-0.081189
112.1351400.110448-0.369239-0.081516
212.0904080.109544-0.375988-0.081469
411.8720930.122813-0.380379-0.081368
\n", - "
" - ], - "text/plain": [ - " fit_time score_time test_neg_log_loss train_neg_log_loss\n", - "3 11.685792 0.115046 -0.356791 -0.082511\n", - "0 11.893299 0.141734 -0.357675 -0.081189\n", - "1 12.135140 0.110448 -0.369239 -0.081516\n", - "2 12.090408 0.109544 -0.375988 -0.081469\n", - "4 11.872093 0.122813 -0.380379 -0.081368" - ] - }, - "execution_count": 13, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "from sklearn.model_selection import cross_validate\n", - "\n", - "scoring = ['neg_log_loss']\n", - "\n", - "cv_results = cross_validate(model_pipeline, X_train, y_train, cv=5, scoring=scoring, return_train_score=True)\n", - "\n", - "# Convert results to a DataFrame\n", - "cv_df = pd.DataFrame(cv_results)\n", - "\n", - "cv_df_sorted = cv_df.sort_values(by='test_neg_log_loss', ascending=False)\n", - "\n", - "# results\n", - "cv_df_sorted\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Calculate the mean of each metric. " - ] - }, - { - "cell_type": "code", - "execution_count": 18, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Mean of cross-validation folds:\n", - "test_neg_log_loss -0.368014\n", - "dtype: float64\n" - ] - } - ], - "source": [ - "\n", - "cv_df = pd.DataFrame(cv_results)\n", - "\n", - "test_metrics = cv_df.filter(regex='test_')\n", - "\n", - "mean_metrics = test_metrics.mean()\n", - "\n", - "print(\"Mean of cross-validation folds:\")\n", - "print(mean_metrics)\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Calculate the same performance metrics (negative log loss, ROC AUC, accuracy, and balanced accuracy) using the testing data `X_test` and `Y_test`. Display results as a dictionary.\n", - "\n", - "*Tip*: both, `roc_auc()` and `neg_log_loss()` will require prediction scores from `pipe.predict_proba()`. However, for `roc_auc()` you should only pass the last column `Y_pred_proba[:, 1]`. Use `Y_pred_proba` with `neg_log_loss()`." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Performance metrics on the testing data:\n", - "{'negative_log_loss': -0.378541864258079, 'roc_auc': np.float64(0.9016127012724575), 'accuracy': 0.8548469648889344, 'balanced_accuracy': np.float64(0.7752599723955951)}\n" - ] - } - ], - "source": [ - "from sklearn.metrics import log_loss, roc_auc_score, accuracy_score, balanced_accuracy_score\n", - "\n", - "# prediction probabilities\n", - "Y_pred_proba = pipe.predict_proba(X_test)\n", - "\n", - "# Calculate binary predictions\n", - "Y_pred = pipe.predict(X_test)\n", - "\n", - "# Step 3: Compute each metric\n", - "performance_metrics = {\n", - " 'negative_log_loss': -log_loss(Y_test, Y_pred_proba),\n", - " 'roc_auc': roc_auc_score(Y_test, Y_pred_proba[:, 1]),\n", - " 'accuracy': accuracy_score(Y_test, Y_pred),\n", - " 'balanced_accuracy': balanced_accuracy_score(Y_test, Y_pred)\n", - "}\n", - "\n", - "print(\"Performance metrics on the testing data:\")\n", - "print(performance_metrics)\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Target Recoding\n", - "\n", - "In the first code chunk of this document, we loaded the data and immediately recoded the target variable `income`. 
Why is this [convenient](https://scikit-learn.org/stable/modules/model_evaluation.html#binary-case)?\n", - "\n", - "The specific line was:\n", - "\n", - "```\n", - "adult_dt = (pd.read_csv('../05_src/data/adult/adult.data', header = None, names = columns)\n", - " .assign(income = lambda x: (x.income.str.strip() == '>50K')*1))\n", - "```" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "(Answer here.)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Binary Classification Compatibility:\n", - "Most machine learning models, including the RandomForestClassifier, expect numerical labels for classification tasks. By converting the income column into binary values (0 and 1), where 0 represents an income of <=50K and 1 represents >50K, you make the target variable suitable for classification.\n", - "Simplification:\n", - "Instead of dealing with string labels like \">50K\" and \"<50K\", the target variable is converted into numeric form directly within the data loading process. This simplifies data handling and makes it easier to apply statistical methods.\n", - "Improved Model Performance:\n", - "Some machine learning algorithms, such as gradient boosting and logistic regression, perform better with numerical labels for binary classification, as they can compute loss functions more efficiently. By recoding the target, these algorithms can better compute the decision boundary.\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Criteria\n", - "\n", - "The [rubric](./assignment_2_rubric_clean.xlsx) contains the criteria for assessment." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Submission Information\n", - "\n", - "🚨 **Please review our [Assignment Submission Guide](https://github.com/UofT-DSI/onboarding/blob/main/onboarding_documents/submissions.md)** 🚨 for detailed instructions on how to format, branch, and submit your work. Following these guidelines is crucial for your submissions to be evaluated correctly.\n", - "\n", - "### Submission Parameters:\n", - "* Submission Due Date: `HH:MM AM/PM - DD/MM/YYYY`\n", - "* The branch name for your repo should be: `assignment-2`\n", - "* What to submit for this assignment:\n", - " * This Jupyter Notebook (assignment_2.ipynb) should be populated and should be the only change in your pull request.\n", - "* What the pull request link should look like for this assignment: `https://github.com//production/pull/`\n", - " * Open a private window in your browser. Copy and paste the link to your pull request into the address bar. Make sure you can see your pull request properly. This helps the technical facilitator and learning support staff review your submission easily.\n", - "\n", - "Checklist:\n", - "- [ ] Created a branch with the correct naming convention.\n", - "- [ ] Ensured that the repository is public.\n", - "- [ ] Reviewed the PR description guidelines and adhered to them.\n", - "- [ ] Verify that the link is accessible in a private browser window.\n", - "\n", - "If you encounter any difficulties or have questions, please don't hesitate to reach out to our team via our Slack at `#cohort-3-help`. Our Technical Facilitators and Learning Support staff are here to help you navigate any challenges." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Reference\n", - "\n", - "Becker,Barry and Kohavi,Ronny. (1996). Adult. UCI Machine Learning Repository. https://doi.org/10.24432/C5XW20." 
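To make the recoding rationale above concrete, here is a minimal self-contained sketch of the same `.str.strip()` comparison idiom on hypothetical toy labels (the raw UCI file ships values with leading spaces, which is why the strip matters):

```python
import pandas as pd

# Toy labels padded the way the raw adult.data fields arrive.
income_raw = pd.Series([' <=50K', ' >50K', ' <=50K', ' >50K'])

# Strip the whitespace, test for '>50K', and multiply by 1 to get 0/1 integers.
income = (income_raw.str.strip() == '>50K') * 1
print(income.tolist())  # [0, 1, 0, 1]
```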
- ] - } - ], - "metadata": { - "kernelspec": { - "display_name": "Python 3", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.10.8" - } - }, - "nbformat": 4, - "nbformat_minor": 2 -}
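Finally, the reproducibility claim made in the random_state answer (in both copies of the notebook) is easy to check directly; a minimal sketch on toy data, not the Adult set:

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(20).reshape(10, 2)
y = np.arange(10)

# Two splits with the same seed agree element for element; dropping
# random_state would generally shuffle differently on each run.
X_tr_a, X_te_a, y_tr_a, y_te_a = train_test_split(X, y, test_size=0.3, random_state=42)
X_tr_b, X_te_b, y_tr_b, y_te_b = train_test_split(X, y, test_size=0.3, random_state=42)
print((X_tr_a == X_tr_b).all(), (y_te_a == y_te_b).all())  # True True
```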