From e4a00d46c2ba199e5859b7daa46bc648f69bbd68 Mon Sep 17 00:00:00 2001 From: Nikhil Reddy Date: Thu, 14 Nov 2024 20:25:02 -0800 Subject: [PATCH] publish note 23 --- _quarto.yml | 2 +- docs/case_study_HCE/case_study_HCE.html | 6 + .../loss_transformations.html | 34 +- .../figure-pdf/cell-13-output-1.pdf | Bin 9193 -> 9193 bytes .../figure-pdf/cell-14-output-1.pdf | Bin 15000 -> 15000 bytes .../figure-pdf/cell-15-output-1.pdf | Bin 8394 -> 8394 bytes .../figure-pdf/cell-4-output-1.pdf | Bin 11041 -> 11041 bytes .../figure-pdf/cell-5-output-1.pdf | Bin 103470 -> 103470 bytes .../figure-pdf/cell-7-output-2.pdf | Bin 11239 -> 11239 bytes .../figure-pdf/cell-8-output-1.pdf | Bin 9752 -> 9752 bytes docs/cv_regularization/cv_reg.html | 20 +- docs/eda/eda.html | 162 +-- .../eda_files/figure-pdf/cell-62-output-1.pdf | Bin 16671 -> 16671 bytes .../eda_files/figure-pdf/cell-67-output-1.pdf | Bin 10991 -> 10991 bytes .../eda_files/figure-pdf/cell-68-output-1.pdf | Bin 12638 -> 12638 bytes .../eda_files/figure-pdf/cell-69-output-1.pdf | Bin 9239 -> 9239 bytes .../eda_files/figure-pdf/cell-71-output-1.pdf | Bin 19825 -> 19825 bytes .../eda_files/figure-pdf/cell-75-output-1.pdf | Bin 16799 -> 16799 bytes .../eda_files/figure-pdf/cell-76-output-1.pdf | Bin 21577 -> 21577 bytes .../eda_files/figure-pdf/cell-77-output-1.pdf | Bin 11851 -> 11851 bytes .../feature_engineering.html | 30 +- .../figure-pdf/cell-8-output-2.pdf | Bin 9247 -> 9247 bytes .../figure-pdf/cell-9-output-2.pdf | Bin 9545 -> 9545 bytes docs/gradient_descent/gradient_descent.html | 54 +- .../figure-pdf/cell-21-output-2.pdf | Bin 11767 -> 11767 bytes docs/index.html | 6 + .../inference_causality.html | 52 +- .../figure-pdf/cell-14-output-2.pdf | Bin 20716 -> 20716 bytes .../figure-pdf/cell-16-output-2.pdf | Bin 17984 -> 17984 bytes docs/intro_lec/introduction.html | 6 + docs/intro_to_modeling/intro_to_modeling.html | 22 +- .../figure-html/cell-2-output-1.png | Bin 86360 -> 86625 bytes .../figure-pdf/cell-2-output-1.pdf | Bin 9969 -> 9964 bytes .../figure-pdf/cell-3-output-1.pdf | Bin 15408 -> 15408 bytes .../figure-pdf/cell-7-output-1.pdf | Bin 14938 -> 14938 bytes .../figure-pdf/cell-9-output-1.pdf | Bin 16000 -> 16000 bytes .../logistic_regression_1/logistic_reg_1.html | 34 +- .../figure-html/cell-3-output-1.png | Bin 117595 -> 117822 bytes .../figure-html/cell-4-output-1.png | Bin 133861 -> 134505 bytes .../figure-html/cell-5-output-1.png | Bin 175646 -> 175912 bytes .../figure-html/cell-8-output-1.png | Bin 181141 -> 180892 bytes .../figure-pdf/cell-10-output-1.pdf | Bin 13791 -> 13791 bytes .../figure-pdf/cell-11-output-1.pdf | Bin 13937 -> 13937 bytes .../figure-pdf/cell-13-output-1.pdf | Bin 10478 -> 10478 bytes .../figure-pdf/cell-3-output-1.pdf | Bin 19583 -> 19592 bytes .../figure-pdf/cell-4-output-1.pdf | Bin 19631 -> 19608 bytes .../figure-pdf/cell-5-output-1.pdf | Bin 19960 -> 19970 bytes .../figure-pdf/cell-6-output-1.pdf | Bin 11733 -> 11733 bytes .../figure-pdf/cell-7-output-1.pdf | Bin 12423 -> 12423 bytes .../figure-pdf/cell-8-output-1.pdf | Bin 25457 -> 25413 bytes .../images/confusion_matrix.png | Bin 0 -> 151621 bytes .../images/confusion_matrix_sklearn.png | Bin 0 -> 15459 bytes .../images/decision_boundary.png | Bin 0 -> 127425 bytes .../images/decision_boundary_true.png | Bin 0 -> 88765 bytes .../logistic_regression_2/images/f1_score.png | Bin 0 -> 98492 bytes .../images/f1_score_plot.png | Bin 0 -> 103326 bytes .../images/linear_separability_1D.png | Bin 0 -> 112851 bytes .../images/linear_separability_2D.png | Bin 0 
-> 135131 bytes .../images/log_reg_summary.png | Bin 0 -> 56403 bytes .../images/mean_cross_entropy_loss_plot.png | Bin 0 -> 108925 bytes .../images/pr_curve_perfect.png | Bin 0 -> 142651 bytes .../images/pr_curve_thresholds.png | Bin 0 -> 167486 bytes .../images/precision-recall-thresh.png | Bin 0 -> 91133 bytes .../images/precision_recall_graphic.png | Bin 0 -> 528786 bytes .../images/reg_loss_finite_argmin.png | Bin 0 -> 113345 bytes .../images/roc_curve.png | Bin 0 -> 146967 bytes .../images/roc_curve_perfect.png | Bin 0 -> 132921 bytes .../roc_curve_worse_predictor_differing_T.png | Bin 0 -> 161601 bytes .../images/roc_curve_worst_predictor.png | Bin 0 -> 227376 bytes .../images/toy_linear_separable_dataset.png | Bin 0 -> 46173 bytes .../images/toy_linear_separable_dataset_2.png | Bin 0 -> 44101 bytes docs/logistic_regression_2/images/tpr_fpr.png | Bin 0 -> 24717 bytes .../images/unreg_loss_infinite_argmin.png | Bin 0 -> 105634 bytes .../images/varying_threshold.png | Bin 0 -> 87845 bytes .../logistic_regression_2/logistic_reg_2.html | 1181 +++++++++++++++++ docs/ols/ols.html | 12 +- docs/pandas_1/pandas_1.html | 100 +- docs/pandas_2/pandas_2.html | 148 ++- docs/pandas_3/pandas_3.html | 122 +- docs/probability_1/probability_1.html | 6 + docs/probability_2/probability_2.html | 6 + docs/regex/regex.html | 54 +- docs/sampling/sampling.html | 40 +- .../figure-html/cell-13-output-2.png | Bin 32921 -> 31066 bytes .../figure-html/cell-15-output-2.png | Bin 58114 -> 56665 bytes docs/search.json | 82 +- docs/sql_I/sql_I.html | 48 +- docs/sql_II/sql_II.html | 174 +-- docs/visualization_1/visualization_1.html | 50 +- .../figure-pdf/cell-10-output-2.pdf | Bin 14751 -> 14751 bytes .../figure-pdf/cell-11-output-1.pdf | Bin 11421 -> 11421 bytes .../figure-pdf/cell-12-output-1.pdf | Bin 12962 -> 12962 bytes .../figure-pdf/cell-13-output-1.pdf | Bin 15653 -> 15653 bytes .../figure-pdf/cell-14-output-1.pdf | Bin 13198 -> 13198 bytes .../figure-pdf/cell-15-output-1.pdf | Bin 13903 -> 13903 bytes .../figure-pdf/cell-17-output-2.pdf | Bin 16169 -> 16169 bytes .../figure-pdf/cell-18-output-2.pdf | Bin 11504 -> 11504 bytes .../figure-pdf/cell-19-output-2.pdf | Bin 13869 -> 13869 bytes .../figure-pdf/cell-20-output-2.pdf | Bin 14660 -> 14660 bytes .../figure-pdf/cell-21-output-1.pdf | Bin 11648 -> 11648 bytes .../figure-pdf/cell-22-output-1.pdf | Bin 11461 -> 11461 bytes .../figure-pdf/cell-23-output-1.pdf | Bin 12128 -> 12128 bytes .../figure-pdf/cell-3-output-1.pdf | Bin 11274 -> 11274 bytes .../figure-pdf/cell-4-output-1.pdf | Bin 11328 -> 11328 bytes .../figure-pdf/cell-5-output-1.pdf | Bin 11395 -> 11395 bytes .../figure-pdf/cell-7-output-1.pdf | Bin 23251 -> 23251 bytes .../figure-pdf/cell-8-output-1.pdf | Bin 11931 -> 11931 bytes .../figure-pdf/cell-9-output-1.pdf | Bin 13379 -> 13379 bytes docs/visualization_2/visualization_2.html | 56 +- .../figure-html/cell-18-output-1.png | Bin 98285 -> 98907 bytes .../figure-pdf/cell-10-output-1.pdf | Bin 10169 -> 10169 bytes .../figure-pdf/cell-11-output-1.pdf | Bin 5887 -> 5887 bytes .../figure-pdf/cell-12-output-1.pdf | Bin 11927 -> 11927 bytes .../figure-pdf/cell-13-output-1.pdf | Bin 14012 -> 14012 bytes .../figure-pdf/cell-14-output-1.pdf | Bin 13643 -> 13643 bytes .../figure-pdf/cell-15-output-1.pdf | Bin 13905 -> 13905 bytes .../figure-pdf/cell-16-output-1.pdf | Bin 17703 -> 17703 bytes .../figure-pdf/cell-17-output-1.pdf | Bin 15914 -> 15914 bytes .../figure-pdf/cell-18-output-1.pdf | Bin 17771 -> 17732 bytes .../figure-pdf/cell-19-output-1.pdf | Bin 
15715 -> 15715 bytes .../figure-pdf/cell-20-output-1.pdf | Bin 14911 -> 14911 bytes .../figure-pdf/cell-21-output-1.pdf | Bin 40952 -> 40952 bytes .../figure-pdf/cell-22-output-1.pdf | Bin 13919 -> 13919 bytes .../figure-pdf/cell-23-output-1.pdf | Bin 14978 -> 14978 bytes .../figure-pdf/cell-24-output-1.pdf | Bin 16210 -> 16210 bytes .../figure-pdf/cell-25-output-2.pdf | Bin 16563 -> 16563 bytes .../figure-pdf/cell-26-output-1.pdf | Bin 14791 -> 14791 bytes .../figure-pdf/cell-3-output-1.pdf | Bin 12068 -> 12068 bytes .../figure-pdf/cell-4-output-1.pdf | Bin 9274 -> 9274 bytes .../figure-pdf/cell-5-output-1.pdf | Bin 10244 -> 10244 bytes .../figure-pdf/cell-6-output-1.pdf | Bin 10243 -> 10243 bytes .../figure-pdf/cell-7-output-1.pdf | Bin 10130 -> 10130 bytes .../figure-pdf/cell-8-output-1.pdf | Bin 12591 -> 12591 bytes .../figure-pdf/cell-9-output-1.pdf | Bin 11286 -> 11286 bytes index.tex | 905 +++++++++++-- logistic_regression_2/images/f1_score.png | Bin 0 -> 98492 bytes .../images/f1_score_plot.png | Bin 0 -> 103326 bytes logistic_regression_2/logistic_reg_2.qmd | 111 +- 138 files changed, 2842 insertions(+), 681 deletions(-) create mode 100644 docs/logistic_regression_2/images/confusion_matrix.png create mode 100644 docs/logistic_regression_2/images/confusion_matrix_sklearn.png create mode 100644 docs/logistic_regression_2/images/decision_boundary.png create mode 100644 docs/logistic_regression_2/images/decision_boundary_true.png create mode 100644 docs/logistic_regression_2/images/f1_score.png create mode 100644 docs/logistic_regression_2/images/f1_score_plot.png create mode 100644 docs/logistic_regression_2/images/linear_separability_1D.png create mode 100644 docs/logistic_regression_2/images/linear_separability_2D.png create mode 100644 docs/logistic_regression_2/images/log_reg_summary.png create mode 100644 docs/logistic_regression_2/images/mean_cross_entropy_loss_plot.png create mode 100644 docs/logistic_regression_2/images/pr_curve_perfect.png create mode 100644 docs/logistic_regression_2/images/pr_curve_thresholds.png create mode 100644 docs/logistic_regression_2/images/precision-recall-thresh.png create mode 100644 docs/logistic_regression_2/images/precision_recall_graphic.png create mode 100644 docs/logistic_regression_2/images/reg_loss_finite_argmin.png create mode 100644 docs/logistic_regression_2/images/roc_curve.png create mode 100644 docs/logistic_regression_2/images/roc_curve_perfect.png create mode 100644 docs/logistic_regression_2/images/roc_curve_worse_predictor_differing_T.png create mode 100644 docs/logistic_regression_2/images/roc_curve_worst_predictor.png create mode 100644 docs/logistic_regression_2/images/toy_linear_separable_dataset.png create mode 100644 docs/logistic_regression_2/images/toy_linear_separable_dataset_2.png create mode 100644 docs/logistic_regression_2/images/tpr_fpr.png create mode 100644 docs/logistic_regression_2/images/unreg_loss_infinite_argmin.png create mode 100644 docs/logistic_regression_2/images/varying_threshold.png create mode 100644 docs/logistic_regression_2/logistic_reg_2.html create mode 100644 logistic_regression_2/images/f1_score.png create mode 100644 logistic_regression_2/images/f1_score_plot.png diff --git a/_quarto.yml b/_quarto.yml index 4b86c5b8..d8d91f52 100644 --- a/_quarto.yml +++ b/_quarto.yml @@ -39,7 +39,7 @@ book: - sql_I/sql_I.qmd - sql_II/sql_II.qmd - logistic_regression_1/logistic_reg_1.qmd - # - logistic_regression_2/logistic_reg_2.qmd + - logistic_regression_2/logistic_reg_2.qmd # - pca_1/pca_1.qmd # - 
pca_2/pca_2.qmd # - clustering/clustering.qmd diff --git a/docs/case_study_HCE/case_study_HCE.html b/docs/case_study_HCE/case_study_HCE.html index 36744ca6..5d8434bb 100644 --- a/docs/case_study_HCE/case_study_HCE.html +++ b/docs/case_study_HCE/case_study_HCE.html @@ -283,6 +283,12 @@ 22  Logistic Regression I + + diff --git a/docs/constant_model_loss_transformations/loss_transformations.html b/docs/constant_model_loss_transformations/loss_transformations.html index 2f616f10..373f8491 100644 --- a/docs/constant_model_loss_transformations/loss_transformations.html +++ b/docs/constant_model_loss_transformations/loss_transformations.html @@ -312,6 +312,12 @@ 22  Logistic Regression I + + @@ -519,7 +525,7 @@

+
Code
import numpy as np
@@ -534,7 +540,7 @@ 

data_linear = dugongs[["Length", "Age"]]

-
+
Code
# Big font helper
@@ -556,7 +562,7 @@ 

plt.style.use("default") # Revert style to default mpl

-
+
Code
# Constant Model + MSE
@@ -589,7 +595,7 @@ 

+
Code
# SLR + MSE
@@ -652,7 +658,7 @@ 

+
Code
# Predictions
@@ -664,7 +670,7 @@ 

yhats_linear = [theta_0_hat + theta_1_hat * x for x in xs]

-
+
Code
# Constant Model Rug Plot
@@ -694,7 +700,7 @@ 

+
Code
# SLR model scatter plot 
@@ -808,7 +814,7 @@ 

11.4 Comparing Loss Functions

We’ve now tried our hand at fitting a model under both MSE and MAE cost functions. How do the two results compare?

Let’s consider a dataset where each entry represents the number of drinks sold at a bubble tea store each day. We’ll fit a constant model to predict the number of drinks that will be sold tomorrow.

-
+
drinks = np.array([20, 21, 22, 29, 33])
 drinks
@@ -816,7 +822,7 @@

+
np.mean(drinks), np.median(drinks)
(np.float64(25.0), np.float64(22.0))
@@ -826,7 +832,7 @@

Notice that the MSE above is a smooth function – it is differentiable at all points, making it easy to minimize using numerical methods. The MAE, in contrast, is not differentiable at each of its “kinks.” We’ll explore how the smoothness of the cost function can impact our ability to apply numerical optimization in a few weeks.
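To see this concretely, here is a small sketch (it re-declares the drinks array from above so it runs on its own) that evaluates both costs over a grid of \(\theta\) values; the MSE curve bends smoothly, while the MAE curve is piecewise linear with kinks at the data values:

Code
import numpy as np
import matplotlib.pyplot as plt

drinks = np.array([20, 21, 22, 29, 33])  # repeated from the cell above
thetas = np.linspace(15, 40, 501)
mse = np.array([np.mean((drinks - t) ** 2) for t in thetas])
mae = np.array([np.mean(np.abs(drinks - t)) for t in thetas])

# The slope of the MSE changes continuously; the slope of the MAE jumps
# abruptly at each observed value (e.g., at theta = 22), producing "kinks".
plt.plot(thetas, mse, label="MSE")
plt.plot(thetas, mae, label="MAE")
plt.xlabel(r"$\theta$")
plt.ylabel("Cost")
plt.legend();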

How do outliers affect each cost function? Imagine we add an outlying value of 1033 to the dataset. The mean of the data increases substantially, while the median is nearly unaffected.

-
+
drinks_with_outlier = np.append(drinks, 1033)
 display(drinks_with_outlier)
 np.mean(drinks_with_outlier), np.median(drinks_with_outlier)
@@ -840,7 +846,7 @@

This means that under the MSE, the optimal model parameter \(\hat{\theta}\) is strongly affected by the presence of outliers. Under the MAE, the optimal parameter is not as influenced by outlying data. We can generalize this by saying that the MSE is sensitive to outliers, while the MAE is robust to outliers.

Let’s try another experiment. This time, we’ll add an additional, non-outlying datapoint to the data.

-
+
drinks_with_additional_observation = np.append(drinks, 35)
 drinks_with_additional_observation
@@ -912,7 +918,7 @@

+
Code
# `corrcoef` computes the correlation coefficient between two variables
@@ -944,7 +950,7 @@ 

and "Length". What is making the raw data deviate from a linear relationship? Notice that the data points with "Length" greater than 2.6 have disproportionately high values of "Age" relative to the rest of the data. If we could manipulate these data points to have lower "Age" values, we’d “shift” these points downwards and reduce the curvature in the data. Applying a logarithmic transformation to \(y_i\) (that is, taking \(\log(\) "Age" \()\) ) would achieve just that.

An important word on \(\log\): in Data 100 (and most upper-division STEM courses), \(\log\) denotes the natural logarithm with base \(e\). The base-10 logarithm, where relevant, is indicated by \(\log_{10}\).

-
+
Code
z = np.log(y)
@@ -979,7 +985,7 @@ 

\[\log{(y)} = \theta_0 + \theta_1 x\] \[y = e^{\theta_0 + \theta_1 x}\] \[y = (e^{\theta_0})e^{\theta_1 x}\] \[y = C e^{k x}\]

Here, \(C = e^{\theta_0}\) and \(k = \theta_1\) are constants.

\(y\) is an exponential function of \(x\). Applying an exponential fit to the untransformed variables corroborates this finding.
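The fitted intercept and slope on the transformed data convert directly into \(C\) and \(k\). A minimal sketch, assuming theta_0_hat and theta_1_hat here denote the parameters fit to \(z = \log(y)\) (hypothetical names, not the earlier untransformed fit):

Code
import numpy as np

C = np.exp(theta_0_hat)  # C = e^(theta_0); theta_0_hat fit on log(y) -- assumed name
k = theta_1_hat          # k = theta_1
ys_exponential = C * np.exp(k * np.asarray(xs))  # predictions on the original y scale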

-
+
Code
plt.figure(dpi=120, figsize=(4, 3))
diff --git a/docs/constant_model_loss_transformations/loss_transformations_files/figure-pdf/cell-13-output-1.pdf b/docs/constant_model_loss_transformations/loss_transformations_files/figure-pdf/cell-13-output-1.pdf
index 48b0a3bfa6ad45fcb9bb6de9311ba7d697ec812e..deb32752f8d27a10bbdbaf2f8465755159af2510 100644
Binary files differ

diff --git a/docs/constant_model_loss_transformations/loss_transformations_files/figure-pdf/cell-15-output-1.pdf b/docs/constant_model_loss_transformations/loss_transformations_files/figure-pdf/cell-15-output-1.pdf
index 346c9c54a67f3a5b3e0a31248826cfea7a773dfa..45c9ade46031d343d9ad46d14b3f9cc3f5c0c50d 100644
Binary files differ

diff --git a/docs/constant_model_loss_transformations/loss_transformations_files/figure-pdf/cell-4-output-1.pdf b/docs/constant_model_loss_transformations/loss_transformations_files/figure-pdf/cell-4-output-1.pdf
index 37778d7d9d38c05ccb704725910abeb1cefa0d9b..6178405c560ba8d20012b9463250ef7f24570c5d 100644
Binary files differ

diff --git a/docs/constant_model_loss_transformations/loss_transformations_files/figure-pdf/cell-7-output-2.pdf b/docs/constant_model_loss_transformations/loss_transformations_files/figure-pdf/cell-7-output-2.pdf
index 752c0a29c79bb89834150ae5d9542342de2a7262..0edcda59d9d242db99b8c888ebfe859d625367c8 100644
Binary files differ

diff --git a/docs/cv_regularization/cv_reg.html b/docs/cv_regularization/cv_reg.html
@@ -418,7 +424,7 @@


In sklearn, the train_test_split function of the model_selection module allows us to automatically generate train-test splits.

We will work with the vehicles dataset from previous lectures. As before, we will attempt to predict the mpg of a vehicle from transformations of its hp. In the cell below, we allocate 20% of the full dataset to testing, and the remaining 80% to training.

-
+
Code
import pandas as pd
@@ -437,7 +443,7 @@ 

Y = vehicles["mpg"]

-
+
from sklearn.model_selection import train_test_split
 
 # `test_size` specifies the proportion of the full dataset that should be allocated to testing
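# A plausible completion of this cell (a sketch; the original call is elided by the
# hunk boundary). `X` and `Y` are the features and response defined above.
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2)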
@@ -459,7 +465,7 @@ 

After performing our train-test split, we fit a model to the training set and assess its performance on the test set.

-
+
import sklearn.linear_model as lm
 from sklearn.metrics import mean_squared_error
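# A sketch of the elided fit-and-evaluate step (names assumed from the cells above):
model = lm.LinearRegression()
model.fit(X_train, Y_train)
train_error = mean_squared_error(Y_train, model.predict(X_train))
test_error = mean_squared_error(Y_test, model.predict(X_test))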
 
@@ -639,7 +645,7 @@ 

\(\lambda\) is the regularization penalty hyperparameter; it must be determined prior to training the model, so we find the best value via cross-validation (a sketch of such a search appears after the Lasso cell below).

The process of finding the optimal \(\hat{\theta}\) to minimize our new objective function is called L1 regularization. It is also sometimes known by the acronym “LASSO”, which stands for “Least Absolute Shrinkage and Selection Operator.”

Unlike ordinary least squares, which can be solved via the closed-form solution \(\hat{\theta}_{OLS} = (\mathbb{X}^{\top}\mathbb{X})^{-1}\mathbb{X}^{\top}\mathbb{Y}\), there is no closed-form solution for the optimal parameter vector under L1 regularization. Instead, we use the Lasso model class of sklearn.

-
+
import sklearn.linear_model as lm
 
 # The alpha parameter represents our lambda term
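# A plausible completion of this cell (a sketch; the fitting call is elided by the
# hunk boundary, and the alpha value here is illustrative):
lasso_model = lm.Lasso(alpha=2)
lasso_model.fit(X_train, Y_train)
lasso_model.coef_

In practice, alpha itself is chosen by cross-validation, as noted above. One way to automate that search (a sketch, not code from the original note) is sklearn's LassoCV, which fits the model across a grid of candidate penalties and keeps the one with the lowest average validation error:

Code
from sklearn.linear_model import LassoCV

# Candidate alpha (lambda) values; this grid is illustrative
lasso_cv = LassoCV(alphas=[0.001, 0.01, 0.1, 1, 10], cv=5)
lasso_cv.fit(X_train, Y_train)
lasso_cv.alpha_  # the penalty chosen by cross-validation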
@@ -657,7 +663,7 @@ 

16.2.3 Scaling Features for Regularization

The regularization procedure we just performed had one subtle issue. To see what it is, let’s take a look at the design matrix for our lasso_model.

-
+
Code
X_train.head()
@@ -720,7 +726,7 @@

\(\hat{y}\) because it is so much greater than the values of the other features. For hp to have much of an impact at all on the prediction, it must be scaled by a large model parameter.

By inspecting the fitted parameters of our model, we see that this is the case – the parameter for hp is much larger in magnitude than the parameter for hp^4.

-
+
pd.DataFrame({"Feature":X_train.columns, "Parameter":lasso_model.coef_})
@@ -784,7 +790,7 @@

\[\hat\theta_{\text{ridge}} = (\mathbb{X}^{\top}\mathbb{X} + n\lambda I)^{-1}\mathbb{X}^{\top}\mathbb{Y}\]

This solution exists even if \(\mathbb{X}\) is not full column rank. This is a major reason why L2 regularization is often used – it can produce a solution even when there is collinearity in the features. We will discuss the concept of collinearity in a future lecture, but we will not derive this result in Data 100, as it involves a fair bit of matrix calculus.
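As a quick check on the formula (a sketch with illustrative names, not code from the note), the ridge estimate can be computed directly with numpy; the \(n\lambda I\) term is what keeps the matrix invertible:

Code
import numpy as np

def ridge_closed_form(X, Y, lam):
    # theta_ridge = (X^T X + n * lambda * I)^{-1} X^T Y
    n, p = X.shape
    return np.linalg.solve(X.T @ X + n * lam * np.eye(p), X.T @ Y)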

In sklearn, we perform L2 regularization using the Ridge class. It minimizes the L2 objective function (by default with a direct solver; iterative, gradient-based solvers such as "sag" are optional). Notice that we scale the data before regularizing.

-
+
ridge_model = lm.Ridge(alpha=1) # alpha represents the hyperparameter lambda
 ridge_model.fit(X_train, Y_train)
 
diff --git a/docs/eda/eda.html b/docs/eda/eda.html
index 9b4a09df..20a201c5 100644
--- a/docs/eda/eda.html
+++ b/docs/eda/eda.html
@@ -315,6 +315,12 @@
   
@@ -403,7 +409,7 @@

Data Cleaning and EDA

-
+
Code
import numpy as np
@@ -468,7 +474,7 @@ 

5.1.1.1 CSV

CSVs, which stand for Comma-Separated Values, are a common tabular data format. In the past two pandas lectures, we briefly touched on the idea of file format: the way data is encoded in a file for storage. Specifically, our elections and babynames datasets were stored and loaded as CSVs:

-
+
pd.read_csv("data/elections.csv").head(5)
@@ -539,7 +545,7 @@