diff --git a/blog/home.css b/blog.css similarity index 87% rename from blog/home.css rename to blog.css index 787e862..05d92d7 100644 --- a/blog/home.css +++ b/blog.css @@ -36,6 +36,14 @@ a:hover { flex-direction: column; } +.code-block { + font-family: monospace; + white-space: pre; + max-width: 80%; + overflow: auto; + text-align: left; +} + @media (max-width: 600px) { h1 { diff --git a/blog.html b/blog.html index cedc4c3..ea6364d 100644 --- a/blog.html +++ b/blog.html @@ -5,7 +5,7 @@ luisa's blog - + @@ -13,7 +13,7 @@
- + home

diff --git a/blog/a_violent_introduction_to_linear_regression.html b/blog/a_violent_introduction_to_linear_regression.html index 5d82630..d745f19 100644 --- a/blog/a_violent_introduction_to_linear_regression.html +++ b/blog/a_violent_introduction_to_linear_regression.html @@ -5,13 +5,14 @@ luisa's blog - + +

june 24, 2024

sick and tired of gentle introductions to *insert an ml concept*?
worry no more, for today, we will catapult you to the wolves of linear regression.

@@ -68,9 +69,11 @@

the derivation of the derivatives is relatively simple calculus, which actually took me way too long to do:

\[ MSE = \frac{1}{n} \sum^n_{i = 1} (y_i - \hat{y}_i)^2, \space e_i = y_i - (wx_i + b)\]

-

\[ \frac{\partial{e^2}}{\partial{b}} = \frac{\partial{(y_i - wx_i - b)}^2}{\partial{b}} = -1 \times 2(y_i - wx_i - b) = -2(y_i - wx_i - b) \]

+

\[ \frac{\partial{e^2}}{\partial{b}} = \frac{\partial{(y_i - wx_i - b)}^2}{\partial{b}} \]

+

\[ = -1 \times 2(y_i - wx_i - b) = -2(y_i - wx_i - b) \]

\[ \frac{\partial{MSE}}{\partial{b}} = \frac{1}{n} \sum^n_{i = 1} -2(y_i - wx_i - b)\]

-

\[ \frac{\partial{e^2}}{\partial{w}} = \frac{\partial{(y_i - wx_i - b)}^2}{\partial{w}} = -x_i \times 2(y_i - wx_i - b) = -2x_i(y_i - wx_i - b) \]

+

\[ \frac{\partial{e^2}}{\partial{w}} = \frac{\partial{(y_i - wx_i - b)}^2}{\partial{w}} \]

+

\[ = -x_i \times 2(y_i - wx_i - b) = -2x_i(y_i - wx_i - b)\]

\[ \frac{\partial{MSE}}{\partial{w}} = \frac{1}{n} \sum^n_{i = 1} -2x_i(y_i - wx_i - b)\]
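if you don't trust the algebra (i didn't), a tiny numpy sketch can check it numerically. everything below is made up for illustration: the four data points, the starting w and b, all of it. the analytic gradients should agree with a finite-difference estimate of the slope of the MSE:

```python
import numpy as np

# made-up toy problem: four points that lie exactly on y = 2x + 1,
# plus arbitrary starting parameters. nothing here is from the post.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0
w, b = 0.5, 0.0

def mse(w, b):
    return np.mean((y - (w * x + b)) ** 2)

err = y - (w * x + b)             # e_i = y_i - (w x_i + b)
grad_b = np.mean(-2.0 * err)      # (1/n) sum -2 (y_i - w x_i - b)
grad_w = np.mean(-2.0 * x * err)  # (1/n) sum -2 x_i (y_i - w x_i - b)

# finite-difference check: nudge each parameter and measure the slope of MSE
h = 1e-6
approx_b = (mse(w, b + h) - mse(w, b - h)) / (2 * h)
approx_w = (mse(w + h, b) - mse(w - h, b)) / (2 * h)
```

if the analytic and finite-difference numbers disagree, either the derivation or the code is wrong; the same trick works for checking gradients of any loss.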

wielding one gradient in each hand, we can now improve our parameters by moving each one a little bit in the direction opposite its gradient:

@@ -107,23 +110,21 @@

and... this is the actual algorithm for linear regression with stochastic gradient descent. it should be straightforward after all that buildup:

-
 
-                randomly initialize weights W and b in the correct shape
-                split the data into the train and test splits
-                for every epoch:
-                    for every data point in the training set: 
-                        update the weights W
-                        update the bias b
-                evaluate the model's predictions on the test set 
-            
+
+randomly initialize weights W and b in the correct shape
+split the data into the train and test splits
+for every epoch:
+    for every data point in the training set:
+        update the weights W
+        update the bias b
+evaluate the model's predictions on the test set
+

the actual implementation of the above is left as an exercise for the reader...

-

just kidding, you can find it here , written with just python and numpy.

+

just kidding, you can find it here, written with just python and numpy.
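if you would rather squint at a sketch first, here is one possible reading of the pseudocode in plain numpy. the data, the 80/20 split, the learning rate, and the epoch count are all invented for this example, and the linked implementation may well differ:

```python
import numpy as np

rng = np.random.default_rng(0)

# fabricated dataset: noisy points around an assumed "true" line y = 2x + 1
x = rng.uniform(-1.0, 1.0, size=200)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=200)

# split the data into the train and test splits (80/20, arbitrary)
x_train, x_test = x[:160], x[160:]
y_train, y_test = y[:160], y[160:]

# randomly initialize the weight w and bias b
w, b = rng.normal(), rng.normal()
lr = 0.05        # made-up learning rate
epochs = 20      # made-up epoch count

for _ in range(epochs):
    # for every data point in the training set, step against the gradient
    for xi, yi in zip(x_train, y_train):
        err = yi - (w * xi + b)
        w -= lr * (-2.0 * xi * err)   # dMSE/dw for one point
        b -= lr * (-2.0 * err)        # dMSE/db for one point

# evaluate the model's predictions on the test set
test_mse = np.mean((y_test - (w * x_test + b)) ** 2)
```

after twenty passes the parameters should have crawled close to the made-up true values of 2 and 1.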

-

sources: d2l book, chapter 3 -

- + \ No newline at end of file diff --git a/blog/blog.css b/blog/blogpost.css similarity index 62% rename from blog/blog.css rename to blog/blogpost.css index e614524..913bdfa 100644 --- a/blog/blog.css +++ b/blog/blogpost.css @@ -1,16 +1,43 @@ +h2 { + letter-spacing: 0.05em; +} + body { display: flex; flex-direction: column; align-items: center; - height: 100vh; - padding: 5vh; + height: 100vh; +} + +a { + color: var(--lightwhite); + justify-content: space-between; + text-decoration: none; } -h3 { - color: var(--lightwhite) +a:hover { + color: var(--blue); } -.centered { +.code-block { + font-family: monospace; + white-space: pre-wrap; + overflow: auto; + text-align: left; + align-items: center; + justify-content: center; +} + +.center-box { + width: 90%; + max-width: 600px; + padding: 4vw; + display: flex; + flex-direction: column; + text-align: left; +} + +.image { padding-top: 1vh; padding-bottom: 1vh; justify-content: center; @@ -21,17 +48,14 @@ h3 { .head { display: flex; flex-direction: column; - width: 60vw; align-items: center; text-align: center; } .body { justify-content: start; - width: 60vw; - text-align: justify; + text-align: left; font-family: 'Book Antiqua', 'Palatino Linotype', 'Georgia', Palatino, serif; - font-size: 17px; letter-spacing: 0.03em; line-height: 1.7; padding-bottom: 9vh; @@ -41,7 +65,6 @@ h3 { display: flex; justify-content: flex-start; align-items: center; - width: 90%; gap: 3vw; height: auto; flex-direction: row; @@ -49,7 +72,6 @@ h3 { .separator { background-color: var(--lightwhite); - width: 40vw; height: 1px; margin-top: 2vh; margin-bottom: 2vh; @@ -57,8 +79,6 @@ h3 { .whisper { - font-size: 14px; - width: 60vw; color: var(--grey1); text-align: center; align-items: center; @@ -66,8 +86,6 @@ h3 { } .glossary { - font-size: 15px; - width: 60vw; color: var(--grey1); text-align: right; align-items: center; @@ -78,7 +96,6 @@ h3 { #intro { font-style: italic; text-align: center; - font-size: 17px; color : var(--lightwhite) } @@ 
-93,18 +110,33 @@ a:hover { text-decoration: underline; } -@media (max-width: 600px) { +#makeitup { + width: 50%; +} - h1 { - font-size: 23px; - } +#gptstatistics { + width: 80% +} + +@media (max-width: 600px) { body { + line-height: 1.6; + display: flex; + flex-direction: column; + height: 100vh; font-size: 13px; + overflow-x: hidden; + text-align: center; + padding: 3vw; + } + + #makeitup { + width: 100%; } - a { - font-size: 13px; + #gptstatistics { + width: 100% } - + } \ No newline at end of file diff --git a/blog/statistics_for_evil.html b/blog/statistics_for_evil.html index 777e3f6..2a9bf0a 100644 --- a/blog/statistics_for_evil.html +++ b/blog/statistics_for_evil.html @@ -5,13 +5,14 @@ luisa's blog - + +
-

+

statistics for evil -

+

july 27, 2024

did you know that hong kong's higher life expectancy relative to india is directly related to their meat consumption?

- +

of course you didn't! someone on the internet made up that correlation.

actually! it is not even a correlation! mathematically, you can draw a line through any two points: i could be plotting the number of birkenstock sales in the united states against the occurrence of foot fungus per 1000 people, and those two dots would still make a line.
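this is cheap to verify with numpy: hand any two fabricated points to a degree-1 least-squares fit and the residuals come out exactly zero. the years and values below are made up on purpose:

```python
import numpy as np

# two completely fabricated data points
x = np.array([2010.0, 2020.0])          # pretend: year
y = np.array([3.0, 41.0])               # pretend: anything you like

slope, intercept = np.polyfit(x, y, 1)  # least-squares line, degree 1
pred = slope * x + intercept            # the line passes through both points
```

a "perfect trend", from two numbers invented ten seconds ago.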

@@ -66,17 +67,18 @@

people who make their bed every morning are 206.8% more likely to be millionaires

titles like "want to be a millionaire? make your bed" and "making your bed can make you a millionaire" spin correlation into causation, which is arguably the deadliest sin of statistics. even GPT knows it's fake:

-
- +
+
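the bed-making number is easy to manufacture, by the way. take any two fabricated series that merely drift upward over the same years, and pearson correlation will happily report a near-perfect relationship, no causation required:

```python
import numpy as np

years = np.arange(2000, 2010)
t = years - 2000

# both series are fabricated: they share nothing except an upward trend
bed_making_rate = 50.0 + 3.0 * t + np.array([0, 1, -1, 0, 1, -1, 0, 1, -1, 0])
millionaires    = 10.0 + 2.0 * t + np.array([1, -1, 0, 1, -1, 0, 1, -1, 0, 1])

# pearson correlation between the two unrelated series
r = np.corrcoef(bed_making_rate, millionaires)[0, 1]
```

r comes out above 0.9 here, purely because both series trend with time.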

thank you for listening to my rant about bad statistics on the internet. remember kids, don't believe everything you read, not just because statistics can be used to tell false stories, but because you can also just make things up:

-
- +
+
- + +
\ No newline at end of file diff --git a/blog/the_reading_itinerary.html b/blog/the_reading_itinerary.html index 0d6027f..918a6e4 100644 --- a/blog/the_reading_itinerary.html +++ b/blog/the_reading_itinerary.html @@ -5,13 +5,14 @@ luisa's blog - + +

august 16, 2024

"In the instant before it was over and pure nothing, he heard all the human voices in the world."

@@ -144,6 +145,7 @@

Purity by Jonathan Franzen

the source code for the map can be found here

+ \ No newline at end of file diff --git a/styles.css b/main.css similarity index 100% rename from styles.css rename to main.css diff --git a/index.html b/main.html similarity index 97% rename from index.html rename to main.html index d122621..83034b4 100644 --- a/index.html +++ b/main.html @@ -6,7 +6,7 @@ luisali - + diff --git a/resources/luisaresume.pdf b/resources/luisaresume.pdf deleted file mode 100644 index 5b6e2a8..0000000 Binary files a/resources/luisaresume.pdf and /dev/null differ