diff --git a/lessons/big-o.md b/lessons/big-o.md
index def72bc..b810887 100644
--- a/lessons/big-o.md
+++ b/lessons/big-o.md
@@ -87,8 +87,8 @@ Notice the purple line we added. Now as we add more terms to the array, it takes
 This sort of analysis is useful for taking a high level view. It's a useful tool for deciding if your designed implementation is going to match the performance profile that you need.
-A good example would be if we were designing a comment system for a site and it had a sorting and filtering ability. If this is for a school and there would only ever be a few comments at a time, we probably don't need to do too much Big O analysis because it's such a small set of people that a computer can overcome just about any computational inefficiency we have. In this case I'd value human time over computer time and just go with the simplest solution and not worry about the Big O unless performance because a problem later.
+A good example would be if we were designing a comment system for a site and it had a sorting and filtering ability. If this is for a school and there would only ever be a few comments at a time, we probably don't need to do too much Big O analysis because it's such a small set of people that a computer can overcome just about any computational inefficiency we have. In this case I'd value human time over computer time and just go with the simplest solution and not worry about the Big O unless performance becomes a problem later.
 Okay, now, if we're designing a comment system but it's for Reddit.com, our needs change _dramatically_. We're now talking about pipelines of millions of users making billions of comments. Our performance targets need to change to address such volume. An O(n²) algorithm would crash the site.
-This is absolutely essential to know about Big O analysis. It's useless without context. If you're asked is in a situation if a O(n) or a O(n²) your answer should be "it depends" or "I need more context". If the O(n) algorithm is extremely difficult to comprehend and the O(n²) algorithm is dramatically easier to understand and performance is a total non-issue, then the O(n²) is a much better choice. It's similar to asking a carpenter if they want a hammer or a sledgehammer without any context. Their answer will be "what do you need me to do?".
+This is absolutely essential to know about Big O analysis. It's useless without context. If you're asked if O(n) or O(n²) is better in a certain situation, your answer should be "it depends" or "I need more context". If the O(n) algorithm is extremely difficult to comprehend and the O(n²) algorithm is dramatically easier to understand and performance is a total non-issue, then the O(n²) is a much better choice. It's similar to asking a carpenter if they want a hammer or a sledgehammer without any context. Their answer will be "what do you need me to do?".
diff --git a/lessons/bubble-sort.md b/lessons/bubble-sort.md
index da51dd8..9bbaffc 100644
--- a/lessons/bubble-sort.md
+++ b/lessons/bubble-sort.md
@@ -52,7 +52,7 @@ End of the array, did we swap anything? No. List is sorted. That's it!
-So, one more fun trick. Eventually, on your first runthrough, you will find the biggest number and will continue swapping with it until it reaches the last element in the array (like we did with 5 in our example.) This is why it's called bubble sort: the biggest elements bubble to the last spots. Thinking about this for a second, that means at the end of the iteration we're guaranteed to have the biggest element at the last spot. At the end of our first iteration above, 5 is the last element in the array. That means the last element in the array is definitely at its correct spot. That part of the array is sorted. So, on our second run through of the array, we technically don't need to ask `Are 4 and 5 out of order? No.` because know ahead of time that 5 is going to be bigger no matter what. So we can check one less element each iteration because we know at the end of each iteration we have one more guaranteed element to be in its correct. At the end of the second iteration we'll have 4 in its correct place, then 3, then 2, etc.
+So, one more fun trick. Eventually, on your first runthrough, you will find the biggest number and will continue swapping with it until it reaches the last element in the array (like we did with 5 in our example.) This is why it's called bubble sort: the biggest elements bubble to the last spots. Thinking about this for a second, that means at the end of the iteration we're guaranteed to have the biggest element at the last spot. At the end of our first iteration above, 5 is the last element in the array. That means the last element in the array is definitely at its correct spot. That part of the array is sorted. So, on our second run through of the array, we technically don't need to ask `Are 4 and 5 out of order? No.` because we know ahead of time that 5 is going to be bigger no matter what. So we can check one less element each iteration because we know at the end of each iteration we have one more guaranteed element to be in its correct position. At the end of the second iteration we'll have 4 in its correct place, then 3, then 2, etc.
 It's a little optimization (in our Big O it'd affect the coefficient so it wouldn't be reflected in the Big O) but it does make it a bit faster. It makes the sorting go faster as the algorithm progresses.
@@ -64,7 +64,7 @@ What is the absolute best, ideal case for this algorithm? A sorted list. It'd ru
 What is the absolute worst case scenario? It'd be a backwards sorted list. Then it'd have to run all comparisons and always swap. Assuming we did the optimized version above, that'd be 4 comparisons and swaps on the first go-around, then 3, then 2, then 1, totalling 10 comparisons and swaps. This will grow quadratically as the list gets bigger. This would be O(n²).
-Okay, so average case. You can think of average case as "what happens if we through a randomly sorted list at this". For our bubble sort, it'd be like the one we did above: some numbers in order, some not. What happens if we through 10 at it versus 1000. The amount of comparisons and swaps would grow exponentially. As such, we'd say it's O(n²) as well.
+Okay, so average case. You can think of average case as "what happens if we throw a randomly sorted list at this". For our bubble sort, it'd be like the one we did above: some numbers in order, some not. What happens if we throw 10 at it versus 1000? The number of comparisons and swaps would grow quadratically. As such, we'd say it's O(n²) as well.
 What about the spatial complexity? In our case, we're operating on the array itself and not creating anything else in memory, so the spatial complexity of this is O(1) since it'd never grow. Because we are operating on the array itself, we'd say this sort is **destructive**. What if you needed to keep the original array in addition to the sorted one? You couldn't use this sorting method or you'd have to pre-emptively make a copy. Other sorting algorithms are non-destructive.
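+
+If it helps to see the idea in code, here's a rough JavaScript sketch of the optimized version described above — the `bubbleSort` name and the exact loop shape are just one way to write it, not the only way:
+
+```javascript
+function bubbleSort(list) {
+  // everything past lastUnsorted is already in its final spot
+  let lastUnsorted = list.length - 1;
+  let swapped = true;
+  while (swapped) {
+    swapped = false;
+    for (let i = 0; i < lastUnsorted; i++) {
+      if (list[i] > list[i + 1]) {
+        // swap the pair so the bigger element keeps bubbling toward the end
+        const temp = list[i];
+        list[i] = list[i + 1];
+        list[i + 1] = temp;
+        swapped = true;
+      }
+    }
+    // the biggest remaining element just bubbled into place,
+    // so next pass we can check one less element
+    lastUnsorted--;
+  }
+  return list;
+}
+
+bubbleSort([3, 5, 1, 2, 4]); // [1, 2, 3, 4, 5]
+```
+
+Notice it sorts the array in place — that's the destructive, O(1) spatial complexity behavior we just talked about.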
diff --git a/lessons/insertion-sort.md b/lessons/insertion-sort.md
index 6875320..205d46d 100644
--- a/lessons/insertion-sort.md
+++ b/lessons/insertion-sort.md
@@ -12,9 +12,9 @@ description: "A more useful tool, insertion sort is a tool that developers will
 Let's talk about a sort that you will use occasionally in certain contexts, insertion sort.
-With insertion sort, you treat the first part of your list as sorted and the second part of your list as unsorted. Our algorithm will start by saying everything the 1 index (so just index 0, the first element) is sorted and everything after unsorted. By definition a list of one is already sorted. From there, we start with the next element in the list (in this case, the 1 index, the second element) and loop backwards over our sorted list, asking "is the element that I'm looking to insert larger than what's here? If not, you work your way to the back of the array. If you land at the first element of the sorted part of the list, what you have is smaller than everything else and you put it at the start. You then repeat this until you've done it over the whole list!
+With insertion sort, you treat the first part of your list as sorted and the second part of your list as unsorted. Our algorithm will start by saying everything before the 1 index (so just index 0, the first element) is sorted and everything after is unsorted. By definition a list of one is already sorted. From there, we start with the next element in the list (in this case, the 1 index, the second element) and loop backwards over our sorted list, asking "is the element that I'm looking to insert larger than what's here?" If not, you work your way to the back of the array. If you land at the first element of the sorted part of the list, what you have is smaller than everything else and you put it at the start. You then repeat this until you've done it over the whole list!
-The mechanism by which we'll do this is that we'll keep moving bigger elements to the right by swapping items in the array as we move across the element. When we come to the place where we're going to insert, we'll stop doing those swaps/
+The mechanism by which we'll do this is that we'll keep moving bigger elements to the right by swapping items in the array as we move across the element. When we come to the place where we're going to insert, we'll stop doing those swaps.
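+
+In code, that swap-as-you-go approach comes out looking something like this — just a sketch (the `insertionSort` name is made up here), your version may differ:
+
+```javascript
+function insertionSort(list) {
+  // everything before index i is treated as the sorted part of the list
+  for (let i = 1; i < list.length; i++) {
+    // walk backwards through the sorted part, swapping bigger elements
+    // to the right until the new element lands where it belongs
+    for (let j = i; j > 0 && list[j - 1] > list[j]; j--) {
+      const temp = list[j - 1];
+      list[j - 1] = list[j];
+      list[j] = temp;
+    }
+  }
+  return list;
+}
+
+insertionSort([4, 1, 3, 2]); // [1, 2, 3, 4]
+```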
 Let's do an example.
diff --git a/lessons/merge-sort.md b/lessons/merge-sort.md
index 007cc6d..f7c8975 100644
--- a/lessons/merge-sort.md
+++ b/lessons/merge-sort.md
@@ -12,7 +12,7 @@ description: "One of the most versatile and useful sorts, merge sorts has wide a
 We're going to take our newfound super power of recursion and apply it to sorting a list.
-So what if we have a big list that's unsorted? Well, what if we break it into two smaller lists and sort those? Okay, great but now we just have the same problem just smaller. Cool, let's break that those each into smaller lists and try again.
+So what if we have a big list that's unsorted? Well, what if we break it into two smaller lists and sort those? Okay, great, but now we just have the same problem, just smaller. Cool, let's break those each into smaller lists and try again.
 Eventually we're going to end up with lists of length 1 or 0. By definition (as I keep saying lol) a list of 1 or 0 is already sorted. Done! Sounds like a base case, doesn't it?
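+
+Putting that idea together, a sketch of it in JavaScript might look like the following — the `mergeSort` name and the merging details are just one possible way to write it:
+
+```javascript
+function mergeSort(list) {
+  // base case: a list of 0 or 1 items is already sorted
+  if (list.length <= 1) return list;
+
+  // break the list into two smaller lists and sort those
+  const middle = Math.floor(list.length / 2);
+  const left = mergeSort(list.slice(0, middle));
+  const right = mergeSort(list.slice(middle));
+
+  // merge the two sorted halves back together
+  const merged = [];
+  let l = 0;
+  let r = 0;
+  while (l < left.length && r < right.length) {
+    if (left[l] <= right[r]) {
+      merged.push(left[l++]);
+    } else {
+      merged.push(right[r++]);
+    }
+  }
+  return merged.concat(left.slice(l), right.slice(r));
+}
+
+mergeSort([5, 2, 4, 1, 3]); // [1, 2, 3, 4, 5]
+```
+
+Unlike our bubble sort, this sketch is non-destructive: it builds new arrays instead of modifying the one you hand it, which also means it uses more memory.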
diff --git a/lessons/recursion.md b/lessons/recursion.md
index cedc6db..d8ded14 100644
--- a/lessons/recursion.md
+++ b/lessons/recursion.md
@@ -27,7 +27,7 @@ function countTo(max, current, list) {
 const counts = countTo(5, 1, []);
 ```
-Notice that we call `countTo` inside of `countTo`. This is recursion. However this one isn't super useful as I could have easily use a for loop. However I wanted to show you a very simple case of recursion. Let's then talk about a bit more about when to use a recursive function.
+Notice that we call `countTo` inside of `countTo`. This is recursion. However this one isn't super useful as I could have easily used a for loop. However I wanted to show you a very simple case of recursion. Let's talk about when to use a recursive function.
 ## When it's useful
@@ -63,7 +63,7 @@ A little mind-melting, right? Let's break it down really quick.
 ![diagram of calls in the fibonacci sequences on fibonacci(5). each one eventually breaks down into calling a fibonacci(2) or a fibonacci(1) call which returns 1](./images/fibonacci.png)
-Let's break this down call-by-call. Since recursive functions will break down into further recursive calls, all of them eventually eventually end in a base case. In the `fibonacci(5)` call, you can see eventually all of them end up in n either equalling 2 or 1, our base case. So to get our final answer of 5 (which is the correct answer) we end up adding 1 to itself 5 times to get 5. That seems silly, right?
+Let's break this down call-by-call. Since recursive functions will break down into further recursive calls, all of them eventually end in a base case. In the `fibonacci(5)` call, you can see eventually all of them end up in n either equalling 2 or 1, our base case. So to get our final answer of 5 (which is the correct answer) we end up adding 1 to itself 5 times to get 5. That seems silly, right?
 So what if we call `fibonacci(30)`? The answer is 832040. You guessed it, we add 1 to itself, 832040 times. What if we call `fibonacci(200)`? Chances are you'll get a stack overflow.
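+
+If you want to see that for yourself, here's a small sketch that counts how many times the base case runs — `countedFibonacci` and `baseCaseHits` are made-up names for this illustration, assuming a fibonacci whose base case returns 1 for n of 1 or 2, like the one above:
+
+```javascript
+let baseCaseHits = 0;
+
+function countedFibonacci(n) {
+  if (n <= 2) {
+    // base case: every final answer is built by adding up these 1s
+    baseCaseHits++;
+    return 1;
+  }
+  return countedFibonacci(n - 1) + countedFibonacci(n - 2);
+}
+
+countedFibonacci(5); // 5
+console.log(baseCaseHits); // 5 — five 1s added together
+
+baseCaseHits = 0;
+countedFibonacci(30); // 832040
+console.log(baseCaseHits); // 832040 ones added together
+```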
diff --git a/lessons/spatial-complexity.md b/lessons/spatial-complexity.md
index 750c111..27c8327 100644
--- a/lessons/spatial-complexity.md
+++ b/lessons/spatial-complexity.md
@@ -10,7 +10,7 @@ So far we've just talked about _computational complexity_. In general if someone
 ## Linear
-Let's say we have an algorithm that for every item in the array, it needs to create another array in the process of sorting it. So for an array of length 10, our algorithm will create 10 arrays. For an array of 100, it'd create 100 extra arrays (or something close, remember these are broad strokes, not exact.) This would be O(n) in terms of its spatial complexity. We'll do some sorts that do this.
+Let's say we have an algorithm that for every item in the array, it needs to create another array in the process of sorting it. So for an array of length 10, our algorithm will create 10 arrays. For an array of 100, it'd create 100 extra arrays (or something close, remember these are broad strokes, not exact). This would be O(n) in terms of its spatial complexity. We'll do some sorts that do this.
 ## Logarithmic
@@ -34,7 +34,7 @@ I will say O(n²) in spatial complexity is pretty rare and a big red flag.
 ## Okay, sure, but why
-As before, this is just a tool to make sure your design fits your needs. One isn't necessarily better than the other. And very frequently you need to make the trade off of computational complexity vs spatial. Some algoriths eat a lot of memory but go fast and there are lots that eat zero memory but go slow. It just depends on what your needs are.
+As before, this is just a tool to make sure your design fits your needs. One isn't necessarily better than the other. And very frequently you need to make the trade off of computational complexity vs spatial. Some algorithms eat a lot of memory but go fast and there are lots that eat zero memory but go slow. It just depends on what your needs are.
 Here's an example: let's say you're writing code that's going to be run on a PlayStation 3 and it needs to sort 1000 TV shows according to what show you think the customer is going to want to see. PS3s have a decent processor but very little memory available to apps. In this case, we'd want to trade off in favor of spatial complexity and trade off against computational complexity: the processor can do more work so we can save memory.
diff --git a/lessons/trade-offs.md b/lessons/trade-offs.md
index ca7dfe5..69dbf3f 100644
--- a/lessons/trade-offs.md
+++ b/lessons/trade-offs.md
@@ -14,7 +14,7 @@ It's actually the mindset that you're gaining that's actually the useful part. "
 Really though, it's the additional function in my brain that as I'm writing code is thinking about "can this be done better".
-Let's talk about "better" for a second. The temptation after learning about these concepts is to try to optimize every piece of code that you come across. "How can this have the loweset Big O?" is the wrong question to ask. Let's go over some guiding principles.
+Let's talk about "better" for a second. The temptation after learning about these concepts is to try to optimize every piece of code that you come across. "How can this have the lowest Big O?" is the wrong question to ask. Let's go over some guiding principles.
 - There are no rules. "Always do _blank_". Everything has context. These are just tools and loose decision-making frameworks for you to use to make a contextually good choice.
 - There are frequently multiple good choices and almost never a perfect, "right" choice.