diff --git a/problems/1105/paxtonfitzpatrick.md b/problems/1105/paxtonfitzpatrick.md new file mode 100644 index 0000000..a18bb3d --- /dev/null +++ b/problems/1105/paxtonfitzpatrick.md @@ -0,0 +1,104 @@ +# [Problem 1105: Filling Bookcase Shelves](https://leetcode.com/problems/filling-bookcase-shelves/description/?envType=daily-question) + +## Initial thoughts (stream-of-consciousness) + +- this is an interesting problem. We need to divide the sequence of books into per-shelf subsequences, optimizing for the minimum sum of the shelves' heights we can achieve with shelves that are at most `shelfWidth` wide. +- okay my initial thought is that a potential algorithm could go something like this: + - We know the first book **must** go on the first shelf, so place it there. The height of the first shelf is now the height of the first book. + - Then, for each subsequent book: + - if the book can fit on the same shelf as the previous book without increasing the shelf's height (i.e., its height is $\le$ the current shelf height (the height of the tallest book on the shelf so far) and its width plus the widths of all books placed on the shelf so far is $\le$ `shelfWidth`), then place it on the same shelf. + - elif the book can't fit on the same shelf as the previous one without exceeding `shelfWidth`, then we **must** place it on the next shelf + - I think we'll then have to tackle the sub-problem of whether moving some of the preceding books from the last shelf to this next shelf would decrease the height of that last shelf without increasing the height of this next shelf... or maybe it's okay to increase this next shelf's height if doing so decreases the previous one's by a larger amount? This feels like it could get convoluted fast... + - else, the book *can* fit on the same shelf as the previous but *would* increase the shelf's height, so we need to determine whether it's better to place it on the current shelf or start a new shelf. + - is this conceptually the same sub-problem as the one above? Not sure... +- I think the basic thing we're optimizing for is having tall books on the same shelf as other tall books whenever possible. This makes me think we might want to try to identify "optimal runs" of books in the array containing as many tall books as possible whose total width is $\le$ `shelfWidth`. Maybe I could: + - sort a copy of the input list by book height to find the tallest books + - then in the original list, search outwards (left & right) from the index of each tallest book to try to create groupings of books that contain as many tall books as possible. + - How would I formalize "as many tall books as possible"? Maximizing the sum of the grouped books' heights doesn't seem quite right... + - Since I want tall books together *and* short books together, maybe I could come up with a scoring system for groupings that penalizes books of different heights being in the same group? Something like trying to minimize the sum of the pairwise differences between heights of books in the same group? + - anything that involves "pairwise" raises a red flag for me though because it hints at an $O(n^2)$ operation, which I'd like to avoid + - Though I'm not sure how I'd do this without leaving the occasional odd short book on its own outside of any subarray, meaning it'd end up on its own shelf, which isn't good...
+- the general steps I wrote out above also make me think of recursion, since they entail figuring out which of two options is better (place a book on the current shelf or start a new shelf when either is possible) by optimizing the bookshelf downstream of both decisions and then comparing the result. + - I think framing this as a recursion problem also resolves my confusion about the potential need to "backtrack" in the case where we're forced by `shelfWidth` to start a new shelf, in order to determine whether it'd be better to have placed some previous books on that new shelf as well -- if we simply test out the result of placing each book on the same shelf and on a new shelf, then we wouldn't have to worry about that because all of those combinations of books on a shelf would be covered. + - the downside is that testing both options (same shelf and new shelf) for *every* book would make the runtime $O(2^n)$, I think, which is pretty rough. + - although... maybe I could reduce this significantly? Say we put book $n$ on a given shelf, then put book $n+1$ on that same shelf, then put book $n+2$ on a new shelf. Or, we put book $n$ on a given shelf, then put book $n+1$ on a new shelf, then also put book $n+2$ on a new shelf. In both cases, all subsequent calls in that branch of the recursion tree follow from a shelf that starts with book $n+2$. So if I can set up the recursive function so that it depends only on the current book and the amount of room left on the current shelf, then I could use something like `functools.lru_cache()` to memoize the results of equivalent recursive calls. I think this would end up covering the vast majority of calls if I can set it up right. + - actually, I think `functools.lru_cache()` would be overkill since it adds a lot of overhead to enable introspection, discarding old results, etc. I think I'd be better off just using a regular dict instead. + - also, even before the memoization, I think it should be slightly better than $O(2^n)$ because there will be instances of the second case I noted above where the next book *can't* be placed on the current shelf and **must** be placed on the next shelf. + - I'm not 100% sure the recursive function can be written to take arguments that'll work for the memoization (hashable, shared between calls that should share a memoized result and not those that don't, etc.), but I think I'll go with this for now and see if I can make it work +- so how would I set this up recursively? + - I think the "base case" will be when I call the recursive function on the last book. At that point, I'll want to return the height of the current shelf so that in the recursive case, I can add the result of a recursive call to the current shelf's height to get the height of the bookshelf past the current book.
+ - actually, what I do will be slightly different for the two possible cases, because I'll need to compare the rest-of-bookshelf height between the two options to choose the optimal one: + - if placing the next book on a new shelf, the rest-of-bookshelf height will be the current shelf's height plus the returned height + - if placing the next book on the current shelf, the rest-of-bookshelf height will be the larger of the current shelf's height and the book's height, plus the returned height + - and then I'll need to compare those two heights and return the smaller one + - so I think I'll need to set the function up to take as arguments (at least): + - the index of the current book + - the height of the current shelf + - the remaining width on the current shelf + - possibly the cache object and `books` list, unless I make them available in scope some other way + - and then I can format the book index and remaining shelf width as a string to use as a key in the cache dict +- okay it's possible there are some additional details I haven't thought of yet, but I'm gonna try this + +## Refining the problem, round 2 thoughts + +## Attempted solution(s) + +```python +class Solution: + def minHeightShelves(self, books: List[List[int]], shelfWidth: int) -> int: + return self._recurse_books(books, 0, shelfWidth, shelfWidth, 0, {}) + + def _recurse_books( + self, + books, + curr_ix, + full_shelf_width, + shelf_width_left, + curr_shelf_height, + call_cache + ): + # base case (no books left): + if curr_ix == len(books): + return curr_shelf_height + + cache_key = f'{curr_ix}-{shelf_width_left}' + if cache_key in call_cache: + return call_cache[cache_key] + + # test placing book on new shelf + total_height_new_shelf = curr_shelf_height + self._recurse_books( + books, + curr_ix + 1, + full_shelf_width, + full_shelf_width - books[curr_ix][0], + books[curr_ix][1], + call_cache + ) + + # if book can fit on current shelf, also test placing it there + if books[curr_ix][0] <= shelf_width_left: + # check if current book is new tallest book on shelf + if books[curr_ix][1] > curr_shelf_height: + curr_shelf_height = books[curr_ix][1] + + total_height_curr_shelf = self._recurse_books( + books, + curr_ix + 1, + full_shelf_width, + shelf_width_left - books[curr_ix][0], + curr_shelf_height, + call_cache + ) + if total_height_curr_shelf < total_height_new_shelf: + call_cache[cache_key] = total_height_curr_shelf + return total_height_curr_shelf + else: + call_cache[cache_key] = total_height_new_shelf + return total_height_new_shelf + + call_cache[cache_key] = total_height_new_shelf + return total_height_new_shelf + +``` + +![](https://github.com/user-attachments/assets/d79f6e5d-28f1-4a11-8254-92847001bbd1) diff --git a/problems/1460/paxtonfitzpatrick.md b/problems/1460/paxtonfitzpatrick.md new file mode 100644 index 0000000..380be9b --- /dev/null +++ b/problems/1460/paxtonfitzpatrick.md @@ -0,0 +1,39 @@ +# [Problem 1460: Make Two Arrays Equal by Reversing Subarrays](https://leetcode.com/problems/make-two-arrays-equal-by-reversing-subarrays/description/?envType=daily-question) + +## Initial thoughts (stream-of-consciousness) + +- okay my first thought is: as long as all of the elements in `target` and `arr` are the same, regardless of their order, is it possible to do some series of reversals to make them equal? +- I'm pretty sure it is... 
and the fact that this is an "easy" problem -- while coming up with an algorithm for determining which subarrays to reverse in order to check this directly seems challenging -- also makes me think that. +- I can't come up with a counterexample or logical reason why this wouldn't be the case, so I'll go with it. + +## Refining the problem, round 2 thoughts + +- I think the simplest way to do this will be to sort both arrays and then test whether they're equal. That'll take $O(n\log n)$ time, which is fine. +- Even though `return sorted(arr) == sorted(target)` would be better practice in general, for the purposes of this problem I'll sort the arrays in place since that'll cut the memory used in half. + +## Attempted solution(s) + +```python +class Solution: + def canBeEqual(self, target: List[int], arr: List[int]) -> bool: + target.sort() + arr.sort() + return target == arr +``` + +![](https://github.com/user-attachments/assets/d6c036c2-d3f9-4d4d-9521-82ad96ceebed) + +## Refining the problem further + +- okay I just realized there's actually a way to do this in $O(n)$ time instead of $O(n\log n)$. I'm not sure it'll *actually* be faster in practice, since my first solution was pretty fast -- and it'll definitely use more memory (though still $O(n)$) -- but it's super quick so worth trying. +- basically as long as the number of occurrences of each element in the two arrays is the same, then they'll be the same when sorted. So we can skip the sorting and just compare the counts with a `collections.Counter()`: + +```python +class Solution: + def canBeEqual(self, target: List[int], arr: List[int]) -> bool: + return Counter(target) == Counter(arr) +``` + +![](https://github.com/user-attachments/assets/d902d70d-1725-4816-a099-a1e10a82eb10) + +So basically identical runtime and memory usage. I guess the upper limit of 1,000-element arrays isn't large enough for the asymptotic runtime improvement to make up for the overhead of constructing the `Counter`s, and the upper limit of 1,000 unique array values isn't large enough for the additional memory usage to make much of a difference. diff --git a/problems/1508/paxtonfitzpatrick.md b/problems/1508/paxtonfitzpatrick.md new file mode 100644 index 0000000..bc34096 --- /dev/null +++ b/problems/1508/paxtonfitzpatrick.md @@ -0,0 +1,51 @@ +# [Problem 1508: Range Sum of Sorted Subarray Sums](https://leetcode.com/problems/range-sum-of-sorted-subarray-sums/description/?envType=daily-question) + +## Initial thoughts (stream-of-consciousness) + +- okay I'm guessing that finding the solution the way they describe in the problem setup is going to be too slow, otherwise that'd be kinda obvious... though the constraints say `nums` can only have up to 1,000 elements, which means a maximum of 500,500 sums, which isn't *that* crazy... +- maybe there's a trick to figuring out which values in `nums` will contribute to the sums between `left` and `right` in the sorted array of sums? Or even just the first `right` sums? +- I can't fathom why we're given `n` as an argument... at first I thought it might point towards it being useful in the expected approach, but even if so, we could just compute it from `nums` in $O(1)$ time... +- For the brute force approach (i.e., doing it how the prompt describes), I can think of some potential ways to speed up computing the sums of all subarrays... e.g., we could cache the sums of subarrays between various indices `i` and `j`, and then use those whenever we need to compute the sum of another subarray that includes `nums[i:j]`.
But I don't think these would end up getting re-used enough to make the trade-off of having to check `i`s and `j`s all the time worth it, just to save *part* of the $O(n)$ runtime of the already very fast `sum()` function. +- ah, a better idea of how to speed that up: computing the sums of all continuous subarrays would take $O(n^3)$ time, because we compute $n^2$ sums, and `sum()` takes $O(n)$ time. But if I keep a running total for the inner loop and, for each element, add it to the running total and append that result, rather than recomputing the sum of all items up to the newest added one, that should reduce the runtime to $O(n^2)$. +- This gave me another idea about "recycling" sums -- if I compute the cumulative sum for each element in `nums` and store those in a list `cumsums`, then I can compute the sum of any subarray `nums[i:j]` as `cumsums[j] - cumsums[i-1]`. Though unfortunately, I don't think this will actually save me any time since it still ends up being $n^2$ operations. +- Nothing better is coming to me for this one, so I think I'm going to just implement the brute force approach and see if it's fast enough. Maybe I'll have an epiphany while I'm doing that. If not, I'll check out the editorial solution cause I'm pretty curious what's going on here. + - The complexity for the brute force version is a bit rough... iterating `nums` and constructing the list of sums will take $O(n^2)$ time and space, then sorting that list of sums will take $O(n^2 \log n^2)$ time, which is asymptotically equivalent to $O(n^2 \log n)$. + +## Refining the problem, round 2 thoughts + + +## Attempted solution(s) + +```python +class Solution: + def rangeSum(self, nums: List[int], n: int, left: int, right: int) -> int: + sums = [] + for i in range(n): + subsum = 0 + for j in range(i, n): + subsum += nums[j] + sums.append(subsum) + sums.sort() + return sum(sums[left-1:right]) % (10**9 + 7) +``` + +![](https://github.com/user-attachments/assets/fd4e974b-3abb-443e-ba30-40490e326f75) + +Wow, that's a lot better than I expected. Looks like most people actually went with this approach. I'll try the cumulative sum version I mentioned above just quickly too... + +```python +class Solution: + def rangeSum(self, nums: List[int], n: int, left: int, right: int) -> int: + # accumulate is itertools.accumulate, add is operator.add, both already + # imported in leetcode environment + sums = list(accumulate(nums, add)) + nums[1:] + for i in range(1, len(nums)-1): + for j in range(i+1, len(nums)): + sums.append(sums[j] - sums[i-1]) + sums.sort() + return sum(sums[left-1:right]) % (10**9 + 7) +``` + +![](https://github.com/user-attachments/assets/8946b8e3-91b0-404d-b131-87fc15a8835d) + +Slightly slower, which I guess makes sense since it's doing basically the same thing but with a bit more overhead. diff --git a/problems/1653/paxtonfitzpatrick.md b/problems/1653/paxtonfitzpatrick.md new file mode 100644 index 0000000..09e7e1f --- /dev/null +++ b/problems/1653/paxtonfitzpatrick.md @@ -0,0 +1,103 @@ +# [Problem 1653: Minimum Deletions to Make String Balanced](https://leetcode.com/problems/minimum-deletions-to-make-string-balanced/description/?envType=daily-question) + +## Initial thoughts (stream-of-consciousness) + +- okay I don't see an immediately obvious solution to this one +- one initial idea I have is that the closer to the "wrong" end of the string a character is (i.e., closer to the front for b's and the back for a's), the more confident we are that we'll want to delete it.
So maybe I could do something like: + - initialize 2 indices into the string: `i = 0` and `j = len(s) - 1`, and a counter `n_deletions = 0` + - `while s[i] == 'a'`, consume a character from the front of the string and increment `i` by 1 + - `if s[i+1] == 'b'`, switch to consuming characters from the back of the string and decrementing `j` by 1 + - `if s[j-1] == 'a'`, switch back to the front of the string, consume the `'b'` we stopped at before, increment `n_deletions` by 1, continue consuming `'a'`s, and so on... + - stop when `i + 1 == j` or `j - 1 == i` and return `n_deletions` + - I'm not sure this will work though... I think I'll try playing this out with the two examples and see what happens: + - Example 1: + - `s = "aababbab"`; `i = 0`; `j = 7`; `n_deletions = 0` + - consume `s[0]` + - consume `s[1]` + - `s[2]` would be a `"b"` so switch to consuming from the right + - consume `s[7]` + - `s[6]` would be an `"a"` so switch back to consuming from the left + - consume `s[2]` and update `n_deletions` to 1 + - consume `s[3]` + - `s[4]` would be a `"b"` so switch to consuming from the right + - consume `s[6]` and update `n_deletions` to 2 + - consume `s[5]` + - consume `s[4]` + - `i == 3`, so instead of consuming `s[3]`, `return n_deletions` + - Example 2: + - `s = "bbaaaaabb"`; `i = 0`; `j = 8`; `n_deletions = 0` + - `s[0]` would be a `"b"` so switch to right + - consume `s[8]` + - consume `s[7]` + - `s[6]` would be an `"a"` so switch to left + - consume `s[0]` and update `n_deletions` to 1 + - `s[1]` would be a `"b"` so switch to right + - consume `s[6]` and update `n_deletions` to 2 + - `s[5]` would be an `"a"` so switch to left + - consume `s[1]` and update `n_deletions` to 3 + - ... + - Okay so it worked for the first example but not the second. I thought that might happen, because this approach assumes there are both a's in the "b section" and b's in the "a section" of the string, which isn't necessarily true. I think I could tweak the rules I used for consuming characters/"looking ahead" to make this work for the 2nd example, but I don't think it'll work in the general case. +- So that approach didn't work because it assumed there was a 1:1 relationship between a's to be removed from the back and b's to be removed from the front. But the minimum number of deletions can involve all a's, all b's, or some uneven combination of them. So what if instead, I iterate through the string and, for each index, count the number of a's to the right and b's to the left (which are what would have to be removed), and then just return the smallest of those combined counts across all indices? + - This seems promising, but I'd have to figure out how to do it efficiently... the simplest approach would involve something like, e.g., for each index `i`, doing `sum(1 for i in range(i) if s[i] == 'b') + sum(1 for i in range(i+1, len(s)) if s[i] == 'a')`. But that'd take $O(n^2)$ time... + - Ah -- I could figure out the number of b's to the left of each index in $O(n)$ time by looping through the string and tracking the cumulative count of b's, and storing that in some list. And then since every letter in the string is either an a or a b, for each index `i`, the number of a's to the left of it would be `i - n_left_bs`. Then once I get to the end of the string I'll know the total number of b's and therefore the total number of a's, so the number of a's to the right of each index would be the total number of a's minus the number of a's to the left of each index. + - ... or I could just use `str.count()` instead 🤦 duh.
Either that or looping through the list of cumulative b-counts to figure out the a-counts for each index would take $O(n)$ time, but `str.count()` is a C function so it'll be much faster than a Python loop. In fact it might end up being faster than the math logic I'd do at each iteration. Maybe I'll try both ways. + - In fact, I don't even think I'd have to store the cumulative count(s) in a list; I could just track a running minimum with a variable. That'd reduce the space complexity from $O(n)$ to $O(1)$. + - should the current index be included in the count of a's/b's to the left or right? I don't think it matters. + - I can't think of any edge cases that would potentially trip me up here... I think the only thing worth doing is checking whether the string is all a's or all b's, in which case I can just return 0. + +## Refining the problem, round 2 thoughts + +## Attempted solution(s) + +```python +class Solution: + def minimumDeletions(self, s: str) -> int: + total_as = s.count('a') + if total_as == 0 or total_as == len(s): + return 0 + + n_bs_left = 0 + min_deletions = len(s) + for i in range(len(s)): + if s[i] == 'b': + n_bs_left += 1 + # n_as_left = i + 1 - n_bs_left + # n_as_right = total_as - n_as_left + # n_deletions = n_bs_left + n_as_right + # simplified equivalent: + n_deletions = 2 * n_bs_left + total_as - i - 1 + if n_deletions < min_deletions: + min_deletions = n_deletions + + return min_deletions +``` + +![](https://github.com/user-attachments/assets/44cace17-7991-4f81-8ca2-e04bbad1bd0b) + +That's weird... I wonder what happened there. I don't think there's something fundamentally wrong with the logic of my solution itself if it passed 155/157 test cases, but I'll try to figure out what's unique about this one. + +Ah -- since I treat the "current" character as one of the characters "to the left" of the current position, I never consider the case where all characters are on the right and none are on the left. And because of that, I miss instances where the minimum deletion is deleting all a's. + +I can fix this pretty easily by just returning the minimum of the `min_deletions` my approach identifies and the `total_as` in the string. + +```python +class Solution: + def minimumDeletions(self, s: str) -> int: + total_as = s.count('a') + if total_as == 0 or total_as == len(s): + return 0 + + n_bs_left = 0 + min_deletions = len(s) + for i in range(len(s)): + if s[i] == 'b': + n_bs_left += 1 + if 2 * n_bs_left + total_as - i - 1 < min_deletions: + min_deletions = 2 * n_bs_left + total_as - i - 1 + + if min_deletions < total_as: + return min_deletions + return total_as +``` + +![](https://github.com/user-attachments/assets/cb112e59-24fe-4245-976f-35c1747d32cd) diff --git a/problems/2053/paxtonfitzpatrick.md b/problems/2053/paxtonfitzpatrick.md new file mode 100644 index 0000000..1d4369e --- /dev/null +++ b/problems/2053/paxtonfitzpatrick.md @@ -0,0 +1,67 @@ +# [Problem 2053: Kth Distinct String in an Array](https://leetcode.com/problems/kth-distinct-string-in-an-array/description/?envType=daily-question) + +## Initial thoughts (stream-of-consciousness) + +- this seems pretty straightforward. Though there might be a slightly more optimal way than what I'm thinking of. My plan is to: + - use a `collections.Counter()` to get the number of occurrences of each string in `arr`. This takes $O(n)$ time and space. + - I then want to find the items that have a count of 1, and get the $k$th one of those.
There are a few ways I could do this: + - initialize a variable `n_dist` to 0, then loop over `arr` and check if each item's count is 1. If yes, increment `n_dist` by 1. If `n_dist == k`, return the item. If I exhaust the list, return `""`. This would take $O(n)$ time. + - `Counter`s preserve insertion order, so I could instead loop over its items and increment `n_dist` by 1 each time I encounter a key whose value is 1, and return the key when `n_dist == k` (or `""` if I exhaust the `Counter`). This would take $O(m)$ time where $m$ is the number of unique items in `arr`, which could be all of them, so it'd be $O(n)$ in the worst case. + - use `Counter.most_common()` to get a list of `(item, count)` tuples in descending order of count, with tied counts appearing in insertion order. Index this with `-k` to get the $k$th least common item and its count. If its count is 1, return the item; otherwise return `""`. I initially thought this might be the best approach, but I think it's actually the worst one because `.most_common()` has to sort the items by count internally, which could take up to $O(n \log n)$ time if all items in `arr` are unique. + - Given this, I'll go with the second option... though I might also try the 3rd one to compare, because my intuition is that for small $n$s (`arr` contains at most 1,000 items), an $O(n \log n)$ operation in C might end up being faster than an $O(n)$ operation in Python. + - actually, never mind -- I can't index it with `-k`, I'd have to iterate over the `most_common()` list in reverse to find the total number of elements with a count of 1 (`n_distincts`), then index it with `-(n_distincts - k + 1)` to get the $k$th item with a count of 1. That makes it not worth it, I think (though I sketch what that would look like below, just before the solutions, for reference). + +## Refining the problem, round 2 thoughts + +- This won't change the asymptotic complexity, but I thought of a way to do this in $O(n)$ time instead of $O(2n)$, at the cost of some additional (less than $O(n)$) space. I could: + - initialize a dict `distincts` to store distinct items as keys with values of `None` + - this basically acts as a set that provides $O(1)$ lookup, insertion, and deletion, but also preserves insertion order + - initialize a set `non_distincts` to store items that are not distinct + - for each item in `arr`: + - if it's in `non_distincts`, continue on. If it's not, then check whether it's in `distincts` + - if so, remove it from `distincts` and add it to `non_distincts` + - otherwise, add it to `distincts` + - if the length of `distincts` is $\lt k$, return `""`. Otherwise, use `itertools.islice` to return the $k$th key +- I might try this as well, just to compare.
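For reference, here's roughly what that rejected `.most_common()`-based idea from my initial thoughts would look like with the reverse indexing worked out. This is just a sketch I didn't submit, and the two approaches I actually ran follow below:

```python
from collections import Counter  # pre-imported in the LeetCode environment
from typing import List


class Solution:
    def kthDistinct(self, arr: List[str], k: int) -> str:
        # (item, count) pairs in descending count order; the sort is stable, so
        # all count-1 items sit at the tail in their original relative order
        ordered = Counter(arr).most_common()
        n_distincts = sum(1 for _, count in ordered if count == 1)
        if n_distincts < k:
            return ""
        # the kth item counted from the start of the count-1 tail
        return ordered[-(n_distincts - k + 1)][0]
```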
+ +## Attempted solution(s) + +### Approach #1 + +```python +class Solution: + def kthDistinct(self, arr: List[str], k: int) -> str: + counts = Counter(arr) + n_distincts = 0 + for item, count in counts.items(): + if count == 1: + n_distincts += 1 + if n_distincts == k: + return item + return "" +``` + +![](https://github.com/user-attachments/assets/d5cdb924-29a6-451f-9798-8016ced24060) + +### Approach #2 + +```python +class Solution: + def kthDistinct(self, arr: List[str], k: int) -> str: + distincts = {} + non_distincts = set() + for item in arr: + if item not in non_distincts: + if item in distincts: + del distincts[item] + non_distincts.add(item) + else: + distincts[item] = None + if len(distincts) < k: + return "" + return next(islice(distincts.keys(), k-1, None)) +``` + +![](https://github.com/user-attachments/assets/b6f4e593-ada0-4e97-9353-410cf1933e84) + +Huh, I'm surprised both that this one was slower and that it used less memory. I think with a sufficiently large `arr`, the result would be the opposite. diff --git a/problems/2134/paxtonfitzpatrick.md b/problems/2134/paxtonfitzpatrick.md new file mode 100644 index 0000000..2aebf6e --- /dev/null +++ b/problems/2134/paxtonfitzpatrick.md @@ -0,0 +1,44 @@ +# [Problem 2134: Minimum Swaps to Group All 1's Together II](https://leetcode.com/problems/minimum-swaps-to-group-all-1s-together-ii/description/?envType=daily-question) + +## Initial thoughts (stream-of-consciousness) + +- okay, this one seems pretty easy unless there's a "gotcha" I'm not seeing +- first I'll need to figure out how many 1s there are in the list, which is just the sum of the list since it's all 1s and 0s. Call this $k$ +- then I can just move a sliding window of length $k$ across the list and check how many swaps it'd take to make all items within the window 1s. This is just $k$ minus the sum of the window. The smallest of these values across all windows (i.e., $k$ minus the largest window sum) is the answer. +- the one tricky part might be handling the wrap-around of the circular array. I could probably use something like `itertools.cycle` for this, but since that returns an iterator, that'd make the slicing of the list to get each window trickier. So the simplest thing will be to just stick a few (up to $k$) elements from the beginning of the list onto the end before checking windows, so I can include those early items in windows starting with items towards the end just with regular indexing. + - or, I could do it without modifying the input list by just checking whether the current index + $k$ is greater than the length of the list, and if so, appending the right number of elements from the beginning to the window. or I could maintain separate index variables for the start and end, and wrap the end one when necessary + - Either of these would require assigning each window to a variable, which would use a little extra memory, and also a conditional check each iteration (or try/except), which would add a little runtime. But also, increasing the length of the input list would too. Hard to say which would be more efficient at the end of the day, but modifying the input is bad practice so even though that doesn't *really* matter here, I'll go with one of the others + +## Refining the problem, round 2 thoughts + +- in my approach above, summing the list to find $k$ upfront would take $O(n)$ time, sliding the window across the list would also take $O(n)$ time, and checking the sum of each window would take $O(k)$ time, so the overall time complexity would be $O(n \times k)$, which could be $O(n^2)$ if $k = n$.
But I can reduce this to just $O(n)$ if I don't sum each window separately -- instead, when I shift the window by 1 index, I can subtract the left element that's no longer in the next window and add the right element that just entered the window. +- any edge cases to deal with? + - if $k$ is 0, then there are no 1s in the list and the answer is 0 swaps + - if $k$ is the length of the full list, then the list is all 1s and the answer is also 0 swaps + - actually the first case above also applies if there's only one 1 ($k$ is 1), and the second also applies if there's only one 0 ($k$ is the length of the list minus 1) + +## Attempted solution(s) + +```python +class Solution: + def minSwaps(self, nums: List[int]) -> int: + len_nums = len(nums) + k = sum(nums) + if k < 2 or k > len_nums - 2: + return 0 + + max_ones = window_ones = sum(nums[:k]) + start_ix = 1 + end_ix = k + while start_ix < len_nums: + window_ones += nums[end_ix] - nums[start_ix - 1] + if window_ones > max_ones: + max_ones = window_ones + start_ix += 1 + end_ix += 1 + if end_ix == len_nums: + end_ix = 0 + return k - max_ones +``` + +![](https://github.com/user-attachments/assets/7e1f9869-a314-4caf-a249-cf206d875f44) diff --git a/problems/2678/paxtonfitzpatrick.md b/problems/2678/paxtonfitzpatrick.md new file mode 100644 index 0000000..4ed78d5 --- /dev/null +++ b/problems/2678/paxtonfitzpatrick.md @@ -0,0 +1,20 @@ +# [Problem 2678: Number of Senior Citizens](https://leetcode.com/problems/number-of-senior-citizens/description/?envType=daily-question) + +## Initial thoughts (stream-of-consciousness) + +- yay, finally an easy one for a change. This is just a one-liner summing a generator expression that filters based on the age characters in the string. + +## Refining the problem, round 2 thoughts + +- little hacky optimization: since the ages are always 2 characters long, we know they must always be between `1` and `99`, and 1-digit ages must have leading zeros. This means we can just compare the age strings to `"60"` instead of converting them to ints and comparing them to `60`, because Python compares sequences by their lexicographical ordering -- in the case of string, using each character's unicode code point. +- this could reduce the runtime by *hundreds* of nanoseconds!!! 🏃‍♂️💨 + +## Attempted solution(s) + +```python +class Solution: + def countSeniors(self, details: List[str]) -> int: + return sum(d[11:13] > '60' for d in details) +``` + +![](https://github.com/user-attachments/assets/ce9815a7-4ed1-421f-96ca-84bf07571bd6) diff --git a/problems/273/paxtonfitzpatrick.md b/problems/273/paxtonfitzpatrick.md new file mode 100644 index 0000000..401b071 --- /dev/null +++ b/problems/273/paxtonfitzpatrick.md @@ -0,0 +1,109 @@ +# [Problem 273: Integer to English Words](https://leetcode.com/problems/integer-to-english-words/description/?envType=daily-question) + +## Initial thoughts (stream-of-consciousness) + +- oh this one looks like fun as well. Interesting that it's labeled hard... 
on first glance it seems like it'll require a larger-than-usual amount of coding, but nothing particularly complex conceptually +- I think there's gonna be a few different reusable components I'll need: + - a mapping from digits to their english word when in the ones place (e.g., `{'1': 'One', '2': 'Two', ...}`) + - this will also give me the word representation of the hundreds place by just adding 'Hundred' after it + - a mapping from digits to their english words when in the tens place (e.g., `{'2': 'Twenty', '3': 'Thirty', ...}`) + - note: I'll need to handle the teens separately. While parsing the number, if I run into a 1 in the tens place, I'll use a specific helper function/mapping that consumes both the tens and ones place to get the correct word (e.g., '13' ➡️ 'Thirteen') + - I think I'll define these mappings on the class so they can be reused across test cases without needing to redefine them + - another note: it looks like the output is expected to be in title case, so I'll need to remember to set things up that way + - a helper function that takes a sequence of 3 digits and uses the mappings above to convert them to their combined english words (e.g., f('123') ➡️ 'One Hundred Twenty Three') +- and then my main function will: + - convert the input number to a string + - split it into groups of 3 digits, with the $\lt$ 3-digit group at the beginning if its length isn't divisible by 3 + - I can do both of the above steps using the ',' string format specifier to convert the int to a string with commas, and then splitting it into a list on the commas (there's a quick sanity check of this trick right after these notes) + - process each of those groups independently using the helper function above, and add its appropriate "suffix" (e.g., 'Thousand', 'Million', etc.) + - the constraints say `num` can be at most $2^{31} - 1$, which is 2,147,483,647, so I'll only need to handle up to 'Billion' + - join the groups together and return the result +- I don't foresee any major issues with this approach, so I'm gonna go ahead and start implementing it. + +## Refining the problem, round 2 thoughts + +- any edge cases to consider? + - only potential "gotcha" I can think of is that typically 0s should lead to an empty string, or no added output (e.g., 410000 ➡️ 'Four Hundred Ten Thousand'; 5002 ➡️ 'Five Thousand Two'), except for when the full number itself is 0, in which case the output should be 'Zero'. But I'll just add a check for this at the beginning of the main function +- what would the time and space complexity of this be? + - I think the only part that scales with the input is looping over the 3-digit groups to process them, so the number of groups I need to process will increase by 1 as `num` increases by a factor of 1,000. I think this would mean both the time and space complexities are $O(\log n)$
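Quick sanity check of the comma-format grouping trick mentioned above (scratch code, not part of the submission; the value is just an arbitrary example):

```python
# the ',' format specifier inserts thousands separators, so splitting on commas
# yields the 3-digit groups, with any shorter leading group first
num = 2147483647
groups = f'{num:,}'.split(',')
print(groups)                  # ['2', '147', '483', '647']
# reversing gives the least-significant group first, matching the suffix order
# ('', ' Thousand', ' Million', ' Billion') used in the solution below
print(list(reversed(groups)))  # ['647', '483', '147', '2']
```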
+ +## Attempted solution(s) + +```python +class Solution: + ones_mapping = { + '0': '', + '1': 'One', + '2': 'Two', + '3': 'Three', + '4': 'Four', + '5': 'Five', + '6': 'Six', + '7': 'Seven', + '8': 'Eight', + '9': 'Nine' + } + tens_mapping = { + '2': 'Twenty', + '3': 'Thirty', + '4': 'Forty', + '5': 'Fifty', + '6': 'Sixty', + '7': 'Seventy', + '8': 'Eighty', + '9': 'Ninety' + } + teens_mapping = { + '10': 'Ten', + '11': 'Eleven', + '12': 'Twelve', + '13': 'Thirteen', + '14': 'Fourteen', + '15': 'Fifteen', + '16': 'Sixteen', + '17': 'Seventeen', + '18': 'Eighteen', + '19': 'Nineteen' + } + group_suffixes = ('', ' Thousand', ' Million', ' Billion') + + def numberToWords(self, num: int) -> str: + if num == 0: + return 'Zero' + result = '' + for digit_group, suffix in zip( + reversed(f'{num:,}'.split(',')), + self.group_suffixes + ): + if digit_group != '000': + group_words = self._process_digit_group(digit_group) + suffix + result = f'{group_words} {result}' + return result.strip() + + def _process_digit_group(self, digits): + match len(digits): + case 1: + return self.ones_mapping[digits] + case 2: + if digits[0] == '1': + return self.teens_mapping[digits] + return ( + f'{self.tens_mapping[digits[0]]} ' + f'{self.ones_mapping[digits[1]]}' + ).rstrip() + if digits[0] == '0': + group_words = '' + else: + group_words = self.ones_mapping[digits[0]] + ' Hundred' + match digits[1]: + case '0': + return f'{group_words} {self.ones_mapping[digits[2]]}'.strip() + case '1': + return f'{group_words} {self.teens_mapping[digits[1:]]}'.lstrip() + return ( + f'{group_words} {self.tens_mapping[digits[1]]} ' + f'{self.ones_mapping[digits[2]]}' + ).strip() +``` + +![](https://github.com/user-attachments/assets/71d3b83a-56de-48a2-8209-f787c0034b4d) diff --git a/problems/3016/paxtonfitzpatrick.md b/problems/3016/paxtonfitzpatrick.md new file mode 100644 index 0000000..cb30008 --- /dev/null +++ b/problems/3016/paxtonfitzpatrick.md @@ -0,0 +1,47 @@ +# [Problem 3016: Minimum Number of Pushes to Type Word II](https://leetcode.com/problems/minimum-number-of-pushes-to-type-word-ii/description/?envType=daily-question) + +## Initial thoughts (stream-of-consciousness) + +- oh this looks fun +- okay we have 8 keys to work with, and we want to minimize the number of pushes to type the given word, which means we want to prioritize letters that we type more often being the first assigned to a key +- okay my immediate idea is: + - initialize variables `total_presses` to 0 and `key_slot` to 1 + - use a `collections.Counter` to get the count of each letter in `word`, and the `.most_common()` method to get those counts ordered by frequency + - assign the most commonly used letter to the first slot of key 2, the second to the first slot of key 3, and so on. After assigning the first slot of key 9, start assigning letters to keys' 2nd slots and increment `key_slot` by 1 + - each time a letter is assigned to a key, increment `total_presses` by the number of times it occurs in `word` times `key_slot` + - when we reach the end of the `.most_common()` list, return `total_presses` +- I can't think of any edge cases or scenarios that'd break this logic... and also this *seems* to be what's happening in the examples... plus the runtime is $O(n)$ and space complexity is $O(1)$, which is just about the best we could possibly do here... so I'm going to go ahead and implement it.
+ - (the `Counter` object takes $O(n)$ space and the sorting required for `.most_common()` takes $O(n \log n)$ with respect to the number of items in it, but in this case that number is guaranteed to always be $\le$ 26, so it's effectively $O(1)$) + - ah, actually -- now that I think of it, the fact that this container's size is fixed means I don't actually have to use a `Counter` at all, I can just pre-allocate a list of 26 0s and increment the value at each given letter's index as I iterate through the input word. I'll still need to sort the list (in reverse order) just like I would the counter, but this way I avoid the overhead of the `Counter` object itself. + +## Refining the problem, round 2 thoughts + +## Attempted solution(s) + +```python +class Solution: + def minimumPushes(self, word: str) -> int: + letter_counts = [0] * 26 + for letter in word: + # 97 is ord('a') + letter_counts[ord(letter) - 97] += 1 + letter_counts.sort(reverse=True) + total_presses = 0 + key_slot = 1 + curr_key = 2 + for count in letter_counts: + if count == 0: + return total_presses + total_presses += count * key_slot + if curr_key == 9: + curr_key = 2 + key_slot += 1 + else: + curr_key += 1 + return total_presses +``` + +![](https://github.com/user-attachments/assets/15ebd036-fd52-4606-9028-4ebaacc50e31) + +- huh... that's an interesting bimodal distribution. Seems like people are using one of two approaches to solve this, but I can't immediately think of a more efficient way than what I did... I guess I'll check the editorial and see +- ah, another heap solution. I really should learn how those work at some point.
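
For reference, here's my guess at what a heap-flavored version of this might look like, using `heapq.nlargest` to pull the letter counts in descending order instead of sorting a fixed-size list. I haven't checked this against the editorial, so treat it as a sketch rather than the "official" alternative:

```python
import heapq
from collections import Counter  # both pre-imported in the LeetCode environment


class Solution:
    def minimumPushes(self, word: str) -> int:
        # letter counts in descending order, selected via a heap instead of a full sort
        counts = heapq.nlargest(26, Counter(word).values())
        total_presses = 0
        for rank, count in enumerate(counts):
            # the 8 most frequent letters cost 1 press per occurrence, the next 8
            # cost 2, and so on (rank // 8 + 1 is the slot position on its key)
            total_presses += (rank // 8 + 1) * count
        return total_presses
```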