
Commit: Day 19, Part 2
michaeladler committed Dec 27, 2023
1 parent 19c6021 commit db19e4e
Showing 4 changed files with 58 additions and 186 deletions.
1 change: 1 addition & 0 deletions README.md
@@ -61,6 +61,7 @@ Compiled using clang 16 and LTO.
| 16 | 34.4 ms | 33.5 ms |
| 17 | 315.3 ms | 604 ms |
| 18 | | 379 us |
| 19 | | 891 us |

## 🙏 Acknowledgments and Resources

21 changes: 11 additions & 10 deletions puzzle/day19.md
@@ -1,8 +1,7 @@

--- Day 19: Aplenty ---

The Elves of Gear Island are thankful for your help and send you on your way. They even have a hang glider that someone stole from Desert Island; since you're already going that direction, it would help them a lot if you would use it to get down there and return it to them.

As you reach the bottom of the relentless avalanche of machine parts, you discover that they're already forming a formidable heap. Don't worry, though - a group of Elves is already here organizing the parts, and they have a system.

@@ -13,8 +12,8 @@ m: Musical (it makes a noise when you hit it)
a: Aerodynamic
s: Shiny

Then, each part is sent through a series of workflows that will ultimately accept or reject the part. Each workflow has a name and contains a list of rules; each rule specifies a condition and where to send the part if the condition is true. The first rule that matches the
part being considered is applied immediately, and the part moves on to the destination described by the rule. (The last rule in each workflow has no condition and always applies if reached.)

Consider the workflow ex{x>10:one,m<20:two,a>30:R,A}. This workflow is named ex and contains four rules. If workflow ex were considering a specific part, it would perform the following steps in order:

@@ -53,15 +52,13 @@ The workflows are listed first, followed by a blank line, then the ratings of th
{x=2461,m=1339,a=466,s=291}: in -> px -> qkq -> crn -> R
{x=2127,m=1623,a=2188,s=1013}: in -> px -> rfg -> A

Ultimately, three parts are accepted. Adding up the x, m, a, and s rating for each of the accepted parts gives 7540 for the part with x=787, 4623 for the part with x=2036, and 6951 for the part with x=2127. Adding all of the ratings for all of the accepted parts gives the
sum total of 19114.

Sort through all of the parts you've been given; what do you get if you add together all of the rating numbers for all of the parts that ultimately get accepted?

Your puzzle answer was 391132.

--- Part Two ---

Even with your help, the sorting process still isn't fast enough.
@@ -74,9 +71,13 @@ In the above example, there are 167409079868000 distinct combinations of ratings

Consider only your list of workflows; the list of part ratings that the Elves wanted you to sort is no longer relevant. How many distinct combinations of ratings will be accepted by the Elves' workflows?

Your puzzle answer was 128163929109524.

Both parts of this puzzle are complete! They provide two gold stars: **

At this point, you should return to your Advent calendar and try another puzzle.

If you still want to see it, you can get your puzzle input.

You can also [Share on Twitter / Mastodon] this puzzle.

220 changes: 45 additions & 175 deletions src/day19/solve.c
@@ -15,6 +15,8 @@
#define MAX_RULES 8
#define MAX_NODES 1024
#define MAX_NEIGHBORS 8
#define LOWER 1
#define UPPER 4000

typedef enum { LT, GT, JUMP } rule_kind_e;

@@ -40,44 +42,26 @@ typedef struct {
#include <ust.h>

typedef struct {
CharSlice99 name;
int id;
} name_id_t;

#define P
#define T name_id_t
#include <ust.h>

typedef struct {
int neighbor[MAX_NEIGHBORS];
int neighbor_count;
} adj_list_t;

typedef struct {
int n;
adj_list_t adj[MAX_NODES];
} graph_t;

typedef struct {
    int lower, upper; // inclusive
} interval_t;

typedef struct {
interval_t x, m, a, s;
} constraint_t;

static size_t name_hash(name_id_t *item) { return (size_t)XXH3_64bits(item->name.ptr, item->name.len); }

static int name_equal(name_id_t *lhs, name_id_t *rhs) { return CharSlice99_primitive_eq(lhs->name, rhs->name); }
typedef struct {
constraint_t constraint;
CharSlice99 src;
int rule_idx;
} state_t;

static size_t workflow_t_hash(workflow_t *wf) { return (size_t)XXH3_64bits(wf->name.ptr, wf->name.len); }

static int workflow_t_equal(workflow_t *lhs, workflow_t *rhs) { return CharSlice99_primitive_eq(lhs->name, rhs->name); }

static inline bool rule_matches(rule_t *self, data_t data) {
if (self->kind == JUMP) return true;
    int data_value = 0;
switch (self->variable) {
case 'x': data_value = data.x; break;
case 'm': data_value = data.m; break;
@@ -95,36 +79,25 @@ static inline CharSlice99 *workflow_next(workflow_t *wf, data_t data) {
return NULL;
}

static inline void interval_apply_rule(interval_t *interval, rule_t *rule) {
    if (rule->kind == LT) {
        interval->upper = rule->value - 1; // interval.upper < rule->value
    } else if (rule->kind == GT) {
        interval->lower = rule->value + 1; // interval.lower > rule->value
    }
}

static inline void interval_negate_rule(interval_t *interval, rule_t *rule) {
    if (rule->kind == LT) {
        interval->lower = rule->value; // interval.lower >= rule->value
    } else if (rule->kind == GT) {
        interval->upper = rule->value; // interval.upper <= rule->value
    }
}

static inline i64 interval_cardinality(interval_t interval) {
    if (interval.upper < interval.lower) return 0;
    return interval.upper - interval.lower + 1;
}

static inline i64 constraint_cardinality(constraint_t c) {
@@ -133,145 +106,41 @@ static inline i64 constraint_cardinality(constraint_t c) {
}

static inline constraint_t constraint_apply_rule(constraint_t constraint, rule_t *rule) {
    constraint_t result = constraint;
    switch (rule->variable) {
    case 'x': interval_apply_rule(&result.x, rule); break;
    case 'm': interval_apply_rule(&result.m, rule); break;
    case 'a': interval_apply_rule(&result.a, rule); break;
    case 's': interval_apply_rule(&result.s, rule); break;
    }
    return result;
}

static inline constraint_t constraint_negate_rule(constraint_t constraint, rule_t *rule) {
    constraint_t result = constraint;
    switch (rule->variable) {
    case 'x': interval_negate_rule(&result.x, rule); break;
    case 'm': interval_negate_rule(&result.m, rule); break;
    case 'a': interval_negate_rule(&result.a, rule); break;
    case 's': interval_negate_rule(&result.s, rule); break;
    }
    return result;
}

static inline bool constraint_is_valid(constraint_t c) {
return c.x.a <= c.x.b && c.m.a <= c.m.b && c.a.a <= c.a.b && c.s.a <= c.s.b;
}

static inline i64 process_path(int path[], int path_len, ust_workflow_t *workflows, CharSlice99 *id_to_name,
constraint_t initial) {
log_debug(">> found new path to destination:");
CharSlice99 from = id_to_name[path[0]];
constraint_t all_constraints[2][128];
int all_constraints_count[2] = {1, 0};
all_constraints[0][0] = initial;
int idx_active = 0;
for (int i = 1; i < path_len; i++) {

int count_other = 0;
int idx_other = 1 - idx_active;

workflow_t *wf = &ust_workflow_t_find(workflows, (workflow_t){.name = from})->key;
CharSlice99 to = id_to_name[path[i]];
log_debug("looking for rules %.*s -> %.*s", from.len, from.ptr, to.len, to.ptr);
for (int j = 0; j < all_constraints_count[idx_active]; j++) {
constraint_t c = all_constraints[idx_active][j];
for (int k = 0; k < wf->rule_count; k++) {
rule_t *rule = &wf->rule[k];
if (CharSlice99_primitive_eq(rule->destination, to)) {
constraint_t new_c = constraint_apply_rule(c, rule);
if (constraint_is_valid(new_c)) { all_constraints[idx_other][count_other++] = new_c; }
} else {
constraint_t new_c = constraint_negate_rule(c, rule);
if (!constraint_is_valid(new_c)) { break; }
c = new_c;
}
}
}
all_constraints_count[idx_other] = count_other;
idx_active = idx_other;
from = to;
}

i64 total = 0;
log_debug(">> final result:");
for (int i = 0; i < all_constraints_count[idx_active]; i++) {
constraint_t c = all_constraints[idx_active][i];
total += constraint_cardinality(c);
log_debug("accepted: x: [%d, %d], m: [%d, %d], a: [%d, %d], s: [%d, %d]", c.x.a, c.x.b, c.m.a, c.m.b, c.a.a,
c.a.b, c.s.a, c.s.b);
}
log_debug(">> total: %ld", total);
return total;
}

static i64 find_all_paths(graph_t *graph, int current, int destination, bool visited[], int path[], int *path_idx,
ust_workflow_t *workflows, CharSlice99 *id_to_name) {

i64 total = 0;

// mark the current node and store it in path[]
visited[current] = true;
path[*path_idx] = current;
*path_idx = *path_idx + 1;

if (current == destination) {
interval_t initial = {.a = LOWER, .b = UPPER};
constraint_t constraint = {.x = initial, .m = initial, .a = initial, .s = initial};
total += process_path(path, *path_idx, workflows, id_to_name, constraint);
} else {
adj_list_t *lst = &graph->adj[current];
for (int i = 0; i < lst->neighbor_count; i++) {
int neighbor_id = lst->neighbor[i];
if (!visited[neighbor_id]) {
total +=
find_all_paths(graph, neighbor_id, destination, visited, path, path_idx, workflows, id_to_name);
}
}
}

// Remove current vertex from path[] and mark it as
// unvisited
*path_idx = *path_idx - 1;
visited[current] = false;

return total;
}

void solve(char *buf, size_t buf_size, Solution *result) {
int part1 = 0;
size_t pos = 0;

_cleanup_(ust_workflow_t_free) ust_workflow_t workflows = ust_workflow_t_init(workflow_t_hash, workflow_t_equal);
ust_workflow_t_reserve(&workflows, 256);

int id_count = 0;
_cleanup_(ust_name_id_t_free) ust_name_id_t name_to_id = ust_name_id_t_init(name_hash, name_equal);
ust_name_id_t_reserve(&name_to_id, 256);
CharSlice99 id_to_name[MAX_NODES];

while (1) {
workflow_t wf = {.rule_count = 0};

size_t start = pos;
while (buf[pos] != '{') pos++;
wf.name = CharSlice99_new(&buf[start], pos - start);

ust_name_id_t_insert(&name_to_id, (name_id_t){.id = id_count, .name = wf.name});
id_to_name[id_count++] = wf.name;

pos++;

// next are the rules
@@ -311,11 +180,6 @@ void solve(char *buf, size_t buf_size, Solution *result) {
ust_workflow_t_node *start_node = ust_workflow_t_find(&workflows, (workflow_t){.name = start});
CharSlice99 accepted = CharSlice99_from_str("A"), rejected = CharSlice99_from_str("R");

ust_name_id_t_insert(&name_to_id, (name_id_t){.id = id_count, .name = accepted});
id_to_name[id_count++] = accepted;
ust_name_id_t_insert(&name_to_id, (name_id_t){.id = id_count, .name = rejected});
id_to_name[id_count++] = rejected;

// part 1
pos++;
while (pos < buf_size) {
@@ -336,41 +200,47 @@
}
workflow_t *current = &start_node->key;
while (1) {
    CharSlice99 *next = workflow_next(current, data);
    assert(next != NULL);
    if (CharSlice99_primitive_eq(*next, accepted)) {
        part1 += data.x + data.a + data.m + data.s;
        break;
    } else if (CharSlice99_primitive_eq(*next, rejected)) {
        break;
    }
    current = &ust_workflow_t_find(&workflows, (workflow_t){.name = *next})->key;
}
}
}

// part 2
i64 part2 = 0;

state_t stack[4096];
interval_t initial = {.lower = LOWER, .upper = UPPER};
stack[0] = (state_t){.src = start,
                     .constraint = (constraint_t){.x = initial, .m = initial, .a = initial, .s = initial},
                     .rule_idx = 0};
int stack_size = 1;
while (stack_size != 0) {
    state_t state = stack[--stack_size]; // pop
    if (CharSlice99_primitive_eq(state.src, accepted)) {
        part2 += constraint_cardinality(state.constraint);
    } else if (CharSlice99_primitive_eq(state.src, rejected)) {
        continue;
    } else {
        ust_workflow_t_node *node = ust_workflow_t_find(&workflows, (workflow_t){.name = state.src});
        assert(node != NULL);
        workflow_t *wf = &node->key;
        rule_t *rule = &wf->rule[state.rule_idx];
        constraint_t yes = constraint_apply_rule(state.constraint, rule);
        constraint_t no = constraint_negate_rule(state.constraint, rule);
        stack[stack_size++] = (state_t){.src = rule->destination, .rule_idx = 0, .constraint = yes};
        if (rule->kind != JUMP) {
            stack[stack_size++] = (state_t){.src = state.src, .rule_idx = state.rule_idx + 1, .constraint = no};
        }
    }
}

snprintf(result->part1, sizeof(result->part1), "%d", part1);
snprintf(result->part2, sizeof(result->part2), "%ld", part2);
