fix colab links
amakelov committed Jul 5, 2024
1 parent 2526b06 commit f59aa99
Showing 32 changed files with 3,079 additions and 3,073 deletions.
2 changes: 1 addition & 1 deletion docs/docs/topics/01_storage_and_ops.md
@@ -1,5 +1,5 @@
# `Storage` & the `@op` Decorator
<a href="https://colab.research.google.com/github/amakelov/mandala/blob/master/docs_notebooks/topics/01_storage_and_ops.ipynb">
<a href="https://colab.research.google.com/github/amakelov/mandala/blob/master/docs_source/topics/01_storage_and_ops.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a>

A `Storage` object holds all data (saved calls, code and dependencies) for a
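
The rest of this file is collapsed in the diff view. As a rough sketch of the basic pattern it documents — the import path and the toy function below are assumptions, while `Storage`, `@op`, `with storage:` and `storage.unwrap` all appear elsewhere in this diff:

```python
from mandala.imports import Storage, op  # import path assumed

storage = Storage()  # holds saved calls, code and dependencies

@op  # calls to this function are saved in / retrieved from `storage`
def increment(x: int) -> int:
    return x + 1

with storage:                 # calls made here are recorded in `storage`
    y = increment(41)         # returns a Ref (e.g. AtomRef) wrapping the value
    print(storage.unwrap(y))  # -> 42
```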
21 changes: 11 additions & 10 deletions docs/docs/topics/02_retracing.md
@@ -1,5 +1,5 @@
# Patterns for Incremental Computation & Development
<a href="https://colab.research.google.com/github/amakelov/mandala/blob/master/docs_notebooks/topics/02_retracing.ipynb">
<a href="https://colab.research.google.com/github/amakelov/mandala/blob/master/docs_source/topics/02_retracing.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a>

**`@op`-decorated functions are designed to be composed** with one another. This
@@ -82,7 +82,7 @@ with storage:
Loading data
Training model
Getting accuracy
- AtomRef(0.99, hid='d16...', cid='12a...')
+ AtomRef(1.0, hid='d16...', cid='b67...')
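
The `Loading data` / `Training model` / `Getting accuracy` messages above come from a small pipeline of composed `@op`s whose definition is collapsed here. A hypothetical reconstruction of its shape, reusing `storage` and `op` from the sketch in the previous file (names, signatures and bodies are inferred from the printed messages and from later hunks, so treat them as illustrative only):

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier

@op
def load_data(n_class=2):
    print("Loading data")
    return load_digits(n_class=n_class, return_X_y=True)  # returns (X, y)

@op
def train_model(X, y, n_estimators=5):
    print("Training model")
    return RandomForestClassifier(max_depth=2, n_estimators=n_estimators).fit(X, y)

@op
def get_acc(model, X, y):
    print("Getting accuracy")
    return round(model.score(X, y), 2)

with storage:
    X, y = load_data()  # tuple outputs unpack into separate refs
    model = train_model(X, y)
    acc = get_acc(model, X, y)
    print(acc)          # e.g. AtomRef(1.0, hid='d16...', cid='b67...')
```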


## Retracing your steps with memoization
@@ -102,8 +102,8 @@ with storage:
```

AtomRef(hid='d0f...', cid='908...', in_memory=False) AtomRef(hid='f1a...', cid='69f...', in_memory=False)
- AtomRef(hid='caf...', cid='9e4...', in_memory=False)
- AtomRef(hid='d16...', cid='12a...', in_memory=False)
+ AtomRef(hid='caf...', cid='5b8...', in_memory=False)
+ AtomRef(hid='d16...', cid='b67...', in_memory=False)
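
These `in_memory=False` refs are what the same block returns when it is run a second time against the same `storage`: every call matches a saved one, nothing is executed (no messages are printed), and only pointers to the stored results come back. A minimal sketch of the pattern, reusing the hypothetical pipeline above:

```python
with storage:                 # second pass: pure retracing, no recomputation
    X, y = load_data()        # refs come back with in_memory=False
    model = train_model(X, y)
    acc = get_acc(model, X, y)
    print(X, y)
    print(model)
    print(acc)

print(storage.unwrap(acc))    # loads the actual value, e.g. 1.0
```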


This puts all the `Ref`s along the way in your local variables (as if you've
@@ -118,7 +118,7 @@ storage.unwrap(acc)



- 0.99
+ 1.0



@@ -140,17 +140,17 @@ with storage:
print(acc)
```

- AtomRef(hid='d16...', cid='12a...', in_memory=False)
+ AtomRef(hid='d16...', cid='b67...', in_memory=False)
Training model
Getting accuracy
AtomRef(1.0, hid='6fd...', cid='b67...')
Loading data
Training model
Getting accuracy
- AtomRef(0.8, hid='158...', cid='f0a...')
+ AtomRef(0.84, hid='158...', cid='6c4...')
Training model
Getting accuracy
- AtomRef(0.9, hid='214...', cid='24c...')
+ AtomRef(0.91, hid='214...', cid='97b...')
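
The outputs above come from a nested sweep over dataset and model parameters (the loop itself is collapsed here). Memoization makes the sweep incremental: combinations that were already computed are retraced silently, and only genuinely new work prints `Loading data` / `Training model` / `Getting accuracy`. A hypothetical version of such a sweep, with made-up parameter grids, reusing the sketched pipeline:

```python
with storage:
    for n_class in (2, 5):              # grids are illustrative assumptions
        for n_estimators in (5, 10):
            X, y = load_data(n_class)
            model = train_model(X, y, n_estimators=n_estimators)
            acc = get_acc(model, X, y)
            print(acc)                  # a bare ref if this combination was seen before
```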


Note that the first value of `acc` from the nested loop is with
Expand Down Expand Up @@ -178,8 +178,9 @@ with storage:
print(n_class, n_estimators, storage.unwrap(acc))
```

- 2 5 0.99
+ 2 5 1.0
2 10 1.0
+ 5 10 0.91


## Memoized code as storage interface
@@ -198,5 +198,5 @@ with storage:
print(storage.unwrap(acc), storage.unwrap(model))
```

- 0.8 RandomForestClassifier(max_depth=2, n_estimators=5)
+ 0.84 RandomForestClassifier(max_depth=2, n_estimators=5)
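
This is the sense in which memoized code doubles as an interface to the storage: re-running a fully retraced block is effectively free, it puts refs to every intermediate result back in scope, and `storage.unwrap` turns any of them into a value. A minimal sketch with the same hypothetical pipeline:

```python
with storage:                            # fully retraced, so effectively free
    X, y = load_data(n_class=5)          # the arguments select which saved run to fetch
    model = train_model(X, y, n_estimators=5)
    acc = get_acc(model, X, y)
    print(storage.unwrap(acc), storage.unwrap(model))
    # e.g. 0.84 RandomForestClassifier(max_depth=2, n_estimators=5)
```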

30 changes: 15 additions & 15 deletions docs/docs/topics/03_cf.md
@@ -1,5 +1,5 @@
# Query the Storage with `ComputationFrame`s
<a href="https://colab.research.google.com/github/amakelov/mandala/blob/master/docs_notebooks/topics/03_cf.ipynb">
<a href="https://colab.research.google.com/github/amakelov/mandala/blob/master/docs_source/topics/03_cf.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a>

## Why `ComputationFrame`s?
@@ -260,13 +260,13 @@ print(cf.df(values='refs').to_markdown())

Extracting tuples from the computation graph:
var_0@output_0, var_1@output_1 = train_model(y_train=y_train, n_estimators=n_estimators, X_train=X_train)
- Joining on columns: {'X_train', 'train_model', 'y_train', 'n_estimators'}
- | | n_estimators | y_train | X_train | train_model | var_1 | var_0 |
+ Joining on columns: {'y_train', 'X_train', 'n_estimators', 'train_model'}
+ | | X_train | n_estimators | y_train | train_model | var_1 | var_0 |
|---:|:-----------------------------------------------------|:-----------------------------------------------------|:-----------------------------------------------------|:----------------------------------------------|:-----------------------------------------------------|:-----------------------------------------------------|
- | 0 | AtomRef(hid='9fd...', cid='4ac...', in_memory=False) | AtomRef(hid='faf...', cid='83f...', in_memory=False) | AtomRef(hid='efa...', cid='a6d...', in_memory=False) | Call(train_model, cid='5af...', hid='514...') | AtomRef(hid='784...', cid='238...', in_memory=False) | AtomRef(hid='331...', cid='e64...', in_memory=False) |
- | 1 | AtomRef(hid='235...', cid='c04...', in_memory=False) | AtomRef(hid='faf...', cid='83f...', in_memory=False) | AtomRef(hid='efa...', cid='a6d...', in_memory=False) | Call(train_model, cid='204...', hid='c55...') | AtomRef(hid='5b7...', cid='f0a...', in_memory=False) | AtomRef(hid='208...', cid='c75...', in_memory=False) |
- | 2 | AtomRef(hid='120...', cid='9bc...', in_memory=False) | AtomRef(hid='faf...', cid='83f...', in_memory=False) | AtomRef(hid='efa...', cid='a6d...', in_memory=False) | Call(train_model, cid='3be...', hid='e60...') | AtomRef(hid='646...', cid='acb...', in_memory=False) | AtomRef(hid='522...', cid='d5a...', in_memory=False) |
- | 3 | AtomRef(hid='98c...', cid='29d...', in_memory=False) | AtomRef(hid='faf...', cid='83f...', in_memory=False) | AtomRef(hid='efa...', cid='a6d...', in_memory=False) | Call(train_model, cid='c4f...', hid='5f7...') | AtomRef(hid='760...', cid='46b...', in_memory=False) | AtomRef(hid='b25...', cid='462...', in_memory=False) |
+ | 0 | AtomRef(hid='efa...', cid='a6d...', in_memory=False) | AtomRef(hid='98c...', cid='29d...', in_memory=False) | AtomRef(hid='faf...', cid='83f...', in_memory=False) | Call(train_model, cid='c4f...', hid='5f7...') | AtomRef(hid='760...', cid='46b...', in_memory=False) | AtomRef(hid='b25...', cid='462...', in_memory=False) |
+ | 1 | AtomRef(hid='efa...', cid='a6d...', in_memory=False) | AtomRef(hid='9fd...', cid='4ac...', in_memory=False) | AtomRef(hid='faf...', cid='83f...', in_memory=False) | Call(train_model, cid='5af...', hid='514...') | AtomRef(hid='784...', cid='238...', in_memory=False) | AtomRef(hid='331...', cid='e64...', in_memory=False) |
+ | 2 | AtomRef(hid='efa...', cid='a6d...', in_memory=False) | AtomRef(hid='235...', cid='c04...', in_memory=False) | AtomRef(hid='faf...', cid='83f...', in_memory=False) | Call(train_model, cid='204...', hid='c55...') | AtomRef(hid='5b7...', cid='f0a...', in_memory=False) | AtomRef(hid='208...', cid='c75...', in_memory=False) |
+ | 3 | AtomRef(hid='efa...', cid='a6d...', in_memory=False) | AtomRef(hid='120...', cid='9bc...', in_memory=False) | AtomRef(hid='faf...', cid='83f...', in_memory=False) | Call(train_model, cid='3be...', hid='e60...') | AtomRef(hid='646...', cid='acb...', in_memory=False) | AtomRef(hid='522...', cid='d5a...', in_memory=False) |
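
The rows above are a `ComputationFrame` rendered as a dataframe, one row per saved `train_model` call; with `values='refs'` the cells hold `Ref` and `Call` objects rather than unwrapped values. How `cf` is constructed is collapsed in this diff, so the sketch below assumes a `storage.cf(...)` constructor; only the `cf.df(values='refs')` call is actually visible here:

```python
# Build a ComputationFrame around the saved train_model calls
# (constructor assumed; not shown in this hunk).
cf = storage.cf(train_model)

# One row per call; cells are Refs / Calls rather than plain values.
print(cf.df(values='refs').to_markdown())

# The same table with unwrapped values:
print(cf.df().to_markdown())
```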


##
@@ -512,14 +512,14 @@ print(cf.df().drop(columns=['X_train', 'y_train']).to_markdown())
X_train@output_0, y_train@output_2 = generate_dataset(random_seed=random_seed)
var_0@output_0, var_1@output_1 = train_model(y_train=y_train, n_estimators=n_estimators, X_train=X_train)
var_2@output_0 = eval_model(model=var_0)
- Joining on columns: {'generate_dataset', 'train_model', 'random_seed', 'n_estimators', 'X_train', 'y_train'}
- Joining on columns: {'generate_dataset', 'train_model', 'var_0', 'random_seed', 'n_estimators', 'X_train', 'y_train'}
- | | random_seed | generate_dataset | n_estimators | train_model | var_1 | var_0 | eval_model | var_2 |
|---:|--------------:|:---------------------------------------------------|---------------:|:----------------------------------------------|--------:|:-----------------------------------------------------|:---------------------------------------------|--------:|
- | 0 | 42 | Call(generate_dataset, cid='19a...', hid='c3f...') | 40 | Call(train_model, cid='5af...', hid='514...') | 0.82 | RandomForestClassifier(max_depth=2, n_estimators=40) | Call(eval_model, cid='38f...', hid='5d3...') | 0.81 |
- | 1 | 42 | Call(generate_dataset, cid='19a...', hid='c3f...') | 20 | Call(train_model, cid='204...', hid='c55...') | 0.8 | RandomForestClassifier(max_depth=2, n_estimators=20) | | nan |
- | 2 | 42 | Call(generate_dataset, cid='19a...', hid='c3f...') | 80 | Call(train_model, cid='3be...', hid='e60...') | 0.83 | RandomForestClassifier(max_depth=2, n_estimators=80) | Call(eval_model, cid='137...', hid='d32...') | 0.82 |
- | 3 | 42 | Call(generate_dataset, cid='19a...', hid='c3f...') | 10 | Call(train_model, cid='c4f...', hid='5f7...') | 0.74 | RandomForestClassifier(max_depth=2, n_estimators=10) | | nan |
+ Joining on columns: {'random_seed', 'y_train', 'X_train', 'generate_dataset', 'n_estimators', 'train_model'}
+ Joining on columns: {'random_seed', 'y_train', 'X_train', 'generate_dataset', 'var_0', 'n_estimators', 'train_model'}
+ | | n_estimators | random_seed | generate_dataset | train_model | var_1 | var_0 | eval_model | var_2 |
|---:|---------------:|--------------:|:---------------------------------------------------|:----------------------------------------------|--------:|:-----------------------------------------------------|:---------------------------------------------|--------:|
+ | 0 | 80 | 42 | Call(generate_dataset, cid='19a...', hid='c3f...') | Call(train_model, cid='3be...', hid='e60...') | 0.83 | RandomForestClassifier(max_depth=2, n_estimators=80) | Call(eval_model, cid='137...', hid='d32...') | 0.82 |
+ | 1 | 40 | 42 | Call(generate_dataset, cid='19a...', hid='c3f...') | Call(train_model, cid='5af...', hid='514...') | 0.82 | RandomForestClassifier(max_depth=2, n_estimators=40) | Call(eval_model, cid='38f...', hid='5d3...') | 0.81 |
+ | 2 | 20 | 42 | Call(generate_dataset, cid='19a...', hid='c3f...') | Call(train_model, cid='204...', hid='c55...') | 0.8 | RandomForestClassifier(max_depth=2, n_estimators=20) | | nan |
+ | 3 | 10 | 42 | Call(generate_dataset, cid='19a...', hid='c3f...') | Call(train_model, cid='c4f...', hid='5f7...') | 0.74 | RandomForestClassifier(max_depth=2, n_estimators=10) | | nan |
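
Here the frame spans the whole `generate_dataset` → `train_model` → `eval_model` pipeline, so each row is one end-to-end computation, and rows whose model was never evaluated simply get empty `eval_model` / `var_2` cells (`nan`). As a usage note on the call from the hunk header, assuming `cf` has already been expanded to cover all three ops in the collapsed part of the file:

```python
# `cf` is assumed to already span generate_dataset -> train_model -> eval_model.
df = cf.df().drop(columns=['X_train', 'y_train'])  # drop the bulky array-valued columns
print(df.to_markdown())
# Rows with nan in `var_2` are models that were trained but never passed to eval_model.
```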


Importantly, we see that some computations only partially follow the full