diff --git a/docs/_freeze/posts/ci-analysis/index/execute-results/html.json b/docs/_freeze/posts/ci-analysis/index/execute-results/html.json
index 2c529381d079..fa4a2204cfa6 100644
--- a/docs/_freeze/posts/ci-analysis/index/execute-results/html.json
+++ b/docs/_freeze/posts/ci-analysis/index/execute-results/html.json
@@ -1,7 +1,7 @@
{
- "hash": "68f0afd28a0c7a6e975bb04cf7f07235",
+ "hash": "47294033e490cc53cd08275f84de9edd",
"result": {
- "markdown": "---\ntitle: \"Analysis of Ibis's CI performance\"\nauthor: \"Phillip Cloud\"\ndate: \"2023-01-09\"\ncategories:\n - blog\n - bigquery\n - continuous integration\n - data engineering\n - dogfood\n---\n\n## Summary\n\nThis notebook takes you through an analysis of Ibis's CI data using ibis on top of [Google BigQuery](https://cloud.google.com/bigquery).\n\n- First, we load some data and poke around at it to see what's what.\n- Second, we figure out some useful things to calculate based on our poking.\n- Third, we'll visualize the results of calculations to showcase what changed and how.\n\n## Imports\n\nLet's start out by importing ibis and turning on interactive mode.\n\n::: {#55fba71a .cell execution_count=1}\n``` {.python .cell-code}\nimport ibis\nfrom ibis import _\n\nibis.options.interactive = True\n```\n:::\n\n\n## Connect to BigQuery\n\nWe connect to BigQuery using the `ibis.connect` API, which accepts a URL string indicating the backend and various bit of information needed to connect to the backend. Here we're using BigQuery, so we need the project id (`ibis-gbq`) and the dataset id (`workflows`).\n\nDatasets are analogous to schemas in other systems.\n\n::: {#a414988f .cell execution_count=2}\n``` {.python .cell-code}\nurl = \"bigquery://ibis-gbq/workflows\"\ncon = ibis.connect(url)\n```\n:::\n\n\nLet's see what tables are available.\n\n::: {#24c17c0e .cell execution_count=3}\n``` {.python .cell-code}\ncon.list_tables()\n```\n\n::: {.cell-output .cell-output-display execution_count=3}\n```\n['analysis', 'jobs', 'workflows']\n```\n:::\n:::\n\n\n## Analysis\n\nHere we've got our first bit of interesting information: the `jobs` and `workflows` tables.\n\n### Terminology\n\nBefore we jump in, it helps to lay down some terminology.\n\n- A **workflow** corresponds to an individual GitHub Actions YAML file in a GitHub repository under the `.github/workflows` directory.\n- A **job** is a named set of steps to run inside a **workflow** file.\n\n### What's in the `workflows` table?\n\nEach row in the `workflows` table corresponds to a **workflow run**.\n\n- A **workflow run** is an instance of a workflow that was triggered by some entity: a GitHub user, bot, or other entity. Each row of the `workflows` table is a **workflow run**.\n\n### What's in the `jobs` table?\n\nSimilarly, each row in the `jobs` table is a **job run**. That is, for a given **workflow run** there are a set of jobs run with it.\n\n- A **job run** is an instance of a job *in a workflow*. It is associated with a single **workflow run**.\n\n## Rationale\n\nThe goal of this analysis is to try to understand ibis's CI performance, and whether the amount of time we spent waiting on CI has decreased, stayed the same or increased. Ideally, we can understand the pieces that contribute to the change or lack thereof.\n\n### Metrics\n\nTo that end there are a few interesting metrics to look at:\n\n- **job run** *duration*: this is the amount of time it takes for a given job to complete\n- **workflow run** *duration*: the amount of time it takes for *all* job runs in a workflow run to complete.\n- **queueing** *duration*: the amount time time spent waiting for the *first* job run to commence.\n\n### Mitigating Factors\n\n- Around October 2021, we changed our CI infrastructure to use [Poetry](https://python-poetry.org/) instead of [Conda](https://docs.conda.io/en/latest/). The goal there was to see if we could cache dependencies using the lock file generated by poetry. 
We should see whether that had any effect.\n- At the end of November 2022, we switched to the Team Plan (a paid GitHub plan) for the Ibis organization. This tripled the number of **job runs** that could execute in parallel. We should see if that helped anything.\n\nAlright, let's jump into some data!\n\n::: {#ac165685 .cell execution_count=4}\n``` {.python .cell-code}\njobs = con.tables.jobs[_.started_at < \"2023-01-09\"]\njobs\n```\n\n::: {.cell-output .cell-output-display execution_count=4}\n```{=html}\n
\n```\n:::\n:::\n\n\nThese first few columns in the `jobs` table aren't that interesting so we should look at what else is there\n\n::: {#67783451 .cell execution_count=5}\n``` {.python .cell-code}\njobs.columns\n```\n\n::: {.cell-output .cell-output-display execution_count=5}\n```\n['url',\n 'steps',\n 'status',\n 'started_at',\n 'runner_group_name',\n 'run_attempt',\n 'name',\n 'labels',\n 'node_id',\n 'id',\n 'runner_id',\n 'run_url',\n 'run_id',\n 'check_run_url',\n 'html_url',\n 'runner_name',\n 'runner_group_id',\n 'head_sha',\n 'conclusion',\n 'completed_at']\n```\n:::\n:::\n\n\nA bunch of these aren't that useful for our purposes. However, `run_id`, `started_at`, `completed_at` are useful for us. The [GitHub documentation for job information](https://docs.github.com/en/rest/actions/workflow-jobs?apiVersion=2022-11-28#get-a-job-for-a-workflow-run) provides useful detail about the meaning of these fields.\n\n- `run_id`: the workflow run associated with this job run\n- `started_at`: when the job started\n- `completed_at`: when the job completed\n\nWhat we're interested in to a first degree is the job duration, so let's compute that.\n\nWe also need to compute when the last job for a given `run_id` started and when it completed. We'll use the former to compute the queueing duration, and the latter to compute the total time it took for a given workflow run to complete.\n\n::: {#c89b9246 .cell execution_count=6}\n``` {.python .cell-code}\nrun_id_win = ibis.window(group_by=_.run_id)\njobs = jobs.select(\n _.run_id,\n job_duration=_.completed_at.cast(\"int\") - _.started_at.cast(\"int\"),\n last_job_started_at=_.started_at.max().over(run_id_win),\n last_job_completed_at=_.completed_at.max().over(run_id_win),\n)\njobs\n```\n\n::: {.cell-output .cell-output-display execution_count=6}\n```{=html}\n
\n```\n:::\n:::\n\n\nAgain we have a bunch of columns that aren't so useful to us, so let's see what else is there.\n\n::: {#bf73436e .cell execution_count=8}\n``` {.python .cell-code}\nworkflows.columns\n```\n\n::: {.cell-output .cell-output-display execution_count=8}\n```\n['workflow_url',\n 'workflow_id',\n 'triggering_actor',\n 'run_number',\n 'run_attempt',\n 'updated_at',\n 'cancel_url',\n 'rerun_url',\n 'check_suite_node_id',\n 'pull_requests',\n 'id',\n 'node_id',\n 'status',\n 'repository',\n 'jobs_url',\n 'previous_attempt_url',\n 'artifacts_url',\n 'html_url',\n 'head_sha',\n 'head_repository',\n 'run_started_at',\n 'head_branch',\n 'url',\n 'event',\n 'name',\n 'actor',\n 'created_at',\n 'check_suite_url',\n 'check_suite_id',\n 'conclusion',\n 'head_commit',\n 'logs_url']\n```\n:::\n:::\n\n\nWe don't care about many of these for the purposes of this analysis, however we need the `id` and a few values derived from the `run_started_at` column.\n\n- `id`: the unique identifier of the **workflow run**\n- `run_started_at`: the time the workflow run started\n\nWe compute the date the run started at so we can later compare it to the dates where we added poetry and switched to the team plan.\n\n::: {#69dfa49b .cell execution_count=9}\n``` {.python .cell-code}\nworkflows = workflows.select(\n _.id, _.run_started_at, started_date=_.run_started_at.date()\n)\nworkflows\n```\n\n::: {.cell-output .cell-output-display execution_count=9}\n```{=html}\n
\n```\n:::\n:::\n\n\nWe need to associate jobs and workflows somehow, so let's join them on the relevant key fields.\n\n::: {#8d4786e3 .cell execution_count=10}\n``` {.python .cell-code}\njoined = jobs.join(workflows, jobs.run_id == workflows.id)\njoined\n```\n\n::: {.cell-output .cell-output-display execution_count=10}\n```{=html}\n
\n```\n:::\n:::\n\n\nSweet! Now we have workflow runs and job runs together in the same table, let's start exploring summarization.\n\nLet's encode our knowledge about when the poetry move happened and also when we moved to the team plan.\n\n::: {#1bc567e0 .cell execution_count=11}\n``` {.python .cell-code}\nfrom datetime import date\n\nPOETRY_MERGED_DATE = date(2021, 10, 15)\nTEAMIZATION_DATE = date(2022, 11, 28)\n```\n:::\n\n\nLet's compute some indicator variables indicating whether a given row contains data after poetry changes occurred, and do the same for the team plan.\n\nLet's also compute queueing time and workflow duration.\n\n::: {#c12f7377 .cell execution_count=12}\n``` {.python .cell-code}\nstats = joined.select(\n _.started_date,\n _.job_duration,\n has_poetry=_.started_date > POETRY_MERGED_DATE,\n has_team=_.started_date > TEAMIZATION_DATE,\n queueing_time=_.last_job_started_at.cast(\"int\")\n - _.run_started_at.cast(\"int\"),\n workflow_duration=_.last_job_completed_at.cast(\"int\")\n - _.run_started_at.cast(\"int\"),\n)\nstats\n```\n\n::: {.cell-output .cell-output-display execution_count=12}\n```{=html}\n
\n```\n:::\n:::\n\n\nLet's create a column ranging from 0 to 2 inclusive where:\n\n- 0: no improvements\n- 1: just poetry\n- 2: poetry and the team plan\n\nLet's also give them some names that'll look nice on our plots.\n\n::: {#d40f4002 .cell execution_count=13}\n``` {.python .cell-code}\nstats = stats.mutate(\n raw_improvements=_.has_poetry.cast(\"int\") + _.has_team.cast(\"int\")\n).mutate(\n improvements=(\n _.raw_improvements.case()\n .when(0, \"None\")\n .when(1, \"Poetry\")\n .when(2, \"Poetry + Team Plan\")\n .else_(\"NA\")\n .end()\n ),\n team_plan=ibis.where(_.raw_improvements > 1, \"Poetry + Team Plan\", \"None\"),\n)\nstats\n```\n\n::: {.cell-output .cell-output-display execution_count=13}\n```{=html}\n
\n```\n:::\n:::\n\n\nFinally, we can summarize by averaging the different durations, grouping on the variables of interest.\n\n::: {#dc30c9b1 .cell execution_count=14}\n``` {.python .cell-code}\nUSECS_PER_MIN = 60_000_000\n\nagged = stats.group_by([_.started_date, _.improvements, _.team_plan]).agg(\n job=_.job_duration.div(USECS_PER_MIN).mean(),\n workflow=_.workflow_duration.div(USECS_PER_MIN).mean(),\n queueing_time=_.queueing_time.div(USECS_PER_MIN).mean(),\n)\nagged\n```\n\n::: {.cell-output .cell-output-display execution_count=14}\n```{=html}\n
\n```\n:::\n:::\n\n\nIf at any point you want to inspect the SQL you'll be running, ibis has you covered with `ibis.to_sql`.\n\n::: {#f3e910a7 .cell execution_count=15}\n``` {.python .cell-code}\nibis.to_sql(agged)\n```\n\n::: {.cell-output .cell-output-display execution_count=15}\n```sql\nWITH t0 AS (\n SELECT\n t6.*\n FROM `ibis-gbq`.workflows.jobs AS t6\n WHERE\n t6.`started_at` < '2023-01-09'\n), t1 AS (\n SELECT\n t6.`id`,\n t6.`run_started_at`,\n DATE(t6.`run_started_at`) AS `started_date`\n FROM `ibis-gbq`.workflows.workflows AS t6\n), t2 AS (\n SELECT\n t0.`run_id`,\n UNIX_MICROS(t0.`completed_at`) - UNIX_MICROS(t0.`started_at`) AS `job_duration`,\n MAX(t0.`started_at`) OVER (PARTITION BY t0.`run_id`) AS `last_job_started_at`,\n MAX(t0.`completed_at`) OVER (PARTITION BY t0.`run_id`) AS `last_job_completed_at`\n FROM t0\n), t3 AS (\n SELECT\n `started_date`,\n `job_duration`,\n `started_date` > CAST('2021-10-15' AS DATE) AS `has_poetry`,\n `started_date` > CAST('2022-11-28' AS DATE) AS `has_team`,\n UNIX_MICROS(`last_job_started_at`) - UNIX_MICROS(`run_started_at`) AS `queueing_time`,\n UNIX_MICROS(`last_job_completed_at`) - UNIX_MICROS(`run_started_at`) AS `workflow_duration`\n FROM t2\n INNER JOIN t1\n ON t2.`run_id` = t1.`id`\n), t4 AS (\n SELECT\n t3.*,\n CAST(t3.`has_poetry` AS INT64) + CAST(t3.`has_team` AS INT64) AS `raw_improvements`\n FROM t3\n)\nSELECT\n t5.`started_date`,\n t5.`improvements`,\n t5.`team_plan`,\n avg(IEEE_DIVIDE(t5.`job_duration`, 60000000)) AS `job`,\n avg(IEEE_DIVIDE(t5.`workflow_duration`, 60000000)) AS `workflow`,\n avg(IEEE_DIVIDE(t5.`queueing_time`, 60000000)) AS `queueing_time`\nFROM (\n SELECT\n t4.*,\n CASE t4.`raw_improvements`\n WHEN 0\n THEN 'None'\n WHEN 1\n THEN 'Poetry'\n WHEN 2\n THEN 'Poetry + Team Plan'\n ELSE 'NA'\n END AS `improvements`,\n CASE WHEN t4.`raw_improvements` > 1 THEN 'Poetry + Team Plan' ELSE 'None' END AS `team_plan`\n FROM t4\n) AS t5\nGROUP BY\n 1,\n 2,\n 3\n```\n:::\n:::\n\n\n# Plot the Results\n\nIbis doesn't have builtin plotting support, so we need to pull our results into pandas.\n\nHere I'm using `plotnine` (a Python port of `ggplot2`), which has great integration with pandas DataFrames.\n\n::: {#baa6d5bb .cell execution_count=16}\n``` {.python .cell-code}\nraw_df = agged.execute()\nraw_df\n```\n\n::: {.cell-output .cell-output-display execution_count=16}\n```{=html}\n
     started_date improvements team_plan       job   workflow  queueing_time
0      2022-05-21       Poetry      None  3.315453  12.056705      10.856003
1      2021-06-23         None      None  8.804329  18.838528       0.799567
2      2022-05-12       Poetry      None  4.912492  17.443804      13.617164
3      2022-09-11       Poetry      None  3.318782  12.665244      11.561670
4      2021-04-08         None      None  8.366981  13.957233       0.276730
..            ...          ...       ...       ...        ...            ...
779    2022-10-09       Poetry      None  3.472283  12.489749       9.092648
780    2021-03-24         None      None  9.499082  16.419903       1.801063
781    2022-03-06       Poetry      None  2.727943  11.757324      10.942026
782    2021-11-22       Poetry      None  2.608860  10.306637       7.481462
783    2022-06-08       Poetry      None  3.214470  12.617276      11.713546

784 rows × 6 columns
\n```\n:::\n:::\n\n\nGenerally, `plotnine` works with long, tidy data so let's use `pandas.melt` to get there.\n\n::: {#045cdbb1 .cell execution_count=17}\n``` {.python .cell-code}\nimport pandas as pd\n\ndf = pd.melt(\n raw_df,\n id_vars=[\"started_date\", \"improvements\", \"team_plan\"],\n var_name=\"entity\",\n value_name=\"duration\",\n)\ndf.head()\n```\n\n::: {.cell-output .cell-output-display execution_count=17}\n```{=html}\n
  started_date improvements team_plan entity  duration
0   2022-05-21       Poetry      None    job  3.315453
1   2021-06-23         None      None    job  8.804329
2   2022-05-12       Poetry      None    job  4.912492
3   2022-09-11       Poetry      None    job  3.318782
4   2021-04-08         None      None    job  8.366981
\n```\n:::\n:::\n\n\nLet's make our theme lighthearted by using `xkcd`-style plots.\n\n::: {#029124e7 .cell execution_count=18}\n``` {.python .cell-code}\nfrom plotnine import *\n\ntheme_set(theme_xkcd())\n```\n:::\n\n\nCreate a few labels for our plot.\n\n::: {#cb9f40c8 .cell execution_count=19}\n``` {.python .cell-code}\npoetry_label = f\"Poetry\\n{POETRY_MERGED_DATE}\"\nteam_label = f\"Team Plan\\n{TEAMIZATION_DATE}\"\n```\n:::\n\n\nWithout the following line you may see large amount of inconsequential warnings that make the notebook unusable.\n\n::: {#539939bc .cell execution_count=20}\n``` {.python .cell-code}\nimport logging\n\n# without this, findfont logging spams the notebook making it unusable\nlogging.getLogger('matplotlib.font_manager').disabled = True\n```\n:::\n\n\nHere we show job durations, coloring the points differently depending on whether they have no improvements, poetry, or poetry + team plan.\n\n::: {#6db462db .cell execution_count=21}\n``` {.python .cell-code}\n(\n ggplot(\n df.loc[df.entity == \"job\"].reset_index(drop=True),\n aes(x=\"started_date\", y=\"duration\", color=\"factor(improvements)\"),\n )\n + geom_point()\n + geom_vline(\n xintercept=[TEAMIZATION_DATE, POETRY_MERGED_DATE],\n colour=[\"blue\", \"green\"],\n linetype=\"dashed\",\n )\n + scale_color_brewer(\n palette=7,\n type='qual',\n limits=[\"None\", \"Poetry\", \"Poetry + Team Plan\"],\n )\n + geom_text(x=POETRY_MERGED_DATE, label=poetry_label, y=15, color=\"blue\")\n + geom_text(x=TEAMIZATION_DATE, label=team_label, y=10, color=\"blue\")\n + stat_smooth(method=\"lm\")\n + labs(x=\"Date\", y=\"Duration (minutes)\")\n + ggtitle(\"Job Duration\")\n + theme(\n figure_size=(22, 6),\n legend_position=(0.67, 0.65),\n legend_direction=\"vertical\",\n )\n)\n```\n\n::: {.cell-output .cell-output-display}\n![](index_files/figure-html/cell-22-output-1.png){}\n:::\n\n::: {.cell-output .cell-output-display execution_count=21}\n```\n
\n```\n:::\n:::\n\n\n## Result #1: Job Duration\n\nThis result is pretty interesting.\n\nA few things pop out to me right away:\n\n- The move to poetry decreased the average job run duration by quite a bit. No, I'm not going to do any statistical tests.\n- The variability of job run durations also decreased by quite a bit after introducing poetry.\n- Moving to the team plan had little to no effect on job run duration.\n\n::: {#c1e62289 .cell execution_count=22}\n``` {.python .cell-code}\n(\n ggplot(\n df.loc[df.entity != \"job\"].reset_index(drop=True),\n aes(x=\"started_date\", y=\"duration\", color=\"factor(improvements)\"),\n )\n + facet_wrap(\"entity\", ncol=1)\n + geom_point()\n + geom_vline(\n xintercept=[TEAMIZATION_DATE, POETRY_MERGED_DATE],\n linetype=\"dashed\",\n )\n + scale_color_brewer(\n palette=7,\n type='qual',\n limits=[\"None\", \"Poetry\", \"Poetry + Team Plan\"],\n )\n + geom_text(x=POETRY_MERGED_DATE, label=poetry_label, y=75, color=\"blue\")\n + geom_text(x=TEAMIZATION_DATE, label=team_label, y=50, color=\"blue\")\n + stat_smooth(method=\"lm\")\n + labs(x=\"Date\", y=\"Duration (minutes)\")\n + ggtitle(\"Workflow Duration\")\n + theme(\n figure_size=(22, 13),\n legend_position=(0.68, 0.75),\n legend_direction=\"vertical\",\n )\n)\n```\n\n::: {.cell-output .cell-output-display}\n![](index_files/figure-html/cell-23-output-1.png){}\n:::\n\n::: {.cell-output .cell-output-display execution_count=22}\n```\n
\n```\n:::\n:::\n\n\n## Result #2: Workflow Duration and Queueing Time\n\nAnother interesting result.\n\n### Queueing Time\n\n- It almost looks like moving to poetry made average queueing time worse. This is probably due to our perception that faster jobs mean faster CI; as we see here, that isn't the case.\n- Moving to the team plan cut down the queueing time by quite a bit.\n\n### Workflow Duration\n\n- Overall workflow duration appears to be strongly influenced by moving to the team plan, which is almost certainly due to the drop in queueing time since we are no longer limited by slow job durations.\n- Perhaps it's obvious, but queueing time and workflow duration appear to be highly correlated.\n\nIn the next plot we'll look at that correlation.\n\n::: {#878f8b27 .cell execution_count=23}\n``` {.python .cell-code}\n(\n    ggplot(raw_df, aes(x=\"workflow\", y=\"queueing_time\"))\n    + geom_point()\n    + geom_rug()\n    + facet_grid(\". ~ team_plan\")\n    + labs(x=\"Workflow Duration (minutes)\", y=\"Queueing Time (minutes)\")\n    + ggtitle(\"Workflow Duration vs. Queueing Time\")\n    + theme(figure_size=(22, 6))\n)\n```\n\n::: {.cell-output .cell-output-display}\n![](index_files/figure-html/cell-24-output-1.png){}\n:::\n\n::: {.cell-output .cell-output-display execution_count=23}\n```\n
\n```\n:::\n:::\n\n\n## Result #3: Workflow Duration and Queueing Duration are correlated\n\nIt also seems that moving to the team plan (though the move to poetry might also be related here) reduced the variability of both metrics.\n\nWe're lacking data compared to the past, so we should wait for more to come in.\n\n## Conclusions\n\nIt appears that you need both a short queue time **and** fast individual jobs to minimize time spent in CI.\n\nIf you have a short queue time but long job runs, then you'll be bottlenecked on individual jobs; if you have more jobs than queue slots, then you'll be blocked on queueing time.\n\nI think we can sum this up nicely:\n\n- slow jobs, slow queue: 🤷 blocked by jobs or queue\n- slow jobs, fast queue: ❓ blocked by jobs, if jobs are slow enough\n- fast jobs, slow queue: ❗ blocked by queue, with enough jobs\n- fast jobs, fast queue: ✅\n\n",
+ "markdown": "---\ntitle: \"Analysis of Ibis's CI performance\"\nauthor: \"Phillip Cloud\"\ndate: \"2023-01-09\"\ncategories:\n  - blog\n  - bigquery\n  - continuous integration\n  - data engineering\n  - dogfood\n---\n\n## Summary\n\nThis notebook takes you through an analysis of Ibis's CI data using ibis on top of [Google BigQuery](https://cloud.google.com/bigquery).\n\n- First, we load some data and poke around at it to see what's what.\n- Second, we figure out some useful things to calculate based on our poking.\n- Third, we'll visualize the results of calculations to showcase what changed and how.\n\n## Imports\n\nLet's start out by importing ibis and turning on interactive mode.\n\n::: {#02f86f58 .cell execution_count=1}\n``` {.python .cell-code}\nimport ibis\nfrom ibis import _\n\nibis.options.interactive = True\n```\n:::\n\n\n## Connect to BigQuery\n\nWe connect to BigQuery using the `ibis.connect` API, which accepts a URL string indicating the backend and various bits of information needed to connect to the backend. Here we're using BigQuery, so we need the project id (`ibis-gbq`) and the dataset id (`workflows`).\n\nDatasets are analogous to schemas in other systems.\n\n::: {#1cd51565 .cell execution_count=2}\n``` {.python .cell-code}\nurl = \"bigquery://ibis-gbq/workflows\"\ncon = ibis.connect(url)\n```\n:::\n\n\nLet's see what tables are available.\n\n::: {#c8e5249f .cell execution_count=3}\n``` {.python .cell-code}\ncon.list_tables()\n```\n\n::: {.cell-output .cell-output-display execution_count=3}\n```\n['analysis', 'jobs', 'workflows']\n```\n:::\n:::\n\n\n## Analysis\n\nHere we've got our first bit of interesting information: the `jobs` and `workflows` tables.\n\n### Terminology\n\nBefore we jump in, it helps to lay down some terminology.\n\n- A **workflow** corresponds to an individual GitHub Actions YAML file in a GitHub repository under the `.github/workflows` directory.\n- A **job** is a named set of steps to run inside a **workflow** file.\n\n### What's in the `workflows` table?\n\nEach row in the `workflows` table corresponds to a **workflow run**.\n\n- A **workflow run** is an instance of a workflow that was triggered by some entity: a GitHub user, bot, or other entity. Each row of the `workflows` table is a **workflow run**.\n\n### What's in the `jobs` table?\n\nSimilarly, each row in the `jobs` table is a **job run**. That is, for a given **workflow run** there are a set of jobs run with it.\n\n- A **job run** is an instance of a job *in a workflow*. It is associated with a single **workflow run**.\n\n## Rationale\n\nThe goal of this analysis is to try to understand ibis's CI performance, and whether the amount of time we spent waiting on CI has decreased, stayed the same or increased. Ideally, we can understand the pieces that contribute to the change or lack thereof.\n\n### Metrics\n\nTo that end there are a few interesting metrics to look at:\n\n- **job run** *duration*: this is the amount of time it takes for a given job to complete.\n- **workflow run** *duration*: the amount of time it takes for *all* job runs in a workflow run to complete.\n- **queueing** *duration*: the amount of time spent waiting for the *first* job run to commence.\n\n### Mitigating Factors\n\n- Around October 2021, we changed our CI infrastructure to use [Poetry](https://python-poetry.org/) instead of [Conda](https://docs.conda.io/en/latest/). The goal there was to see if we could cache dependencies using the lock file generated by poetry. 
We should see whether that had any effect.\n- At the end of November 2022, we switched to the Team Plan (a paid GitHub plan) for the Ibis organization. This tripled the number of **job runs** that could execute in parallel. We should see if that helped anything.\n\nAlright, let's jump into some data!\n\n::: {#2a119bb7 .cell execution_count=4}\n``` {.python .cell-code}\njobs = con.tables.jobs[_.started_at < \"2023-01-09\"]\njobs\n```\n\n::: {.cell-output .cell-output-display execution_count=4}\n```{=html}\n
\n```\n:::\n:::\n\n\nThese first few columns in the `jobs` table aren't that interesting, so let's look at what else is there.\n\n::: {#2d796f1f .cell execution_count=5}\n``` {.python .cell-code}\njobs.columns\n```\n\n::: {.cell-output .cell-output-display execution_count=5}\n```\n['url',\n 'steps',\n 'status',\n 'started_at',\n 'runner_group_name',\n 'run_attempt',\n 'name',\n 'labels',\n 'node_id',\n 'id',\n 'runner_id',\n 'run_url',\n 'run_id',\n 'check_run_url',\n 'html_url',\n 'runner_name',\n 'runner_group_id',\n 'head_sha',\n 'conclusion',\n 'completed_at']\n```\n:::\n:::\n\n\nA bunch of these aren't that useful for our purposes. However, `run_id`, `started_at`, and `completed_at` are useful for us. The [GitHub documentation for job information](https://docs.github.com/en/rest/actions/workflow-jobs?apiVersion=2022-11-28#get-a-job-for-a-workflow-run) provides useful detail about the meaning of these fields.\n\n- `run_id`: the workflow run associated with this job run\n- `started_at`: when the job started\n- `completed_at`: when the job completed\n\nTo a first approximation, we're interested in the job duration, so let's compute that.\n\nWe also need to compute when the last job for a given `run_id` started and when it completed. We'll use the former to compute the queueing duration, and the latter to compute the total time it took for a given workflow run to complete.\n\n::: {#f989d187 .cell execution_count=6}\n``` {.python .cell-code}\nrun_id_win = ibis.window(group_by=_.run_id)\njobs = jobs.select(\n    _.run_id,\n    job_duration=_.completed_at.delta(_.started_at, \"microsecond\"),\n    last_job_started_at=_.started_at.max().over(run_id_win),\n    last_job_completed_at=_.completed_at.max().over(run_id_win),\n)\njobs\n```\n\n::: {.cell-output .cell-output-display execution_count=6}\n```{=html}\n
\n```\n:::\n:::\n\n\nAgain, we have a bunch of columns that aren't so useful to us, so let's see what else is there.\n\n::: {#846952c9 .cell execution_count=8}\n``` {.python .cell-code}\nworkflows.columns\n```\n\n::: {.cell-output .cell-output-display execution_count=8}\n```\n['workflow_url',\n 'workflow_id',\n 'triggering_actor',\n 'run_number',\n 'run_attempt',\n 'updated_at',\n 'cancel_url',\n 'rerun_url',\n 'check_suite_node_id',\n 'pull_requests',\n 'id',\n 'node_id',\n 'status',\n 'repository',\n 'jobs_url',\n 'previous_attempt_url',\n 'artifacts_url',\n 'html_url',\n 'head_sha',\n 'head_repository',\n 'run_started_at',\n 'head_branch',\n 'url',\n 'event',\n 'name',\n 'actor',\n 'created_at',\n 'check_suite_url',\n 'check_suite_id',\n 'conclusion',\n 'head_commit',\n 'logs_url']\n```\n:::\n:::\n\n\nWe don't care about many of these for the purposes of this analysis; however, we need the `id` and a few values derived from the `run_started_at` column.\n\n- `id`: the unique identifier of the **workflow run**\n- `run_started_at`: the time the workflow run started\n\nWe compute the date the run started so we can later compare it to the dates when we added poetry and switched to the team plan.\n\n::: {#d1f82209 .cell execution_count=9}\n``` {.python .cell-code}\nworkflows = workflows.select(\n    _.id, _.run_started_at, started_date=_.run_started_at.date()\n)\nworkflows\n```\n\n::: {.cell-output .cell-output-display execution_count=9}\n```{=html}\n
\n```\n:::\n:::\n\n\nWe need to associate jobs and workflows somehow, so let's join them on the relevant key fields.\n\n::: {#0d322a2d .cell execution_count=10}\n``` {.python .cell-code}\njoined = jobs.join(workflows, jobs.run_id == workflows.id)\njoined\n```\n\n::: {.cell-output .cell-output-display execution_count=10}\n```{=html}\n
\n```\n:::\n:::\n\n\nSweet! Now that we have workflow runs and job runs together in the same table, let's start exploring summarization.\n\nLet's encode our knowledge about when the poetry move happened and also when we moved to the team plan.\n\n::: {#8cad3b01 .cell execution_count=11}\n``` {.python .cell-code}\nfrom datetime import date\n\nPOETRY_MERGED_DATE = date(2021, 10, 15)\nTEAMIZATION_DATE = date(2022, 11, 28)\n```\n:::\n\n\nLet's compute indicator variables that tell us whether a given row contains data from after the poetry change, and do the same for the team plan.\n\nLet's also compute queueing time and workflow duration.\n\n::: {#1c5210e6 .cell execution_count=12}\n``` {.python .cell-code}\nstats = joined.select(\n    _.started_date,\n    _.job_duration,\n    has_poetry=_.started_date > POETRY_MERGED_DATE,\n    has_team=_.started_date > TEAMIZATION_DATE,\n    queueing_time=_.last_job_started_at.delta(_.run_started_at, \"microsecond\"),\n    workflow_duration=_.last_job_completed_at.delta(_.run_started_at, \"microsecond\"),\n)\nstats\n```\n\n::: {.cell-output .cell-output-display execution_count=12}\n```{=html}\n
\n```\n:::\n:::\n\n\nLet's create a column ranging from 0 to 2 inclusive where:\n\n- 0: no improvements\n- 1: just poetry\n- 2: poetry and the team plan\n\nLet's also give them some names that'll look nice on our plots.\n\n::: {#8b6cf051 .cell execution_count=13}\n``` {.python .cell-code}\nstats = stats.mutate(\n raw_improvements=_.has_poetry.cast(\"int\") + _.has_team.cast(\"int\")\n).mutate(\n improvements=(\n _.raw_improvements.case()\n .when(0, \"None\")\n .when(1, \"Poetry\")\n .when(2, \"Poetry + Team Plan\")\n .else_(\"NA\")\n .end()\n ),\n team_plan=ibis.where(_.raw_improvements > 1, \"Poetry + Team Plan\", \"None\"),\n)\nstats\n```\n\n::: {.cell-output .cell-output-display execution_count=13}\n```{=html}\n
\n```\n:::\n:::\n\n\nFinally, we can summarize by averaging the different durations, grouping on the variables of interest.\n\n::: {#427e669d .cell execution_count=14}\n``` {.python .cell-code}\nUSECS_PER_MIN = 60_000_000\n\nagged = stats.group_by([_.started_date, _.improvements, _.team_plan]).agg(\n job=_.job_duration.div(USECS_PER_MIN).mean(),\n workflow=_.workflow_duration.div(USECS_PER_MIN).mean(),\n queueing_time=_.queueing_time.div(USECS_PER_MIN).mean(),\n)\nagged\n```\n\n::: {.cell-output .cell-output-display execution_count=14}\n```{=html}\n
\n```\n:::\n:::\n\n\nIf at any point you want to inspect the SQL you'll be running, ibis has you covered with `ibis.to_sql`.\n\n::: {#59c65db6 .cell execution_count=15}\n``` {.python .cell-code}\nibis.to_sql(agged)\n```\n\n::: {.cell-output .cell-output-display execution_count=15}\n```sql\nWITH t0 AS (\n SELECT\n t6.*\n FROM `ibis-gbq`.workflows.jobs AS t6\n WHERE\n t6.`started_at` < '2023-01-09'\n), t1 AS (\n SELECT\n t6.`id`,\n t6.`run_started_at`,\n DATE(t6.`run_started_at`) AS `started_date`\n FROM `ibis-gbq`.workflows.workflows AS t6\n), t2 AS (\n SELECT\n t0.`run_id`,\n TIMESTAMP_DIFF(t0.`completed_at`, t0.`started_at`, MICROSECOND) AS `job_duration`,\n MAX(t0.`started_at`) OVER (PARTITION BY t0.`run_id`) AS `last_job_started_at`,\n MAX(t0.`completed_at`) OVER (PARTITION BY t0.`run_id`) AS `last_job_completed_at`\n FROM t0\n), t3 AS (\n SELECT\n `started_date`,\n `job_duration`,\n `started_date` > CAST('2021-10-15' AS DATE) AS `has_poetry`,\n `started_date` > CAST('2022-11-28' AS DATE) AS `has_team`,\n TIMESTAMP_DIFF(`last_job_started_at`, `run_started_at`, MICROSECOND) AS `queueing_time`,\n TIMESTAMP_DIFF(`last_job_completed_at`, `run_started_at`, MICROSECOND) AS `workflow_duration`\n FROM t2\n INNER JOIN t1\n ON t2.`run_id` = t1.`id`\n), t4 AS (\n SELECT\n t3.*,\n CAST(t3.`has_poetry` AS INT64) + CAST(t3.`has_team` AS INT64) AS `raw_improvements`\n FROM t3\n)\nSELECT\n t5.`started_date`,\n t5.`improvements`,\n t5.`team_plan`,\n avg(IEEE_DIVIDE(t5.`job_duration`, 60000000)) AS `job`,\n avg(IEEE_DIVIDE(t5.`workflow_duration`, 60000000)) AS `workflow`,\n avg(IEEE_DIVIDE(t5.`queueing_time`, 60000000)) AS `queueing_time`\nFROM (\n SELECT\n t4.*,\n CASE t4.`raw_improvements`\n WHEN 0\n THEN 'None'\n WHEN 1\n THEN 'Poetry'\n WHEN 2\n THEN 'Poetry + Team Plan'\n ELSE 'NA'\n END AS `improvements`,\n IF(t4.`raw_improvements` > 1, 'Poetry + Team Plan', 'None') AS `team_plan`\n FROM t4\n) AS t5\nGROUP BY\n 1,\n 2,\n 3\n```\n:::\n:::\n\n\n# Plot the Results\n\nIbis doesn't have builtin plotting support, so we need to pull our results into pandas.\n\nHere I'm using `plotnine` (a Python port of `ggplot2`), which has great integration with pandas DataFrames.\n\n::: {#4ea62468 .cell execution_count=16}\n``` {.python .cell-code}\nraw_df = agged.execute()\nraw_df\n```\n\n::: {.cell-output .cell-output-display execution_count=16}\n```{=html}\n
     started_date        improvements           team_plan        job   workflow  queueing_time
0      2021-11-03              Poetry                None   3.948251  18.438061      17.952411
1      2020-10-01                None                None   9.915315  26.179842      26.083483
2      2022-08-23              Poetry                None   2.744350  12.839580      12.064237
3      2021-06-09                None                None   8.044477  15.938178       1.141473
4      2022-06-13              Poetry                None   3.117226  15.782421      14.715766
..            ...                 ...                 ...        ...        ...            ...
779    2020-12-03                None                None  10.913713  39.732489      39.495992
780    2021-10-21              Poetry                None   3.781108  31.423465      28.041193
781    2021-12-14              Poetry                None   3.240217  13.778852      10.919449
782    2023-01-02  Poetry + Team Plan  Poetry + Team Plan   3.144575  10.116722       7.886025
783    2022-02-02              Poetry                None   3.119334  25.054407      23.989267

784 rows × 6 columns
\n```\n:::\n:::\n\n\nGenerally, `plotnine` works with long, tidy data so let's use `pandas.melt` to get there.\n\n::: {#5b55980e .cell execution_count=17}\n``` {.python .cell-code}\nimport pandas as pd\n\ndf = pd.melt(\n raw_df,\n id_vars=[\"started_date\", \"improvements\", \"team_plan\"],\n var_name=\"entity\",\n value_name=\"duration\",\n)\ndf.head()\n```\n\n::: {.cell-output .cell-output-display execution_count=17}\n```{=html}\n
  started_date improvements team_plan entity  duration
0   2021-11-03       Poetry      None    job  3.948251
1   2020-10-01         None      None    job  9.915315
2   2022-08-23       Poetry      None    job  2.744350
3   2021-06-09         None      None    job  8.044477
4   2022-06-13       Poetry      None    job  3.117226
\n```\n:::\n:::\n\n\nLet's make our theme lighthearted by using `xkcd`-style plots.\n\n::: {#d149c514 .cell execution_count=18}\n``` {.python .cell-code}\nfrom plotnine import *\n\ntheme_set(theme_xkcd())\n```\n:::\n\n\nCreate a few labels for our plot.\n\n::: {#ee3add6c .cell execution_count=19}\n``` {.python .cell-code}\npoetry_label = f\"Poetry\\n{POETRY_MERGED_DATE}\"\nteam_label = f\"Team Plan\\n{TEAMIZATION_DATE}\"\n```\n:::\n\n\nWithout the following line, you may see a large number of inconsequential warnings that make the notebook unusable.\n\n::: {#1bc4c94f .cell execution_count=20}\n``` {.python .cell-code}\nimport logging\n\n# without this, findfont logging spams the notebook making it unusable\nlogging.getLogger('matplotlib.font_manager').disabled = True\n```\n:::\n\n\nHere we show job durations, coloring the points differently depending on whether they have no improvements, poetry, or poetry + team plan.\n\n::: {#3b549c85 .cell execution_count=21}\n``` {.python .cell-code}\n(\n    ggplot(\n        df.loc[df.entity == \"job\"].reset_index(drop=True),\n        aes(x=\"started_date\", y=\"duration\", color=\"factor(improvements)\"),\n    )\n    + geom_point()\n    + geom_vline(\n        xintercept=[TEAMIZATION_DATE, POETRY_MERGED_DATE],\n        colour=[\"blue\", \"green\"],\n        linetype=\"dashed\",\n    )\n    + scale_color_brewer(\n        palette=7,\n        type='qual',\n        limits=[\"None\", \"Poetry\", \"Poetry + Team Plan\"],\n    )\n    + geom_text(x=POETRY_MERGED_DATE, label=poetry_label, y=15, color=\"blue\")\n    + geom_text(x=TEAMIZATION_DATE, label=team_label, y=10, color=\"blue\")\n    + stat_smooth(method=\"lm\")\n    + labs(x=\"Date\", y=\"Duration (minutes)\")\n    + ggtitle(\"Job Duration\")\n    + theme(\n        figure_size=(22, 6),\n        legend_position=(0.67, 0.65),\n        legend_direction=\"vertical\",\n    )\n)\n```\n\n::: {.cell-output .cell-output-display}\n![](index_files/figure-html/cell-22-output-1.png){}\n:::\n\n::: {.cell-output .cell-output-display execution_count=21}\n```\n