Comparing changes

This is a direct comparison between two commits in the carbonplan/blog repository.

base repository: carbonplan/blog
base: 7e6bcf8295f4ed3a1b9dca91785e219512fb6e31
head repository: carbonplan/blog
compare: 7b38bb2e323ae1bc74459d68ff537f06a3791182
4 changes: 2 additions & 2 deletions .pre-commit-config.yaml
@@ -4,7 +4,7 @@ ci:
 
 repos:
   - repo: https://github.com/pre-commit/mirrors-prettier
-    rev: 'v3.0.0-alpha.4'
+    rev: 'v3.0.0-alpha.6'
     hooks:
       - id: prettier
         language_version: system
@@ -19,7 +19,7 @@ repos:
         |yaml|yml\
         )$"
   - repo: https://github.com/pre-commit/mirrors-prettier
-    rev: 'v3.0.0-alpha.4'
+    rev: 'v3.0.0-alpha.6'
     hooks:
       - id: prettier
         language_version: system
1 change: 1 addition & 0 deletions build-cards.js
@@ -25,6 +25,7 @@ async function getScreenshot(postId) {
   await page.setViewport({ width: 1200, height: 630 })
   await page.goto(baseUrl + '/cards/' + postId)
   await page.waitForSelector('#final-authors', { timeout: 3000 })
+  await new Promise((resolve) => setTimeout(resolve, 1000))
   const file = await page.screenshot()
   await page.close()
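
The one-line addition pauses for a fixed second between the selector check and the screenshot, presumably to let late-rendering content (fonts, images) finish painting. A minimal standalone sketch of the same pattern (the `screenshotCard` function and `sleep` helper are illustrative, not code from this repo):

```js
const puppeteer = require('puppeteer')

// Illustrative helper: waitForSelector confirms the node exists, but a
// short fixed pause gives fonts and images time to finish painting.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms))

async function screenshotCard(url) {
  const browser = await puppeteer.launch()
  const page = await browser.newPage()
  await page.setViewport({ width: 1200, height: 630 })
  await page.goto(url)
  await page.waitForSelector('#final-authors', { timeout: 3000 })
  await sleep(1000) // same fixed settle time as the change above
  const file = await page.screenshot()
  await browser.close()
  return file
}
```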

1 change: 1 addition & 0 deletions components/main.js
@@ -13,6 +13,7 @@ import List from './list'
 const initYear = {
   2021: true,
   2022: true,
+  2023: true,
 }
 
 const Settings = ({ setYear, year }) => {
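
The `initYear` map controls which years of posts are visible by default; this change simply enables 2023 alongside the earlier years. A rough sketch of how a `{ year: boolean }` map can drive filtering (the `posts` array and its fields are invented for illustration):

```js
// Illustrative sketch only, not the actual List/Settings implementation.
const initYear = { 2021: true, 2022: true, 2023: true }

const posts = [
  { title: 'Example post A', date: '03-29-2023' },
  { title: 'Example post B', date: '11-15-2020' },
]

// Dates are formatted MM-DD-YYYY, so the year is the final segment.
const visible = posts.filter((post) => {
  const year = post.date.split('-').pop()
  return Boolean(initYear[year])
})

console.log(visible.map((post) => post.title)) // ['Example post A']
```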
16 changes: 16 additions & 0 deletions components/mdx/page-components.js
@@ -3,6 +3,22 @@ import dynamic from 'next/dynamic'
 // NOTE: This is a dynamically generated file based on the config specified under the
 // `components` key in each post's frontmatter.
 const components = {
+  'bootleg-fire-update': {},
+  'forest-offsets-mismatch': {
+    Table: dynamic(() =>
+      import('@carbonplan/components').then((mod) => mod.Table || mod.default)
+    ),
+  },
+  'compliance-users-update': {},
+  'cdr-standards-call': {},
+  'climate-risk-metadata': {},
+  'lionshead-fire-update': {
+    Scan: dynamic(() =>
+      import('../../posts/lionshead-fire-update/scan.js').then(
+        (mod) => mod.Scan || mod.default
+      )
+    ),
+  },
   'climate-risks-insurance': {
     States: dynamic(() =>
       import('../../posts/climate-risks-insurance/states.js').then(
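
Each non-empty entry follows the same Next.js pattern: `next/dynamic` code-splits a post-specific component, preferring the module's named export and falling back to its default export. A generic sketch of the pattern, with a placeholder component name and import path:

```js
import dynamic from 'next/dynamic'

// Placeholder path and name: lazily load a post component on demand,
// using the named export if present, otherwise the default export.
const Chart = dynamic(() =>
  import('../../posts/example-post/chart.js').then(
    (mod) => mod.Chart || mod.default
  )
)
```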
15 changes: 15 additions & 0 deletions posts/bootleg-fire-update.md
@@ -0,0 +1,15 @@
---
version: 1.0.0
title: Klamath East poised for automatic termination
authors:
- Grayson Badgley
date: 03-29-2023
summary: Carbon losses from the Bootleg Fire are severe enough to cause the termination of a forest offset project in Oregon.
card: bootleg-fire-update
---

The Klamath East (ACR273) forest carbon offset project is slated for automatic termination as a result of the catastrophic [Bootleg Fire that burned through the project in 2021](https://www.nytimes.com/2021/08/23/us/wildfires-carbon-offsets.html). New paperwork, [filed on Monday](https://acr2.apx.com/mymodule/reg/TabDocuments.asp?r=111&ad=Prpt&act=update&type=PRO&aProj=pub&tablename=doc&id1=273), puts total wildfire-induced carbon losses at over 3 million tCO₂. The extent of the damage was so severe that the project's current standing live carbon stocks are lower than the project's baseline carbon stocks. As a result, [California's rules](https://govt.westlaw.com/calregs/Document/I16D2FF335A2111EC8227000D3A7C4BC3?bhcp=1&transitionType=Default&contextData=%28sc.Default%29) require that the entire project be terminated.

Automatic termination means retiring 100 percent of the credits already issued to the project from the program's buffer pool — totaling at least 1.14 million offset credits. When combined with the [estimated 3.95 million credits](https://carbonplan.org/blog/buffer-analysis-update) that have already been or soon will be retired from the buffer pool, total known wildfire losses through the end of the 2021 fire season stand at 5.09 million credits.

We [previously estimated](https://doi.org/10.3389/ffgc.2022.930426) that the buffer pool was designed with the assumption that about 6 million credits would be sufficient to cover the wildfire risk of the current portfolio of projects for the next 100 years. The termination of ACR273 would mean about 84 percent of those credits are now gone. And, as we've discussed before, that number will continue to grow [once we have an official reversal estimate](https://carbonplan.org/blog/lionshead-fire-update) for the 2020 Lionshead Fire. Taken together, it seems increasingly likely that the entire wildfire portion of California's forest carbon buffer pool has already been depleted.
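
For reference, a quick sketch of the arithmetic behind these figures, using the numbers cited above (the final percentage lands between 84 and 85 depending on the precise value of the roughly 6 million credit design estimate):

```js
// Back-of-the-envelope check using the figures cited in this post
const acr273 = 1.14 // million credits retired if ACR273 terminates
const priorLosses = 3.95 // million credits already or soon to be retired
const totalLosses = acr273 + priorLosses // 5.09 million credits

const bufferDesign = 6.0 // ~6 million credits budgeted for 100 years of wildfire
console.log(((totalLosses / bufferDesign) * 100).toFixed(1)) // '84.8'
```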
1 change: 0 additions & 1 deletion posts/buffer-analysis-update.md
@@ -7,7 +7,6 @@ authors:
 date: 12-01-2022
 summary: Offset projects hit by recent wildfires report larger carbon losses than we had projected.
 card: buffer-analysis-update
-fileId: 14HWfduvsFCneJ3K3yQyg3i4aX_zC4whpN8RSznFt4G8
 data:
   ACR255:
     slag:
23 changes: 23 additions & 0 deletions posts/cdr-standards-call.md
@@ -0,0 +1,23 @@
---
version: 1.0.0
title: Industry call for CDR Standards Initiative
authors:
- name: Zeke Hausfather
src: https://images.carbonplan.org/authors/zeke-hausfather.png
- Freya Chay
- name: Ryan Orbuch
src: https://images.carbonplan.org/authors/ryan-orbuch.png
- name: Elizabeth Troein
src: https://images.carbonplan.org/authors/elizabeth-troein.png
date: 02-10-2023
summary: We summarize a public letter from 35 organizations across the CDR ecosystem calling for a scientifically grounded standards body for long-duration CDR that could review and harmonize emerging protocols.
card: cdr-standards-call
---

Last November, [Stripe](https://stripe.com/climate), [Lowercarbon Capital](https://lowercarboncapital.com), [Isometric](https://isometric.com), and CarbonPlan organized a convening focused on carbon dioxide removal (CDR) measurement, reporting, and verification (MRV). At the convening, there was broad agreement that more robust structures are needed to ensure high-quality quantification and verification of carbon removal deployments.

In an [open letter](https://files.carbonplan.org/CDR-MRV-Standards-Letter-02-10-2023.pdf) written collaboratively by a working group that emerged from the November convening, 35 organizations representing carbon removal buyers, suppliers, verifiers, non-profits, and academics call for the creation of an independent standards initiative that would provide a trusted, scientific stamp of approval for CDR protocols.

We envision this initiative establishing a transparent and predictable process for consolidating the best available science, reviewing existing and forthcoming protocols, and harmonizing MRV approaches across and within CDR pathways. This process would provide reference points to ensure that claimed permanent removals are consistently and rigorously quantified and are aligned with [relevant system boundaries](https://carbonplan.org/research/cdr-verification). We also suggest that the initiative should be conscientiously set up to avoid conflicts of interest — particularly that funding for the standards initiative should not involve issuing or selling carbon credits.

If you are interested in joining the call for high-quality CDR standards, you can add your signature to the open letter [here](https://forms.gle/vnCaF8LtdxoQgEUc8). If you have any questions or would like more information about potential next steps, please feel free to reach out to hello@carbonplan.org.
40 changes: 40 additions & 0 deletions posts/climate-risk-metadata.md
@@ -0,0 +1,40 @@
---
version: 1.0.0
title: What metadata are necessary for interpreting a climate risk assessment?
authors:
- Oriana Chegwidden
- Sadie Frank
date: 01-30-2023
summary: Highlighting scientific factors that can influence climate risk products.
card: climate-risk-metadata
---

Until recently, physical climate risk assessments were conducted largely in academic contexts, where detailed methods descriptions are the norm. In that setting, researchers can evaluate and trust scientific analyses because they can review the methodological details, which increasingly means having access to the underlying data and code.

As more and more non-academic stakeholders rely on physical climate information for decision-making — often in private — quality control remains essential. But sharing fully open code and data is often inconsistent with the business models of companies that provide climate risk information. In the absence of full transparency, many crucial choices underlying a climate risk assessment can still be usefully captured in the “metadata,” or auxiliary information, associated with climate data and modeling. With sufficient metadata, a well-informed consumer can evaluate model assumptions and the conditions under which a risk assessment appropriately applies; without enough metadata, these judgments can become challenging, creating the potential for misapplication or misinterpretation.

In this post, we describe four categories of metadata that we think will be important for any robust disclosure of climate risk assessments, such as what [a draft SEC rule would require from publicly traded companies](https://carbonplan.org/research/data-financial-risk). Our framing is inspired, in part, by a set of “Dos and Don’ts” for using climate information in water resource planning ([Vano et al., 2018](https://doi.org/10.1016/j.cliser.2018.07.002)).

## 01 — Source, accessibility, and documentation

The first category of metadata includes the identity of the party that created the risk assessment, how accessible that assessment is, and what documentation is available. Knowing who created a climate risk assessment and the category of institution (e.g., academic, private) can provide important context about potential biases and serve as a reputational indicator. Assessments that are based on highly transparent methods, such as open-source models with freely available input and output datasets, more readily support due diligence and reproducibility. Some assessments might also have documentation (e.g., websites, white papers, or peer-reviewed publications) with detailed descriptions of the methods and shortcomings of the assessment.

## 02 — Variables, domain, and resolution

The second category concerns the scope, variables (e.g., hazards), and spatiotemporal resolution of the climate risk assessment. These pieces of metadata inform the comprehensiveness and level of detail of the assessment, and facilitate comparison among assessments. For example, metadata can help indicate whether an assessment was based only on historical information, as opposed to future projections, or whether it only considered a subset of risks, such as fire but not flooding. Further, details about the locations and scope of risks can inform applicability, both in obvious ways (does it cover the relevant spatial region or time horizon?) and in more nuanced ways (is the spatial resolution fine enough to distinguish risks across nearby regions, and is the temporal resolution fine enough to capture extreme events?).

## 03 — Model and dataset identification

The third category identifies the assessment’s underlying models and datasets. Robust descriptions of the models and datasets used at every stage of the process can help a user interpret a risk assessment. Different models, configurations, or input datasets can result in different final risk estimates, and every step of the analytical process involves an influential choice. Most risk assessments begin with a future climate projection, which varies depending on the choice of emissions scenario, global climate model ([Tebaldi et al. 2021](https://esd.copernicus.org/articles/12/253/2021/)), subset or ensemble of general circulation models (GCMs) ([McSweeney and Jones, 2016](https://www.sciencedirect.com/science/article/pii/S2405880715300170)), and even which iteration of a run from a single GCM is selected ([Kay et al. 2015](https://doi.org/10.1175/BAMS-D-13-00255.1)). These details are important because it is common to report only the results of one model, even though an ensemble is likely to be more robust ([Saxe et al. 2020](https://doi.org/10.5194/hess-25-1529-2021)). If the assessment includes a [downscaling step](https://carbonplan.org/research/cmp6-downscaling-explainer), the choice of downscaling algorithm matters ([Wilby et al. 1998](https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/98WR02577)). Finally, following downscaling, different impact models yield different results, so they should be identified.

## 04 — Model specifications

The fourth category describes how models were implemented, not just which ones were used. When downscaling, implementation details such as the parameterization of input variables, choice of resolution, and handling of extremes can strongly affect the final results. For example, using different input variables can influence whether precipitation is projected to increase or decrease ([Gutmann et al. 2022](https://doi.org/10.1175/JHM-D-21-0142.1)), and using a higher-resolution meteorological product for downscaling can capture extreme precipitation and flooding that would otherwise be missed ([Bador et al. 2020](https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2019JD032184)). With an impact model, the parameterization, choice of training data, and post-processing all affect results. For example, the way a hydrologic model is parameterized influences its projection of drought ([Chegwidden et al. 2019](https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2018EF001047)), and the choice of meteorological training data and training period can affect projections of precipitation, especially in areas with complex terrain ([Henn et al. 2018](https://www.sciencedirect.com/science/article/abs/pii/S0022169417301452)).

## Transparency allows due diligence

While we consider these four categories of metadata critical to evaluating a risk assessment, they are neither fixed nor exhaustive. And although we focused on climate risk assessments, these considerations apply to climate information products more broadly. Some forms of metadata might be more or less relevant depending on the application, and evaluating a risk assessment will likely always require expert judgment.

Comprehensive metadata is the minimum amount of information required to understand the quality of a climate risk assessment. Ideally, a risk assessment would include complete methods and open code and data to support due diligence and intercomparison by reviewers and consumers. But standard disclosure of the metadata described here is a critical place to start.
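
To make this concrete, here is one purely hypothetical way a disclosure covering the four categories might be structured; every field name and value below is invented for illustration, not a proposed standard:

```js
// Hypothetical metadata record spanning the four categories described above.
const riskAssessmentMetadata = {
  source: {
    creator: 'Example Climate Analytics (private)',
    access: 'outputs public, code proprietary',
    documentation: 'white paper, v2.1',
  },
  scope: {
    variables: ['wildfire', 'riverine flooding'],
    domain: 'contiguous United States',
    resolution: { spatial: '4 km', temporal: 'daily, 2020-2099' },
  },
  modelsAndData: {
    scenarios: ['SSP2-4.5', 'SSP5-8.5'],
    gcmEnsemble: ['CanESM5', 'MIROC6', 'NorESM2-LM'],
    downscaling: 'statistical (quantile mapping)',
    impactModel: 'example hydrologic model v3',
  },
  specifications: {
    inputVariables: ['precipitation', 'tmax', 'tmin'],
    trainingData: 'gridded observations, 1981-2010',
    postProcessing: 'bias correction against station records',
  },
}
```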
15 changes: 15 additions & 0 deletions posts/compliance-users-update.md
@@ -0,0 +1,15 @@
---
version: 1.0.0
title: Updates to the Compliance Users Tool
authors:
- Freya Chay
date: 02-23-2023
summary: We updated the Compliance Users Tool to include the latest available cap-and-trade program data about who is using which offsets.
card: compliance-users-update
---

Last year, we released a [tool](https://carbonplan.org/research/compliance-users) that allows you to explore who is using which offsets to meet compliance obligations in the California cap-and-trade program. We've now updated the tool to reflect the [compliance data](https://ww2.arb.ca.gov/our-work/programs/cap-and-trade-program/cap-and-trade-program-data) that was released in December 2022.

With this update, the tool includes all available data about offset use reported for three full compliance periods (2013-2014, 2015-2017, 2018-2020) and the most recent annual compliance period (2021). Note that during an annual compliance period, regulated companies must satisfy only 30 percent of their total compliance obligation. The remainder comes due at the end of the full compliance period (2021-2023) and will be reported publicly in 2024. We'll continue updating the tool as additional data becomes available.

You can read more about the tool in our [original blog post](https://carbonplan.org/blog/compliance-users-release) or check out our [GitHub repository](https://github.com/carbonplan/compliance-users) for the data processing that underlies the web tool.