
Conversation

bpblanken (Collaborator)

No description provided.

@bpblanken bpblanken changed the base branch from master to dev October 2, 2025 22:20
@bpblanken bpblanken changed the title from "Benb/connect postgres" to "feat: add affected status to ClickHouse!" Oct 3, 2025
AS SELECT
project_guid,
key,
dictGetOrDefault('seqrdb_affected_status_dict', 'affected', (family_guid, calls.sampleId), 'U') affected,
Collaborator Author

I used dictGetOrDefault here to get things working, but assume there's fixtures that we can fix when the search logic is implemented to avoid a default value.

Collaborator

"assume there's fixtures that we can fix when the search logic is implemented"

Modifying this migration later will not be possible, so we would functionally need to totally drop and recreate the dictionary in a new migration if we want this to change, which essentially negates this migration. If there is fixture data that needs to change in order to get this to work, now is the time to change it

Collaborator Author

drop and recreate is what we're doing here. The "materialized view" isn't a table, but a set of instructions that routes data from one table to another, so there's no reason to use an ALTER.

I can go investigate the fixtures.
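For concreteness, here's a sketch of that drop-and-recreate pattern as it might appear in a migration. The view and table names below are illustrative stand-ins, not the actual seqr objects:

```python
# Sketch only: a materialized view is a set of routing instructions, so a
# schema change means dropping and recreating it rather than ALTERing.
# `entries_to_gt_stats_mv`, `gt_stats`, and `entries` are illustrative names.
DROP_MV = 'DROP TABLE IF EXISTS `entries_to_gt_stats_mv`'
CREATE_MV = '''
CREATE MATERIALIZED VIEW `entries_to_gt_stats_mv` TO `gt_stats`
AS SELECT
    project_guid,
    key,
    dictGetOrDefault('seqrdb_affected_status_dict', 'affected',
                     (family_guid, calls.sampleId), 'U') affected
FROM `entries`
'''

# In a migration this would be two RunSQL operations, e.g.:
# migrations.RunSQL(DROP_MV, hints={'clickhouse': True})
# migrations.RunSQL(CREATE_MV, hints={'clickhouse': True})
```

The CREATE statement carries the full SELECT (including the dictGetOrDefault call), so "changing" the view is just re-issuing these instructions; no data in the underlying tables is touched.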

Collaborator Author

Some more insight after looking into the fixtures. I think the issue is this line loads both the postgres and clickhouse fixtures together; we need an opportunity to refresh the dictionary after '1kg_project' is loaded into postgres but before the clickhouse models are loaded.

I think the easiest option is to manage the fixture creation manually within setUpTestData? Something like:

def setUpTestData(cls):
    postgres_fixtures = ['users', '1kg_project', 'variant_searches', 'reference_data']
    clickhouse_fixtures = ['clickhouse_search', 'clickhouse_transcripts']

    for fixture in postgres_fixtures:
        call_command('loaddata', fixture, verbosity=0)

    with connections['clickhouse_write'].cursor() as cursor:
        cursor.execute('SYSTEM RELOAD DICTIONARY "seqrdb_affected_status_dict"')

    for fixture in clickhouse_fixtures:
        call_command('loaddata', fixture, verbosity=0)

    ...

does that sound reasonable?

Collaborator
@hanars hanars Oct 3, 2025

Why wouldn't it work to just load all the data using the default way fixture data is loaded and then refresh the dictionaries and materialized views the way we do in the clickhouse search tests:
https://github.com/broadinstitute/seqr/blob/master/clickhouse_search/search_tests.py#L36-L41
I just don't see why the refresh command has to be before the clickhouse fixture data is loaded. Anything that's part of a view or dict should not be defined in the fixture

Collaborator Author

The dictGet is called as the fixture is loaded, when a row is inserted into the entries table.

]

operations = [
migrations.RunSQL(
Collaborator Author

The SQL created by the model change itself included mutations_sync=1 and I couldn't figure out a way to remove it. We don't want the mutation to be synchronous for this.

Collaborator Author

This approach gives column ordering control too, which is a plus.

Collaborator

Unless it would be a problem if this is added to all requests, you should be able to change this setting globally in the settings.py connection config. Alternatively, it should work as a keyword arg to the migration command itself; you can just modify it once it's auto-generated.
For column ordering, the list of columns that are generated can be reordered however you want without needing to execute raw SQL; the order they are in the list is the order they will be in when the SQL is generated, so you can still set it explicitly

Collaborator Author

I tried changing the setting yesterday after reading the docs and it didn't work for this column ALTER. My reading of the documentation was that the setting was applied for ORM calls (data updates and deletes, which are also async) but not for schema changes.

The default makemigrations output produces:

        migrations.AddField(
            model_name='gtstatsgrch37snvindel',
            name='ac_affected',
            field=clickhouse_backend.models.UInt32Field(default=0),
        ),

which I was unable to modify to change the column ordering in the same way we could with the CreateModel in the other migrations.

Collaborator

my concern is I do not think django will be able to tell that all the changes you made to the models are actually in the migration, as it checks the models.py against the migrations themselves, not against the database - if you run makemigrations again does it report no changes or does it generate another migration with the add columns?
Also, if this has to repopulate anyways is there any harm in just dropping and recreating these tables instead of modifying them in place? or would that make us lose a whole bunch of data that would need to be recomputed?

Collaborator Author

makemigrations generates a new migration as-is. We need to modify the RunSQL like:

migrations.RunSQL(
    "ALTER TABLE `GRCh37/SNV_INDEL/gt_stats` ADD COLUMN `hom_affected` UInt32 DEFAULT 0 AFTER `hom_wgs`;",
    hints={'clickhouse': True},
    state_operations=[
        migrations.AddField(
            model_name='gtstatsgrch37snvindel',
            name='hom_affected',
            field=clickhouse_backend.models.UInt32Field(default=0),
        ),
    ],
),

I think there's potentially no harm in dropping the gt_stats tables - they're recomputed from scratch on every load. However, the project_gt_stats table is not and would need the expensive multi-hour rebuild that is not yet built. If data were loaded during that time the downstream rollups would all be wrong.

Collaborator
@hanars hanars Oct 3, 2025

I have never looked into state_operations before but this looks like exactly what we need! And if we are doing these raw ALTER TABLE commands for project_gt_stats then I think there's no harm in also using them for gt_stats

Collaborator

just as a sanity check, if you now run makemigrations does it generate anything?

Collaborator Author

Nope! It's clean.

'DROP DICTIONARY `GRCh38/SNV_INDEL/gt_stats_dict`',
hints={'clickhouse': True},
),
migrations.RunSQL(
Collaborator Author

note that this doesn't drop the table itself, just the "view" in between the two tables.


class OrderedDatabaseDeletionRunner(DiscoverRunner):

def teardown_databases(self, old_config, **kwargs):
Collaborator Author

Due to the new dictionary opening a connection from ClickHouse to Postgres, we need to drop the clickhouse database first. This custom runner helps with that.
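A minimal, Django-free sketch of the ordering idea behind such a runner. In the real runner, old_config is the list of (connection, old_name, destroy) tuples returned by Django's setup_databases and the key would come from connection.alias; the plain strings and alias names below are illustrative stand-ins:

```python
# Sketch: reorder teardown so clickhouse test databases are destroyed before
# postgres, closing the ClickHouse->Postgres dictionary connection first.
# Plain strings stand in for Django connection objects here.

def clickhouse_first(old_config):
    """Return old_config with clickhouse entries moved to the front."""
    return sorted(old_config, key=lambda entry: 0 if 'clickhouse' in entry[0] else 1)

config = [
    ('default', 'test_seqrdb', True),
    ('clickhouse_write', 'test_seqr', True),
]
ordered = clickhouse_first(config)
print([entry[0] for entry in ordered])  # → ['clickhouse_write', 'default']
```

In the actual OrderedDatabaseDeletionRunner, this kind of reordering would happen on old_config inside teardown_databases before delegating to the parent DiscoverRunner implementation.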

Collaborator

is there a reason not to just put this into the DifferentDbTransactionSupportMixin

Collaborator Author
@bpblanken bpblanken Oct 3, 2025

This class subclasses DiscoverRunner, which is different from a TestCase. It's used in a different way

Collaborator

Django allows you to specify the creation order for databases in the settings, and I would assume it would tear them down in reverse order - have we tried setting that up instead of needing a custom test runner?
https://docs.djangoproject.com/en/4.2/topics/testing/advanced/#topics-testing-creation-dependencies
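For reference, the setting from the linked docs lives under a database's TEST dict. The aliases and engine paths below are assumptions about this project's config, not copied from it:

```python
# Sketch of the Django TEST DEPENDENCIES setting from the linked docs.
# Aliases ('default', 'clickhouse_write') and engines are illustrative.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'seqrdb',
    },
    'clickhouse_write': {
        'ENGINE': 'clickhouse_backend.backend',
        'NAME': 'seqr',
        'TEST': {
            # Create the postgres test db before this one. Per the docs this
            # governs *creation* order, which is why it may not help teardown.
            'DEPENDENCIES': ['default'],
        },
    },
}
```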

Collaborator

alternatively, have we tried just closing whichever connection is causing the issue in the test class teardown?

Collaborator Author

Just tried, I don't think it works. We get the:

django.db.utils.OperationalError: database "test_seqrdb" is being accessed by other users

during teardown. Reading through the docs I think there's already an implicit dependency on the default db, so I'm not sure the code I wrote is even doing anything.

Collaborator

I'm just worried because I think that this solution is like a scorched earth approach, but it feels like there's a connection that's not being properly managed or torn down somewhere and I am worried this is an actual bug with how these databases are connected. If clickhouse has been cleaned up first, which it seems like it should always be, then this error should not be able to happen. Like in the event that clickhouse does fail in production I don't want to have postgres leaving this open connection to a dead service in perpetuity

Collaborator

if this is actually just a test-specific thing that's unrelated to real world connections I am less worried

Collaborator Author

yeah, I think it's mostly a test-specific thing, though potentially something to keep an eye on. Postgres does not have an open connection to Clickhouse, it's only the reverse (Clickhouse connecting to Postgres) and it's not bidirectional. The issue is with postgres itself blocking database deletion when there are open connections. It's actually a reasonable safety mechanism for a production system! But we will never delete the "seqrdb" outside of the tests.

Collaborator Author

Going into the django source, the code that evaluates the DEPENDENCIES seems to be referenced only when databases are set up. The docs say the return value of setup_databases is passed to teardown_databases, indicating that the db ordering is not reversed for teardown.

@bpblanken bpblanken marked this pull request as ready for review October 3, 2025 13:10
@bpblanken bpblanken requested a review from hanars October 3, 2025 13:10
hints={'clickhouse': True},
),
migrations.RunSQL(
GT_STATS_DICT.substitute(
Collaborator

this is not a safe pattern for migrations - if the GT_STATS_DICT model changes in the future then new migrations will get that model here, and then the next migration will have a conflict and fail. Now that we have released clickhouse broadly we pretty much can never modify existing migrations. If we really have to, like if the env vars removed here are actually removed from the env, then we need to just modify them to use dummy variables and the next migration needs to be able to handle that

Collaborator Author

So GT_STATS_DICT isn't actually a model, it's a string template. The migrations are being managed entirely manually. I moved the template into the models file because it was entirely shared by the two migrations; the new one just tweaks the columns (which are passed as strings).

Given the question though, I think it makes more sense to leave the existing migration as is and just duplicate the template in the new migration as it will be less confusing.
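To illustrate the distinction being drawn here, GT_STATS_DICT behaves like Python's string.Template rather than a Django model. A toy version (the DDL shape and placeholder names are invented for illustration, not the real template):

```python
from string import Template

# Illustrative stand-in for the GT_STATS_DICT string template discussed above.
# $dict_name and $columns are made-up placeholders showing the pattern.
GT_STATS_DICT = Template(
    'CREATE DICTIONARY `$dict_name` ($columns) PRIMARY KEY key'
)

# Each migration fills in its own column list as plain strings:
sql = GT_STATS_DICT.substitute(
    dict_name='GRCh38/SNV_INDEL/gt_stats_dict',
    columns='`key` UInt32, `ac_affected` UInt32 DEFAULT 0',
)
print(sql)
```

Duplicating the literal template inside each migration file freezes the generated SQL at the time the migration was written, which is exactly the safety property being asked for.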



hints={'clickhouse': True},
),
migrations.RunSQL(
clickhouse_search.models.GT_STATS_DICT.substitute(
Collaborator

per my above comment, it is never safe to reference a model directly in a migration. If you need to, like in a RunPython context, you need to use the apps.get_model function and the migration-specific db_alias: https://github.com/broadinstitute/seqr/blob/master/seqr/migrations/0081_savedvariant_gene_ids.py#L17-L19

Collaborator

but TBH it would be safer to just define the template directly in this file the way you did in the past and not store it in models.py at all, unless there is a reason why we need to access that template any time that's not a migration



Collaborator
@hanars hanars left a comment

I am happy to take over dealing with the testing issues if you are getting frustrated, just let me know


@classmethod
def setUpTestData(cls):
super().setUpClickhouseEntriesFixtures(['clickhouse_search'])
Collaborator

I do not think this is going to be a viable pattern in the long term - being unable to add fixtures using the normal fixture syntax and instead always needing to remember to do a separate special call is a really frustrating technical debt burden. 99% of our tests that load clickhouse fixtures do not actually use or care about gt stats being accurate - in fact I think this is literally the only file where that matters. I think we should leave the fixture initialization pattern alone and inline the refresh calls here along with all the existing refresh calls we have below - if in the future there is another file that needs those calls run we can always abstract the logic accordingly
I do not understand why it should matter if the gt stats tables are empty during a test if they are not actually being used

Collaborator Author

So here's the context:

This doesn't have to do with gt_stats accuracy, but with the choice here to use dictGet, which enforces that the sample exists in the ClickHouse dictionary at the moment the clickhouse_search fixture is run. Inlining the REFRESH call does not work, as it is run after the fixture is loaded. The dictionary is still empty at load time and the fixture fails.

The other additional wrench here is that, even with several more manual attempts at the postgres fixtures, I was never able to get the ClickHouse dictionary to access the postgres fixtures because they're running in a different transaction context with a different connection. I felt that changing the postgres transaction flow was much worse and well out of scope. Mocking the dictionary from scratch was the only way to actually get values there.

Collaborator Author

Confirming the behavior for posterity and thoroughness:

1. I added a debug statement in the clickhouse search unit test:
    def setUpTestData(cls):
        import pdb; pdb.set_trace()
        super().setUpClickhouseEntriesFixtures(['clickhouse_search'])
2. The postgres fixtures are loaded and data is visible within the tests:
(Pdb) Project.objects.all()
<QuerySet [<Project: 1kg project nåme with uniçøde>, <Project: Empty Project>, <Project: Test Reprocessed Project>, <Project: Non-Analyst Project>]>
3. Confirmed that I cannot see the data when logging in via psql:
test_seqrdb=# select count(*) FROM seqr_project;
 count
-------
     0
(1 row)
4. I can see the project query as idle in transaction:

test_seqrdb=# SELECT pid, usename, application_name, client_addr,
                     backend_start, xact_start, state, query
              FROM pg_stat_activity
              WHERE xact_start IS NOT NULL;

  pid  | usename  | client_addr |         backend_start         |          xact_start           |        state        | query
-------+----------+-------------+-------------------------------+-------------------------------+---------------------+-------
 39590 | postgres | ::1         | 2025-10-06 11:20:35.686583-04 | 2025-10-06 11:20:56.117842-04 | idle in transaction | SELECT "seqr_project"."id", "seqr_project"."guid", ..., "seqr_project"."workspace_name" FROM "seqr_project" LIMIT 21
 39763 | postgres | ::1         | 2025-10-06 11:20:50.081399-04 | 2025-10-06 11:20:56.653383-04 | idle in transaction | RELEASE SAVEPOINT "s8745295616_x1"
5. The dictionary exists with the right source in Clickhouse:
SHOW CREATE DICTIONARY seqrdb_affected_status_dict

Query id: 51a59322-587a-411e-a48d-1ef24c88d252

   ┌─statement───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
1. │ CREATE DICTIONARY test_seqr.seqrdb_affected_status_dict                                                                                                                                                                            ↴│
   │↳(                                                                                                                                                                                                                                  ↴│
   │↳    `family_guid` String,                                                                                                                                                                                                          ↴│
   │↳    `sampleId` String,                                                                                                                                                                                                             ↴│
   │↳    `affected` String                                                                                                                                                                                                              ↴│
   │↳)                                                                                                                                                                                                                                  ↴│
   │↳PRIMARY KEY family_guid, sampleId                                                                                                                                                                                                  ↴│
   │↳SOURCE(POSTGRESQL(NAME 'seqr_postgres_named_collection' DATABASE test_seqrdb QUERY 'select f.guid as family_guid, i.individual_id as sample_id, i.affected FROM seqr_individual i INNER JOIN seqr_family f ON i.family_id = f.id'))↴│
   │↳LIFETIME(MIN 0 MAX 0)                                                                                                                                                                                                              ↴│
   │↳LAYOUT(COMPLEX_KEY_HASHED())                                                                                                                                                                                                        │
   └─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘

1 row in set. Elapsed: 0.001 sec.
6. Reloading is successful but the dict stays empty:
wmbbe-67c :) system reload dictionary seqrdb_affected_status_dict;

SYSTEM RELOAD DICTIONARY seqrdb_affected_status_dict

Query id: ce7ed94d-5553-4425-af99-956ffb9f4a67

Ok.

0 rows in set. Elapsed: 0.021 sec.

wmbbe-67c :) select count(*) FROM seqrdb_affected_status_dict;

SELECT count(*)
FROM seqrdb_affected_status_dict

Query id: 65ff90fb-0986-463e-ab86-249ec4b59f56

   ┌─count()─┐
1. │       0 │
   └─────────┘

1 row in set. Elapsed: 0.001 sec.

Collaborator

this is really helpful, thanks for documenting!

('F000011_11', 'NA20885', 'A'),
('F000014_14', 'NA21234', 'A'),
('F000014_14', 'NA21987', 'A'),
('F000014_14', 'NA21654', 'N')
Collaborator

why can't the tests use the actual fixture data that's already been loaded to postgres? If the fixture data is inadequate because certain families/individuals/samples are missing, we should just add those models to the clickhouse_search fixture

Collaborator Author

So the fixtures were actually mostly fine, except for F000002_2_x. If we were to finalize this approach (which I think we shouldn't) it would be straightforward to read the fixture and parse the necessary values here. I didn't do it because I felt this approach was iffy.

)
cursor.execute(
f'''
REPLACE DICTIONARY `seqrdb_affected_status_dict`
Collaborator

I am not comfortable with using this approach for testing, as it means we never once test the dictionary as it is defined. Using this pattern we could easily replace the dictionary with a definition that is totally different from what's actually on prod and miss a real bug. I think it is incredibly risky to have a mission critical resource that's never unit tested.

Collaborator Author

Yes, I agree with this. I don't think this is a particularly good long-term solution, but it was the best way to get dictGet to work rather than dictGetOrDefault, given the issue with Clickhouse not being able to see the fixture-loaded postgres data.

After this exercise I think dictGetOrDefault might be more reasonable?

Collaborator
@hanars hanars Oct 6, 2025

yeah, and maybe it's not the worst thing to have the default, so that if we somehow end up in a funky condition where postgres samples and clickhouse data are out of sync it doesn't actually break the dictionary. Like a real world flow I could imagine is:

  • User triggers the "delete family" flow which inactivates the postgres sample model and queues up a delete family job in the pipeline
  • before the job runs, the user then goes in and deletes the family and individual models from postgres, which is now allowed because there's no active search sample associated with them
  • a loading job completes and the dictionary/views try to refresh


# TransactionTestCase does not call setupTestData in the same way as TestCase
# https://github.com/django/django/blob/stable/4.2.x/django/test/testcases.py#L1466
# As a warning to a future reader, this method changes from an instance to a class method
# between versions 4.x and 6.x (alongside several other impactful method changes). When
Collaborator

not sure this comment is necessary as literally none of our test classes are resilient to a major Django version bump. Basically all our tests and test helpers will need to be rearchitected when that happens, so I do not think this file needs a special reminder

Collaborator Author

👍 I can remove it. I didn't realize how ingrained we were.

from django.test.runner import DiscoverRunner


class OrderedDatabaseDeletionRunner(DiscoverRunner):
Collaborator

just sanity checking that this is still necessary with the switch to using the transaction test case

Collaborator Author

yes, it's still necessary. The transaction test case change just allows the data to be viewed by the additional Clickhouse->Postgres connection, but doesn't change that there is an additional connection that we need to maneuver around.

@hanars (Collaborator) commented Oct 9, 2025

We will not want to merge this to dev until we also have the necessary code in place to properly handle what happens when affected status is updated. Otherwise this will go live, the migration will run, and then folks will update affected status in a way that we cannot track/replicate and there will not really be a way to fix it
