Sync notes from markdown files into Orbit #220
Comments
Thank you for looking at this! I agree with you on all the shortcomings. On the opinions:
Yes, let's. Probably best to extract the parsing code from incremental-thinking.
Yep, I agree that Anki sync should be handled orthogonally.
I think you're suggesting that to sync, we simply write logs to the server creating all the prompts found in the Markdown notes. If the server handles idempotency correctly, this would produce the correct behavior! I do worry about performance: it means reading, parsing, and transmitting the prompt content from every note on each sync. I have on the order of 10^3 notes with 10^3 prompts: a few tens of megabytes to parse, and maybe a couple MB of data transmitted to the API. Maybe that's OK! It's certainly a simple way to start.

Thinking purely about a local scenario: imagine you're on your Mac, editing your notes. Ideally, if you then switch into Orbit, you should be able to immediately review the prompts you've just added. I'm not sure what would trigger a sync in this scenario: a file watcher, a frequent timer, or an explicit user action. But I worry that this could be difficult to achieve if we're round-tripping all notes to the server.

Here's an alternative framing which might preserve the simplicity of your suggestion in a local context. I've been working on rearchitecting the data layer of Orbit as a simple syncable file format. So if you download the Orbit app, you'd end up with some Orbit.db in a folder on disk, which the app would read and write, and which could be intermittently synced to the server. It's not a cache, per se: more a replica. Right now, the app has a (non-shareable) data store, and this script has its own separate cache. But if you think of the app's data store as a real local file format, the script can just write to it directly and let some other process handle over-the-network syncing. In this context, "syncing" from Markdown notes might mean:
In this scheme, we still pay the price of reading and parsing all local note files each time we sync, but at least we don't have to transmit them all to the server. If you're running in a context where you don't want to maintain a local replica, we can implement a persistence strategy for that case.
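Here's a minimal sketch of what that local-write pass might look like. The prompt shape, the store interface, and the parsing callback are illustrative assumptions, not Orbit's actual data layer:

```typescript
// Hypothetical sketch of a local-replica sync pass. The store interface and
// ParsedPrompt shape are illustrative assumptions, not the real Orbit API.
import * as fs from "fs/promises";
import * as path from "path";
import * as crypto from "crypto";

interface ParsedPrompt {
  id: string;          // stable, content-derived identifier (see promptID below)
  content: string;     // prompt text extracted from the note
  noteSubpath: string; // provenance: which note file it came from
}

// Derive a stable identifier from prompt content so repeated syncs are idempotent.
function promptID(content: string): string {
  return crypto.createHash("sha256").update(content).digest("hex");
}

async function syncNotesToLocalStore(
  notesDir: string,
  store: { upsertPrompt(p: ParsedPrompt): Promise<void> }, // stand-in for the Orbit.db replica
  parse: (markdown: string, subpath: string) => ParsedPrompt[],
): Promise<void> {
  const entries = await fs.readdir(notesDir);
  for (const entry of entries.filter((e) => e.endsWith(".md"))) {
    const markdown = await fs.readFile(path.join(notesDir, entry), "utf8");
    for (const prompt of parse(markdown, entry)) {
      // Writing the same prompt twice is a no-op because its ID is content-derived.
      await store.upsertPrompt(prompt);
    }
  }
  // The Orbit app (or a separate background process) would later sync the local
  // replica to the server; this pass never touches the network.
}
```

Because identity is derived from content, re-running the pass is idempotent without any server round-trips, which is what makes the "edit notes, then immediately review in Orbit" scenario plausible.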
The provenance info is indeed being sent to the API, in the
Yep. When a stable identifier is not available, I can imagine using the note file subpath instead, as you suggest. The "point" of provenance:
Right! The syncing script is meant to update the provenance metadata in this situation, when it notices that prompts have moved between files. The detection of the moves and the corresponding actions for Anki are implemented, but I haven't yet implemented the relevant Orbit actions. They do exist as a log type (see #54, "note-sync: implement support for 'move' operations").
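As a rough sketch of the move-detection step (the cache shape and names here are illustrative assumptions, not note-sync's actual implementation): once each prompt has a content-derived ID, a move is simply "same ID, different subpath" between two sync passes:

```typescript
// Hypothetical move detection between two sync passes. Names are illustrative.
interface CachedPromptLocation {
  promptID: string;    // content-derived identifier (unchanged by moves)
  noteSubpath: string; // the note file the prompt was found in
}

function detectMoves(
  previous: CachedPromptLocation[],
  current: CachedPromptLocation[],
): { promptID: string; from: string; to: string }[] {
  const previousSubpathByID = new Map(
    previous.map((p): [string, string] => [p.promptID, p.noteSubpath]),
  );
  const moves: { promptID: string; from: string; to: string }[] = [];
  for (const entry of current) {
    const oldSubpath = previousSubpathByID.get(entry.promptID);
    // Same prompt content, different file: a move. Identity stays the same;
    // only the provenance metadata needs to be updated.
    if (oldSubpath !== undefined && oldSubpath !== entry.noteSubpath) {
      moves.push({ promptID: entry.promptID, from: oldSubpath, to: entry.noteSubpath });
    }
  }
  return moves;
}
```

The detected moves would then translate into provenance/metadata updates rather than deleting and recreating the prompts (so review history is preserved).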
I think we can be a bit fuzzy about it, but displaying the context in the UI is pretty important. For instance, yesterday I was writing a note called "Tachistoscope". One paragraph in that note:
This paragraph only makes sense as a cloze deletion if the note's title ("Tachistoscope") is displayed above it.
Noting for observers: the data architecture rewrite is now complete. I've migrated note-sync over to use the new data layer.
I've been looking into the problem of how to sync SRS prompts from note-taking systems with Orbit.
The goal is to have a CLI tool that:
The note-sync package already accomplishes most of this, but it has a few shortcomings. One is that it depends on several other packages (computer-supported-thinking, spaced-everything, and incremental-thinking), which makes the code more complex than necessary.

After spending a few hours reviewing the code and testing out the note-sync package, I have a few thoughts/opinions:

- We could reuse the code in incremental-thinking that parses qaPrompts and clozePrompts. That code is well-tested and has been in use by Andy for a while.
- Anki sync could be handled orthogonally, perhaps via anki-import, which at least handles one-way importing.

Thoughts on Provenance
How should this library handle provenance? Broadly, I have questions about how Orbit thinks about provenance, but scoping my questions to this library, it appears that the current implementation is caching provenance information locally, but not syncing that provenance information to the Orbit API.
The current implementation depends on a Bear Note ID at the bottom of the markdown file to determine provenance, which is obviously undesirable as notes could be exported from a variety of different note-taking systems.
If we'd like to track provenance, we could use the note's filename and modified date to populate the provenance data Orbit requires. Here's the PromptProvenanceType filled out:
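As an illustration of populating provenance from the note's filename and modified date, here's a sketch; the field names are assumptions for the sake of example, not necessarily Orbit's actual provenance schema:

```typescript
// Illustrative sketch only: field names here are assumptions, not necessarily
// the real Orbit provenance type.
import * as fs from "fs/promises";
import * as path from "path";

interface NoteProvenanceSketch {
  provenanceType: "note";               // assumed enum value for note-derived prompts
  externalID: string;                   // the note's subpath, as a stable-ish identifier
  title: string;                        // the note's filename, sans extension
  modificationTimestampMillis: number;  // the note file's mtime
}

async function provenanceForNote(
  notesDir: string,
  subpath: string,
): Promise<NoteProvenanceSketch> {
  const stats = await fs.stat(path.join(notesDir, subpath));
  return {
    provenanceType: "note",
    externalID: subpath,
    title: path.basename(subpath, path.extname(subpath)),
    modificationTimestampMillis: stats.mtimeMs,
  };
}
```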
One gotcha with provenance stems from the way we're handling idempotency: moving a prompt from one file to another, so long as the prompt itself didn't change at all, would change the provenance information but not the identity of the prompt. That's probably the desired behavior for the prompt's identity, but the provenance information has still changed, and we'll need to account for that.
Again, there's a general question of whether we need to track provenance at all for this importer.