
Feature: Syncing & conflict resolution for peer-to-peer and offline-first applications #8

Open
ShishKabab opened this issue Jan 14, 2019 · 9 comments



ShishKabab commented Jan 14, 2019

For situations where we want to host data on multiple sources, we need to be able to sync database changes around. This involves:

  1. Recording changes made to the database
  2. Replaying them on other nodes
  3. Conflict resolution in case changes are out-of-sync

Point 3) will be most difficult and must be handled on a per-case basis. Sometimes we can let the last change win, sometimes the user needs to intervene and sometimes we can come up with fitting application-specific algorithms. This functionality should be modular enough to allow for all these cases. Additionally, it should allow for different mechanisms of storing and broadcasting the changes.
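The three steps above could be sketched roughly like this. All types and names here are hypothetical, not an actual Storex API; conflict resolution (step 3) would hook in before the replay.

```typescript
// Hypothetical shape of a change-log entry: just enough information
// to replay a single write on another node.
interface ChangeLogEntry {
  createdOn: number                      // timestamp of the original write
  collection: string
  operation: 'create' | 'update' | 'delete'
  objectId: string
  values?: { [field: string]: any }      // fields written (create/update)
}

// 1) Recording: every local write appends an entry to the log.
function recordChange(log: ChangeLogEntry[], entry: ChangeLogEntry) {
  log.push(entry)
}

// 2) Replaying: another node applies the entries in timestamp order.
function replay(db: Map<string, object>, log: ChangeLogEntry[]) {
  const ordered = [...log].sort((a, b) => a.createdOn - b.createdOn)
  for (const entry of ordered) {
    if (entry.operation === 'delete') {
      db.delete(entry.objectId)
    } else {
      db.set(entry.objectId, {
        ...(db.get(entry.objectId) || {}),
        ...(entry.values || {}),
      })
    }
  }
}
```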

@bkniffler

Huge fan of having another player enter the offline-first database space, since every current option has pretty big caveats.
For the record, here's what's out there at the moment.

Realm

https://github.com/realm/realm-js
Super fast, but since it uses native code it can't be used in the browser. Also, it can only sync with a paid, closed-source server.

PouchDB/CouchDB

https://github.com/pouchdb/pouchdb
A classic. It has great, proven sync, but it's pretty clunky performance-wise, especially when it comes to querying. I also really dislike that, whatever database you use, it imposes its own indexing and table schema, which makes raw queries impossible. It also forces you to use CouchDB.

RxDB

https://github.com/pubkey/rxdb
Based on PouchDB and RxJS, it allows for better performance and some great reactivity, which pairs very well with React. But you'll have even more layers of technology now, and it has the same disadvantage when it comes to the actual database layer. Still, my favorite for now.

Gun

https://github.com/amark/gun
Very interesting syncing tech that might integrate well with Storex. Still in early development, but it would cover most of what's needed for peer-to-peer syncing (with CRDTs). The one thing that's missing, in my opinion, is a great client-side querying API.


ShishKabab commented Jan 28, 2019

Thank you @bkniffler for your first comment from outside our small core development team! Very nice summary, thank you for the care you've put in :)

I'll study the resources you've noted down very soon. So far, I've been reading up on what's out there, and also looking at the data model of Memex (which'll be the first consumer of the lib) and the types of modifications it needs.

Here's my thought process so far:

  • Different conflict resolution strategies make sense for different types of data [1]. Thus, it should be possible to define different strategies per collection and field (including asking for user intervention, if that fits the UX of what you're building).
  • Everything should be explicit and testable as hell. What I'm thinking of is explicitly defining what kinds of write operations you're expecting per collection/field, and having something generate all the possible combinations of modifications you'd need to unit test. This may very well be a nice fit with the planned feature of having a central registry of the operations an application will make, for different analysis purposes (in this case, generating an alert if you're doing something that will mess up the sync functionality of your app).
  • Different applications may have different strategies for determining causality. For some applications, the GUN strategy of lexical ordering may make sense. For Memex, I'm thinking of some weird combo of vector clocks and relative timestamps to have the last write win, regardless of which device you did it on.
  • Some applications may be P2P-only and sync directly between devices. Others may need an intermediary buffer, so you can sync devices even if they're not online at the same time. Ideally, that should be possible using dumb static file storage such as Google Drive or AWS S3. Here, due to costs, it may be beneficial to minimize writes by batching, or not, depending on which backend you're using (S3 write costs can get out of hand pretty quickly).
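The per-collection/per-field strategy idea from the first bullet could be declared roughly like this. This is only a sketch: `Resolution`, `SyncStrategyConfig`, and the collection/field names are all hypothetical, not part of any existing Storex interface.

```typescript
// Hypothetical declaration of conflict resolution strategies,
// configurable per collection and per field.
type Resolution =
  | 'lastWriteWins'                      // newest timestamp wins
  | 'askUser'                            // surface the conflict in the UI
  | ((local: any, remote: any) => any)   // application-specific merge

interface SyncStrategyConfig {
  [collection: string]: {
    default: Resolution
    fields?: { [field: string]: Resolution }
  }
}

const strategies: SyncStrategyConfig = {
  pages: { default: 'lastWriteWins' },
  annotations: {
    default: 'lastWriteWins',
    fields: {
      // free-text comments get a custom merge instead of dropping one side
      comment: (local, remote) => `${local}\n---\n${remote}`,
      tags: 'askUser',
    },
  },
}
```

Keeping the strategies in one declarative structure like this would also make them easy to feed into the static analysis mentioned in the second bullet.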

Also, the idea is that even though this lib will easily integrate with Storex, I want to design it in such a way that it's not directly coupled to Storex, so also applications using other database abstractions can benefit from this.

If you're interested to offer your thoughts once in a while or shoot at my ideas with a flamethrower, it would be much appreciated :) (Either in this thread, or by Slack/e-mail.) Currently I'm thinking about whether I'm missing any big factors in designing this thing. And, how to solve the whole causality puzzle of enabling last-write-wins independent of the device you were on when you made the edit.

[1] https://enauczanie.pg.edu.pl/moodle/pluginfile.php/253757/mod_resource/content/0/07%20Data%20synchonization.pdf


ShishKabab commented Jan 29, 2019

From @bluesun on Slack:

Hi Vincent, you are thinking in the right direction. The best article I ever read on this topic is: http://www.codecommit.com/blog/java/understanding-and-applying-operational-transformation

I cannot at the moment estimate the complexity of implementation. You'll surely need to lower your expectations (I know of only very few systems doing delayed sync right). So the best way is to avoid the possibility of conflicts by changing the perspective on your own tool: either decide crudely, with a heuristic, how to solve a conflict, or change the UI so that conflict resolution is part of the natural output and not shown as conflicts.

Please read the article and let me know of your thoughts. (I never implemented it but I am willing to dive in the details.)

I'll get back to you once I have more ideas. Thank you Samir for your input! And let's keep the conversation going here, will notify you through direct channels when urgent questions come up.


ShishKabab commented Jan 30, 2019

Reading the article, it reminds me a lot of:
http://archagon.net/blog/2018/03/24/data-laced-with-history/

Because the Memex data model is quite simple so far, I think we can take some shortcuts to make the implementation much simpler. The only complication I'm still struggling with is the ordering of operations and accounting for clock inaccuracies.


bluesun commented Jan 30, 2019 via email


ShishKabab commented Feb 5, 2019

After thinking about this some more, I've decided on an architecture to plan out. Since our data model is quite simple, and we're not yet dealing with unstructured data transforming in unexpected ways (like strings), OT and CRDTs would be overkill for the core architecture. Instead, I've decided that at its core, it should automatically sync operations that do not generate conflicts, while allowing you to decide what to do with more complex conflicts, which it should be able to foresee automatically. This is done by declaring in advance the ways you're going to interact with the data, so they can be statically analyzed:
https://github.com/WorldBrain/storex-pattern-modules/blob/0f6789cb359e8b2ab48399ff80754a450682fd93/ts/index.test.ts

With the help of those, and uniqueness constraints, we can detect potential conflicts through static analysis. One case where it'll detect a potential unique constraint violation is when having ordered children with parentId + position being unique, such as in our feature allowing you to organize pages into ordered lists. Here, an OT-like algorithm could be a good fit: instead of just capturing a series of lower-level operations that modify the position field, we could define higher-level operations such as insertBefore, moveBefore, etc. (but named better) that we could do some OT magic with.

Other conflicts that'll arise, which should be automatically detected, are:

  • Modifying a field on two different devices (here you could specify a 'last in time should win' strategy)
  • Modifying a field on a deleted object (in which case you could specify the modification could be discarded)
  • Adding a child to a deleted object (like an item to a list, you'd probably discard the change)
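The resolutions suggested in these bullets could look roughly like this. The `FieldWrite` shape and the device-ID tie-break are assumptions for illustration, not anything decided here.

```typescript
// A single write to one field, as it might appear in the change log.
interface FieldWrite { value: any; timestamp: number; deviceId: string }

// 'Last in time should win': the higher timestamp wins; on an exact tie,
// compare device IDs so every device converges on the same value.
function lastWriteWins(a: FieldWrite, b: FieldWrite): FieldWrite {
  if (a.timestamp !== b.timestamp) return a.timestamp > b.timestamp ? a : b
  return a.deviceId > b.deviceId ? a : b
}

// Modifying (or adding a child to) a deleted object: discard the change.
function resolveAgainstDeletion(write: FieldWrite, objectDeleted: boolean): FieldWrite | null {
  return objectDeleted ? null : write
}
```

The tie-break matters: without it, two devices that happen to produce identical timestamps could each keep their own value and never converge.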

For creation operations not to generate conflicts, there needs to be a different strategy for generating IDs than auto-incremented integers. This prevents the following scenario:

  • Both devices have a widget collection containing one object with ID 1
  • Device A creates a new object, with auto-incremented ID 2
  • Device B creates a new object, also with auto-incremented ID 2
  • ID 2 now means different things on different devices

Here I was thinking about the following strategies:

  • Assign a unique device ID to every device, and override the default ID generation to generate IDs like <deviceId>-<autoIncrementedInteger> (lots of moving parts, like device ID communication and propagation).
  • Use UUIDs (which might be too space-inefficient, but are always unique and simple to implement).
  • Assign each device a unique range of the 64-bit number space (thinking about it, too many problems, and probably not a lot of space won in most real-world scenarios).
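The first two strategies could be sketched like this; both helpers are hypothetical illustrations, not part of any existing API.

```typescript
// Strategy 1: device-scoped IDs. Each device prefixes its own
// auto-incremented counter with a unique device ID, so counters on
// different devices can never collide. (Assigning and propagating the
// device ID is the part with lots of moving parts.)
function makeDeviceScopedIdGenerator(deviceId: string): () => string {
  let counter = 0
  return () => `${deviceId}-${++counter}`
}

// Strategy 2: random UUID-style IDs. No coordination needed and collisions
// are vanishingly unlikely, at the cost of ~32 hex chars per ID instead of
// a small integer.
function randomId(): string {
  let id = ''
  for (let i = 0; i < 32; i++) {
    id += Math.floor(Math.random() * 16).toString(16)
  }
  return id
}
```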

The changes will be synchronized through a shared database. This database will hold change records that include the time at which each change occurred and when it was uploaded. Also, every device will record its last sync time. On every sync, changes below the minimal sync time shared between devices will be archived (by one of the devices) to serve as a backup, and removed from the main sync log.

When a client syncs the log with the shared database, it'll locally process the newly arrived changes. For each newly arrived change, it'll look up relevant changes in its own log (writes to the same field, to fields matching unique constraints, etc.) and replay the changes after conflict resolution.
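A rough sketch of that selection step, assuming field-level log entries and a simple last-write-wins rule (all names here are hypothetical):

```typescript
// Hypothetical entry in the shared sync log.
interface SyncLogEntry {
  createdOn: number      // when the change happened on its device
  sharedOn: number       // when it was uploaded to the shared database
  deviceId: string
  collection: string
  objectId: string
  field: string
  value: any
}

// Pick the newly arrived changes this device should replay locally.
function selectChangesToReplay(
  incoming: SyncLogEntry[],
  localLog: SyncLogEntry[],
  lastSyncTime: number,
): SyncLogEntry[] {
  // Only consider entries uploaded since this device's last sync.
  const fresh = incoming.filter(entry => entry.sharedOn > lastSyncTime)
  // Keep a remote write only if it beats every conflicting local write
  // to the same collection/object/field (last-write-wins).
  return fresh.filter(remote =>
    localLog
      .filter(local =>
        local.collection === remote.collection &&
        local.objectId === remote.objectId &&
        local.field === remote.field)
      .every(local => remote.createdOn >= local.createdOn))
}
```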

This shared database will initially be implemented on Firestore (#10), which also allows live sync. However, it'll be implemented in such a way that it can easily be ported to a self-hosted server. This means that:

  • We'll have to be able to declare that 'something needs to happen once an object is added to a collection', decoupled from how that happens. On Firestore, we have to deploy a Firebase Function. On a self-hosted server, we might need to intercept all write operations and execute the code on each write. On a cloud infrastructure, we might need to publish writes through a Redis Pub/Sub channel to signal a pool of workers to execute the code.
  • Access rights need to be declared in a database-agnostic way, translating to Firestore rules on Firestore, but enforced directly on the API server in a cloud-provider setting.

I've already added support for middleware to the Storex core (yet to be published), which allows you to intercept all operations you do on the database:
https://github.com/WorldBrain/storex/blob/16a647acf9017e51801039ecd20a442fc73a6eb3/ts/types/middleware.test.ts
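To illustrate how such a middleware could feed a sync change log, here's a sketch. The middleware signature below is hypothetical and simplified, not the actual Storex interface linked above.

```typescript
// Hypothetical middleware shape: each middleware sees every operation
// and decides whether to record it before passing it on.
type Operation = any[]   // e.g. ['createObject', 'pages', { title: '...' }]
type Next = (operation: Operation) => Promise<any>

interface Middleware {
  process(operation: Operation, next: Next): Promise<any>
}

const changeLog: Operation[] = []
const syncLoggingMiddleware: Middleware = {
  async process(operation, next) {
    const writeOperations = ['createObject', 'updateObjects', 'deleteObjects']
    if (writeOperations.includes(operation[0])) {
      changeLog.push(operation)   // record the write for later sync
    }
    return next(operation)        // pass through to the database
  },
}
```

Reads pass through untouched; only writes end up in the log, which is exactly the data the sync layer needs.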

The unified access control (#6) is still TBD.

@ShishKabab

Work started here:
https://github.com/WorldBrain/storex-sync/


bluesun commented Feb 7, 2019 via email


ShishKabab commented Feb 7, 2019

Good ideas! It would be pretty easy to implement a middleware for a write lock (I have to think about the possibility of deadlocks, but none come to mind quickly).
