question: how to build a dat-powered P2P Wikipedia? #165
Awesome! We've wanted this to exist for a while =). The two concepts that'll be important to use in Dat are the
Ya, I'd agree! This should be the first question to answer. Once the UI is decided on, putting together the pieces underneath will be the easier part. A good example you may look at is Science Fair. The underlying data is two dat archives, one for metadata and one for the actual articles. Both of these can be viewed/downloaded via the CLI, but the way the data is presented in the app makes it much easier to use and manage. So, a few points on compatibility:
As long as you are downloading files (not writing), several processes can share the same underlying archive. The syncing status, etc. is all built into the metadata.
Yes, this sounds really cool and definitely a project we'd want to see work! Let us know if you have any more questions. P.S. Have you seen the Beaker Browser? Specifically, its dat API may be a good way to prototype a UI for this. Additionally, if you can make a Wikipedia dat that is browsable as regular webpages, you can access it like any other website in Beaker.
Beaker would be really nice for this indeed!
I assume you mean #163?
Hey!
I'm working on bringing Wikipedia into dat. For that, I have written two tools that work on top of hyperdrive:

- `build-wikipedia-feed`'s `store-revisions` fetches a revision of an article and writes it into the hyperdrive, with a custom `mtime`.
- `wikipedia-feed-ui` takes a hyperdrive and serves an index of articles, an article itself, and the history of an article (using `archive.history`) over HTTP.

I want to build a "full node" that downloads all of Wikipedia, feeds it into a hyperdrive/dat archive and seeds it over the network. The client/"light node" will be able to access Wikipedia in three ways:
My question is a rather general one: which way do you recommend to build the client/"light node"? I think it is crucial to have a good UX on this, as the average Wikipedia user won't be tech-savvy. It should be a one-click-installable solution with few moving parts. I can see multiple solutions:

- `dat-node`: `archive.history` and `archive.writeFile` (with `mtime`) then?
- `dat-cli`.