Deprecate then Remove /api/v0/pubsub/* RPC API and ipfs pubsub Commands #9717
Comments
Thanks for creating this issue @Jorropo. I made some adjustments to the issue description, including adding a formal task list. (Feel free to look at the changes in the issue history.)
Thanks for the write-up @Jorropo. This makes total sense for all the reasons you mentioned. Broadly speaking, we just need to do better advocacy and education about PubSub in libp2p and establish some best practices from the known real-world use cases you listed. As far as I understand, this would deprecate the following endpoints:
Tagging @TheDiscordian, as this would likely break Discochat, which relies on the Kubo-RPC client and a Kubo daemon to subscribe to topics. It looks like it would from a search of the code. Either way, we're already planning a new example to showcase universal connectivity with libp2p (libp2p/universal-connectivity#1), which showcases an app architecture where every user is a full libp2p peer.
Reopening since this isn't complete (only #9718 is).
So kubo v0.19 will be the last kubo release to have the pubsub RPC API baked in, it seems. My project can't work without pubsub, so I'll just freeze the kubo version for now. There's a post here that suggests it'll still be possible somehow to use pubsub via a separate repo; any info on that?
@pinnaculum this is point number 1 in the soft-landing list in the issue description. Ideally what we want is that you write ~100 lines of Go and use go-libp2p-pubsub.
Merci @Jorropo :) Yes, that works if you're writing in Go. I've read the "chat" example in go-libp2p-pubsub, but since your program runs outside of kubo and has to be written in Go, this makes things more difficult for a project that wants to use pubsub and the rest of kubo's APIs. IPFS's pubsub on its own is truly great (hooray gossipsub), but it's when you can combine it with the rest of the IPFS APIs that it truly shines IMO, because there are many other pubsub implementations out there. The beauty of having an integrated pubsub RPC API is that you can write in any language that has bindings for kubo's RPC, without knowing (or caring, tbh) about the internals of all this, and take advantage of the rest of the IPFS API. You are the experts and I understand that it takes a lot of time to maintain and fix all of this code. I've used the pubsub RPC API since the go-ipfs 0.4.x days up to the recent kubo releases and it has steadily improved, so thank you and congratulations to the geniuses involved in this.
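For readers who haven't looked at that chat example, here is a rough sketch of the "~100 lines of Go" approach being suggested, assuming go-libp2p and go-libp2p-pubsub; the topic name is made up and peer discovery/connection is left out:

```go
package main

import (
	"context"
	"fmt"

	"github.com/libp2p/go-libp2p"
	pubsub "github.com/libp2p/go-libp2p-pubsub"
)

func main() {
	ctx := context.Background()

	// Start a standalone libp2p host, separate from any Kubo node.
	h, err := libp2p.New()
	if err != nil {
		panic(err)
	}
	defer h.Close()

	// Build a gossipsub router on top of that host.
	ps, err := pubsub.NewGossipSub(ctx, h)
	if err != nil {
		panic(err)
	}

	// Join a topic and print every message received on it.
	topic, err := ps.Join("example-topic")
	if err != nil {
		panic(err)
	}
	sub, err := topic.Subscribe()
	if err != nil {
		panic(err)
	}
	for {
		msg, err := sub.Next(ctx)
		if err != nil {
			return
		}
		fmt.Printf("%s: %s\n", msg.GetFrom(), string(msg.Data))
	}
}
```

On its own this node hears nothing until it connects to other peers (by dialing a known multiaddr or via a discovery mechanism), which is exactly the part a Kubo user currently gets for free.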
@pinnaculum I'm not saying that you would rewrite your complete app in Go; there will be a binary that provides the same features if you are happy with them. But I would be surprised if very few people ran into issues like #9665, which are not trivial to solve.
My project relies on both IPFS and pubsub. By pulling it out of kubo and having projects rely on libp2p instead, wouldn't that leave us effectively loading libp2p twice: once in kubo, then a separate instance of libp2p to run pubsub? It seems that folks using pubsub via kubo are perfectly happy with the level of accessibility that's currently offered and would prefer to continue using it. The alternate solutions provided above make projects much more cumbersome to manage: requiring support for multiple languages isn't ideal (wrapper), and having to manage an additional process (go-libp2p-pubsub) adds further complexity in deployment, maintenance, and usage. I'd propose leaving the API in its current state, and if someone is unhappy with how it's working, leave it to them to implement their changes, and just remove it from the primary devs' roadmap. The feature is only a performance issue for nodes that explicitly enable it, correct?
I rely on the Kubo RPC API for accessing pubsub from an Electron app. Removing this would mean I could no longer use the go-ipfs NPM module or Kubo in general and would need to totally rework how I integrate IPFS into my applications. I suppose this could be a reason to ditch Kubo entirely and embed just a subset of it into a custom HTTP API? It would certainly make it harder to deploy and reuse things like IPFS-Cluster with the node.
I just wanted to say thanks to folks for sharing their use cases and needs. For transparency, Kubo maintainers haven't done any work on this yet. In case it wasn't clear, the migration path/plan will be designed and communicated before we undertake this work. Updates will be posted here. In the meantime, feel free to continue to share.
I've thought about that (exposing some kind of RPC in the wrapper) but did not mention it in my message. The reason why this approach would be problematic for my project, and @fcbrandon talks about this as well, is that as I understand it you would have two parallel libp2p instances/nodes: kubo's libp2p instance (which would not have pubsub "capabilities") and the libp2p instance running in the "wrapper" that uses go-libp2p-pubsub and can therefore exchange pubsub messages. Am I wrong about that? I use IPFS peer IDs as a "unique key" in a PeerID <=> DID mapping. Having two nodes and therefore distinct peer IDs makes it much more difficult, though not impossible, and performance-wise it's not ideal. Just exposing my thoughts about this approach; maybe for other people's use cases it wouldn't be a problem at all. I wonder if a compromise could not be found by having kubo keep using the "default" pubsub validator (the "BasicSeqnoValidator"?, which is probably inefficient), while giving people who want better control the option of using go-libp2p-pubsub's API and setting up their custom validators, etc. I've read validation_builtin.go. If better validators are implemented in the future, there could be a /api/v0/pubsub/set_topic_validator kubo RPC API call that would just pass the name of a builtin pubsub validator (and maybe some optional settings) to set the validator to use for a given topic. Then there would be no need for messy callbacks, and kubo's pubsub implementation would strengthen over time with different kinds of validators. I think the implementation of validators should stay in go-libp2p-pubsub. Once you take this problem out of the equation, the case for deprecating the pubsub API in kubo is not as strong, because most people using pubsub with kubo are probably fine with letting kubo use the best validator available.
One of our use cases would be to know the IP address of the direct peer who sent a message (not the origin), to be able to block the IPs of peers that relay bad messages. Also being able to block IPs in pubsub would be nice. I know you can do it with. IMO having a basic pubsub API included with kubo is good for testing, prototyping and discovering that it even exists, even if it can't be customized. I'm not sure I would know it exists if I hadn't randomly seen it as an option in IPFS one day. In our app we bundle kubo with Electron and use both IPFS and pubsub, so having two binaries, one for IPFS and one for pubsub, would mean a larger bundle. Could it also mean more resource/bandwidth usage if the user has to run both at the same time? Also, no one on our team knows Go, so the ideal scenario would be for pubsub to remain in kubo, but with more configuration options, for example in our case blocking IPs of direct peers. The second best scenario would be a separate pubsub binary and RPC with more configuration options. The least ideal (but not a dealbreaker) would be to use
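As a side note on the direct-relayer question: when go-libp2p-pubsub is embedded in your own Go process rather than consumed over the Kubo RPC, each delivered message already carries both the origin and the peer it arrived from. A hedged sketch; the blocklist and handler names are made up:

```go
package example

import (
	"context"

	pubsub "github.com/libp2p/go-libp2p-pubsub"
	"github.com/libp2p/go-libp2p/core/host"
	"github.com/libp2p/go-libp2p/core/peer"
)

// readLoop drops messages relayed by blocked peers. msg.ReceivedFrom is the
// direct neighbour that forwarded the message, while msg.GetFrom() is the
// original publisher.
func readLoop(ctx context.Context, h host.Host, sub *pubsub.Subscription, blocked map[peer.ID]bool) {
	for {
		msg, err := sub.Next(ctx)
		if err != nil {
			return
		}
		if blocked[msg.ReceivedFrom] {
			continue // ignore anything forwarded by a blocked neighbour
		}
		// The relayer's known addresses (and thus IPs) can be looked up in the
		// host's peerstore if address-level blocking is needed.
		_ = h.Peerstore().Addrs(msg.ReceivedFrom)

		handle(msg.Data)
	}
}

func handle(data []byte) {
	// application-specific processing of the message payload
	_ = data
}
```

As far as I know, the existing /api/v0/pubsub/* responses only expose the origin peer, not the direct relayer, which is why this kind of blocking currently requires embedding the library.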
First of all, thank you very much for putting so much time into your clarification.
My vote on how to move forward is for options 1 & 3 (leave the PubSub API in Kubo but be extremely clear that it is broken, and push people to use go-libp2p-pubsub). My reasons for wanting to keep the Kubo PubSub endpoint are:
Thank you @Jorropo for the detailed explanations. Solution 2 is time-consuming and using callbacks is wrong.
I think it's acceptable: flag the feature as "incomplete" or "broken", users are warned, and you can point to go-libp2p-pubsub. That way you halt the flow of tears from the developers already using kubo's pubsub who don't have a backup plan, but you also keep the possibility of fixing pubsub's flaws in the near future if a better solution comes along.
That's the goal.
Sad. Do what must be done, make it quick and painless :) A ceremony will be held for the orphaned pubsubers, topic name livefast_dieyoung.
@emendir, @pinnaculum, @TheDiscordian, @RangerMauve, @sevenrats, and everybody who cares about pubsub: see https://github.com/MichaelMure/ipfs-pubsub-service-api/blob/master/pubsub-service-api.yml Readme rendered there: https://gist.github.com/MichaelMure/87296599c5ec3f6ad08468bef7b66d68 I've been working on a replacement API for pubsub that improves on it in several ways:
I believe that API could be a viable replacement for application use cases. It should also solve the validation problem that kubo has. It is, however, only a spec for now. I'm planning to push it as an IPIP later when I get the time. It would be extremely useful to have early feedback on it. Could you open issues in that repo if you have some feedback?
After these comments from @Jorropo on the alternatives to using Kubo's PubSub RPC API, I started looking into what we users of the PubSub RPC API are supposed to do now. I don't know how others are getting along with this, but my path so far has been pretty rough. After trying out different ways of interfacing libraries written in Go with the language I work in (Python), I started looking into exactly which IPFS libraries I want to work with. First of all, it was slightly overwhelming that there are so many different libraries to choose from. As I understand it so far, we can break those libraries down into the following categories:
So @Jorropo and other IPFS & Kubo developers, could you please critique my above representation of the categories of IPFS Go libraries, so that we IPFS-application-developers who are begging you not to remove Kubo's PubSub RPC API can better judge which direction we need to move in? I can tell that the IPFS & Kubo developers are working hard on building better ways for us IPFS-application-developers to use it, but the fact of the matter is that the documentation and tutorials are currently mediocre at best. Part of my difficulties with this endeavour stem from the fact that this is the first time I have used the Go programming language, but I won't be alone in this trouble. Thank you go-libp2p, IPFS & Kubo developers for all your work. I really appreciate it.
My approach has been to create a custom pubsub validator function for our app, as well as custom messageId and AppSpecificScore functions, and then fork kubo and add a few lines of code to pass the options. I was able to do this in a few days even though I had never used Go before.
In kubo, only a few lines need to be added to use it:

    import (
        pubsubPlebbitValidator "github.com/plebbit/go-libp2p-pubsub-plebbit-validator"
    )

    // build the custom validator and peer score params from the external module
    validator := pubsubPlebbitValidator.NewValidator(host)
    peerScoreParams := pubsubPlebbitValidator.NewPeerScoreParams(validator)

    // wire them into kubo's gossipsub constructor
    return pubsub.NewGossipSub(helpers.LifecycleCtx(mctx, lc), host, append(
        pubsubOptions,
        pubsub.WithDefaultValidator(validator.Validate),
        pubsub.WithMessageIdFn(pubsubPlebbitValidator.MessageIdFn),
        pubsub.WithPeerScore(&peerScoreParams, &pubsubPlebbitValidator.PeerScoreThresholds),
        pubsub.WithDiscovery(disc),
        pubsub.WithFloodPublish(true))...,
    )

Then I made a GitHub Action to build the modified kubo binaries and put them in a GitHub release: https://github.com/plebbit/kubo/releases/latest. My own app downloads it from there when building. I imagine if the pubsub APIs were to be removed completely from kubo (instead of just being deprecated), I would re-add them myself by copy-pasting the code from previous versions. I wouldn't be able to use
@estebanabaroa thanks for sharing your experience!
I agree that the parallel running of two libp2p instances is a concern. It's the worry that nags me the most as I explore the Go-library-based approaches. It wastes processing resources and bandwidth, I guess it will crash routers more often, and it will require some apps to autostart and run in the background just to keep their IPFS nodes running where they previously didn't have to.
@Jorropo You mentioned:
It wouldn't be the first time things have changed in PubSub; remember the release of v0.11.0?
https://github.com/ipfs/kubo/releases/tag/v0.11.0 I remember having to figure out how to adjust a library I maintain to make it compatible with the new version. Since Kubo's PubSub endpoint is marked as experimental, such changes are readily forgivable.
I agree, but I'm not sure it's possible: the message ID function, validator function and app-specific score function could be called thousands of times per second, so they can't do a network round trip. Also, different apps might need different internal data from libp2p or pubsub, so all of that data would need to be included in the network round trip. IMO pubsub should remain an API in kubo at least as a demo, even if it can't ever be used in production; otherwise how would people ever discover that it exists and try it? It could also include some extra configuration, like multiple message ID functions and validators that people can test out, even if it can't be used like that in production.
It's notable that writing Go code just isn't an option for many of us. The dotnet ecosystem can't keep up with the development of Kubo and libp2p, so we've been bootstrapping Kubo and using the RPC HTTP API instead. Until a stable WASM/WASI implementation of libp2p comes along, we can't even begin to move to this new library-oriented story. With that in mind, this option suggested above by @Jorropo seems like the cleanest move:
I see various comments, but is https://github.com/libp2p/go-libp2p-daemon the correct thing to use now? Remember people land here from CLI and HTTP use. It might make sense to point to "please use this CLI/HTTP" in Kubo's deprecation warning for the pubsub command.
Triage notes:
I came across this issue when getting a deprecation notice after using an
The problem with "option 1" above is that it forces developers to use Go for their own applications. An HTTP API provides a fairly universal, language-agnostic interface. My understanding from the above thread is that the only technical problem with exposing the functionality over HTTP is the need for callbacks, so I'd like to propose another option: use WebSockets. You subscribe to a topic by making an HTTP connection to an API endpoint, upgrading the connection to WebSocket, and then messages on the topics you're subscribed to get broadcast over that connection. This would provide a similar language-agnostic bridge to the functionality without forcing developers to write the pubsub parts of their applications in Go.
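To make that concrete, here is a rough sketch of what such a WebSocket subscribe endpoint could look like on the Go side. This is not an existing Kubo endpoint; it assumes gorilla/websocket plus go-libp2p-pubsub, and all names are invented for illustration:

```go
package bridge

import (
	"net/http"

	"github.com/gorilla/websocket"
	pubsub "github.com/libp2p/go-libp2p-pubsub"
)

var upgrader = websocket.Upgrader{}

// subscribeHandler upgrades an HTTP request to a WebSocket and streams every
// message published on the requested topic to the client.
func subscribeHandler(ps *pubsub.PubSub) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		topicName := r.URL.Query().Get("topic")

		topic, err := ps.Join(topicName)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		sub, err := topic.Subscribe()
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		defer sub.Cancel()

		conn, err := upgrader.Upgrade(w, r, nil)
		if err != nil {
			return
		}
		defer conn.Close()

		// Forward each pubsub message to the WebSocket client as a binary frame.
		for {
			msg, err := sub.Next(r.Context())
			if err != nil {
				return
			}
			if err := conn.WriteMessage(websocket.BinaryMessage, msg.Data); err != nil {
				return
			}
		}
	}
}
```

A client in any language could then open ws://.../subscribe?topic=... and read frames, while publishing could stay on a plain POST endpoint, so no callbacks would be needed for the subscribe path.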
Yes, this is what I think works. Currently you need to implement this WebSocket upgrade in Go. I managed to get an experiment working like this (Go API and Python client). It would be nice to have some shared CLI to use and develop against.
I agree that WebSockets are probably the best way to handle pubsub. For anybody with more success than me in getting started building kubo components, I do recommend ZMQ (ZeroMQ), a messaging library built on top of TCP that already has pubsub functionality built in.
https://github.com/libp2p/go-libp2p-pubsub/releases/tag/v0.12.0 introduced GossipSub v1.2. As noted in #9684 (comment), we no longer need to maintain interop of opinionated settings between kubo and js-ipfs, reducing the cost of maintenance to an acceptable level. This enables us to pick up #9684, update to the latest version, and switch pubsub from deprecated back to an experimental feature in one of the future releases.
Kubo's PubSub RPC API (/api/v0/pubsub/*) provides access to a somewhat reliable multicast protocol. However, it has many issues:
Lack of a real contract on the reliability of messages
The message reliability contract is more or less: whatever go-libp2p-pubsub implements (an attempt to avoid storms).
First point: go-libp2p-pubsub has a bunch of options and callbacks you can configure to change the behaviour of the mesh (see the Production examples of correct usage of go-libp2p-pubsub section below). These options are not currently configurable, and it's not just a question of exposing them: the really impactful options are the ones locked behind callbacks, like validators.
Two potential solutions: have clients maintain a websocket, grpc, or similar stream open with Kubo, so that when Kubo receives a callback from go-libp2p-pubsub it can forward the arguments over the API stream and wait for the response; or add something like WASM to run custom application code inside Kubo, so that you could configure WASM blobs which implement the validator you want. The latter is much harder than just throwing in a WASM interpreter and writing a few hundred SLOCs of glue code, because most validators you would want to write need to access and store some application-related state (for example, in a CRDT application, do not relay messages that advertise a HEAD that is lower than the currently known HEAD).
Second point: our current implementation of message deduplication uses a bounded cache to find duplicates. If the mesh gets wider than the cache size, you can reach an exponential broadcast-storm-like event: #9665. Sadly, linking back to the first point, even though the fix is supposed to be transparent and implements visibly similar message-deduping logic (except without a bounded size), it makes our interop tests very flaky and thus might break various things in the ecosystem.
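For completeness, the deduplication window is also something that can only be tuned when constructing the router yourself; a hedged sketch, assuming the WithSeenMessagesTTL option in go-libp2p-pubsub (worth checking against the library version you use):

```go
package example

import (
	"context"
	"time"

	pubsub "github.com/libp2p/go-libp2p-pubsub"
	"github.com/libp2p/go-libp2p/core/host"
)

// newRouter builds a gossipsub instance with an explicit "seen messages" TTL,
// which controls how long message IDs are remembered for deduplication.
func newRouter(ctx context.Context, h host.Host) (*pubsub.PubSub, error) {
	return pubsub.NewGossipSub(ctx, h,
		// Remember seen message IDs for two minutes instead of the default.
		pubsub.WithSeenMessagesTTL(2*time.Minute),
	)
}
```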
Confusing Architecture
I have had more discussions than I have fingers with various people who complain that Kubo pubsub does not work, that they never receive messages.
Almost always the issue is that they are running IPFS HTTP clients in the browser, open two browser tabs, and then try to receive messages from the other tab.
This does not work because Kubo does not think of the two clients as two clients; from Kubo's point of view, the HTTP API is remote-controlling the Kubo node. The fact that the tabs are different tabs or different browser instances is not taken into account: as far as Kubo can see, the messages are sent by the same node (itself), and it does not return your own messages to you because receiving messages you sent yourself is confusing.
This is a perfectly valid use case, just not what the API was designed to do (the way to implement this is to use js-libp2p in the browser; your browser node would then use floodsub to a local Kubo node, with messages going through the libp2p swarm instead of the HTTP API).
Future of the API
Currently the pubsub API is not in a good place and correctly advertises this:
Our current team goals are to move away from the maintenance costs of the two ABIs (HTTP & Go) for people who want to build applications on top of IPFS, by providing a consistent Go ABI story (go-libipfs) and a comprehensive example collection on how to use it (go-libipfs/examples). Fixing the PubSub API requires lots of work which does not align with these goals, and thus does not justify allocating much time when engineering time is at a premium.
go-libp2p-pubsub's Go API is already competent and capable of satisfying the needs of consumers, as proven by the Production examples of correct usage of go-libp2p-pubsub section below. go-libp2p-pubsub will continue to stay part of libp2p, given that it really has very little to do with IPFS and can be used by any libp2p project (ETH2 for example).
Ways for creating a soft landing
To ease the pain of people currently using the PubSub Kubo RPC API, we could:
provide an example of how to use go-libp2p-pubsub if that is useful (TBD where that example will live, but it could be something like a full-example in libp2p/go-libp2p-pubsub that is validated as part of CI).
Production examples of correct usage of go-libp2p-pubsub
For good examples of how to use go-libp2p-pubsub effectively, see things like:
Tasks