This document lists the experimental features in go-ipfs. These features, commands, and APIs aren't mature, and you shouldn't rely on them. Once they reach maturity, they will be mentioned in the changelog and release posts. If they don't reach maturity, the same applies, and their code is removed.

Subscribe to ipfs#3397 to get updates.

When you add a new experimental feature to go-ipfs or change an experimental feature, you MUST make a PR updating this document, and link the PR in the above issue.
- ipfs pubsub
- Raw leaves for unixfs files
- ipfs filestore
- ipfs urlstore
- Private Networks
- ipfs p2p
- p2p http proxy
- Plugins
- Directory Sharding / HAMT
- IPNS PubSub
- AutoRelay
- Strategic Providing
- Graphsync
- Noise
- Accelerated DHT Client
## ipfs pubsub

### State

Candidate, disabled by default but will be enabled by default in 0.6.0.

### In Version

0.4.5

### How to enable

Run your daemon with the `--enable-pubsub-experiment` flag. Then use the `ipfs pubsub` commands.

Configuration documentation can be found in ./config.md.
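For example, a minimal sketch of passing messages between two nodes (the topic name and message are made up):

```
# on node A: subscribe to a topic and print incoming messages
> ipfs pubsub sub my-topic

# on node B: publish a message to the same topic
> ipfs pubsub pub my-topic "hello from B"
```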
### Road to being a real feature

- Needs to not impact peers who don't use pubsub: libp2p/go-libp2p-pubsub#332
## Raw leaves for unixfs files

Allows files to be added with no formatting in the leaf nodes of the graph.

### State

Stable, but not used by default.

### In Version

0.4.5

### How to enable

Use the `--raw-leaves` flag when calling `ipfs add`. This will save some space when adding files.
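For example (file name hypothetical; the resulting CIDs depend on your file's contents):

```
> ipfs add notes.txt               # leaves wrapped in unixfs protobuf nodes
> ipfs add --raw-leaves notes.txt  # leaves stored as raw blocks, producing a different CID
```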
### Road to being a real feature

Enabling this feature by default will change the CIDs (hashes) of all newly imported files and will prevent newly imported files from deduplicating against previously imported files. While we do intend on enabling this by default, we plan on doing so once we have a large batch of "hash-changing" features we can enable all at once.
## ipfs filestore

Allows files to be added without duplicating the space they take up on disk.

### State

Experimental.

### In Version

0.4.7

### How to enable

Modify your ipfs config:

```
ipfs config --json Experimental.FilestoreEnabled true
```

Then restart your IPFS node to reload your config.

Finally, when adding files with `ipfs add`, pass the `--nocopy` flag to use the filestore instead of copying the files into your local IPFS repo.
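A sketch of typical usage (file name hypothetical):

```
> ipfs add --nocopy ./large-video.mp4  # references the file in place instead of copying it
> ipfs filestore ls                    # list blocks backed by files on disk
> ipfs filestore verify                # check that the referenced files are still intact
```

Note that because blocks reference the original files, moving or modifying those files will break the corresponding objects.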
### Road to being a real feature

- Needs more people to use it and report on how well it works.
- Need to address error states and failure conditions.
- Need to write docs on usage, advantages, and disadvantages.
- Need to merge utility commands to aid in maintenance and repair of the filestore.
## ipfs urlstore

Allows ipfs to retrieve block contents via a URL instead of storing them in the datastore.

### State

Experimental.

### In Version

v0.4.17

### How to enable

Modify your ipfs config:

```
ipfs config --json Experimental.UrlstoreEnabled true
```

Then add a file at a specific URL using `ipfs urlstore add <url>`.
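For example (URL hypothetical; the content behind the URL must stay stable, and the server should support HTTP range requests):

```
> ipfs urlstore add https://example.com/dataset.csv
```

As with the filestore, blocks only reference the remote data, so if the content behind the URL changes or disappears, the corresponding objects break.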
### Road to being a real feature

- Needs more people to use it and report on how well it works.
- Need to address error states and failure conditions.
- Need to write docs on usage, advantages, and disadvantages.
- Need to implement caching.
- Need to add metrics to monitor performance.
## Private Networks

Allows ipfs to only connect to other peers who have a shared secret key.

### State

Stable, but not quite ready for prime-time.

### In Version

0.4.7

### How to enable

Generate a pre-shared key using ipfs-swarm-key-gen:

```
go get github.com/Kubuxu/go-ipfs-swarm-key-gen/ipfs-swarm-key-gen
ipfs-swarm-key-gen > ~/.ipfs/swarm.key
```

To join a given private network, get the key file from someone in the network and save it to `~/.ipfs/swarm.key` (if you are using a custom `$IPFS_PATH`, put it there instead).
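The key file is plain text. A sketch of its layout (the last line of a real file is 64 hexadecimal characters; never share a real key publicly):

```
/key/swarm/psk/1.0.0/
/base16/
<64 hexadecimal characters>
```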
When using this feature, you will not be able to connect to the default bootstrap nodes (since they aren't part of your private network), so you will need to set up your own bootstrap nodes.

First, to prevent your node from even trying to connect to the default bootstrap nodes, run:

```
ipfs bootstrap rm --all
```

Then add your own bootstrap peers with:

```
ipfs bootstrap add <multiaddr>
```

For example:

```
ipfs bootstrap add /ip4/104.236.76.40/tcp/4001/p2p/QmSoLV4Bbm51jM9C4gDYZQ9Cy3U6aXMJDAbzgu2fzaDs64
```

Bootstrap nodes are no different from all other nodes in the network apart from the function they serve.

To be extra cautious, you can also set the `LIBP2P_FORCE_PNET` environment variable to `1` to force the use of private networks. If no private network is configured, the daemon will fail to start.
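For example:

```
> export LIBP2P_FORCE_PNET=1
> ipfs daemon   # refuses to start unless a swarm.key is configured
```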
### Road to being a real feature

- Needs more people to use it and report on how well it works.
- More documentation.
- Needs better tooling/UX.
## ipfs p2p

Allows tunneling of TCP connections through libp2p streams. If you've ever used port forwarding with SSH (the `-L` option in OpenSSH), this feature is quite similar.

### State

Experimental, will be stabilized in 0.6.0.

### In Version

0.4.10

### How to enable

The `p2p` command needs to be enabled in the config:

```
> ipfs config --json Experimental.Libp2pStreamMounting true
```

### How to use

Netcat example:

First, pick a protocol name for your application. Think of the protocol name as a port number, just significantly more user-friendly. In this example, we're going to use `/x/kickass/1.0`.

Setup:

- A "server" node with peer ID `$SERVER_ID`
- A "client" node.

On the "server" node:

First, start your application and have it listen for TCP connections on port `$APP_PORT`.

Then, configure the p2p listener by running:

```
> ipfs p2p listen /x/kickass/1.0 /ip4/127.0.0.1/tcp/$APP_PORT
```

This will configure IPFS to forward all incoming `/x/kickass/1.0` streams to `127.0.0.1:$APP_PORT` (opening a new connection to `127.0.0.1:$APP_PORT` per incoming stream).
On the "client" node:
First, configure the client p2p dialer, so that it forwards all inbound
connections on 127.0.0.1:SOME_PORT
to the server node listening
on /x/kickass/1.0
.
> ipfs p2p forward /x/kickass/1.0 /ip4/127.0.0.1/tcp/$SOME_PORT /p2p/$SERVER_ID
Next, have your application open a connection to 127.0.0.1:$SOME_PORT
. This
connection will be forwarded to the service running on 127.0.0.1:$APP_PORT
on
the remote machine. You can test it with netcat:
On "server" node:
> nc -v -l -p $APP_PORT
On "client" node:
> nc -v 127.0.0.1 $SOME_PORT
You should now see that a connection has been established and be able to exchange messages between netcat instances.
(note that depending on your netcat version you may need to drop the -v
flag)
SSH example:

Setup:

- A "server" node with peer ID `$SERVER_ID`, running an ssh server on the default port.
- A "client" node.

You can get `$SERVER_ID` by running `ipfs id -f "<id>\n"`.

First, on the "server" node:

```
ipfs p2p listen /x/ssh /ip4/127.0.0.1/tcp/22
```

Then, on the "client" node:

```
ipfs p2p forward /x/ssh /ip4/127.0.0.1/tcp/2222 /p2p/$SERVER_ID
```

You should now be able to connect to your ssh server through a libp2p connection with `ssh [user]@127.0.0.1 -p 2222`.
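When you're done, you can inspect and tear down the tunnels. A sketch (see `ipfs p2p --help` for the exact options in your version):

```
> ipfs p2p ls          # list active listeners and forwards
> ipfs p2p close --all # close all of them
```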
### Road to being a real feature

- More documentation
## p2p http proxy

Allows proxying of HTTP requests over p2p streams. This allows serving any standard HTTP app over p2p streams.

### State

Experimental.

### In Version

0.4.19

### How to enable

The `p2p` command needs to be enabled in the config:

```
> ipfs config --json Experimental.Libp2pStreamMounting true
```

On the client, the p2p HTTP proxy needs to be enabled in the config:

```
> ipfs config --json Experimental.P2pHttpProxy true
```
### How to use

Netcat example:

First, pick a protocol name for your application. Think of the protocol name as a port number, just significantly more user-friendly. In this example, we're going to use `/http`.

Setup:

- A "server" node with peer ID `$SERVER_ID`
- A "client" node.

On the "server" node:

First, start your application and have it listen for TCP connections on port `$APP_PORT`.

Then, configure the p2p listener by running:

```
> ipfs p2p listen --allow-custom-protocol /http /ip4/127.0.0.1/tcp/$APP_PORT
```

This will configure IPFS to forward all incoming `/http` streams to `127.0.0.1:$APP_PORT` (opening a new connection to `127.0.0.1:$APP_PORT` per incoming stream).
On the "client" node:
Next, have your application make a http request to 127.0.0.1:8080/p2p/$SERVER_ID/http/$FORWARDED_PATH
. This
connection will be forwarded to the service running on 127.0.0.1:$APP_PORT
on
the remote machine (which needs to be a http server!) with path $FORWARDED_PATH
. You can test it with netcat:
On "server" node:
> echo -e "HTTP/1.1 200\nContent-length: 11\n\nIPFS rocks!" | nc -l -p $APP_PORT
On "client" node:
> curl http://localhost:8080/p2p/$SERVER_ID/http/
You should now see the resulting HTTP response: IPFS rocks!
We also support the use of protocol names of the form `/x/$NAME/http`, where `$NAME` doesn't contain any "/" characters.
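For example, a sketch with a (hypothetical) application-specific protocol name `/x/myapp/http`:

```
# on the server (names under /x/ don't need --allow-custom-protocol):
> ipfs p2p listen /x/myapp/http /ip4/127.0.0.1/tcp/$APP_PORT

# on the client:
> curl http://localhost:8080/p2p/$SERVER_ID/x/myapp/http/
```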
### Road to being a real feature

- Needs p2p streams to graduate from experiments.
- Needs more people to use it and report on how well it works / fits use cases.
- More documentation.
- Needs better integration with the subdomain gateway feature.
## Plugins

### In Version

0.4.11

### State

Experimental.

Plugins allow adding functionality without the need to recompile the daemon.

### How to enable

See the Plugin docs.
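As a rough sketch of the workflow (paths and package name hypothetical; a plugin must be built with the same Go version and dependency versions as the daemon itself):

```
# build a plugin package as a Go shared object
> go build -buildmode=plugin -o myplugin.so ./myplugin

# install it into the repo's plugin directory and restart the daemon
> mkdir -p ~/.ipfs/plugins
> cp myplugin.so ~/.ipfs/plugins/
> ipfs daemon   # plugins are loaded at startup
```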
### Road to being a real feature

- More plugins and plugin types.
- A way to reliably build and distribute plugins.
- Better support for platforms other than Linux and macOS.
- Feedback on stability.
## Directory Sharding / HAMT

### In Version

0.4.8

### State

Experimental.

Allows creating directories with an unlimited number of entries.

Caveats:

- Right now it is a GLOBAL FLAG that will impact the final CID of all directories produced by `ipfs add` (even the small ones).
- Currently, the size of unixfs directories is limited by the maximum block size.

### How to enable

```
ipfs config --json Experimental.ShardingEnabled true
```
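A minimal sketch of usage (directory name hypothetical):

```
> ipfs config --json Experimental.ShardingEnabled true
> ipfs add -r ./dir-with-many-thousands-of-entries
```

Because the flag is global, this will also change the CIDs of small directories added while it is enabled.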
### Road to being a real feature

- Make sure that objects that don't have to be sharded aren't.
- Generalize sharding and define a new layer between IPLD and IPFS.
## IPNS pubsub

### In Version

0.4.14:

- Introduced

0.5.0:

- No longer needs to use the DHT for the first resolution
- When discovering pubsub peers via the DHT, the DHT key is different from in previous versions
  - This leads to 0.5 IPNS pubsub peers and 0.4 IPNS pubsub peers not being able to find each other in the DHT
- Robustness improvements

### State

Experimental, default-disabled.

Utilizes pubsub for publishing ipns records in real time.

When it is enabled:

- IPNS publishers push records to a name-specific pubsub topic, in addition to publishing to the DHT.
- IPNS resolvers subscribe to the name-specific topic on first resolution and receive subsequently published records through pubsub in real time. This makes subsequent resolutions instant, as they are resolved through the local cache.

Both the publisher and the resolver nodes need to have the feature enabled for it to work effectively.

Note: While IPNS pubsub has been available since 0.4.14, it received major changes in 0.5.0. Users interested in this feature should upgrade to at least 0.5.0.

### How to enable

Run your daemon with the `--enable-namesys-pubsub` flag; this also enables pubsub in general.
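A minimal sketch (the CID is a placeholder):

```
# on both publisher and resolver:
> ipfs daemon --enable-namesys-pubsub

# on the publisher:
> ipfs name publish /ipfs/<cid>

# on the resolver (repeat resolutions are served from the local cache):
> ipfs name resolve <publisher-peer-id>
```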
### Road to being a real feature

- Needs more people to use it and report on how well it works.
- Pubsub enabled as a real feature.
## AutoRelay

### In Version

0.4.19

### State

Experimental, disabled by default.

Automatically discovers relays and advertises relay addresses when the node is behind an impenetrable NAT.

### How to enable

Modify your ipfs config:

```
ipfs config --json Swarm.EnableRelayHop false
ipfs config --json Swarm.EnableAutoRelay true
```

NOTE: Ensuring `Swarm.EnableRelayHop` is false is extremely important here. If you set it to true, you will act as a public relay for the rest of the network instead of using the public relays.
### Road to being a real feature

- Needs testing.
## Strategic Providing

### State

Experimental, disabled by default.

Replaces the existing provide mechanism with a robust, strategic provider system. Currently, enabling this option will cause the node to provide nothing.

### How to enable

Modify your ipfs config:

```
ipfs config --json Experimental.StrategicProviding true
```
### Road to being a real feature

- Needs real-world testing.
- Needs adoption.
- Needs to support all provider subsystem features:
  - provide nothing
  - provide roots
  - provide all
  - provide strategic
## Graphsync

### State

Experimental, disabled by default.

GraphSync is the next-gen graph exchange protocol for IPFS.

When this feature is enabled, IPFS will make files available over the graphsync protocol. However, IPFS will not currently use this protocol to fetch files.

### How to enable

Modify your ipfs config:

```
ipfs config --json Experimental.GraphsyncEnabled true
```
### Road to being a real feature

- We need to confirm that it can't be used to DoS a node. The server-side logic for GraphSync is quite complex and, if we're not careful, the server might end up performing unbounded work when responding to a malicious request.
## Noise

### State

Stable, enabled by default.

Noise is a libp2p transport based on the Noise Protocol Framework. While TLS remains the default transport in go-ipfs, Noise is easier to implement and is thus the "interop" transport between IPFS and libp2p implementations.
## Accelerated DHT Client

### In Version

0.9.0

### State

Experimental, default-disabled.

Utilizes an alternative DHT client that searches for and maintains more information about the network in exchange for being more performant.

When it is enabled:

- DHT operations should complete much faster than with it disabled.
- A batching reprovider system will be enabled, which takes advantage of some properties of the experimental client to very efficiently put provider records into the network.
- The standard DHT client (and server, if enabled) are run alongside the alternative client.
- The operations `ipfs stats dht` and `ipfs stats provide` will have different outputs:
  - `ipfs stats provide` only works when the accelerated DHT client is enabled and shows various statistics regarding the provider/reprovider system.
  - `ipfs stats dht` will default to showing information about the new client.
Caveats:

- Running the experimental client will likely result in more resource consumption (connections, RAM, CPU, bandwidth).
  - Users that are limited in the number of parallel connections their machines/networks can perform will likely suffer.
  - Currently, the resource usage is not smooth, as the client crawls the network in rounds and reproviding is similarly done in rounds.
  - Users who previously had a lot of content but were unable to advertise it on the network will see an increase in egress bandwidth as their nodes start to advertise all of their CIDs into the network. If you have lots of data entering your node that you don't want to advertise, consider using Reprovider Strategies to reduce the number of CIDs that you are reproviding. Similarly, if you are running a node that deals mostly with short-lived temporary data (e.g. you use a separate node for ingesting data than for storing and serving it), then you may benefit from using Strategic Providing to prevent advertising of data that you ultimately will not have.
- Currently, the DHT is not usable for queries for the first 5-10 minutes of operation as the routing table is being prepared. This means operations like searching the DHT for particular peers or content will not work during that window.
  - You can see if the DHT has been initially populated by running `ipfs stats dht`.
- Currently, the accelerated DHT client is not compatible with LAN-based DHTs and will not perform operations against them.
### How to enable

```
ipfs config --json Experimental.AcceleratedDHTClient true
```
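A sketch of enabling the client and checking on it:

```
> ipfs config --json Experimental.AcceleratedDHTClient true
# restart the daemon, then, once the initial crawl has had a few minutes to run:
> ipfs stats dht
```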
### Road to being a real feature

- Needs more people to use it and report on how well it works.
- Should be usable for queries (even if slower/less efficient) shortly after startup.
- Should be usable with non-WAN DHTs.