Skywire Mainnet - Public Test Phase
This is a public testing version of the Skywire mainnet, intended only for developers to find bugs. It is not yet intended to replace the testnet, and miners should not install this software on their miners or they may lose their reward eligibility.
The software is still under heavy development and the current version is intended for public testing purposes only. A GUI and guides on using Skywire, developing applications on Skywire, and contribution policies will follow in the near future. For now, developers can use this version to test the functionality and file bug reports to help development.
Skywire is a decentralized and private network. It separates the data and control planes of the network and assigns the tasks of network coordination and administration to dedicated services, while the nodes follow and execute the rules created by the control plane.
The core of Skywire is the Skywire visor, which hosts applications and is the gateway to the network. It establishes connections to other nodes, called transports, requests the setup of routes, and forwards packets for other nodes on a route. The Skywire visor exposes an API that applications use to access Skywire's networking protocol.
To detach control plane tasks from the network nodes, three other services maintain a picture of the network topology, calculate routes (currently based on the number of hops, to be extended to other metrics later), and set the routing rules on the nodes.
The transport discovery maintains a picture of the network topology by allowing Skywire visors to advertise the transports they have established with other nodes. It also allows visors to upload a status indicating whether a given transport is currently working.
On the basis of this information, the route finder calculates the most efficient route through the network. Nodes request a route to a given public key, and the route finder returns the transports that packets will be sent over to reach the intended node.
This information is sent from a node to the Setup Node, which sets the routing rules on all nodes along a route. Skywire visors determine which nodes they accept routing rules from, so only a whitelisted node can send routing rules to a node in the network. The only information a Skywire visor gets for routing is a Routing ID and an associated rule that defines which transport to forward a packet to (or whether to consume the packet). Nodes along a route therefore only know the previous and next hop, not where the packet originates from or where it is sent to. Skywire supports source routing, so nodes can specify the path a packet is supposed to take through the network.
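Both control-plane services can also be queried by hand with the skywire-cli tool described later in this document (see its rtfind and tpdisc sub-commands). The exact argument forms below are assumptions, so check each sub-command's --help output:
# Ask the route finder for available routes between two nodes (argument form assumed)
$ skywire-cli rtfind <local-public-key> <remote-public-key>
# Ask the transport discovery for transports of a given edge public key (argument form assumed)
$ skywire-cli tpdisc <edge-public-key>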
There are currently two types of transports that nodes can use. The messaging transport is a transport between two nodes that uses an intermediary messaging server to relay packets between them. Connections to specific nodes and to messaging servers are facilitated by a discovery service that allows nodes to advertise the messaging servers over which they can be contacted. This transport is used by the setup node to send routing rules and can be used by other applications as well; it also allows nodes behind NATs to communicate. The second transport type is TCP, which sets up a connection between two servers with public IPs. More transport types will be supported in the future, and custom transport implementations can be written for specific use cases.
Skywire requires a version of golang with go modules support.
# Clone.
$ git clone https://github.com/SkycoinProject/skywire
$ cd skywire
$ git checkout mainnet
# Build
$ make build # installs all dependencies, builds binaries and apps
Note: the build can be customized with the environment variable OPTS (default value: GO111MODULE=on).
E.g.
$ export OPTS="GO111MODULE=on GOOS=darwin"
$ make
# or
$ OPTS="GSO111MODULE=on GOOS=linux GOARCH=arm" make
Install skywire-visor, skywire-cli, hypervisor and SSH-cli
$ make install # compiles and installs all binaries
Generate a default JSON config
$ skywire-cli node gen-config
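If the config should live somewhere other than the current directory, the -o flag (used again in the dockerized-node recipe below) sets the output path:
$ skywire-cli node gen-config -o /path/to/skywire-config.json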
skywire-visor hosts apps, proxies their requests to remote nodes, and exposes a communication API that apps can use to implement communication protocols. App binaries are spawned by the node; communication between the node and an app is performed via unix pipes provided on app startup.
# Run skywire-visor. It takes one argument: the path of a configuration file (`skywire-config.json` if unspecified).
$ skywire-visor skywire-config.json
# Or run skywire-visor in a docker container
$ make docker-run
The skywire-cli tool is used to control the skywire-visor. Refer to the help menu for usage:
$ skywire-cli -h
# Command Line Interface for skywire
#
# Usage:
# skywire-cli [command]
#
# Available Commands:
# help Help about any command
# mdisc Contains sub-commands that interact with a remote Messaging Discovery
# node Contains sub-commands that interact with the local Skywire Visor
# rtfind Queries the Route Finder for available routes between two nodes
# tpdisc Queries the Transport Discovery to find transport(s) of given transport ID or edge public key
#
# Flags:
# -h, --help help for skywire-cli
#
# Use "skywire-cli [command] --help" for more information about a command.
After skywire-visor is up and running with the default environment, the default apps are run with the configuration specified in skywire-config.json. Refer to the following for usage of the default apps:
In order for a local Skywire App to communicate with an App running on a remote Skywire visor, a transport to that remote Skywire visor needs to be established.
Transports can be established via skywire-cli.
# Establish transport to `0276ad1c5e77d7945ad6343a3c36a8014f463653b3375b6e02ebeaa3a21d89e881`.
$ skywire-cli node add-tp 0276ad1c5e77d7945ad6343a3c36a8014f463653b3375b6e02ebeaa3a21d89e881
# List established transports.
$ skywire-cli node ls-tp
An app is a generic binary that can be executed by the node. On app startup, the node opens a pair of unix pipes that are used for communication between the app and the node. The app package exposes a communication API over these pipes.
// Config defines configuration parameters for App
&app.Config{AppName: "helloworld", AppVersion: "1.0", ProtocolVersion: "0.0.1"}
// Setup setups app using default pair of pipes
func Setup(config *Config) (*App, error) {}
// Accept awaits for incoming loop confirmation request from a Node and
// returns net.Conn for a received loop.
func (app *App) Accept() (net.Conn, error) {}
// Addr implements net.Addr for App connections.
&Addr{PubKey: pk, Port: 12}
// Dial sends create loop request to a Node and returns net.Conn for created loop.
func (app *App) Dial(raddr *Addr) (net.Conn, error) {}
// Close implements io.Closer for App.
func (app *App) Close() error {}
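As a sketch of how these pieces fit together, the following minimal app accepts loops from remote nodes and echoes whatever it receives. The import path is an assumption based on this repository's layout, and error handling is kept to a minimum.
package main

import (
	"log"

	// Assumed import path; adjust to wherever the `app` package lives in your checkout.
	"github.com/SkycoinProject/skywire/pkg/app"
)

func main() {
	// Identify the app to the node, mirroring the Config snippet above.
	config := &app.Config{AppName: "helloworld", AppVersion: "1.0", ProtocolVersion: "0.0.1"}

	// Setup opens the default pair of unix pipes provided by the node on app startup.
	helloworld, err := app.Setup(config)
	if err != nil {
		log.Fatal("setup failed: ", err)
	}
	defer helloworld.Close()

	for {
		// Accept blocks until a remote node confirms a loop to this app.
		conn, err := helloworld.Accept()
		if err != nil {
			log.Fatal("accept failed: ", err)
		}

		go func() {
			defer conn.Close()
			buf := make([]byte, 1024)
			n, err := conn.Read(buf)
			if err != nil {
				log.Println("read failed: ", err)
				return
			}
			log.Printf("message from %s: %s", conn.RemoteAddr(), buf[:n])
			// Echo the payload back over the same loop; an outgoing loop to
			// another node would be opened with helloworld.Dial(&app.Addr{...}).
			if _, err := conn.Write(buf[:n]); err != nil {
				log.Println("write failed: ", err)
			}
		}()
	}
}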
$ make test
Options for go test can be customized with the $TEST_OPTS variable.
E.g.
$ export TEST_OPTS="-race -tags no_ci -timeout 90s -v"
$ make test
By default, all log messages are disabled during tests. To enable them, set the $TEST_LOGGING_LEVEL variable.
Possible values:
- "debug"
- "info", "notice"
- "warn", "warning"
- "error"
- "fatal", "critical"
- "panic"
E.g.
$ export TEST_LOGGING_LEVEL="info"
$ go clean -testcache || go test ./pkg/transport -v -run ExampleManager_CreateTransport
$ unset TEST_LOGGING_LEVEL
$ go clean -testcache || go test ./pkg/transport -v
To collect logs in syslog during integration tests, use the $SYSLOG_OPTS variable.
E.g.
$ make run_syslog ## run syslog-ng in docker container with logs mounted to /tmp/syslog
$ export SYSLOG_OPTS='--syslog localhost:514'
$ make integration-run-messaging ## or other integration-run-* goal
$ sudo cat /tmp/syslog/messages ## collected logs from NodeA, NodeB, NodeC instances
This software comes with an updater, located in this repo: https://github.com/SkycoinProject/skywire-updater. Follow the instructions in its README.md for further information. For now it can be used via a CLI; it will later be usable via the manager interface.
There are two make goals for running a dockerized skywire-visor in the development environment.
$ make docker-run
This will:
- create the docker image skywire-runner for running skywire-visor
- create the docker network SKYNET (can be customized)
- create the docker volume ./node with linux binaries and apps
- create the container SKY01 and start it (can be customized)
./node
├── apps # node `apps` compiled with DOCKER_OPTS
│ ├── skychat.v1.0 #
│ ├── helloworld.v1.0 #
│ ├── socksproxy-client.v1.0 #
│ ├── socksproxy.v1.0 #
│ ├── SSH-client.v1.0 #
│ └── SSH.v1.0 #
├── local # **Created inside docker**
│ ├── skychat # according to "local_path" in skywire-config.json
│ ├── socksproxy #
│ └── SSH #
├── PK # contains public key of node
├── skywire # db & logs. **Created inside docker**
│ ├── routing.db #
│ └── transport_logs #
├── skywire-config.json # config of node
└── skywire-visor # `skywire-visor` binary compiled with DOCKER_OPTS
The directory ./node is mounted as a docker volume for the skywire-visor container. Inside the docker container it is mounted on /sky. The structure of ./node partially replicates the structure of the project root directory.
Note that files created inside the docker container have ownership root:root, so if you want to rm -rf ./node (or perform other file operations on it) you will need to sudo it.
Look at "Recipes: Creating new dockerized node" for further details.
$ make refresh-node
This will:
- stop the running node
- recompile skywire-visor for the container
- start the node again
DOCKER_IMAGE: the docker image for running skywire-visor.
Default value: skywire-runner (built with make docker-image)
Other images can be used. E.g.
$ DOCKER_IMAGE=golang make docker-run # buildpack-deps:stretch-scm is OK too
Name of the virtual network for skywire-visor. Default value: SKYNET
Name of the container for skywire-visor. Default value: SKY01
DOCKER_OPTS: go build options for binaries and apps in the container.
Default value: "GO111MODULE=on GOOS=linux"
$ cat ./node/skywire-config.json|grep static_public_key |cut -d ':' -f2 |tr -d '"'','' '
# 029be6fa68c13e9222553035cc1636d98fb36a888aa569d9ce8aa58caa2c651b45
$ docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' SKY01
# 192.168.112
$ firefox http://$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' SKY01):8000
If you need more dockerized nodes, or need to customize a node, here is how to create a new one.
# 1. We need a folder for docker volume
$ mkdir /tmp/SKYNODE
# 2. compile `skywire-visor`
$ GO111MODULE=on GOOS=linux go build -o /tmp/SKYNODE/skywire-visor ./cmd/skywire-visor
# 3. compile apps
$ GO111MODULE=on GOOS=linux go build -o /tmp/SKYNODE/apps/skychat.v1.0 ./cmd/apps/skychat
$ GO111MODULE=on GOOS=linux go build -o /tmp/SKYNODE/apps/helloworld.v1.0 ./cmd/apps/helloworld
$ GO111MODULE=on GOOS=linux go build -o /tmp/SKYNODE/apps/socksproxy.v1.0 ./cmd/apps/therealproxy
$ GO111MODULE=on GOOS=linux go build -o /tmp/SKYNODE/apps/SSH.v1.0 ./cmd/apps/SSH
$ GO111MODULE=on GOOS=linux go build -o /tmp/SKYNODE/apps/SSH-client.v1.0 ./cmd/apps/SSH-client
# 4. Create skywire-config.json for node
$ skywire-cli node gen-config -o /tmp/SKYNODE/skywire-config.json
# 2019/03/15 16:43:49 Done!
$ tree /tmp/SKYNODE
# /tmp/SKYNODE
# ├── apps
# │ ├── skychat.v1.0
# │ ├── helloworld.v1.0
# │ ├── socksproxy.v1.0
# │ ├── SSH-client.v1.0
# │ └── SSH.v1.0
# ├── skywire-config.json
# └── skywire-visor
# So far so good. We have prepared the docker volume. Now we can run:
$ docker run -it -v /tmp/SKYNODE:/sky --network=SKYNET --name=SKYNODE skywire-runner bash -c "cd /sky && ./skywire-visor"
# [2019-03-15T13:55:08Z] INFO [messenger]: Opened new link with the server 02a49bc0aa1b5b78f638e9189be4ed095bac5d6839c828465a8350f80ac07629c0
# [2019-03-15T13:55:08Z] INFO [messenger]: Updating discovery entry
# [2019-03-15T13:55:10Z] INFO [skywire]: Connected to messaging servers
# [2019-03-15T13:55:10Z] INFO [skywire]: Starting skychat.v1.0
# [2019-03-15T13:55:10Z] INFO [skywire]: Starting RPC interface on 127.0.0.1:3435
# [2019-03-15T13:55:10Z] INFO [skywire]: Starting socksproxy.v1.0
# [2019-03-15T13:55:10Z] INFO [skywire]: Starting SSH.v1.0
# [2019-03-15T13:55:10Z] INFO [skywire]: Starting packet router
# [2019-03-15T13:55:10Z] INFO [router]: Starting router
# [2019-03-15T13:55:10Z] INFO [trmanager]: Starting transport manager
# [2019-03-15T13:55:10Z] INFO [router]: Got new App request with type Init: {"app-name":"skychat","app-version":"1.0","protocol-version":"0.0.1"}
# [2019-03-15T13:55:10Z] INFO [router]: Handshaked new connection with the app skychat.v1.0
# [2019-03-15T13:55:10Z] INFO [skychat.v1.0]: 2019/03/15 13:55:10 Serving HTTP on :8000
# [2019-03-15T13:55:10Z] INFO [router]: Got new App request with type Init: {"app-name":"SSH","app-version":"1.0","protocol-version":"0.0.1"}
# [2019-03-15T13:55:10Z] INFO [router]: Handshaked new connection with the app SSH.v1.0
# [2019-03-15T13:55:10Z] INFO [router]: Got new App request with type Init: {"app-name":"socksproxy","app-version":"1.0","protocol-version":"0.0.1"}
# [2019-03-15T13:55:10Z] INFO [router]: Handshaked new connection with the app socksproxy.v1.0
Note that in this example docker runs in non-detached mode, which can be useful in some scenarios.
Instead of skywire-runner you can use:
- golang, buildpack-deps:stretch-scm "as is"
- debian, ubuntu - after apt-get install ca-certificates in them
Look in skywire-runner.Dockerfile for an example.
export SW_NODE_A=127.0.0.1
export SW_NODE_A_PK=$(cat ./skywire-config.json|grep static_public_key |cut -d ':' -f2 |tr -d '"'','' ')
export SW_NODE_B=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' SKY01)
export SW_NODE_B_PK=$(cat ./node/skywire-config.json|grep static_public_key |cut -d ':' -f2 |tr -d '"'','' ')
The idea of this test comes from the Erlang classics: https://youtu.be/uKfKtXYLG78?t=120
# Setup: run skywire-visors on host and in docker
$ make run
$ make docker-run
# Open in browser skychat application
$ firefox http://$SW_NODE_B:8000 &
# add transport
$ ./skywire-cli node add-tp $SW_NODE_B_PK
# "Hello Mike!" - "Hello Joe!" - "System is working!"
$ curl --data {'"recipient":"'$SW_NODE_A_PK'", "message":"Hello Mike!"}' -X POST http://$SW_NODE_B:8000/message
$ curl --data {'"recipient":"'$SW_NODE_B_PK'", "message":"Hello Joe!"}' -X POST http://$SW_NODE_A:8000/message
$ curl --data {'"recipient":"'$SW_NODE_A_PK'", "message":"System is working!"}' -X POST http://$SW_NODE_B:8000/message
# Teardown
$ make stop && make docker-stop