OpenFaaS provides event-driven compute using functions and also supports traditional microservices.
All services are either functions or microservices, and both are built into Docker images. Once an image is pushed to a registry, the CLI or REST API can be used to deploy it, and the endpoint becomes available on the OpenFaaS gateway.
The Gateway can be accessed through its REST API, via the CLI or through the UI. All services or functions get a default route exposed, but custom domains can also be used for each endpoint. Prometheus collects metrics which are available via the Gateway's API and which are used for auto-scaling.
By changing the URL for a function from /function/NAME to /async-function/NAME, an invocation can be run in a queue using NATS Streaming. You can also pass an optional callback URL via the X-Callback-Url header.
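For example, assuming the gateway is reachable at 127.0.0.1:8080 and the nodeinfo function is deployed (the callback URL below is a placeholder):

```shell
# Synchronous:  curl -d "" http://127.0.0.1:8080/function/nodeinfo
# Asynchronous: the gateway queues the request on NATS and replies immediately
#   curl -i -d "" http://127.0.0.1:8080/async-function/nodeinfo \
#     -H "X-Callback-Url: http://example.com/receive-result"

# The async route is the sync route with /function/ swapped for /async-function/:
echo "http://127.0.0.1:8080/function/nodeinfo" | sed 's#/function/#/async-function/#'
```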
The k3sup binary installs OpenFaaS using helm3 and its chart:
- Get k3sup for Mac, Windows or Linux:
curl -sLS https://get.k3sup.dev | sh
sudo install k3sup /usr/bin/
- Deploy OpenFaaS with a LoadBalancer
Since the Packet Labs configuration deploys MetalLB, we can deploy OpenFaaS and expose a LoadBalancer service for the OpenFaaS gateway:
k3sup app install openfaas --load-balancer
Follow the output at the end of the installation to test the deployment.
kubectl rollout status -n openfaas deploy/gateway
kubectl port-forward -n openfaas svc/gateway 8080:8080 &
# If basic auth is enabled, you can now log into your gateway:
PASSWORD=$(kubectl get secret -n openfaas basic-auth -o jsonpath="{.data.basic-auth-password}" | base64 --decode; echo)
echo -n $PASSWORD | faas-cli login --username admin --password-stdin
faas-cli store list
faas-cli store deploy nodeinfo
# Check for the Pod to become available ("Status: Ready")
faas-cli describe nodeinfo
echo verbose | faas-cli invoke nodeinfo
Now obtain the public endpoint for the OpenFaaS gateway by looking for the EXTERNAL-IP:
kubectl get svc -n openfaas
NAME               TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)          AGE
gateway-external   LoadBalancer   10.100.71.191   172.217.14.165   8080:31079/TCP   10m
This corresponds to the LoadBalancer created by the helm chart.
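With the EXTERNAL-IP in hand, the CLI can target the public gateway instead of the port-forward. A sketch, using the example IP from the output above:

```shell
# Assumption: EXTERNAL-IP taken from `kubectl get svc -n openfaas`
EXTERNAL_IP="172.217.14.165"
export OPENFAAS_URL="http://$EXTERNAL_IP:8080"
echo "$OPENFAAS_URL"

# Subsequent commands now target the public gateway, e.g.:
#   echo -n $PASSWORD | faas-cli login --username admin --password-stdin
#   faas-cli list
```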
If you don't already have the faas-cli, install it:
curl -sLS https://cli.openfaas.com | sh
chmod +x faas-cli
sudo mv faas-cli /usr/bin/
All functions need to be pushed to a registry, whether an in-cluster registry, a managed product, or the Docker Hub.
The Docker Hub is the easiest option, for example:
export OPENFAAS_PREFIX="alexellis2"
docker login --username $OPENFAAS_PREFIX
# List available templates
faas-cli template store list
# Create a Node.js 12 async/await function:
faas-cli new --lang node12 db-inserter
# We can use one file for all the functions
mv db-inserter.yml stack.yml
This gives us:
├── db-inserter
│   ├── handler.js
│   └── package.json
└── stack.yml
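The generated stack.yml will look roughly like this; the gateway URL and image prefix depend on your setup (the image name below assumes the OPENFAAS_PREFIX used earlier):

```yaml
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080

functions:
  db-inserter:
    lang: node12
    handler: ./db-inserter
    image: alexellis2/db-inserter:latest
```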
Example of handler.js:
"use strict"

module.exports = async (event, context) => {
  const result = {
    status: "Received input: " + JSON.stringify(event.body)
  };

  return context
    .status(200)
    .succeed(result);
}
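The handler can be exercised locally without a gateway by stubbing the event and context objects. This is a hypothetical sketch (not the actual of-watchdog template code), assuming only the chained status/succeed shape used above:

```javascript
// Stub context mimicking the template's chained API (assumed shape)
class Context {
  status(code) { this.statusCode = code; return this; }
  succeed(value) { this.result = value; return this; }
}

// Same handler body as in db-inserter/handler.js
const handler = async (event, context) => {
  const result = {
    status: "Received input: " + JSON.stringify(event.body)
  };
  return context.status(200).succeed(result);
};

const ctx = new Context();
handler({ body: { name: "test" } }, ctx).then(() => {
  console.log(ctx.statusCode, ctx.result.status);
  // prints: 200 Received input: {"name":"test"}
});
```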
Deploy:
# Build / push / deploy
faas-cli build
faas-cli push
faas-cli deploy
# Or all-in-one
faas-cli up
View the function on the OpenFaaS UI, or invoke it via faas-cli invoke db-inserter.
Seek out technical support on the OpenFaaS Slack.