The different versions have been combined into a single container image. The image with the `-lite` extension simply does not exist anymore, and all deployments can be done with just the base image. Since Postgres was the default before, you only need to change your image name if you were not using Postgres as your database: simply remove the `-lite` suffix.
Dropped sqlx SQLite in favor of Hiqlite
From this version on, Rauthy will not support a default SQLite anymore. Instead, it will use Hiqlite, which under the hood uses SQLite again and is another project of mine.
Hiqlite brings lots of advantages. It uses a few more resources than a direct, plain SQLite, but only ~10-15 MB of memory for small instances. In return, you get higher consistency and writes that never block during high traffic. It also reduces the latency for all read statements by a huge margin compared to the previous solution. Rauthy always enables the `dashboard` feature for Hiqlite, which will be available via the Hiqlite API port / server.
The biggest feature it brings, though, is the ability to run an HA cluster without any external dependencies. You can use Hiqlite on a single instance and it will "feel" the same as a plain SQLite, but you can also spin up 3 or 5 nodes to get High Availability without the need for an external database. It uses the Raft algorithm to sync data while still using just a simple SQLite under the hood. The internal design of Hiqlite has been optimized a lot to provide way higher throughput than you would normally get with a direct connection to a SQLite file. If you are interested in the internals, take a look at the hiqlite/README.md or hiqlite/ARCHITECTURE.md.
With these features, Hiqlite will always be the preferred database solution for Rauthy. You should really not spin up a dedicated Postgres instance just for Rauthy, because it would use far more resources than necessary. If you have a Postgres up and running anyway, you can still opt in to use it.
This was a very big migration and tens of thousands of lines of code have been changed. All tests are passing and a lot of additional checks have been included. I could not find any leftover issues or errors, but please let me know if you find something.
If you are using Rauthy with Postgres as the database, you don't need to do much. If, however, you use SQLite, no worries: Rauthy can handle the migration for you after adopting a few config variables. Even if you do the auto-migration from an existing SQLite to Hiqlite, Rauthy will keep the original SQLite file in place for additional safety, so you don't need to worry about a backup (as long as you set the config correctly, of course). A later release may do the cleanup work once everything has proven to work fine, or you can do it manually.
There are quite a few new config variables and some old ones are gone. What you need to set for the migration is explained below.
#####################################
############# BACKUPS ###############
#####################################
# When the auto-backup task should run.
# Accepts cron syntax:
# "sec min hour day_of_month month day_of_week year"
# default: "0 30 2 * * * *"
HQL_BACKUP_CRON="0 30 2 * * * *"
# Backups older than the configured days will be cleaned up on the
# S3 storage after the backup cron job.
# default: 30
#HQL_BACKUP_KEEP_DAYS=30
# Backups older than the configured days will be cleaned up locally
# after each `Client::backup()` and the cron job `HQL_BACKUP_CRON`.
# default: 3
#HQL_BACKUP_KEEP_DAYS_LOCAL=3
# If you ever need to restore from a backup, the process is simple.
# 1. Have the cluster shut down. This is probably the case anyway, if
# you need to restore from a backup.
# 2. Provide the backup file name on S3 storage with the
# HQL_BACKUP_RESTORE value.
# 3. Start up the cluster again.
# 4. After the restart, make sure to remove the HQL_BACKUP_RESTORE
# env value.
#HQL_BACKUP_RESTORE=
# Access values for the S3 bucket where backups will be pushed to.
#HQL_S3_URL=https://s3.example.com
#HQL_S3_BUCKET=my_bucket
#HQL_S3_REGION=example
#HQL_S3_PATH_STYLE=true
#HQL_S3_KEY=s3_key
#HQL_S3_SECRET=s3_secret
#####################################
############# CLUSTER ###############
#####################################
# Can be set to 'k8s' to try to split off the node id from the hostname
# when Hiqlite is running as a StatefulSet inside Kubernetes.
#HQL_NODE_ID_FROM=k8s
# The node id must exist in the nodes and there must always be
# at least one node with ID 1
# Will be ignored if `HQL_NODE_ID_FROM=k8s`
HQL_NODE_ID=1
# All cluster member nodes.
# To make setting the env var easy, the values are separated by `\s`
# while nodes are separated by `\n`
# in the following format:
#
# id addr_raft addr_api
# id addr_raft addr_api
# id addr_raft addr_api
#
HQL_NODES="
1 localhost:8100 localhost:8200
"
# Sets the limit when the Raft will trigger the creation of a new
# state machine snapshot and purge all logs that are included in
# the snapshot.
# Higher values can achieve more throughput in very write heavy
# situations but will end up in more disk usage and longer
# snapshot creations / log purges.
# default: 10000
#HQL_LOGS_UNTIL_SNAPSHOT=10000
# Secrets for Raft internal authentication as well as for the API.
# These must be at least 16 characters long and you should provide
# different ones for both variables.
HQL_SECRET_RAFT=SuperSecureSecret1337
HQL_SECRET_API=SuperSecureSecret1337
# You can either parse `ENC_KEYS` and `ENC_KEY_ACTIVE` from the
# environment by setting this value to `env`, or parse them from
# a file on disk with `file:path/to/enc/keys/file`
# default: env
#HQL_ENC_KEYS_FROM=env
#####################################
############ DATABASE ###############
#####################################
# Max DB connections for the Postgres pool.
# Irrelevant for Hiqlite.
# default: 20
#DATABASE_MAX_CONN=20
# If specified, the currently configured Database will be DELETED and
# OVERWRITTEN with a migration from the given database with this variable.
# Can be used to migrate between different databases.
#
# !!! USE WITH CARE !!!
#
#MIGRATE_DB_FROM=sqlite:data/rauthy.db
#MIGRATE_DB_FROM=postgresql://postgres:123SuperSafe@localhost:5432/rauthy
# Hiqlite is the default database for Rauthy.
# You can opt-out and use Postgres instead by providing the proper
# `DATABASE_URL=postgresql://...` and setting `HIQLITE=false`
# default: true
#HIQLITE=true
# The data dir hiqlite will store raft logs and state machine data in.
# default: data
#HQL_DATA_DIR=data
# The file name of the SQLite database in the state machine folder.
# default: hiqlite.db
#HQL_FILENAME_DB=hiqlite.db
# If set to `true`, all SQL statements will be logged for debugging
# purposes.
# default: false
#HQL_LOG_STATEMENTS=false
# The size of the pooled connections for local database reads.
#
# Do not confuse this with a pool size for network databases, as it
# is much more efficient. You can't really translate between them,
# because it depends on many things, but assuming a factor of 10 is
# a good start. This means, if you needed a (read) pool size of 40
# connections for something like a postgres before, you should start
# at a `read_pool_size` of 4.
#
# Keep in mind that this pool is only used for reads and writes will
# travel through the Raft and have their own dedicated connection.
#
# default: 4
#HQL_READ_POOL_SIZE=4
# Enables immediate flush + sync to disk after each Log Store Batch.
# The situations where you would need this are very rare, and you
# should use it with care.
#
# The default is `false`, and a flush + sync will be done in 200ms
# intervals. Even if the application should crash, the OS will take
# care of flushing left-over buffers to disk and no data will get
# lost. If something worse happens, you might lose the last 200ms
# of commits (on that node, not the whole cluster). This is only
# important to know for single instance deployments. HA nodes will
# sync data from other cluster members after a restart anyway.
#
# The only situation where you might want to enable this option is
# when you are on a host that might lose power out of nowhere, and
# it has no backup battery, or when your OS / disk itself is unstable.
#
# `sync_immediate` will greatly reduce the write throughput and put
# a lot more pressure on the disk. If you have lots of writes, it
# can pretty quickly kill your SSD for instance.
#HQL_SYNC_IMMEDIATE=false
# The password for the Hiqlite dashboard as Argon2ID hash.
# '123SuperMegaSafe' in this example
#
# You only need to provide this value if you need to access the
# Hiqlite debugging dashboard for whatever reason. If no password
# hash is given, the dashboard will not be reachable.
#HQL_PASSWORD_DASHBOARD=JGFyZ29uMmlkJHY9MTkkbT0xOTQ1Nix0PTIscD0xJGQ2RlJDYTBtaS9OUnkvL1RubmZNa0EkVzJMeTQrc1dxZ0FGd0RyQjBZKy9iWjBQUlZlOTdUMURwQkk5QUoxeW1wRQ==
If you use Rauthy with Postgres and want to keep doing that, the only thing you need to do is to opt-out of Hiqlite.
HIQLITE=false
If you use Rauthy with SQLite and want to migrate to Hiqlite, you can utilize all the above-mentioned new config variables, but the following ones are mandatory.
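As a minimal sketch, these are the mandatory values from the reference above for a simple single-instance setup (the secrets are placeholders, generate your own with at least 16 characters):

```
HQL_NODE_ID=1
HQL_NODES="
1 localhost:8100 localhost:8200
"
HQL_SECRET_RAFT=SuperSecureSecret1337
HQL_SECRET_API=SuperSecureSecret1337
```

For an HA cluster, you would list 3 or 5 nodes inside `HQL_NODES` instead of just the single one.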
Backups for the internal database work in the same way as before, but because I moved the backup functionality directly into Hiqlite, the variable names have been changed so they still make sense when used in another context.
- `BACKUP_TASK` -> `HQL_BACKUP_CRON`
- `BACKUP_NAME` -> does not exist anymore; the name will be chosen automatically depending on the cluster leader name
- `BACKUP_RETENTION_LOCAL` -> `HQL_BACKUP_KEEP_DAYS_LOCAL`
- `RESTORE_BACKUP` -> `HQL_BACKUP_RESTORE`
- `S3_URL` -> `HQL_S3_URL`
- `S3_REGION` -> `HQL_S3_REGION`
- `S3_PATH_STYLE` -> `HQL_S3_PATH_STYLE`
- `S3_BUCKET` -> `HQL_S3_BUCKET`
- `S3_ACCESS_KEY` -> `HQL_S3_KEY`
- `S3_ACCESS_SECRET` -> `HQL_S3_SECRET`
- `S3_DANGER_ALLOW_INSECURE` stayed as it is

`HQL_BACKUP_KEEP_DAYS` is new, and it will actually handle the backup cleanup on the S3 storage for you, without defining retention rules for the whole bucket.
Rauthy comes with the `dashboard` feature from Hiqlite enabled. If you want to make use of it, you need to set the password for logging in. This is a static config variable, and it will only be a single password, no users / accounts. The main idea behind the dashboard is to have debugging capabilities in production, which is usually hard to do with a SQLite running inside a container.
You need to generate a random password (at least 16 characters), hash it with Argon2ID, and then base64 encode it. You can do all this manually, or use the `hiqlite` CLI to generate a complete Hiqlite config and copy the value from there.
Manual:
Use an online tool like for instance https://argon2.online to generate an Argon2ID hash. Set the following options:
- Salt: Random
- Parallelism Factor: 2
- Memory Cost: 32
- Iterations: 2
- Hash Length: 32
- Argon2id
Then copy the Output in Encoded Form and base64 encode it, for instance using https://www.base64encode.org.
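The base64 step can also be done on the shell. This is just a sketch of that last step, using the example hash for '123SuperMegaSafe' shown in the config reference above; `tr -d '\n'` strips the line wrapping that some `base64` implementations add:

```shell
# the Argon2ID hash in Encoded Form ('123SuperMegaSafe' from the example above)
HASH='$argon2id$v=19$m=19456,t=2,p=1$d6FRCa0mi/NRy//TnnfMkA$W2Ly4+sWqgAFwDrB0Y+/bZ0PRVe97T1DpBI9AJ1ympE'

# base64 encode it - the output is the value for HQL_PASSWORD_DASHBOARD
echo -n "$HASH" | base64 | tr -d '\n'
```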
`hiqlite` CLI:
Currently, you can only install the `hiqlite` CLI via `cargo`:
cargo install hiqlite --features server
hiqlite generate-config -p YouRandomSecurePasswordAtLeast16Chars
- grab the password from the generated config; its location is written in the output
Unless you specified a custom target path on disk for SQLite (`HQL_DATA_DIR`) before, you should be good with the configuration now. If you start up Rauthy now, it will be like a fresh install, which you most probably don't want. To migrate your current SQLite to Hiqlite at startup, you need to set `MIGRATE_DB_FROM` once at startup. If you used the default path before, you need to set:
MIGRATE_DB_FROM=sqlite:data/rauthy.db
For a custom path, just adapt the value accordingly. By the way, this also works if you want to migrate from Postgres to Hiqlite.
!!! CAUTION !!!
You must remove this variable after Rauthy has been started successfully! Otherwise, it would do the migration again
and again with each following restart and therefore remove everything that has happened in between!
As additional hardening, the open redirect hint for user registrations has been locked down a bit by default. If you used this feature before, you should update the Client URIs via the Admin UI, so all possible `redirect_uri`s you are using will still be considered valid, or opt out of the additional hardening.
# If set to `true`, any validation of the `redirect_uri` provided during
# a user registration will be disabled.
# Clients can use this feature to redirect the user back to their application
# after a successful registration, so instead of ending up in the user
# dashboard, they come back to the client app that initiated the registration.
#
# The given `redirect_uri` will be compared against all registered
# `client_uri`s and will throw an error, if there is no match. However,
# this check will prevent ephemeral clients from using this feature. Only
# if you need it in combination with ephemeral clients, you should
# set this option to `true`. Otherwise it is advised to set the correct
# Client URI in the admin UI. The `redirect_uri` will be allowed if it starts
# with any registered `client_uri`.
#
# default: false
#USER_REG_OPEN_REDIRECT=true
Since I received quite a few questions and requests regarding the mandatory `family_name` for users, I decided to change it and make it optional. This change should not affect you in any way if you only consumed `id_token`s, because the `family_name` inside them has been optional before already. Its existence in the `id_token` depends on the allowed and requested claims. However, if you used to communicate with the Rauthy API directly, you should be aware of this change. The `User.family_name` is now optional in all situations.
During the migration to Hiqlite, I stumbled upon a few DB queries in different places that were low-hanging fruit for efficiency and speed improvements, and I did these while migrating the code base. There were a few of them, for instance in situations like session invalidation, password reminder cron jobs, and roles / groups / scopes name updates. These do not affect the behavior; only the handling under the hood has been improved.
Rauthy allows you to upload SVGs as either client or upstream IdP logos. This is an action that only an authorized `rauthy_admin` role can perform. However, as additional defense-in-depth and protection against an evil admin, Rauthy now sanitizes all uploaded SVGs, no matter what.
The allowed characters for input validation for:
- user `given_name`
- user `family_name`
- client name
have been expanded and will also allow characters from Latin-1 Extended-A.
- When Rauthy sent E-Mails, the name of the recipient has not been set correctly #602
- The banner should not be logged as plain text when `LOG_FMT=json` is set #605
- When requesting a single user by its ID using an API key, you would get an invalid session error response. This was due to an earlier migration a few versions back. The session check is now done only when no API key is present, which makes this request work again. #609
This patch reverts an unintended change to the `user:group` inside the container images. This will fix issues with migrations from existing deployments using SQLite with manually managed volume access rights.
v0.26.0 changed from `scratch` to `gcr.io/distroless/cc-debian12:nonroot` as the base image for the final deployment. The distroless image however sets a user of `65532` by default, while it had always been `10001:10001` before.
The affected versions are:
- 0.26.0
- 0.26.1
Starting from this release (0.26.2), the user inside the container will be the same one as before: `10001:10001`
839724001710cb095f39ff7df6be00708a01801a
Some upstream auth providers need custom query params appended to their authorization endpoint URL. Rauthy will now accept URLs in the auth provider config with pre-defined query params, as long as they don't interfere with OIDC default params.
To make automatic parsing of logs possible (to some extent), you now have the ability to change the logging output from text to json with the following new config variable:
# You can change the log output format to JSON, if you set:
# `LOG_FMT=json`.
# Keep in mind, that some logs will include escaped values,
# for instance when `Text` already logs a JSON in debug level.
# Some other logs like an Event for instance will be formatted
# as Text anyway. If you need to auto-parse events, please consider
# using an API token and listen to them actively.
# default: text
#LOG_FMT=text
- With relaxing requirements for password resets for new users, a bug has been introduced that would prevent a user from registering an only-passkey account when doing the very first "password reset". de2cfea
The following API routes have been deprecated in the last version and have now been fully removed:
- `/oidc/tokenInfo`
- `/oidc/rotateJwk`
The whole `CACHE` section in the config has been changed:
#####################################
############## CACHE ################
#####################################
# Can be set to 'k8s' to try to split off the node id from the hostname
# when Hiqlite is running as a StatefulSet inside Kubernetes.
#HQL_NODE_ID_FROM=k8s
# The node id must exist in the nodes and there must always be
# at least one node with ID 1
# Will be ignored if `HQL_NODE_ID_FROM=k8s`
HQL_NODE_ID=1
# All cluster member nodes.
# To make setting the env var easy, the values are separated by `\s`
# while nodes are separated by `\n`
# in the following format:
#
# id addr_raft addr_api
# id addr_raft addr_api
# id addr_raft addr_api
#
# 2 nodes must be separated by 2 `\n`
HQL_NODES="
1 localhost:8100 localhost:8200
"
# If set to `true`, all SQL statements will be logged for debugging
# purposes.
# default: false
#HQL_LOG_STATEMENTS=false
# If given, these keys / certificates will be used to establish
# TLS connections between nodes.
#HQL_TLS_RAFT_KEY=tls/key.pem
#HQL_TLS_RAFT_CERT=tls/cert-chain.pem
#HQL_TLS_RAFT_DANGER_TLS_NO_VERIFY=true
#HQL_TLS_API_KEY=tls/key.pem
#HQL_TLS_API_CERT=tls/cert-chain.pem
#HQL_TLS_API_DANGER_TLS_NO_VERIFY=true
# Secrets for Raft internal authentication as well as for the API.
# These must be at least 16 characters long and you should provide
# different ones for both variables.
HQL_SECRET_RAFT=SuperSecureSecret1337
HQL_SECRET_API=SuperSecureSecret1337
# You can either parse `ENC_KEYS` and `ENC_KEY_ACTIVE` from the
# environment by setting this value to `env`, or parse them from
# a file on disk with `file:path/to/enc/keys/file`
# default: env
#HQL_ENC_KEYS_FROM=env
The response for `/auth/v1/health` has been changed. If you did not care about the response body, there is nothing to do for you. The body itself returns different values now:
struct HealthResponse {
db_healthy: bool,
cache_healthy: bool,
}
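If you consume the endpoint in scripts, a minimal readiness check could look like the following sketch. The address is a placeholder for your Rauthy instance, the field names are taken from the struct above, and the `grep` patterns assume compact JSON serialization without whitespace:

```shell
# fetch the health body and succeed only if both fields report healthy
# (http://localhost:8080 is a placeholder for your Rauthy address)
BODY=$(curl -s http://localhost:8080/auth/v1/health)
echo "$BODY" | grep -q '"db_healthy":true' && \
  echo "$BODY" | grep -q '"cache_healthy":true'
```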
Translations for `ZH-Hans` have been added to Rauthy. These exist in all places other than the Admin UI, just like the existing translations.
Up until v0.25, it was not possible to set the Allowed Origin for a client in a way that Rauthy would allow access, for instance, from inside a Tauri app. The reason is that Tauri (and most probably others) do not set an HTTP / HTTPS scheme in the `Origin` header, but something like `tauri://`.
Rauthy now has support for such situations, with adjusted validation for the Origin values and a new config variable to allow specific, additional `Origin` schemes:
# To bring support for applications using deep-linking, you can set custom URL
# schemes to be accepted when present in the `Origin` header. For instance, a
# Tauri app would set `tauri://` instead of `https://`.
#
# Provide the value as a space separated list of Strings, like for instance:
# "tauri myapp"
ADDITIONAL_ALLOWED_ORIGIN_SCHEMES="tauri myapp"
For HA deployments, the `/health` checks are more stable now. The quorum is also checked, which will detect network segmentations. To achieve this and still make it possible to use the health check in situations like Kubernetes rollouts, a delay has been added, which will simply always return true after a fresh app start. This initial delay makes it possible to use the endpoint inside Kubernetes and will not prevent the other nodes from being scheduled. This solves a chicken-and-egg problem.
You usually do not need to care about it, but this value can of course be configured:
# Defines the time in seconds after which the `/health` endpoint
# includes HA quorum checks. The initial delay solves problems
# like Kubernetes StatefulSet starts that include the health
# endpoint in the scheduling routine. In these cases, the scheduler
# will not start other Pods if the first does not become healthy.
#
# This is a chicken-and-egg problem which the delay solves.
# There is usually no need to adjust this value.
#
# default: 30
#HEALTH_CHECK_DELAY_SECS=30
To send out Matrix notifications, Rauthy was using the `matrix-sdk` up until now. This crate however comes with a huge list of dependencies and at the same time pushes too few updates. I had quite a few issues with it in the past because it was blocking me from updating other dependencies.
To solve this issue, I decided to drop `matrix-sdk` in favor of `ruma`, which it is using under the hood anyway. With `ruma`, I needed to do a bit more work myself since it's more low-level, but at the same time I was able to reduce the list of total dependencies Rauthy has by ~90 crates. This made it possible to finally bump other dependencies and to start the internal switch from redhac to Hiqlite for caching.
IMPORTANT:
If you are using a self-hosted homeserver or anything other than the official matrix.org servers for Matrix event notifications, you must set a newly introduced config variable:
# URL of your Matrix server.
# default: https://matrix.org
#EVENT_MATRIX_SERVER_URL=https://matrix.org
The internal cache layer has been migrated from redhac to Hiqlite.
A few weeks ago, I started rewriting the whole persistence layer from scratch in a separate project. `redhac` is working fine, but it has some issues I wanted to get rid of:
- its network layer is way too complicated which makes it very hard to maintain
- there is no "sync from other nodes" functionality, which is not a problem on its own, but leads to the following
- for security reasons, the whole cache is invalidated when a node has a temporary network issue
- it is very sensitive to even short term network issues and leader changes happen too often for my taste
I started the Hiqlite project some time ago to get rid of these things and have additional features. It is outsourced to make it generally usable in other contexts as well.
This first step will also make it possible to only have a single container image in the future without the need to decide between Postgres and SQLite via the tag.
The way the container images are built, the way the builder for the images is built, and the whole `justfile` have been changed quite a bit. This will not concern you if you are not working with the code.
The way of wrapping and executing everything inside a container, even during local dev, became tedious to maintain, especially for different architectures, and I wanted to get rid of the maintenance burden because it did not provide that many benefits. Postgres and Mailcrab will of course still run in containers, but the code for backend and frontend will be built and executed locally.
The reason I started doing all of this inside containers in the first place was to avoid needing a few additional tools installed locally, but the high maintenance was not worth it in the end. This change reduced the size of the Rauthy builder image from 2x ~4.5GB down to 1x ~1.9GB, which already is a big improvement. Additionally, you don't even need to download the builder image at all when you are not creating a production build, while beforehand you always needed the builder image in any case.
To cover the necessary dev tools installation and first-time setup, I instead added a new `just` recipe called `setup`, which will do everything necessary, as long as you have the prerequisites available (which you needed before as well anyway, apart from `npm`). This has been updated in the CONTRIBUTING.md.
- The `refresh_token` grant type on the `/token` endpoint did not set the original `auth_time` for the `id_token`, but instead calculated it from `now()` each time. aa6e07d
The introspection endpoint has been fixed in case of the encoding, like mentioned in the bugfixes. Additionally, authorization has been added to this endpoint. It will now make sure that the request also includes an `Authorization` header with either a valid `Bearer JwtToken` or `Basic ClientId:ClientSecret` to prevent token scanning.
The way of authorization on this endpoint is not really standardized, so you may run into issues with your client application. If so, you can disable the authentication on this endpoint with:
# Can be set to `true` to disable authorization on `/oidc/introspect`.
# This should usually never be done, but since the auth on that endpoint is not
# really standardized, you may run into issues with your client app. If so,
# please open an issue about it.
# default: false
DANGER_DISABLE_INTROSPECT_AUTH=true
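For testing the authorized endpoint, a request could look like the following sketch. Host, client credentials, and token value are placeholders, and the path assumes the usual `/auth/v1` prefix used by the other routes in this document:

```shell
# build the Basic auth value from client id and secret (placeholders)
BASIC=$(echo -n 'client_id:client_secret' | base64 | tr -d '\n')

# the introspection endpoint expects form data, not JSON
curl -s -X POST https://rauthy.example.com/auth/v1/oidc/introspect \
  -H "Authorization: Basic $BASIC" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "token=<your_access_token>"
```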
In preparation for a clean v1.0.0, some older API routes have been fixed regarding their casing and naming. The "current" or old routes and names will be available for exactly one release and will be phased out afterward to have a smooth migration, just in case someone uses these renamed routes.
- `/oidc/tokenInfo` -> `/oidc/introspect`
- `/oidc/rotateJwk` -> `/oidc/rotate_jwk`
Since I don't like `kebab-case`, most API routes are written in `snake_case`, with 2 exceptions that follow RFC namings:
- `openid-configuration`
- `web-identity`
All the `*info` routes like `userinfo` or `sessioninfo` are kept as single words on purpose, just to match other IdPs and RFCs a bit more.
There is not a single `camelCase` route left in the API, to avoid confusion and issues in situations where you could, for instance, mistake an uppercase `I` for a lowercase `l`. The current `camelCase` endpoints only exist for a smoother migration and will be phased out with the next bigger release.
The current behavior of reading in config variables was not working as intended. Rauthy reads the `rauthy.cfg` as a file first and the environment variables afterward. This makes it possible to configure it in any way you like and even mix and match. However, the idea was that any existing variables in the environment should overwrite config variables and therefore have the higher priority. This was exactly the other way around up until v0.24.1 and has been fixed now.
How Rauthy now parses config variables correctly:
- read `rauthy.cfg`
- read env vars
- all existing env vars overwrite existing vars from `rauthy.cfg` and therefore have the higher priority
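As a sketch of the resulting precedence, using `LOG_LEVEL` purely as an illustrative variable:

```
# rauthy.cfg contains:
LOG_LEVEL=info

# the process environment contains:
LOG_LEVEL=debug

# effective value after startup: debug (env wins over rauthy.cfg)
```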
- The token introspection endpoint was only accepting requests with `Json` data, when it should have instead accepted `Form` data.
The last weeks were mostly for updating the documentation and including all the new features that came to Rauthy in the last months. Some small things are still missing, but it's almost there.
Apart from that, this is an important update because it fixes some security issues in external dependencies.
Security issues in external crates have been fixed:
- moderate matrix-sdk-crypto
- moderate openssl
- low vodozemac
The config var `S3_DANGER_ACCEPT_INVALID_CERTS` has been renamed to `S3_DANGER_ALLOW_INSECURE`. This is not a breaking change right now, because Rauthy will accept both versions for the time being, but the deprecated name will be removed after v0.24.
Quite a few internal dependencies have been updated to the latest versions (where it made sense).
One of them was my own cryptr. It was using the `rusty-s3` crate beforehand, which is a nice one when working with S3 storage, but had 2 issues. The first is that it uses pre-signed URLs. That is not a flaw in the first place, just a design decision to be network agnostic. The other one was that it signed the URL in a way that made the request incompatible with Garage. I migrated `cryptr` to my own s3-simple, which solves these issues.
This update brings compatibility with the Garage S3 storage for Rauthy's S3 backup feature.
- Fetching the favicon (and possibly other images) was forbidden because of the new CSRF middleware from some weeks ago. 76cd728
- The UI and the backend had a difference in input validation for `given_name` and `family_name`, which could make some buttons in the UI get stuck. This has been fixed, and the validation for these 2 is now the same everywhere; at least 1 character is required now. 19d512a
Many thousands of lines have been refactored internally to provide better maintainability in the future. These are not mentioned separately, since they did not introduce anything new. Apart from this, there are only small changes, but one of them is an important breaking change.
The new config variable `TRUSTED_PROXIES` introduces a breaking change in some cases. If you are running Rauthy with either `PROXY_MODE=true` or with a set `PEER_IP_HEADER_NAME` value, you must add the `TRUSTED_PROXIES` to your existing config before updating.
This value specifies trusted proxies in the above situation. The reason is that Rauthy extracts the client IP from the HTTP headers, which could be spoofed if they were used without validating the source. This was not a security issue, but it gave an attacker the ability to blacklist or rate-limit IPs that do not belong to them.
When `PROXY_MODE=true` or `PEER_IP_HEADER_NAME` is set, Rauthy will now only accept direct connections from IPs specified with `TRUSTED_PROXIES` and block all other requests. You can provide a list of CIDRs to have full flexibility for your deployment.
# A `\n` separated list of trusted proxy CIDRs.
# When `PROXY_MODE=true` or `PEER_IP_HEADER_NAME` is set,
# these are mandatory to be able to extract the real client
# IP properly and safely to prevent IP header spoofing.
# All requests with a different source will be blocked.
#TRUSTED_PROXIES="
#192.168.14.0/24
#10.0.0.0/8
#"
Note:
Keep in mind that you must include the IPs for direct health checks here, like for instance from inside Kubernetes, if they are not being sent via a trusted proxy.
If you are using open user registration without domain restriction, you now have the possibility to blacklist certain E-Mail provider domains. Even if your registration endpoint allows registrations, this blacklist will be checked and requests with these domains will be denied. This is mainly useful if you want to prevent malicious E-Mail providers from registering and spamming your database.
# If `OPEN_USER_REG=true`, you can blacklist certain domains
# on the open registration endpoint.
# Provide the domains as a `\n` separated list.
#USER_REG_DOMAIN_BLACKLIST="
#example.com
#evil.net
#"
Even though it was not needed so far, the OIDC userinfo endpoint now has a proper `POST` handler in addition to the existing `GET` to comply with the RFC.
05a8793
- The upstream crate `curve25519-dalek` had a moderate timing variability security issue 8bb4069
This patch fixes a regression from fixing the special characters encoding in upstream IdP JWT tokens. A panic was possible when the upstream IdP did not include a `locale` in the `id_token`.
ea24e7e
481c9b3
This is a tiny update, but brings an important bugfix for upstream IdPs.
A bug has been fixed for the case that an upstream IdP included special characters inside strings in the JWT token returned after a successful user login flow. Since JWT tokens should use UNICODE encoding in these cases, it is not possible to do zero-copy deserialization into Rust UTF-8 string slices. This has been fixed in a way that Rauthy will only do the additional, necessary String allocations during deserialization when special characters actually exist.
This should fix current issues when logging in via an upstream IdP with special characters inside the E-Mail address, for instance.
aa97cb8
Apart from that, there were some minor UX improvements for the Admin UI's provider setup page, like earlier client-side checking of variables and preventing form submission when some required ones were missing.
9a227c9
c89fb7f
Updated sections in the documentation for:
- Choose database in Getting Started
- Started a new page for production setup
- Info on Android passkey status
- Encryption section
- Getting Started with Kubernetes
Stricter checking and validation for `allowed_origins` has been implemented when you configure clients. Before, the regex only checked that the input was a valid URI, which is not strict enough for validating an origin. This should improve the UX and prevent hard-to-debug issues when someone enters an invalid origin.
At the same time, a better visual separation has been added to the Origins / URI section in the UI when configuring clients.
Small improvements have been made in a lot of places, which resulted in less memory allocations.
9144f2a
The logic on `POST /authorize` has been simplified internally. With new features coming in all the time, the code grew into an over-complicated state where it was hard to follow. It has been simplified, which makes the software easier to maintain in the future.
af0db9d
- add all `/fed_cm/` routes as exceptions to the new CSRF protection middleware 360ce46
- upstream auth provider templates could get stuck in the UI when switching between them d2b928a
- when a problem with an upstream provider occurs on `/callback`, you will now see the detailed error in the UI 8041c95
This release brings some very minor features and bugfixes.
CSRF protection was already in place without any issues. However, a new middleware has been added to the whole routing stack in addition to the existing checks. This provides another layer of defense in depth. The advantage of the new middleware is that it can be fully enforced in the future after enough parallel testing.
If this works without any issues, we might drop the current way of doing it and only use the new middleware, which is easier to maintain and to work with.
To not break any existing deployments, and to make sure I did not forget route exceptions for the new middleware, you can set it to warn-only mode for this minor release. This option will be removed in future releases though and should only be a temporary solution:
# If set to true, a violation inside the CSRF protection middleware based
# on Sec-* headers will block invalid requests. Usually you always want this
# enabled. You may only set it to false during the first testing phase if you
# experience any issues with an already existing Rauthy deployment.
# In future releases, it will not be possible to disable these blocks.
# default: true
#SEC_HEADER_BLOCK=true
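As an illustration of the general technique (this is NOT Rauthy's actual middleware code, and the exception list below is a made-up example): browsers send `Sec-Fetch-Site` on every request, so cross-site requests to state-changing routes can be rejected without ever looking at a token.

```python
# Illustrative sketch of CSRF checking based on the `Sec-Fetch-Site`
# header; the exception list is a hypothetical example.

ALLOWED_SITES = {"same-origin", "none"}  # "none" = direct navigation

def is_csrf_violation(headers: dict, path: str,
                      exceptions=("/auth/v1/fed_cm/",)) -> bool:
    """Return True when the request should be blocked (or warned about)."""
    if any(path.startswith(p) for p in exceptions):
        return False
    site = headers.get("sec-fetch-site")
    if site is None:
        return False  # older clients do not send Sec-* headers at all
    return site not in ALLOWED_SITES
```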
This is not really considered a new feature, but Rauthy now has experimental support for FedCM in its current state. This is opt-in and disabled by default. You should not attempt to use it in production, because the FedCM implementation itself still has a few bumps and sharp edges.
The only reason the experimental support is there is to help smooth out these issues and hopefully make FedCM a really nice addition. It does not bring any new possibilities to the table, but if it turns out well, it will improve the UX quite a bit.
#####################################
############## FED CM ###############
#####################################
## CAUTION: The FedCM is highly experimental at this point!
## Do not attempt to use it in production because it is subject to change
## in the future! The spec is currently a draft and under active development.
# Set to `true` to enable the experimental FedCM.
# default: false
#EXPERIMENTAL_FED_CM_ENABLE=false
# Session lifetime for FedCM in seconds - the session can not be extended
# beyond this time and a new login will be forced.
# default: 2592000
#SESSION_LIFETIME_FED_CM=2592000
# Session timeout for FedCM in seconds
# When a new token / login is requested before this timeout hits the limit,
# the user will be authenticated without prompting for the credentials again.
# This is the value which can extend the session, until it hits its maximum
# lifetime set with _FED_CM.
# default: 259200
#SESSION_TIMEOUT_FED_CM=259200
The input validation for ephemeral `client_id`s has been relaxed. This makes it possible to test them with OIDC playgrounds, which typically generate pretty long testing URLs that were previously rejected because of their length. Rauthy now accepts URLs of up to 256 characters as `client_id`s.
62405bb
The default values for the Argon2ID hashing algorithm have been bumped up quite a bit. Rauthy's goal is to be as secure as possible by default. The old values were quite a bit above the OWASP recommendation, but still way too low imho. The values will of course still need tuning and adjustment to the target architecture / deployment, but they provide a way better starting point and can be considered really secure even if not adjusted.
The new defaults are:
# M_COST should never be below 32768 in production
ARGON2_M_COST=131072
# T_COST should never be below 1 in production
ARGON2_T_COST=4
# P_COST should never be below 2 in production
ARGON2_P_COST=8
- Ephemeral clients now work properly with the `/userinfo` endpoint in strict-validation mode. Their validation is simply skipped at that point, because it does not make much sense to do an `enabled` check there. 90b0367
- A small bug appeared in the UI after adding new custom user attributes: instead of resetting the input values to empty strings after the registration, they were set to `undefined`. ab77595
- Because of a bug in the account overview UI, it was not possible to link an already existing account to an upstream IdP after the registration. 22751ee
All Rauthy cookies (except for the locale) are now encrypted globally inside the whole app by default. This is just another layer of defense in depth. The AEAD algorithm makes sure that you can't tamper with the cookie values, even if you try to do it manually.
If you are in the situation where you run Rauthy behind a reverse proxy on the exact same origin as another app, and you want to build custom user-facing UI parts, you previously had to retrieve the original HTML for `/authorize` or the password reset to extract the CSRF token from the HTML content. Doing this in tests is fine, but very tedious and wasteful for a production deployment.
For this reason, there are now 2 new possibilities:
- a POST `/oidc/session` endpoint to create a session in `Init` state, which will return the cookie and the correct CSRF token in a JSON body
- the password reset link returns JSON with a CSRF token instead of an HTML document, if you request it with an `Accept: application/json` header
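As an illustration of the second option, a request for the reset page with the JSON `Accept` header could be built like this. The URL path segments are placeholders (the real link comes from the reset E-Mail); nothing is sent here, the request object is only constructed:

```python
import urllib.request

# Placeholder URL -- the real link comes from the password reset E-Mail.
req = urllib.request.Request(
    "https://rauthy.example.com/auth/v1/users/some_user_id/reset/some_code",
    headers={"Accept": "application/json"},
)
# With this header set, Rauthy responds with a JSON body containing the
# CSRF token instead of the HTML password reset document.
```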
- the password expiry reminder E-Mail had a wrong link to the account page, a leftover from older versions with `.html` appended d728317
This release makes the first preparations for a future v1.0.0 release. Quite a few values have been cleaned up or improved.
If you are using the rauthy-client, you should upgrade to v0.4.0 before upgrading Rauthy to v0.23.0. Any older client version will not understand the new grant type for the OAuth 2.0 Device Authorization Grant.
The config variable `UNSAFE_NO_RESET_BINDING` has been removed in favor of `PASSWORD_RESET_COOKIE_BINDING`.
The logic for this security feature has been reversed. Until now, the default behavior was to block subsequent requests to the password reset form if they provided an invalid binding cookie. This created issues for people using evil E-Mail providers, which scan their users' E-Mails and follow the links inside them. That provider visit made it impossible for the real user to use the link properly, because it had already been used by the provider.
In some cases, this hurts the UX more than it benefits security, so this feature is now an opt-in hardening instead of an opt-out workaround for evil providers.
Additionally, to improve the UX even further, the extra E-Mail input form has been removed from the password reset page as well. Its security benefits were rather small compared to the UX degradation.
#365
1af7b92
`OFFLINE_TOKEN_LIFETIME` has been removed from the config. This variable had been deprecated for quite a few versions now. The `offline_access` scope was not even allowed via the UI for a long time, so these offline tokens were never issued anyway.
The "new" mechanism Rauthy uses, with the switch in the Admin UI to issue / allow refresh tokens for a client, is much clearer, since the `offline_access` scope produces a lot of confusion for people new to OIDC. From the name alone, it simply makes no sense that you need to activate `offline_access` to get a refresh token. Having an option named "allow refresh tokens" is just so much better.
71db7fe
If you used the endpoint for retrieving a client secret with an API key before, you need to change the method. The endpoint works exactly the same, but the method has been changed from a `GET` to a `POST` to request and validate the additional CSRF token from the Admin UI.
72f077f
The `Refresh Token` switch for a client config in the Admin UI has been removed. The old behavior was misleading and unintuitive, so I just got rid of that switch.
If you want to use the refresh flow with a client, the only thing you need to do is allow the `refresh_token` flow. You needed to do this before anyway, but additionally had to enable the switch further down below. So this is not really a breaking change, but it could lead to confusion now that the switch is just gone.
2ece6ed
This release brings support for the OAuth 2.0 Device Authorization Grant. On top of the default RFC spec, there are some additional features like optional rate limiting and the ability to do the flow with confidential clients as well. The rauthy-client has the basics implemented as well for fetching tokens via the `device_code` flow; an automatic refresh token handler is still on the TODO list though. A small example exists as well.
You will find new sections in the account and admin -> user view, where you can see all linked devices, can give
them a friendly name and revoke refresh tokens, if they exist.
544bebe
8d028bf
e8077ce
62d41bc
51a50ac
9352b3c
Until now, the Admin UI used client side searching and pagination. This is fine for most endpoints, which return rather small "GET all" data, but the users count can grow quite large depending on the instance.
To keep big Rauthy instances with many thousands of users fast and responsive, you can set a threshold for the total users count at which Rauthy will dynamically switch from client side to server side pagination and searching on the Admin UI's Users and Sessions pages.
# Dynamic server side pagination threshold
# If the total users count exceeds this value, Rauthy will dynamically
# change search and pagination for users in the Admin UI from client
# side to server side to not have a degradation in performance.
# default: 1000
SSP_THRESHOLD=1000
For smaller instances, keeping it client side will make the UI a bit more responsive and snappy. For higher user counts, you should switch to the server side though, to keep the UI fast and avoid sending huge payloads each time.
The login form now contains a "Home" icon which will appear if a `client_uri` is registered for the current client. A user may click it and be redirected to the client, if a login is not desired for whatever reason. Additionally, if the user registration is configured to be open, a link to the user registration will be shown at the bottom as well.
b03349c
A new button has been introduced to the account view of federated accounts.
You can now "Unlink" an account from an upstream provider, if you have set it up with at least
a password or passkey before.
This is the counterpart to the unlink feature from above. It makes it possible to link an already existing, unlinked user account to an upstream auth provider. The only condition is a matching `email` claim after a successful login. Apart from that, there are quite a few things going on behind the scenes, and you must trigger this provider link from an authorized, valid session inside your user account view. This is necessary to prevent account takeovers if an upstream provider has been compromised in some way.
You can set environment variables via `rauthy.cfg`, `.env`, or just as env vars during the initial setup in production. This makes it possible to create an admin account with the very first database setup using a custom E-Mail + password, instead of the default [email protected] with a random password, which you need to pull from the logs. A single API key may be bootstrapped as well.
#####################################
############# BOOTSTRAP #############
#####################################
# If set, the email of the default admin will be changed
# during the initialization of an empty production database.
BOOTSTRAP_ADMIN_EMAIL="[email protected]"
# If set, this plain text password will be used for the
# initial admin password instead of generating a random
# password.
#BOOTSTRAP_ADMIN_PASSWORD_PLAIN="123SuperSafe"
# If set, this will take the argon2id hashed password
# during the initialization of an empty production database.
# If both BOOTSTRAP_ADMIN_PASSWORD_PLAIN and
# BOOTSTRAP_ADMIN_PASSWORD_ARGON2ID are set, the hashed version
# will always be prioritized.
BOOTSTRAP_ADMIN_PASSWORD_ARGON2ID='$argon2id$v=19$m=32768,t=3,p=2$mK+3taI5mnA+Gx8OjjKn5Q$XsOmyvt9fr0V7Dghhv3D0aTe/FjF36BfNS5QlxOPep0'
# You can provide an API Key during the initial prod database
# bootstrap. This key must match the format and pass validation.
# You need to provide it as a base64 encoded JSON in the format:
#
# ```
# struct ApiKeyRequest {
# /// Validation: `^[a-zA-Z0-9_-/]{2,24}$`
# name: String,
# /// Unix timestamp in seconds in the future (max year 2099)
# exp: Option<i64>,
# access: Vec<ApiKeyAccess>,
# }
#
# struct ApiKeyAccess {
# group: AccessGroup,
# access_rights: Vec<AccessRights>,
# }
#
# enum AccessGroup {
# Blacklist,
# Clients,
# Events,
# Generic,
# Groups,
# Roles,
# Secrets,
# Sessions,
# Scopes,
# UserAttributes,
# Users,
# }
#
# #[serde(rename_all = "lowercase")]
# enum AccessRights {
# Read,
# Create,
# Update,
# Delete,
# }
# ```
#
# You can use the `api_key_example.json` from `/` as
# an example. Afterwards, just `base64 api_key_example.json | tr -d '\n'`
#BOOTSTRAP_API_KEY="ewogICJuYW1lIjogImJvb3RzdHJhcCIsCiAgImV4cCI6IDE3MzU1OTk2MDAsCiAgImFjY2VzcyI6IFsKICAgIHsKICAgICAgImdyb3VwIjogIkNsaWVudHMiLAogICAgICAiYWNjZXNzX3JpZ2h0cyI6IFsKICAgICAgICAicmVhZCIsCiAgICAgICAgImNyZWF0ZSIsCiAgICAgICAgInVwZGF0ZSIsCiAgICAgICAgImRlbGV0ZSIKICAgICAgXQogICAgfSwKICAgIHsKICAgICAgImdyb3VwIjogIlJvbGVzIiwKICAgICAgImFjY2Vzc19yaWdodHMiOiBbCiAgICAgICAgInJlYWQiLAogICAgICAgICJjcmVhdGUiLAogICAgICAgICJ1cGRhdGUiLAogICAgICAgICJkZWxldGUiCiAgICAgIF0KICAgIH0sCiAgICB7CiAgICAgICJncm91cCI6ICJHcm91cHMiLAogICAgICAiYWNjZXNzX3JpZ2h0cyI6IFsKICAgICAgICAicmVhZCIsCiAgICAgICAgImNyZWF0ZSIsCiAgICAgICAgInVwZGF0ZSIsCiAgICAgICAgImRlbGV0ZSIKICAgICAgXQogICAgfQogIF0KfQ=="
# The secret for the above defined bootstrap API Key.
# This must be at least 64 alphanumeric characters long.
# You will be able to use that key afterwards with setting
# the `Authorization` header:
#
# `Authorization: API-Key <your_key_name_from_above>$<this_secret>`
#BOOTSTRAP_API_KEY_SECRET=
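The base64 payload for `BOOTSTRAP_API_KEY` can also be produced programmatically instead of via `base64 api_key_example.json`. A sketch, where the key name and access rights are just example values:

```python
import base64, json

api_key = {
    "name": "bootstrap",
    "exp": 1735599600,  # optional unix timestamp, max year 2099
    "access": [
        {"group": "Clients",
         "access_rights": ["read", "create", "update", "delete"]},
    ],
}

# Equivalent to: base64 api_key_example.json | tr -d '\n'
BOOTSTRAP_API_KEY = base64.b64encode(json.dumps(api_key).encode()).decode()
```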
You can now set a new config variable called `USERINFO_STRICT`. If set to `true`, Rauthy will do additional validations on the `/userinfo` endpoint and actually revoke (even otherwise still valid) access tokens when any user / client / device they were issued for has been deleted, expired, or disabled. The non-strict mode will simply make sure the token is valid and the user still exists. The additional validations consume more resources, because they need 1-2 additional database lookups, but they provide stricter validation and possibly earlier token revocation. If you don't need it that strict and you are resource constrained, set it to `false`.
198e7f9
The Rauthy `id_token` now contains the access token hash `at_hash` claim. This is needed for additional downstream validation, if a client provides both tokens and they are not coming from Rauthy directly. With the additional validation of the `at_hash` claim, clients can be 100% sure that a given `id_token` belongs to a specific `access_token` and has not been swapped out.
d506865
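For reference, `at_hash` is defined in OIDC Core 3.1.3.6: hash the access token's ASCII octets with the hash algorithm of the `id_token` signature (SHA-256 for RS256), keep the leftmost half, and base64url-encode it without padding. A minimal client-side check could look like this:

```python
import base64, hashlib

def compute_at_hash(access_token: str) -> str:
    """at_hash per OIDC Core 3.1.3.6, assuming an RS256-signed id_token."""
    digest = hashlib.sha256(access_token.encode("ascii")).digest()
    half = digest[: len(digest) // 2]  # leftmost 128 bits
    return base64.urlsafe_b64encode(half).rstrip(b"=").decode("ascii")

def verify_at_hash(access_token: str, at_hash_claim: str) -> bool:
    """Check that the id_token's at_hash matches the given access token."""
    return compute_at_hash(access_token) == at_hash_claim
```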
The allowed names for roles, groups and scopes have been adjusted. Rauthy now allows names of up to 64 characters, containing `:` or `*`. This makes it possible to define custom scopes with names like `urn:matrix:client:api:guest` or `urn:matrix:client:api:*`.
Depending on your final deployment, you may want to change the way Rauthy sets its cookies, for instance if you want to create your own UI endpoints but still want to be able to communicate with the API.
The default cookie setting has been changed so that all cookies now have the `__Host-` prefix, which provides the highest level of security. There might be cases where you don't want this and rather keep the path restriction to `/auth` from before, for instance when you host an additional app on the same origin behind a reverse proxy that should not be able to read Rauthy's cookies.
And finally, for all Safari users: since Safari does not consider `localhost` to be secure during testing, you can even set insecure cookies for testing purposes.
# You can set different security levels for Rauthy's cookies.
# The safest option would be 'host', but may not be desirable when
# you host an application on the same origin behind a reverse proxy.
# In this case you might want to restrict to 'secure', which will then
# take the COOKIE_PATH from below into account.
# The last option is 'danger-insecure' which really should never be used
# unless you are just testing on localhost and you are using Safari.
#COOKIE_MODE=host
# If set to 'true', Rauthy will bind the cookie to the `/auth` path.
# You may want to change this only for very specific reasons and if
# you are in such a situation, where you need this, you will know it.
# Otherwise don't change this value.
# default: true
#COOKIE_SET_PATH=true
Rauthy can now auto-blacklist IPs that make suspicious requests, like for instance:
- /.ssh/
- /.kube/config
- /backup.zip
- /wp-admin/
... and so on.
Rauthy has a "catch all" API route handler on `/` which looks for these by default. By default, IPs from such requests will be blacklisted for 24 hours, but you can of course configure this.
# The "catch all" route handler on `/` will compare the request path
# against a hardcoded list of common scan targets from bots and attackers.
# If the path matches any of these targets, the IP will be blacklisted
# preemptively for the set time in minutes.
# You can disable it with setting it to `0`.
# default: 1440
SUSPICIOUS_REQUESTS_BLACKLIST=1440
# This will emit a log with level of warning if a request to `/` has
# been made that has not been caught by any of the usual routes and
# handlers. Apart from a request to just `/`, which will end in
# a redirect to `/auth/v1`, all additional paths will be logged.
# This can help to improve the internal suspicious blocklist in the
# future.
# default: false
SUSPICIOUS_REQUESTS_LOG=false
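The matching itself is conceptually simple. A sketch — the target list here is an example, not Rauthy's actual hardcoded list:

```python
# Example scan targets; NOT Rauthy's actual hardcoded list.
SCAN_TARGETS = ("/.ssh/", "/.kube/config", "/backup.zip", "/wp-admin/")

def should_blacklist(path: str) -> bool:
    """Preemptively blacklist the peer IP for typical bot scan paths."""
    return any(path.startswith(target) for target in SCAN_TARGETS)
```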
The whoami endpoint has been changed. It does not return all headers anymore, because this could possibly leak sensitive
headers in some environments, especially with the new auth headers feature in some situations.
Instead, it only returns the peer IP that Rauthy extracted for this request. This can be very helpful if you need to
configure the extraction, for instance when you are behind a reverse proxy or CDN.
758b31c
Users in the Admin UI can now be sorted by their `created_at` or `last_login` timestamp. Users that have never logged in will always be at the end of the list, since this value might be `undefined`.
4c41d64
- The button for requesting a password reset from inside a federated account view was disabled when it should not have been, and therefore did not send out requests. 39e585d
- A really hard-to-reproduce bug where the backend complained about an impossible mapping from Postgres `INT4` to Rust `i64` has been fixed. This came with the advantage of having a few more compile-time checked queries for the `users` table. 1740177
- The `/users/register` endpoint in the OpenAPI documentation has been fixed; it was referencing the wrong request body. 463e424
- The page title for a password reset now shows "New Account" for a fresh setup and "Password Reset" only when it actually is a reset. 84bbdf7
- The "User Registration" header on the page for an open user registration was only showing up when the domain was restricted. fc3417e
- Button labels were misplaced on Chrome-based browsers. 901eb55
- `/authorize` for logins had slightly too strict validation of the user password, so there was a chance that a new password a user had just set would be rejected, because some special characters were not allowed 9bb0a72
- when resources in the Admin UI were re-fetched, for instance because of a user deletion, the search input was not emptied 033db25
- the deprecated `x-xss-protection` header has been removed 5008438
This version fixes a potential DoS in rustls, which was found yesterday.
f4d65a6
In addition to the `/userinfo` endpoint specified in the OIDC spec, Rauthy implements an additional endpoint specifically for ForwardAuth situations. You can find it at `/auth/v1/oidc/forward_auth`, and it can be configured to append optional trusted headers with user information for downstream applications that do not support OIDC on their own.
The header names can be configured to match your environment. Please keep in mind that you should only use these if your legacy application does not support OIDC natively, because auth headers come with a lot of pitfalls when your environment is not configured properly.
# You can enable authn/authz headers which would be added to the response
# of the `/auth/v1/forward_auth` endpoint. With `AUTH_HEADERS_ENABLE=true`,
# the headers below will be added to authenticated requests. These could
# be used on legacy downstream applications, that don't support OIDC on
# their own.
# However, be careful when using this, since this kind of authn/authz has
# a lot of pitfalls out of the scope of Rauthy.
AUTH_HEADERS_ENABLE=true
# Configure the header names being used for the different values.
# You can change them to your needs, if you cannot easily change your
# downstream apps.
# default: x-forwarded-user
AUTH_HEADER_USER=x-forwarded-user
# default: x-forwarded-user-roles
AUTH_HEADER_ROLES=x-forwarded-user-roles
# default: x-forwarded-user-groups
AUTH_HEADER_GROUPS=x-forwarded-user-groups
# default: x-forwarded-user-email
AUTH_HEADER_EMAIL=x-forwarded-user-email
# default: x-forwarded-user-email-verified
AUTH_HEADER_EMAIL_VERIFIED=x-forwarded-user-email-verified
# default: x-forwarded-user-family-name
AUTH_HEADER_FAMILY_NAME=x-forwarded-user-family-name
# default: x-forwarded-user-given-name
AUTH_HEADER_GIVEN_NAME=x-forwarded-user-given-name
# default: x-forwarded-user-mfa
AUTH_HEADER_MFA=x-forwarded-user-mfa
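On the downstream side, a legacy app would then simply trust these headers. A sketch, assuming the app is exclusively reachable through the proxy that injects them; the roles serialization as a comma-separated list is an assumption for illustration:

```python
def user_from_headers(headers: dict) -> dict:
    """Extract the authenticated user from the forwarded auth headers.
    Only safe when every request passes through the injecting proxy!"""
    return {
        "user": headers["x-forwarded-user"],
        # assumption: roles are forwarded as a comma-separated list
        "roles": [r for r in headers.get("x-forwarded-user-roles", "").split(",") if r],
        "email": headers.get("x-forwarded-user-email"),
        "mfa": headers.get("x-forwarded-user-mfa") == "true",
    }
```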
- allow CORS requests for the GET PoW and the user sign up endpoints, to make it possible to build a custom UI without having a server side. At the same time, the method for requesting a PoW has been changed from `GET` to `POST`. This change has been made because, even though only in-memory, such a request creates data in the backend, which should never be done by a `GET`. Technically, this is a breaking change, but since it had only been available from the Rauthy UI itself because of the CORS header setting, I decided to only bump the patch, not the minor version. e4d935f
There is one breaking change, which could not be avoided. Because of a complete rewrite of the logic for how custom client logos (uploaded via Admin UI -> Clients -> Client Config -> Branding) are handled, you will lose custom logos uploaded in the past for a client. The reason is pretty simple: just take a look at Auto Image Optimization below.
Apart from this, quite a few small internal improvements have been made to make life easier for developers and new contributors. These changes are not listed in the release notes.
Rauthy v0.22.0 brings (beta) support for upstream authentication providers.
This is a huge thing. It will basically allow you to set up things like Sign In with Github into Rauthy. You could use
your Github account for signup and login, and manage custom groups, scopes, and so on for the users on Rauthy. This
simplifies the whole onboarding and login for normal users a lot.
You can add as many auth providers as you like. They are not statically configured, but actually configurable via the Admin UI. For security reasons, a user account can only be bound to one auth provider though. Additionally, when a user already exists inside Rauthy's DB, was not linked to an upstream provider, and then tries a login that produces an email conflict, the login will be rejected. It must be handled this way, because Rauthy cannot know for sure whether the upstream email has actually been verified. Simply accepting such a login could lead to account takeover, which is why the user is not allowed to log in in that case.
The only absolutely mandatory information Rauthy needs from an upstream provider is an `email` claim, either in the `id_token` or in the response from the userinfo endpoint. If it cannot find any `name` / `given_name` / `family_name`, it will simply insert `N/A` as values there. The user will get a warning on their next values update to provide that information.
The supported features (so far) are:
- auto OpenID Connect metadata discovery
- accept invalid TLS certs for upstream clients, for instance inside self-hosted environments
- provide a root certificate for an upstream client, for the same reason as above
- choose a template for the config (currently Google and Github exist)
- fully customized endpoint configuration if the provider does not support auto-lookup
- optional mfa claim mapping by providing a JSON parse regex: if the upstream provider returns information about whether the user has actually done at least a 2FA sign in, Rauthy can extract this information dynamically from the returned JSON. For instance, Rauthy itself adds an `amr` claim to the `id_token`, and you can find a value of `mfa` inside it if the user has done an MFA login. Github returns this information as well (which has been added to the template).
- optional `rauthy_admin` claim mapping: if you want to allow full rauthy admin access for a user depending on some value returned by the upstream provider, you can do a mapping just like for the mfa claim above.
- upload a logo for upstream providers: Rauthy does not (and never will) download a logo automatically from a provider, because this logo will be shown on the login page and must be trusted. If Rauthy downloaded an arbitrary logo from a provider, this could lead to code injection into the login page. This is why you need to manually upload a logo after configuration.
Note:
If you are testing this feature, please provide some feedback in #166 in any case, whether you hit errors or not. It would be nice to know about providers that already work and those that might need some adaptions. All OIDC providers should work already, because there we can rely on standards and RFCs, but all others might produce some edge cases, and I simply cannot test all of them myself.
If we know of new providers that need special values, those values would be helpful as well, because Rauthy could provide a UI template for them in the future, so please let me know.
The whole logic of how images are handled has been rewritten. Up until v0.21.1, custom client logos have been embedded as `data:` URLs because of easier handling. This however meant that we needed to allow `data:` sources in the CSP for `img-src`, which can be a security issue and should be avoided if possible.
This whole handling and logic has been rewritten. The CSP hardening has been finalized by removing the `data:` allowance for `img-src`. You can still upload SVG / JPG / PNG images under the client branding (and for the new auth providers). In the backend, Rauthy will actually parse the image data, convert the images to the optimized `webp` format, scale the original down and save 2 different versions of it. The first version will be saved internally to fit into 128x128px for possible later use, the second one even smaller. The smaller version is the one actually being displayed on the login page for clients and auth providers.
This optimization reduces the payload sent to clients during the login by a lot, if the image has not been manually optimized beforehand. Client logos will typically be in the range of ~5kB now, while the auth provider ones will usually be less than 1kB.
With the new config variable `PEER_IP_HEADER_NAME`, you can specify a custom header name which will be used for extracting the client's IP address. For instance, if you are running Rauthy behind a Cloudflare proxy, you will usually only see the IP of the proxy itself in the `X-FORWARDED-FOR` header. However, Cloudflare adds a custom header called `CF-Connecting-IP` to the request, which contains the IP you are looking for. Since it is very important for rate limiting and blacklisting that Rauthy knows the client's IP, this can now be customized.
# Can be set to extract the remote client peer IP from a custom header name
# instead of the default mechanisms. This is needed when you are running
# behind a proxy which does not set the `X-REAL-IP` or `X-FORWARDED-FOR` headers
# correctly, or for instance when you proxy your requests through a CDN like
# Cloudflare, which adds custom headers in this case.
# For instance, if your requests are proxied through Cloudflare, you would
# set `CF-Connecting-IP`.
PEER_IP_HEADER_NAME="CF-Connecting-IP"
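The resulting lookup order can be sketched like this — illustrative only, not Rauthy's actual extraction code:

```python
def extract_peer_ip(headers: dict, socket_ip: str, custom_header: str = "") -> str:
    """Prefer a configured custom header (e.g. CF-Connecting-IP), then
    X-Forwarded-For, then the raw socket peer address."""
    if custom_header and headers.get(custom_header.lower()):
        return headers[custom_header.lower()]
    forwarded = headers.get("x-forwarded-for")
    if forwarded:
        return forwarded.split(",")[0].strip()  # first entry = original client
    return socket_ip
```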
For each client, you can now specify contacts and a URI, where the application is hosted. These values might be shown to users during login in the future. For Rauthy itself, the values will be set with each restart in the internal anti lockout rule. You can specify the contact via a new config variable:
# This contact information will be added to the `rauthy` client
# within the anti lockout rule with each new restart.
RAUTHY_ADMIN_EMAIL="[email protected]"
If you want to initiate a user registration from a downstream app, you might not want your users to be redirected to their Rauthy account page after they have initially set their password. To handle this, you can redirect them to the registration page with a `?redirect_uri=https%3A%2F%2Frauthy.example.com` query param appended. It will be saved in the backend state, and the user will be redirected to this URL instead of their account after they have set their password.
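Percent-encoding the redirect target as shown above can be done with the standard library; the registration page path used here is an assumption for illustration:

```python
from urllib.parse import quote

# Encode every character, including "://", like in the example above.
redirect_uri = quote("https://rauthy.example.com", safe="")
# hypothetical registration page path -- adjust to your deployment
registration_link = f"/auth/v1/users/register?redirect_uri={redirect_uri}"
```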
You can now overwrite the template i18n translations for the NewPassword and ResetPassword E-Mail templates. There is a whole new section in the config, and it can easily be done with environment variables:
#####################################
############ TEMPLATES ##############
#####################################
# You can overwrite some default email templating values here.
# If you want to modify the basic templates themselves, this is
# currently only possible with a custom build from source.
# The content however can mostly be set here.
# If the below values are not set, the default will be taken.
# New Password E-Mail
#TPL_EN_PASSWORD_NEW_SUBJECT="New Password"
#TPL_EN_PASSWORD_NEW_HEADER="New password for"
#TPL_EN_PASSWORD_NEW_TEXT=""
#TPL_EN_PASSWORD_NEW_CLICK_LINK="Click the link below to get forwarded to the password form."
#TPL_EN_PASSWORD_NEW_VALIDITY="This link is only valid for a short period of time for security reasons."
#TPL_EN_PASSWORD_NEW_EXPIRES="Link expires:"
#TPL_EN_PASSWORD_NEW_BUTTON="Set Password"
#TPL_EN_PASSWORD_NEW_FOOTER=""
#TPL_DE_PASSWORD_NEW_SUBJECT="Passwort Reset angefordert"
#TPL_DE_PASSWORD_NEW_HEADER="Passwort Reset angefordert für"
#TPL_DE_PASSWORD_NEW_TEXT=""
#TPL_DE_PASSWORD_NEW_CLICK_LINK="Klicken Sie auf den unten stehenden Link für den Passwort Reset."
#TPL_DE_PASSWORD_NEW_VALIDITY="Dieser Link ist aus Sicherheitsgründen nur für kurze Zeit gültig."
#TPL_DE_PASSWORD_NEW_EXPIRES="Link gültig bis:"
#TPL_DE_PASSWORD_NEW_BUTTON="Passwort Setzen"
#TPL_DE_PASSWORD_NEW_FOOTER=""
# Password Reset E-Mail
#TPL_EN_RESET_SUBJECT="New Password"
#TPL_EN_RESET_HEADER="New password for"
#TPL_EN_RESET_TEXT=""
#TPL_EN_RESET_CLICK_LINK="Click the link below to set a new password."
#TPL_EN_RESET_VALIDITY="This link is only valid for a short period of time for security reasons."
#TPL_EN_RESET_EXPIRES="Link expires:"
#TPL_EN_RESET_BUTTON="Reset Password"
#TPL_EN_RESET_FOOTER=""
#TPL_DE_RESET_SUBJECT="Passwort Reset angefordert"
#TPL_DE_RESET_HEADER="Passwort Reset angefordert für"
#TPL_DE_RESET_TEXT=""
#TPL_DE_RESET_CLICK_LINK="Klicken Sie auf den unten stehenden Link für den Passwort Reset."
#TPL_DE_RESET_VALIDITY="Dieser Link ist aus Sicherheitsgründen nur für kurze Zeit gültig."
#TPL_DE_RESET_EXPIRES="Link gültig bis:"
#TPL_DE_RESET_BUTTON="Passwort Zurücksetzen"
#TPL_DE_RESET_FOOTER=""
- UI: when a client name has been removed and saved, the input could show `undefined` in some cases 2600005
- The default path to TLS certificates inside the container image has been fixed in the deploy cfg template. This makes it possible now to start the container for testing with TLS without explicitly specifying the path manually. 3a04dc0
- The early Passkey implementations of the Bitwarden browser extension seem to have not provided all correct values, which made Rauthy complain because of non-RFC-compliant requests during Passkey sign-in. This error cannot really be reproduced. However, Rauthy tries to show more error information to the user in such a case. b7f94ff
- Don't use the reset password template text for "new-password emails" 45b4160
- Host `.well-known` additionally on root `/` 3c594f4
- Correctly show the `registration_endpoint` for dynamic client registration in the `openid-configuration` if it is enabled. 424fdd1
The access token's `sub` claim had the email as value beforehand. This was actually a bug. The `sub` of the access token and the ID token must be the exact same value. `sub` now correctly contains the user ID, which is 100% stable, even if the user changes their email address. This means, if you used the `sub` from the access token before to get the user's email, you need to pay attention now. The `uid` claim from the access token has been dropped, because this value is now in `sub`. Additionally, since many applications need the email anyway, it makes sense to have it inside the access token. For this purpose, if `email` is in the requested `scope`, it will be mapped to the `email` claim in the access token.
Rauthy should now be compliant with the mandatory part of the OIDC spec. A lot of additional things were already implemented many versions ago. The missing part was respecting some additional params during `GET /authorize`.
Rauthy now supports Dynamic Client Registration as defined here. Dynamic clients will always get a random ID, starting with `dyn$`, followed by a random alphanumeric string, so you can easily distinguish them in the Admin UI.
Whenever a dynamic client does a `PUT` on its own modification endpoint with the `registration_token` it received from the registration, the `client_secret` and the `registration_token` will be rotated, and the response will contain new ones, even if no other value has been modified. This is the only "safe" way to rotate secrets for dynamic clients in a fully automated manner. The secret expiration is not set on purpose, because this could easily cause trouble if not implemented properly on the client side. If you have a badly implemented client that does not catch the secret rotation, and only if you cannot fix this on the client side, maybe because it's not under your control, you may deactivate the auto rotation with `DYN_CLIENT_SECRET_AUTO_ROTATE=false`. Keep in mind that this reduces the security level of all dynamic clients.
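The client-side consequence can be sketched like this. This is a minimal illustration, not Rauthy code, and the `registration_access_token` response field name is an assumption borrowed from RFC 7592:

```python
# After every successful PUT to the client management endpoint, the stored
# credentials MUST be replaced, even if no other field was modified, because
# Rauthy rotates both values on each update.
def apply_rotation(stored: dict, put_response: dict) -> dict:
    updated = dict(stored)
    updated["client_secret"] = put_response["client_secret"]
    updated["registration_token"] = put_response["registration_access_token"]
    return updated

creds = {"client_id": "dyn$abc123", "client_secret": "old", "registration_token": "old"}
creds = apply_rotation(creds, {"client_secret": "s2", "registration_access_token": "t2"})
```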
Bot and spam protection is built in as well, in the best way I could think of. This is disabled if you set the registration endpoint to require a `DYN_CLIENT_REG_TOKEN`. Even though this option exists for completeness, it does not really make sense to me. If you need to communicate a token beforehand, you could just register the client directly. Dynamic clients are a tiny bit less performant than static ones, because we need one extra database round trip on successful token creation to make the spam protection work. However, if you do not set a `DYN_CLIENT_REG_TOKEN`, the registration endpoint would be just open to anyone. To me, this is the only configuration for dynamic client registration that makes sense, because only that is truly dynamic. The problem then is of course bots and spammers, because they can easily fill your database with junk clients. To counter this, Rauthy includes two mechanisms:
- hard rate limiting - After a dynamic client has been registered, another one can only be registered after 60 seconds (default, can be set with `DYN_CLIENT_RATE_LIMIT_SEC`) from the same public IP.
- auto-cleanup of unused clients - All clients that have been registered but never used will be deleted automatically 60 minutes after the registration (default, can be set with `DYN_CLIENT_CLEANUP_MINUTES`).
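Both mechanisms can be sketched roughly like this, using the documented defaults of 60 seconds and 60 minutes. This is illustrative only, not Rauthy's implementation:

```python
RATE_LIMIT_SEC = 60        # DYN_CLIENT_RATE_LIMIT_SEC default
CLEANUP_MINUTES = 60       # DYN_CLIENT_CLEANUP_MINUTES default

last_registration: dict[str, float] = {}  # public IP -> unix timestamp

def may_register(ip: str, now: float) -> bool:
    """Hard rate limiting: one registration per IP per RATE_LIMIT_SEC."""
    last = last_registration.get(ip)
    if last is not None and now - last < RATE_LIMIT_SEC:
        return False
    last_registration[ip] = now
    return True

def cleanup(clients: list[dict], now: float) -> list[dict]:
    """Auto-cleanup: drop clients never used within CLEANUP_MINUTES of registration."""
    return [
        c for c in clients
        if c["last_used"] is not None or now - c["registered_at"] < CLEANUP_MINUTES * 60
    ]
```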
There is a whole new section in the config:
#####################################
########## DYNAMIC CLIENTS ##########
#####################################
# If set to `true`, dynamic client registration will be enabled.
# Only activate this, if you really need it and you know, what
# you are doing. The dynamic client registration without further
# restriction will allow anyone to register new clients, even
# bots and spammers, and this may create security issues, if not
# handled properly and your users just login blindly to any client
# they get redirected to.
# default: false
#ENABLE_DYN_CLIENT_REG=false
# If specified, this secret token will be expected during
# dynamic client registrations to be given as a
# `Bearer <DYN_CLIENT_REG_TOKEN>` token. Needs to be communicated
# in advance.
# default: <empty>
#DYN_CLIENT_REG_TOKEN=
# The default token lifetime in seconds for a dynamic client,
# that will be set during the registration.
# This value can be modified manually after registration via
# the Admin UI like for any other client.
# default: 1800
#DYN_CLIENT_DEFAULT_TOKEN_LIFETIME=1800
# If set to 'true', client secret and registration token will be
# automatically rotated each time a dynamic client updates itself
# via the PUT endpoint. This is the only way that secret rotation
# could be automated safely.
# However, this is not mandatory by RFC and it may lead to errors,
# if the dynamic clients are not implemented properly to check for
# and update their secrets after they have done a request.
# If you get into secret-problems with dynamic clients, you should
# update the client to check for new secrets, if this is under your
# control. If you cannot do anything about it, you might set this
# value to 'false' to disable secret rotation.
# default: true
#DYN_CLIENT_SECRET_AUTO_ROTATE=true
# This scheduler will be running in the background, if
# `ENABLE_DYN_CLIENT_REG=true`. It will auto-delete dynamic clients
# that have been registered but not used within the
# `DYN_CLIENT_CLEANUP_MINUTES` threshold.
# Since a dynamic client should be used right away, this should never
# be a problem with "real" clients, that are not bots or spammers.
#
# The interval is specified in minutes.
# default: 60
#DYN_CLIENT_CLEANUP_INTERVAL=60
# The threshold for newly registered dynamic clients cleanup, if
# not being used within this timeframe. This is a helper to keep
# the database clean, if you are not using any `DYN_CLIENT_REG_TOKEN`.
# The threshold should be specified in minutes. Any client, that has
# not been used within this time after the registration will be
# automatically deleted.
#
# Note: This scheduler will only run, if you have not set any
# `DYN_CLIENT_REG_TOKEN`.
#
# default: 60
#DYN_CLIENT_CLEANUP_MINUTES=60
# The rate-limiter timeout for dynamic client registration.
# This is the timeout in seconds which will prevent an IP from
# registering another dynamic client, if no `DYN_CLIENT_REG_TOKEN`
# is set. With a `DYN_CLIENT_REG_TOKEN`, the rate-limiter will not
# be applied.
# default: 60
#DYN_CLIENT_RATE_LIMIT_SEC=60
This is a small UX improvement in some situations. If a downstream client needs a user to log in, and it somehow knows the user's E-Mail address, maybe because of an external initial registration, it may pass the correct value by appending the `login_hint` query param to the login redirect. If this is present, the login UI will pre-fill the E-Mail input field with the given value, which makes it one less step for the user to log in.
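A hypothetical downstream app could build the redirect like this. The endpoint path, client values, and address are placeholders:

```python
from urllib.parse import urlencode

# Append login_hint so the Rauthy login UI pre-fills the E-Mail field.
params = urlencode({
    "client_id": "my-client",
    "redirect_uri": "https://app.example.com/callback",
    "response_type": "code",
    "scope": "openid email",
    "login_hint": "user@example.com",
})
authorize_url = "https://rauthy.example.com/auth/v1/oidc/authorize?" + params
```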
- The `/userinfo` endpoint now correctly respects the `scope` claim from within the given `Bearer` token and provides more information. Depending on the `scope`, it will show the additional user values that were introduced with v0.20 49dd553
- respect `max_age` during `GET /authorize` and add `auth_time` to the ID token 9ca6970
- correctly work with `prompt=none` and `prompt=login` during `GET /authorize` 9964fa4
- Make it possible to use an insecure SMTP connection ef46414
- Implement OpenID Connect Dynamic Client Registration b48552e 12179c9
- respect `login_hint` during `GET /authorize` 963644c
- Fix the link to the latest version release notes in the UI, if an update is available e66e496
- Fix the access token `sub` claim (see breaking changes above) 29dbe26
- Fix a short route fallback flash during Admin UI logout 6787261
This is a small bugfix release. The temp migrations, which exist in v0.20 only to migrate existing database secrets to cryptr, were causing a crash at startup for a fresh installation. This is the only thing that has been fixed with this version. They are now simply ignored, and a warning is logged to the console at the very first startup.
This update is not backwards-compatible with any previous version, since it will modify the database under the hood. If you need to downgrade for whatever reason, you will only be able to do this by applying a database backup from an older version. Testing has been done and everything was fine in tests. However, if you are using Rauthy in production, I recommend taking a database backup, since any version <= v0.19 will not work with a v0.20+ database.
If you are upgrading from any earlier version, there is a manual action you need to perform, before you can start v0.20.0. If this has not been done, it will simply panic early and not start up. Nothing will get damaged.
The internal encryption of certain values has been changed. Rauthy now uses cryptr to handle these things, like mentioned below as well.
However, to make working with encryption keys easier and provide higher entropy, the format has changed.
You need to convert your currently used `ENC_KEYS` to the new format:
1. Install cryptr - https://github.com/sebadob/cryptr
If you have Rust available on your system, just execute:
cargo install cryptr --features cli --locked
Otherwise, pre-built binaries do exist:
Linux: https://github.com/sebadob/cryptr/raw/main/out/cryptr_0.2.2
Windows: https://github.com/sebadob/cryptr/raw/main/out/cryptr_0.2.2.exe
2. Execute:
cryptr keys convert legacy-string
3. Paste your current ENC_KEYS into the command line.
For instance, if you have
ENC_KEYS="bVCyTsGaggVy5yqQ/S9n7oCen53xSJLzcsmfdnBDvNrqQ63r4 q6u26onRvXVG4427/3CEC8RJWBcMkrBMkRXgx65AmJsNTghSA"
in your config, paste
bVCyTsGaggVy5yqQ/S9n7oCen53xSJLzcsmfdnBDvNrqQ63r4 q6u26onRvXVG4427/3CEC8RJWBcMkrBMkRXgx65AmJsNTghSA
If you provide your ENC_KEYS via a Kubernetes secret, you need to do a base64 decode first. For instance, if your secret looks something like this
ENC_KEYS: YlZDeVRzR2FnZ1Z5NXlxUS9TOW43b0NlbjUzeFNKTHpjc21mZG5CRHZOcnFRNjNyNCBxNnUyNm9uUnZYVkc0NDI3LzNDRUM4UkpXQmNNa3JCTWtSWGd4NjVBbUpzTlRnaFNB
Then decode via shell or any tool you like:
echo -n YlZDeVRzR2FnZ1Z5NXlxUS9TOW43b0NlbjUzeFNKTHpjc21mZG5CRHZOcnFRNjNyNCBxNnUyNm9uUnZYVkc0NDI3LzNDRUM4UkpXQmNNa3JCTWtSWGd4NjVBbUpzTlRnaFNB | base64 -d
... and paste the decoded value into cryptr
4. cryptr will output the correct format for either usage in config or as kubernetes secret again
5. Paste the new format into your Rauthy config / secret and restart.
Rauthy now expects the `ENC_KEYS` to be base64 encoded, and instead of being separated by whitespace, it expects them to be separated by `\n`.
If you don't want to use `cryptr`, you need to convert your current keys manually.
For instance, if you have
ENC_KEYS="bVCyTsGaggVy5yqQ/S9n7oCen53xSJLzcsmfdnBDvNrqQ63r4 q6u26onRvXVG4427/3CEC8RJWBcMkrBMkRXgx65AmJsNTghSA"
in your config, you need to convert the enc key itself, the value after the `/`, to base64, and then separate them with `\n`.
For instance, to convert `bVCyTsGaggVy5yqQ/S9n7oCen53xSJLzcsmfdnBDvNrqQ63r4`, split off the enc key part `S9n7oCen53xSJLzcsmfdnBDvNrqQ63r4` and encode it with base64:
echo -n 'S9n7oCen53xSJLzcsmfdnBDvNrqQ63r4' | base64
Then combine the result with the key id again to:
bVCyTsGaggVy5yqQ/UzluN29DZW41M3hTSkx6Y3NtZmRuQkR2TnJxUTYzcjQ=
Do this for every key you have. The `ENC_KEYS` should then look like this in the end:
ENC_KEYS="
bVCyTsGaggVy5yqQ/UzluN29DZW41M3hTSkx6Y3NtZmRuQkR2TnJxUTYzcjQ=
q6u26onRvXVG4427/M0NFQzhSSldCY01rckJNa1JYZ3g2NUFtSnNOVGdoU0E=
"
Important: Make sure not to add any newline characters or spaces when copying values around while doing the base64 encoding!
Rauthy can now push encrypted SQLite backups to a configured S3 bucket. The local backups to `data/backups/` still exist. If configured, Rauthy will now push backups from SQLite to an S3 storage and encrypt them on the fly. All this happens with the help of `cryptr`, which is a new crate of mine. Resource usage is minimal, even if the SQLite file is multiple GBs in size. The whole operation is done with streaming.
Rauthy can now automatically restore SQLite backups, either from a backup inside `data/backups/` locally, or by fetching an encrypted backup from an S3 bucket. You only need to set the new `RESTORE_BACKUP` environment variable at startup, and Rauthy will do the rest. No manual copying of files around. For instance, a local backup can be restored by setting `RESTORE_BACKUP=file:rauthy-backup-1703243039`, and an S3 backup with `RESTORE_BACKUP=s3:rauthy-0.20.0-1703243039.cryptr`.
To not show unexpected behavior at runtime, Rauthy will initialize and test a configured S3 connection at startup. If anything is not configured correctly, it will panic early. This way, when Rauthy starts and the tests are successful, you know it will be working during the backup process at night as well, and it will not crash and throw errors all night long, if you just had a typo somewhere.
The old (very naive) Proof-of-Work (PoW) mechanism for bot and spam protection has been migrated to make use of the spow crate, which is another new project of mine. With this implementation, the difficulty for PoWs a client must solve can be scaled up almost infinitely, while the time it takes to verify a PoW on the server side will always be O(1), no matter how high the difficulty was. `spow` uses a modified version of the popular Hashcash PoW algorithm, which is also being used in the Bitcoin blockchain.
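To illustrate the asymmetry, here is a toy Hashcash-style sketch. It is not spow's actual wire format or parameters: the client iterates nonces until a hash meets the difficulty target, while the server verifies with a single hash computation.

```python
import hashlib
from itertools import count

def solve(challenge: bytes, difficulty_bits: int) -> int:
    """Client side: brute-force a nonce; expected work grows with 2^difficulty_bits."""
    target = 1 << (256 - difficulty_bits)
    for nonce in count():
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify(challenge: bytes, difficulty_bits: int, nonce: int) -> bool:
    """Server side: one hash, O(1) regardless of difficulty."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
```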
A typical Rauthy deployment will have a finite amount of clients, roles, groups, scopes, and so on. The only thing that might scale endlessly are the users. Because of this, the users are now being cached inside their own separate cache, which can be configured and customized to fit the deployment's needs. You can now set the upper limit and the lifespan for cached users. This is one of the first upcoming optimizations as Rauthy gets closer to the first v1.0.0 release:
Up until now, it was possible to register the same E-Mail address multiple times by using uppercase characters. E-Mail is case-insensitive by definition, though. This version migrates all currently existing E-Mail addresses in the database to lowercase-only characters. From that point on, it will always convert any address to lowercase-only characters to avoid confusion and conflicts. This means, if you currently have the same address in your database with different casing, you need to resolve this issue manually. The migration function will throw an error in the console at startup if it finds such a conflict.
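The normalization rule can be sketched like this. This is an illustration of the behavior, not the actual migration code:

```python
def normalize_emails(emails: list[str]) -> tuple[list[str], list[str]]:
    """Lowercase all addresses; report addresses that exist with different casing."""
    seen: dict[str, str] = {}
    conflicts: list[str] = []
    for email in emails:
        lower = email.lower()
        if lower in seen and seen[lower] != email:
            conflicts.append(lower)   # must be resolved manually before migration
        seen.setdefault(lower, email)
    return list(seen), conflicts
```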
# The max cache size for users. If you can afford it memory-wise, make it possible to fit
# all active users inside the cache.
# The cache size you provide here should roughly match the amount of users you want to be able
# to cache actively. Depending on your setup (WebIDs, custom attributes, ...), this number
# will be multiplied internally by 3 or 4 to create multiple cache entries for each user.
# default: 100
CACHE_USERS_SIZE=100
# The lifespan of the users cache in seconds. Cache eviction on updates will be handled automatically.
# default: 28800
CACHE_USERS_LIFESPAN=28800
The scope `profile` now additionally adds the following claims to the ID token (if they exist for the user):
- `locale`
- `birthdate`

The new scope `address` adds:
- `address` in JSON format

The new scope `phone` adds:
- `phone`
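The mapping can be illustrated like this. This is a sketch of the behavior described above, not Rauthy's source:

```python
SCOPE_CLAIMS = {
    "profile": ["locale", "birthdate"],
    "address": ["address"],
    "phone": ["phone"],
}

def extra_claims(requested_scope: str, user: dict) -> dict:
    """Collect additional ID token claims for the requested scopes,
    but only those that actually exist for the user."""
    claims = {}
    for scope in requested_scope.split():
        for claim in SCOPE_CLAIMS.get(scope, []):
            if user.get(claim) is not None:
                claims[claim] = user[claim]
    return claims
```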
- new POST `/events` API endpoint which serves archived events d5d4b01
- new admin UI section to fetch and filter archived events ece73bb
- backend + frontend dependencies have been updated to the latest versions everywhere
- The internal encryption handling has been changed to a new project of mine called cryptr. This makes the whole value encryption way easier, more stable, and future-proof, because values have their own tiny header data with the minimal amount of information needed. It not only simplifies encryption key rotations, but will also make encryption algorithm migrations really easy in the future. d6c224e c3df3ce
- Push encrypted SQLite backups to S3 storage fa0e496
- S3 connection and config test at startup 701c785
- Auto-Restore SQLite backups either from file or S3 65bbfea
- Migrate to spow ff579f6
- Pre-Compute CSP's for all HTML content at build-time and get rid of the per-request nonce computation 8fd2c99
- set `noindex, nofollow` globally via headers and meta tag -> Rauthy as an Auth provider should never be indexed 38a2a52
- push users into their own, separate, configurable cache 3137927
- Convert to lowercase E-Mail addresses, always, everywhere a137e96 2467227
- add additional user values matching OIDC default claims fca0c13
- add `address` and `phone` default OIDC scopes and additional values for `profile` 3d497a2
- A visual bugfix appeared on Apple systems because of the slightly bigger font size. This made the live events look a bit ugly and characters jumping in a line where they should never end up. 3b56b50
- An incorrect URL has been returned for the `end_session_endpoint` in the OIDC metadata 3caabc9
- Make the `ItemTiles` UI component used for roles, groups, and so on, wrap nicely on smaller screens 6f83e4a
- Show the corresponding E-Mail address for `UserPasswordReset` and `UserEmailChange` events in the UI 7dc4794
- Invalidate all user sessions after a password reset to have a more uniform flow and better UX 570dea6
- Add an additional foreign key constraint on the
user_attr_values
table to cascade rows on user deletion for enhanced stability during user deletions 1dc730c
- Fix a bug when an existing user with already registered passkeys would not be able to use the password reset functionality correctly when opened in a fully fresh browser 24af03c
- Fix cache evictions of existing user sessions after a user has been deleted while having an active session ed76418
- Fix some default values in the reference config docs not having the correct default documented a7101b2
This is a small bugfix and compatibility release regarding password reset E-Mails.
The main reason for this release are problems with the password reset via E-Mail when users are using Microsoft (the only service provider where these problems can be replicated 100% of the time) and / or Outlook. These users were unable to use password reset links at all. The reason is a "feature" from Microsoft: they fully scan the users' E-Mails and even follow all links inside them. The problem is that the binding cookie from Rauthy will go to the Microsoft servers instead of the user, making it unusable and basically invalidating everything before the user has any chance to use the link properly.
The usage of this config variable is highly discouraged, and you should avoid it if you can. However, big enterprises are moving slowly (and often not at all). This new config variable can be used as a last resort to make it usable by giving up some security.
# This value may be set to 'true' to disable the binding cookie checking
# when a user uses the password reset link from an E-Mail.
#
# When using such a link, you will get a so-called binding cookie. This
# happens on the very first usage of such a reset link. From that moment on,
# you will only be able to access the password reset form with this very
# device and browser. This is just another security mechanism and prevents
# someone else who might be passively sniffing network traffic from extracting
# the (unencrypted) URI from the header and just using it, before the user
# has a chance to fill out the form. This is a mechanism to prevent
# account takeovers during a password reset.
#
# The problem however are companies (e.g. Microsoft) who scan their customers'
# E-Mails and even follow links and so on. They call it a "feature". The
# problem is, that their servers get this binding cookie and the user will be
# unable to use this link himself. The usage of this config option is highly
# discouraged, but since everything moves very slowly in big enterprises and
# you cannot change your E-Mail provider quickly, you can use it to just make
# it work for the moment and deal with it later.
#
# default: false
#UNSAFE_NO_RESET_BINDING=false
- implement `UNSAFE_NO_RESET_BINDING` like mentioned above 1f4a146
- prettify the expiry timestamp in some E-Mails 1173fa0
- It was possible to get an "Unauthorized Session" error during a password reset, if it has been initiated by an admin and / or from another browser. e5d1d9d
- Correctly set `ML_LT_PWD_FIRST` - set the default value in minutes (like documented) instead of seconds. New default is `ML_LT_PWD_FIRST=4320` e9d1b56
This is the main new feature for this release.
With the now accepted `RSA` signatures for DPoP tokens, the ephemeral, dynamic clients, and the basic serving of `webid` documents for each user, Rauthy should now fully support Solid OIDC. This feature just needs some more real-world testing with already existing applications, though.
These 3 new features are all opt-in, because a default deployment of Rauthy will most probably not use them at all. There is a whole new section in the config called `EPHEMERAL CLIENTS` where you can configure these things. The 3 main variables you need to set are:
# Can be set to 'true' to allow the dynamic client lookup via URLs as
# 'client_id's during authorization_code flow initiation.
# default: false
ENABLE_EPHEMERAL_CLIENTS=true
# Can be set to 'true' to enable WebID functionality like needed
# for things like Solid OIDC.
# default: false
ENABLE_WEB_ID=true
# If set to 'true', 'solid' will be added to the 'aud' claim from the ID token
# for ephemeral clients.
# default: false
ENABLE_SOLID_AUD=true
Afterward, the only "manual" thing you need to do is to add a custom scope called `webid` once via the Admin UI.
This new config variable solves a possible chicken and egg problem, if you use a self-hosted Matrix server and Rauthy as its OIDC provider at the same time. If both services are offline, for instance because of a server reboot, you would not be able to start them.
- The Matrix Server would panic because it cannot connect to and verify Rauthy
- Rauthy would panic because it cannot connect to Matrix
Setting this variable to `true` solves this issue, and Rauthy would only log an error in that case instead of panicking. The panic is the preferred behavior though, because it makes 100% sure that Rauthy will actually be able to send out notifications to configured endpoints.
- ~20% smaller binary size by stripping unnecessary symbols 680d5e5
- Accept `DPoP` tokens with `RSA` validations daade41
- Dynamically build up and serve custom scopes in the `/.well-known/openid-configuration` 904cf09
- A much nicer way of generating both DEV and PROD TLS certificates by using Nioca has been integrated into the project itself, as well as the Rauthy Book 463bf8a a14beda
- Implement opt-in ephemeral clients 52c84c2 617908b
- Implement opt-in basic `webid` document serving bca77f5 1e32f6f 79cb836 55433f4 3cdf81c
- For developers, a new CONTRIBUTING.md guide has been added to get people started quickly 7c38142 411393f
- Add a new config variable `EVENT_MATRIX_ERROR_NO_PANIC` to only throw an error instead of panic on Matrix connection errors 4fc3382
- Not really a bug nor a feature, but the "App Version Update" watcher now remembers a sent notification for an update and will only notify after a restart again. be19735
- In a HA deployment, the new integrated health watcher from v0.17.0 could return false positives 93d75d5 9bbaeb2
- In v0.18.0 a bug has been introduced because of internal JWKS optimizations. This produced cache errors when trying to deserialize cached JWKS after multiple requests. 3808423
This is a rather small release. The main reason it is coming so early is the license change.
With this release, the license of Rauthy is changed from AGPLv3 to Apache 2.0. The Apache license is way more permissive and makes the integration with other open source projects and software a lot easier.
The first steps towards DPoP token support have been made.
It is marked as experimental though, because the other authentication methods have been tested and verified with various real-world applications already. This is not the case for DPoP yet.
Additionally, the only supported alg for DPoP proofs is EdDSA for now. The main reason being that I am using JetBrains IDEs, and the Rust plugin for both IDEA and RustRover is currently broken in conjunction with the `rsa` crate (and some others), which makes writing code with them a nightmare. RSA support is prepared as much as possible though, and I hope they will fix this bug soon, so it can be included.
If you have or use a DPoP application, I would really appreciate testing with Rauthy and to get some feedback, so I can make the whole DPoP flow more resilient as well.
Please note that Authorization Code binding to a DPoP key is also not yet supported; only the `/token` endpoint accepts and validates the `DPoP` header for now.
- Typos have been changed in docs and config 51dc320
- Listen Scheme was not properly set when only HTTP was selected exclusively c002fbe
- Resource links in default error HTML template did not work properly in all locations 5965d9a
This is a pretty huge update with a lot of new features.
With the release of v0.17.0, Rauthy's container images are now multi-platform. Both `linux/amd64` and `linux/arm64` are supported. This means you can "just use it" now on Raspberry Pi and others, or on Ampere architecture from cloud providers, without the need to compile it yourself.
Rauthy now produces events in all different kinds of situations. These can be used for auditing, monitoring, and so on. You can configure quite a lot for them in the new `EVENTS / AUDIT` section in the Rauthy config.
These events are persisted in the database, and they can be fetched in real time via a new Server-Sent Events (SSE) endpoint `/auth/v1/events/stream`. There is a new UI component in the Admin UI that uses the same events stream.
In case of a HA deployment, Rauthy will use one additional DB connection (all the time) from the connection pool to distribute these events via Postgres listen / notify to the other members. This makes for a much simpler deployment, and there is no real need to deploy additional resources like NATS or something like that. This keeps the setup easier and therefore more fault-tolerant.
You should at least set `EVENT_EMAIL` now, if you update from an older version.
The new events can be sent to a Slack Webhook or a Matrix server.
The Slack integration uses the simple (legacy) Slack Webhooks and can be configured with `EVENT_SLACK_WEBHOOK`:
# The Webhook for Slack Notifications.
# If left empty, no messages will be sent to Slack.
#EVENT_SLACK_WEBHOOK=
The Matrix integration can connect to a Matrix server and room. This setup requires you to provide a few more variables:
# Matrix variables for event notifications.
# `EVENT_MATRIX_USER_ID` and `EVENT_MATRIX_ROOM_ID` are mandatory.
# Depending on your Matrix setup, additionally one of
# `EVENT_MATRIX_ACCESS_TOKEN` or `EVENT_MATRIX_USER_PASSWORD` is needed.
# If you log in to Matrix with User + Password, you may use `EVENT_MATRIX_USER_PASSWORD`.
# If you log in via OIDC SSO (or just want to use a session token you can revoke),
# you should provide `EVENT_MATRIX_ACCESS_TOKEN`.
# If both are given, the `EVENT_MATRIX_ACCESS_TOKEN` will be preferred.
#
# If left empty, no messages will be sent to Matrix.
# Format: `@<user_id>:<server address>`
#EVENT_MATRIX_USER_ID=
# Format: `!<random string>:<server address>`
#EVENT_MATRIX_ROOM_ID=
#EVENT_MATRIX_ACCESS_TOKEN=
#EVENT_MATRIX_USER_PASSWORD=
# Optional path to a PEM Root CA certificate file for the Matrix client.
#EVENT_MATRIX_ROOT_CA_PATH=tls/root.cert.pem
# May be set to disable the TLS validation for the Matrix client.
# default: false
#EVENT_MATRIX_DANGER_DISABLE_TLS_VALIDATION=false
You can configure the minimum event level which would trigger it to be sent:
# The notification level for events. Works the same way as a logging level. For instance:
# 'notice' means send out notifications for all events with the 'notice' level or higher.
# Possible values:
# - info
# - notice
# - warning
# - critical
#
# default: 'warning'
EVENT_NOTIFY_LEVEL_EMAIL=warning
# default: 'notice'
EVENT_NOTIFY_LEVEL_MATRIX=notice
# default: 'notice'
EVENT_NOTIFY_LEVEL_SLACK=notice
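The levels behave like log levels, which can be expressed as a simple threshold comparison (illustrative sketch, not Rauthy code):

```python
LEVELS = {"info": 0, "notice": 1, "warning": 2, "critical": 3}

def should_notify(event_level: str, configured_level: str) -> bool:
    """Send a notification if the event is at or above the configured level."""
    return LEVELS[event_level] >= LEVELS[configured_level]
```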
Up until version 0.16, a failed login would artificially extend the time the client needed to wait for the result until it ended up in the region of the median time to log in successfully. This was already a good thing to do to prevent username enumeration. However, this has been improved a lot now.
When a client does too many invalid logins, the time it needs to wait until it may do another try increases with each failed attempt. The important thing here is that this is not bound to a user, but instead to the client's IP.
This makes sure that an attacker cannot just lock a user's account by doing invalid logins and thereby kind of DoS the user. Additionally, Rauthy can detect brute-force or DoS attempts independently of a user's account.
There are certain thresholds at 7, 10, 15, 20, and 25 invalid logins, when a client's IP will get fully blacklisted (explained below) for a certain amount of time. This is a good DoS and even DDoS prevention.
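As a rough sketch of this escalation (the blacklist thresholds are the documented ones; the delay handling is simplified to a single label):

```python
BLACKLIST_THRESHOLDS = {7, 10, 15, 20, 25}

def next_action(failed_logins_from_ip: int) -> str:
    """Decide what happens after another invalid login from the same IP."""
    if failed_logins_from_ip in BLACKLIST_THRESHOLDS:
        return "blacklist"   # IP is fully blacklisted for a certain amount of time
    return "delay"           # the next attempt is artificially delayed
```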
This is a new HTTP middleware which checks the client's IP against an internal blacklist. This middleware is the very first thing that is executed and just returns an HTML page to a blacklisted client with the information about the blacklisting and the expiry.
This blacklist is in-memory only, to be as fast as possible, so it can actually handle brute-force and DoS attacks in the best way possible while consuming the least amount of resources.
Currently, IPs may get blacklisted in two ways:
- Automatically when exceeding the above-mentioned thresholds for invalid logins in a row
- Manually via the Admin UI

Blacklisted IPs always have an expiry and will get removed from the blacklist automatically. Both actions will trigger one of the new Rauthy events and send out notifications.
This is a simple new cron job which rotates the JSON Web Key Set (JWKS) automatically for enhanced security, in case one of the keys should ever be leaked at some point.
By default, it runs every first day of the month. This can be adjusted in the config:
# JWKS auto rotate cronjob. This will (by default) rotate all JWKs every
# 1. day of the month. If you need smaller intervals, you may adjust this
# value. For security reasons, you cannot fully disable it.
# In a HA deployment, this job will only be executed on the current cache
# leader at that time.
# Format: "sec min hour day_of_month month day_of_week year"
# default: "0 30 3 1 * * *"
JWK_AUTOROTATE_CRON="0 30 3 1 * * *"
The authentication and authorization system has been fully reworked and improved.
The new middleware and the way the client's access rights are checked in each endpoint are far less error-prone than before. The whole process has been simplified a lot, which indirectly improves security:
- CSRF tokens are now checked automatically if the request method is anything other than a `GET`
- `Bearer` tokens are no longer allowed to access the Admin API
- A new `ApiKey` token type has been added (explained below)
- Only a single authn/authz struct is needed to validate each endpoint
- The old permission extractor middleware was removed, which also improves performance a bit
This new API-Key type can be used if you need to access the Rauthy API from other applications.
Previously, you needed to create a "user" for an application if you wanted to access the API, which was counter-intuitive and cumbersome.
The new API-Keys handle this task now. They are static keys with an optional expiry date and fine-grained access rights. You should only grant them permissions to the resources they actually need, to further improve your backend security.
They can easily be created, configured, and revoked / deleted in the Admin UI at any time.
IMPORTANT: The API can no longer be accessed with Bearer tokens! If you have used this method until now, you need to switch to the new API Keys.
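As an illustration, using such a key from a script could look like the following. The URL, key value, and endpoint path are hypothetical; the Admin UI generates a ready-to-copy request example for every key you create, which you should prefer over this sketch:

```shell
# Hypothetical values - the real key value is shown exactly once at creation
# in the Admin UI, together with a ready-to-copy request example.
RAUTHY_URL='https://iam.example.com'
API_KEY='my_backend$s3cr3t...'

# Illustrative request shape (kept as a comment; check the example the
# Admin UI generates for your key for the exact header scheme):
#   curl -H "Authorization: API-Key ${API_KEY}" "${RAUTHY_URL}/auth/v1/users"
```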
In the configuration for each individual OIDC client, you can find a new `FORCE MFA` switch.
If this new option is activated for a client, it will only issue authentication codes for those users that have at least one passkey registered.
This makes it possible to force MFA for all your different applications directly from Rauthy, without the need to check the `amr` claim in the ID token and configure all of this manually downstream. Most of the time, you may not even have control over the client itself, and you are basically out of luck if the client does not have its own "force MFA" integration.
CAUTION: This is mentioned in the UI as well, but this new force MFA option can of course only enforce MFA for the `authorization_code` flow! With other flows, there simply is no MFA that could be checked / enforced.
Since we now have an events system, there is a new scheduled cron job which checks for the latest available Rauthy version.
This job runs once every 8 hours and does a single poll to the GitHub Releases API. It looks for the latest available Rauthy version that is not a prerelease or otherwise unstable. If it finds a version higher than the currently running one, a new event will be generated. Additionally, you will now see the current Rauthy version in the UI, with a small indicator next to it if a stable update is available.
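The check the job performs can be sketched in shell against a sample response shape (the endpoint URL and the `tag_name` field follow the public GitHub Releases API; the parsing and comparison here are purely illustrative, not Rauthy's code):

```shell
# Sample shape of a GitHub Releases API response (trimmed); the real job
# polls https://api.github.com/repos/sebadob/rauthy/releases/latest
latest='{"tag_name":"v0.17.0","prerelease":false}'
current="v0.16.1"

# Extract the tag without external tools (illustration only)
tag=${latest#*\"tag_name\":\"}
tag=${tag%%\"*}

# 'sort -V' orders version strings; the last line is the newest
newest=$(printf '%s\n%s\n' "$current" "$tag" | sort -V | tail -n1)
if [ "$newest" != "$current" ]; then
  echo "update available: $tag"
fi
```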
- Support for `linux/arm64` 2abb071
- New events and auditing 758dda6 488f9de 7b95acc 34d8888 f70f0b2 5f0c9c9 a9af494 797dad5 b338f26
- A new `rauthy-notify` crate has been added, which implements the above-mentioned Slack and Matrix integrations for event notifications 8767389
- Increasing login timeouts and delays after invalid logins 7f7a675 5d19d2d
- IpBlacklist Middleware d69845e
- IPBlacklist Admin UI Component c76c208
- JWKS Auto-Rotate cd087eb
- New Authentication Middleware a097a5d
- ApiKey Admin UI Component 53ffe49
- OIDC Client `FORCE_MFA` 3efdcce
- Rauthy Version Checker aea7794 41b4c9c
- Show a link to the account page after a password reset if the redirection does not work ace4daf
- Send out E-Mail change confirmation E-Mails to both the old and the new address when an admin changes the address 8e97e31 97197db
- Allow CORS requests to the `.well-known` endpoints to make the OIDC config lookup from an external UI possible b57656f
- Include the `alg` in the JWKS response for the `openid-configuration` c9073cb
- The E-Mail HTML templates have been optically adjusted a bit to make them "prettier" 926de6e
- A user may not have been updated correctly in the cache when the E-Mail was changed. 8d9cdce
- With v0.16, it was possible to be unable to switch back to a password account type from passkey, when the account had already been a password account before, had updated its password in the past, and therefore had entries in the DB for `last_recent_passwords` if the password policy required them. 7a965a2
- When you were using a password manager that filled out the username 'again' in the login form after the additional password request input showed up, it could reset the form in some browsers. 09d1d3a
- The `ADMIN_ACCESS_SESSION_ONLY` config variable was removed. It became obsolete with the introduction of the new ApiKey type. b28d8ba
This is a small bugfix release.
If you had an active and valid session from v0.15.0, updated to v0.16.0, and your session was still valid, it did not have valid information about the peer IP. This information is mandatory for a new feature introduced with v0.16.0, and the warning logging in that case contained an unwrap for the remote IP (which can never be null from v0.16.0 on), which would then panic.
This is a tiny bugfix release that just gets rid of the possible panic and prints `UNKNOWN` into the logs instead.
- Print `UNKNOWN` into the logs for non-readable / non-existing peer IPs 6dfd0f4
This version does modify the database and is therefore not backwards compatible with any previous version. If you need to downgrade from v0.15 and above, you will only be able to do so by applying a DB backup.
It is now possible to limit the lifetime of a user.
You can set an optional expiry date and time for each user. This enables temporary access, for instance in
a support case where an external person needs access for a limited time.
Once a user has expired, a few things will happen:
- The user will not be able to log in anymore.
- As soon as the scheduler notices the expiry, it will automatically invalidate all possibly existing sessions and refresh tokens for this user. How often the scheduler runs can be configured with the `SCHED_USER_EXP_MINS` variable. The default is 'every 60 minutes', a good balance between security and resource usage. However, if you want this to be very strict, you can adjust it down to something like '5 minutes'.
- If configured, expired users can be cleaned up automatically after the configured time. By default, expired users will not be cleaned up automatically. You can enable this feature with the `SCHED_USER_EXP_DELETE_MINS` variable.
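For example, a stricter setup with automatic cleanup enabled might look like this — the variable names come from this release, while the values and the exact cleanup semantics are illustrative and should be checked against the reference config:

```shell
# Check for expired users every 5 minutes instead of the default 60
SCHED_USER_EXP_MINS=5
# Clean up expired users automatically after the configured time
# (disabled by default when this variable is unset; value is illustrative)
SCHED_USER_EXP_DELETE_MINS=4320
```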
With this new config variable, you can define whether users with at least one valid registered passkey will have expiring passwords (depending on the current password policy), or not.
By default, these users do not need to renew their passwords as the password policy would otherwise require.
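The variable in question is `WEBAUTHN_NO_PASSWORD_EXPIRY` (it also appears in the commit list for this release). A sketch of how it might be set — the default and accepted values should be verified against the reference config:

```shell
# If 'true', users with at least one valid registered passkey do not
# have expiring passwords, regardless of the current password policy.
WEBAUTHN_NO_PASSWORD_EXPIRY=true
```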
When a new session is created, the peer / remote IP will be extracted and saved with the session information. This peer IP can be checked on each access, and the session can be rejected if the IP has changed, which forces the user to log in again.
This will of course trigger if a user is "on the road" and uses different wireless networks along the way, but it prevents session hijacking and usage from another machine if an attacker has full access to the victim's machine and can even steal the encrypted session cookie and(!) the CSRF token saved inside the local storage. This is very unlikely, since the attacker would already need full access to the machine, but it is just another security mechanism.
Whether this IP should be validated on each access can be configured with the new `SESSION_VALIDATE_IP` variable. By default, peer IPs will be validated, and a different IP for an existing session will be rejected.
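A minimal config sketch — the variable name and the default come from this release description:

```shell
# Validate the peer / remote IP for each session access and reject the
# session if the IP has changed, forcing a new login.
# default: true
SESSION_VALIDATE_IP=true
```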
Rauthy starts up a second HTTP server for the Prometheus metrics endpoint and the (optional) SwaggerUI.
By default, the SwaggerUI behind the `Docs` link in the Admin UI will no longer work, unless you configure the SwaggerUI to be publicly available. This just reduces the possible attack surface by default.
New config variables are:
# To enable or disable the additional HTTP server to expose the /metrics endpoint
# default: true
#METRICS_ENABLE=true
# The IP address to listen on for the /metrics endpoint.
# You do not want to expose your metrics on a publicly reachable endpoint!
# default: 0.0.0.0
#METRICS_ADDR=0.0.0.0
# The port to listen on for the /metrics endpoint.
# You do not want to expose your metrics on a publicly reachable endpoint!
# default: 9090
#METRICS_PORT=9090
# If the Swagger UI should be served together with the /metrics route on the internal server.
# It is then reachable via:
# http://METRICS_ADDR:METRICS_PORT/docs/v1/swagger-ui/
# (default: true)
#SWAGGER_UI_INTERNAL=true
# If the Swagger UI should be served externally as well. This makes the link in the Admin UI work.
#
# CAUTION: The Swagger UI is open and does not require any login to be seen!
# Rauthy is open source, which means anyone could just download it and see on their own,
# but it is still a good idea to just expose less information.
# (default: false)
#SWAGGER_UI_EXTERNAL=false
For all registered passkeys, the User Verification (UV) state is now being saved and optionally checked. You can see the status for each device with the new fingerprint icon behind its name in the UI.
New config variable:
# This feature can be set to 'true' to force User verification during the Webauthn ceremony.
# UV will be true if the user does not only need to verify their presence by touching the key,
# but also needs to provide proof that they know (or are) some secret, via a PIN or a biometric
# key for instance. With UV, we have a true MFA scenario, whereas UV == false (user presence
# only) would be a 2FA scenario (with password).
#
# Be careful with this option, since Android and some special combinations of OS + browser do
# not support UV yet.
# (default: false)
#WEBAUTHN_FORCE_UV=true
This is the biggest new feature for this release. It allows user accounts to be "passkey only".
A passkey only account does not have a password. It works only with registered passkeys with forced additional User Verification (UV).
Take a look at the updated documentation for further information:
[Passkey Only Accounts](https://sebadob.github.io/rauthy/config/fido.html#passkey-only-accounts)
If an already existing user decides to change the E-Mail linked to the account, a new verification flow will be started:
- A user changes the E-Mail in the Account view.
- The E-Mail will not be updated immediately, but:
- A verification mail will be sent to the new address with an expiring magic link.
- After the user has clicked the link in the mail, the new address will be verified.
- Once the user verifies the new E-Mail:
- The address will finally be updated in the user's profile.
- Information E-Mails about the change will be sent to both the old and the new address
This new variable allows you to add an E-Mail subject prefix to each mail that Rauthy sends out. This makes it easier for external users to identify the E-Mail and what it is about, in case the name 'Rauthy' does not mean anything to them.
# Will be used as the prefix for the E-Mail subject for each E-Mail that will be sent out to a client.
# This can be used to further customize your deployment.
# default: "Rauthy IAM"
EMAIL_SUB_PREFIX="Rauthy IAM"
In a few scenarios, for instance wrong client information for the authorization code flow or a non-existing or expired magic link, Rauthy now does not return the generic JSON error response, but actually a translated HTML page which informs the user in a nicer looking way about the problem. This provides a way better user experience especially for all Magic Link related requests.
This is an additional internal check which compares the version of the DB during startup and the App version of Rauthy itself. This makes it possible to have way more stable and secure migrations between versions in the future and helps prevent user error during upgrades.
- legacy MFA app pieces and DB columns have been cleaned up dce148a 423db7a
- user expiry feature bab6bfc e63d1ce 566fff1
- `WEBAUTHN_NO_PASSWORD_EXPIRY` config variable 7e16b6e
- `SESSION_VALIDATE_IP` config variable 828bcd2 26924ff
- `/metrics` endpoint 085d412
- `WEBAUTHN_FORCE_UV` config variable
- Passkey Only Accounts 6c2406c
- New E-Mail verification flow 260169d
- New nicely looking error page template 0e476ab
- Rust v1.73 update 43f5b61
- `EMAIL_SUB_PREFIX` config variable af85839
- Rauthy DB Version check d2d9271
- Cleanup schedulers in HA_MODE run on leader only 4881dd8
- Updated documentation and book 5bbaae9
- Updated dependencies 0f11923
This version does modify the database and is therefore not backwards compatible with any previous version. If you need to downgrade from v0.15 and above, you will only be able to do so by applying a DB backup.
This release is all about new Passkey Features.
- A user is not limited to just 2 keys anymore
- During registration, you can (and must) provide a name for the passkey, which helps you identify and distinguish your keys, when you register multiple ones.
- The `exclude_credentials` feature is now properly used and working. This makes sure that you cannot register the same passkey multiple times.
- The Passkeys / MFA section in the Admin UI has been split from the User Password section to be more convenient to use
Commits:
- New Passkey Features d317d90 cd5d086 61464bf e70c5a7 bc75610 49e9630
- New config option `WEBAUTHN_FORCE_UV` to optionally reject passkeys without user verification c35ecc0
- Better `/token` endpoint debugging capabilities 87f7969
This is the last v0.14 release.
The next v0.15 will be an "in-between-release" which will do some migration preparations for Webauthn
/ FIDO 2 updates and features coming in the near future.
- Removed duplicate `sub` claims from JWT ID tokens a35db33
- Small UI improvements:
- Show a loading indicator when doing a password change
- The loading animation was changed from JS to a CSS animation abd0a06
- Upgrades to actix-web 4.4 + rustls 0.21 (and all other minor upgrades) 070a453
This release mostly finishes the translation / i18n part of rauthy for now and adds some other
smaller improvements.
Container images will be published on ghcr.io as well from now on. Since I am on the free plan there, storage is limited and very old versions will be deleted at some point in the future.
However, I will keep pushing all of them to Docker Hub as well, where you should be able to find older versions too. ghcr.io is just preferred, because it is not rate limited as heavily as the Docker Hub free tier.
- Added translations for E-Mails 11544ac
- Made all UI parts work on mobile (except for the Admin UI itself) a4f31f2 4ee3540
- Images will be published on Github Container Registry as well from now on cc15ea9
- All dependencies have been updated in various places. This just keeps everything up to date and fixes some potential security issues in third-party libraries
- UI: UX Improvements to Webauthn Login when the user lets the request time out 7683133
- UI: i18n for password reset page 27e620e
- Keep track of the user's chosen language in the database 7517693
- Make user language editable in the admin ui 77886a9 1061fc2
- Update the user's language in different places:
- Language switch in the Account page
- Fetch the user's chosen language from User Registration
- Selector from Registration in Admin UI 5ade849
- Fix for the new LangSelector component on mobile view
- Add default translations (english) for the PasswordPolicy component for cases when it is used in a non-translated context 2f8a627
Bugfix release for the Dockerfiles and Pagination in some places
- Split the Dockerfiles into separate files, because the `ARG` introduced access rights problems 25e3918
- Small bugfix for the pagination component in some places 317dbad
This release is mostly about UI / UX improvements and some smaller bugfixes.
- UI: Client side pagination added 60a499a
- Browsers' native language detection 884f599
- `sqlx` v0.7 migration 7c7a380
- Docker container image split adb3971
- Target database validation before startup a68c652
- UI: I18n (english and german currently) for: Index, Login, Logout, Account, User Registration 99e454e dd2e9ae 7b401f6
- UI: Custom component to overwrite the browsers' native language 4208fdb
- Some Readme and Docs updates e2ebef9 d0a71d6
- The `sub` claim was added to the ID token and will contain the user's UID 6b0a8b0
- UI: small visual bugfixes and improvements in different places 459bdbd 57a5600
- UI: All navigation routes can be reached via their own link now. This means a refresh of the page does not return to the default anymore 4999995 7f0ac0b cadaa40
- UI: added an index to the users table to prevent a rendering bug after changes e35ffbe
- General code and project cleanup 4531ae9 782bb9a 0c5ad02 e453142 85fbafe
- Created a `justfile` for easier development handling 4aa5b99 1489efe
- UI: fixed some visual bugs and improved the rendering with larger default browser fonts 45334fd
This is just a small bugfix release.
- UI Bugfix: Client flow updates were not applied via UI 6fe8fbc
- Improved container security: Rauthy is now based on a scratch container image by default. This improves security quite a lot, since you cannot even get a shell into the container anymore, and it reduces the image size by another ~4 MB. This makes it difficult, however, if you need to debug something, for instance when you use a SQLite deployment. For this reason, you can append `-debug` to a tag and you will get an Alpine based version just like before. 1a7e79d
- More stable HA deployment: In some specific K8s HA deployments, the default HTTP/2 keep-alives from redhac were not good enough, and we got broken pipes in some environments, which caused the leader to change often. This has been fixed in redhac-0.6.0 as well, which at the same time makes Rauthy HA really stable now.
- The client branding section in the UI has better responsiveness for smaller screens dfaa23a
- For a HA deployment, cache modifications are now using proper HA cache functions. These default back to the single instance functions in non-HA mode since redhac-0.6.0 7dae043
- All static UI files are now precompressed with gzip and brotli to use even fewer resources 10ad51a
- CSP script-src unsafe-inline was removed in favor of custom nonces 7de918d
- UI migrated to Svelte 4 21f73ab
Rauthy goes open source