Releases: sebadob/rauthy
v0.27.2
Changes
Even though it is not recommended at all, it is now possible to opt out of the `refresh_token` `nbf` claim and disable it. By default, a `refresh_token` will not be valid before `access_token_lifetime - 60 seconds`, but some (bad) client implementations try to refresh `access_token`s while they are still valid for a long time. To opt out, you get a new config variable:
# By default, `refresh_token`s will have an `nbf` claim, making them valid
# at `access_token_lifetime - 60 seconds`. Any usage before this time will
# result in invalidation of not only the token itself, but also all other
# linked sessions and tokens for this user to prevent damage in case a client
# leaked the token by accident.
# However, there are bad / lazy client implementations that do not respect
# either `nbf` in the `refresh_token`, or the `exp` claim in `access_token`
# and will refresh early while the current access_token is still valid.
# This does not only waste resources and time, but also makes it possible
# to have multiple valid `access_token`s at the same time for the same
# session. You should only disable the `nbf` claim if you have a good
# reason to do so.
# If disabled, the `nbf` claim will still exist, but always set to *now*.
# default: false
DISABLE_REFRESH_TOKEN_NBF=false
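To illustrate what a well-behaved client should do instead, here is a minimal sketch in Rust (standard library only; the `nbf` value is assumed to be the already-decoded Unix-timestamp claim from the `refresh_token`, token parsing is out of scope here):

```rust
use std::time::{SystemTime, UNIX_EPOCH};

fn unix_now() -> u64 {
    SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock is set before the Unix epoch")
        .as_secs()
}

/// Only send the `refresh_token` once its `nbf` claim has passed;
/// refreshing earlier invalidates the whole session (see above).
fn may_refresh(nbf: u64) -> bool {
    unix_now() >= nbf
}

fn main() {
    // Hypothetical claim: the token becomes valid 60 seconds from now,
    // i.e. 60 seconds before the current `access_token` expires.
    let nbf = unix_now() + 60;
    assert!(!may_refresh(nbf)); // too early - do not refresh yet
}
```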
Bugfix
The Rauthy deployment could get stuck in Kubernetes when you were running an HA cluster with Postgres as your database of choice. The cache Raft re-join sometimes had an issue because of a race condition, which required a full restart of the cluster. This has been fixed in hiqlite-0.3.2 and the dependency has been bumped.
v0.27.1
Bugfix
With the big migration to Hiqlite under the hood, a bug was introduced with v0.27.0 that made it possible to end up with a `NULL` value for the password policy after an update, which would result in errors further down the road after a restart, because the policy could not be read again.
This version fixes the issue itself and checks at startup whether the database needs a fix because of an already existing `NULL` value. In that case, the default password policy will be inserted correctly at startup.
EDIT:
Please don't use this release if Postgres is your database of choice:
- With Postgres, you could not get into the `NULL` situation
- The check for `NULL` at startup does not work with Postgres as your main DB and will cause issues. A 0.27.2 will come soon which fixes everything for both.
v0.27.0
Breaking
Single Container Image
The different versions have been combined into a single container image. The image with the -lite
extension does not exist anymore and all deployments can be done with just the base image. Since Postgres was the default before, you need to change your image name when you do not use Postgres as your database, just remove the -lite
.
Dropped sqlx SQLite in favor of Hiqlite
From this version on, Rauthy will not support a default SQLite anymore. Instead, it will use Hiqlite, which uses SQLite under the hood and is another project of mine.
Hiqlite will bring lots of advantages. It will use a few more resources than a direct, plain SQLite, but only ~10-15 MB of memory for small instances. In return, you will get higher consistency and writes to the database that never block during high traffic. It also reduces the latency for all read statements by a huge margin compared to the previous solution. Rauthy always enables the `dashboard` feature for Hiqlite, which will be available over the Hiqlite API port / server.
The biggest feature it brings, though, is the ability to run a HA cluster without any external dependencies. You can use Hiqlite on a single instance and it will "feel" the same as just a SQLite, but you can also spin up 3 or 5 nodes to get High Availability without the need for an external database. It uses the Raft algorithm to sync data while still using just a simple SQLite under the hood. The internal design of Hiqlite has been optimized a lot to provide way higher throughput than you would normally get with a direct connection to a SQLite file. If you are interested in the internals, take a look at the hiqlite/README.md or hiqlite/ARCHITECTURE.md.
With these features, Hiqlite will always be the preferred database solution for Rauthy. You should really not spin up a dedicated Postgres instance just for Rauthy, because it would use too many resources for no good reason. If you have a Postgres up and running anyway, you can still opt in to use it.
This was a very big migration and tens of thousands of lines of code have been changed. All tests are passing and a lot of additional checks have been included. I could not find any leftover issues or errors, but please let me know if you find something.
If you are using Rauthy with Postgres as your database, you don't need to do that much. If you use SQLite, no worries: Rauthy can handle the migration for you after you adopt a few config variables. Even if you do the auto-migration from an existing SQLite to Hiqlite, Rauthy will keep the original SQLite file in place for additional safety, so you don't need to worry about a backup (as long as you set the config correctly, of course). The next bigger release will maybe do cleanup work when everything has proven to work fine, or you can do it manually.
New / Changed Config Variables
There are quite a few new config variables and some old ones are gone. What you need to set for the migration is explained below.
#####################################
############## BACKUPS ###############
#####################################
# When the auto-backup task should run.
# Accepts cron syntax:
# "sec min hour day_of_month month day_of_week year"
# default: "0 30 2 * * * *"
HQL_BACKUP_CRON="0 30 2 * * * *"
# Local backups older than the configured days will be cleaned up after
# the backup cron job.
# default: 30
#HQL_BACKUP_KEEP_DAYS=30
# Backups older than the configured days will be cleaned up locally
# after each `Client::backup()` and the cron job `HQL_BACKUP_CRON`.
# default: 3
#HQL_BACKUP_KEEP_DAYS_LOCAL=3
# If you ever need to restore from a backup, the process is simple.
# 1. Have the cluster shut down. This is probably the case anyway, if
# you need to restore from a backup.
# 2. Provide the backup file name on S3 storage with the
# HQL_BACKUP_RESTORE value.
# 3. Start up the cluster again.
# 4. After the restart, make sure to remove the HQL_BACKUP_RESTORE
# env value.
#HQL_BACKUP_RESTORE=
# Access values for the S3 bucket where backups will be pushed to.
#HQL_S3_URL=https://s3.example.com
#HQL_S3_BUCKET=my_bucket
#HQL_S3_REGION=example
#HQL_S3_PATH_STYLE=true
#HQL_S3_KEY=s3_key
#HQL_S3_SECRET=s3_secret
#####################################
############# CLUSTER ###############
#####################################
# Can be set to 'k8s' to try to split off the node id from the hostname
# when Hiqlite is running as a StatefulSet inside Kubernetes.
#HQL_NODE_ID_FROM=k8s
# The node id must exist in the nodes and there must always be
# at least a node with ID 1
# Will be ignored if `HQL_NODE_ID_FROM=k8s`
HQL_NODE_ID=1
# All cluster member nodes.
# To make setting the env var easy, the values are separated by `\s`
# while nodes are separated by `\n`
# in the following format:
#
# id addr_raft addr_api
# id addr_raft addr_api
# id addr_raft addr_api
#
HQL_NODES="
1 localhost:8100 localhost:8200
"
# Sets the limit when the Raft will trigger the creation of a new
# state machine snapshot and purge all logs that are included in
# the snapshot.
# Higher values can achieve more throughput in very write heavy
# situations but will end up in more disk usage and longer
# snapshot creations / log purges.
# default: 10000
#HQL_LOGS_UNTIL_SNAPSHOT=10000
# Secrets for Raft internal authentication as well as for the API.
# These must be at least 16 characters long and you should provide
# different ones for both variables.
HQL_SECRET_RAFT=SuperSecureSecret1337
HQL_SECRET_API=SuperSecureSecret1337
# You can either parse `ENC_KEYS` and `ENC_KEY_ACTIVE` from the
# environment with setting this value to `env`, or parse them from
# a file on disk with `file:path/to/enc/keys/file`
# default: env
#HQL_ENC_KEYS_FROM=env
#####################################
############ DATABASE ###############
#####################################
# Max DB connections for the Postgres pool.
# Irrelevant for Hiqlite.
# default: 20
#DATABASE_MAX_CONN=20
# If specified, the currently configured Database will be DELETED and
# OVERWRITTEN with a migration from the given database with this variable.
# Can be used to migrate between different databases.
#
# !!! USE WITH CARE !!!
#
#MIGRATE_DB_FROM=sqlite:data/rauthy.db
#MIGRATE_DB_FROM=postgresql://postgres:123SuperSafe@localhost:5432/rauthy
# Hiqlite is the default database for Rauthy.
# You can opt out and use Postgres instead by providing a proper
# `DATABASE_URL=postgresql://...` and setting `HIQLITE=false`
# default: true
#HIQLITE=true
# The data dir hiqlite will store raft logs and state machine data in.
# default: data
#HQL_DATA_DIR=data
# The file name of the SQLite database in the state machine folder.
# default: hiqlite.db
#HQL_FILENAME_DB=hiqlite.db
# If set to `true`, all SQL statements will be logged for debugging
# purposes.
# default: false
#HQL_LOG_STATEMENTS=false
# The size of the pooled connections for local database reads.
#
# Do not confuse this with a pool size for network databases, as it
# is much more efficient. You can't really translate between them,
# because it depends on many things, but assuming a factor of 10 is
# a good start. This means, if you needed a (read) pool size of 40
# connections for something like a postgres before, you should start
# at a `read_pool_size` of 4.
#
# Keep in mind that this pool is only used for reads and writes will
# travel through the Raft and have their own dedicated connection.
#
# default: 4
#HQL_READ_POOL_SIZE=4
# Enables immediate flush + sync to disk after each Log Store Batch.
# The situations where you would need this are very rare, and you
# should use it with care.
#
# The default is `false`, and a flush + sync will be done in 200ms
# intervals. Even if the application should crash, the OS will take
# care of flushing left-over buffers to disk and no data will get
# lost. If something worse happens, you might lose the last 200ms
# of commits (on that node, not the whole cluster). This is only
# important to know for single instance deployments. HA nodes will
# sync data from other cluster members after a restart anyway.
#
# The only situation where you might want to enable this option is
# when you are on a host that might lose power out of nowhere, and
# it has no backup battery, or when your OS / disk itself is unstable.
#
# `sync_immediate` will greatly reduce the write throughput and put
# a lot more pressure on the disk. If you have lots of writes, it
# can pretty quickly kill your SSD for instance.
#HQL_SYNC_IMMEDIATE=false
# The password for the Hiqlite dashboard as Argon2ID hash.
# '123SuperMegaSafe' in this example
#
# You only need to provide this value if you need to access the
# Hiqlite debugging dashboard for whatever reason. If no password
# hash is given, the dashboard will not be reachable.
#HQL_PASSWORD_DASHBOARD=JGFyZ29uMmlkJHY9MTkkbT0xOTQ1Nix0PTIscD0xJGQ2RlJDYTBtaS9OUnkvL1RubmZNa0EkVzJMeTQrc1dxZ0FGd0RyQjBZKy9iWjBQUlZlOTdUMURwQkk5QUoxeW1wRQ==
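Judging from the example value above, which base64-decodes to a `$argon2id$...` PHC string, the hash is expected base64-encoded (an assumption derived from that value, not verified here). A minimal sketch to generate such a value, assuming the `argon2` and `base64` crates:

```rust
use argon2::{
    password_hash::{rand_core::OsRng, PasswordHasher, SaltString},
    Argon2,
};
use base64::{engine::general_purpose::STANDARD, Engine as _};

fn main() {
    // Generate a random salt and hash the (example) dashboard password.
    let salt = SaltString::generate(&mut OsRng);
    let hash = Argon2::default()
        .hash_password(b"123SuperMegaSafe", &salt)
        .expect("password hashing failed")
        .to_string();
    // Base64-encode the PHC string for HQL_PASSWORD_DASHBOARD.
    println!("HQL_PASSWORD_DASHBOARD={}", STANDARD.encode(hash));
}
```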
Migration (Postgres)
If you use Rauthy with Postgres and want to keep doing that, the only thing you need to do is opt out of Hiqlite.
HIQLITE=false
Migration (SQLite)
If you use Rauthy with SQLite and want to migrate to Hiqlite, you can utilize all the above-mentioned new config variables, but the following ones are mandatory.
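A rough sketch of what this could look like, pieced together from the variables documented above (treat it as an assumption, not an authoritative migration guide):

```
# Hiqlite is already the default, just make sure it is not disabled
HIQLITE=true
# One-time auto-migration from the existing SQLite file
# !!! USE WITH CARE !!! - and remove it again afterward
MIGRATE_DB_FROM=sqlite:data/rauthy.db
```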
Backups
Backups for the internal database work in the same way as before, but bec...
v0.26.2
Bugfix
This patch reverts an unintended change to the `user:group` inside the container images.
This will fix issues with migrations from existing deployments using SQLite with manually managed volume access rights.
v0.26.0 changed from `scratch` to `gcr.io/distroless/cc-debian12:nonroot` as the base image for the final deployment. The distroless image however sets a user of `65532` by default, while it had always been `10001:10001` before.
The affected versions are
- 0.26.0
- 0.26.1

Starting from this release (0.26.2), the user inside the container will be the same one as before: `10001:10001`
839724001710cb095f39ff7df6be00708a01801a
Images
Postgres
ghcr.io/sebadob/rauthy:0.26.2
SQLite
ghcr.io/sebadob/rauthy:0.26.2-lite
v0.26.1
Changes
Upstream Auth Provider Query Params
Some upstream auth providers need custom query params appended to their authorization endpoint URL.
Rauthy will now accept URLs in the auth provider config with pre-defined query params, as long as they
don't interfere with OIDC default params.
Optional Log Format as JSON
To make automatic parsing of logs possible (to some extent), you now have the ability to change the logging output from
text to json with the following new config variable:
# You can change the log output format to JSON, if you set:
# `LOG_FMT=json`.
# Keep in mind that some logs will include escaped values,
# for instance when `Text` already logs a JSON in debug level.
# Some other logs like an Event for instance will be formatted
# as Text anyway. If you need to auto-parse events, please consider
# using an API token and listen to them actively.
# default: text
#LOG_FMT=text
Bugfix
- With the relaxed requirements for password resets for new users, a bug was introduced that would prevent a user from registering a passkey-only account when doing the very first "password reset".
de2cfea
Images
Postgres
ghcr.io/sebadob/rauthy:0.26.1
SQLite
ghcr.io/sebadob/rauthy:0.26.1-lite
v0.26.0
Breaking
Deprecated API Routes Removal
The following API routes have been deprecated in the last version and have now been fully removed:
/oidc/tokenInfo
/oidc/rotateJwk
Base Container Image Change
With this version, Rauthy switches to the rootless version of distroless images.
If you managed your file permissions inside the container manually (for instance for a SQLite file), you may need to adapt your config. The user ID inside the container is not `10001` anymore but `65532` instead.
Cache Config
The whole `CACHE` section in the config has been changed:
#####################################
############## CACHE ################
#####################################
# Can be set to 'k8s' to try to split off the node id from the hostname
# when Hiqlite is running as a StatefulSet inside Kubernetes.
#HQL_NODE_ID_FROM=k8s
# The node id must exist in the nodes and there must always be
# at least a node with ID 1
# Will be ignored if `HQL_NODE_ID_FROM=k8s`
HQL_NODE_ID=1
# All cluster member nodes.
# To make setting the env var easy, the values are separated by `\s`
# while nodes are separated by `\n`
# in the following format:
#
# id addr_raft addr_api
# id addr_raft addr_api
# id addr_raft addr_api
#
# 2 nodes must be separated by 2 `\n`
HQL_NODES="
1 localhost:8100 localhost:8200
"
# If set to `true`, all SQL statements will be logged for debugging
# purposes.
# default: false
#HQL_LOG_STATEMENTS=false
# If given, these keys / certificates will be used to establish
# TLS connections between nodes.
#HQL_TLS_RAFT_KEY=tls/key.pem
#HQL_TLS_RAFT_CERT=tls/cert-chain.pem
#HQL_TLS_RAFT_DANGER_TLS_NO_VERIFY=true
#HQL_TLS_API_KEY=tls/key.pem
#HQL_TLS_API_CERT=tls/cert-chain.pem
#HQL_TLS_API_DANGER_TLS_NO_VERIFY=true
# Secrets for Raft internal authentication as well as for the API.
# These must be at least 16 characters long and you should provide
# different ones for both variables.
HQL_SECRET_RAFT=SuperSecureSecret1337
HQL_SECRET_API=SuperSecureSecret1337
# You can either parse `ENC_KEYS` and `ENC_KEY_ACTIVE` from the
# environment with setting this value to `env`, or parse them from
# a file on disk with `file:path/to/enc/keys/file`
# default: env
#HQL_ENC_KEYS_FROM=env
/auth/v1/health Response Change
The response for /auth/v1/health has been changed.
If you did not care about the response body, there is nothing to do for you. The body itself now returns different values:
struct HealthResponse {
db_healthy: bool,
cache_healthy: bool,
}
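Assuming standard serde field naming, a healthy instance would serialize to something like:

```json
{
  "db_healthy": true,
  "cache_healthy": true
}
```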
Changes
ZH-Hans Translations
Translations for `ZH-Hans` have been added to Rauthy. These exist in all places other than the Admin UI, just like the already existing translations.
Support for deep-linking client apps like Tauri
Up until v0.25, it was not possible to set the Allowed Origin for a client in a way that Rauthy would allow access, for instance, from inside a Tauri app. The reason is that Tauri (and most probably others) do not set an HTTP / HTTPS scheme in the `Origin` header, but something like `tauri://`.
Rauthy now has support for such situations with adjusted validation for the Origin values and a new config variable to allow specific, additional `Origin` schemes:
# To bring support for applications using deep-linking, you can set custom URL
# schemes to be accepted when present in the `Origin` header. For instance, a
# Tauri app would set `tauri://` instead of `https://`.
#
# Provide the value as a space separated list of Strings, like for instance:
# "tauri myapp"
ADDITIONAL_ALLOWED_ORIGIN_SCHEMES="tauri myapp"
More stable health checks in HA
For HA deployments, the `/health` checks are more stable now.
The quorum is also checked, which will detect network segmentation. To achieve this and still make it possible to use the health check in situations like Kubernetes rollouts, a delay has been added, which will simply always return true after a fresh app start. This initial delay makes it possible to use the endpoint inside Kubernetes and will not prevent the scheduling of the other nodes. This solves a chicken-and-egg problem.
You usually do not need to care about it, but this value can of course be configured:
# Defines the time in seconds after which the `/health` endpoint
# includes HA quorum checks. The initial delay solves problems
# like Kubernetes StatefulSet starts that include the health
# endpoint in the scheduling routine. In these cases, the scheduler
# will not start other Pods if the first does not become healthy.
#
# This is a chicken-and-egg problem which the delay solves.
# There is usually no need to adjust this value.
#
# default: 30
#HEALTH_CHECK_DELAY_SECS=30
Migration to ruma
To send out Matrix notifications, Rauthy was using the `matrix-sdk` up until now. This crate however comes with a huge list of dependencies and at the same time pushes too few updates. I had quite a few issues with it in the past because it was blocking me from updating other dependencies.
To solve this issue, I decided to drop `matrix-sdk` in favor of `ruma`, which it is using under the hood anyway. With `ruma`, I needed to do a bit more work myself since it's more low level, but at the same time I was able to reduce the list of total dependencies Rauthy has by ~90 crates.
This made it possible to finally bump other dependencies and to start the internal switch from redhac to Hiqlite for caching.
IMPORTANT:
If you are using a self-hosted homeserver or anything other than the official matrix.org servers for Matrix event notifications, you must set a newly introduced config variable:
# URL of your Matrix server.
# default: https://matrix.org
#EVENT_MATRIX_SERVER_URL=https://matrix.org
Internal Migration from `redhac` to `hiqlite`
The internal cache layer has been migrated from `redhac` to Hiqlite.
A few weeks ago, I started rewriting the whole persistence layer from scratch in a separate project. `redhac` is working fine, but it has some issues I wanted to get rid of:
- its network layer is way too complicated, which makes it very hard to maintain
- there is no "sync from other nodes" functionality, which is not a problem on its own, but leads to the following point
- for security reasons, the whole cache is invalidated when a node has a temporary network issue
- it is very sensitive to even short-term network issues, and leader changes happen too often for my taste
I started the Hiqlite project some time ago to get rid of these things and have
additional features. It is outsourced to make it generally usable in other contexts as well.
This first step will also make it possible to only have a single container image in the future without the need to
decide between Postgres and SQLite via the tag.
Local Development
The way the container images are built, the way the builder for the images is built, and also the whole justfile have been changed quite a bit. This will not concern you if you are not working with the code.
The way of wrapping and executing everything inside a container, even during local dev, became tedious to maintain, especially for different architectures, and I wanted to get rid of the maintenance burden, because it did not provide that many benefits. Postgres and Mailcrab will of course still run in containers, but the code itself for backend and frontend will be built and executed locally.
The reason I started doing all of this inside containers in the first place was to avoid needing a few additional tools installed locally to make everything work, but the high maintenance was not worth it in the end. This change reduced the size of the Rauthy builder image from 2x ~4.5 GB down to 1x ~1.9 GB, which already is a big improvement. Additionally, you don't need to download the builder image at all anymore when you are not creating a production build, while beforehand you always needed the builder image in any case.
To counter the necessary dev tools installation and first-time setup, I instead added a new just recipe called `setup`, which will do everything necessary, as long as you have the prerequisites available (which you needed before as well anyway, apart from `npm`). This has been updated in the CONTRIBUTING.md.
Bugfix
- The `refresh_token` grant type on the `/token` endpoint did not set the original `auth_time` for the `id_token`, but instead calculated it from `now()` each time.
aa6e07d
Images
Postgres
ghcr.io/sebadob/rauthy:0.26.0
SQLite
ghcr.io/sebadob/rauthy:0.26.0-lite
v0.25.0
Changes
Token Introspection
The introspection endpoint has been fixed regarding the encoding, as mentioned in the bugfixes below. Additionally, authorization has been added to this endpoint. It will now make sure that the request also includes an `AUTHORIZATION` header with either a valid `Bearer JwtToken` or `Basic ClientId:ClientSecret` to prevent token scanning.
The way of authorization on this endpoint is not really standardized, so you may run into issues with your client
application. If so, you can disable the authentication on this endpoint with
# Can be set to `true` to disable authorization on `/oidc/introspect`.
# This should usually never be done, but since the auth on that endpoint is not
# really standardized, you may run into issues with your client app. If so,
# please open an issue about it.
# default: false
DANGER_DISABLE_INTROSPECT_AUTH=true
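For clients that do support it, a conforming request could look like this minimal sketch (hypothetical host and client credentials; assuming the `reqwest` and `tokio` crates, and the `/auth/v1` prefix used by the other routes in these notes):

```rust
use std::collections::HashMap;

#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    let mut params = HashMap::new();
    // The token to introspect, sent as Form data.
    params.insert("token", "eyJhbGciOi...");

    let res = reqwest::Client::new()
        .post("https://iam.example.com/auth/v1/oidc/introspect")
        // `Basic ClientId:ClientSecret` - a valid `Bearer` JWT works as well
        .basic_auth("my_client", Some("123SuperSafeSecret"))
        .form(&params)
        .send()
        .await?;
    println!("{}", res.text().await?);
    Ok(())
}
```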
API Routes Normalization
In preparation for a clean v1.0.0, some older API routes have been fixed regarding their casing and naming.
The "current" or old routes and names will be available for exactly one release and will be phased out afterward
to have a smooth migration, just in case someone uses these renamed routes.
- /oidc/tokenInfo -> /oidc/introspect
- /oidc/rotateJwk -> /oidc/rotate_jwk
Since I don't like kebab-case, most API routes are written in snake_case, with 2 exceptions that follow RFC namings:
- openid-configuration
- web-identity

All the *info routes like userinfo or sessioninfo are not snake_case on purpose, just to match other IdPs and RFCs a bit more.
There is not a single camelCase anymore in the API routes, to avoid confusion and issues in situations where you could, for instance, mistake an uppercase I for a lowercase l. The current camelCase endpoints only exist for a smoother migration and will be phased out with the next bigger release.
Config Read
The current behavior of reading in config variables was not working as intended.
Rauthy reads the `rauthy.cfg` as a file first and the environment variables afterward. This makes it possible to configure it in any way you like and even mix and match.
However, the idea was that any existing variables in the environment should overwrite config variables and therefore have the higher priority. This was exactly the other way around up until v0.24.1 and has been fixed now.
How Rauthy now parses config variables correctly:
- read `rauthy.cfg`
- read env vars
- all existing env vars overwrite existing vars from `rauthy.cfg` and therefore have the higher priority
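A hypothetical example, with `LOG_LEVEL` standing in for any variable; the environment value now wins, where up until v0.24.1 the file value would have:

```
# in rauthy.cfg
LOG_LEVEL=info

# in the process environment - this one wins, effective value: debug
LOG_LEVEL=debug
```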
Bugfixes
- The token introspection endpoint was only accepting requests with `Json` data, when it should have instead accepted `Form` data.
Images
Postgres
ghcr.io/sebadob/rauthy:0.25.0
SQLite
ghcr.io/sebadob/rauthy:0.25.0-lite
v0.24.1
The last weeks were mostly spent updating the documentation to include all the new features that came to Rauthy in the last months. Some small things are still missing, but it's almost there.
Apart from that, this is an important update because it fixes some security issues in external dependencies.
Security
Security issues in external crates have been fixed:
- moderate matrix-sdk-crypto
- moderate openssl
- low vodozemac
Changes
S3_DANGER_ACCEPT_INVALID_CERTS renamed
The config var S3_DANGER_ACCEPT_INVALID_CERTS has been renamed to S3_DANGER_ALLOW_INSECURE. This is not a breaking change right now: Rauthy will accept both versions for the time being, but the deprecated value will be removed after v0.24.
S3 Compatibility
Quite a few internal dependencies have been updated to the latest versions (where it made sense).
One of them was my own cryptr. It was using the `rusty-s3` crate beforehand, which is a nice one when working with S3 storage, but it had 2 issues. One of them is that it uses pre-signed URLs. That is not a flaw in the first place, just a design decision to become network agnostic. The other one is that it signed the URL in a way that would make the request incompatible with Garage. I migrated `cryptr` to my own s3-simple, which solves these issues.
This update brings compatibility with the `garage` S3 storage for Rauthy's S3 backup feature.
Bugfixes
- Fetching the favicon (and possibly other images) was forbidden because of the new CSRF middleware from some weeks ago.
76cd728
- The UI and the backend had a difference in input validation for `given_name` and `family_name`, which could make some buttons in the UI get stuck. This has been fixed; the validation for these 2 is now the same everywhere, and at least 1 single character is required.
19d512a
Images
Postgres
ghcr.io/sebadob/rauthy:0.24.1
SQLite
ghcr.io/sebadob/rauthy:0.24.1-lite
v0.24.0
Many thousands of lines have been refactored internally to provide better maintainability in the future. These refactorings are not mentioned separately, since they did not introduce anything new. Apart from this, there are only small changes, but one of them is an important breaking change.
Breaking
TRUSTED_PROXIES Config Variable
The new config variable TRUSTED_PROXIES introduces a breaking change in some cases.
If you are running Rauthy with either PROXY_MODE=true or with a set PEER_IP_HEADER_NAME value, you must add TRUSTED_PROXIES to your existing config before updating.
This value specifies trusted proxies in the above situation. The reason is that Rauthy extracts the client IP from
the HTTP headers, which could be spoofed if they are used without validating the source. This was not a security issue,
but gave an attacker the ability to blacklist or rate-limit IPs that do not belong to him.
When PROXY_MODE=true or PEER_IP_HEADER_NAME is set, Rauthy will now only accept direct connections from IPs specified with TRUSTED_PROXIES and block all other requests. You can provide a list of CIDRs to have full flexibility for your deployment.
# A `\n` separated list of trusted proxy CIDRs.
# When `PROXY_MODE=true` or `PEER_IP_HEADER_NAME` is set,
# these are mandatory to be able to extract the real client
# IP properly and safely to prevent IP header spoofing.
# All requests with a different source will be blocked.
#TRUSTED_PROXIES="
#192.168.14.0/24
#10.0.0.0/8
#"
Note:
Keep in mind that you must include IPs for direct health checks, like for instance from inside Kubernetes, if they are not being sent via a trusted proxy.
Features
User Registration Domain Blacklisting
If you are using an open user registration without domain restriction, you now have the possibility to blacklist certain E-Mail provider domains. Even if your registration endpoint allows registrations, this blacklist will be checked and requests with these domains will be denied.
This is mainly useful if you want to prevent malicious E-Mail providers from registering and spamming your database.
# If `OPEN_USER_REG=true`, you can blacklist certain domains
# on the open registration endpoint.
# Provide the domains as a `\n` separated list.
#USER_REG_DOMAIN_BLACKLIST="
#example.com
#evil.net
#"
Changes
Even though it was not needed so far, the OIDC userinfo endpoint now has a proper POST handler in addition to the existing GET to comply with the RFC.
05a8793
Bugfixes
- The upstream crate `curve25519-dalek` had a moderate timing variability security issue
8bb4069
Images
Postgres
ghcr.io/sebadob/rauthy:0.24.0
SQLite
ghcr.io/sebadob/rauthy:0.24.0-lite
v0.23.5
Upstream IdP Locale Fix
This patch fixes a regression introduced by the fix for special character encoding in upstream IdP JWT tokens. A panic was possible when the upstream IdP did not include a `locale` in the `id_token`.
ea24e7e
481c9b3
Images
Postgres
ghcr.io/sebadob/rauthy:0.23.5
SQLite
ghcr.io/sebadob/rauthy:0.23.5-lite