VDD22Q2
- logging is a killer feature which users' business cases depend upon
- some users would rather see clients wait than lose a log record
- log clients are single-threaded
- for multiple queries, basic structure building is done per log client
ideas
- binary log: nice to have, not a solution
- solve the desync issue by giving the option to report lost entries, or by throttling / blocking varnishd
- merge logging into varnishd
- drop session logging, copy session attributes into request scope
- potential to optimize logging away (kills varnishlog -d; could keep internal buffers longer instead)
- varnishlogd multi-query, multi-threaded (read shm and fan out to pipes/udses)
- structured shared memory
- use utility threads inside varnishd to write logs to the filesystem, which is then read by the clients
- compiled queries
- pluggable, custom log implementations in varnishd; renovate the interface first (phk)

side topics
- custom tags for vmods
- log all counter diffs

- Rational R1000 machine at datamuseum.dk
- martin talks about VS efforts in this direction
- VS owners have welcomed the idea

- slide set from law firm LUCENTUM
- disputed: transfer copyright to the foundation? maybe not even relevant?
- lots of committees
- trademark: VS AB owns it, licenses it to the Verein
- cost: VS willing to take on running costs
- should the entity spread money for development ("centralized VML")?
- nils presents ideas for bylaws
- most important questions:
  - do the bylaws get the Verein involved in money etc.?
  - "grassroots" or "corporate" style?

arguments:
- ToS / QoS by content
- adaptive congestion control by content
- kernel TLS for Mellanox hw offload
use cases
- dynamic loading of certificates based on SNI from VCL
- not just by name, want to prefer a specific cert
- session key sharing
- vcl_${proto} {}
- client certificates for backend connections

- should use something along the lines of CF keyless
- no private keys in the varnishd worker process
- the cert server should be pluggable, should have some kind of CLI
- could significantly complicate H3
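The `vcl_${proto} {}` idea above might look roughly like this sketch; the subroutine name `vcl_tls` and the `tls.*` variables are invented for illustration and do not exist in VCL today:

```vcl
# Hypothetical: a per-protocol subroutine run during the TLS handshake.
sub vcl_tls {
    # Prefer a specific certificate rather than matching by SNI name
    # only; tls.sni and tls.cert are assumed names for this sketch.
    if (tls.sni == "example.com") {
        set tls.cert = "example-ecdsa";
    }
}
```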

- martin: has done a PoC for plugging "keyless" in as an OpenSSL encryption engine
- seems hard to make it SSL-library-implementation independent; try not to make it hard to switch implementations
- VS: linking against stock OpenSSL

- stack transports?
- do the work:
  - UPLEX has a sponsor
  - VS might upstream as open source
  - limit the scope to OpenSSL >= 1.1?
  - strawman the vcertd (dridi?)

varnishtest
- dridi has done 2 rounds of polishing of https in varnishtest
- dridi plans a 3rd round before porting to trunk
- move to a submodule SOONER (revive slink's PR?)
- when varnish is branched off, the vtest code can be bundled (no submodule)
- needs convenient patch back-ports

refs
- Lars is still working on the quant project
- should look at msquic
- quictls: the OpenSSL fork for QUIC
- haproxy has its own QUIC implementation
- the argument against OpenSSL is that H3/QUIC moves fast
- probably needs a new survey of available implementations (asad)

- phk talks about changes which are going to happen
- martin brings up loading extensions in mgmt; seems not to be a killer argument
- built-in stevedores stay in varnishd
- built-in stevedores will register before extensions, so default=malloc|umem will remain

martin's batch insertion
- HSH_Insert(), EXP_Insert(), BAN_Insert() batched functions
- martin+slink

- the current implementation is just defensive
- parameter with an absolute limit on switches; the default should remain just 1 level
- structured label names? xyz/abc -> xyz can only return (vcl(xyz/abc))
- is the prefix argument absolute or relative?
- write the docs, implement
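Label switching as discussed above can be sketched with today's `return (vcl(...))` syntax; the label name `tenant1` is a placeholder, and the one-level limit is what the proposed parameter's default would enforce:

```vcl
vcl 4.1;

backend default { .host = "127.0.0.1"; }

sub vcl_recv {
    # Hand the request over to the VCL loaded under a label
    # (created with the CLI: vcl.label tenant1 <vcl-name>).
    # With the proposed limit parameter at its default of 1,
    # the labeled VCL could not switch again.
    if (req.http.host ~ "tenant1") {
        return (vcl(tenant1));
    }
}
```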

- discussion about varnish-modules' var vs. objvar vs. native variables in VCL

objvar today:

    sub vcl_init {
        new myvar = taskvar.string();
    }
    myvar.set("foo");
    set resp.http.foo = myvar.get();

objvar with designated setter/getter methods:

    myvar = "foo";
    set resp.http.foo = myvar;

native scoped variable suggestion: {vcl,client,backend}.* symbols

    set client.var.foo = <string expr>;
    set backend.var.foo = <string expr>;
    set vcl.var.foo = <string expr>;

native scoped variable alternate suggestion: {be{re{q,sp}}}.var.*
with req=>bereq and beresp=>obj=>resp copies

another scoped variable suggestion:

    [global] {[req|bereq|resp|beresp]} [type] var;

beresp variables would be persisted with the object.
-> write a concrete proposal for user docs (guillaume)
xkey:
- the vmod as-is only works with non-persistent storage
- should look at the http-wg suggestion
- important feature

suggestions for inclusion into varnish-cache:
- accept: filter accept-like headers
- bodyaccess: client request body access
- header: modify and change complex HTTP headers
  - VS to upstream headerplus?
- str: string operations
  - should turn into type methods; need arguments for type methods
  - task for dridi's apprentice
- tcp: TCP connection tweaking
- vsthrottle: request and bandwidth throttling
  - should be redone with modules and better methods

side discussion: issues with vmod cookie
- removes duplicate cookies
- need an iterator over multiple cookies -> simon please open cookie
- guillaume: still need VCL even if overriding cc_command for vmod import
- we should put vmods into the .so
  - complicated because of random vmod names
Homework assignment for Friday, read: https://httpwg.org/http-extensions/
- asad recommends the wg meeting videos: https://www.youtube.com/user/ietf/videos
- phk wants to remove the option to reset the synth body
- vsb is a contiguous region; good case for discontinuous object creation
- dridi is of the opinion that VCL resetting beresp.body should pay the cost of resetting a vfp
https://docs.varnish-software.com/varnish-cache-plus/features/backend-ssl/

dridi's syntax suggestion:

    backend proxy {
        .type = connect;        # registered by vext proxy
        .host = "...";
    }
    backend b{1..3} {
        .type = http1;          # built-in, default value
        .tls = true;
        .client_cert = "/bla/c{1..3}";
        .via = proxy;
        .host = "...";
        .authority = "...";
    }
    backend default {
        .type = round_robin;    # registered by vext directors
        .via = [b1, b2, b3];
    }
shall implement:
- xtaskvar.foreach_header(HEADER, SUB)
- xtaskvar.foreach_cookie(HEADER, SUB)
- HEADER.count
- std.sanitize_headers()
  - collect all collectable headers
- C code: fail for more duplicate headers
  - date?

Legacy headers which are SF-compatible could become binary:
- https://mnot.github.io/I-D/draft-nottingham-binary-structured-headers.html
- https://www.rfc-editor.org/internet-drafts/draft-ietf-httpbis-retrofit-00.html
-> how could VCL look if those became a reality?
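A usage sketch for the `foreach_header` idea above; only the signature `foreach_header(HEADER, SUB)` comes from the notes, while the callback mechanism and how the per-instance value is exposed are invented here:

```vcl
sub log_one_cookie {
    # Hypothetical: called once per instance of the header,
    # with the current value exposed somehow (not specified yet).
    std.log("cookie instance seen");
}

sub vcl_recv {
    # Iterate over all duplicate Cookie headers instead of
    # seeing only a single collapsed value.
    xtaskvar.foreach_header(req.http.Cookie, log_one_cookie);
}
```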
Implement a CLI-like interface to vkeyservd with async/parallel syntax; check how this could look for varnishd.
- only specific commands would become async (vcl.load and vcl.inline mostly)
- bans could serve as a template (vcl.load -a name ... -> 205 async started / vcl.list shows "pending"/"failed")
- buffer output in a vsb, with a CLI command to query it (vcl.show -o vcl_name, -o for compiler output)
- martin suggests a "job interface": jobs.list / jobs.wait name
#16 optional vmods / conditional code?

    import cookie [from PATH] [or skip];

    with (import cookieplus) {
        # cookieplus.do_stuff()
    } else with (import cookie) {
        # cookie.do_stuff()
    } else {
        # regex time
    }
The end!

dridi:
- HTTPS

phk:
- synth bodies and filters

martin:
- Varnish Verein
- varnishlog scalability

simon:
- TLS
- problems with multiple headers of the same name (jwt, cookie validation in varnish; just mention it)

Guillaume:
- mainline varnish modules
- compiled VCL sideloading

slink:
- 1 level of vcl.use
- TLS in varnish: killer arguments
  - ToS / QoS by content
  - adaptive congestion control by content
  - kernel TLS for Mellanox hw offload
  - H3
- project governance / Varnish Forening
- finally nail via backends (+1 by Varnish Software / Pål)
- plans for H3
  - extensions, pluggable stevedores and protocols
- if anyone is interested, could present the slash stevedores:
  - based on a home-grown buddy allocator, with features we need for v-c:
    - fixed size
    - allocations of page & extent
    - cram factor: how/when to return smaller-than-requested
    - waiting allocations with priorities
    - optionally choose the "nearest" page based on a double value
    - batched alloc/free for efficiency
  - buddy: in memory
  - all of the above as a storage engine, with:
    - "nearest by expiry"
    - no more lru_nuke_limit
    - plus a configurable reserve to immediately serve requests when LRU kicks in
  - all of the above as a storage engine with fellow: 2-tier RAM + disk
    - persistent
    - always consistent on disk with a log
    - async I/O
    - checksummed
    - RAM LRU per segment (= supports objects larger than RAM)
    - based on the home-grown buddy allocator, with features we need for v-c