Add a non-existence test using NSEC3 #63
Force-pushed from 42d55e2 to 2133e38
I didn't know that the binary search method was part of `&[T]`, I thought it was part of `Vec<T>` instead. The more you know 🌈
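For reference, `binary_search` is defined on the slice type `[T]`, so it's available on a `Vec<T>` too via deref coercion:

```rust
// binary_search lives on [T]; Vec<T> gets it through Deref<Target = [T]>.
let haystack: &[u32] = &[1, 3, 5, 7];
assert_eq!(haystack.binary_search(&5), Ok(2));
assert_eq!(haystack.binary_search(&4), Err(2)); // Err carries the insertion index
```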
.authority
.into_iter()
.filter_map(|record| {
    if let Record::NSEC3(r) = record {
you may want to add a `try_into_nsec3` method using rust-analyzer and use that here
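Something along these lines (a sketch; the stand-in types below replace the client library's actual `Record` / `NSEC3` definitions):

```rust
// Stand-in types so the sketch compiles on its own; in the real code these are
// the client library's Record enum and NSEC3 struct from the diff above.
struct NSEC3;
struct SOA;

enum Record {
    NSEC3(NSEC3),
    SOA(SOA),
}

impl Record {
    // roughly what rust-analyzer's "generate `try_into_` method" assist produces
    fn try_into_nsec3(self) -> Result<NSEC3, Self> {
        if let Record::NSEC3(r) = self {
            Ok(r)
        } else {
            Err(self)
        }
    }
}
```

The `filter_map` above would then shrink to `.filter_map(|record| record.try_into_nsec3().ok())`.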
// dns.dnssec.nsec3_hash(domain, salt="", iterations=1, algorithm="SHA1")
// ```
let bob_hash = "9AU9KOU2HVABPTPB7D3AQBH57QPLNDI6"; /* bob.nameservers.com. */
let wildcard_hash = "M417220KKVJDM7CHD6QVUV4TGHDU2N2K"; /* *.nameservers.com */
this will be included in the signed zone file despite the original / unsigned zone file not having a wildcard record?
no, it will not. it is added here because the server will compute that hash and generate an NSEC3 record that covers it.
// Include the hashes of the nameservers dynamically as they change between executions.
for ns in &nameservers {
    let hash = match ns.fqdn().as_str() {
        "primary0.nameservers.com." => "E05P5R80N590NS9PP24QOOFHRT605T8A",
if your leaf nameserver was not `nameservers.com.` but rather `somedomain.com.`, you would not need to deal with these `primaryNN.nameservers.com.` records

also, changing the naming scheme to something like `primary${thread_id}-${thread_local_count}` would make this list shorter, but you would need to compute the hashes in the test code instead of computing them offline / ahead of time

it's probably easier to read the contents of the signed zone file to get the hashes, along the lines of the sketch below
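a rough sketch of that approach, assuming the signed zone file is available as plain text (the function name and the line-by-line parsing are illustrative; a real zone file may need a proper parser):

```rust
// Collect the NSEC3 owner hashes from a signed zone file: the first label of an
// NSEC3 record's owner name is the base32hex-encoded hash.
fn nsec3_owner_hashes(signed_zone_file: &str) -> Vec<String> {
    signed_zone_file
        .lines()
        .filter(|line| line.contains(" NSEC3 "))
        .filter_map(|line| line.split_whitespace().next()) // owner name column
        .filter_map(|owner| owner.split('.').next())       // first label = hash
        .map(|hash| hash.to_uppercase())
        .collect()
}
```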
I think having to compute the hash in the tests kinda defeats the purpose, as eventually we'd have to integrate similar code into Hickory-DNS and it would feel like we're testing the hashing implementation against itself.

BUUUUT, if we were able to use `example.` we could just reuse the examples in the RFC's appendix.
// owner hash.
let expected = hashes[(index + 1) % hashes.len()];
let found = &wildcard_rr.next_hashed_owner_name;
assert_eq!(expected, found);
there are several assertions in this single unit test. I would suggest splitting it into several `#[test]`s, where each one maps to a sentence / paragraph in the RFC text, and using a fixture (i.e. a setup function, see `src/resolver/dnssec/fixtures.rs`) to set up the same name server graph in all the tests, roughly like the sketch below
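roughly this shape (all names are made up; the real fixture would live in `src/resolver/dnssec/fixtures.rs`):

```rust
// Illustrative only: `Fixture` and `nsec3_zone_fixture()` stand in for whatever
// the setup function builds and returns (resolver, name server graph, ...).
struct Fixture;

fn nsec3_zone_fixture() -> Fixture {
    // spin up the same name server graph used by every test in this module
    Fixture
}

#[test]
fn nsec3_covers_the_hash_of_the_queried_name() {
    let _network = nsec3_zone_fixture();
    // assertions for one sentence / paragraph of the RFC
}

#[test]
fn nsec3_covers_the_wildcard_hash() {
    let _network = nsec3_zone_fixture();
    // assertions for another sentence / paragraph of the RFC
}
```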
That sounds a bit expensive. Would it make sense to initialize this in a static then?
But then they wouldn't be dropped 🤔
> Would it make sense to initialize this in a static then?

if you put anything that internally contains a `Network` or a `Container` in a `static` variable then its destructor won't run even if the process shuts down cleanly. not running the destructors means leaving docker networks / containers behind, which is undesirable.

at the moment, I would not worry about more `#[test]` functions making things slower. more tests give you more granularity about what's failing: if you put many assertions in a single test and the first one fails, you won't know the outcome of the other assertions. tests can be filtered through the CLI, so devs fixing issues will only run the tests that are of interest to them, and the total number of tests won't impact their workflow.

that being said, there are ways to set up an object that is shared by `#[test]` functions and then properly torn down when all the `#[test]` functions end, but it's quite a bit of work to set up. it does involve a shared `static` variable and a `Mutex` (see the sketch below). the side effect of such a setup is that one test will impact the others because of the caching behavior of resolvers and nameservers, meaning that such resource-efficient test groups should only be used to test things where caching is unimportant (lest we come up with a way to clear all the caches in between test runs, which again adds more complexity)
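for completeness, a sketch of one way that shared setup could look; `SharedNetwork` is a placeholder for a type whose `Drop` impl removes the docker networks / containers:

```rust
use std::sync::{Arc, Mutex, OnceLock, Weak};

// Placeholder for the type that owns the containers / networks.
struct SharedNetwork {
    // containers, networks, ...
}

impl Drop for SharedNetwork {
    fn drop(&mut self) {
        // tear down docker networks / containers here
    }
}

static SHARED: OnceLock<Mutex<Weak<SharedNetwork>>> = OnceLock::new();

// Each #[test] calls this. The network is built once while at least one test
// holds an Arc to it, and is dropped when the last such test finishes.
fn shared_network() -> Arc<SharedNetwork> {
    let lock = SHARED.get_or_init(|| Mutex::new(Weak::new()));
    let mut weak = lock.lock().unwrap();
    if let Some(existing) = weak.upgrade() {
        return existing;
    }
    let fresh = Arc::new(SharedNetwork {});
    *weak = Arc::downgrade(&fresh);
    fresh
}
```

the caching caveat above still applies, and if the tests don't overlap in time the network gets torn down and rebuilt in between, so it's only a partial win.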
Based on top of #22