Add support for dynamic value resolver and the ability for values to resolve to an Any value #47
Conversation
Signed-off-by: Hiram Chirino <[email protected]>
Signed-off-by: Hiram Chirino <[email protected]>
…ype. Signed-off-by: Hiram Chirino <[email protected]>
Alright, sorry for the delay. I spent a few hours looking at this this morning. There are some concerns I have with the implementation, for example:
From a philosophical perspective, I also have concerns that these changes are a departure from some of the primary goals of CEL. This change would effectively open the door to a class of features that allow users to hook into the CEL interpreter and change how execution works. For example, using this PR, a user could use a dynamic resolver to make an HTTP request in the middle of CEL traversing a variable reference. This would not fulfill the promise of linear time evaluation.
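To make the concern concrete, here is a minimal sketch of the kind of hook being discussed. The `Value` enum and the `resolve` signature below are stand-ins for illustration only, not the PR's actual API:

```rust
use std::sync::Arc;

// Hypothetical stand-in for cel_interpreter's Value type, used only to
// illustrate the hook shape discussed in this PR.
#[derive(Debug, Clone, PartialEq)]
enum Value {
    Null,
    String(String),
}

// A dynamic resolver is a callback the interpreter invokes whenever it
// hits an identifier with no bound value.
fn resolve(ident: Arc<String>) -> Result<Value, String> {
    // Nothing constrains what happens here: an implementation could
    // block on an HTTP request or a database lookup, so evaluation time
    // is no longer bounded by the size of the expression.
    Ok(Value::String(format!("resolved:{ident}")))
}

fn main() {
    let v = resolve(Arc::new("user_name".to_string())).unwrap();
    assert_eq!(v, Value::String("resolved:user_name".into()));
}
```

Because the interpreter calls the hook mid-traversal, the linear-time property holds only if every resolver the user installs is itself constant-time.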
For these reasons I am not excited about merging a PR like this. I'll leave the PR open for a few more days if you'd like to provide any follow-up opinions/insights/feedback.
sorry for not getting back to you sooner, was on vacation.
good idea.
I was not initially sure what the function signature should look like; a Handler seemed like the more flexible option. But yeah, we can likely simplify.
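For comparison, here are the two candidate shapes being weighed. Both are hypothetical sketches; the names and signatures are illustrative, not the PR's actual types:

```rust
// Option A: a trait-based "Handler" -- flexible, but heavier to
// implement and to thread through the Context.
trait DynamicResolver {
    fn resolve(&self, ident: &str) -> Option<String>;
}

struct NullResolver;
impl DynamicResolver for NullResolver {
    fn resolve(&self, _ident: &str) -> Option<String> {
        None // resolves nothing; every identifier stays unbound
    }
}

// Option B: a plain boxed closure -- simpler to register, and the
// direction the "we can likely simplify" comment points toward.
type ResolverFn = Box<dyn Fn(&str) -> Option<String>>;

fn main() {
    let handler = NullResolver;
    assert_eq!(handler.resolve("x"), None);

    let f: ResolverFn = Box::new(|ident| Some(format!("value-of-{ident}")));
    assert_eq!(f("x").as_deref(), Some("value-of-x"));
}
```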
This PR was just shared to convey the general gist of the feature being requested; it is not really optimized for performance. Maybe with the changes mentioned above perf will improve a bit.
The above is true for any user-provided function. We should just document those caveats so that users are not surprised when they don't get linear execution time when they use non-linear functions or dynamic resolvers.
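The caveat above can be sketched directly: any user-defined function already lets an expression take non-linear time, dynamic resolver or not. In the example below, `slow` is a made-up function name (not part of cel-interpreter); a CEL expression like `slow(1000)` would hide quadratic work behind a single call:

```rust
// Hypothetical UDF a user might register with the interpreter.
// Its cost is O(n^2), invisible to the expression's static size.
fn slow(n: u64) -> u64 {
    let mut acc = 0u64;
    for i in 0..n {
        for j in 0..n {
            acc = acc.wrapping_add(i * j); // n * n iterations
        }
    }
    acc
}

fn main() {
    // sum over i,j in 0..3 of i*j = (0+1+2) * (0+1+2) = 9
    assert_eq!(slow(3), 9);
}
```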
BTW, where is that benchmark located? I can try to see if it can be optimized a bit.
No problem! Hope it was relaxing.
This is also the case in all CEL implementations, so what the language designers likely meant was that the language itself is linear time, mutation free, and not Turing-complete. The language designers made this claim likely with full understanding that you can't stop someone from throwing an infinite loop in a UDF. The concern I have here is that this is a change to how the language itself works, and that a simple
Frankly, I'm not excited about departing from the spec on this. It feels like a departure from some of the primary goals of CEL: "This limitation is a feature of the language design, which allows the implementation to evaluate orders of magnitude faster than equivalently sandboxed JavaScript." As a side note, the PR introduces an
I'll be happy to look through the PR if you decide to tweak it. I'm just trying to be fully transparent about where I personally stand on this issue today from a philosophical perspective.
I had saved a patch for the benchmark, but there were other changes I had to make to get the benchmark working properly, and apparently I don't have a patch for those changes anymore.
Subject: [PATCH] Dynamic resolver benchmark
---
Index: interpreter/benches/runtime.rs
===================================================================
diff --git a/interpreter/benches/runtime.rs b/interpreter/benches/runtime.rs
--- a/interpreter/benches/runtime.rs
+++ b/interpreter/benches/runtime.rs
@@ -1,7 +1,9 @@
use cel_interpreter::context::Context;
-use cel_interpreter::Program;
+use cel_interpreter::{ExecutionError, Program, Value};
use criterion::{black_box, criterion_group, criterion_main, Criterion};
use std::collections::HashMap;
+use std::io::BufWriter;
+use std::sync::Arc;
pub fn criterion_benchmark(c: &mut Criterion) {
let expressions = vec![
@@ -66,5 +68,29 @@
group.finish();
}
-criterion_group!(benches, criterion_benchmark, map_macro_benchmark);
+pub fn variable_resolution_benchmark(c: &mut Criterion) {
+ let mut group = c.benchmark_group("variable resolution");
+ let sizes = vec![1, 10, 100];
+
+ for size in sizes {
+ let mut expr = String::new();
+ for i in 0..size {
+ expr.push_str(&format!("var_{i}", i = i));
+ if i < size - 1 {
+ expr.push_str("||");
+ }
+ }
+
+ let program = Program::compile(&expr).unwrap();
+ let mut ctx = Context::default();
+        let resolver = move |_ident: Arc<String>| -> Result<Value, ExecutionError> { Ok(Value::Null) };
+ ctx.set_dynamic_resolver(Some(resolver));
+ group.bench_function(format!("variable_resolution_{}", size).as_str(), |b| {
+ b.iter(|| program.execute(&ctx).unwrap())
+ });
+ }
+ group.finish();
+}
+
+criterion_group!(benches, variable_resolution_benchmark);
criterion_main!(benches);
Thanks, I added a version of that bench to the PR in #58; it performs better now. That might be due to the simplification of the dynamic resolver function, or maybe because it constructs the context inside the iteration loop.