%!TEX root = main.tex
%!TEX spellcheck = en_US
\begin{abstract}
Model checkers and consistency checkers detect critical errors in router
configurations, but these tools require significant manual effort to
develop and maintain.
LLM-based Q\&A models have emerged as a promising
alternative: users query partitions of a configuration through prompts
and receive answers based on patterns learned by transformer models
pre-trained on vast datasets, which supply generic context for
interpreting router configurations.
Yet, current partition-based prompting methods often fail to provide
enough network-specific context from the actual configurations to
enable accurate inference.
We introduce a Context-Aware Iterative Prompting (\sysname{}) framework that
automates network-specific context extraction and optimizes LLM prompts
for more precise router misconfiguration detection. \sysname{} addresses
three challenges: (1) efficiently mining relevant context from complex
configuration files, (2) accurately distinguishing between pre-defined and
user-defined parameter values to avoid introducing irrelevant context, and
(3) managing prompt context overload through iterative, guided interactions
with the model.
Our evaluations on synthetic and real-world configurations show that
\sysname{} improves misconfiguration detection accuracy by more than
30\% compared to partition-based LLM approaches, model checkers, and
consistency checkers, uncovering over 20 previously undetected
misconfigurations in real-world configurations.
\end{abstract}