Prototype(symbolization): Add symbolization in Pyroscope read path #3799
Conversation
would be nice to have some benchmarks of symbolizing different amounts of locations and different file sizes; I think it can help us pick the right place and architecture for this
Force-pushed from efdde88 to 6b009d3 (compare)
Good work, Marc! I'm excited to see some experimental results 🚀
I think we can implement a slightly more optimized version for production use:
```mermaid
sequenceDiagram
    autonumber
    participant QF as Query Frontend
    participant M as Metastore
    participant QB as Query Backend
    participant SYM as Symbolizer
    QF ->>+M: Query Metadata
    Note left of M: Build identifiers are returned<br>along with the metadata records
    M ->>-QF: Metadata
    par
        QF ->>+SYM: Request for symbolication
        Note left of SYM: Prepare symbols for<br>the objects requested
    and
        QF ->>+QB: Data retrieval and aggregation
        Note left of QB: The main data path<br>Might be serverless
    end
    QB ->>-QF: Data in pprof format
    Note over QF: Because of the truncation,<br>only a limited set of locations<br>make it here (16K by default)
    QF --)SYM: Location addresses
    SYM ->>-QF: Symbols
    QF ->>QF: Flame graph rendering
```
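The fan-out in the diagram (symbol preparation and data retrieval running concurrently, joined before rendering) can be sketched in Go. This is an illustrative sketch only: `fetchSymbols` and `queryBackend` are hypothetical stubs standing in for the symbolizer and query-backend RPCs, not APIs from this PR.

```go
package main

import (
	"fmt"
	"sync"
)

// Hypothetical stub for the symbolizer RPC: prepare symbols for the listed objects.
func fetchSymbols(buildIDs []string) (map[string]string, error) {
	out := make(map[string]string, len(buildIDs))
	for _, id := range buildIDs {
		out[id] = "symbols-for-" + id
	}
	return out, nil
}

// Hypothetical stub for the query backend: the main data path.
func queryBackend(query string) (string, error) {
	return "pprof-data", nil
}

// queryWithSymbols runs symbol preparation and data retrieval concurrently,
// then joins both results before the flame graph is rendered.
func queryWithSymbols(buildIDs []string, query string) (string, map[string]string, error) {
	var (
		wg              sync.WaitGroup
		symbols         map[string]string
		data            string
		symErr, dataErr error
	)
	wg.Add(2)
	go func() { // symbolizer prepares symbols for the requested objects
		defer wg.Done()
		symbols, symErr = fetchSymbols(buildIDs)
	}()
	go func() { // main data path: retrieval and aggregation
		defer wg.Done()
		data, dataErr = queryBackend(query)
	}()
	wg.Wait()
	if symErr != nil {
		return "", nil, symErr
	}
	if dataErr != nil {
		return "", nil, dataErr
	}
	return data, symbols, nil
}

func main() {
	data, symbols, err := queryWithSymbols([]string{"abc"}, `{service="api"}`)
	fmt.Println(data, symbols["abc"], err)
}
```

Because the symbolizer only needs build IDs from the metadata step, it can start well before the pprof data arrives, which is what makes the parallel variant cheaper in wall-clock latency.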
Even without a parallel pipeline and dedicated symbolication service, we could implement something like this:
```mermaid
sequenceDiagram
    autonumber
    participant QF as Query Frontend
    participant M as Metastore
    participant QB as Query Backend
    participant SYM as Symbols
    QF ->>+M: Query Metadata
    Note left of M: No build identifiers are returned
    M ->>-QF: Metadata
    QF ->>+QB: Data retrieval and aggregation
    Note left of QB: The main data path<br>Might be serverless
    QB ->>-QF: Data in pprof format
    Note over QF: Because of the truncation,<br>only a limited set of locations<br>make it here (16K by default)
    QF ->>+SYM: Fetch symbols
    SYM ->>-QF: Symbols
    Note over QF: In terms of the added latency,<br>this approach is not worse than<br>block level symbolication
    QF ->>QF: Flame graph rendering
```
I think we should avoid symbolization at the block level if the symbols are not already present in the block itself. Otherwise, this approach leads to excessive processing, increased latency, and higher resource usage. Please keep in mind that a query may span many thousands of blocks.
I won't delve too deeply into how we fetch and process ELF/DWARF files, but I strongly doubt we can bypass the need for an intermediate representation optimized for our access patterns. Additionally, we need a solution to prevent concurrent access to the debuginfod service.
I have not looked into the code yet, but I've tried to run it locally and it looks like it's trying to load a lot of unnecessary debug files. I ran the ebpf profiler with no on-target symbolization, also ran a simple I then query only one executable. I see 268 GET requests, with 13 requests to Other than that it works \M/ Can't wait to run it in dev.
Hi @marcsanmi, when can this PR be merged?
Force-pushed from 2937519 to 0b8a289 (compare)
Hi @liaol,
Force-pushed from 7c2ab09 to 87a481c (compare)
I've created this diagram to outline the current symbolization architecture:

```mermaid
flowchart TD
    A[SymbolizePprof] --> B{Group by Mapping}
    B --> C[Symbolize Request]
    C --> D{Check Symbol Cache}
    subgraph "Symbol Cache Layer (LRU, in-memory)"
        D -->|Cache Hit| E[Return Cached Symbols]
        D -->|Cache Miss| F
    end
    F{Check Debug Info Cache}
    subgraph "Debug Info Cache Layer (Ristretto, in-memory)"
        F -->|Cache Hit| G[Read from Debug Info Cache]
        F -->|Cache Miss| H
    end
    subgraph "Persistent Storage Layer"
        H{Check Object Store}
        H -->|Cache Hit| I[Read from Object Store]
        H -->|Cache Miss| J[Fetch from Debuginfod]
    end
    I --> K[Store in Debug Info Cache]
    J --> L[Store in Debug Info Cache]
    J --> M[Store in Object Store]
    G --> N[Parse ELF/DWARF]
    K --> N
    L --> N
    subgraph "DWARF Resolution Layer"
        N --> O[Resolve Addresses]
        O --> P{Check Address Map}
        P -->|Map Hit| Q[Return from Map]
        P -->|Map Miss| R[Parse DWARF Data]
        R --> S[Build Lookup Tables]
        S --> T[Store in Address Map]
        T --> U[Return Symbols]
        Q --> U
    end
    U --> V[Update Symbol Cache]
    V --> W[Return to Caller]
    E --> W
```
pkg/test/integration/testdata/otel-ebpf-profiler-offcpu-cpu.json
I might be missing some details, but I have doubts about the cache hierarchy. Now it looks like we have: symbols_cache -> object_store -> in_memory_object_store (ristretto) -> debuginfod. As far as I understand, we're going to read from object_store even if there's just a single unresolved address. I expected to see: symbols_cache -> in_memory_object_store (ristretto) -> object_store -> debuginfod. Could you please elaborate on the decision?
You're right @kolesnikovae. I've just realized the problem is that the ristretto cache is coupled inside the debuginfod client. I'll decouple it and place it at the symbolizer level. Thus, we'll be able to have the following path: symbols_cache -> in_memory_object_store (ristretto) -> object_store -> debuginfod
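The corrected tiered lookup can be sketched as a chain that walks the levels cheapest-first and falls through on "not found". This is an illustrative sketch, not code from the PR; the `tier` type, `lookup` helper, and `demoTiers` fixture are hypothetical names.

```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("build ID not found")

// tier is one level of the lookup chain, ordered cheapest-first.
type tier struct {
	name  string
	fetch func(buildID string) ([]byte, error)
}

// lookup walks the tiers in order and returns the first hit; a tier that
// reports errNotFound falls through to the next, any other error aborts.
func lookup(buildID string, tiers []tier) ([]byte, string, error) {
	for _, t := range tiers {
		data, err := t.fetch(buildID)
		if err == nil {
			return data, t.name, nil
		}
		if !errors.Is(err, errNotFound) {
			return nil, t.name, err
		}
	}
	return nil, "", errNotFound
}

// demoTiers models the corrected hierarchy:
// symbols cache -> in-memory object cache (ristretto) -> object store -> debuginfod.
func demoTiers() []tier {
	miss := func(string) ([]byte, error) { return nil, errNotFound }
	return []tier{
		{"symbols_cache", miss},
		{"ristretto", miss},
		{"object_store", func(id string) ([]byte, error) {
			if id == "abc" {
				return []byte("lidia-data"), nil
			}
			return nil, errNotFound
		}},
		{"debuginfod", miss},
	}
}

func main() {
	data, from, err := lookup("abc", demoTiers())
	fmt.Println(string(data), from, err)
}
```

With this ordering, a single unresolved address first probes the two in-memory tiers and only then pays for an object store read or a debuginfod request.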
Force-pushed from fd39d9e to 65fb599 (compare)
Force-pushed from 65fb599 to 10f52ea (compare)
pkg/test/integration/testdata/otel-ebpf-profiler-offcpu.json.bin
…l review comments
* feat: add symbolizer per tenant overrides
Force-pushed from 829f81e to e66f976 (compare)
```go
start := time.Now()
status := statusSuccess
defer func() {
	s.metrics.profileSymbolization.WithLabelValues(status).Observe(time.Since(start).Seconds())
```
this metric is touched twice: once for the whole pprof and once for the per-mapping request. I suggest we remove the latter one or create a separate metric for it.
```go
statusCode, isHTTPError := isHTTPStatusError(err)

if errors.As(err, &bnfErr) || (isHTTPError && statusCode == http.StatusNotFound) {
	s.metrics.debuginfodRequestDuration.WithLabelValues(statusErrorNotFound).Observe(time.Since(debuginfodStart).Seconds())
```
please take another look at this metric. Something is off here: it looks like error statuses are accounted for multiple times, in the DebugInfodClient and in the symbolizer. Can we remove all the usages of debuginfod metrics from the symbolizer?
```go
debugReader, err := s.fetchFromDebuginfod(ctx, buildID)
if err != nil {
	var bnfErr buildIDNotFoundError
	if errors.As(err, &bnfErr) {
```
what is the purpose of this type assertion? Both branches are the same and return err. Is this needed? Can we just return err?
```go
lidiaBytes, err := s.getLidiaBytes(ctx, req.buildID)
if err != nil {
	var bnfErr buildIDNotFoundError
	if errors.As(err, &bnfErr) {
```
I suggest we do not fail the query if there are any errors during symbolization. Let's treat all the errors the same - createNotFoundSymbolz
```go
lidiaReader := NewReaderAtCloser(lidiaBytes)
table, err = lidia.OpenReader(lidiaReader, lidia.WithCRC())
if err != nil {
	s.metrics.debugSymbolResolution.WithLabelValues("lidia_error").Observe(0)
```
I suggest we do not fail the query if there are any errors during symbolization. Let's treat all the errors the same - createNotFoundSymbolz
```go
}

for mappingID, locations := range locationsByMapping {
	if err := s.symbolizeLocationsForMapping(ctx, profile, mappingID, locations); err != nil {
```
I suggest we do not fail the query if there are any errors during symbolization. Let's treat all the errors the same - createNotFoundSymbolz
```go
type location struct {
	address uint64
	lines   []lidia.SourceInfoFrame
	mapping *pprof.Mapping
```
this field is unused. Let's remove it
```go
	pprof "github.com/google/pprof/profile"
)

type locToSymbolize struct {
```
I think we can replace locToSymbolize with just *googlev1.Location, because loc.Id == idx + 1
```go
var decompressed bytes.Buffer
if _, err := decompressed.ReadFrom(gr); err != nil {
	gr.Close()
```
can we use defer gr.Close() ?
```go
var decompressed bytes.Buffer
if _, err := decompressed.ReadFrom(zr); err != nil {
	zr.Close()
```
can we use defer zr.Close() ?
```go
// detectCompression reads the beginning of the input to determine if it's compressed,
// and if so, returns a ReaderAt that decompresses the data.
func detectCompression(r io.Reader) (io.ReaderAt, error) {
	br := bufio.NewReader(r)
```
let's accept []byte as an argument to this function and not use bufio; we already have everything in memory, no need to buffer. Let's also return []byte here instead of ReaderAt: it will make it a bit more visible that we are reading and decompressing everything fully in memory
```go
	return lidiaBytes, nil
}

level.Error(s.logger).Log(
```
nit: this is not an error if it is "not found"
```go
}

// updateProfileWithSymbols updates the profile with symbolization results
func (s *Symbolizer) updateProfileWithSymbols(profile *googlev1.Profile, mapping *googlev1.Mapping, locs []locToSymbolize, symLocs []*location) {
```
Can we reorganize the code a bit to call this function once per pprof instead of once per mapping? Otherwise we create all the maps multiple times.
```go
Id:        funcID,
Name:      nameIdx,
Filename:  filenameIdx,
StartLine: int64(line.LineNumber),
```
let's remove this line. On the left we have the start of the function and on the right we have the middle of the function. Also, the line number is not included in the key, and we don't really have line numbers at the moment.
we also don't really have filenames yet. I suggest we remove the filename here as well.
```go
	nameIdx, filenameIdx int64
}
funcMap := make(map[funcKey]uint64)
maxFuncID := uint64(0)
```
can we do maxFuncId := len(profile.Function) + 1 here? I know I told you some time ago it's not a valid approach in general, but it is a valid approach for pprof files returned from our pprof queries, so it should be fine?
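The suggestion relies on pprof function IDs being dense (1..N), so the next free ID is simply len+1. A small illustrative sketch of deduplicating functions by key under that assumption; the `Function`, `funcKey`, and `internFunction` names are hypothetical, not the PR's types:

```go
package main

import "fmt"

// Simplified stand-in for googlev1.Function.
type Function struct {
	ID       uint64
	Name     string
	Filename string
}

type funcKey struct{ name, filename string }

// internFunction returns the ID of an existing function with the same key,
// or appends a new one with the next free ID. Valid only when existing IDs
// are exactly 1..len(funcs), as with pprof returned from our queries.
func internFunction(funcs *[]Function, seen map[funcKey]uint64, name, filename string) uint64 {
	k := funcKey{name, filename}
	if id, ok := seen[k]; ok {
		return id
	}
	id := uint64(len(*funcs)) + 1 // dense IDs: next free ID is len+1
	*funcs = append(*funcs, Function{ID: id, Name: name, Filename: filename})
	seen[k] = id
	return id
}

func main() {
	funcs := []Function{{ID: 1, Name: "main", Filename: "main.go"}}
	seen := map[funcKey]uint64{{"main", "main.go"}: 1}
	fmt.Println(internFunction(&funcs, seen, "foo", "foo.go"))  // new function
	fmt.Println(internFunction(&funcs, seen, "main", "main.go")) // existing one
}
```

This avoids scanning the whole function table to compute the maximum ID before appending new entries.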
```go
// fetchLidiaFromObjectStore retrieves Lidia data from the object store
func (s *Symbolizer) fetchLidiaFromObjectStore(ctx context.Context, buildID string) ([]byte, error) {
	objstoreReader, err := s.store.Get(ctx, buildID)
```
can we avoid store.Get invocations if we already know debuginfod returned NotFound? It would probably require pulling the cache from debuginfod into the symbolizer.
```go
	return false
}

return len(slices.Collect(metadata.FindDatasets(block, matcher))) > 0
```
@aleks-p Can you help me verify my assumptions here, please.
- Blocks contain multiple datasets, so a block may have both unsymbolized and symbolized datasets.
- I believe MetadataQueryService.QueryMetadata does not return all the datasets present in the block, but only the ones that matched the query. I've made this conclusion from reading this code: `if matches, ok = m.CollectMatches(matches, ds.Labels); ok {`

If my understanding is correct, then this hasUnsymbolizedProfiles check is fine. If the second assumption is not correct, then this check would degrade queries for symbolized datasets.
I think everything is good here, just want another 👀 to double-check.
Context
This PR introduces a comprehensive implementation for DWARF symbolization of unsymbolized profiles in the Pyroscope read path. It enables automatic symbolization of profiles for non-customer code (primarily open-source libraries and binaries) where symbol information isn't available at collection time.
Symbolization
Multi-level Caching
Integration Points
Configuration Example