[plugins] Speedup journal collection
Instead of generating all the logs and tailing the last 100M,
we now take the first 100M of 'journalctl --reverse' output and
reverse it again using tac_logs().

On journalctl timeout we now get the most recent logs,
whereas previously we were getting an arbitrary slice of older logs.

During collection, logs are now buffered on disk, so we use up to
2x sizelimit of disk space. Previously the buffering was in RAM
(also 2x sizelimit).

On my test server, the logs plugin runtime drops from 34s to 9.5s.

Signed-off-by: Etienne Champetier <[email protected]>
champtar committed Jan 25, 2025
1 parent d50ca8c commit 4b2054f
Showing 1 changed file with 7 additions and 0 deletions.
7 changes: 7 additions & 0 deletions sos/report/plugins/__init__.py
@@ -3087,8 +3087,15 @@ def add_journal(self, units=None, boot=None, since=None, until=None,
         if output:
             journal_cmd += output_opt % output
 
+        fname = journal_cmd
+        tac = False
+        if log_size > 0:
+            journal_cmd = f"{journal_cmd} --reverse"
+            tac = True
+
         self._log_debug(f"collecting journal: {journal_cmd}")
         self._add_cmd_output(cmd=journal_cmd, timeout=timeout,
+                             tac=tac, to_file=True, suggest_filename=fname,
                              sizelimit=log_size, pred=pred, tags=tags,
                              priority=priority)
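The reverse-then-tac idea behind this change can be sketched in standalone Python. This is an illustrative sketch, not the actual sos code: `tac_lines` and `collect_recent_journal` are hypothetical names standing in for sos's tac_logs() and command-collection machinery, and the size limit is an assumed parameter.

```python
import subprocess

def tac_lines(data: bytes) -> bytes:
    # Reverse line order, like tac(1); stands in for sos's tac_logs().
    return b"".join(reversed(data.splitlines(keepends=True)))

def collect_recent_journal(size_limit: int) -> bytes:
    # --reverse emits newest entries first, so the first size_limit bytes
    # are the most recent logs even if we stop reading early; we then
    # flip the line order back so the output reads oldest-first.
    proc = subprocess.Popen(["journalctl", "--reverse"],
                            stdout=subprocess.PIPE)
    data = proc.stdout.read(size_limit)
    proc.kill()  # no need to generate the rest of the journal
    proc.wait()
    return tac_lines(data)
```

Because reading stops once the limit is reached, journald never has to format the bulk of the journal, which is where the runtime saving comes from.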
