I've set up an HTTP Server source that I want to push JSON log events to. They're sent in batches, one event per line, with an LF newline terminator, e.g.:

```
{"id":"00227a8ec90a4dc0820100000000009d","remoteIP":"113.212.87.42"}
{"id":"00227a8ec90a4dc0820100000000009e","remoteIP":"113.212.87.42"}
{"id":"015332cbaf5545d8917a000000000001","remoteIP":"201.123.133.189"}
```

I'm pushing them to the HTTP endpoint using cURL.
My source is configured as:

```yaml
sources:
  http_server_json_flattened:
    type: http_server
    address: 0.0.0.0:8002
    headers:
      - x-source
    compression: auto
    decoding:
      codec: json
    framing:
      method: newline_delimited
```

With the above, I get an error, which I'm interpreting as Vector trying to parse the entire input as a single JSON document and failing because the payload as a whole (being one object per line) isn't well formed. This seems to fly in the face of my understanding of the config options, where `framing` tells it how to split/find each log event and `decoding` is the format of each log event.

Second attempt: I switched the decoding to `bytes` and tried to use VRL to parse it.

```yaml
sources:
  http_server_json_flattened:
    type: http_server
    address: 0.0.0.0:8002
    headers:
      - x-source
    compression: auto
    decoding:
      codec: bytes
    framing:
      method: newline_delimited
transforms:
  parse_logs:
    type: "remap"
    inputs:
      - http_server_json_flattened
    source: |
      ., err = parse_json(.message)
      if err != null {
        log("Unable to parse message as JSON.", level: "error", rate_limit_secs: 0)
        abort
      }
```

Which results in the same parse failure. Looking at the output of the HTTP server, I can see that the whole batch arrives as a single event. Coincidentally, the AWS S3 source, when set up the same way, has no problem reading AWS WAFv2 logs, which are stored on S3 in the exact same format. At this point I'm at a bit of a loss.
Hi @NeilJed! I think the issue is that you need to use `--data-binary` with `curl`. Otherwise it removes the newlines from the input and just sends:
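Presumably, for the sample batch above, that means the three events run together as one line:

```
{"id":"00227a8ec90a4dc0820100000000009d","remoteIP":"113.212.87.42"}{"id":"00227a8ec90a4dc0820100000000009e","remoteIP":"113.212.87.42"}{"id":"015332cbaf5545d8917a000000000001","remoteIP":"201.123.133.189"}
```

which the `newline_delimited` framing then treats as a single, invalid JSON document. A corrected invocation, again with the hypothetical `events.log`:

```sh
# --data-binary sends the file verbatim, preserving the LF terminators
curl -v \
  -H "x-source: test" \
  --data-binary @events.log \
  http://localhost:8002/
```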