Elasticsearch is rejecting logs with a 400 error mapper_parsing_exception but fluentd is not showing the details of why #1016
Comments
Unfortunately it looks like we are seeing this on our Linux fluentd pods as well:
Sometimes we are shown the reason for the 400, but sometimes not; it's inconsistent.
Not sure whether this should matter, but using the Elasticsearch API we found out the reason for these errors. We are getting the following error from ES that is causing the 400: "Limit of total fields [1000] has been exceeded while adding new fields [1]". Here is the full error from ES:
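If the field limit itself turns out to be the root cause, it can be raised per index. A sketch (the index name is a placeholder; the ES default for this setting is 1000, which matches the error above):

```
PUT /my-index/_settings
{
  "index.mapping.total_fields.limit": 2000
}
```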
Maybe the issue here is that ES returns two reason fields: one is fairly generic and one has more specific information. If there is a caused_by field, then perhaps its additional type and reason should be included as well?
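To illustrate the point about the two reason fields, here is a minimal Ruby sketch that extracts the more specific nested reason from a bulk-response item. The item below is hypothetical but follows the shape of the ES bulk API; the caused_by reason is the one quoted above, and the generic "failed to parse" reason is illustrative:

```ruby
require "json"

# Hypothetical bulk-response item of the kind Elasticsearch returns for this
# rejection; field names follow the ES bulk API response format.
item = JSON.parse(<<~JSON)
  {
    "index": {
      "status": 400,
      "error": {
        "type": "mapper_parsing_exception",
        "reason": "failed to parse",
        "caused_by": {
          "type": "illegal_argument_exception",
          "reason": "Limit of total fields [1000] has been exceeded while adding new fields [1]"
        }
      }
    }
  }
JSON

error = item.dig("index", "error")
# Prefer the nested caused_by reason when present; fall back to the generic one.
reason = error.dig("caused_by", "reason") || error["reason"]
puts "#{error['type']}: #{reason}"
```

Logging `caused_by.reason` when it exists would have surfaced the field-limit message directly.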
@kenhys Could you take a look at this?
Problem
Elasticsearch is rejecting logs with a 400 error and we are unable to get fluentd to show us why. We have set the following:

```
@log_level trace
log_es_400_reason true
```
with those set we are still not seeing the reason for the 400. Here is a sample rejection:
It would make it easier to troubleshoot if we could see why we are getting this 400 mapping error.
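For context, a minimal match section showing where these options live in our fluentd config (the tag pattern, host, and port are placeholders):

```
<match kubernetes.**>
  @type elasticsearch
  @log_level trace
  log_es_400_reason true
  host elasticsearch.example.com
  port 9200
</match>
```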
When we enable this on our Linux-based fluentd pods, we get log results like the following, which include the error from ES:
Steps to replicate
Run a pod, generate logs, and watch for 400 errors.
...
Using Fluentd and ES plugin versions
This is on Kubernetes, on Windows nodes, running the image fluent/fluentd:v1.15-windows-ltsc2019-1.

```
$ fluentd --version
fluentd 1.15.2
```

gem 'fluent-plugin-elasticsearch' version '5.2.4'
Elasticsearch 7.15.0, although we are seeing the same thing in dev, which runs Elasticsearch 8.3.3.