Not declaring logzioType field causes a potential bug #55

Open
Eli-Golin opened this issue Nov 13, 2018 · 1 comment
Eli-Golin commented Nov 13, 2018

To reproduce:
An example logback.xml:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Configuration file is scanned for changes every 60 seconds -->
<configuration scan="true" scanPeriod="60 seconds">
    <property name="LOGS_DIR_BASE" value="${LOGS_DIR_BASE:-log}"/>
    <property name = "KAFKA_LOG_LEVEL" value = "${KAFKA_LOG_LEVEL:-warn}"/>
    <property name = "ZOOKEEPER_LOG_LEVEL" value = "${ZOOKEEPER_LOG_LEVEL:-warn}"/>

    <define name="IP" class="com.clicktale.pipeline.webrecorder.logging.IpPropertyDefiner"/>
    <define name="SERVER_ID" class="com.clicktale.pipeline.webrecorder.logging.HostIdPropertyDefiner"/>
    <define name="MODULE_VERSION" class="com.clicktale.pipeline.webrecorder.logging.WebrecorderVeresionProvider"/>

    <shutdownHook class="ch.qos.logback.core.hook.DelayingShutdownHook"/>

    <appender name="logzio-es" class="ch.qos.logback.classic.AsyncAppender">
        <appender class="io.logz.logback.LogzioLogbackAppender">
            <token>${LOGZIO_ES_TOKEN}</token>
            <logzioUrl>${LOGZIO_URL}</logzioUrl>
            <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
                <level>${LOGZ_ES_LEVEL}</level>
            </filter>
            <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
                <providers>
                    <timestamp>
                        <timeZone>UTC</timeZone>
                    </timestamp>
                    <message/>
                    <loggerName>
                        <shortenedLoggerNameLength>36</shortenedLoggerNameLength>
                        <fieldName>logger</fieldName>
                    </loggerName>
                    <threadName>
                        <fieldName>thread</fieldName>
                    </threadName>
                    <logLevel>
                        <fieldName>level</fieldName>
                    </logLevel>
                    <stackTrace>
                        <throwableConverter class="net.logstash.logback.stacktrace.ShortenedThrowableConverter">
                            <maxDepthPerThrowable>30</maxDepthPerThrowable>
                            <maxLength>2048</maxLength>
                            <shortenedClassNameLength>20</shortenedClassNameLength>
                            <rootCauseFirst>true</rootCauseFirst>
                        </throwableConverter>
                    </stackTrace>
                    <logstashMarkers/>
                    <arguments/>
                    <pattern>
                        <pattern>
                            {
                            "dc": "${CT_REGION}",
                            "host": "${HOSTNAME}",
                            "module": "webrecorder",
                            "module_version": "${MODULE_VERSION}",
                            "env" : "${ENVIRONMENT}",
                            "ip" : "${IP}",
                            "server_id":"${SERVER_ID}"
                            }
                        </pattern>
                    </pattern>
                </providers>
            </encoder>
            <socketTimeout>10000</socketTimeout>
            <connectTimeout>10000</connectTimeout>
            <compressRequests>true</compressRequests>
            <drainTimeoutSec>5</drainTimeoutSec>
            <debug>true</debug>
            <inMemoryQueue>false</inMemoryQueue>
            <inMemoryQueueCapacityBytes>50000000</inMemoryQueueCapacityBytes>
        </appender>
    </appender>


    <appender name="logzio-s3" class="ch.qos.logback.classic.AsyncAppender">
        <appender class="io.logz.logback.LogzioLogbackAppender">
            <token>${LOGZIO_AUDIT_TOKEN}</token>
            <logzioUrl>${LOGZIO_URL}</logzioUrl>
            <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
                <providers>
                    <timestamp>
                        <timeZone>UTC</timeZone>
                    </timestamp>
                    <message/>
                    <loggerName>
                        <shortenedLoggerNameLength>36</shortenedLoggerNameLength>
                        <fieldName>logger</fieldName>
                    </loggerName>
                    <threadName>
                        <fieldName>thread</fieldName>
                    </threadName>
                    <logLevel>
                        <fieldName>level</fieldName>
                    </logLevel>
                    <stackTrace>
                        <throwableConverter class="net.logstash.logback.stacktrace.ShortenedThrowableConverter">
                            <maxDepthPerThrowable>30</maxDepthPerThrowable>
                            <maxLength>2048</maxLength>
                            <shortenedClassNameLength>20</shortenedClassNameLength>
                            <rootCauseFirst>true</rootCauseFirst>
                        </throwableConverter>
                    </stackTrace>
                    <logstashMarkers/>
                    <arguments/>
                    <pattern>
                        <pattern>
                            {
                            "dc": "${CT_REGION}",
                            "host": "${HOSTNAME}",
                            "module": "webrecorder",
                            "module_version": "${MODULE_VERSION}",
                            "env" : "${ENVIRONMENT}",
                            "ip" : "${IP}",
                            "server_id":"${SERVER_ID}"
                            }
                        </pattern>
                    </pattern>
                </providers>
            </encoder>
            <socketTimeout>10000</socketTimeout>
            <connectTimeout>10000</connectTimeout>
            <compressRequests>true</compressRequests>
            <drainTimeoutSec>5</drainTimeoutSec>
            <debug>true</debug>
            <inMemoryQueue>false</inMemoryQueue>
            <inMemoryQueueCapacityBytes>50000000</inMemoryQueueCapacityBytes>
        </appender>
    </appender>

    <appender name="FILE" class="ch.qos.logback.classic.AsyncAppender">
        <appender class="ch.qos.logback.core.rolling.RollingFileAppender">
            <file>${LOGS_DIR_BASE}/webrecorder.log</file>
            <encoder>
                <pattern>%d{ISO8601} [%thread] %-5level %logger{36} [%marker] - %msg%n</pattern>
            </encoder>
            <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
                <fileNamePattern>${LOGS_DIR_BASE}/webrecorder%i.log</fileNamePattern>
                <minIndex>1</minIndex>
                <maxIndex>20</maxIndex>
            </rollingPolicy>
            <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
                <!-- each file should be at most 50MB, and with rolling window of 20 we keep at most 1GB -->
                <maxFileSize>50MB</maxFileSize>
            </triggeringPolicy>
        </appender>
    </appender>


    <appender name="STATS" class="ch.qos.logback.classic.AsyncAppender">
        <appender class="ch.qos.logback.core.rolling.RollingFileAppender">
            <file>${LOGS_DIR_BASE}/statistics.log</file>
            <encoder>
                <pattern>%d{ISO8601} [%thread] %-5level %logger{36} [%marker] - %msg%n</pattern>
            </encoder>
            <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
                <fileNamePattern>${LOGS_DIR_BASE}/Statistics%i.log</fileNamePattern>
                <minIndex>1</minIndex>
                <maxIndex>10</maxIndex>
            </rollingPolicy>
            <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
                <!-- each file should be at most 20MB, and with a rolling window of 10 we keep at most 200MB -->
                <maxFileSize>20MB</maxFileSize>
            </triggeringPolicy>
        </appender>
    </appender>

    <appender name="AUDIT" class="ch.qos.logback.classic.AsyncAppender">
        <appender class="ch.qos.logback.core.rolling.RollingFileAppender">
            <file>${LOGS_DIR_BASE}/audit.log</file>
            <encoder>
                <pattern>%d{ISO8601} [%thread] %-5level %logger{36} [%marker] - %msg%n</pattern>
            </encoder>
            <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
                <fileNamePattern>${LOGS_DIR_BASE}/audit%i.log</fileNamePattern>
                <minIndex>1</minIndex>
                <maxIndex>10</maxIndex>
            </rollingPolicy>
            <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
                <!-- each file should be at most 20MB, and with rolling window of 10 we keep at most 200MB -->
                <maxFileSize>20MB</maxFileSize>
            </triggeringPolicy>
        </appender>
    </appender>


    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{ISO8601} [%level] [%logger{20}]  [%marker]  %msg%n</pattern>
        </encoder>
    </appender>

    <logger name="org.apache.zookeeper" level="${ZOOKEEPER_LOG_LEVEL}"/>
    <logger name="org.apache.kafka" level="${KAFKA_LOG_LEVEL}"/>
    <logger name="ch.qos.logback" level="WARN"/>


    <logger name="statistics" additivity="false">
        <appender-ref ref="STATS"/>
        <appender-ref ref="logzio-es"/>
    </logger>

    <logger name="auditlog" level="INFO" additivity="false">
        <appender-ref ref="logzio-s3"/>
    </logger>

    <root>
        <appender-ref ref="STDOUT"/>
        <appender-ref ref="FILE"/>
        <appender-ref ref="logzio-es"/>
    </root>
</configuration>

When the logzioType field is not declared and the disk caching option is enabled (inMemoryQueue = false) for the appenders specified in logback.xml, all logs go to the same subaccount, regardless of the concrete token defined for each appender.

I suspect this is related to the fact that logzioType has a default value (java in this case) and that this field is used in the structure of the local folder where the appender buffers its log data before sending.
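
If that is the case, the collision on disk would look roughly like the sketch below. This is purely illustrative: the directory names are assumptions, not the library's documented layout. The point is only that a type-keyed queue folder, combined with the shared default type, would let the two appenders pick up each other's buffered messages.

    <!-- Illustrative sketch only (paths are assumptions, not the actual layout):
         with no <logzioType> declared, both appenders fall back to the default type "java"

           <queueDir>/java/   <- disk queue used by "logzio-es" (LOGZIO_ES_TOKEN)
           <queueDir>/java/   <- disk queue used by "logzio-s3" (LOGZIO_AUDIT_TOKEN)

         Sharing one queue folder would explain why everything ends up in a single
         subaccount; distinct types would give each appender its own folder. -->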

Eli-Golin changed the title from "Not declaring logType field causes a potential bug" to "Not declaring logzioType field causes a potential bug" on Nov 15, 2018
idohalevi (Contributor) commented Nov 15, 2018

@Eli-Golin
Thank you for the info!
I will try to take a look at it as soon as possible. In the meantime, did you find a workaround that fixed it for you? I think giving the appenders different types should help for now.
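
For reference, a minimal sketch of that workaround against the configuration above. The type names are hypothetical placeholders; only the <logzioType> element is new, everything else stays as in the original appenders.

    <appender name="logzio-es" class="ch.qos.logback.classic.AsyncAppender">
        <appender class="io.logz.logback.LogzioLogbackAppender">
            <token>${LOGZIO_ES_TOKEN}</token>
            <logzioUrl>${LOGZIO_URL}</logzioUrl>
            <!-- hypothetical type name; any value that is distinct per appender should do -->
            <logzioType>webrecorder-es</logzioType>
            <inMemoryQueue>false</inMemoryQueue>
            <!-- ... rest unchanged ... -->
        </appender>
    </appender>

    <appender name="logzio-s3" class="ch.qos.logback.classic.AsyncAppender">
        <appender class="io.logz.logback.LogzioLogbackAppender">
            <token>${LOGZIO_AUDIT_TOKEN}</token>
            <logzioUrl>${LOGZIO_URL}</logzioUrl>
            <!-- hypothetical type name -->
            <logzioType>webrecorder-audit</logzioType>
            <inMemoryQueue>false</inMemoryQueue>
            <!-- ... rest unchanged ... -->
        </appender>
    </appender>

Keep in mind that the type is also sent to Logz.io and may affect how events are parsed and indexed there, so pick values you are comfortable seeing as index types.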
