Tags: container log collection in a Kubernetes environment
How to Collect Application Logs in a K8S Environment
===
This article focuses on how to collect container logs in a Kubernetes (K8S) environment.
In a K8S cluster there are generally three approaches to collecting container logs. The first is to install a log-collection agent such as Fluentd on every node. Its drawback is that the application must write its logs to standard output, and the agent then reads the log files under /var/log/containers on each node. Those files have names like user-center-765885677f-j68zt_default_user-center-0867b9c2f8ede64cebeb359dd08a6b05f690d50427aa89f7498597db8944cccc.log; the long random suffix makes it hard to map a file back to the application running inside the container. I have also seen people online report that multi-line Java error messages in these files are not merged, although I have not tested this approach myself.
The second approach is to run a sidecar container inside the application's pod; the sidecar mounts the same log volume as the application container and can be Filebeat, Fluentd, or similar. Its drawback is that every pod has to run an extra log-collection container, which consumes additional resources.
The third approach is to send the application's logs directly to Kafka, have Logstash consume them from Kafka, process them into JSON, and ship them to the Elasticsearch cluster, where they are finally displayed in Kibana. This is the approach I experimented with: by modifying the logback configuration file, logs are sent straight to Kafka, which acts as a buffer. The configuration is shown below.
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<jmxConfigurator/> <!-- allow dynamic reconfiguration via JMX -->
<property name="log-path" value="/apptestlogs" /> <!-- unified log root directory -->
<property name="app-name" value="test" /> <!-- application name -->
<property name="filename" value="test-test" /> <!-- log file name; defaults to the component name -->
<property name="dev-group-name" value="test" /> <!-- development team name -->
<conversionRule conversionWord="traceId" converterClass="org.lsqt.components.log.logback.TraceIdConvert"/>
<!-- adjust the variables above to your environment - end -->
<appender name="consoleAppender" class="ch.qos.logback.core.ConsoleAppender">
<!-- typical log pattern -->
<!-- <encoder> -->
<!--<pattern>[%date{ISO8601}] [%level] %logger{80} [%thread] [%traceId] ${dev-group-name} ${app-name} Line:%-3L - %msg%n</pattern>-->
<!--</encoder>-->
<encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
<layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.TraceIdPatternLogbackLayout">
<pattern>[%date{ISO8601}] [%level] %logger{80} [%thread] [%tid] ${dev-group-name} ${app-name} Line:%-3L - %msg%n</pattern>
</layout>
</encoder>
</appender>
<appender name="fileAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>${log-path}/${app-name}/${filename}.log</file>
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<fileNamePattern>${log-path}/${app-name}/${filename}.%d{yyyy-MM-dd}.%i.log</fileNamePattern>
<maxHistory>15</maxHistory>
<!-- maxFileSize caps the size of a single log file; when a file reaches 300MB it rolls over to a new indexed file (maxHistory controls how many days of rolled files are kept) -->
<timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
<maxFileSize>300MB</maxFileSize>
</timeBasedFileNamingAndTriggeringPolicy>
</rollingPolicy>
<!-- <encoder> -->
<!--<pattern>[%date{ISO8601}] [%level] %logger{80} [%thread] [%traceId] ${dev-group-name} ${app-name} Line:%-3L - %msg%n</pattern>-->
<!--</encoder>-->
<encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
<layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.TraceIdPatternLogbackLayout">
<pattern>[%date{ISO8601}] [%level] %logger{80} [%thread] [%tid] ${dev-group-name} ${app-name} Line:%-3L - %msg%n</pattern>
</layout>
</encoder>
</appender>
<appender name="errorAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>${log-path}/${app-name}/${filename}-error.log</file>
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<fileNamePattern>${log-path}/${app-name}/${filename}-error.%d{yyyy-MM-dd}.%i.log</fileNamePattern>
<timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
<maxFileSize>300MB</maxFileSize>
</timeBasedFileNamingAndTriggeringPolicy>
<maxHistory>15</maxHistory>
</rollingPolicy>
<!--<encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">-->
<!--<pattern>[%date{ISO8601}] [%level] %logger{80} [%thread] [%traceId] ${dev-group-name} ${app-name} Line:%-3L - %msg%n</pattern>-->
<!--</encoder>-->
<encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
<layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.TraceIdPatternLogbackLayout">
<pattern>[%date{ISO8601}] [%level] %logger{80} [%thread] [%tid] ${dev-group-name} ${app-name} Line:%-3L - %msg%n</pattern>
</layout>
</encoder>
<filter class="ch.qos.logback.classic.filter.LevelFilter">
<level>ERROR</level>
<onMatch>ACCEPT</onMatch>
<onMismatch>DENY</onMismatch>
</filter>
</appender>
<!-- This example configuration is probably the most unreliable under
failure conditions but won't block your application at all -->
<appender name="very-relaxed-and-fast-kafka-appender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
<encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
<pattern>[%date{ISO8601}] [%level] %logger{80} [%thread] [%tid] ${dev-group-name} ${app-name} Line:%-3L - %msg%n</pattern>
</encoder>
<topic>elk-stand-sit-fkp-eureka</topic>
<!-- we don't care how the log messages will be partitioned -->
<keyingStrategy class="com.github.danielwegener.logback.kafka.keying.NoKeyKeyingStrategy" />
<!-- use async delivery. the application threads are not blocked by logging -->
<deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy" />
<!-- each <producerConfig> translates to regular kafka-client config (format: key=value) -->
<!-- producer configs are documented here: https://kafka.apache.org/documentation.html#newproducerconfigs -->
<!-- bootstrap.servers is the only mandatory producerConfig -->
<producerConfig>bootstrap.servers=192.168.1.12:9092,192.168.1.14:9092,192.168.1.15:9092</producerConfig>
<!-- don't wait for a broker to ack the reception of a batch. -->
<producerConfig>acks=0</producerConfig>
<!-- wait up to 1000ms and collect log messages before sending them as a batch -->
<producerConfig>linger.ms=1000</producerConfig>
<!-- even if the producer buffer runs full, do not block the application but start to drop messages -->
<producerConfig>max.block.ms=0</producerConfig>
<!-- define a client-id that you use to identify yourself against the kafka broker -->
<producerConfig>client.id=${HOSTNAME}-${CONTEXT_NAME}-logback-relaxed</producerConfig>
<!-- all log messages that cannot be delivered fast enough will immediately go to the fallback appenders -->
<producerConfig>block.on.buffer.full=false</producerConfig>
<!-- this is the fallback appender if kafka is not available. -->
<appender-ref ref="consoleAppender" />
</appender>
<root level="debug">
<appender-ref ref="very-relaxed-and-fast-kafka-appender" />
<appender-ref ref="fileAppender"/>
<appender-ref ref="consoleAppender"/>
<appender-ref ref="errorAppender"/>
</root>
</configuration>
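Note that the KafkaAppender and the TraceIdPatternLogbackLayout referenced above are not part of core logback; they come from the logback-kafka-appender project and the SkyWalking logback toolkit. A minimal Maven dependency sketch, assuming a Maven build (the version numbers are placeholders and should be aligned with your Kafka clients and SkyWalking agent):
<!-- hypothetical pom.xml fragment; adjust the versions to your environment -->
<dependency>
  <groupId>com.github.danielwegener</groupId>
  <artifactId>logback-kafka-appender</artifactId>
  <version>0.2.0-RC2</version> <!-- assumed version -->
</dependency>
<dependency>
  <groupId>org.apache.skywalking</groupId>
  <artifactId>apm-toolkit-logback-1.x</artifactId>
  <version>8.7.0</version> <!-- assumed version; match your SkyWalking agent -->
</dependency>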
### 2. Notes on the logback configuration ###
The relaxed appender above favors speed: with acks=0, max.block.ms=0 and the asynchronous delivery strategy it never blocks the application, at the cost of possibly dropping messages when Kafka is unavailable. For comparison, here is the more restrictive example configuration for the same KafkaAppender, which tries to guarantee ordered delivery instead:
<!-- This example configuration is more restrictive and will try to ensure that every message
is eventually delivered in an ordered fashion (as long the logging application stays alive) -->
<appender name="very-restrictive-kafka-appender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
<encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
<pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
</encoder>
<topic>important-logs</topic>
<!-- ensure that every message sent by the executing host is partitioned to the same partition strategy -->
<keyingStrategy class="com.github.danielwegener.logback.kafka.keying.HostNameKeyingStrategy" />
<!-- block the logging application thread if the kafka appender cannot keep up with sending the log messages -->
<deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.BlockingDeliveryStrategy">
<!-- wait indefinitely until the kafka producer was able to send the message -->
<timeout>0</timeout>
</deliveryStrategy>
<!-- each <producerConfig> translates to regular kafka-client config (format: key=value) -->
<!-- producer configs are documented here: https://kafka.apache.org/documentation.html#newproducerconfigs -->
<!-- bootstrap.servers is the only mandatory producerConfig -->
<producerConfig>bootstrap.servers=localhost:9092</producerConfig>
<!-- restrict the size of the buffered batches to 8MB (default is 32MB) -->
<producerConfig>buffer.memory=8388608</producerConfig>
<!-- If the kafka broker is not online when we try to log, just block until it becomes available -->
<producerConfig>metadata.fetch.timeout.ms=99999999999</producerConfig>
<!-- define a client-id that you use to identify yourself against the kafka broker -->
<producerConfig>client.id=${HOSTNAME}-${CONTEXT_NAME}-logback-restrictive</producerConfig>
<!-- use gzip to compress each batch of log messages. valid values: none, gzip, snappy -->
<producerConfig>compression.type=gzip</producerConfig>
<!-- Log every log message that could not be sent to kafka to STDERR -->
<appender-ref ref="STDERR"/>
</appender>
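One thing to note: the fallback <appender-ref ref="STDERR"/> in this restrictive example assumes an appender named STDERR has been defined elsewhere; it is not included in the snippet. A minimal sketch of such a fallback appender, using logback's standard ConsoleAppender pointed at standard error (the name STDERR is simply the convention used in the example):
<appender name="STDERR" class="ch.qos.logback.core.ConsoleAppender">
  <!-- write to standard error so fallback logs are separated from normal stdout output -->
  <target>System.err</target>
  <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
    <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
  </encoder>
</appender>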
With logback configured to send logs straight to Kafka using the asynchronous delivery mode, the container logs showed up in Kibana as expected.