
Commit f287567

Author: Pat

Update K8S installation to remove legacy cri info (fluent#1378)

Signed-off-by: Patrick Stephens <[email protected]>

1 parent 2d17aa8, commit f287567

2 files changed (+22 -48 lines)


installation/kubernetes.md (+7 -38)
````diff
@@ -31,31 +31,27 @@ To obtain this information, a built-in filter plugin called _kubernetes_ talks t
 
 ## Installation <a href="#installation" id="installation"></a>
 
-[Fluent Bit](http://fluentbit.io) should be deployed as a DaemonSet, so on that way it will be available on every node of your Kubernetes cluster.
+[Fluent Bit](http://fluentbit.io) should be deployed as a DaemonSet, so it will be available on every node of your Kubernetes cluster.
 
-The recommended way to deploy Fluent Bit is with the official Helm Chart: https://github.com/fluent/helm-charts
+The recommended way to deploy Fluent Bit is with the official Helm Chart: <https://github.com/fluent/helm-charts>
 
 ### Note for OpenShift
 
-If you are using Red Hat OpenShift you will also need to set up security context constraints (SCC):
-
-```
-$ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/fluent-bit-openshift-security-context-constraints.yaml
-```
+If you are using Red Hat OpenShift you will also need to set up security context constraints (SCC) using the relevant option in the helm chart.
 
 ### Installing with Helm Chart
 
 [Helm](https://helm.sh) is a package manager for Kubernetes and allows you to quickly deploy application packages into your running cluster. Fluent Bit is distributed via a helm chart found in the Fluent Helm Charts repo: [https://github.com/fluent/helm-charts](https://github.com/fluent/helm-charts).
 
 To add the Fluent Helm Charts repo use the following command
 
-```
+```shell
 helm repo add fluent https://fluent.github.io/helm-charts
 ```
 
 To validate that the repo was added you can run `helm search repo fluent` to ensure the charts were added. The default chart can then be installed by running the following
 
-```
+```shell
 helm upgrade --install fluent-bit fluent/fluent-bit
 ```
````
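The OpenShift change above swaps a raw manifest for a chart option. As a rough illustration of what that looks like in practice, the toggle might be set through a Helm values file; the `openShift.enabled` key below is an assumption about the chart's `values.yaml`, so verify the actual key in the fluent/helm-charts repo before relying on it.

```yaml
# values.yaml -- sketch only; confirm the key name against the chart's values.yaml
openShift:
  enabled: true   # ask the chart to create the SecurityContextConstraints needed on OpenShift
```

It would then be applied with the same Helm flow shown in the hunk:

```shell
helm upgrade --install fluent-bit fluent/fluent-bit -f values.yaml
```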
````diff
@@ -67,39 +63,12 @@ The default chart values include configuration to read container logs, with Dock
 
 The default configuration of Fluent Bit makes sure of the following:
 
-* Consume all containers logs from the running Node.
-* The [Tail input plugin](https://docs.fluentbit.io/manual/v/1.0/input/tail) will not append more than **5MB** into the engine until they are flushed to the Elasticsearch backend. This limit aims to provide a workaround for [backpressure](https://docs.fluentbit.io/manual/v/1.0/configuration/backpressure) scenarios.
+* Consume all containers logs from the running Node and parse them with either the `docker` or `cri` multiline parser.
+* Persist how far it got into each file it is tailing so if a pod is restarted it picks up from where it left off.
 * The Kubernetes filter will enrich the logs with Kubernetes metadata, specifically _labels_ and _annotations_. The filter only goes to the API Server when it cannot find the cached info, otherwise it uses the cache.
 * The default backend in the configuration is Elasticsearch set by the [Elasticsearch Output Plugin](../pipeline/outputs/elasticsearch.md). It uses the Logstash format to ingest the logs. If you need a different Index and Type, please refer to the plugin option and do your own adjustments.
 * There is an option called **Retry\_Limit** set to False, that means if Fluent Bit cannot flush the records to Elasticsearch it will re-try indefinitely until it succeed.
 
-## Container Runtime Interface (CRI) parser
-
-Fluent Bit by default assumes that logs are formatted by the Docker interface standard. However, when using CRI you can run into issues with malformed JSON if you do not modify the parser used. Fluent Bit includes a CRI log parser that can be used instead. An example of the parser is seen below:
-
-```
-# CRI Parser
-[PARSER]
-    # http://rubular.com/r/tjUt3Awgg4
-    Name cri
-    Format regex
-    Regex ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>[^ ]*) (?<message>.*)$
-    Time_Key time
-    Time_Format %Y-%m-%dT%H:%M:%S.%L%z
-```
-
-To use this parser change the Input section for your configuration from `docker` to `cri`
-
-```
-[INPUT]
-    Name tail
-    Path /var/log/containers/*.log
-    Parser cri
-    Tag kube.*
-    Mem_Buf_Limit 5MB
-    Skip_Long_Lines On
-```
-
 ## Windows Deployment
 
 Since v1.5.0, Fluent Bit supports deployment to Windows pods.
````
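The two rewritten bullets above describe multiline parsing and offset persistence in prose. A minimal tail input sketch showing the corresponding options is below; it is illustrative only, not the chart's rendered configuration, and the path and values are assumptions.

```
[INPUT]
    Name              tail
    Tag               kube.*
    Path              /var/log/containers/*.log
    # Parse container logs written in either the Docker JSON or CRI format
    multiline.parser  docker, cri
    # Track the tail offset per file so a restarted pod resumes where it left off
    DB                /var/log/flb_kube.db
    Mem_Buf_Limit     5MB
    Skip_Long_Lines   On
```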

pipeline/filters/kubernetes.md (+15 -10)
````diff
@@ -83,7 +83,9 @@ To perform processing of the _log_ key, it's **mandatory to enable** the _Merge\
 If _log_ value processing fails, the value is untouched. The order above is not chained, meaning it's exclusive and the filter will try only one of the options above, **not** all of them.
 
 ## Kubernetes Namespace Meta
-Namespace Meta can be enabled via the following settings:
+
+Namespace Meta can be enabled via the following settings:
+
 * Namespace\_Labels
 * Namespace\_Annotations
````
````diff
@@ -94,7 +96,7 @@ Namespace Meta if collected will be stored within a `kubernetes_namespace` recor
 > Namespace meta is not be guaranteed to be in sync as namespace labels & annotations can be adjusted after pod creation. Adjust `Kube_Meta_Namespace_Cache_TTL` to lower caching times to fit your use case.
 
 * Namespace\_Metadata\_Only
-  - Using this feature will instruct fluent-bit to only fetch namespace metadata and to not fetch POD metadata at all.
+  * Using this feature will instruct fluent-bit to only fetch namespace metadata and to not fetch POD metadata at all.
   POD basic metadata like container id, host, etc will be NOT be added and the Labels and Annotations configuration options which are used specifically for POD Metadata will be ignored.
 
 ## Kubernetes Pod Annotations
````
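To make the settings in the two hunks above concrete, a kubernetes filter block enabling namespace metadata might look like the sketch below; the option names come from this page, while the `Match` pattern and surrounding layout are assumptions.

```
[FILTER]
    Name                   kubernetes
    Match                  kube.*
    # Attach namespace labels and annotations under the kubernetes_namespace record
    Namespace_Labels       On
    Namespace_Annotations  On
    # Alternatively, fetch only namespace metadata and skip pod metadata entirely:
    # Namespace_Metadata_Only On
```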
````diff
@@ -162,7 +164,7 @@ Kubernetes Filter depends on either [Tail](../inputs/tail.md) or [Systemd](../in
     Name              tail
     Tag               kube.*
     Path              /var/log/containers/*.log
-    Parser            docker
+    multiline.parser  docker, cri
 
 [FILTER]
     Name              kubernetes
````
````diff
@@ -223,11 +225,11 @@ You can see on [Rublar.com](https://rubular.com/r/HZz3tYAahj6JCd) web site how t
 
 * [https://rubular.com/r/HZz3tYAahj6JCd](https://rubular.com/r/HZz3tYAahj6JCd)
 
-#### Custom Regex
+### Custom Regex
 
 Under certain and not common conditions, a user would want to alter that hard-coded regular expression, for that purpose the option **Regex\_Parser** can be used \(documented on top\).
 
-##### Custom Tag For Enhanced Filtering
+#### Custom Tag For Enhanced Filtering
 
 One such use case involves splitting logs by namespace, pods, containers or container id.
 The tag is restructured within the tail input using match groups, this can simplify the filtering by those match groups later in the pipeline.
````
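The context lines above mention restructuring the tag with match groups, but the hunk itself only touches headings. A sketch of that technique on the tail input is shown below; the regular expression is a simplified assumption about the `/var/log/containers/<pod>_<namespace>_<container>-<id>.log` file naming, not the exact expression from the documentation.

```
[INPUT]
    Name       tail
    Path       /var/log/containers/*.log
    # Named capture groups from the file name can be referenced in the Tag
    Tag_Regex  (?<pod_name>[^_]+)_(?<namespace_name>[^_]+)_(?<container_name>.+)-
    Tag        kube.<namespace_name>.<pod_name>.<container_name>
```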
````diff
@@ -287,7 +289,7 @@ rules:
 - pods
 - nodes
 - nodes/proxy
-verbs:
+verbs:
 - get
 - list
 - watch
````

(The change on the `verbs:` line is whitespace only; the visible text is unchanged.)
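Because the hunk above only shows fragments of the RBAC rules, here is a sketch of the kind of ClusterRole those fragments belong to; the object name and the inclusion of `namespaces` are illustrative assumptions, while the resources and verbs shown match the hunk.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluent-bit        # assumed name for illustration
rules:
  - apiGroups: [""]
    resources:
      - namespaces        # assumed; useful when namespace metadata is enabled
      - pods
      - nodes
      - nodes/proxy
    verbs:
      - get
      - list
      - watch
```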
````diff
@@ -432,19 +434,23 @@ If you are not seeing metadata added to your kubernetes logs and see the followi
 When Fluent Bit is deployed as a DaemonSet it generally runs with specific roles that allow the application to talk to the Kubernetes API server. If you are deployed in a more restricted environment check that all the Kubernetes roles are set correctly.
 
 You can test this by running the following command (replace `fluentbit-system` with the namespace where your fluentbit is installed)
+
 ```text
 kubectl auth can-i list pods --as=system:serviceaccount:fluentbit-system:fluentbit
 ```
-If set roles are configured correctly, it should simply respond with `yes`.
 
-For instance, using Azure AKS, running the above command may respond with:
+If set roles are configured correctly, it should simply respond with `yes`.
+
+For instance, using Azure AKS, running the above command may respond with:
+
 ```text
 no - Azure does not have opinion for this user.
 ```
 
-If you have connectivity to the API server, but still "could not get meta for POD" - debug logging might give you a message with `Azure does not have opinion for this user`. Then the following `subject` may need to be included in the `fluentbit` `ClusterRoleBinding`:
+If you have connectivity to the API server, but still "could not get meta for POD" - debug logging might give you a message with `Azure does not have opinion for this user`. Then the following `subject` may need to be included in the `fluentbit` `ClusterRoleBinding`:
 
 appended to `subjects` array:
+
 ```yaml
 - apiGroup: rbac.authorization.k8s.io
   kind: Group
````
````diff
@@ -462,4 +468,3 @@ By default the Kube\_URL is set to `https://kubernetes.default.svc:443` . Ensure
 ### I can't see new objects getting metadata
 
 In some cases, you may only see some objects being appended with metadata while other objects are not enriched. This can occur at times when local data is cached and does not contain the correct id for the kubernetes object that requires enrichment. For most Kubernetes objects the Kubernetes API server is updated which will then be reflected in Fluent Bit logs, however in some cases for `Pod` objects this refresh to the Kubernetes API server can be skipped, causing metadata to be skipped.
-
````
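The caching behaviour described in the final context lines can be tuned rather than just observed. If stale cached metadata is suspected, lowering the filter's TTL options is one approach; the snippet below is a sketch with arbitrary values, using `Kube_Meta_Cache_TTL` for pod metadata and the `Kube_Meta_Namespace_Cache_TTL` option already mentioned on this page for namespace metadata.

```
[FILTER]
    Name                           kubernetes
    Match                          kube.*
    # Expire cached pod metadata after 60 seconds so updates are re-fetched
    Kube_Meta_Cache_TTL            60
    # Expire cached namespace metadata on the same interval
    Kube_Meta_Namespace_Cache_TTL  60
```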