#### Description
Add signal-specific configuration for topic and encoding.
The topics are already signal-specific by default; it just hasn't been
possible to explicitly configure a different topic for each signal. Thus
if you set `topic: foo`, it would be used for all signals, which can
never work with the receiver.
Similarly, while the default encoding is the same for all signals (i.e.
otlp_proto), some encodings are available only for certain signals, e.g.
azure_resource_logs is (obviously) only available for logs. This means
you could not use the same receiver for multiple signals unless they
each used the same encoding.
To address both of these issues we introduce signal-specific
configuration: `logs::topic`, `metrics::topic`, `traces::topic`,
`logs::encoding`, `metrics::encoding`, and `traces::encoding`.
The existing `topic` and `encoding` configuration options have been deprecated.
If the new fields are set, they will take precedence; otherwise if the
deprecated fields are set they will be used. The defaults have not
changed.
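As a sketch of the new fields described above (the receiver entry below is illustrative; topic names shown are the documented defaults):

```yaml
receivers:
  kafka:
    # Signal-specific configuration introduced by this change.
    # These take precedence over the deprecated top-level
    # `topic` and `encoding` settings.
    logs:
      topic: otlp_logs
      encoding: azure_resource_logs  # a logs-only encoding
    traces:
      topic: otlp_spans
      encoding: otlp_proto
```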
#### Link to tracking issue
Fixes #32735
#### Testing
Unit tests added.
#### Documentation
README updated.
The Kafka receiver receives telemetry data from Kafka, with configurable topics and encodings.
If used in conjunction with the `kafkaexporter` configured with `include_metadata_keys`, the Kafka receiver will also propagate the Kafka headers to the downstream pipeline, giving the rest of the pipeline access to arbitrary metadata keys and values.
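A minimal sketch of such a pairing (the metadata key `tenant_id` is hypothetical; only `include_metadata_keys` is taken from the text above):

```yaml
# Producing side: the kafkaexporter forwards selected client
# metadata as Kafka message headers.
exporters:
  kafka:
    include_metadata_keys:
      - tenant_id  # hypothetical metadata key

# Consuming side: the Kafka receiver propagates message headers
# back into request metadata for the downstream pipeline.
receivers:
  kafka:
```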
The following settings can be optionally configured:

- `brokers` (default = localhost:9092): The list of kafka brokers.
- `resolve_canonical_bootstrap_servers_only` (default = false): Whether to resolve then reverse-lookup broker IPs during startup.
- `logs`
  - `topic` (default = otlp_logs): The name of the Kafka topic from which to consume logs.
  - `encoding` (default = otlp_proto): The encoding for the Kafka topic. See [Supported encodings](#supported-encodings).
- `metrics`
  - `topic` (default = otlp_metrics): The name of the Kafka topic from which to consume metrics.
  - `encoding` (default = otlp_proto): The encoding for the Kafka topic. See [Supported encodings](#supported-encodings).
- `traces`
  - `topic` (default = otlp_spans): The name of the Kafka topic from which to consume traces.
  - `encoding` (default = otlp_proto): The encoding for the Kafka topic. See [Supported encodings](#supported-encodings).
- `topic` (Deprecated [v0.124.0]: use `logs::topic`, `traces::topic`, or `metrics::topic`). If this is set, it will take precedence over the default value for those fields. Only one telemetry type may be used for a given topic.
- `encoding` (Deprecated [v0.124.0]: use `logs::encoding`, `traces::encoding`, or `metrics::encoding`). If this is set, it will take precedence over the default value for those fields.
- `group_id` (default = otel-collector): The consumer group that the receiver will be consuming messages from.
- `client_id` (default = otel-collector): The consumer client ID that the receiver will use.
- `initial_offset` (default = latest): The initial offset to use if no offset was previously committed. Must be `latest` or `earliest`.
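Putting the settings above together, a minimal configuration sketch (broker address and consumer IDs are the documented defaults; values are illustrative):

```yaml
receivers:
  kafka:
    brokers: ["localhost:9092"]
    group_id: otel-collector
    client_id: otel-collector
    initial_offset: earliest
    logs:
      topic: otlp_logs
      encoding: otlp_proto
    metrics:
      topic: otlp_metrics
      encoding: otlp_proto
    traces:
      topic: otlp_spans
      encoding: otlp_proto
```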
- `randomization_factor`: A random factor used to calculate the next backoff. Randomized interval = RetryInterval * (1 ± RandomizationFactor).
- `max_elapsed_time`: The maximum amount of time trying to backoff before giving up. If set to 0, the retries are never stopped.
### Supported encodings

The Kafka receiver supports encoding extensions, as well as the following built-in encodings.

Available for all signals:

- `otlp_proto`: the payload is decoded as OTLP Protobuf.
- `otlp_json`: the payload is decoded as OTLP JSON.

Available only for traces:

- `jaeger_proto`: the payload is deserialized to a single Jaeger proto `Span`.
- `jaeger_json`: the payload is deserialized to a single Jaeger JSON span using `jsonpb`.
- `zipkin_proto`: the payload is deserialized into a list of Zipkin proto spans.
- `zipkin_json`: the payload is deserialized into a list of Zipkin V2 JSON spans.
- `zipkin_thrift`: the payload is deserialized into a list of Zipkin Thrift spans.

Available only for logs:

- `raw`: the payload's bytes are inserted as the body of a log record.
- `text`: the payload is decoded as text and inserted as the body of a log record. By default, it uses UTF-8 to decode. You can use `text_<ENCODING>`, like `text_utf-8` or `text_shift_jis`, to customize this behavior.
- `json`: the payload is decoded as JSON and inserted as the body of a log record.
- `azure_resource_logs`: the payload is converted from Azure Resource Logs format to OTel format.

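For example, a logs-only sketch using one of the logs-specific encodings (the topic name `app_logs` is illustrative, not a default):

```yaml
receivers:
  kafka:
    logs:
      topic: app_logs
      # Each message payload is decoded as UTF-8 text and becomes
      # the body of a log record.
      encoding: text_utf-8
```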
### Message header propagation

The Kafka receiver will extract Kafka message headers and include them as request metadata (context).
This metadata can then be used throughout the pipeline, for example to set attributes using the