
Logstash 2.4.1/5.0.x: Environments Variables parsed as Lists/Arrays for Logstash Configuration #6366


Closed
berglh opened this issue Dec 7, 2016 · 29 comments · Fixed by #12051
Comments

@berglh

berglh commented Dec 7, 2016

I've been replacing the use of the logstash-filter-environment plugin with the new feature: Using Environment Variables in the Configuration.

I initially misdiagnosed the problem and reported it in ES hosts array: Support for parsing Environment Variables; @jordansissel advised me to file an issue in the correct project for this particular feature.

One of the things I am attempting to do is to supply an array for a configuration item such as the hosts array in the logstash-output-elasticsearch plugin. This is useful when the environment hostnames and IPs are ephemeral and service discovery (querying etcd) is used to determine the IPs and ports of online Elasticsearch nodes at Logstash startup.

I currently achieve this by using sed to replace a placeholder string in the config at Docker container startup. It's not a major issue, but considering that environment variable parsing is now built into Logstash, it'd be great to leverage it instead of hacking the config prior to launch.
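For context, a sketch of that entrypoint workaround (the placeholder token, config path, and the way the hosts value is obtained are hypothetical, not copied verbatim from my setup):

#!/bin/sh
# Hypothetical entrypoint: substitute a placeholder token with a hosts array
# discovered at container startup, then launch Logstash.
ES_HOSTS_ARRAY=${ES_HOSTS_ARRAY:-'"es1:9200", "es2:9200", "es3:9200"'}
sed -i "s|__ES_HOSTS__|${ES_HOSTS_ARRAY}|" /usr/share/logstash/pipeline/logstash.conf
exec /usr/share/logstash/bin/logstash -f /usr/share/logstash/pipeline/logstash.conf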

  • Version: 2.4.1 & 5.0.2
  • Operating System: Oracle Enterprise Linux
  • Docker: 1.12.2
  • Sample Data:
    • With a host string:
~/logstash-5.0.2$ ES_HOSTS="localhost:9200" bin/logstash -e 'output { elasticsearch { hosts => "${ES_HOSTS}" } }'

Sending Logstash's logs to /home/somebody/logstash-5.0.2/logs which is now configured via log4j2.properties
The stdin plugin is now waiting for input:
[2016-12-06T16:07:20,738][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>["http://localhost:9200"]}}
[2016-12-06T16:07:20,740][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2016-12-06T16:07:20,781][WARN ][logstash.outputs.elasticsearch] Marking url as dead. {:reason=>"Elasticsearch Unreachable: [http://localhost:9200][Manticore::SocketException] Connection refused (Connection refused)", :url=>#<URI::HTTP:0x41764dd8 URL:http://localhost:9200>, :error_message=>"Elasticsearch Unreachable: [http://localhost:9200][Manticore::SocketException] Connection refused (Connection refused)", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
  • Sample Data:
    • With a host array:
user@computer:~/logstash-5.0.2$ export ES_HOSTS="\"localhost:9200\", \"host2:9200\", \"host3:9200\""

user@computer:~/logstash-5.0.2$ echo $ES_HOSTS
"localhost:9200", "host2:9200", "host3:9200"

user@computer:~/logstash-5.0.2$ bin/logstash -e 'output { elasticsearch { hosts => [ "${ES_HOSTS}" ] } }'
Sending Logstash's logs to ~/logstash-5.0.2/logs which is now configured via log4j2.properties
The stdin plugin is now waiting for input:
[2016-12-06T16:17:02,070][ERROR][logstash.agent           ] Pipeline aborted due to error {:exception=>#<LogStash::ConfigurationError: Host '"localhost:9200", "host2:9200", "host3:9200"' was specified, but is not valid! Use either a full URL or a hostname:port string!>, :backtrace=>["~/logstash-5.0.2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.2.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:183:in `host_to_url'", "~/logstash-5.0.2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.2.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:93:in `build_pool'", "org/jruby/RubyArray.java:2414:in `map'", "~/logstash-5.0.2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.2.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:93:in `build_pool'", "~/logstash-5.0.2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.2.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:20:in `initialize'", "~/logstash-5.0.2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.2.1-java/lib/logstash/outputs/elasticsearch/http_client_builder.rb:53:in `build'", "~/logstash-5.0.2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.2.1-java/lib/logstash/outputs/elasticsearch.rb:188:in `build_client'", "~/logstash-5.0.2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.2.1-java/lib/logstash/outputs/elasticsearch/common.rb:13:in `register'", "~/logstash-5.0.2/logstash-core/lib/logstash/output_delegator_strategies/shared.rb:8:in `register'", "~/logstash-5.0.2/logstash-core/lib/logstash/output_delegator.rb:37:in `register'", "~/logstash-5.0.2/logstash-core/lib/logstash/pipeline.rb:196:in `start_workers'", "org/jruby/RubyArray.java:1613:in `each'", "~/logstash-5.0.2/logstash-core/lib/logstash/pipeline.rb:196:in `start_workers'", "~/logstash-5.0.2/logstash-core/lib/logstash/pipeline.rb:153:in `run'", "~/logstash-5.0.2/logstash-core/lib/logstash/agent.rb:250:in `start_pipeline'"]}
[2016-12-06T16:17:02,086][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2016-12-06T16:17:05,074][WARN ][logstash.agent           ] stopping pipeline {:id=>"main"}

user@computer:~/logstash-5.0.2$ bin/logstash -e 'output { elasticsearch { hosts => [ "localhost:9200", "host2:9200", "host3:9200" ] } }'
Sending Logstash's logs to ~/logstash-5.0.2/logs which is now configured via log4j2.properties
The stdin plugin is now waiting for input:
[2016-12-06T16:17:21,387][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>["http://localhost:9200", "http://host2:9200", "http://host3:9200"]}}
[2016-12-06T16:17:21,389][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2016-12-06T16:17:21,444][WARN ][logstash.outputs.elasticsearch] Marking url as dead. {:reason=>"Elasticsearch Unreachable: [http://localhost:9200][Manticore::SocketException] Connection refused (Connection refused)", :url=>#<URI::HTTP:0x51dd94f4 URL:http://localhost:9200>, :error_message=>"Elasticsearch Unreachable: [http://localhost:9200][Manticore::SocketException] Connection refused (Connection refused)", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
[2016-12-06T16:17:21,445][ERROR][logstash.outputs.elasticsearch] Failed to install template. {:message=>"Elasticsearch Unreachable: [http://localhost:9200][Manticore::SocketException] Connection refused (Connection refused)", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
[2016-12-06T16:17:21,446][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["localhost:9200", "host2:9200", "host3:9200"]}
[2016-12-06T16:17:21,447][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>1000}
[2016-12-06T16:17:21,449][INFO ][logstash.pipeline        ] Pipeline main started
[2016-12-06T16:17:21,467][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2016-12-06T16:17:26,390][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:url=>#<URI::HTTP:0x588852e4 URL:http://localhost:9200>, :healthcheck_path=>"/"}
^C[2016-12-06T16:17:29,147][WARN ][logstash.runner          ] SIGINT received. Shutting down the agent.
[2016-12-06T16:17:29,151][WARN ][logstash.agent           ] stopping pipeline {:id=>"main"}
@untergeek
Member

Just for the sake of hare-brained ideas...

What happens if instead of

$ export ES_HOSTS="\"localhost:9200\", \"host2:9200\", \"host3:9200\""

you instead did

$ export ES_HOSTS='["localhost:9200", "host2:9200", "host3:9200"]'

(of course, you could just escape the brackets instead of using single quotes)

Then you see (and I just verified this in a terminal):

$ echo $ES_HOSTS
["localhost:9200", "host2:9200", "host3:9200"]

In theory, this may let you do:

output {
  elasticsearch {
    hosts => ${ES_HOSTS}
  }
}

I know that this works in Curator. It may or may not work in Logstash. I figure it's worth a try, though. I think that the ENV variables are parsed first, into strings, which would potentially make the square brackets work when the config is parsed (after env var expansion).

@berglh
Author

berglh commented Dec 7, 2016

@untergeek That's a great idea; I tried it as per your suggestion, but still no dice. I also tried escaping the square brackets, but then the config contained the literal string: Host '\["localhost:9200", "host2:9200", "host3:9200"\]' was specified,

logstash-5.0.2$ ES_HOSTS='["localhost:9200", "host2:9200", "host3:9200"]' ./bin/logstash -e 'output { elasticsearch { hosts => "${ES_HOSTS}" } }'
Sending Logstash's logs to /home/uqblloy2/log-shipment/logstash-5.0.2/logs which is now configured via log4j2.properties
The stdin plugin is now waiting for input:
[2016-12-07T10:27:38,043][ERROR][logstash.agent           ] Pipeline aborted due to error {:exception=>#<LogStash::ConfigurationError: Host '["localhost:9200", "host2:9200", "host3:9200"]' was specified, but is not valid! Use either a full URL or a hostname:port string!>, :backtrace=>["~/logstash-5.0.2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.2.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:183:in `host_to_url'", "~/logstash-5.0.2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.2.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:93:in `build_pool'", "org/jruby/RubyArray.java:2414:in `map'", "~/logstash-5.0.2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.2.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:93:in `build_pool'", "~/logstash-5.0.2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.2.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:20:in `initialize'", "~/logstash-5.0.2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.2.1-java/lib/logstash/outputs/elasticsearch/http_client_builder.rb:53:in `build'", "~/logstash-5.0.2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.2.1-java/lib/logstash/outputs/elasticsearch.rb:188:in `build_client'", "~/logstash-5.0.2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.2.1-java/lib/logstash/outputs/elasticsearch/common.rb:13:in `register'", "~/logstash-5.0.2/logstash-core/lib/logstash/output_delegator_strategies/shared.rb:8:in `register'", "~/logstash-5.0.2/logstash-core/lib/logstash/output_delegator.rb:37:in `register'", "~/logstash-5.0.2/logstash-core/lib/logstash/pipeline.rb:196:in `start_workers'", "org/jruby/RubyArray.java:1613:in `each'", "~/logstash-5.0.2/logstash-core/lib/logstash/pipeline.rb:196:in `start_workers'", "~/logstash-5.0.2/logstash-core/lib/logstash/pipeline.rb:153:in `run'", "~/logstash-5.0.2/logstash-core/lib/logstash/agent.rb:250:in `start_pipeline'"]}
[2016-12-07T10:27:38,067][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2016-12-07T10:27:41,051][WARN ][logstash.agent           ] stopping pipeline {:id=>"main"}

@untergeek
Member

What if you omit the quotes around "${ES_HOSTS}" and make it just hosts => ${ES_HOSTS}?

@untergeek
Member

Maybe you did, and it made it a string anyway

@jordansissel
Contributor

jordansissel commented Dec 7, 2016 via email

@berglh
Author

berglh commented Dec 7, 2016

@untergeek

What if you omit the quotes around "${ES_HOSTS}" and make it just hosts => ${ES_HOSTS}?

The configuration parser complains because it expects any value to be wrapped in double quotes.

@untergeek
Member

untergeek commented Dec 7, 2016 via email

@alesnav

alesnav commented Jan 9, 2017

Hello,

I am facing this problem too. In my case, I'm trying to pass all Kafka topics as an array through an env variable.

I tried all of the options described above in this thread, as well as "${KAFKA_TOPICS[@]}", but without success.

Thanks!

@patrickwallaws

Any updates on this? This is really harshing my time... I'm trying to pass the list as an environment variable to my ECS cluster of Logstash servers.

If there are no updates on the feature, has anyone found a good workaround (other than using a single Elasticsearch host)?

@fbaligand
Contributor

fbaligand commented Aug 22, 2017

In issue #6665, 6 months ago, I proposed a way to do it that is simple to implement and that I'm ready to implement (as I implemented the original Logstash env var injection):

  • Like Jordan, I think that an env var with comma-separated values that Logstash splits into a list is a nice idea.
  • In my proposal, my_array => "${LS_ARRAY}" (with LS_ARRAY=value1,value2) would be replaced by my_array => ["value1", "value2"].
  • This would be consistent with number value replacement, where port => "${TCP_PORT}" is replaced by port => 12345.
  • To do that (see the sketch below):
    • I update the env var injection behavior here:
      params[name.to_s] = deep_replace(value)
    • I check whether the config type is an array (using the :validate info), and whether the value starts with ${ and ends with }
    • If so, I convert the resulting value to an array using String#split
    • Special case: if the resulting value is an empty string, I convert it to an empty array
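A minimal sketch of that coercion (a hypothetical helper, not Logstash source; it assumes the option's :validate metadata has already told us an array is expected):

# Hypothetical helper illustrating the proposed comma-split coercion.
def coerce_env_array(raw_value)
  return [] if raw_value.nil? || raw_value.empty?  # special case: empty string -> empty array
  raw_value.split(",").map(&:strip)                # "value1,value2" -> ["value1", "value2"]
end

coerce_env_array("value1,value2")  # => ["value1", "value2"]
coerce_env_array("")               # => []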

What do you think?

@nick-george

To whoever ends up implementing this: it would be fantastic if you could please also make it possible to pass in a nil value (as opposed to an empty string).

It's a long story, but being able to pass in a nil would allow me to get around this issue logstash-plugins/logstash-input-beats#196.

Thanks!
Nick

@elastic elastic deleted a comment from DScheper Mar 13, 2018
@elastic elastic deleted a comment from orphaner Mar 13, 2018
@elastic elastic deleted a comment from justinmtilley Mar 13, 2018
@elastic elastic deleted a comment from Battleroid Mar 13, 2018
@jordansissel
Contributor

Please do not send "+1" comments with no other content. This generates a ton of email for everyone.

I have deleted all prior +1 comments. At the time of deletion, there were 4.

If you feel compelled to "+1" something, please use GitHub issue reactions instead:
[screenshot: GitHub reaction picker]

@huyqut

huyqut commented Jun 27, 2018

Any updates on this issue yet?

@jordanhenderson
Copy link

jordanhenderson commented Aug 15, 2018

@fbaligand any updates on this? We really need this feature to keep our pipeline config clean. Also, I'm unsure from the title whether this applies to Logstash 6.x too? I'm hoping so 👍

@fbaligand
Contributor

Hi @jordanhenderson

Well, a year ago I was ready to make a PR to implement the feature as explained in my previous comment.
But @jordansissel (Logstash's creator) explained, in the two comments linked below, how he wanted the feature implemented. Since that requires changing the Logstash grammar, I don't know how to implement it; that's why I never made a PR.

#6665 (comment)
#6665 (comment)

@jordansissel
Contributor

If we focus on just lists, we could probably make the proposed syntax (thing => ${SOME_ENV_VAR}) accept comma-delimited string values as a list, but this assumes that a single value (of that list) won't need to include a comma.

For setting multiple hosts, for example, something like:

export MY_HOSTS=foo,bar,baz

and in Logstash:

# ...
hosts => ${MY_HOSTS}

But what if a value itself contains a comma, such as when passing a list to the date filter where one date pattern uses a European-style decimal separator (a comma), e.g. "YYYY-MM-DD HH:mm:ss,SSS" -- should users be expected to escape the comma? What feedback can we provide?

In the above date example, what's the expected interpretation:

# Match both US and EU-style decimal separators
export FORMATS="YYYY-MM-DD HH:mm:ss.SSS,YYYY-MM-DD HH:mm:ss,SSS"
filter {
  date {
    match => [ "time", $FORMATS ]
  }
}

@jordansissel
Contributor

@jsvd What do you think? I struggle with finding a syntax that I think will satisfy what I think most users are wanting without creating a bunch of frustrating edge cases or alien syntax.

@nick-george

nick-george commented Aug 22, 2018

IMHO, using environment variables is a little clunky because we can't use:

  • nil strings (a use case I have encountered)
  • arrays
  • hashes (say you want to add a hash structure to an event)
  • booleans (would be particularly handy for flow control)

Could there be an alternative to using environment variables? Such as:

  • setting variables in the logstash.yml (or another) file
  • using a key-value database (like memcached)
  • pulling items out of Elasticsearch
  • all of the above?

I'm guessing any such mechanism would require a change to the Logstash DSL. However, I feel this would give LS config authors the best flexibility in the long term.

Cheers,
Nick

@jordanhenderson

jordanhenderson commented Aug 22, 2018

I am running multiple instances of Logstash in a Docker swarm. Environment variables are an easy way for us to configure each instance of Logstash via our CI process, rather than requiring a persistent service. These variables are switched dynamically at deploy time, so they can be hardcoded in separate files in VCS and then switched between depending on the branch.

If we could put per-environment configuration within logstash.yml, this might work... however, I think adding a separate memcached/elastic instance for this would be overly complex. Perhaps something like 'logstash.dev.yml', 'logstash.prod.yml', etc. would do the job here, as we would be able to define per-environment config overrides in any format, as long as Logstash looks for/consumes these files (preferably by merging the base logstash.yml and logstash.env.yml).

@jordanhenderson

jordanhenderson commented Aug 22, 2018

I.e. the only environment variable needed would be a simple string (LOGSTASH_ENV=dev, LOGSTASH_ENV=prod, etc.) rather than a complex format. That way you get the benefit of versioning everything cleanly and avoid the formatting issues introduced by environment substitution.

@fbaligand
Contributor

Hi @nick-george ,

First, in the future, sources other than environment variables will probably be available for interpolation using the ${SOME_KEY} syntax. But the purpose of this particular issue is to support env var injection for arrays. Thus, as @jordanhenderson says, when Logstash runs inside a Docker instance, environment variables are the preferred way to inject environment-specific configuration.
Then, to me, "boolean" values are already handled. "nil" and "array" values could be specifically processed (which is not the case for now).

@fbaligand
Contributor

fbaligand commented Aug 22, 2018

Hi @jordansissel ,

To answer your question about values that contain a comma:

  • First, IMHO, this is a really rare case for array environment variable injection: a date format, for example, is usually not specific to an environment.
  • Second, I searched the Spring documentation (which supports comma-separated values for env var injection) and didn't find anything about how to handle values that contain a comma. It doesn't appear to be handled, even though the feature has existed and been used for years by lots of people. So it doesn't seem to be a big problem for the Spring user community.

That's just my humble opinion :)

@jordansissel
Contributor

@fbaligand if we scope this to solve providing a list of hosts from the environment instead of a generic list syntax, then I think it becomes easier to set user expectations.

Do you want to scope this to just a list of hosts? Or even to a specific setting on a specific plugin?

@fbaligand
Contributor

fbaligand commented Aug 23, 2018

I think the scope is every plugin option that is tied to the environment.
The examples that come to mind are: a list of host:port values (typically to call Elasticsearch), a list of CA files (to configure HTTPS on the beats input), and a list of path patterns (to configure the file input).

I also think it is important to handle the empty array: if an environment variable equals "" (empty string), it is converted to an empty array. This is particularly useful for a CA file list (in a dev environment).

And finally, as requested by @nick-george, I would find it nice to handle the special value "nil" and convert it to a nil value. That is not specific to arrays, by the way; it can be useful for advanced options that are not defined in a dev environment but are defined in prod.

If the implementation covers all these cases, I think we cover 99% of the needs.
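A small extension of the earlier sketch to illustrate those two special cases (again a hypothetical helper, not Logstash source):

# Hypothetical coercion covering the "nil" and empty-string special cases.
def coerce_env_value(raw_value, wants_array)
  return nil if raw_value == "nil"     # special value "nil" -> nil
  return raw_value unless wants_array  # non-array options pass through unchanged
  raw_value.empty? ? [] : raw_value.split(",").map(&:strip)  # "" -> [], "a,b" -> ["a", "b"]
end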

@marfedd

marfedd commented Jan 14, 2020

Hi!

Any chance this will be implemented? It's been 3 years already, and there is a pull request that has been open for almost a year.

yaauie added a commit to yaauie/logstash that referenced this issue Jun 24, 2020
Since whitespace is illegal in URIs, we can safely use it as a delimiter when
validating `list`-type `URI` params, enabling the expansion of an arbitrary
list of URIs from a single Environment- or Keystore-variable.

Resolves: elastic#8157
Resolves: elastic#6366
yaauie added a commit that referenced this issue Jun 26, 2020 (#12051)

* plugin config: support space-deliminated URIs on list-type params

Since whitespace is illegal in URIs, we can safely use it as a delimiter when
validating `list`-type `URI` params, enabling the expansion of an arbitrary
list of URIs from a single Environment- or Keystore-variable.

Resolves: #8157
Resolves: #6366

* Doc: Create section for cross-plugin functionality and add space delimiters

Co-authored-by: Karen Metts <[email protected]>
elasticsearch-bot pushed a commit that referenced this issue Jun 26, 2020
Since whitespace is illegal in URIs, we can safely use it as a delimiter when
validating `list`-type `URI` params, enabling the expansion of an arbitrary
list of URIs from a single Environment- or Keystore-variable.

Resolves: #8157
Resolves: #6366

Backport of #12051 to 7.x
@fbaligand
Contributor

fbaligand commented Jun 27, 2020

@yaauie
Thanks for this new feature, which has been awaited for years.
But I can't hide that I'm quite sad: the implementation only processes options with the "uri_list" type. A lot of plugins have hosts configured as an "array" type, so other environment arrays, such as file path arrays, are not processed.
So this PR only partially fulfills this issue.

@yaauie
Member

yaauie commented Jun 28, 2020

@yaauie
Thanks for this new feature, which has been awaited for years.
But I can't hide that I'm quite sad: the implementation only processes options with the "uri_list" type. A lot of plugins have hosts configured as an "array" type, so other environment arrays, such as file path arrays, are not processed.
So this PR only partially fulfills this issue.

My recent patch was limited to uri lists intentionally, because it is the only place where we can make a change to meaningfully address many use-cases that doesn't break in-the-wild configurations or require breaking changes in plugins, for three reasons:

  • plugin options with :list => true already expects an unbound list of entries; AND
  • the :uri validator rejects input with spaces, enabling us to use unescaped spaces as a delimiter; AND
  • introducing new APIs and behaviour in Logstash core is HARD, since plugins that consume these new APIs (including named validators) cannot run on versions of Logstash that didn't provide the feature or API.

But uris can be used to represent file paths, so this also gives us a path forward for plugins that wish to use one-to-many expansion of environment- and keystore-variables to populate file lists. Let's take a look at those plugins to see if they could meaningfully be moved over to use this functionality or in some similar way address those needs.
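To illustrate, a sketch of the intended usage (the hostnames are illustrative, and this assumes the option in question is a list-type URI param, as the elasticsearch output's hosts option is):

export ES_HOSTS="http://es1:9200 http://es2:9200 http://es3:9200"
bin/logstash -e 'output { elasticsearch { hosts => "${ES_HOSTS}" } }'
# Intended result: the single space-delimited variable expands to three URIs.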

@fbaligand
Contributor

fbaligand commented Jul 22, 2020

Hi @yaauie ,
Thanks for your answer.
Since you speak about moving plugins that contain array configuration over to benefit from this feature, I tried to generate a list of all options that are array-typed.
So here's the list:

'facility_labels': https://www.elastic.co/guide/en/logstash/current/plugins-filters-syslog_pri.html#plugins-filters-syslog_pri-facility_labels
'severity_labels': https://www.elastic.co/guide/en/logstash/current/plugins-filters-syslog_pri.html#plugins-filters-syslog_pri-severity_labels
'hosts': https://www.elastic.co/guide/en/logstash/current/plugins-filters-memcached.html#plugins-filters-memcached-hosts
'event_hubs': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-azure_event_hubs.html#plugins-inputs-azure_event_hubs-event_hubs
'event_hub_connections': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-azure_event_hubs.html#plugins-inputs-azure_event_hubs-event_hub_connections
'decrement': https://www.elastic.co/guide/en/logstash/current/plugins-outputs-statsd.html#plugins-outputs-statsd-decrement
'increment': https://www.elastic.co/guide/en/logstash/current/plugins-outputs-statsd.html#plugins-outputs-statsd-increment
'fields': https://www.elastic.co/guide/en/logstash/current/plugins-filters-de_dot.html#plugins-filters-de_dot-fields
'exclude_fields': https://www.elastic.co/guide/en/logstash/current/plugins-outputs-influxdb.html#plugins-outputs-influxdb-exclude_fields
'send_as_tags': https://www.elastic.co/guide/en/logstash/current/plugins-outputs-influxdb.html#plugins-outputs-influxdb-send_as_tags
'versions': https://www.elastic.co/guide/en/logstash/current/plugins-codecs-netflow.html#plugins-codecs-netflow-versions
'require_jars': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-jms.html#plugins-inputs-jms-require_jars
'skip_headers': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-jms.html#plugins-inputs-jms-skip_headers
'skip_properties': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-jms.html#plugins-inputs-jms-skip_properties
'prepared_statement_bind_values': https://www.elastic.co/guide/en/logstash/current/plugins-filters-jdbc_streaming.html#plugins-filters-jdbc_streaming-prepared_statement_bind_values
'tag_on_default_use': https://www.elastic.co/guide/en/logstash/current/plugins-filters-jdbc_streaming.html#plugins-filters-jdbc_streaming-tag_on_default_use
'tag_on_failure': https://www.elastic.co/guide/en/logstash/current/plugins-filters-jdbc_streaming.html#plugins-filters-jdbc_streaming-tag_on_failure
'tag_on_failure': https://www.elastic.co/guide/en/logstash/current/plugins-filters-jdbc_static.html#plugins-filters-jdbc_static-tag_on_failure
'tag_on_default_use': https://www.elastic.co/guide/en/logstash/current/plugins-filters-jdbc_static.html#plugins-filters-jdbc_static-tag_on_default_use
'loaders': https://www.elastic.co/guide/en/logstash/current/plugins-filters-jdbc_static.html#plugins-filters-jdbc_static-loaders
'local_db_objects': https://www.elastic.co/guide/en/logstash/current/plugins-filters-jdbc_static.html#plugins-filters-jdbc_static-local_db_objects
'local_lookups': https://www.elastic.co/guide/en/logstash/current/plugins-filters-jdbc_static.html#plugins-filters-jdbc_static-local_lookups
'metrics': https://www.elastic.co/guide/en/logstash/current/plugins-filters-metricize.html#plugins-filters-metricize-metrics
'prepared_statement_bind_values': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-jdbc.html#plugins-inputs-jdbc-prepared_statement_bind_values
'sfdc_fields': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-salesforce.html#plugins-inputs-salesforce-sfdc_fields
'tag_on_failure': https://www.elastic.co/guide/en/logstash/current/plugins-filters-dissect.html#plugins-filters-dissect-tag_on_failure
'lines': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-generator.html#plugins-inputs-generator-lines
'bucket': https://www.elastic.co/guide/en/logstash/current/plugins-outputs-riak.html#plugins-outputs-riak-bucket
'indices': https://www.elastic.co/guide/en/logstash/current/plugins-outputs-riak.html#plugins-outputs-riak-indices
'btags': https://www.elastic.co/guide/en/logstash/current/plugins-outputs-boundary.html#plugins-outputs-boundary-btags
'failure_type_logging_whitelist': https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html#plugins-outputs-elasticsearch-failure_type_logging_whitelist
'lines': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-java_generator.html#plugins-inputs-java_generator-lines
'channels': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-irc.html#plugins-inputs-irc-channels
'ssl_extra_chain_certs': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-graphite.html#plugins-inputs-graphite-ssl_extra_chain_certs
'dd_tags': https://www.elastic.co/guide/en/logstash/current/plugins-outputs-datadog_metrics.html#plugins-outputs-datadog_metrics-dd_tags
'match': https://www.elastic.co/guide/en/logstash/current/plugins-filters-date.html#plugins-filters-date-match
'tag_on_failure': https://www.elastic.co/guide/en/logstash/current/plugins-filters-date.html#plugins-filters-date-tag_on_failure
'exclude_tables': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-sqlite.html#plugins-inputs-sqlite-exclude_tables
'fields': https://www.elastic.co/guide/en/logstash/current/plugins-outputs-csv.html#plugins-outputs-csv-fields
'exclude_keys': https://www.elastic.co/guide/en/logstash/current/plugins-filters-kv.html#plugins-filters-kv-exclude_keys
'include_keys': https://www.elastic.co/guide/en/logstash/current/plugins-filters-kv.html#plugins-filters-kv-include_keys
'hosts': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-snmp.html#plugins-inputs-snmp-hosts
'fields': https://www.elastic.co/guide/en/logstash/current/plugins-filters-geoip.html#plugins-filters-geoip-fields
'tag_on_failure': https://www.elastic.co/guide/en/logstash/current/plugins-filters-geoip.html#plugins-filters-geoip-tag_on_failure
'rooms': https://www.elastic.co/guide/en/logstash/current/plugins-outputs-xmpp.html#plugins-outputs-xmpp-rooms
'users': https://www.elastic.co/guide/en/logstash/current/plugins-outputs-xmpp.html#plugins-outputs-xmpp-users
'follows': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-twitter.html#plugins-inputs-twitter-follows
'keywords': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-twitter.html#plugins-inputs-twitter-keywords
'languages': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-twitter.html#plugins-inputs-twitter-languages
'meter': https://www.elastic.co/guide/en/logstash/current/plugins-filters-metrics.html#plugins-filters-metrics-meter
'percentiles': https://www.elastic.co/guide/en/logstash/current/plugins-filters-metrics.html#plugins-filters-metrics-percentiles
'rates': https://www.elastic.co/guide/en/logstash/current/plugins-filters-metrics.html#plugins-filters-metrics-rates
'tags': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-azure_event_hubs.html#plugins-inputs-azure_event_hubs-tags
'clones': https://www.elastic.co/guide/en/logstash/current/plugins-filters-clone.html#plugins-filters-clone-clones
'multi_value': https://www.elastic.co/guide/en/logstash/current/plugins-outputs-zabbix.html#plugins-outputs-zabbix-multi_value
'overwrite': https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#plugins-filters-grok-overwrite
'tag_on_failure': https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#plugins-filters-grok-tag_on_failure
'ignore_metadata': https://www.elastic.co/guide/en/logstash/current/plugins-outputs-gelf.html#plugins-outputs-gelf-ignore_metadata
'level': https://www.elastic.co/guide/en/logstash/current/plugins-outputs-gelf.html#plugins-outputs-gelf-level
'hosts': https://www.elastic.co/guide/en/logstash/current/plugins-outputs-lumberjack.html#plugins-outputs-lumberjack-hosts
'include_path': https://www.elastic.co/guide/en/logstash/current/plugins-codecs-protobuf.html#plugins-codecs-protobuf-include_path
'transliterate': https://www.elastic.co/guide/en/logstash/current/plugins-filters-i18n.html#plugins-filters-i18n-transliterate
'exclude_metrics': https://www.elastic.co/guide/en/logstash/current/plugins-outputs-graphite.html#plugins-outputs-graphite-exclude_metrics
'include_metrics': https://www.elastic.co/guide/en/logstash/current/plugins-outputs-graphite.html#plugins-outputs-graphite-include_metrics
'hosts': https://www.elastic.co/guide/en/logstash/current/plugins-filters-elasticsearch.html#plugins-filters-elasticsearch-hosts
'tag_on_failure': https://www.elastic.co/guide/en/logstash/current/plugins-filters-elasticsearch.html#plugins-filters-elasticsearch-tag_on_failure
'arguments': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-rabbitmq.html#plugins-inputs-rabbitmq-arguments
'channels': https://www.elastic.co/guide/en/logstash/current/plugins-outputs-irc.html#plugins-outputs-irc-channels
'exclude_metrics': https://www.elastic.co/guide/en/logstash/current/plugins-codecs-graphite.html#plugins-codecs-graphite-exclude_metrics
'include_metrics': https://www.elastic.co/guide/en/logstash/current/plugins-codecs-graphite.html#plugins-codecs-graphite-include_metrics
'topics': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-kafka.html#plugins-inputs-kafka-topics
'docinfo_fields': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-elasticsearch.html#plugins-inputs-elasticsearch-docinfo_fields
'hosts': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-elasticsearch.html#plugins-inputs-elasticsearch-hosts
'filters': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-cloudwatch.html#plugins-inputs-cloudwatch-filters
'metrics': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-cloudwatch.html#plugins-inputs-cloudwatch-metrics
'statistics': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-cloudwatch.html#plugins-inputs-cloudwatch-statistics
'community': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-snmptrap.html#plugins-inputs-snmptrap-community
'ssl_certificate_authorities': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-tcp.html#plugins-inputs-tcp-ssl_certificate_authorities
'ssl_extra_chain_certs': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-tcp.html#plugins-inputs-tcp-ssl_extra_chain_certs
'gsub': https://www.elastic.co/guide/en/logstash/current/plugins-filters-mutate.html#plugins-filters-mutate-gsub
'lowercase': https://www.elastic.co/guide/en/logstash/current/plugins-filters-mutate.html#plugins-filters-mutate-lowercase
'strip': https://www.elastic.co/guide/en/logstash/current/plugins-filters-mutate.html#plugins-filters-mutate-strip
'uppercase': https://www.elastic.co/guide/en/logstash/current/plugins-filters-mutate.html#plugins-filters-mutate-uppercase
'capitalize': https://www.elastic.co/guide/en/logstash/current/plugins-filters-mutate.html#plugins-filters-mutate-capitalize
'channels': https://www.elastic.co/guide/en/logstash/current/plugins-outputs-juggernaut.html#plugins-outputs-juggernaut-channels
'tag_on_failure': https://www.elastic.co/guide/en/logstash/current/plugins-filters-json.html#plugins-filters-json-tag_on_failure
'dd_tags': https://www.elastic.co/guide/en/logstash/current/plugins-outputs-datadog.html#plugins-outputs-datadog-dd_tags
'ranges': https://www.elastic.co/guide/en/logstash/current/plugins-filters-range.html#plugins-filters-range-ranges
'facility_labels': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-syslog.html#plugins-inputs-syslog-facility_labels
'severity_labels': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-syslog.html#plugins-inputs-syslog-severity_labels
'add_tag': https://www.elastic.co/guide/en/logstash/current/plugins-filters-aggregate.html#plugins-filters-aggregate-add_tag
'remove_field': https://www.elastic.co/guide/en/logstash/current/plugins-filters-aggregate.html#plugins-filters-aggregate-remove_field
'remove_tag': https://www.elastic.co/guide/en/logstash/current/plugins-filters-aggregate.html#plugins-filters-aggregate-remove_tag
'fields': https://www.elastic.co/guide/en/logstash/current/plugins-codecs-cef.html#plugins-codecs-cef-fields
'cipher_suites': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-http.html#plugins-inputs-http-cipher_suites
'ssl_certificate_authorities': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-http.html#plugins-inputs-http-ssl_certificate_authorities
'coalesce': https://www.elastic.co/guide/en/logstash/current/plugins-filters-alter.html#plugins-filters-alter-coalesce
'condrewrite': https://www.elastic.co/guide/en/logstash/current/plugins-filters-alter.html#plugins-filters-alter-condrewrite
'condrewriteother': https://www.elastic.co/guide/en/logstash/current/plugins-filters-alter.html#plugins-filters-alter-condrewriteother
'patterns_dir': https://www.elastic.co/guide/en/logstash/current/plugins-codecs-multiline.html#plugins-codecs-multiline-patterns_dir
'rooms': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-xmpp.html#plugins-inputs-xmpp-rooms
'arguments': https://www.elastic.co/guide/en/logstash/current/plugins-outputs-rabbitmq.html#plugins-outputs-rabbitmq-arguments
'tag_on_failure': https://www.elastic.co/guide/en/logstash/current/plugins-filters-urldecode.html#plugins-filters-urldecode-tag_on_failure
'timeout_tags': https://www.elastic.co/guide/en/logstash/current/plugins-filters-aggregate.html#plugins-filters-aggregate-timeout_tags
'resolve': https://www.elastic.co/guide/en/logstash/current/plugins-filters-dns.html#plugins-filters-dns-resolve
'reverse': https://www.elastic.co/guide/en/logstash/current/plugins-filters-dns.html#plugins-filters-dns-reverse
'cipher_suites': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-beats.html#plugins-inputs-beats-cipher_suites
'ssl_certificate_authorities': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-beats.html#plugins-inputs-beats-ssl_certificate_authorities

What do you think?

@kyrias

kyrias commented Aug 26, 2020

We were also just hit by this when trying to pass multiple pipeline IDs to xpack.management.pipeline.id through an environment variable when running Logstash in Kubernetes.
