Commit e546259

Added required external components (#461)
* Added required external components
* added to nav and index
* some changes to the getting started guide
1 parent 5aed035 commit e546259

5 files changed, +23 -5 lines changed

docs/modules/druid/pages/getting_started/first_steps.adoc (+4 -2)
@@ -61,6 +61,8 @@ include::example$getting_started/getting_started.sh[tag=install-druid]
 
 This will create the actual druid instance.
 
+WARNING: This Druid instance uses Derby (`dbType: derby`) as a metadata store, which is an internal SQL database. It is not persisted and not suitable for production use! Consult the https://druid.apache.org/docs/latest/dependencies/metadata-storage.html#available-metadata-stores[Druid documentation] for a list of supported databases and setup instructions for production instances.
+
 == Verify that it works
 
 Next you will submit an ingestion job and then query the ingested data - either through the web interface or the API.
@@ -160,8 +162,8 @@ include::example$getting_started/expected_query_result.json[]
 
 image::getting_started/query.png[]
 
-Great! You've set up your first Druid cluster, ingested some data and queried it in the web interface!
+Great! You've set up your first Druid cluster, ingested some data and queried it in the web interface.
 
 == What's next
 
-Have a look at the xref:usage-guide/index.adoc[] page to find out more about the features of the Operator, such as S3-backed deep storage or OPA-based authorization.
+Have a look at the xref:usage-guide/index.adoc[] page to find out more about the features of the Operator, such as S3-backed deep storage (as opposed to the HDFS backend used in this guide) or OPA-based authorization.
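For context on the WARNING added above: the metadata store is set in the DruidCluster manifest that the getting started guide installs. Below is a minimal sketch of what that Derby configuration could look like, assuming a `metadataStorageDatabase` section with `dbType`, `connString`, `host` and `port` fields; treat the field names and values as illustrative rather than authoritative.

[source,yaml]
----
# Hypothetical excerpt of a DruidCluster resource (illustrative field names and values)
apiVersion: druid.stackable.tech/v1alpha1
kind: DruidCluster
metadata:
  name: simple-druid
spec:
  clusterConfig:
    metadataStorageDatabase:
      dbType: derby  # embedded Derby: not persisted, not for production (see WARNING above)
      connString: jdbc:derby://localhost:1527/var/druid/metadata.db;create=true
      host: localhost
      port: 1527
----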

docs/modules/druid/pages/index.adoc (+5 -1)
@@ -34,7 +34,11 @@ The Druid Operator has the following dependencies:
 * The xref:commons-operator:index.adoc[] provides common CRDs such as xref:concepts:s3.adoc[] CRDs.
 * The xref:secret-operator:index.adoc[] is required for things like S3 access credentials or LDAP integration.
 
-Have a look at the xref:getting_started/index.adoc[getting started guide] for an example of a minimal working setup. Druid works well with other Stackable supported products, such as xref:kafka:index.adoc[Apache Kafka] for data ingestion xref:trino:index.adoc[Trino] for data processing or xref:superset:index.adoc[Superset] for data visualization. xref:opa:index.adoc[OPA] can be connected to create authorization policies. Have a look at the xref:usage-guide/index.adoc[] for more configuration options and have a look at the <<demos, demos>> for complete data pipelines you can install with a single command.
+Have a look at the xref:getting_started/index.adoc[getting started guide] for an example of a minimal working setup.
+
+The getting started guide sets up a fully working Druid cluster, but the S3 deep storage backend as well as the metadata SQL database are xref:required-external-components.adoc[required external components] and need to be set up by you as prerequisites for a production setup.
+
+Druid works well with other Stackable supported products, such as xref:kafka:index.adoc[Apache Kafka] for data ingestion, xref:trino:index.adoc[Trino] for data processing, or xref:superset:index.adoc[Superset] for data visualization. xref:opa:index.adoc[OPA] can be connected to create authorization policies. Have a look at the xref:usage-guide/index.adoc[] for more configuration options and at the <<demos, demos>> for complete data pipelines you can install with a single command.
 
 == [[demos]]Demos
 
docs/modules/druid/pages/required-external-components.adoc (new file, +9)
@@ -0,0 +1,9 @@
+# Required external components
+
+Druid uses an SQL database to store metadata. Consult the https://druid.apache.org/docs/latest/dependencies/metadata-storage.html#available-metadata-stores[Druid documentation] for a list of supported databases and setup instructions.
+
+## Feature specific: S3 and cloud deep storage
+
+https://druid.apache.org/docs/latest/dependencies/deep-storage.html[Deep storage] is where segments are stored. Druid offers multiple storage backends. For local storage there are no prerequisites. HDFS deep storage can be set up with the xref:hdfs:index.adoc[Stackable Operator for Apache HDFS]. For S3 deep storage or the Google Cloud and Azure storage backends, you need to set up the storage yourself.
+
+Read the xref:usage-guide/deep-storage.adoc[deep storage usage guide] to learn more about configuring Druid deep storage.
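To make the metadata store requirement above concrete: a production setup would point Druid at an external SQL database instead of the embedded Derby. A hedged sketch, assuming PostgreSQL and the same `metadataStorageDatabase` shape shown earlier; the hostname, database name and port are placeholders.

[source,yaml]
----
# Hypothetical production metadata store settings (placeholder values)
metadataStorageDatabase:
  dbType: postgresql
  connString: jdbc:postgresql://postgres.default.svc.cluster.local:5432/druid
  host: postgres.default.svc.cluster.local
  port: 5432
----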

docs/modules/druid/pages/usage-guide/deep-storage.adoc (+4 -2)
@@ -1,8 +1,10 @@
 = Deep storage configuration
 
+https://druid.apache.org/docs/latest/dependencies/deep-storage.html[Deep Storage] is where Druid stores data segments. For a Kubernetes environment, either the HDFS or S3 backend is recommended.
+
 == [[hdfs]]HDFS
 
-Druid can use HDFS as a backend for deep storage:
+Druid can use HDFS as a backend for deep storage, which requires a running HDFS instance. You can use the xref:hdfs:index.adoc[Stackable Operator for Apache HDFS] to run HDFS. Configure the HDFS deep storage backend in your Druid cluster like this:
 
 [source,yaml]
 ----
@@ -80,4 +82,4 @@ include::partial$s3-note.adoc[]
 
 === S3 Credentials
 
-include::partial$s3-credentials.adoc[]
+include::partial$s3-credentials.adoc[]
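The YAML block referenced by the rewritten HDFS paragraph is truncated in this hunk. A minimal sketch of what the HDFS deep storage configuration could look like, assuming a `deepStorage.hdfs` section with `configMapName` and `directory` fields; this is illustrative, not the exact content of the file.

[source,yaml]
----
# Hypothetical HDFS deep storage excerpt (illustrative)
deepStorage:
  hdfs:
    configMapName: simple-hdfs  # discovery ConfigMap of the HDFS cluster
    directory: /druid           # base HDFS path for Druid segments
----

For the S3 backend covered by the `s3-note` and `s3-credentials` partials, the connection is typically described with the xref:concepts:s3.adoc[] CRDs; again a sketch, under the assumption of an inline bucket definition with a `secretClass` for credentials.

[source,yaml]
----
# Hypothetical S3 deep storage excerpt (illustrative)
deepStorage:
  s3:
    bucket:
      inline:
        bucketName: druid
        connection:
          inline:
            host: minio.default.svc.cluster.local
            port: 9000
            credentials:
              secretClass: druid-s3-credentials
----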

docs/modules/druid/partials/nav.adoc (+1)
@@ -12,5 +12,6 @@
 ** xref:druid:usage-guide/monitoring.adoc[]
 ** xref:druid:usage-guide/configuration-and-environment-overrides.adoc[]
 ** xref:druid:usage-guide/cluster_operations.adoc[]
+* xref:druid:required-external-components.adoc[]
 * xref:druid:configuration.adoc[]
 
