* [Yugabyte](https://docs.yugabyte.com/preview/deploy/kubernetes/single-zone/oss/helm-chart/) helm chart deployment (prototype, due to limitations in the chart only one cluster per namespace is possible)
## Quickstart
To test out the operator you do not need Azure or AWS, you just need a kubernetes cluster (you can for example create a local one with [k3d](https://k3d.io/)) and cluster-admin rights on it.
1. Run `helm repo add maibornwolff https://maibornwolff.github.io/hybrid-cloud-postgresql-operator/` to prepare the helm repository.
2. Run `helm install hybrid-cloud-postgresql-operator-crds maibornwolff/hybrid-cloud-postgresql-operator-crds` and `helm install hybrid-cloud-postgresql-operator maibornwolff/hybrid-cloud-postgresql-operator` to install the operator.

backends: # Configuration for the different backends
    dns_zone: # Settings for the private dns zone to use for vnet integration. If the private dns zone is in the same resource group as the server, the fields "name" and "resource_group" can be omitted and the name can be placed here, optional
      name: privatelink.postgres.database.azure.com # Name of the private dns zone, optional
      resource_group: foobar-rg # Resource group the private dns zone is part of, if omitted it defaults to the resource group of the server, optional
  aws: # This is a virtual backend that can be used to configure both awsrds and awsaurora. Fields defined here can also be defined directly in the other backends
    region: eu-central-1 # AWS region to use, required
    vpc_security_group_ids: [] # List of VPC security group IDs to assign to instances, required
    subnet_group: # The name of a DB subnet group to place instances in, required
    deletion_protection: false # Configure deletion protection for instances, will prevent instances being deleted by the operator, optional
    network:
      public_access: false # Allow public access from outside the VPC for the instance (security groups still need to be configured), optional
    admin_username: postgres # Username to use as admin user, optional
    name_pattern: "{namespace}-{name}" # Pattern to use for naming instances in AWS. Variables {namespace} and {name} can be used and will be replaced by metadata.namespace and metadata.name of the custom object
  awsrds:
    availability_zone: eu-central-1a # Availability zone to place DB instances in, required
    default_class: small # Name of the class to use as default if the user-provided one is invalid or not available, required
    classes: # List of instance classes the user can select from, required
      small: # Name of the class
        instance_type: db.m5.large # EC2 instance type to use, required
        storage_type: gp2 # Storage type for the DB instance, currently gp2, gp3 or io1, optional
        iops: 0 # Only needed when storage_type == gp3 or io1, number of IOPS to provision for the storage, optional
  awsaurora:
    availability_zones: [] # List of availability zones to place DB instances in, optional
    default_class: small # Name of the class to use as default if the user-provided one is invalid or not available, required
    classes: # List of instance classes the user can select from, required
      small: # Name of the class
        instance_type: db.serverless # EC2 instance type to use, use db.serverless for an Aurora v2 serverless cluster, required
        scaling_configuration: # Needs to be configured for serverless clusters only, optional
          min_capacity: 0.5 # Minimal number of capacity units, required
          max_capacity: 1 # Maximum number of capacity units, required
        storage_type: aurora # Storage type for the DB instance, currently aurora and aurora-iopt1 are allowed, optional
        iops: 0 # Only needed when storage_type == aurora-iopt1, number of IOPS to provision for the storage, optional
  helmbitnami:
    default_class: small # Name of the class to use as default if the user-provided one is invalid or not available, required if classes should be usable
    classes: # List of instance classes the user can select from, optional

Single configuration options can also be provided via environment variables: the complete path is concatenated using underscores, written in uppercase and prefixed with `HYBRIDCLOUD_`. As an example: `backends.azure.subscription_id` becomes `HYBRIDCLOUD_BACKENDS_AZURE_SUBSCRIPTION_ID`.
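Following the same rule, a value like `backends.aws.region` maps to `HYBRIDCLOUD_BACKENDS_AWS_REGION`. A sketch of setting it as a container environment variable on the operator pod (how you inject extra variables depends on how you deploy the operator):

```yaml
# Hypothetical env entry for the operator container; the variable name
# follows the HYBRIDCLOUD_ mapping described above.
env:
  - name: HYBRIDCLOUD_BACKENDS_AWS_REGION
    value: eu-central-1
```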
### Azure
The `azure` backend is a virtual backend that allows you to specify options that are the same for both `azurepostgres` and `azurepostgresflexible`. As such each option under `backends.azure` in the above configuration can be repeated in the `backends.azurepostgres` and `backends.azurepostgresflexible` sections. Note that currently the operator cannot handle using different subscriptions for the backends.
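As a sketch, using `subscription_id` (the only shared Azure option named in this document; other shared fields follow the same pattern):

```yaml
backends:
  azure:
    subscription_id: 00000000-0000-0000-0000-000000000000 # applies to both azurepostgres and azurepostgresflexible
  azurepostgresflexible:
    subscription_id: 00000000-0000-0000-0000-000000000000 # the same option may instead be repeated per backend, but both backends must use the same subscription
```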
To make it easier for the users to specify database sizes you can prepare a list of recommendations, called classes, the users can choose from. The fields of the classes are backend-dependent. Using this mechanism you can give the users classes like `small`, `production`, `production-ha` and size them appropriately for each backend. If the user specifies size using CPU and memory the backend will pick an appropriate match.
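As an illustration for the `awsrds` backend (whose class fields are shown in the configuration reference above; the instance types here are only examples, not recommendations):

```yaml
backends:
  awsrds:
    default_class: small
    classes:
      small:
        instance_type: db.t3.medium
      production:
        instance_type: db.m5.large
```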
For the operator to interact with Azure it needs credentials.

Unfortunately there is no built-in Azure role for the Database for PostgreSQL service. If you do not want to create a custom role you can assign the operator the Contributor role, or the Owner role if lock handling is required, but beware that this is a potential attack surface: someone compromising the operator can access your entire Azure account.
### AWS
The `awsrds` backend supports single-instance RDS PostgreSQL deployments. The `awsaurora` backend supports single-instance Aurora clusters (the operator currently only creates a primary writer instance and no read replicas). The `aws` backend is a virtual backend that allows you to specify options that are the same for both `awsrds` and `awsaurora`.

Both AWS backends have some prerequisites:

* An existing VPC
* Existing VPC security groups to control access to the RDS instances (the firewall options currently have no effect)
* An existing DB subnet group
* Some defined size classes (in the operator configuration), as specifying a size using CPU and memory is currently not implemented for AWS

For the operator to interact with AWS it needs credentials. For local testing it can pick up the credentials from a `~/.aws/credentials` file. For real deployments you need an IAM user. The IAM user needs full RDS permissions (the easiest way is to attach the `AmazonRDSFullAccess` policy to the user). Supply the credentials for the user using the environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` (if you deploy via the helm chart use the `envSecret` value). The operator can also pick up credentials using [IAM instance roles](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html) if they are configured.
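One way to provide these variables when deploying with the helm chart is a Kubernetes secret (a sketch; the secret name is made up here, and how it is wired to the operator pod depends on the chart's `envSecret` value):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: postgresql-operator-aws-credentials # hypothetical name
type: Opaque
stringData:
  AWS_ACCESS_KEY_ID: AKIAXXXXXXXXXXXXXXXX     # placeholder
  AWS_SECRET_ACCESS_KEY: replace-with-secret  # placeholder
```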
The AWS backends currently have some limitations:

* No support for managing firewalls / IP whitelists (must be done via preprovided VPC security groups)
* No support for HA or Multi-AZ clusters
* No support for custom parameter or option groups
* No support for storage autoscaling or configuring storage throughput (for gp3)
* No support for Enhanced Monitoring / Performance Insights
* No support for Aurora serverless v1

To get started with AWS you can use the following minimal operator config:
```yaml
handler_on_resume: false
backend: awsrds
allowed_backends:
  - awsrds
backends:
  awsrds:
    name_pattern: "{namespace}-{name}"
    region: eu-central-1
    availability_zone: "eu-central-1c"
    subnet_group: "<db-subnet-group>" # You must create it
    vpc_security_group_ids: ["<security-group-id>"] # You must create it
    network:
      public_access: true
    classes:
      small:
        instance_type: db.m5.large
    default_class: small
```
### Deployment
The operator can be deployed via helm chart:

spec:
  storageGB: 32 # Size of the storage for the database in GB, required
  storageAutoGrow: false # If the backend supports it automatic growing of the storage can be enabled, optional
  backup: # If the backend supports automatic backup it can be configured here, optional
    retentionDays: 7 # Number of days backups should be retained. Min and max are dependent on the backend (for azure 7-35 days, for AWS 0 disables backups), optional
    geoRedundant: false # If the backend supports it the backups can be stored geo-redundant in more than one region, optional
  extensions: [] # List of postgres extensions to install in the database. List is dependent on the backend (e.g. azure supports timescaledb). Currently only supported with azure backends, optional
  network: # Network related features, optional

A service/application that wants to access the database should depend on the credentials secret and use its values for the connection. That way it is independent of the actual backend. Provided keys in the secret are: `hostname`, `port`, `dbname`, `username`, `password`, `sslmode` and should be directly usable with any postgresql-compatible client library.
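For example, a Deployment could wire the secret into its environment like this (a sketch; the secret name `mydatabase-credentials` is hypothetical and depends on the database object you created):

```yaml
# Excerpt from a pod template in a Deployment
containers:
  - name: my-app
    image: my-app:1.0.0 # placeholder image
    env:
      - name: DB_HOSTNAME
        valueFrom:
          secretKeyRef:
            name: mydatabase-credentials
            key: hostname
      - name: DB_USERNAME
        valueFrom:
          secretKeyRef:
            name: mydatabase-credentials
            key: username
      - name: DB_PASSWORD
        valueFrom:
          secretKeyRef:
            name: mydatabase-credentials
            key: password
      # port, dbname and sslmode can be mapped the same way
```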
### Resetting passwords
The operator has support for resetting the password of a server or database (for example if the password has been compromised or your organization requires regular password changes). To initiate a reset just add a label `operator/action: reset-password` to the custom resource (for example with `kubectl label postgresqldatabase mydatabase operator/action=reset-password`). The operator will pick it up, generate a new password, set it for the server/database and then update the credentials secret. It will then remove the label to signal completion. Note that you are responsible for restarting any affected services that use the password.
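While a reset is pending, the label on the custom resource looks like this (fragment):

```yaml
metadata:
  labels:
    operator/action: reset-password # removed again by the operator once the new password is in place
```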
## Development
The operator is implemented in Python using the [Kopf](https://github.com/nolar/kopf) ([docs](https://kopf.readthedocs.io/en/stable/)) framework.

To run it locally follow these steps:

3. Setup a local kubernetes cluster, e.g. with k3d: `k3d cluster create`
4. Apply the CRDs in your local cluster: `kubectl apply -f helm/hybrid-cloud-postgresql-operator-crds/templates/`
5. If you want to deploy to the cloud:
   * For Azure: Either have the azure cli installed and configured with an active login or export the following environment variables: `AZURE_TENANT_ID`, `AZURE_CLIENT_ID`, `AZURE_CLIENT_SECRET`
   * For AWS: Either have a local `~/.aws/credentials` file or export the following environment variables: `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`
6. Adapt the `config.yaml` to suit your needs
7. Run `kopf run main.py -A`
8. In another window apply some objects to the cluster to trigger the operator (see the `examples` folder)