Process changes to docs from: repo: EnterpriseDB/cloud-native-postgres ref: refs/tags/v1.26.0-rc1 #6696

Open · wants to merge 1 commit into base: develop
@@ -3,6 +3,8 @@ title: 'Connecting from an application'
originalFilePath: 'src/applications.md'
---



Applications are expected to connect to the services created by EDB Postgres for Kubernetes
in the same Kubernetes cluster.
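
For illustration, a minimal sketch of an application `Deployment` reaching the
cluster through the operator-created read-write service; the cluster name
`cluster-example`, the `app` database/user, and the container image are
assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:latest  # hypothetical application image
          env:
            # <cluster>-rw is the read-write service created by the operator
            - name: PGHOST
              value: cluster-example-rw
            - name: PGDATABASE
              value: app
            - name: PGUSER
              value: app
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  name: cluster-example-app  # auto-generated application secret
                  key: password
```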

@@ -3,6 +3,8 @@ title: 'Architecture'
originalFilePath: 'src/architecture.md'
---



!!! Hint
    For a deeper understanding, we recommend reading our article on the CNCF
    blog titled ["Recommended Architectures for PostgreSQL in Kubernetes"](https://www.cncf.io/blog/2023/09/29/recommended-architectures-for-postgresql-in-kubernetes/),
@@ -414,9 +416,10 @@ This is typically triggered by:
declarative configuration, enabling you to automate these procedures as part of
your Infrastructure as Code (IaC) process, including GitOps.

```diff
-The designated primary in the above example is fed via WAL streaming
-(`primary_conninfo`), with fallback option for file-based WAL shipping through
-the `restore_command` and `barman-cloud-wal-restore`.
+In the example above, the designated primary receives WAL updates via streaming
+replication (`primary_conninfo`). As a fallback, it can retrieve WAL segments
+from an object store using file-based WAL shipping—for instance, with the
+Barman Cloud plugin through `restore_command` and `barman-cloud-wal-restore`.
```
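
A hedged sketch of such a topology (cluster names, bucket, and credentials are
assumptions; TLS setup for the `streaming_replica` user is omitted):

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-dc-b
spec:
  instances: 3
  storage:
    size: 1Gi
  replica:
    enabled: true
    source: cluster-dc-a
  bootstrap:
    recovery:
      source: cluster-dc-a
  externalClusters:
    - name: cluster-dc-a
      # Primary channel: WAL streaming (feeds primary_conninfo)
      connectionParameters:
        host: cluster-dc-a-rw
        user: streaming_replica
        sslmode: verify-full
      # Fallback channel: file-based WAL shipping from the object store
      # (restore_command / barman-cloud-wal-restore)
      barmanObjectStore:
        destinationPath: s3://backups/dc-a  # hypothetical bucket
        s3Credentials:
          accessKeyId:
            name: aws-creds
            key: ACCESS_KEY_ID
          secretAccessKey:
            name: aws-creds
            key: ACCESS_SECRET_KEY
```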

EDB Postgres for Kubernetes allows you to define topologies with multiple replica clusters.
You can also define replica clusters with a lower number of replicas, and then
17 changes: 11 additions & 6 deletions product_docs/docs/postgres_for_kubernetes/1/backup.mdx
@@ -3,6 +3,14 @@ title: 'Backup'
originalFilePath: 'src/backup.md'
---



!!! Warning
    With the deprecation of native Barman Cloud support in EDB Postgres for Kubernetes in
    favor of the Barman Cloud Plugin, this page—and the backup and recovery
    documentation—may undergo changes before the official release of version
    1.26.0.

PostgreSQL natively provides first-class backup and recovery capabilities based
on file system level (physical) copy. These have been successfully used for
more than 15 years in mission-critical production databases, helping
@@ -30,7 +38,9 @@ The WAL archive can only be stored on object stores at the moment.
On the other hand, EDB Postgres for Kubernetes supports two ways to store physical base backups:

```diff
 - on [object stores](backup_barmanobjectstore.md), as tarballs - optionally
-  compressed
+  compressed:
+  - Using the Barman Cloud plugin
+  - Natively via `.spec.backup.barmanObjectStore` (*deprecated, to be removed
+    in EDB Postgres for Kubernetes 1.28*)
 - on [Kubernetes Volume Snapshots](backup_volumesnapshot.md), if supported by
   the underlying storage class
```
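
For orientation, a sketch of where each method is declared on a `Cluster`
resource (all values are assumptions; a `Backup` or `ScheduledBackup` then
selects one via its `method` field):

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  storage:
    size: 1Gi
  backup:
    # Object store target (deprecated in-tree stanza; the Barman Cloud
    # plugin is the recommended replacement)
    barmanObjectStore:
      destinationPath: s3://my-bucket/backups  # hypothetical bucket
    # Volume snapshot target, if the storage class supports it
    volumeSnapshot:
      className: csi-snapclass  # hypothetical VolumeSnapshotClass
```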

@@ -44,11 +54,6 @@ On the other hand, EDB Postgres for Kubernetes supports two ways to store physic
the supported [Container Storage Interface (CSI) drivers](https://kubernetes-csi.github.io/docs/drivers.html)
that provide snapshotting capabilities.

```diff
-!!! Info
-    Starting with version 1.25, EDB Postgres for Kubernetes includes experimental support for
-    backup and recovery using plugins, such as the
-    [Barman Cloud plugin](https://github.com/cloudnative-pg/plugin-barman-cloud).
```

## WAL archive

The WAL archive in PostgreSQL is at the heart of **continuous backup**, and it
@@ -3,6 +3,14 @@ title: 'Backup on object stores'
originalFilePath: 'src/backup_barmanobjectstore.md'
---



!!! Warning
    With the deprecation of native Barman Cloud support in EDB Postgres for Kubernetes in
    favor of the Barman Cloud Plugin, this page—and the backup and recovery
    documentation—may undergo changes before the official release of version
    1.26.0.

EDB Postgres for Kubernetes natively supports **online/hot backup** of PostgreSQL
clusters through continuous physical backup and WAL archiving on an object
store. This means that the database is always up (no downtime required)
@@ -96,7 +104,10 @@ algorithms via `barman-cloud-backup` (for backups) and

- bzip2
- gzip
- lz4
- snappy
- xz
- zstd
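
A sketch of how an algorithm might be selected (bucket and algorithm choices
are assumptions):

```yaml
# Fragment of .spec.backup on a Cluster resource
barmanObjectStore:
  destinationPath: s3://my-bucket/backups  # hypothetical bucket
  data:
    compression: zstd  # applied by barman-cloud-backup to base backups
  wal:
    compression: gzip  # applied by barman-cloud-wal-archive to WAL segments
```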

The compression settings for backups and WALs are independent. See the
[DataBackupConfiguration](https://pkg.go.dev/github.com/cloudnative-pg/barman-cloud/pkg/api#DataBackupConfiguration) and
@@ -3,4 +3,6 @@ title: 'Backup and Recovery'
originalFilePath: 'src/backup_recovery.md'
---



[Backup](backup.md) and [recovery](recovery.md) are in two separate sections.
@@ -3,6 +3,8 @@ title: 'Backup on volume snapshots'
originalFilePath: 'src/backup_volumesnapshot.md'
---



!!! Warning
    As noted in the [backup document](backup.md), a cold snapshot explicitly
    set to target the primary will result in the primary being fenced for
@@ -60,6 +62,8 @@ volumes of a given storage class, and managed as `VolumeSnapshot` and

## How to configure Volume Snapshot backups



EDB Postgres for Kubernetes allows you to configure a given Postgres cluster for Volume
Snapshot backups through the `backup.volumeSnapshot` stanza.
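
A minimal sketch, assuming a pre-existing `VolumeSnapshotClass` named
`csi-snapclass`:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  storage:
    size: 10Gi
  backup:
    volumeSnapshot:
      className: csi-snapclass
```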

@@ -245,7 +249,97 @@ referenced in the `.spec.backup.volumeSnapshot.className` option.
Please refer to the [Kubernetes documentation on Volume Snapshot Classes](https://kubernetes.io/docs/concepts/storage/volume-snapshot-classes/)
for details on this standard behavior.

## Backup Volume Snapshot Deadlines

EDB Postgres for Kubernetes supports backups using the volume snapshot method. In some
environments, volume snapshots may encounter temporary issues that can be
retried.

The `backup.k8s.enterprisedb.io/volumeSnapshotDeadline` annotation defines how long
EDB Postgres for Kubernetes should continue retrying recoverable errors before marking the
backup as failed.

You can add the `backup.k8s.enterprisedb.io/volumeSnapshotDeadline` annotation to both
`Backup` and `ScheduledBackup` resources. For `ScheduledBackup` resources, this
annotation is automatically inherited by any `Backup` resources created from
the schedule.

If not specified, the default retry deadline is **10 minutes**.

### Error Handling

When a retryable error occurs during a volume snapshot operation:

1. EDB Postgres for Kubernetes records the time of the first error.
2. The system retries the operation every **10 seconds**.
3. If the error persists beyond the specified deadline (or the default 10
minutes), the backup is marked as **failed**.

### Retryable Errors

EDB Postgres for Kubernetes treats the following types of errors as retryable:

- **Server timeout errors** (HTTP 408, 429, 500, 502, 503, 504)
- **Conflicts** (optimistic locking errors)
- **Internal errors**
- **Context deadline exceeded errors**
- **Timeout errors from the CSI snapshot controller**

### Examples

You can add the annotation to a `ScheduledBackup` resource as follows:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: ScheduledBackup
metadata:
  name: daily-backup-schedule
  annotations:
    backup.k8s.enterprisedb.io/volumeSnapshotDeadline: "20"
spec:
  schedule: "0 0 * * *"
  backupOwnerReference: self
  method: volumeSnapshot
  # other configuration...
```

When you define a `ScheduledBackup` with the annotation, any `Backup` resources
created from this schedule automatically inherit the specified timeout value.

In the following example, all backups created from the schedule will have a
30-minute timeout for retrying recoverable snapshot errors.

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: ScheduledBackup
metadata:
  name: weekly-backup
  annotations:
    backup.k8s.enterprisedb.io/volumeSnapshotDeadline: "30"
spec:
  schedule: "0 0 * * 0" # Weekly backup on Sunday
  method: volumeSnapshot
  cluster:
    name: my-postgresql-cluster
```

Alternatively, you can add the annotation directly to a `Backup` resource:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Backup
metadata:
  name: my-backup
  annotations:
    backup.k8s.enterprisedb.io/volumeSnapshotDeadline: "15"
spec:
  method: volumeSnapshot
  # other backup configuration...
```

## Example of Volume Snapshot Backup



The following example shows how to configure volume snapshot base backups on an
EKS cluster on AWS using the `ebs-sc` storage class and the `csi-aws-vsc`
@@ -3,6 +3,8 @@ title: 'Before You Start'
originalFilePath: 'src/before_you_start.md'
---



Before we get started, it is essential to go over some terminology that is
specific to Kubernetes and PostgreSQL.

@@ -3,6 +3,8 @@ title: 'Benchmarking'
originalFilePath: 'src/benchmarking.md'
---



The CNP kubectl plugin provides an easy way to benchmark a PostgreSQL deployment in Kubernetes using EDB Postgres for Kubernetes.

Benchmarking is focused on two aspects:
@@ -177,7 +179,7 @@ It will:
3. Create a fio deployment composed of a single Pod, which will run fio on
   the PVC, create graphs after completing the benchmark and start serving the
   generated files with a webserver. We use the
   [`fio-tools`](https://github.com/wallnerryan/fio-tools) image for that.

The Pod created by the deployment will be ready when it starts serving the
results. You can forward the port of the pod created by the deployment
2 changes: 2 additions & 0 deletions product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx
@@ -3,6 +3,8 @@ title: 'Bootstrap'
originalFilePath: 'src/bootstrap.md'
---



!!! Note
    When referring to "PostgreSQL cluster" in this section, the same
    concepts apply to both PostgreSQL and EDB Postgres Advanced Server, unless
@@ -3,6 +3,8 @@ title: 'Certificates'
originalFilePath: 'src/certificates.md'
---



EDB Postgres for Kubernetes was designed to natively support TLS certificates.
To set up a cluster, the operator requires:

@@ -53,6 +55,11 @@ expiration (within a 90-day validity period).
certificates not controlled by EDB Postgres for Kubernetes must be re-issued following the
renewal process.

When generating certificates, the operator assumes that the Kubernetes
cluster's DNS zone is set to `cluster.local` by default. This behavior can be
customized by setting the `KUBERNETES_CLUSTER_DOMAIN` environment variable. A
convenient alternative is to use the [operator's configuration capability](operator_conf.md).
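
As an illustrative sketch only (the operator Deployment and container names
vary by installation and are assumptions here), the variable can be set with a
strategic-merge patch:

```yaml
# Strategic-merge patch for the operator Deployment
spec:
  template:
    spec:
      containers:
        - name: manager  # assumed operator container name
          env:
            - name: KUBERNETES_CLUSTER_DOMAIN
              value: "company.local"  # hypothetical DNS zone
```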

### Server certificates

#### Server CA secret
@@ -3,6 +3,8 @@ title: 'Instance pod configuration'
originalFilePath: 'src/cluster_conf.md'
---



## Projected volumes

EDB Postgres for Kubernetes supports mounting custom files inside the Postgres pods through
@@ -3,6 +3,8 @@ title: 'Connection pooling'
originalFilePath: 'src/connection_pooling.md'
---



EDB Postgres for Kubernetes provides native support for connection pooling with
[PgBouncer](https://www.pgbouncer.org/), one of the most popular open source
connection poolers for PostgreSQL, through the `Pooler` custom resource definition (CRD).
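
A minimal `Pooler` sketch (cluster name and PgBouncer parameters are
assumptions):

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Pooler
metadata:
  name: pooler-example-rw
spec:
  cluster:
    name: cluster-example
  instances: 3
  type: rw  # pools connections against the read-write service
  pgbouncer:
    poolMode: session
    parameters:
      max_client_conn: "1000"
      default_pool_size: "10"
```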
47 changes: 23 additions & 24 deletions product_docs/docs/postgres_for_kubernetes/1/container_images.mdx
@@ -3,43 +3,42 @@ title: 'Container Image Requirements'
originalFilePath: 'src/container_images.md'
---

```diff
-The EDB Postgres for Kubernetes operator for Kubernetes is designed to
-work with any compatible container image of PostgreSQL that complies
-with the following requirements:
+The EDB Postgres for Kubernetes operator for Kubernetes is designed to work with any
+compatible PostgreSQL container image that meets the following requirements:

-- PostgreSQL executables that must be in the path:
+- PostgreSQL executables must be available in the system path:
   - `initdb`
   - `postgres`
   - `pg_ctl`
   - `pg_controldata`
   - `pg_basebackup`
-- Barman Cloud executables that must be in the path:
-  - `barman-cloud-backup`
-  - `barman-cloud-backup-delete`
-  - `barman-cloud-backup-list`
-  - `barman-cloud-check-wal-archive`
-  - `barman-cloud-restore`
-  - `barman-cloud-wal-archive`
-  - `barman-cloud-wal-restore`
-- PGAudit extension installed (optional - only if PGAudit is required
-  in the deployed clusters)
-- Appropriate locale settings
-- `du` (optional, for `kubectl cnp status`)
+- Proper locale settings configured
+
+Optional Components:
+
+- [PGAudit](https://www.pgaudit.org/) extension (only required if audit logging
+  is needed)
+- `du` (used for `kubectl cnp status`)

 !!! Important
-    Only [PostgreSQL versions supported by the PGDG](https://postgresql.org/) are allowed.
+    Only [PostgreSQL versions officially supported by PGDG](https://postgresql.org/) are allowed.

+!!! Info
+    Barman Cloud executables are no longer required in EDB Postgres for Kubernetes. The
+    recommended approach is to use the dedicated [Barman Cloud Plugin](https://github.com/cloudnative-pg/plugin-barman-cloud).
+
-No entry point and/or command is required in the image definition, as
-EDB Postgres for Kubernetes overrides it with its instance manager.
+No entry point or command is required in the image definition. EDB Postgres for Kubernetes
+automatically overrides it with its instance manager.

 !!! Warning
-    Application Container Images will be used by EDB Postgres for Kubernetes
-    in a **Primary with multiple/optional Hot Standby Servers Architecture**
-    only.
+    EDB Postgres for Kubernetes only supports **Primary with multiple/optional Hot Standby
+    Servers architecture** for PostgreSQL application container images.

-EDB provides and supports
+EDB provides and maintains
 [public PostgreSQL container images](https://github.com/enterprisedb/docker-postgres)
-for EDB Postgres for Kubernetes, and publishes them on
+that are fully compatible with EDB Postgres for Kubernetes. These images are published on
 [quay.io](https://quay.io/enterprisedb/postgresql).
```
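
A compliant image is then referenced through `imageName` in the `Cluster`
spec; a minimal sketch, with the tag chosen purely for illustration:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  storage:
    size: 1Gi
  imageName: quay.io/enterprisedb/postgresql:16  # hypothetical tag
```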

## Image Tag Requirements
2 changes: 2 additions & 0 deletions product_docs/docs/postgres_for_kubernetes/1/controller.mdx
@@ -3,6 +3,8 @@ title: 'Custom Pod Controller'
originalFilePath: 'src/controller.md'
---



Kubernetes uses the
[Controller pattern](https://kubernetes.io/docs/concepts/architecture/controller/)
to align the current cluster state with the desired one.
@@ -3,6 +3,8 @@ title: 'Importing Postgres databases'
originalFilePath: 'src/database_import.md'
---



This section describes how to import one or more existing PostgreSQL
databases inside a brand new EDB Postgres for Kubernetes cluster.
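
A hedged sketch of the `microservice` import type (names, host, and
credentials are assumptions):

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-imported
spec:
  instances: 3
  storage:
    size: 10Gi
  bootstrap:
    initdb:
      import:
        type: microservice
        databases:
          - app
        source:
          externalCluster: cluster-source
  externalClusters:
    - name: cluster-source
      connectionParameters:
        host: source-db.example.com  # hypothetical source host
        user: postgres
        dbname: app
      password:
        name: source-db-credentials  # hypothetical secret
        key: password
```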
