Materialize on Azure

Terraform module for deploying Materialize on Azure with all required infrastructure components.

This module sets up:

  • AKS cluster for Materialize workloads
  • Azure Database for PostgreSQL Flexible Server for metadata storage
  • Azure Blob Storage for persistence
  • Required networking and security configurations
  • Managed identities with proper RBAC permissions
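
As a minimal sketch, wiring the module into your own configuration might look like the following (the source ref and all values are illustrative; resource_group_name, database_config, and network_config are the required inputs):

    module "materialize" {
      # Pin to a specific tag when forking or referencing this repo (illustrative ref)
      source = "github.com/MaterializeInc/terraform-azurerm-materialize?ref=v0.3.0"

      resource_group_name = "materialize-rg"
      location            = "eastus2"

      database_config = {
        # password is the only field without a default
        password = var.postgres_password
      }

      network_config = {
        vnet_address_space   = "10.0.0.0/16"
        subnet_cidr          = "10.0.0.0/20"
        postgres_subnet_cidr = "10.0.1.0/24"
        service_cidr         = "10.1.0.0/16"
        docker_bridge_cidr   = "172.17.0.1/16"
      }
    }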

Warning

This module is intended for demonstration/evaluation purposes as well as for serving as a template when building your own production deployment of Materialize.

This module should not be directly relied upon for production deployments: future releases will contain breaking changes. Instead, to use it as a starting point for your own production deployment, either:

  • Fork this repo and pin to a specific version, or
  • Use the code as a reference when developing your own deployment.

The module has been tested with:

  • AKS version 1.28
  • PostgreSQL 15
  • Materialize Operator v0.1.0

Setup Notes

This module requires active Azure credentials in your environment, provided either through environment variables containing the required credentials or by logging in with the Azure CLI:

az login

You also need to set an Azure subscription ID, either in the subscription_id variable or via the ARM_SUBSCRIPTION_ID environment variable, e.g.:

export ARM_SUBSCRIPTION_ID="your-subscription-id"

Additionally, this module runs a Python script to generate Azure SAS tokens for the storage account. This requires Python 3.12 or greater.

Installing Dependencies

Before running the module, ensure you have the necessary Python dependencies installed:

  1. Install Python 3.12+ if you haven't already.

  2. Install the required dependencies using pip:

    pip install -r requirements.txt

Alternatively, you can install the dependencies manually:

    pip install azure-identity azure-storage-blob azure-keyvault-secrets azure-mgmt-storage

If you are using a virtual environment, you can set it up as follows:

python -m venv venv
source venv/bin/activate  # On macOS/Linux
venv\Scripts\activate  # On Windows
pip install -r requirements.txt

This will install the required Python packages in a virtual environment.

Requirements

  • terraform >= 1.0
  • azuread >= 2.45.0
  • azurerm >= 3.75.0
  • helm ~> 2.0
  • kubernetes ~> 2.0

Providers

No providers.

Modules

  • aks: ./modules/aks
  • certificates: ./modules/certificates
  • database: ./modules/database
  • load_balancers: ./modules/load_balancers
  • networking: ./modules/networking
  • operator: github.com/MaterializeInc/terraform-helm-materialize (v0.1.11)
  • storage: ./modules/storage

Resources

No resources.

Inputs

  • aks_config: AKS cluster configuration. Optional.
      Type:
        object({
          vm_size      = string
          disk_size_gb = number
          min_nodes    = number
          max_nodes    = number
        })
      Default:
        {
          disk_size_gb = 100
          max_nodes    = 5
          min_nodes    = 1
          vm_size      = "Standard_E8ps_v6"
        }
  • cert_manager_chart_version: Version of the cert-manager helm chart to install. Type: string. Default: "v1.17.1". Optional.
  • cert_manager_install_timeout: Timeout for installing the cert-manager helm chart, in seconds. Type: number. Default: 300. Optional.
  • cert_manager_namespace: The name of the namespace in which cert-manager is or will be installed. Type: string. Default: "cert-manager". Optional.
  • database_config: Azure Database for PostgreSQL configuration. Required.
      Type:
        object({
          sku_name         = optional(string, "GP_Standard_D2s_v3")
          postgres_version = optional(string, "15")
          password         = string
          username         = optional(string, "materialize")
          db_name          = optional(string, "materialize")
        })
  • helm_chart: Chart name from repository or local path to chart. For local charts, set the path to the chart directory. Type: string. Default: "materialize-operator". Optional.
  • helm_values: Additional Helm values to merge with defaults. Type: any. Default: {}. Optional.
  • install_cert_manager: Whether to install cert-manager. Type: bool. Default: true. Optional.
  • install_materialize_operator: Whether to install the Materialize operator. Type: bool. Default: true. Optional.
  • location: The location where resources will be created. Type: string. Default: "eastus2". Optional.
  • materialize_instances: Configuration for Materialize instances (see the example after this list). Optional. Default: [].
      Type:
        list(object({
          name                    = string
          namespace               = optional(string)
          database_name           = string
          environmentd_version    = optional(string)
          cpu_request             = optional(string, "1")
          memory_request          = optional(string, "1Gi")
          memory_limit            = optional(string, "1Gi")
          create_database         = optional(bool, true)
          create_load_balancer    = optional(bool, true)
          internal_load_balancer  = optional(bool, true)
          in_place_rollout        = optional(bool, false)
          request_rollout         = optional(string)
          force_rollout           = optional(string)
          balancer_memory_request = optional(string, "256Mi")
          balancer_memory_limit   = optional(string, "256Mi")
          balancer_cpu_request    = optional(string, "100m")
          license_key             = optional(string)
        }))
  • namespace: Namespace for all resources, usually the organization or project name. Type: string. Default: "materialize". Optional.
  • network_config: Network configuration for the AKS cluster. Required.
      Type:
        object({
          vnet_address_space   = string
          subnet_cidr          = string
          postgres_subnet_cidr = string
          service_cidr         = string
          docker_bridge_cidr   = string
        })
  • operator_namespace: Namespace for the Materialize operator. Type: string. Default: "materialize". Optional.
  • operator_version: Version of the Materialize operator to install. Type: string. Default: null. Optional.
  • orchestratord_version: Version of the Materialize orchestrator to install. Type: string. Default: null. Optional.
  • prefix: Prefix to be used for resource names. Type: string. Default: "materialize". Optional.
  • resource_group_name: The name of the resource group. Type: string. Required.
  • tags: Tags to apply to all resources. Type: map(string). Default: {}. Optional.
  • use_local_chart: Whether to use a local chart instead of one from a repository. Type: bool. Default: false. Optional.
  • use_self_signed_cluster_issuer: Whether to install and use a self-signed ClusterIssuer for TLS. To work around limitations in Terraform, this will be treated as false if no Materialize instances are defined. Type: bool. Default: true. Optional.
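
For example, a single Materialize instance could be declared as follows (values are illustrative; name and database_name are the only required fields):

    materialize_instances = [
      {
        name           = "analytics"
        database_name  = "analytics"
        cpu_request    = "2"
        memory_request = "4Gi"
        memory_limit   = "4Gi"
      }
    ]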

Outputs

  • aks_cluster: AKS cluster details
  • connection_strings: Formatted connection strings for Materialize
  • database: Azure Database for PostgreSQL details
  • identities: Managed Identity details
  • kube_config: The kube_config for the AKS cluster
  • kube_config_raw: The raw kube_config for the AKS cluster
  • load_balancer_details: Details of the Materialize instance load balancers
  • network: Network details
  • operator: Materialize operator details
  • resource_group_name: The name of the resource group
  • storage: Azure Storage Account details

Accessing the AKS cluster

The AKS cluster can be accessed using the kubectl command-line tool. To authenticate with the cluster, run the following command:

az aks get-credentials --resource-group $(terraform output -raw resource_group_name) --name $(terraform output -json aks_cluster | jq -r '.name')

This command retrieves the AKS cluster credentials and merges them into the ~/.kube/config file. You can now interact with the AKS cluster using kubectl.
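
To verify access, you can list the pods in the operator namespace (assuming the default operator_namespace of "materialize"):

kubectl get pods -n materialize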

Connecting to Materialize instances

By default, two LoadBalancer Services are created for each Materialize instance:

  1. One for balancerd, listening on:
    1. Port 6875 for SQL connections to the database.
    2. Port 6876 for HTTP(S) connections to the database.
  2. One for the web console, listening on:
    1. Port 8080 for HTTP(S) connections.

The IP addresses of these load balancers appear in the Terraform output as load_balancer_details.
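
For example, you can connect to the SQL port with psql, substituting the balancerd IP from load_balancer_details (the user and database names shown are illustrative defaults):

psql "postgres://materialize@<BALANCERD_IP>:6875/materialize"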

TLS support

TLS support is provided by using cert-manager and a self-signed ClusterIssuer.

More advanced TLS support, such as user-provided CAs or per-Materialize Issuers, is out of scope for this Terraform module. Please refer to the cert-manager documentation for detailed guidance on more advanced usage.

Upgrade Notes

v0.3.0

We now install cert-manager and configure a self-signed ClusterIssuer by default.

Due to a limitation in Terraform, it cannot plan Kubernetes resources whose CRDs do not yet exist. We have worked around this for new users by generating the certificate resources only when creating the Materialize instances that use them; those instances likewise cannot be created on the first run.

For existing users upgrading Materialize instances not previously configured for TLS:

  1. Leave install_cert_manager at its default of true.
  2. Set use_self_signed_cluster_issuer to false.
  3. Run terraform apply. This will install cert-manager and its CRDs.
  4. Set use_self_signed_cluster_issuer back to true (the default).
  5. Update the request_rollout field of the Materialize instance.
  6. Run terraform apply. This will generate the certificates and configure your Materialize instance to use them.
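
As a sketch, the module settings for the two applies might look like this (the instance names and the request_rollout value are illustrative; request_rollout is a string, and setting it to a new value is what requests the rollout):

    # First apply (steps 1-3): install cert-manager and its CRDs
    install_cert_manager           = true   # the default
    use_self_signed_cluster_issuer = false  # temporarily disabled

    # Second apply (steps 4-6): issue certificates and roll the instance
    use_self_signed_cluster_issuer = true   # back to the default
    materialize_instances = [
      {
        name            = "analytics"
        database_name   = "analytics"
        request_rollout = "12345678-1234-1234-1234-123456789012"  # illustrative new value
      }
    ]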