# Deploying a hosted cluster on ESI-provisioned nodes

This is a work in progress! This document describes how to deploy a bare metal hosted cluster using Red Hat's Hosted Control Plane service, with bare metal nodes and networking provided by the ESI environment at the MOC.

## Prerequisites

- You are comfortable working with both OpenShift and OpenStack.
- You have cluster admin privileges on the management cluster.
- You are able to create floating IPs both for the hypershift project and for the project that owns the nodes on which you'll deploy your target cluster.
- You are able to create DNS records on demand for the domain that you are using as your base domain.

## Assumptions

You have an OpenStack [`clouds.yaml`][clouds.yaml] file in the proper location, and it defines the following two clouds:

- `hypershift` -- this is the project that owns the nodes and networks allocated to the hypershift management cluster.
- `mycluster` -- this is the project that owns the nodes and networks on which you will be deploying a new cluster.

[clouds.yaml]: https://docs.openstack.org/python-openstackclient/pike/configuration/index.html#clouds-yaml
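
For reference, a minimal sketch of such a `clouds.yaml` might look like this (the auth URL and credentials below are placeholders; substitute the values for your ESI environment):

```
clouds:
  hypershift:
    auth:
      auth_url: https://keystone.example.org:5000   # placeholder
      project_name: hypershift
      username: myuser                              # placeholder
      password: secret                              # placeholder
      user_domain_name: Default
      project_domain_name: Default
    region_name: RegionOne
  mycluster:
    auth:
      auth_url: https://keystone.example.org:5000   # placeholder
      project_name: mycluster
      username: myuser                              # placeholder
      password: secret                              # placeholder
      user_domain_name: Default
      project_domain_name: Default
    region_name: RegionOne
```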

## Allocate DNS and floating IPs

You must have DNS records in place before deploying the cluster (the install process will block until the records exist).

- Allocate two floating IP addresses from ESI:

  - One will be for the API and must be allocated from the hypershift project (because it will map to worker nodes on the management cluster).
  - One will be for the Ingress service and must be allocated from the network on which you are deploying your target cluster's worker nodes.

- Create DNS entries that map to those addresses:

  - `api.<clustername>.<basedomain>` should map to the API VIP.
  - `api-int.<clustername>.<basedomain>` should map to the API VIP.
  - `*.apps.<clustername>.<basedomain>` should map to the Ingress VIP.

Note that at this point these addresses are not associated with any internal IP address; we can't make that association until after the cluster has been deployed.
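
For example, assuming the floating IP network is named `external` (the name used in the port forwarding steps later in this document), you can allocate both addresses now and record them in shell variables:

```
# API VIP: allocated from the hypershift project
api_vip=$(openstack --os-cloud hypershift floating ip create external -f value -c floating_ip_address)

# Ingress VIP: allocated from the project that owns the target cluster nodes
ingress_vip=$(openstack --os-cloud mycluster floating ip create external -f value -c floating_ip_address)
```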

## Gather required configuration

- You will need a pull secret, which you can download from <https://console.redhat.com/openshift/downloads>. Scroll to the "Tokens" section and download the pull secret.

- You will probably want to provide an SSH public key. This will be provisioned for the `core` user on your nodes, allowing you to log in for troubleshooting purposes (see the example below).
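
If you don't already have a key you'd like to use, you can generate a dedicated one; this is a minimal sketch, and the file name and comment are arbitrary:

```
# Writes the private key to "mycluster" and the public key to
# "mycluster.pub"; the public key file is what you pass to hcp below.
ssh-keygen -t ed25519 -f mycluster -C "core@mycluster" -N ""
```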

## Deploy the cluster

First, create the namespace for your cluster:

```
oc create ns clusters-mycluster
```

Now you can use the `hcp` CLI to create appropriate cluster manifests:

```
hcp create cluster agent \
  --name mycluster \
  --pull-secret pull-secret.txt \
  --agent-namespace hardware-inventory \
  --base-domain int.massopen.cloud \
  --api-server-address api.mycluster.int.massopen.cloud \
  --etcd-storage-class lvms-vg1 \
  --ssh-key larsks.pub \
  --namespace clusters \
  --control-plane-availability-policy HighlyAvailable \
  --release-image quay.io/openshift-release-dev/ocp-release:4.17.9-multi \
  --node-pool-replicas 3
```

This will create several resources in the `clusters` namespace:

- A HostedCluster resource
- A NodePool resource
- Several Secrets:
  - A pull secret (`<clustername>-pull-secret`)
  - Your public SSH key (`<clustername>-ssh-key`)
  - An etcd encryption key (`<clustername>-etcd-encryption-key`)

This will trigger the process of deploying control plane services for your cluster into the `clusters-<clustername>` namespace.
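
To confirm that these resources were created, something like the following should work (using the names from the list above):

```
oc -n clusters get hostedcluster,nodepool
oc -n clusters get secret | grep mycluster
```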

If you would like to see the manifests generated by the `hcp` command, add the options `--render --render-sensitive`; this will write the manifests to *stdout* instead of deploying them to the cluster.

After creating the HostedCluster resource, the hosted control plane will immediately start to deploy. You will find the associated services in the `clusters-<clustername>` namespace. You can track the progress of the deployment by watching the `status` field of the `HostedCluster` resource:

```
oc -n clusters get hostedcluster mycluster -o json | jq .status
```
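
If you'd rather block until the control plane reports itself available, you can use `oc wait` (the 30-minute timeout here is arbitrary):

```
oc -n clusters wait hostedcluster/mycluster --for=condition=Available --timeout=30m
```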

You will also see that an appropriate number of agents have been allocated from the agent pool:

```
$ oc -n hardware-inventory get agents
NAME                                   CLUSTER      APPROVED   ROLE          STAGE
07e21dd7-5b00-2565-ffae-485f1bf3aabc   mycluster    true       worker
2f25a998-0f1d-c202-4fdd-a2c300c9b7da   mycluster    true       worker
36c4906e-b96e-2de5-e4ec-534b45d61fa7                true       auto-assign
384b3b4f-e111-6881-019e-3668abb7cb0f                true       auto-assign
5180125a-614c-ac90-7adf-9222dc228704                true       auto-assign
5aed1b72-90c6-da99-0bee-e668ca41b2ff                true       auto-assign
8542e6ac-41b4-eca3-fedd-6af8edd4a41e   mycluster    true       worker
b698178a-7b31-15d2-5e20-b2381972cbdf                true       auto-assign
c6a86022-c6b9-c89d-b6b9-3dd5c4c1063e                true       auto-assign
d2c0f44b-993c-3e32-4a22-39af4be355b8                true       auto-assign
```

## Interacting with the control plane

The hosted control plane will be available within a matter of minutes, but in order to interact with it you'll need to complete a few additional steps.

### Set up port forwarding for control plane services

The API service for the new cluster is deployed as a [NodePort] service on the management cluster, as are several other services that need to be exposed in order for the cluster deployment to complete.
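
To see which services those are, you can list everything of type NodePort in the control plane namespace (this uses the same filter as the forwarding step below):

```
oc -n clusters-mycluster get service -o json |
  jq -r '.items[] | select(.spec.type == "NodePort") | .metadata.name'
```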

1. Acquire a floating IP address from the hypershift project if you don't already have a free one:

   ```
   api_vip=$(openstack --os-cloud hypershift floating ip create external -f value -c floating_ip_address)
   ```

1. Pick the address of one of the cluster nodes as a target for the port forwarding:

   ```
   internal_ip=$(oc get nodes -l node-role.kubernetes.io/worker= -o name |
     shuf |
     head -1 |
     xargs -INODE oc get NODE -o jsonpath='{.status.addresses[?(@.type == "InternalIP")].address}'
   )
   ```

1. Set up appropriate port forwarding:

   ```
   openstack --os-cloud hypershift esi port forwarding create "$internal_ip" "$api_vip" $(
     oc -n clusters-mycluster get service -o json |
       jq '.items[]|select(.spec.type == "NodePort")|.spec.ports[].nodePort' |
       sed 's/^/-p /'
   )
   ```

   The output of the above command will look something like this:

   ```
   +--------------------------------------+---------------+---------------+----------+--------------+---------------+
   | ID                                   | Internal Port | External Port | Protocol | Internal IP  | External IP   |
   +--------------------------------------+---------------+---------------+----------+--------------+---------------+
   | 2bc05619-d744-4e8a-b658-714da9cf1e89 | 31782         | 31782         | tcp      | 10.233.2.107 | 128.31.20.161 |
   | f386638e-eca2-465f-a05c-2076d6c1df5a | 30296         | 30296         | tcp      | 10.233.2.107 | 128.31.20.161 |
   | c06adaff-e1be-49f8-ab89-311b550182cc | 30894         | 30894         | tcp      | 10.233.2.107 | 128.31.20.161 |
   | b45f08fa-bbf3-4c1d-b6ec-73b586b4b0a3 | 32148         | 32148         | tcp      | 10.233.2.107 | 128.31.20.161 |
   +--------------------------------------+---------------+---------------+----------+--------------+---------------+
   ```

[nodeport]: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
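
As a sanity check, you can confirm that the API now answers on its forwarded NodePort. This sketch assumes the API service is named `kube-apiserver` (verify with `oc -n clusters-mycluster get service`); even an unauthenticated error response such as a 403 demonstrates that the forwarding works:

```
# Look up the API server's NodePort, then probe it through the floating IP
api_port=$(oc -n clusters-mycluster get service kube-apiserver \
  -o jsonpath='{.spec.ports[0].nodePort}')
curl -sk "https://${api_vip}:${api_port}/version"
```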

### Update DNS

Ensure that the DNS entry for your API address is correct. The names `api.<cluster_name>.<basedomain>` and `api-int.<cluster_name>.<basedomain>` must both point to the `$api_vip` address configured in the previous section.
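
You can verify the records with `dig`; using the example names from this document, both commands should print the `$api_vip` address:

```
dig +short api.mycluster.int.massopen.cloud
dig +short api-int.mycluster.int.massopen.cloud
```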

### Obtain the admin kubeconfig file

The admin `kubeconfig` file is available as a Secret in the `clusters-<cluster_name>` namespace:

```
oc -n clusters-mycluster extract secret/admin-kubeconfig --keys kubeconfig
```

This will extract the file `kubeconfig` into your current directory. You can use that to interact with the hosted control plane:

```
oc --kubeconfig kubeconfig get namespace
```

## Set up port forwarding for the ingress service

1. Acquire a floating IP address from the ESI project that owns the bare metal nodes if you don't already have a free one:

   ```
   ingress_vip=$(openstack --os-cloud mycluster floating ip create external -f value -c floating_ip_address)
   ```

1. Pick the address of one of the cluster nodes as a target for the port forwarding. Note that here we're using the `kubeconfig` file we downloaded in a previous step:

   ```
   internal_ip=$(oc --kubeconfig kubeconfig get nodes -l node-role.kubernetes.io/worker= -o name |
     shuf |
     head -1 |
     xargs -INODE oc --kubeconfig kubeconfig get NODE -o jsonpath='{.status.addresses[?(@.type == "InternalIP")].address}'
   )
   ```

1. Set up appropriate port forwarding (in the bare metal node ESI project):

   ```
   openstack --os-cloud mycluster esi port forwarding create "$internal_ip" "$ingress_vip" -p 80 -p 443
   ```
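
Once the `*.apps` record points at `$ingress_vip`, you can spot-check the forwarding through any hosted route. As a hypothetical example, once the console operator is running, the standard console route should respond:

```
curl -kI https://console-openshift-console.apps.mycluster.int.massopen.cloud
```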

## Wait for the cluster deployment to complete

When the target cluster is fully deployed, the output for the HostedCluster resource will look like this:

```
$ oc -n clusters get hostedcluster mycluster
NAME        VERSION   KUBECONFIG                   PROGRESS    AVAILABLE   PROGRESSING   MESSAGE
mycluster   4.17.9    mycluster-admin-kubeconfig   Completed   True        False         The hosted control plane is available
```
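
You can also confirm this from the target cluster itself, using the `kubeconfig` file extracted earlier:

```
oc --kubeconfig kubeconfig get clusterversion
oc --kubeconfig kubeconfig get clusteroperators
```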
