> **Note:** This is a work in progress! This document describes how to deploy a bare metal hosted cluster using Red Hat's Hosted Control Plane service, with bare metal nodes and networking provided by the ESI environment at the MOC.
# Deploying a hosted cluster on ESI-provisioned nodes

## Prerequisites

- You are comfortable working with both OpenShift and OpenStack.
- You are comfortable with shell scripts.
- You have cluster admin privileges on the management cluster.
- You are able to create floating IPs both for the hypershift project and for the project that owns the nodes on which you'll deploy your target cluster.
- You are able to create DNS records on demand for the domain that you are using as your base domain.

## Assumptions

You have an OpenStack [`clouds.yaml`][clouds.yaml] file in the proper location, and it defines the following two clouds:

- `hypershift` -- the project that owns the nodes and networks allocated to the hypershift management cluster.
- `mycluster` -- the project that owns the nodes and networks on which you will be deploying a new cluster.

[clouds.yaml]: https://docs.openstack.org/python-openstackclient/pike/configuration/index.html#clouds-yaml

## Allocate DNS and floating IPs

You must have DNS records in place before deploying the cluster (the install process will block until the records exist).

- Allocate two floating IP addresses from ESI:

  - One will be for the API and must be allocated from the hypershift project (because it will map to worker nodes on the management cluster).
  - One will be for the Ingress service and must be allocated from the network on which you are deploying your target cluster's worker nodes.

- Create DNS entries that map to those addresses:

  - `api.<clustername>.<basedomain>` should map to the API VIP.
  - `api-int.<clustername>.<basedomain>` should map to the API VIP.
  - `*.apps.<clustername>.<basedomain>` should map to the Ingress VIP.

Note that at this point these addresses are not associated with any internal IP address; we can't make that association until after the cluster has been deployed.

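Concretely, with the example cluster name and base domain used later in this document, the record names expand like this (a sketch; substitute your own values):

```shell
# Example values only; substitute your own cluster name and base domain.
clustername=mycluster
basedomain=int.massopen.cloud

# The three records the installer expects: api and api-int share the API VIP,
# while the wildcard record points at the Ingress VIP.
echo "api.$clustername.$basedomain"
echo "api-int.$clustername.$basedomain"
echo "*.apps.$clustername.$basedomain"
```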
## Gather required configuration

- You will need a pull secret, which you can download from <https://console.redhat.com/openshift/downloads>: scroll to the "Tokens" section and download the pull secret.

- You will probably want to provide an SSH public key. This will be provisioned for the `core` user on your nodes, allowing you to log in for troubleshooting purposes.

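If you don't already have a key you want to use, a dedicated keypair keeps cluster access separate from your personal keys (a sketch; the file name and comment are arbitrary):

```shell
# Generate a dedicated ed25519 keypair with no passphrase; the file name is arbitrary.
ssh-keygen -t ed25519 -N '' -f ./mycluster-ssh -C "mycluster hosted cluster access"

# The *public* half (mycluster-ssh.pub) is what you pass to `hcp create cluster`
# via the --ssh-key option; keep the private half somewhere safe.
cat ./mycluster-ssh.pub
```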
## Deploy the cluster

First, create the namespace for your cluster:

```
oc create ns clusters-mycluster
```

Now you can use the `hcp` CLI to create the appropriate cluster manifests:

```
hcp create cluster agent \
  --name mycluster \
  --pull-secret pull-secret.txt \
  --agent-namespace hardware-inventory \
  --base-domain int.massopen.cloud \
  --api-server-address api.mycluster.int.massopen.cloud \
  --etcd-storage-class lvms-vg1 \
  --ssh-key larsks.pub \
  --namespace clusters \
  --control-plane-availability-policy HighlyAvailable \
  --release-image quay.io/openshift-release-dev/ocp-release:4.17.9-multi \
  --node-pool-replicas 3
```

This will create several resources in the `clusters` namespace:

- A HostedCluster resource
- A NodePool resource
- Several Secrets:
  - A pull secret (`<clustername>-pull-secret`)
  - Your public SSH key (`<clustername>-ssh-key`)
  - An etcd encryption key (`<clustername>-etcd-encryption-key`)

This will trigger the process of deploying the control plane services for your cluster into the `clusters-<clustername>` namespace.

If you would like to see the manifests generated by the `hcp` command, add the options `--render --render-sensitive`; this will write the manifests to *stdout* instead of deploying them to the cluster.

After creating the HostedCluster resource, the hosted control plane will immediately start to deploy. You will find the associated services in the `clusters-<clustername>` namespace. You can track the progress of the deployment by watching the `status` field of the HostedCluster resource:

```
oc -n clusters get hostedcluster mycluster -o json | jq .status
```

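The `status.conditions` array is usually the most informative part of that output. As a sketch (run here against an abbreviated sample, since the full status object is large), the same `jq` can be narrowed to one line per condition:

```shell
# Abbreviated sample of a HostedCluster .status; the real object carries many
# more conditions and fields.
status='{"conditions":[
  {"type":"Available","status":"True","message":"The hosted control plane is available"},
  {"type":"Progressing","status":"False","message":""}
]}'

# One line per condition; pipe the real `oc ... -o json | jq .status` output
# through the same filter.
echo "$status" | jq -r '.conditions[] | "\(.type)=\(.status) \(.message)"'
```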
You will also see that an appropriate number of agents have been allocated from the agent pool:

```
$ oc -n hardware-inventory get agents
NAME                                   CLUSTER     APPROVED   ROLE          STAGE
07e21dd7-5b00-2565-ffae-485f1bf3aabc   mycluster   true       worker
2f25a998-0f1d-c202-4fdd-a2c300c9b7da   mycluster   true       worker
36c4906e-b96e-2de5-e4ec-534b45d61fa7               true       auto-assign
384b3b4f-e111-6881-019e-3668abb7cb0f               true       auto-assign
5180125a-614c-ac90-7adf-9222dc228704               true       auto-assign
5aed1b72-90c6-da99-0bee-e668ca41b2ff               true       auto-assign
8542e6ac-41b4-eca3-fedd-6af8edd4a41e   mycluster   true       worker
b698178a-7b31-15d2-5e20-b2381972cbdf               true       auto-assign
c6a86022-c6b9-c89d-b6b9-3dd5c4c1063e               true       auto-assign
d2c0f44b-993c-3e32-4a22-39af4be355b8               true       auto-assign
```

## Interacting with the control plane

The hosted control plane will be available within a matter of minutes, but in order to interact with it you'll need to complete a few additional steps.

### Set up port forwarding for control plane services

The API service for the new cluster is deployed as a [NodePort] service on the management cluster, as are several other services that need to be exposed in order for the cluster deploy to complete.

1. Acquire a floating IP address from the hypershift project if you don't already have a free one:

   ```
   api_vip=$(openstack --os-cloud hypershift floating ip create external -f value -c floating_ip_address)
   ```

1. Pick the address of one of the cluster nodes as a target for the port forwarding:

   ```
   internal_ip=$(oc get nodes -l node-role.kubernetes.io/worker= -o name |
     shuf |
     head -1 |
     xargs -INODE oc get NODE -o jsonpath='{.status.addresses[?(@.type == "InternalIP")].address}'
   )
   ```

1. Set up appropriate port forwarding:

   ```
   openstack --os-cloud hypershift esi port forwarding create "$internal_ip" "$api_vip" $(
     oc -n clusters-mycluster get service -o json |
       jq '.items[]|select(.spec.type == "NodePort")|.spec.ports[].nodePort' |
       sed 's/^/-p /'
   )
   ```

The output of the above command will look something like this:

```
+--------------------------------------+---------------+---------------+----------+--------------+---------------+
| ID                                   | Internal Port | External Port | Protocol | Internal IP  | External IP   |
+--------------------------------------+---------------+---------------+----------+--------------+---------------+
| 2bc05619-d744-4e8a-b658-714da9cf1e89 | 31782         | 31782         | tcp      | 10.233.2.107 | 128.31.20.161 |
| f386638e-eca2-465f-a05c-2076d6c1df5a | 30296         | 30296         | tcp      | 10.233.2.107 | 128.31.20.161 |
| c06adaff-e1be-49f8-ab89-311b550182cc | 30894         | 30894         | tcp      | 10.233.2.107 | 128.31.20.161 |
| b45f08fa-bbf3-4c1d-b6ec-73b586b4b0a3 | 32148         | 32148         | tcp      | 10.233.2.107 | 128.31.20.161 |
+--------------------------------------+---------------+---------------+----------+--------------+---------------+
```

[nodeport]: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport

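The command substitution in step 3 does the real work: it turns every NodePort service in the control plane namespace into a list of `-p <port>` arguments. Running the same filter against a trimmed sample of `oc get service -o json` output (a sketch; only the fields the filter reads are shown) makes the transformation visible:

```shell
# Trimmed sample of `oc -n clusters-mycluster get service -o json` output;
# only the fields the jq filter reads are included.
services='{"items":[
  {"spec":{"type":"NodePort","ports":[{"nodePort":31782},{"nodePort":30296}]}},
  {"spec":{"type":"ClusterIP","ports":[{"port":443}]}}
]}'

# Keep NodePort services only, then turn each node port into a -p argument
# for `openstack esi port forwarding create`.
echo "$services" |
  jq '.items[]|select(.spec.type == "NodePort")|.spec.ports[].nodePort' |
  sed 's/^/-p /'
```

This prints `-p 31782` and `-p 30296`, one per line; the ClusterIP service is filtered out.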

### Update DNS

Ensure that the DNS entries for your API address are correct: the names `api.<cluster_name>.<basedomain>` and `api-int.<cluster_name>.<basedomain>` must both point to the `$api_vip` address configured in the previous section.

### Obtain the admin kubeconfig file

The admin `kubeconfig` file is available as a Secret in the `clusters-<cluster_name>` namespace:

```
oc -n clusters-mycluster extract secret/admin-kubeconfig --keys kubeconfig
```

This will extract the file `kubeconfig` into your current directory. You can use that to interact with the hosted control plane:

```
oc --kubeconfig kubeconfig get namespace
```

## Set up port forwarding for the ingress service

1. Acquire a floating IP address from the ESI project that owns the bare metal nodes if you don't already have a free one:

   ```
   ingress_vip=$(openstack --os-cloud mycluster floating ip create external -f value -c floating_ip_address)
   ```

1. Pick the address of one of the cluster nodes as a target for the port forwarding. Note that here we're using the `kubeconfig` file we downloaded in a previous step:

   ```
   internal_ip=$(oc --kubeconfig kubeconfig get nodes -l node-role.kubernetes.io/worker= -o name |
     shuf |
     head -1 |
     xargs -INODE oc --kubeconfig kubeconfig get NODE -o jsonpath='{.status.addresses[?(@.type == "InternalIP")].address}'
   )
   ```

1. Set up appropriate port forwarding (in the bare metal node ESI project):

   ```
   openstack --os-cloud mycluster esi port forwarding create "$internal_ip" "$ingress_vip" -p 80 -p 443
   ```

## Wait for the cluster deploy to complete

When the target cluster is fully deployed, the output for the HostedCluster resource will look like this:

```
$ oc -n clusters get hostedcluster mycluster
NAME        VERSION   KUBECONFIG                   PROGRESS    AVAILABLE   PROGRESSING   MESSAGE
mycluster   4.17.9    mycluster-admin-kubeconfig   Completed   True        False         The hosted control plane is available
```
