This repository was archived by the owner on Feb 7, 2023. It is now read-only.

Commit d9e13d1

Authored by Micah Abbott
tests: OpenShift Ansible Installer sanity test (#162)
* tests: OpenShift Ansible Installer sanity test

  This is a 'meta' playbook that borrowed heavily from the work done in
  'ansible-ansible-openshift-ansible' [0]. It will install an OpenShift Origin
  cluster on a single host via the openshift/openshift-ansible installer.

  This playbook can be invoked as you would for any other test in this repo, by
  supplying an inventory file or hostname/IP address to `ansible-playbook`. The
  host from the inventory is used to template out a second inventory file, which
  is used when calling `ansible-playbook` on the openshift/openshift-ansible
  playbook. This, in theory, allows users to maintain the same workflow that is
  familiar from running other tests in this repo.

  There are variables defined in the playbook that control the version of
  OpenShift Origin to be installed, the `git` tag of the
  openshift/openshift-ansible repo to use, the host where the cluster should be
  installed, and the Ansible user to use when installing the cluster. These can
  all be overridden via CLI parameters.

  This lacks a couple of things that could be added later:
  - additional checks to determine health of cluster
  - cleanup/uninstall of the cluster
  - maybe deploying a project?

* openshift-ansible-testing: make it work with CentOS + RHEL AH
  - added symlinks to callback_plugins + roles
  - introduced new vars to handle public vs private ip addresses
  - added conditional to skip memory check
  - added conditional to use Python3 for Fedora only
  - added README.md

* fixup! openshift-ansible-testing: make it work with CentOS + RHEL AH

* fixup! openshift-ansible-testing: make it work with CentOS + RHEL AH
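The overridable variables named in the playbook (`cli_oo_release`, `cli_oo_ansible_tag`, `cli_oo_host`, `cli_ansible_user`) can be passed to `ansible-playbook` with `-e`; for example (the values shown here are illustrative, not recommendations):

```
$ ansible-playbook -i inventory \
    -e cli_oo_release=1.5.1 \
    -e cli_oo_ansible_tag=master \
    tests/openshift-ansible-testing/main.yml
```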
1 parent 2b16eca commit d9e13d1

File tree

5 files changed: +135 −0 lines changed
Lines changed: 38 additions & 0 deletions
@@ -0,0 +1,38 @@
This playbook performs a sanity test of a stable version of the OpenShift Ansible installer
against an Atomic Host.

The test accepts normal inventory data like every other test in the repo, then uses that
data to generate a separate inventory that is used when running the OpenShift Ansible
installer playbook.

This playbook only does a sanity check that the installer completes successfully and
the expected pods are running afterwards. It does **NOT** perform any conformance
testing or deployment of additional apps/projects afterwards.

### Prerequisites
- Ansible version 2.2 (other versions are not supported)

- Configure subscription data (if used)

  If running against a RHEL Atomic Host, you should provide subscription
  data that can be used by `subscription-manager`. See
  [roles/redhat_subscription/tasks/main.yml](roles/redhat_subscription/tasks/main.yml)
  for additional details.

### Running the Playbook

*NOTE*: You are responsible for providing a host to run the test against and the
inventory file for that host.

To run the test, simply invoke it as you would any other Ansible playbook:

```
$ ansible-playbook -i inventory tests/openshift-ansible-testing/main.yml
```

*NOTE*: If you are running this playbook against a host in the Amazon EC2 environment, it has
been reported that you will need to set the `cli_oo_host` variable to the internal IP
address of your EC2 instance. This can be done via the `inventory` file passed in
or on the command line like so:

`$ ansible-playbook -i inventory -e cli_oo_host=10.0.36.120 tests/openshift-ansible-testing/main.yml`
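For reference, a minimal inventory file for this test might look like the following (the hostname and user are placeholders, not values taken from this repo):

```
atomic-host.example.com ansible_user=cloud-user ansible_become=true
```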
Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
../../callback_plugins/

tests/openshift-ansible-test/main.yml

Lines changed: 68 additions & 0 deletions
@@ -0,0 +1,68 @@
---
# vim: set ft=ansible:
#
- name: OpenShift Ansible Installer Test
  hosts: all

  tasks:
    - name: Setup vars for templating the inventory, etc.
      set_fact:
        oo_ansible_user: "{{ cli_ansible_user | default(ansible_user) }}"
        oo_ansible_tag: "{{ cli_oo_ansible_tag | default('master') }}"
        oo_public_host: "{{ cli_oo_public_host | default(ansible_host) }}"
        # NOTE: If you intend to run the playbook against a host in the Amazon
        # EC2 environment, it has been reported that you will need to set the
        # 'cli_oo_host' variable to the internal IP address of your EC2
        # instance. This can be done via the inventory file or on the command
        # line, like so:
        #
        # $ ansible-playbook -i inventory -e cli_oo_host=10.0.36.120 tests/openshift-ansible-testing/main.yml
        #
        oo_host: "{{ cli_oo_host | default(ansible_default_ipv4.address) }}"
        oo_release: "{{ cli_oo_release | default('1.5.1') }}"
        oo_py_interpreter: "{{ '-e ansible_python_interpreter=/usr/bin/python3' if ansible_distribution == 'Fedora' else '' }}"
        oo_skip_memory_check: "{{ '-e openshift_disable_check=memory_availability' if ansible_memtotal_mb|int < 8192 else '' }}"

    - name: Make temp directory of holding
      command: mktemp -d
      register: mktemp
      delegate_to: localhost

    - name: git clone openshift-ansible repo
      git:
        repo: "https://github.com/openshift/openshift-ansible.git"
        dest: "{{ mktemp.stdout }}"
        version: "{{ oo_ansible_tag }}"
      delegate_to: localhost

    - name: Template the inventory file
      template:
        src: "templates/cluster-inventory.j2"
        dest: "{{ mktemp.stdout }}/cluster-inventory"
      delegate_to: localhost

    - name: Run the openshift-ansible playbook
      command: "ansible-playbook -i cluster-inventory playbooks/byo/config.yml {{ oo_py_interpreter }} {{ oo_skip_memory_check }}"
      args:
        chdir: "{{ mktemp.stdout }}"
      delegate_to: localhost

    - name: Wait for 8443 to open up
      wait_for:
        port: 8443
        delay: 60

    # this may not be required
    - name: Add admin user to cluster-admin role
      command: /usr/local/bin/oadm policy add-cluster-role-to-user cluster-admin admin

    - name: Login to the cluster
      command: "/usr/local/bin/oc login -u admin -p OriginAdmin https://{{ oo_public_host }}:8443"

    # this is kind of a hack; sometimes we need to wait (5m) for the pods
    - name: Verify pods are running
      command: /usr/local/bin/oc get pods -o jsonpath='{.items[*].status.phase}'
      register: pods
      until: pods.stdout == "Running Running Running"
      retries: 10
      delay: 30
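The commit message lists "additional checks to determine health of cluster" as future work. A minimal sketch of one such check, in the same style as the tasks above (the jsonpath query and the single-node assumption are mine, not part of this commit):

```
    - name: Verify the node reports Ready
      command: /usr/local/bin/oc get nodes -o jsonpath='{.items[*].status.conditions[?(@.type=="Ready")].status}'
      register: node_status
      until: node_status.stdout == "True"
      retries: 10
      delay: 30
```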

tests/openshift-ansible-test/roles

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
../../roles/
Lines changed: 27 additions & 0 deletions
@@ -0,0 +1,27 @@
[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
ansible_user={{ oo_ansible_user }}
ansible_become=true
deployment_type=origin
containerized=true
openshift_release={{ oo_release }}
openshift_master_default_subdomain={{ oo_public_host }}.xip.io
openshift_router_selector='router=true'
openshift_registry_selector='registry=true'

# enable htpasswd auth
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
openshift_master_htpasswd_users={'admin': '$apr1$zgSjCrLt$1KSuj66CggeWSv.D.BXOA1'}

[masters]
{{ oo_public_host }} openshift_public_hostname={{ oo_public_host }} openshift_hostname={{ oo_host }}

[etcd]
{{ oo_public_host }}

[nodes]
{{ oo_public_host }} openshift_schedulable=true openshift_node_labels="{'router':'true','registry':'true'}"
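As an illustration, if `oo_public_host` rendered to `203.0.113.10` and `oo_host` to `10.0.36.120` (hypothetical addresses), the host sections of the generated `cluster-inventory` would read:

```
[masters]
203.0.113.10 openshift_public_hostname=203.0.113.10 openshift_hostname=10.0.36.120

[etcd]
203.0.113.10

[nodes]
203.0.113.10 openshift_schedulable=true openshift_node_labels="{'router':'true','registry':'true'}"
```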
