This repository was archived by the owner on Feb 7, 2023. It is now read-only.

tests: OpenShift Ansible Installer sanity test #162

Merged 4 commits on Jun 13, 2017
38 changes: 38 additions & 0 deletions tests/openshift-ansible-test/README.md
@@ -0,0 +1,38 @@
This playbook performs a sanity test of a stable version of the OpenShift Ansible installer
against an Atomic Host.

The test accepts normal inventory data like every other test in the repo, then uses that
data to generate a separate inventory that is used when running the OpenShift Ansible
installer playbook.

This playbook only performs a sanity check that the installer completes successfully and
that the expected pods are running afterwards. It does **NOT** perform any conformance
testing or deploy additional apps/projects.
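
For reference, a minimal inventory of the kind this test consumes might look like the
following sketch for a single Atomic Host (the hostname, IP address, and user are
placeholder values):

```
[all]
atomic-host-01 ansible_host=192.168.122.10 ansible_user=cloud-user
```

The playbook templates this data into the separate cluster inventory (see
`templates/cluster-inventory.j2`) that the installer consumes.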

### Prerequisites
- Ansible version 2.2 (other versions are not supported)

- Configure subscription data (if used)

If running against a RHEL Atomic Host, you should provide subscription
data that can be used by `subscription-manager`; a sketch of what that
registration can look like follows this list. See
[roles/redhat_subscription/tasks/main.yml](roles/redhat_subscription/tasks/main.yml)
for additional details.
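
For reference, registration with `subscription-manager` can be driven from Ansible
with the `redhat_subscription` module. The sketch below only shows the general shape;
the variable names are illustrative, and the role linked above defines the ones this
test actually uses:

```
- name: Register the host with subscription-manager
  redhat_subscription:
    username: "{{ sub_username }}"  # illustrative variable name
    password: "{{ sub_password }}"  # illustrative variable name
    autosubscribe: true
    state: present
```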

### Running the Playbook

*NOTE*: You are responsible for providing a host to run the test against and the
inventory file for that host.

To run the test, invoke it as you would any other Ansible playbook:

```
$ ansible-playbook -i inventory tests/openshift-ansible-test/main.yml
```

*NOTE*: If you are running this playbook against a host in the Amazon EC2 environment, it
has been reported that you will need to set the `cli_oo_host` variable to the internal IP
address of your EC2 instance. This can be done via the `inventory` file passed in
or on the command line like so:

`$ ansible-playbook -i inventory -e cli_oo_host=10.0.36.120 tests/openshift-ansible-test/main.yml`
1 change: 1 addition & 0 deletions tests/openshift-ansible-test/callback_plugins
68 changes: 68 additions & 0 deletions tests/openshift-ansible-test/main.yml
@@ -0,0 +1,68 @@
---
# vim: set ft=ansible:
#
- name: OpenShift Ansible Installer Test
  hosts: all

  tasks:
    - name: Setup vars for templating the inventory, etc.
      set_fact:
        oo_ansible_user: "{{ cli_ansible_user | default(ansible_user) }}"
        oo_ansible_tag: "{{ cli_oo_ansible_tag | default('master') }}"
        oo_public_host: "{{ cli_oo_public_host | default(ansible_host) }}"
        # NOTE: If you intend to run the playbook against a host in the Amazon
        # EC2 environment, it has been reported that you will need to set the
        # 'cli_oo_host' variable to the internal IP address of your EC2
        # instance. This can be done via the inventory file or on the command
        # line, like so:
        #
        # $ ansible-playbook -i inventory -e cli_oo_host=10.0.36.120 tests/openshift-ansible-test/main.yml
        #
        oo_host: "{{ cli_oo_host | default(ansible_default_ipv4.address) }}"
        oo_release: "{{ cli_oo_release | default('1.5.1') }}"
        oo_py_interpreter: "{{ '-e ansible_python_interpreter=/usr/bin/python3' if ansible_distribution == 'Fedora' else '' }}"
        oo_skip_memory_check: "{{ '-e openshift_disable_check=memory_availability' if ansible_memtotal_mb|int < 8192 else '' }}"
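
    # The cli_* overrides above can also be set in the test inventory rather
    # than passed with '-e'; a sketch with illustrative values:
    #
    #   [all:vars]
    #   cli_oo_ansible_tag=release-1.5
    #   cli_oo_release=1.5.1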

    - name: Make temp directory of holding
      command: mktemp -d
      register: mktemp
      delegate_to: localhost

    - name: git clone openshift-ansible repo
      git:
        repo: "https://github.com/openshift/openshift-ansible.git"
        dest: "{{ mktemp.stdout }}"
        version: "{{ oo_ansible_tag }}"
      delegate_to: localhost

    - name: Template the inventory file
      template:
        src: "templates/cluster-inventory.j2"
        dest: "{{ mktemp.stdout }}/cluster-inventory"
      delegate_to: localhost

    - name: Run the openshift-ansible playbook
      command: "ansible-playbook -i cluster-inventory playbooks/byo/config.yml {{ oo_py_interpreter }} {{ oo_skip_memory_check }}"
      args:
        chdir: "{{ mktemp.stdout }}"
      delegate_to: localhost

    # the master API and web console listen on 8443
    - name: Wait for 8443 to open up
      wait_for:
        port: 8443
        delay: 60

    # this may not be required
    - name: Add admin user to cluster-admin role
      command: /usr/local/bin/oadm policy add-cluster-role-to-user cluster-admin admin

    # the password must match the htpasswd entry templated in templates/cluster-inventory.j2
    - name: Login to the cluster
      command: "/usr/local/bin/oc login -u admin -p OriginAdmin https://{{ oo_public_host }}:8443"

    # this is kind of a hack; sometimes need to wait (5m) for the pods
    - name: Verify pods are running
      command: /usr/local/bin/oc get pods -o jsonpath='{.items[*].status.phase}'
      register: pods
      until: pods.stdout == "Running Running Running"
      retries: 10
      delay: 30
1 change: 1 addition & 0 deletions tests/openshift-ansible-test/roles
27 changes: 27 additions & 0 deletions tests/openshift-ansible-test/templates/cluster-inventory.j2
@@ -0,0 +1,27 @@
[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
ansible_user={{ oo_ansible_user }}
ansible_become=true
deployment_type=origin
containerized=true
openshift_release={{ oo_release }}
openshift_master_default_subdomain={{ oo_public_host }}.xip.io
openshift_router_selector='router=true'
openshift_registry_selector='registry=true'

# enable htpasswd auth
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
openshift_master_htpasswd_users={'admin': '$apr1$zgSjCrLt$1KSuj66CggeWSv.D.BXOA1'}
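# the hash above must correspond to the password used by the 'oc login' step
# in main.yml; a replacement entry can be generated with, e.g.,
# 'htpasswd -nb admin <password>' (from httpd-tools)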

[masters]
{{ oo_public_host }} openshift_public_hostname={{ oo_public_host }} openshift_hostname={{ oo_host }}

[etcd]
{{ oo_public_host }}

[nodes]
{{ oo_public_host }} openshift_schedulable=true openshift_node_labels="{'router':'true','registry':'true'}"