tests: OpenShift Ansible Installer sanity test #162
Conversation
This is a 'meta' playbook that borrows heavily from the work done in 'ansible-ansible-openshift-ansible' [0]. It installs an OpenShift Origin cluster on a single host via the openshift/openshift-ansible installer.

This playbook can be invoked as you would any other test in this repo, by supplying an inventory file or hostname/IP address to `ansible-playbook`. The host from the inventory is used to template out a second inventory file, which is then used when calling `ansible-playbook` on the openshift/openshift-ansible playbook. This, in theory, allows users to maintain the same workflow that is familiar from running other tests in this repo.

There are variables defined in the playbook that control the version of OpenShift Origin to be installed, the `git` tag of the openshift/openshift-ansible repo to use, the host where the cluster should be installed, and the Ansible user to use when installing the cluster. These can all be overridden via CLI parameters.

This lacks a couple of things that could be added later:

- additional checks to determine health of the cluster
- cleanup/uninstall of the cluster
- maybe deploying a project?
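The flow described above (take the host from the outer inventory, template a second inventory for the installer) can be sketched in shell. Everything here is an illustrative assumption, not the playbook's actual content: the paths, group names, host address, and user are made up for the example.

```shell
#!/bin/sh
# Hypothetical sketch of the 'meta' playbook pattern: the target host
# from the outer inventory is used to generate a second inventory for
# the openshift-ansible installer. All names below are assumptions.
TARGET_HOST="192.168.122.10"   # host taken from the outer inventory
ANSIBLE_USER="cloud-user"      # user to install the cluster as

cat > /tmp/openshift-inventory <<EOF
[masters]
${TARGET_HOST} ansible_user=${ANSIBLE_USER}

[nodes]
${TARGET_HOST} ansible_user=${ANSIBLE_USER}
EOF

# The real playbook would now invoke the openshift-ansible installer
# against the generated inventory, roughly:
#   ansible-playbook -i /tmp/openshift-inventory playbooks/byo/config.yml
```

The same pattern is why the playbook's variables (Origin version, openshift-ansible tag, target host, Ansible user) can all be overridden with `-e` on the `ansible-playbook` command line.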
@dustymabe Here's the first crack at the OpenShift Ansible Installer test.
Worth noting that I only tested this against F25 AH. Will run this against CAHC and RHELAH shortly.
That's pretty straightforward indeed. Looks sane offhand to me. Though the last time I was using o-a, an issue I hit was that it was pulling
Agreed. It will be up to the executor (be it human or computer) of the test to supply the desired values for the OpenShift Origin release and openshift-ansible release.
Another enhancement would be some sort of sanity checking that the version of OpenShift Origin is compatible with openshift-ansible. They link the two as described here.
thanks man - looks pretty sweet. A few questions:
I talked with scott dodson and we should be able to use the centos paas sig rpms as a way to determine what stable version of openshift-ansible to use. We can either use those rpms directly or we could use the Version info from the rpm to dictate a tag for us to do a
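Deriving a tag from the PaaS SIG rpm version could look something like the sketch below. The tag naming scheme and the fallback version are assumptions for illustration; the real openshift-ansible tag format should be checked against the repo.

```shell
#!/bin/sh
# Sketch: read the openshift-ansible version from the installed CentOS
# PaaS SIG rpm and turn it into a git tag to check out. Falls back to
# a sample version when rpm or the package is unavailable.
VERSION=$(rpm -q --qf '%{VERSION}' openshift-ansible 2>/dev/null) || VERSION="3.6.173"
TAG="openshift-ansible-${VERSION}"   # tag naming scheme is assumed
echo "${TAG}"
```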
@dustymabe do you have an example of normal ansible output vs. the jumbled-up output, for reference?
this is jumbled:
basically is there a way to make that one task (that essentially runs another ansible playbook) have output that looks more like ansible output, but maybe shifted over 4 spaces or something?
@dustymabe Oh, the symlink to the callback plugin is missing. @miabbott can you drop that in? What you are seeing is the default ("normal") Ansible output not the formatted output from our callback_plugins. This is the output with the callback_plugin symlink in the test directory:
Slapped on the 'WIP' label as there are a number of issues to iron out.
That is an easy fix. I thought that 'openshift-ansible' itself had a callback plugin to format the output, but maybe that is getting overridden by the 'a-h-t' plugin.
- added symlinks to callback_plugins + roles
- introduced new vars to handle public vs private ip addresses
- added conditional to skip memory check
- added conditional to use Python3 for Fedora only
- added README.md
Added a new commit with some changes based on additional testing ⬆️

@mike-nguyen Added the symlink to the callback plugin.

@dustymabe The playbook will set the necessary command line argument to skip the memory check auto-magically now.

I tested this against F25, CentOS, and RHEL Atomic Host in OpenStack and local libvirt. If someone can give this a spin against AWS, that would be interesting to see how it works out.
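For reference, the knob openshift-ansible exposes for skipping preflight checks is an extra variable on the installer invocation. The sketch below shows what the templated command line might look like; treat the variable name `openshift_disable_check` (and the `playbooks/byo/config.yml` entry point) as assumptions to verify against the pinned openshift-ansible release.

```shell
#!/bin/sh
# Sketch of the command line the playbook would assemble when the
# memory preflight check needs to be skipped. Variable name and
# playbook path are assumptions; verify against the release you pin.
CMD="ansible-playbook -i /tmp/openshift-inventory playbooks/byo/config.yml -e openshift_disable_check=memory_availability"
echo "${CMD}"
```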
To run the test, simply invoke as any other Ansible playbook:

```
$ ansible-playbook -i inventory tests/system-containers/main.yml
```
is that the right playbook yml file?
I suspect this will fail with aws, but maybe some of the updates you just did will allow me to work around that. I'll try it.
this actually passed on aws with an inventory file like:
+1 from me
@dustymabe Thanks for the AWS test. I'll make a note in the README and in the playbook file about setting up that variable when running in EC2. I may try to run my own EC2 instance and see if there is a way to programmatically determine the value for that variable, too.
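One programmatic option: from inside an EC2 instance, the public IPv4 address can be read from the instance metadata service at the link-local address 169.254.169.254 (a real AWS endpoint, only reachable on EC2 itself). How the value would then feed into the playbook's public/private IP variables is left open here, since those variable names aren't shown in this thread.

```shell
#!/bin/sh
# Query the EC2 instance metadata service for the instance's public
# IPv4 address. Outside EC2 the request fails fast and we fall back
# to an empty value, so the sketch degrades gracefully.
PUBLIC_IP=$(curl -s --max-time 2 \
  http://169.254.169.254/latest/meta-data/public-ipv4 2>/dev/null) || PUBLIC_IP=""
echo "detected public IP: ${PUBLIC_IP:-<not on EC2>}"
```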
@miabbott: I have been testing the PR on Fedora 25 using the following image: https://download.fedoraproject.org/pub/fedora/linux/releases/25/CloudImages/x86_64/images/Fedora-Cloud-Base-25-1.3.x86_64.qcow2
@samvarankashyap I've tested this using the latest F25 Atomic Host. This worked for me in OpenStack and on local libvirt. Feel free to share the errors you are encountering. Note, I did not test this PR against a non-Atomic Host, so there may be issues there that I have not accounted for.
@samvarankashyap - The image you tested against was the cloud base image, not the atomic host image.
@miabbott @dustymabe: Thanks, I will run the PR on an Atomic image and get back to you with feedback.
@samvarankashyap I ran this playbook against an up-to-date F25 Cloud VM in OpenStack and encountered some problems along the way.
Since the PR is really targeting Atomic Host platforms, I'm not terribly worried about the first two problems, but wanted to report my findings. I'm going to work with @dustymabe on the 3rd problem, because that is probably something that should be fixed in
@miabbott: complete run log: https://gist.github.com/samvarankashyap/80f3e325e89d6630e3ea465c98f183f4
@miabbott if it helps, I found the same issue reported but unresolved on the openshift-ansible repo.
@dustymabe I got access via the OpenShift group to AWS and was able to run the playbook like so:
It failed checking to see if
Using a different security group alleviated this issue. I did notice that checking for the pods to be
did you have to use
Nope, it Just Worked For Me™
@samvarankashyap I ran this PR from within a container without trouble. I wasn't using a duffy node, but rather my own F25 workstation. In my setup I booted an F25 Atomic Host using local libvirt, then locally created a Docker image using a slightly modified Dockerfile from #167, and manually ran the playbook from inside the container targeting the F25AH VM. (The modification was just to check out this PR before running the playbook.)

With that proved out, I ran it automatically like so:
And that worked, too. Not sure if there is something specific to the duffy environment, but I'm not able to reproduce that error. (Granted, I only ran it twice in a container, but I feel pretty good that it should work fine.)
Should we just merge this and fix issues later?
I'll give another 24 hours for additional feedback. If nothing else is received, I'll merge this tomorrow.
This is a 'meta' playbook that borrowed heavily from the work done in
'ansible-ansible-openshift-ansible' [0]. It will install an OpenShift
Origin cluster on a single host via the openshift/openshift-ansible
installer.

This playbook can be invoked as you would for any other tests in this
repo, by supplying an inventory file or hostname/IP address to
`ansible-playbook`. The host from the inventory is used to template
out a second inventory file which is used when calling
`ansible-playbook` on the openshift/openshift-ansible playbook. This,
in theory, allows users to maintain the same workflow that is familiar
to running other tests in this repo.

There are variables defined in the playbook that control the version
of OpenShift Origin to be installed, the `git` tag of the
openshift/openshift-ansible repo to use, the host where the cluster
should be installed, and the Ansible user to use when installing the
cluster. These can all be overridden via CLI parameters.

This lacks a couple of things that could be added later:

- additional checks to determine health of the cluster
- cleanup/uninstall of the cluster
- maybe deploying a project?

[0] https://pagure.io/ansible-ansible-openshift-ansible