
chore(stoneintg-1092): refactor group snapshots tests #1504


Open
Wants to merge 4 commits into main from the stoneintg-1092 branch.

Conversation

@jencull (Contributor) commented Feb 5, 2025

Description

Please include a summary of the changes and the related issue. Please also include relevant motivation and context. List any dependencies that are required for this change.

Issue ticket number and link

Type of change

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

How Has This Been Tested?

Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration.

Checklist:

  • I have performed a self-review of my code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • I have added a meaningful description with the JIRA/GitHub issue key (if applicable), for example HASSuiteDescribe("STONE-123456789 devfile source")
  • I have updated labels (if needed)


openshift-ci bot commented Feb 5, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign psturc for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@jsztuka (Contributor) commented Feb 5, 2025

Changes look fine in general, let's see if the CI passes.

@jsztuka (Contributor) commented Feb 5, 2025

I think the conventional commit check will pass once you change the commit message from chore(stoneintg-1092) to chore(stoneintg-1092): (with the trailing colon).

@jencull (Contributor, Author) commented Feb 5, 2025

> I think the conventional commit check will pass once you change the commit message from chore(stoneintg-1092) to chore(stoneintg-1092): (with the trailing colon).

Yep, I have changed it, but it's not showing in the title yet. Will come back to it.

This is a WIP; I am just testing this before I start merging additional code from the status-reporting file. I will then test those changes and, finally, update the README. So it's marked as a WIP for now as I work my way through the stages.

@jencull changed the title from "chore(stoneintg-1092) refactor group snapshots tests" to "chore(stoneintg-1092): refactor group snapshots tests" on Feb 6, 2025
@jencull (Contributor, Author) commented Feb 6, 2025

/retest

@jencull (Contributor, Author) commented Feb 12, 2025

/retest

@jencull force-pushed the stoneintg-1092 branch 3 times, most recently from 33dab36 to 454d654, on March 10, 2025 at 09:54
@jencull (Contributor, Author) commented Mar 11, 2025

/retest

1 similar comment
@jencull (Contributor, Author) commented Mar 13, 2025

/retest

@jencull force-pushed the stoneintg-1092 branch 2 times, most recently from f958a9f to 2fbb497, on March 27, 2025 at 10:06
@jencull (Contributor, Author) commented Mar 28, 2025

/retest

@jencull force-pushed the stoneintg-1092 branch 3 times, most recently from 1534b0b to 77324c5, on April 1, 2025 at 09:28
@dirgim (Contributor) commented Apr 2, 2025

/retest

@konflux-ci-qe-bot

@jencull: The following test has Failed, say /retest to rerun failed tests.

PipelineRun Name Status Rerun command Build Log Test Log
konflux-e2e-5k84z Failed /retest View Pipeline Log View Test Logs

Inspecting Test Artifacts

To inspect your test artifacts, follow these steps:

  1. Install ORAS (see the ORAS installation guide).
  2. Download artifacts with the following commands:
mkdir -p oras-artifacts
cd oras-artifacts
oras pull quay.io/konflux-test-storage/konflux-team/e2e-tests:konflux-e2e-5k84z

Test results analysis

🚨 Failed to provision a cluster, see the log for more details:

Logs:
INFO: Log in to your Red Hat account...
INFO: Configure AWS Credentials...
WARN: The current version (1.2.50) is not up to date with latest rosa cli released version (1.2.52).
WARN: It is recommended that you update to the latest version.
INFO: Logged in as 'konflux-ci-418295695583' on 'https://api.openshift.com'
INFO: Create ROSA with HCP cluster...
WARN: The current version (1.2.50) is not up to date with latest rosa cli released version (1.2.52).
WARN: It is recommended that you update to the latest version.
time=2025-04-04T11:47:49Z level=info msg=Ignored check for policy key 'sts_hcp_ec2_registry_permission_policy' (zero egress feature toggle is not enabled)
INFO: Creating cluster 'kx-76733e3559'
INFO: To view a list of clusters and their status, run 'rosa list clusters'
INFO: Cluster 'kx-76733e3559' has been created.
INFO: Once the cluster is installed you will need to add an Identity Provider before you can login into the cluster. See 'rosa create idp --help' for more information.

Name: kx-76733e3559
Domain Prefix: kx-76733e3559
Display Name: kx-76733e3559
ID: 2hunhr1mrm1sc3h7b9lue8vtgk8a0e3h
External ID: cab8af96-28ba-4573-9673-d358e5e7ddf4
Control Plane: ROSA Service Hosted
OpenShift Version: 4.15.48
Channel Group: stable
DNS: Not ready
AWS Account: 418295695583
AWS Billing Account: 418295695583
API URL:
Console URL:
Region: us-east-1
Availability:

  • Control Plane: MultiAZ
  • Data Plane: MultiAZ

Nodes:

  • Compute (desired): 3
  • Compute (current): 0
    Network:
  • Type: OVNKubernetes
  • Service CIDR: 172.30.0.0/16
  • Machine CIDR: 10.0.0.0/16
  • Pod CIDR: 10.128.0.0/14
  • Host Prefix: /23
  • Subnets: subnet-001fc23497e4a3aeb, subnet-00ffba09365a434bc, subnet-074cbf0329958194a, subnet-0689cd077699b690a, subnet-0f9f09e46f74cde64, subnet-033f48892ddbaa09d
    EC2 Metadata Http Tokens: optional
    Role (STS) ARN: arn:aws:iam::418295695583:role/ManagedOpenShift-HCP-ROSA-Installer-Role
    Support Role ARN: arn:aws:iam::418295695583:role/ManagedOpenShift-HCP-ROSA-Support-Role
    Instance IAM Roles:
  • Worker: arn:aws:iam::418295695583:role/ManagedOpenShift-HCP-ROSA-Worker-Role
    Operator IAM Roles:
  • arn:aws:iam::418295695583:role/rosa-hcp-kube-system-kms-provider
  • arn:aws:iam::418295695583:role/rosa-hcp-kube-system-kube-controller-manager
  • arn:aws:iam::418295695583:role/rosa-hcp-kube-system-capa-controller-manager
  • arn:aws:iam::418295695583:role/rosa-hcp-kube-system-control-plane-operator
  • arn:aws:iam::418295695583:role/rosa-hcp-openshift-image-registry-installer-cloud-credentials
  • arn:aws:iam::418295695583:role/rosa-hcp-openshift-ingress-operator-cloud-credentials
  • arn:aws:iam::418295695583:role/rosa-hcp-openshift-cluster-csi-drivers-ebs-cloud-credentials
  • arn:aws:iam::418295695583:role/rosa-hcp-openshift-cloud-network-config-controller-cloud-credent
    Managed Policies: Yes
    State: waiting (Waiting for user action)
    Private: No
    Delete Protection: Disabled
    Created: Apr 4 2025 11:48:00 UTC
    User Workload Monitoring: Enabled
    Details Page: https://console.redhat.com/openshift/details/s/2vGN7Bz1uxhwCtAUpZm5pEusEhk
    OIDC Endpoint URL: https://oidc.op1.openshiftapps.com/2du11g36ejmoo4624pofphlrgf4r9tf3 (Managed)
    Etcd Encryption: Disabled
    Audit Log Forwarding: Disabled
    External Authentication: Disabled
    Zero Egress: Disabled

INFO: Preparing to create operator roles.
INFO: Operator Roles already exists
INFO: Preparing to create OIDC Provider.
INFO: OIDC provider already exists
INFO: To determine when your cluster is Ready, run 'rosa describe cluster -c kx-76733e3559'.
INFO: To watch your cluster installation logs, run 'rosa logs install -c kx-76733e3559 --watch'.
INFO: Track the progress of the cluster creation...
WARN: The current version (1.2.50) is not up to date with latest rosa cli released version (1.2.52).
WARN: It is recommended that you update to the latest version.
W: Region flag will be removed from this command in future versions
INFO: Cluster 'kx-76733e3559' is in waiting state waiting for installation to begin. Logs will show up within 5 minutes
0001-01-01 00:00:00 +0000 UTC hostedclusters kx-76733e3559 Version
2025-04-04 11:51:39 +0000 UTC hostedclusters kx-76733e3559 Condition not found in the CVO.
2025-04-04 11:51:39 +0000 UTC hostedclusters kx-76733e3559 The hosted control plane is not found
2025-04-04 11:51:39 +0000 UTC hostedclusters kx-76733e3559 Condition not found in the CVO.
2025-04-04 11:51:39 +0000 UTC hostedclusters kx-76733e3559 The hosted control plane is not found
2025-04-04 11:51:39 +0000 UTC hostedclusters kx-76733e3559 The hosted control plane is not found
2025-04-04 11:51:39 +0000 UTC hostedclusters kx-76733e3559 The hosted control plane is not found
2025-04-04 11:51:39 +0000 UTC hostedclusters kx-76733e3559 The hosted control plane is not found
2025-04-04 11:51:39 +0000 UTC hostedclusters kx-76733e3559 The hosted control plane is not found
2025-04-04 11:51:39 +0000 UTC hostedclusters kx-76733e3559 Condition not found in the CVO.
2025-04-04 11:51:39 +0000 UTC hostedclusters kx-76733e3559 Waiting for hosted control plane to be healthy
2025-04-04 11:51:39 +0000 UTC hostedclusters kx-76733e3559 Condition not found in the CVO.
2025-04-04 11:51:39 +0000 UTC hostedclusters kx-76733e3559 The hosted control plane is not found
2025-04-04 11:51:39 +0000 UTC hostedclusters kx-76733e3559 Condition not found in the CVO.
2025-04-04 11:51:39 +0000 UTC hostedclusters kx-76733e3559 Ignition server deployment not found
2025-04-04 11:51:39 +0000 UTC hostedclusters kx-76733e3559 Configuration passes validation
2025-04-04 11:51:39 +0000 UTC hostedclusters kx-76733e3559 HostedCluster is supported by operator configuration
2025-04-04 11:51:39 +0000 UTC hostedclusters kx-76733e3559 Reconciliation active on resource
2025-04-04 11:51:39 +0000 UTC hostedclusters kx-76733e3559 ValidAWSIdentityProvider StatusUnknown
2025-04-04 11:51:50 +0000 UTC hostedclusters kx-76733e3559 Release image is valid
2025-04-04 11:51:51 +0000 UTC hostedclusters kx-76733e3559 HostedCluster is at expected version
2025-04-04 11:51:56 +0000 UTC hostedclusters kx-76733e3559 Required platform credentials are found
2025-04-04 11:51:56 +0000 UTC hostedclusters kx-76733e3559 failed to get referenced secret ocm-production-2hunhr1mrm1sc3h7b9lue8vtgk8a0e3h/cluster-api-cert: Secret "cluster-api-cert" not found
0001-01-01 00:00:00 +0000 UTC hostedclusters kx-76733e3559 Version
2025-04-04 11:51:39 +0000 UTC hostedclusters kx-76733e3559 Condition not found in the CVO.
2025-04-04 11:51:39 +0000 UTC hostedclusters kx-76733e3559 Reconciliation active on resource
2025-04-04 11:51:39 +0000 UTC hostedclusters kx-76733e3559 ValidAWSIdentityProvider StatusUnknown
2025-04-04 11:51:39 +0000 UTC hostedclusters kx-76733e3559 router load balancer is not provisioned; 5s since creation.; router load balancer is not provisioned; 5s since creation.
2025-04-04 11:51:39 +0000 UTC hostedclusters kx-76733e3559 HostedCluster is supported by operator configuration
2025-04-04 11:51:39 +0000 UTC hostedclusters kx-76733e3559 Configuration passes validation
2025-04-04 11:51:39 +0000 UTC hostedclusters kx-76733e3559 Ignition server deployment not found
2025-04-04 11:51:39 +0000 UTC hostedclusters kx-76733e3559 Condition not found in the CVO.
2025-04-04 11:51:39 +0000 UTC hostedclusters kx-76733e3559 Condition not found in the CVO.
2025-04-04 11:51:39 +0000 UTC hostedclusters kx-76733e3559 Condition not found in the CVO.
2025-04-04 11:51:39 +0000 UTC hostedclusters kx-76733e3559 Condition not found in the CVO.
2025-04-04 11:51:50 +0000 UTC hostedclusters kx-76733e3559 Release image is valid
2025-04-04 11:51:51 +0000 UTC hostedclusters kx-76733e3559 HostedCluster is at expected version
2025-04-04 11:51:56 +0000 UTC hostedclusters kx-76733e3559 Required platform credentials are found
2025-04-04 11:53:13 +0000 UTC hostedclusters kx-76733e3559 OIDC configuration is valid
2025-04-04 11:53:13 +0000 UTC hostedclusters kx-76733e3559 Reconciliation completed successfully
2025-04-04 11:53:24 +0000 UTC hostedclusters kx-76733e3559 All is well
2025-04-04 11:53:24 +0000 UTC hostedclusters kx-76733e3559 lookup api.kx-76733e3559.tn5s.p3.openshiftapps.com on 172.30.0.10:53: no such host
2025-04-04 11:53:24 +0000 UTC hostedclusters kx-76733e3559 Configuration passes validation
2025-04-04 11:53:24 +0000 UTC hostedclusters kx-76733e3559 EtcdAvailable StatefulSetNotFound
2025-04-04 11:53:24 +0000 UTC hostedclusters kx-76733e3559 Kube APIServer deployment not found
2025-04-04 11:53:24 +0000 UTC hostedclusters kx-76733e3559 router load balancer is not provisioned; 5s since creation.; router load balancer is not provisioned; 5s since creation.
2025-04-04 11:53:24 +0000 UTC hostedclusters kx-76733e3559 AWS KMS is not configured
2025-04-04 11:53:24 +0000 UTC hostedclusters kx-76733e3559 capi-provider deployment has 1 unavailable replicas
2025-04-04 11:53:24 +0000 UTC hostedclusters kx-76733e3559 Configuration passes validation
2025-04-04 11:53:24 +0000 UTC hostedclusters kx-76733e3559 lookup api.kx-76733e3559.tn5s.p3.openshiftapps.com on 172.30.0.10:53: no such host
2025-04-04 11:53:45 +0000 UTC hostedclusters kx-76733e3559 All is well
2025-04-04 11:53:50 +0000 UTC hostedclusters kx-76733e3559 WebIdentityErr
2025-04-04 11:54:27 +0000 UTC hostedclusters kx-76733e3559 EtcdAvailable QuorumAvailable
2025-04-04 11:55:31 +0000 UTC hostedclusters kx-76733e3559 Kube APIServer deployment is available
2025-04-04 11:55:50 +0000 UTC hostedclusters kx-76733e3559 All is well
2025-04-04 11:56:01 +0000 UTC hostedclusters kx-76733e3559 The hosted cluster is not degraded
0001-01-01 00:00:00 +0000 UTC hostedclusters kx-76733e3559 Version
2025-04-04 11:51:39 +0000 UTC hostedclusters kx-76733e3559 Condition not found in the CVO.
2025-04-04 11:51:39 +0000 UTC hostedclusters kx-76733e3559 Reconciliation active on resource
2025-04-04 11:51:39 +0000 UTC hostedclusters kx-76733e3559 HostedCluster is supported by operator configuration
2025-04-04 11:51:39 +0000 UTC hostedclusters kx-76733e3559 Configuration passes validation
2025-04-04 11:51:39 +0000 UTC hostedclusters kx-76733e3559 Ignition server deployment is not yet available
2025-04-04 11:51:39 +0000 UTC hostedclusters kx-76733e3559 Condition not found in the CVO.
2025-04-04 11:51:39 +0000 UTC hostedclusters kx-76733e3559 Condition not found in the CVO.
2025-04-04 11:51:39 +0000 UTC hostedclusters kx-76733e3559 Condition not found in the CVO.
2025-04-04 11:51:39 +0000 UTC hostedclusters kx-76733e3559 Condition not found in the CVO.
2025-04-04 11:51:50 +0000 UTC hostedclusters kx-76733e3559 Release image is valid
2025-04-04 11:51:51 +0000 UTC hostedclusters kx-76733e3559 HostedCluster is at expected version
2025-04-04 11:51:56 +0000 UTC hostedclusters kx-76733e3559 Required platform credentials are found
2025-04-04 11:53:13 +0000 UTC hostedclusters kx-76733e3559 OIDC configuration is valid
2025-04-04 11:53:13 +0000 UTC hostedclusters kx-76733e3559 Reconciliation completed successfully
2025-04-04 11:53:24 +0000 UTC hostedclusters kx-76733e3559 AWS KMS is not configured
2025-04-04 11:53:24 +0000 UTC hostedclusters kx-76733e3559 All is well
2025-04-04 11:53:24 +0000 UTC hostedclusters kx-76733e3559 Configuration passes validation
2025-04-04 11:53:24 +0000 UTC hostedclusters kx-76733e3559 lookup api.kx-76733e3559.tn5s.p3.openshiftapps.com on 172.30.0.10:53: no such host
2025-04-04 11:53:45 +0000 UTC hostedclusters kx-76733e3559 All is well
2025-04-04 11:54:27 +0000 UTC hostedclusters kx-76733e3559 EtcdAvailable QuorumAvailable
2025-04-04 11:55:31 +0000 UTC hostedclusters kx-76733e3559 Kube APIServer deployment is available
2025-04-04 11:55:50 +0000 UTC hostedclusters kx-76733e3559 All is well
2025-04-04 11:56:08 +0000 UTC hostedclusters kx-76733e3559 All is well
2025-04-04 11:56:18 +0000 UTC hostedclusters kx-76733e3559 [catalog-operator deployment has 1 unavailable replicas, certified-operators-catalog deployment has 2 unavailable replicas, cloud-credential-operator deployment has 1 unavailable replicas, cluster-network-operator deployment has 1 unavailable replicas, cluster-storage-operator deployment has 1 unavailable replicas, cluster-version-operator deployment has 1 unavailable replicas, community-operators-catalog deployment has 2 unavailable replicas, csi-snapshot-controller-operator deployment has 1 unavailable replicas, dns-operator deployment has 1 unavailable replicas, hosted-cluster-config-operator deployment has 1 unavailable replicas, ignition-server deployment has 3 unavailable replicas, ingress-operator deployment has 1 unavailable replicas, olm-operator deployment has 1 unavailable replicas, packageserver deployment has 3 unavailable replicas, redhat-marketplace-catalog deployment has 2 unavailable replicas, redhat-operators-catalog deployment has 2 unavailable replicas, router deployment has 1 unavailable replicas]
2025-04-04 11:56:26 +0000 UTC hostedclusters kx-76733e3559 The hosted control plane is available
INFO: Cluster 'kx-76733e3559' is now ready
INFO: ROSA with HCP cluster is ready, create a cluster admin account for accessing the cluster
WARN: The current version (1.2.50) is not up to date with latest rosa cli released version (1.2.52).
WARN: It is recommended that you update to the latest version.
INFO: Storing login command...
INFO: Check if it's able to login to OCP cluster...
Retried 1 times...
Retried 2 times...
INFO: Check if apiserver is ready...
Waiting for cluster operators to be accessible for 2m...
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE
console
csi-snapshot-controller 4.15.48 True False False 3m33s
dns 4.15.48 False False True 3m35s DNS "default" is unavailable.
image-registry False True True 2m54s Available: The deployment does not have available replicas...
ingress False True True 2m59s The "default" ingress controller reports Available=False: IngressControllerUnavailable: One or more status conditions indicate unavailable: DeploymentAvailable=False (DeploymentUnavailable: The deployment has Available status condition set to False (reason: MinimumReplicasUnavailable) with message: Deployment does not have minimum availability.)
insights
kube-apiserver 4.15.48 True False False 3m27s
kube-controller-manager 4.15.48 True False False 3m27s
kube-scheduler 4.15.48 True False False 3m27s
kube-storage-version-migrator
monitoring
network 4.15.48 True True False 3m3s DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready...
node-tuning False True False 2m53s DaemonSet "tuned" has no available Pod(s)
openshift-apiserver 4.15.48 True False False 3m27s
openshift-controller-manager 4.15.48 True False False 3m27s
openshift-samples
operator-lifecycle-manager 4.15.48 True False False 3m28s
operator-lifecycle-manager-catalog 4.15.48 True False False 3m26s
operator-lifecycle-manager-packageserver 4.15.48 True False False 3m27s
service-ca
storage 4.15.48 False False False 3m27s AWSEBSCSIDriverOperatorCRAvailable: AWSEBSDriverNodeServiceControllerAvailable: Waiting for the DaemonSet to deploy the CSI Node Service
cluster operators to be accessible finished!
[INFO] Cluster operators are accessible.
Waiting for cluster to be reported as healthy for 60m...
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE
console
csi-snapshot-controller 4.15.48 True False False 3m34s
dns 4.15.48 False False True 3m36s DNS "default" is unavailable.
image-registry False True True 2m55s Available: The deployment does not have available replicas...
ingress False True True 3m The "default" ingress controller reports Available=False: IngressControllerUnavailable: One or more status conditions indicate unavailable: DeploymentAvailable=False (DeploymentUnavailable: The deployment has Available status condition set to False (reason: MinimumReplicasUnavailable) with message: Deployment does not have minimum availability.)
insights
kube-apiserver 4.15.48 True False False 3m28s
kube-controller-manager 4.15.48 True False False 3m28s
kube-scheduler 4.15.48 True False False 3m28s
kube-storage-version-migrator
monitoring
network 4.15.48 True True False 3m4s DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready...
node-tuning False True False 2m54s DaemonSet "tuned" has no available Pod(s)
openshift-apiserver 4.15.48 True False False 3m28s
openshift-controller-manager 4.15.48 True False False 3m28s
openshift-samples
operator-lifecycle-manager 4.15.48 True False False 3m29s
operator-lifecycle-manager-catalog 4.15.48 True False False 3m27s
operator-lifecycle-manager-packageserver 4.15.48 True False False 3m28s
service-ca
storage 4.15.48 False False False 3m28s AWSEBSCSIDriverOperatorCRAvailable: AWSEBSDriverNodeServiceControllerAvailable: Waiting for the DaemonSet to deploy the CSI Node Service
Waiting for cluster to be reported as healthy... Trying again in 60s
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE
console
csi-snapshot-controller 4.15.48 True False False 4m34s
dns 4.15.48 False False True 4m36s DNS "default" is unavailable.
image-registry False True True 3m55s Available: The deployment does not have available replicas...
ingress False True True 4m The "default" ingress controller reports Available=False: IngressControllerUnavailable: One or more status conditions indicate unavailable: DeploymentAvailable=False (DeploymentUnavailable: The deployment has Available status condition set to False (reason: MinimumReplicasUnavailable) with message: Deployment does not have minimum availability.)
insights
kube-apiserver 4.15.48 True False False 4m28s
kube-controller-manager 4.15.48 True False False 4m28s
kube-scheduler 4.15.48 True False False 4m28s
kube-storage-version-migrator
monitoring
network 4.15.48 True True False 4m4s DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready...
node-tuning False True False 3m54s DaemonSet "tuned" has no available Pod(s)
openshift-apiserver 4.15.48 True False False 4m28s
openshift-controller-manager 4.15.48 True False False 4m28s
openshift-samples
operator-lifecycle-manager 4.15.48 True False False 4m29s
operator-lifecycle-manager-catalog 4.15.48 True False False 4m27s
operator-lifecycle-manager-packageserver 4.15.48 True False False 4m28s
service-ca
storage 4.15.48 False False False 4m28s AWSEBSCSIDriverOperatorCRAvailable: AWSEBSDriverNodeServiceControllerAvailable: Waiting for the DaemonSet to deploy the CSI Node Service
Waiting for cluster to be reported as healthy... Trying again in 60s
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE
console
csi-snapshot-controller 4.15.48 True False False 5m34s
dns 4.15.48 False False True 5m36s DNS "default" is unavailable.
image-registry False True True 4m55s Available: The deployment does not have available replicas...
ingress False True True 5m The "default" ingress controller reports Available=False: IngressControllerUnavailable: One or more status conditions indicate unavailable: DeploymentAvailable=False (DeploymentUnavailable: The deployment has Available status condition set to False (reason: MinimumReplicasUnavailable) with message: Deployment does not have minimum availability.)
insights
kube-apiserver 4.15.48 True False False 5m28s
kube-controller-manager 4.15.48 True False False 5m28s
kube-scheduler 4.15.48 True False False 5m28s
kube-storage-version-migrator
monitoring
network 4.15.48 True True False 5m4s DaemonSet "/openshift-multus/multus-additional-cni-plugins" is not available (awaiting 1 nodes)...
node-tuning 4.15.48 True False False 16s
openshift-apiserver 4.15.48 True False False 5m28s
openshift-controller-manager 4.15.48 True False False 5m28s
openshift-samples
operator-lifecycle-manager 4.15.48 True False False 5m29s
operator-lifecycle-manager-catalog 4.15.48 True False False 5m27s
operator-lifecycle-manager-packageserver 4.15.48 True False False 5m28s
service-ca
storage 4.15.48 True False False 16s
Waiting for cluster to be reported as healthy... Trying again in 60s
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE
console
csi-snapshot-controller 4.15.48 True False False 6m35s
dns 4.15.48 False True True 6m37s DNS "default" is unavailable.
image-registry False True True 5m56s Available: The deployment does not have available replicas...
ingress False True True 6m1s The "default" ingress controller reports Available=False: IngressControllerUnavailable: One or more status conditions indicate unavailable: DeploymentAvailable=False (DeploymentUnavailable: The deployment has Available status condition set to False (reason: MinimumReplicasUnavailable) with message: Deployment does not have minimum availability.)
insights
kube-apiserver 4.15.48 True False False 6m29s
kube-controller-manager 4.15.48 True False False 6m29s
kube-scheduler 4.15.48 True False False 6m29s
kube-storage-version-migrator
monitoring
network 4.15.48 True True False 6m5s DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready...
node-tuning 4.15.48 True False False 60s
openshift-apiserver 4.15.48 True False False 6m29s
openshift-controller-manager 4.15.48 True False False 6m29s
openshift-samples
operator-lifecycle-manager 4.15.48 True False False 6m30s
operator-lifecycle-manager-catalog 4.15.48 True False False 6m28s
operator-lifecycle-manager-packageserver 4.15.48 True False False 6m29s
service-ca
storage 4.15.48 True False False 77s
Waiting for cluster to be reported as healthy... Trying again in 60s
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE
console
csi-snapshot-controller 4.15.48 True False False 7m35s
dns 4.15.48 False True True 7m37s DNS "default" is unavailable.
image-registry False True True 6m56s Available: The deployment does not have available replicas...
ingress False True True 7m1s The "default" ingress controller reports Available=False: IngressControllerUnavailable: One or more status conditions indicate unavailable: DeploymentAvailable=False (DeploymentUnavailable: The deployment has Available status condition set to False (reason: MinimumReplicasUnavailable) with message: Deployment does not have minimum availability.)
insights
kube-apiserver 4.15.48 True False False 7m29s
kube-controller-manager 4.15.48 True False False 7m29s
kube-scheduler 4.15.48 True False False 7m29s
kube-storage-version-migrator
monitoring
network 4.15.48 True True False 7m5s DaemonSet "/openshift-multus/network-metrics-daemon" is waiting for other operators to become ready
node-tuning 4.15.48 True False False 2m
openshift-apiserver 4.15.48 True False False 7m29s
openshift-controller-manager 4.15.48 True False False 7m29s
openshift-samples
operator-lifecycle-manager 4.15.48 True False False 7m30s
operator-lifecycle-manager-catalog 4.15.48 True False False 7m28s
operator-lifecycle-manager-packageserver 4.15.48 True False False 7m29s
service-ca Unknown Unknown False 2s
storage 4.15.48 True False False 2m17s
Waiting for cluster to be reported as healthy... Trying again in 60s
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE
console 4.15.48 False True False 17s DeploymentAvailable: 0 replicas available for console deployment
csi-snapshot-controller 4.15.48 True False False 8m35s
dns 4.15.48 False True True 8m37s DNS "default" is unavailable.
image-registry False True True 7m56s Available: The deployment does not have available replicas...
ingress False True True 8m1s The "default" ingress controller reports Available=False: IngressControllerUnavailable: One or more status conditions indicate unavailable: DeploymentAvailable=False (DeploymentUnavailable: The deployment has Available status condition set to False (reason: MinimumReplicasUnavailable) with message: Deployment does not have minimum availability.)
insights 4.15.48 True False False 59s
kube-apiserver 4.15.48 True False False 8m29s
kube-controller-manager 4.15.48 True False False 8m29s
kube-scheduler 4.15.48 True False False 8m29s
kube-storage-version-migrator 4.15.48 True False False 53s
monitoring Unknown True Unknown 28s Rolling out the stack.
network 4.15.48 True True False 8m5s DaemonSet "/openshift-multus/multus-additional-cni-plugins" is not available (awaiting 1 nodes)...
node-tuning 4.15.48 True False False 51s
openshift-apiserver 4.15.48 True False False 8m29s
openshift-controller-manager 4.15.48 True False False 8m29s
openshift-samples
operator-lifecycle-manager 4.15.48 True False False 8m30s
operator-lifecycle-manager-catalog 4.15.48 True False False 8m28s
operator-lifecycle-manager-packageserver 4.15.48 True False False 8m29s
service-ca 4.15.48 True False False 60s
storage 4.15.48 True False False 3m17s
Waiting for cluster to be reported as healthy... Trying again in 60s
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE
console 4.15.48 True False False 47s
csi-snapshot-controller 4.15.48 True False False 9m36s
dns 4.15.48 True True False 50s DNS "default" reports Progressing=True: "Have 2 available DNS pods, want 3."
image-registry 4.15.48 True False False 48s
ingress 4.15.48 True False False 58s
insights 4.15.48 True False False 2m
kube-apiserver 4.15.48 True False False 9m30s
kube-controller-manager 4.15.48 True False False 9m30s
kube-scheduler 4.15.48 True False False 9m30s
kube-storage-version-migrator 4.15.48 True False False 114s
monitoring Unknown True Unknown 89s Rolling out the stack.
network 4.15.48 True True False 9m6s DaemonSet "/openshift-multus/network-metrics-daemon" is not available (awaiting 1 nodes)
node-tuning 4.15.48 True False False 112s
openshift-apiserver 4.15.48 True False False 9m30s
openshift-controller-manager 4.15.48 True False False 9m30s
openshift-samples 4.15.48 True False False 35s
operator-lifecycle-manager 4.15.48 True False False 9m31s
operator-lifecycle-manager-catalog 4.15.48 True False False 9m29s
operator-lifecycle-manager-packageserver 4.15.48 True False False 9m30s
service-ca 4.15.48 True False False 2m1s
storage 4.15.48 True False False 4m18s
Waiting for cluster to be reported as healthy... Trying again in 60s
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE
console 4.15.48 True False False 107s
csi-snapshot-controller 4.15.48 True False False 10m
dns 4.15.48 True False False 110s
image-registry 4.15.48 True False False 108s
ingress 4.15.48 True False False 118s
insights 4.15.48 True False False 3m
kube-apiserver 4.15.48 True False False 10m
kube-controller-manager 4.15.48 True False False 10m
kube-scheduler 4.15.48 True False False 10m
kube-storage-version-migrator 4.15.48 True False False 2m54s
monitoring 4.15.48 True False False 4s
network 4.15.48 True False False 10m
node-tuning 4.15.48 True False False 2m52s
openshift-apiserver 4.15.48 True False False 10m
openshift-controller-manager 4.15.48 True False False 10m
openshift-samples 4.15.48 True False False 95s
operator-lifecycle-manager 4.15.48 True False False 10m
operator-lifecycle-manager-catalog 4.15.48 True False False 10m
operator-lifecycle-manager-packageserver 4.15.48 True False False 10m
service-ca 4.15.48 True False False 3m1s
storage 4.15.48 True False False 5m18s
Waiting for cluster to be reported as healthy... Trying again in 60s
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE
console 4.15.48 True False False 2m48s
csi-snapshot-controller 4.15.48 True False False 11m
dns 4.15.48 True False False 2m51s
image-registry 4.15.48 True False False 2m49s
ingress 4.15.48 True False False 2m59s
insights 4.15.48 True False False 4m1s
kube-apiserver 4.15.48 True False False 11m
kube-controller-manager 4.15.48 True False False 11m
kube-scheduler 4.15.48 True False False 11m
kube-storage-version-migrator 4.15.48 True False False 3m55s
monitoring 4.15.48 True False False 65s
network 4.15.48 True False False 11m
node-tuning 4.15.48 True False False 3m53s
openshift-apiserver 4.15.48 True False False 11m
openshift-controller-manager 4.15.48 True False False 11m
openshift-samples 4.15.48 True False False 2m36s
operator-lifecycle-manager 4.15.48 True False False 11m
operator-lifecycle-manager-catalog 4.15.48 True False False 11m
operator-lifecycle-manager-packageserver 4.15.48 True False False 11m
service-ca 4.15.48 True False False 4m2s
storage 4.15.48 True False False 6m19s
Waiting for cluster to be reported as healthy... Trying again in 60s
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE
console 4.15.48 True False False 3m48s
csi-snapshot-controller 4.15.48 True False False 12m
dns 4.15.48 True False False 3m51s
image-registry 4.15.48 True False False 3m49s
ingress 4.15.48 True False False 3m59s
insights 4.15.48 True False False 5m1s
kube-apiserver 4.15.48 True False False 12m
kube-controller-manager 4.15.48 True False False 12m
kube-scheduler 4.15.48 True False False 12m
kube-storage-version-migrator 4.15.48 True False False 4m55s
monitoring 4.15.48 True False False 2m5s
network 4.15.48 True False False 12m
node-tuning 4.15.48 True False False 4m53s
openshift-apiserver 4.15.48 True False False 12m
openshift-controller-manager 4.15.48 True False False 12m
openshift-samples 4.15.48 True False False 3m36s
operator-lifecycle-manager 4.15.48 True False False 12m
operator-lifecycle-manager-catalog 4.15.48 True False False 12m
operator-lifecycle-manager-packageserver 4.15.48 True False False 12m
service-ca 4.15.48 True False False 5m2s
storage 4.15.48 True False False 7m19s
Waiting for cluster to be reported as healthy... Trying again in 60s
healthy
cluster to be reported as healthy finished!


It("should make changes to the multiple-repo", func() {
// Use mergeMultiResultSha for multi-repo (latest merged PR SHA for multi-repo)
err = f.AsKubeAdmin.CommonController.Github.CreateRef(
componentRepoNameForGeneralIntegration, multiComponentDefaultBranch, mergeMultiResultSha, multiComponentPRBranchName)
Review comment (Member):

mergeMultiResultSha isn't initialized anymore.
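
One way to restore it, sketched under the assumption that the framework's Github helper exposes a MergePullRequest method (as used elsewhere in this suite) and that prNumber here refers to the multi-repo PR; the names are illustrative, not the PR's actual diff:

// Hypothetical re-initialization of mergeMultiResultSha before it is passed to CreateRef.
// MergePullRequest / GetSHA are assumed helpers; verify against the framework before use.
mergeMultiResult, err := f.AsKubeAdmin.CommonController.Github.MergePullRequest(
	componentRepoNameForGeneralIntegration, prNumber)
Expect(err).ShouldNot(HaveOccurred(), "error merging the multi-repo PR")
mergeMultiResultSha = mergeMultiResult.GetSHA()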

if !CurrentSpecReport().Failed() {
cleanup(*f, testNamespace, applicationName, componentA.Name, snapshot)
cleanup(*f, testNamespace, applicationName, componentNames[0], snapshot)
Review comment (Member):

should we do cleanup for all 3 components?
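
If all components should be cleaned up, one option is to loop over componentNames rather than hard-coding the first entry; a minimal sketch, assuming cleanup and componentNames keep their current signatures:

// Clean up every component created by the test, not just the first one.
if !CurrentSpecReport().Failed() {
	for _, name := range componentNames {
		cleanup(*f, testNamespace, applicationName, name, snapshot)
	}
}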

Comment on lines -480 to -481
// Delete all the pipelineruns in the namespace before sending PR
//Expect(f.AsKubeAdmin.TektonController.DeleteAllPipelineRunsInASpecificNamespace(testNamespace)).To(Succeed())
Review comment (Member):

What do we think about uncommenting this? It would delete all PLRs and give us a clean slate for the next phase of testing, where we interact with the group snapshot. It needs to be tested locally first.
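
Uncommented, the call would simply be the existing line restored (to be verified locally first, as noted):

// Delete all the pipelineruns in the namespace before sending PR
Expect(f.AsKubeAdmin.TektonController.DeleteAllPipelineRunsInASpecificNamespace(testNamespace)).To(Succeed())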

}, timeout, constants.PipelineRunPollingInterval).Should(Succeed(), fmt.Sprintf("Timed out waiting for Integration PipelineRun to start for %s/%s", testNamespace, componentName))
})

It(fmt.Sprintf("should merge the init PaC PR successfully for %s", componentName), func() {
Review comment (Member):

Optional: To save resources, we can cancel the "push"-based build PLR generated after merging the "init PaC" PRs. The test currently generates 5+ of those on each run.
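
A rough sketch of that optional optimization; the helper names (GetComponentPipelineRun, KubeRest) are assumptions about the framework, and only the spec.status value comes from Tekton's documented cancellation API:

// Hypothetical: cancel the push-triggered build PipelineRun created after merging the init PaC PR.
// pipelinev1 refers to github.com/tektoncd/pipeline/pkg/apis/pipeline/v1 (assumed import).
plr, err := f.AsKubeAdmin.HasController.GetComponentPipelineRun(componentName, applicationName, testNamespace, mergeResultSha)
Expect(err).ShouldNot(HaveOccurred())
plr.Spec.Status = pipelinev1.PipelineRunSpecStatusCancelledRunFinally // graceful cancel
Expect(f.AsKubeAdmin.CommonController.KubeRest().Update(context.Background(), plr)).To(Succeed())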

Expect(f.AsKubeAdmin.CommonController.Github.GetCheckRunConclusion(expectedCheckRunName, multiComponentRepoNameForGroupSnapshot, prHeadSha, prNumber)).To(Equal(constants.CheckrunConclusionSuccess))
})
When("creating and testing multiple components", func() {
for _, contextDir := range multiComponentContextDirs {
Review comment (Member):

multiComponentContextDirs holds the names of 2 components from the multi-component repo. Is it intentional to skip the creation of the (3rd) monorepo component?


//Create the ref, add the files and create the PR - monorepo
err = f.AsKubeAdmin.CommonController.Github.CreateRef(multiComponentRepoNameForGroupSnapshot, multiComponentDefaultBranch, mergeResultSha, multiComponentPRBranchName)
It("makes sure that the group snapshot contains the last build PipelineRun for each component", func() {
Review comment (Member):

It would be worth having another When block after this one, where we check for the presence and status of CheckRuns of the integration tests, especially the PR group integration test. For context, checking for its presence would catch this bug in the future: STONEINTG-1174
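
A sketch of what that extra block could look like, reusing the GetCheckRunConclusion helper shown above; the CheckRun name variable is a placeholder, not the PR's actual code:

When("the group snapshot has been tested", func() {
	It("reports a successful CheckRun for the PR group integration test", func() {
		// expectedPRGroupCheckRunName is a hypothetical placeholder for the integration test's CheckRun name.
		Expect(f.AsKubeAdmin.CommonController.Github.GetCheckRunConclusion(
			expectedPRGroupCheckRunName, multiComponentRepoNameForGroupSnapshot, prHeadSha, prNumber,
		)).To(Equal(constants.CheckrunConclusionSuccess), "PR group integration test CheckRun did not succeed")
	})
})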


6 participants