Create OpenShift Service Catalog tables #260


Merged: 13 commits into ManageIQ:master, Aug 27, 2018

Conversation

@Ladas (Contributor) commented Aug 21, 2018

Create OpenShift Service Catalog tables

This is a first version of what the tables could look like. It probably has more columns than will be needed, but we can delete the unused ones later. The docs do not contain an API resources description (https://github.com/kubernetes-incubator/service-catalog/blob/master/docs/v1/api.md).

So I am basing this on the examples in the docs:
https://github.com/kubernetes-incubator/service-catalog/blob/master/docs/walkthrough.md
https://github.com/kubernetes-incubator/service-catalog/blob/master/docs/cli.md
https://github.com/kubernetes-incubator/service-catalog/blob/master/docs/resources.md
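
For orientation, a minimal, hypothetical sketch of what one of these tables could look like, reconstructed only from column names quoted later in this review (the actual migration in the PR differs and was later renamed):

# Illustrative sketch only, not the PR diff; columns beyond the jsonb ones
# quoted in the review below are assumptions.
class CreateContainerServiceCatalogTables < ActiveRecord::Migration[5.0]
  def change
    create_table :container_service_instances, :id => :bigserial, :force => :cascade do |t|
      t.bigint :ems_id          # assumed link to the provider
      t.string :ems_ref         # assumed provider-side reference
      t.string :name
      t.jsonb  :parameters      # quoted in the review below
      t.jsonb  :parameters_from # quoted in the review below
      t.jsonb  :extra           # catch-all for data without a dedicated column
      t.timestamps
    end
  end
end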

@Ladas (Contributor, Author) commented Aug 21, 2018

@miq-bot assign @agrare
@miq-bot add_label enhancement

cc @syncrou @gtanzillo @gmcculloug @Fryguy

Do we need to get this in before the DB freeze?

@syncrou (Contributor) commented Aug 21, 2018

Adding @mkanoor for review as well.

Create OpenShift Service Catalog tables
@Ladas force-pushed the create_openshift_service_catalog_tables branch from bc0a97a to 45b5eb1 on August 21, 2018, 13:36
@Ladas (Contributor, Author) commented Aug 21, 2018

@syncrou do you know if there are docs for the API resources (attributes and types)? I couldn't find any.


t.jsonb :parameters
t.jsonb :parameters_from
t.jsonb :extra
Contributor commented:

@Ladas Would the extra column contain the status information about a service, which is a nested hash? Here is an example of a failed service:

apiVersion: v1
items:
- apiVersion: servicecatalog.k8s.io/v1beta1
  kind: ServiceInstance
  metadata:
    creationTimestamp: 2018-08-21T14:02:39Z
    finalizers:
    - kubernetes-incubator/service-catalog
    generateName: mariadb-persistent-
    generation: 1
    name: mariadb-persistent-qdkzt
    namespace: default
    resourceVersion: "35703842"
    selfLink: /apis/servicecatalog.k8s.io/v1beta1/namespaces/default/serviceinstances/mariadb-persistent-qdkzt
    uid: dd871fef-a54a-11e8-8fb9-0a580a800009
  spec:
    clusterServiceClassExternalName: mariadb-persistent
    clusterServiceClassRef:
      name: 1750b5d6-0d11-11e8-906a-d094660d31fb
    clusterServicePlanExternalName: default
    clusterServicePlanRef:
      name: 1750b5d6-0d11-11e8-906a-d094660d31fb
    externalID: 76af97e3-5650-4583-ae85-27294677f88d
    parametersFrom:
    - secretKeyRef:
        key: parameters
        name: mariadb-persistent-parametersbori0
    updateRequests: 0
    userInfo:
      extra:
        scopes.authorization.openshift.io:
        - user:full
      groups:
      - system:authenticated:oauth
      - system:authenticated
      uid: ""
      username: admin
  status:
    asyncOpInProgress: false
    conditions:
    - lastTransitionTime: 2018-08-21T14:02:44Z
      message: 'The provision call failed and will be retried: Error communicating
        with broker for provisioning: Put https://apiserver.openshift-template-service-broker.svc:443/brokers/template.openshift.io/v2/service_instances/76af97e3-5650-4583-ae85-27294677f88d?accepts_incomplete=true:
        dial tcp: lookup apiserver.openshift-template-service-broker.svc on [::1]:53:
        read udp [::1]:36349->[::1]:53: read: connection refused'
      reason: ErrorCallingProvision
      status: "False"
      type: Ready
    currentOperation: Provision
    deprovisionStatus: Required
    inProgressProperties:
      clusterServicePlanExternalID: 1750b5d6-0d11-11e8-906a-d094660d31fb
      clusterServicePlanExternalName: default
      parameterChecksum: 71118b9c659a984212608999d0323deda723b5cdba0805a0e350fc49931b6604
      parameters:
        DATABASE_SERVICE_NAME: <redacted>
        MARIADB_VERSION: <redacted>
        MEMORY_LIMIT: <redacted>
        MYSQL_DATABASE: <redacted>
        NAMESPACE: <redacted>
        VOLUME_CAPACITY: <redacted>
      userInfo:
        extra:
          scopes.authorization.openshift.io:
          - user:full
        groups:
        - system:authenticated:oauth
        - system:authenticated
        uid: ""
        username: admin
    operationStartTime: 2018-08-21T14:02:44Z
    orphanMitigationInProgress: false
    reconciledGeneration: 0
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

@Ladas (Author) commented:

Yeah, we can put it into extra or add a separate jsonb column for it. The extra column is for whatever else we need and do not have a column for. (For a start, we won't be able to order and search on the data in jsonb, since we don't have that coded in.)
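
(Purely as an illustration of that last point, not code in this PR: filtering on values nested inside a jsonb extra column needs raw Postgres jsonb operators, which is the part the report/search layer does not generate. The model and stored keys below are assumptions.)

# Hypothetical example; ContainerServiceInstance and the nested keys are assumptions.
# Reaching into the jsonb "extra" column requires a SQL fragment using the
# Postgres -> / ->> operators rather than a normal attribute condition:
ContainerServiceInstance.where("extra -> 'status' ->> 'currentOperation' = ?", "Provision")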

@Ladas (Author) commented:

@syncrou @mkanoor but given the large amount of hash data, it would be nice if you could tell me what we actually need, so we don't bloat our DB with data we aren't using for anything.

Contributor commented:

@Ladas - At this time I'm not fully sure if we know 100% what we need. I'm not opposed to jamming the full hash into the extra column for now so we have it, with the understanding that we'll adjust it and add columns once we're set on what we need. @mkanoor Thoughts?

t.timestamps
end

create_table :container_service_bindings, :id => :bigserial, :force => :cascade do |t|
Member commented:

According to the documentation included in this PR, a service binding represents a binding of a service instance to an application. Although the doc has a TODO to define what an application is, do we need this if we don't have a table for container_applications?

@Ladas (Author) commented:

this mentions application pods https://github.com/kubernetes-incubator/service-catalog/blob/master/docs/resources.md#servicebinding

right now, I am not sure what it is exactly :-)

@syncrou (Contributor) commented Aug 21, 2018:

@Ladas @gtanzillo - The binding was a way (via secrets) to provision multiple service instances, all of which are needed to form a single service. E.g. MediaWiki: provision the wiki and the db, so that when the service instance is provisioned it includes both the frontend and the db as a single provisioned entity. We'll want to include bindings in this schema, as they allow us to know all the parts of a provisioned service if the catalog (service class) has a binding defined.

@Ladas (Author) commented Aug 22, 2018:

Ok, so this is something we need?

Or we don't, because you've used past tense for 'The binding was a way'? :-)

@Fryguy (Member) commented Aug 21, 2018

My only question is whether we actually need all of this plumbing. The goal is just to provide provisionable things, so some of the detailed objects and relationships may not be necessary.

@Ladas (Contributor, Author) commented Aug 21, 2018

@Fryguy right, so right now this is all the models the service catalog offers. If @mkanoor and @syncrou can tell us what we'll need, we should be able to trim it down to just the necessary tables, attributes, and relations.

@Ladas (Contributor, Author) commented Aug 21, 2018

@Fryguy @agrare also, for the jsonb columns, I am not sure whether it's enough to have one extra column for all the structured JSON (or whatever extra stuff doesn't deserve its own column), or whether we need more jsonb columns.

Also, there is a limit on row size; it should be in the hundreds of MBs, but we should still probably make sure the JSON objects are not too huge. (I'd like to avoid storing this in BinaryBlob when it's literally JSON data we are storing.)
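
(As a purely hypothetical illustration of such a guard, not something in this PR; the limit and method name are made up:)

require "json"

# Hypothetical size guard before persisting a hash into a jsonb column.
MAX_EXTRA_BYTES = 1_000_000

def safe_extra(hash)
  hash.to_json.bytesize <= MAX_EXTRA_BYTES ? hash : nil # drop oversized payloads
end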

@Ladas (Contributor, Author) commented Aug 22, 2018

@cben btw, I am going to change resourceVersion of all container objects to integer (or bigint). Not sure why k8s stores it as a string when etcd's modifiedIndex is an integer (https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#concurrency-control-and-consistency).

We need something that is comparable (so we can say which data are newer), and we will support integer or timestamp values. (If the k8s implementation changes, it should still fit this.)

Does that sound ok?

@cben (Contributor) commented Aug 22, 2018

resourceVersion being an integer is a k8s implementation detail. Yes, that's how it works with the current etcd storage, but knowledgeable people have repeatedly told us not to rely on it.

Please read kubernetes/website#6540 (comment).

I agree not comparing resourceVersions is a pain, but I think we need to live with it... 🤕
We can and should do watch restarts in a monotonic way, such that we never rewind to older data than we already got. 📈
We might be able to assign our own monotonic versions or timestamps, with a limited scope (a single connection?)

What exactly do we need comparing for?

t.timestamps
end

create_table :container_secrets, :id => :bigserial, :force => :cascade do |t|
Contributor commented:

Is this for k8s Secret objects?
So far we have refrained from fetching them. MIQ having them would increase the attack surface of all software running in the cluster (and increase the minimal permissions MIQ needs; I'm not sure if the default setup already gives us access to secrets).
What's the use case?

@Ladas (Author) commented:

So I think we might need just the secret names, but I am not sure. We will definitely not be storing the secrets themselves. Maybe we don't need these at all, @syncrou @mkanoor?

@Ladas (Contributor, Author) commented Aug 22, 2018

@cben we will be saving the objects in parallel, so we need something that allows us to say which entity is newer (e.g. saving pods from watches while saving pods from a full refresh). It's not very nice of the k8s guys that we can't get clear versioning.

I've read https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#concurrency-control-and-consistency

They list the alternatives as a per-object counter or a timestamp. So it's still an integer or a timestamp, which seems to me like we should be good. The issue would be if they ever start counting again from 1...
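
(To illustrate the kind of comparison being discussed; this is a hypothetical sketch, not code from this PR, and the column and method names are made up:)

# Hypothetical newer-record check, assuming resource_version is persisted
# as a bigint column on the inventory table.
def newer_than_stored?(stored_record, incoming_resource_version)
  return true if stored_record.resource_version.nil?
  incoming_resource_version.to_i > stored_record.resource_version
end

# e.g. skip a stale watch event that raced with a full refresh:
# save_pod(pod_hash) if newer_than_stored?(existing_pod, pod_hash["metadata"]["resourceVersion"])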

@agrare (Member) commented Aug 22, 2018

@Ladas we talked with @gmcculloug and there is the potential that AWS will be added as another provider type supported by service catalog. Even if it doesn't happen right away I don't want to encode provider type specifics into generic service catalog models.

With that in mind, this looks very container-provider specific; I think this should be made as generic as possible at the model level.

@Ladas (Contributor, Author) commented Aug 23, 2018

@agrare so should I drop the container prefix?

Although it seems like we will still talk to k8s to get the data (https://aws.amazon.com/blogs/opensource/provision-aws-services-kubernetes-aws-service-broker/), so it would be consistent to keep prefixing all models that take data from containers with container_?

Or help me with how this will look. Will we need STI for the other providers? Or do we just treat them all as service catalog items and let k8s deal with the details? (@mkanoor should we be fetching brokers for this?)
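
(For illustration of the STI question only; none of this is in the PR. The class names below are made up, borrowing the ServiceOffering name the thread settles on later.)

# Hypothetical STI sketch: a generic base model with a string `type` column,
# subclassed per provider instead of container-specific tables.
class ServiceOffering < ApplicationRecord
end

class ContainerServiceOffering < ServiceOffering
  # e.g. Kubernetes / OpenShift service catalog classes
end

class CloudServiceOffering < ServiceOffering
  # e.g. AWS Service Catalog products
end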

@agrare (Member) commented Aug 23, 2018

so should I drop the container prefix?

I think we should drop the container specifics like container_project, container_service_broker (is this a thing already and I missed it? I don't see it added by this PR), etc., and let's find out from @syncrou et al. what properties they need from a generic service catalog item.

Although, it seems like we will still talk to k8s to get the data

Right, the collector/parser will have to talk to the provider to refresh these just like any inventory object.

Or do we just take them all as service catalog items and k8s deals with the details

Ohhh, I see: you're thinking we would be using k8s for provisioning the AWS instances? I think that's the discrepancy. I'm thinking this looks more like cloud provisioning, where the provider collects provisionable objects and handles the operations to provision them on that provider, not cross-provider provisioning.

@gmcculloug correct me if I got that wrong but I think if we added support for the AWS catalog it would be provisioned on AWS by the AWS provider code.

@gmcculloug (Member) commented:

@agrare You are correct, the provider that owns the resource would be the one responsible for doing the provisioning.

@agrare (Member) commented Aug 23, 2018

@Ladas so let's change container_service_classes to catalog_items and drop the other three tables for now.

@gmcculloug (Member) commented:

I do not have a better name at the moment, but catalog_items seems like it would introduce confusion as the UI references service_templates as catalog items today. I'll give this more thought, but would love to hear other suggestions before settling on that name.

@agrare (Member) commented Aug 23, 2018

@gmcculloug that's a good point, would the existing Classic UI show these in addition to ServiceTemplates?

@gmcculloug (Member) commented:

There is no current expectation that they would, but I still think we should avoid the naming collision if possible.

There is always a possibility that they will want to expose these at some point in the future, which would mean we would likely create service_templates (shown in the "Catalog Items" section of the UI) to expose the catalog_items to end users. 😕

@Ladas (Contributor, Author) commented Aug 24, 2018

@agrare right, so as I was saying before, @mkanoor and @syncrou said they need: [service_class, plan_parameters, service_instance]. So I think these 3 tables are now the minimum of what is required.

Naming-wise, it seems the Open Service Broker spec actually calls it Service Offering, not Class (https://github.com/openservicebrokerapi/servicebroker/blob/master/spec.md#terminology), so we could also use that. Any other ideas?

Remove container specific references, since these services
needs to be general
@agrare (Member) commented Aug 24, 2018

@Ladas yeah, I'm good with ServiceOffering; I don't want to spend the entire time talking about names, haha.

@agrare (Member) commented Aug 24, 2018

@gmcculloug are you good with these names?

@gmcculloug (Member) commented:

After talking a bit with @syncrou, we were thinking that we use service_offerings and replace service_plans with service_parameters.

I think it is too early to introduce service_instances, as the data is not needed at this moment and we may not need to add new tables for it. For example, looking at AWS, their Service Catalog creates stacks, which we already have modeled, so I do not see why we would not reuse that modeling.

Might be worth another meeting to discuss.

@Ladas (Contributor, Author) commented Aug 24, 2018

@gmcculloug @syncrou I'm not sure service_parameters is very descriptive; it's more of a service_parameters_set, since the offering can have many plans, each having its own parameter set.

Ok, I'll remove the ServiceInstance for now and add it in a follow-up, where I'll link the ServiceInstance to a stack (AWS) or a deployment (k8s). I think we will need this abstraction, but it might turn out we don't.

Remove service_instances tables for now, we'll add it back
if it'll be needed for crosslink with the deployed stuff
@gmcculloug (Member) commented:

The change to service_parameters_set makes sense. 👍

@agrare (Member) commented Aug 24, 2018

It's a little late for @Ladas, so I pushed the change to his branch :)

@Ladas (Contributor, Author) commented Aug 27, 2018

@agrare looks good, can you merge it now? :-)

@Ladas (Contributor, Author) commented Aug 27, 2018

@agrare I just changed it to service_parameters_sets, since the table name should be plural.
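
(For reference, a rough, hypothetical sketch of the shape the tables end up with after the renames above; the column lists are assumptions, not the merged diff, which lives in db/migrate/20180821112856_create_service_catalog_tables.rb.)

# Illustrative only; exact columns differ in the actual migration.
create_table :service_offerings, :id => :bigserial do |t|
  t.bigint :ems_id   # assumed link to the provider
  t.string :ems_ref
  t.string :name
  t.jsonb  :extra
  t.timestamps
end

create_table :service_parameters_sets, :id => :bigserial do |t|
  t.bigint :service_offering_id # assumed link to the offering it parameterizes
  t.string :ems_ref
  t.string :name
  t.jsonb  :extra
  t.timestamps
end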

@miq-bot (Member) commented Aug 27, 2018

Checked commits Ladas/manageiq-schema@45b5eb1~...bc3dcdd with ruby 2.3.3, rubocop 0.52.1, haml-lint 0.20.0, and yamllint 1.10.0
1 file checked, 2 offenses detected

db/migrate/20180821112856_create_service_catalog_tables.rb

@agrare agrare merged commit d0838db into ManageIQ:master Aug 27, 2018
9 participants