
BUILD-260: auto provision tekton via tekton operator if necessary #18

Merged

Conversation

gabemontero
Member

@gabemontero gabemontero commented Aug 23, 2021

Changes

Fixes #8

/kind feature

Submitter Checklist

  • [x] Includes tests if functionality changed/was added
  • [x] Includes docs if changes are user-facing
  • [x] Set a kind label on this PR
  • [x] Release notes block has been filled in, or marked NONE

Release Notes

The Shipwright operator will now install Tekton as needed via the Tekton operator

/assign @adambkaplan
/assign @otaviof

I've marked this WIP, guys, as I've only made the coding changes so far, so if you want to hold off on any review, I understand. But if you have enough cycles to take a quick glance and spot any glaring red flags, do feel free to comment here as you see fit.

@openshift-ci openshift-ci bot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. release-note kind/feature Categorizes issue or PR as related to a new feature. labels Aug 23, 2021
@openshift-ci openshift-ci bot requested review from HeavyWombat and mattcui August 23, 2021 22:14
@gabemontero gabemontero force-pushed the auto-prov-tekton-olm branch 3 times, most recently from 6c1f94e to 8594f4a Compare August 24, 2021 14:10
@gabemontero gabemontero force-pushed the auto-prov-tekton-olm branch from 8594f4a to 6c8f79a Compare August 24, 2021 21:53
@gabemontero gabemontero changed the title WIP: BUILD-260: auto provision tekton via tekton operator if necessary BUILD-260: auto provision tekton via tekton operator if necessary Aug 24, 2021
@openshift-ci openshift-ci bot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Aug 24, 2021
@gabemontero
Member Author

OK just got clean PR runs. I am retrying after pushing a brief README update.

I've removed WIP @otaviof @adambkaplan to indicate this should be "ready for review"

that said, I'll

/hold

for now, as I still have not gone through the local dev flow of actually installing the tekton operator (without having it install tekton yet) and then running this PR's changes

I'll unhold once that is done.

But in the meantime, I think it is OK to start gathering comments from you both, and to start pulling in those changes as well.

thanks

@openshift-ci openshift-ci bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Aug 24, 2021
@gabemontero
Member Author

OK, the rerun of default_test.go (the ginkgo-only test that launches an etcd server) flaked this time (it passed last time, before the README update).

I've seen weirdness locally as well.

I'd like to cover this in office hours (I'm not sure how to debug these at first blush) when we can next all talk.

thanks

@gabemontero
Member Author

talked to @otaviof today

  • he has seen some flakes in the past too, and the latest CI error looks like a timing/race condition with some of the delete-related tests; if they persist enough to block the merge I'll try to fix them with separate commits in this PR; otherwise I may just open an issue for them; or, if I can sort out the fix quickly while working this PR, I will
  • my local env needs some work; I have an old operator-sdk that is possibly getting in the way; also, he did not have to download/install etcd, since his local test runs use the etcd that the test scripts download to testbin

@gabemontero
Member Author

yeah, after removing operator-sdk and etcd on my system, make test works fine for me locally :-)

@gabemontero
Member Author

ok, pushed a commit for the last flake found .... e2e's are green

@gabemontero
Member Author

ran on kind; need to update RBAC so these new changes can avoid permission problems like:

2021-08-25T22:02:36.868Z ERROR controllers.ShipwrightBuild Problem listing TektonConfigs {"namespace": "", "name": "shipwright-operator", "error": "tektonconfigs.operator.tekton.dev is forbidden: User \"system:serviceaccount:shipwright-operator:default\" cannot list resource \"tektonconfigs\" in API group \"operator.tekton.dev\" at the cluster scope"}

will release the hold once I've sorted this out and pushed the updates
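
For reference, in a kubebuilder/operator-sdk style project the usual way to grant the operator's service account such permissions is an RBAC marker on the reconciler, followed by make manifests to regenerate the ClusterRole. A minimal sketch (the reconciler type is a hypothetical stand-in and the exact verb list is an assumption, not this PR's final rule):

package controllers

import (
    "context"

    ctrl "sigs.k8s.io/controller-runtime"
)

// exampleReconciler is a hypothetical stand-in for the operator's reconciler type,
// present only so this sketch compiles on its own.
type exampleReconciler struct{}

// The marker below mirrors the forbidden error above: it asks controller-gen to emit
// a ClusterRole rule allowing the operator to read and create TektonConfig resources.
//+kubebuilder:rbac:groups=operator.tekton.dev,resources=tektonconfigs,verbs=get;list;watch;create

// Reconcile is shown only as the conventional attachment point for RBAC markers.
func (r *exampleReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    return ctrl.Result{}, nil
}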

@gabemontero
Member Author

And the end-to-end flow has been verified on kind

Projects before flow:

gmontero ~/shipwright-notes $ oc projects 
You have access to the following projects and can switch between them with ' project <projectname>':

default
kube-node-lease
kube-public
kube-system
local-path-storage
olm
operators

Then

From the shipwright operator log: 2021-08-25T22:34:43.067Z INFO controllers.ShipwrightBuild A Tekton Operator config with the 'lite' profile has been applied to the cluster {"namespace": "", "name": "shipwright-operator"}

Projects after and pods:

gmontero ~/shipwright-notes $ oc projects 
You have access to the following projects and can switch between them with ' project <projectname>':

default
kube-node-lease
kube-public
kube-system
local-path-storage
olm
operators
shipwright-build
shipwright-operator
tekton-pipelines
gmontero ~/shipwright-notes $ oc get pods -n tekton-pipelines 
NAME                                             READY   STATUS    RESTARTS   AGE
tekton-operator-proxy-webhook-6d484d65dd-knhqb   1/1     Running   0          36s
tekton-pipelines-controller-697f689796-hkl8t     1/1     Running   0          37s
tekton-pipelines-webhook-74548c7949-m6hxc        1/1     Running   0          37s
gmontero ~/shipwright-notes $ oc get pods -n shipwright-build 
NAME                                           READY   STATUS    RESTARTS   AGE
shipwright-build-controller-54888d7796-tkth5   1/1     Running   0          48s
gmontero ~/shipwright-notes $ oc get pods -n shipwright-operator
NAME                                           READY   STATUS    RESTARTS   AGE
operator-controller-manager-6576488969-h22nl   2/2     Running   0          4m24s
gmontero ~/shipwright-notes $ oc get tektonconfigs 
NAME     READY   REASON
config   True    
gmontero ~/shipwright-notes $ 

/hold cancel

@adambkaplan @otaviof PTAL .... let's review and do final iterations so we can merge :-)

@openshift-ci openshift-ci bot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Aug 25, 2021
@@ -68,7 +68,10 @@ var _ = g.Describe("Reconcile default ShipwrightBuild installation", func() {
o.Expect(err).NotTo(o.HaveOccurred())

err = k8sClient.Delete(ctx, build, &client.DeleteOptions{})
o.Expect(err).NotTo(o.HaveOccurred())
// the delete e2e's can delete this object before this AfterEach runs
if !errors.IsNotFound(err) {
Member

Checking gomega examples, we can assert against multiple conditions, which makes this if statement become:

o.Expect(err).To(o.BeNil(), o.MatchError(metav1.StatusReasonNotFound))

Member Author

ooh neat ... will update in a bit, thanks

Member Author

updated @otaviof

case tektonAPIErr == nil && tektonOPErr != nil:
    logger.Info("Tekton has been installed without use of its associated operator.")
case errors.IsNotFound(tektonAPIErr) && errors.IsNotFound(tektonOPErr):
    //TODO should we error out here or proceed and give the user the opportunity to install Tekton after Shipwright?
Member

I have a few use-cases that could be impacted here:

  • Tekton is already present in the cluster: In this case, this operator should only inspect if the version installed is compatible with Shipwright Controller;
  • Tekton is already present in the cluster, but outdated: How would we handle Tekton upgrades?
  • Tekton is ignored: This operator would not try to inspect or install Tekton;

I think we will need an extra flag to accommodate users in those use cases. Should we tackle this in an upcoming pull-request?

Member Author

I think the first 2 bullets fall under the open questions and subsequent card items already called out in BUILD-260 @otaviof

IMO it falls under installing (or upgrading) the operator itself. I initially prototyped an idea around that using manifestival, but @adambkaplan and I discussed it and agreed not to pursue that or other approaches with this PR at this time.

For the third bullet, yeah, that IMO most directly falls under my TODO comment/question. I'm on the fence. How strongly do you feel about it?

@adambkaplan thoughts on all ^^

Member

  • Tekton is already present in the cluster: In this case, this operator should only inspect if the version installed is compatible with Shipwright Controller;
  • Tekton is already present in the cluster, but outdated: How would we handle Tekton upgrades?

I think both of these should be addressed in a follow-up issue. For an MVP we just want to detect if Tekton was installed by the Tekton operator.

  • Tekton is ignored: This operator would not try to inspect or install Tekton;

Not sure what is meant by "Tekton is ignored" - does this mean the cluster installed the Tekton operator, but didn't install Tekton pipelines?

Member

To answer the root TODO - I think we should error here.

Member Author

  • Tekton is already present in the cluster: In this case, this operator should only inspect if the version installed is compatible with Shipwright Controller;
  • Tekton is already present in the cluster, but outdated: How would we handle Tekton upgrades?

I think both of these should be addressed in a follow-up issue. For an MVP we just want to detect if Tekton was installed by the Tekton operator.

  • Tekton is ignored: This operator would not try to inspect or install Tekton;

Not sure what is meant by "Tekton is ignored" - does this mean the cluster installed the Tekton operator, but didn't install Tekton pipelines?

My immediate interpretation was "Tekton is ignored" == the current state of the operator. No inspection for Tekton is made. The shipwright operator moves forward assuming it is there, or that the shipwright controller will function once it is put there.

But @otaviof please clarify if ^^ is not what you meant.

Member Author

I will update @adambkaplan to error out in next push.
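
As background for the detection being discussed, one way (not necessarily how this PR does it) to check whether Tekton Pipelines and/or the Tekton operator are present is to look for their CRDs; the helper name and the use of the apiextensions clientset here are illustrative assumptions:

package controllers

import (
    "context"

    apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
    apierrors "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// crdExists reports whether a CRD with the given name is registered in the cluster.
func crdExists(ctx context.Context, client apiextensionsclient.Interface, name string) (bool, error) {
    _, err := client.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
    if apierrors.IsNotFound(err) {
        return false, nil
    }
    if err != nil {
        return false, err
    }
    return true, nil
}

// Usage sketch: the TaskRun CRD signals Tekton Pipelines itself, while the
// TektonConfig CRD signals the Tekton operator.
//   pipelinesPresent, _ := crdExists(ctx, client, "taskruns.tekton.dev")
//   operatorPresent, _ := crdExists(ctx, client, "tektonconfigs.operator.tekton.dev")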

if list == nil || len(list.Items) == 0 {
    tektonOperatorCfg := &tektonoperatorv1alpha1.TektonConfig{}
    tektonOperatorCfg.Name = "config"
    tektonOperatorCfg.Spec.TargetNamespace = "tekton-pipelines"
Member

Should we make this namespace configurable?

Member Author

I could go either way. How strongly do you feel about it?

@adambkaplan thoughts?

Member Author

I did do a survey of both our operator and tekton's upstream .... we have no specific CRD API and associated clients created to capture such config in an operator-like fashion with this current operator ... just a few command line options that map to the k8s controller-runtime options

Unless we consider a command line option for specifying the target namespace for tekton resources sufficient configuration for this PR / Jira, my feeling is that introducing all ^^ should be a separate PR / Jira

thoughts @adambkaplan @otaviof ?

Member

I agree we should take this up as a separate issue. This feels like it is more of a concern for a downstream distribution.

Member

Filed #20
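
For context, a minimal sketch of provisioning Tekton through its operator along the lines of the snippet above: create a cluster-scoped TektonConfig named config with the lite profile (matching the log line quoted earlier) targeting tekton-pipelines. The Spec.Profile field name and the import path are assumptions on my part, not an excerpt of this PR:

package controllers

import (
    "context"

    tektonoperatorv1alpha1 "github.com/tektoncd/operator/pkg/apis/operator/v1alpha1"
    "sigs.k8s.io/controller-runtime/pkg/client"
)

// createLiteTektonConfig asks the Tekton operator to install a minimal ("lite")
// Tekton Pipelines deployment into the tekton-pipelines namespace.
func createLiteTektonConfig(ctx context.Context, c client.Client) error {
    cfg := &tektonoperatorv1alpha1.TektonConfig{}
    cfg.Name = "config"
    cfg.Spec.Profile = "lite"
    cfg.Spec.TargetNamespace = "tekton-pipelines"
    return c.Create(ctx, cfg)
}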

@gabemontero gabemontero force-pushed the auto-prov-tekton-olm branch from 4923208 to a8b0d7c Compare August 26, 2021 12:31
Member

@adambkaplan adambkaplan left a comment

A few more minor changes requested. Otherwise looks good in its current form - most of the open questions here should be addressed in follow-up issues.

Comment on lines +10 to +12
- kind: ServiceAccount
  name: default
  namespace: system
Member

Binding to the default system service account? Don't we want to bind to the operator's SA?

Member Author

ok, @adambkaplan and I talked in office hours ... when I looked at the file, I saw I had merged in the tekton operator's content vs. ours

now, we are currently following the tekton operator's convention of using the default SA in the system namespace

@adambkaplan and I agreed we won't change things with this PR. But he will be opening upstream issues / Jiras for the future work cited in this PR, and creating a non-default SA for our operator will be one of those items.

Member

Filed #19

case tektonAPIErr == nil && tektonOPErr != nil:
    logger.Info("Tekton has been installed without use of its associated operator.")
case errors.IsNotFound(tektonAPIErr) && errors.IsNotFound(tektonOPErr):
    //TODO should we error out here or proceed and give the user the opportunity to install Tekton after Shipwright?
Member

  • Tekton is already present in the cluster: In this case, this operator should only inspect if the version installed is compatible with Shipwright Controller;
  • Tekton is already present in the cluster, but outdated: How would we handle Tekton upgrades?

I think both of these should be addressed in a follow-up issue. For an MVP we just want to detect if Tekton was installed by the Tekton operator.

  • Tekton is ignored: This operator would not try to inspect or install Tekton;

Not sure what is meant by "Tekton is ignored" - does this mean the cluster installed the Tekton operator, but didn't install Tekton pipelines?

controllers/shipwrightbuild_controller.go Outdated (resolved)
controllers/shipwrightbuild_controller.go Outdated (resolved)
case tektonAPIErr == nil && tektonOPErr != nil:
    logger.Info("Tekton has been installed without use of its associated operator.")
case errors.IsNotFound(tektonAPIErr) && errors.IsNotFound(tektonOPErr):
    //TODO should we error out here or proceed and give the user the opportunity to install Tekton after Shipwright?
Member

To answer the root TODO - I think we should error here.

if tektonOpCRD.Labels == nil {
    retErr := fmt.Errorf("the CRD TektonConfig does not have labels set, including its version")
    logger.Error(retErr, "Problem confirming Tekton Operator version")
    return RequeueWithError(retErr)
Member

I don't think we can requeue here - this is a permanent failure.

Member Author

will change to perm fail / error out in next push

if !exists {
    retErr := fmt.Errorf("the CRD TektonConfig does not have labels set, including its version")
    logger.Error(retErr, "Problem confirming Tekton Operator version")
    return RequeueWithError(retErr)
Member

Similar - this is a permanent failure.

Member Author

will change to perm fail / error out in next push
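
To illustrate the distinction being settled here (RequeueWithError is this operator's own helper; the two functions below are a generic controller-runtime sketch, not the PR's final code): returning an error or Requeue: true re-enqueues the request with backoff, while returning an empty Result with a nil error stops retrying, which is what you want for a permanent failure.

package controllers

import (
    "github.com/go-logr/logr"

    ctrl "sigs.k8s.io/controller-runtime"
)

// requeueWithError re-enqueues the request: controller-runtime retries (with backoff)
// whenever a non-nil error is returned.
func requeueWithError(err error) (ctrl.Result, error) {
    return ctrl.Result{Requeue: true}, err
}

// permanentFailure records the problem but returns no error, so the request is not
// retried; appropriate when another attempt cannot change the outcome, e.g. the
// TektonConfig CRD missing its version label.
func permanentFailure(logger logr.Logger, err error) (ctrl.Result, error) {
    logger.Error(err, "permanent failure, not requeueing")
    return ctrl.Result{}, nil
}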

controllers/shipwrightbuild_controller.go Outdated (resolved)
// replace v0 with v1 to make k8s version check happy
if strings.HasPrefix(value, "v0") {
    value = strings.ReplaceAll(value, "v0", "v1")
}
version, err := version.ParseSemantic(value)
Member

Curious why this is the case - v0.6.1 is a perfectly valid semver.

Member Author

it has been that way since this utility landed in apimachinery: see kubernetes/apimachinery@0e7924a#diff-7f2dfde88efef53aa3c8aba302b41cb5b2b9037095700fa5a3ecb461a9123374 and the godoc for ParseGeneric. There, at least, the intent is explicit.

I think the code was in k8s/k8s before that, though I did not find it immediately, and I declared due diligence done 10 minutes into that search.

If your curiosity prompts you to continue the search and find out @adambkaplan do let us know :-)
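
For reference, a small sketch of comparing the Tekton operator's version label with k8s.io/apimachinery's version utilities, using the more lenient ParseGeneric discussed above; the minimum version constant is purely illustrative and this is not necessarily the fix that was eventually pushed:

package controllers

import (
    "fmt"

    "k8s.io/apimachinery/pkg/util/version"
)

// minTektonOperatorVersion is an illustrative minimum; the real required version
// lives in the operator's own code/config.
const minTektonOperatorVersion = "v0.49.0"

// checkOperatorVersion parses the version label found on the TektonConfig CRD and
// compares it against a minimum, using the lenient ParseGeneric parser.
func checkOperatorVersion(labelValue string) error {
    have, err := version.ParseGeneric(labelValue)
    if err != nil {
        return fmt.Errorf("could not parse Tekton operator version %q: %w", labelValue, err)
    }
    want := version.MustParseGeneric(minTektonOperatorVersion)
    if !have.AtLeast(want) {
        return fmt.Errorf("tekton operator version %s is older than required %s", have, want)
    }
    return nil
}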

Member

if list == nil || len(list.Items) == 0 {
    tektonOperatorCfg := &tektonoperatorv1alpha1.TektonConfig{}
    tektonOperatorCfg.Name = "config"
    tektonOperatorCfg.Spec.TargetNamespace = "tekton-pipelines"
Member

I agree we should take this up as a separate issue. This feels like it is more of a concern for a downstream distribution.

@gabemontero gabemontero force-pushed the auto-prov-tekton-olm branch from 3b2b99b to f72a901 Compare August 31, 2021 22:27
@gabemontero
Member Author

ok @adambkaplan thanks for the confirmation on the "outside scope of this jira" items

I've pushed, as separate commits, a squashed commit of your log update, and then a separate commit with the changes for when to error out vs. requeue, plus the unit/integration test changes to account for that

I'll reorg/squash once we are good on those changes

the only comment that did not result in a code change is the one about there being no deployment-specific SAs for the tekton/shipwright operators

@gabemontero
Member Author

@adambkaplan @otaviof - another item I did not add to the PR, since it was not explicitly called out in BUILD-260, is the creation of the target namespace for shipwright.

If you look at

// filtering out namespace resource, so it does not create new namespaces accidentally, and
// transforming object to target the namespace informed on the CRD (.spec.namespace)
manifest, err := r.Manifest.
    Filter(manifestival.Not(manifestival.ByKind("Namespace"))).
    Transform(manifestival.InjectNamespace(targetNamespace))
if err != nil {
    logger.Error(err, "Transforming manifests, injecting namespace")
    return RequeueWithError(err)
}
there was that comment about not "accidentally" creating a new namespace.
And I confirmed that manifestival call there only changes the namespace ref in the other objects being created by manifestival.

Also, if you look at the top level usage example at https://github.com/shipwright-io/operator/blob/a9c5e270d1452163f0a162f7e414bb1940ad03a0/README.md#usage the namespace creation is there.

So with that background, my question: do we want to take on creating the namespace in the operator ?

Pluses: it is arguably an ease-of-use improvement
Negatives: giving the operator the ability to create namespaces .... would security-conscious folks care?

WDYT?

@otaviof
Member

otaviof commented Sep 2, 2021

So with that background, my question: do we want to take on creating the namespace in the operator ?

@gabemontero, @adambkaplan In my opinion namespaces should exist before anything is installed, so maybe we could simply pass the namespace name as a configuration entry, instead of creating the namespace.

@gabemontero
Member Author

So with that background, my question: do we want to take on creating the namespace in the operator ?

@gabemontero, @adambkaplan In my opinion namespaces should exist before anything is installed, so maybe we could simply pass the namespace name as a configuration entry, instead of creating the namespace.

Either I'm missing something you are getting at, or we are not on the same page @otaviof ..... let me try to at least clarify what I am saying.

To do so, let me post the current yaml contents at https://github.com/shipwright-io/operator/blob/a9c5e270d1452163f0a162f7e414bb1940ad03a0/README.md#usage 👍

---
apiVersion: v1
kind: Namespace
metadata:
  name: shipwright-build
spec: {}

---
apiVersion: operator.shipwright.io/v1alpha1
kind: ShipwrightBuild
metadata:
  name: shipwright-operator
spec:
  targetNamespace: shipwright-build
  namespace: default

So the namespace in question, which we currently have the user create prior to poking the operator, is shipwright-build

Then, in shipwrightbuild.operator.shipwright.io.spec.targetNamespace, we again specify shipwright-build

So when you say "pass the namespace name as a configuration entry", do you mean we should capture it? If so, we are already doing that, right? Or did you mean something else?

Lastly, what I'm saying is making sure that namespace exists and creating it if need be before calling manifestival.

Given my clarification, are you saying we should just not do that ?

@gabemontero
Member Author

ok, target namespace creation added (the e2e's work for me locally; simply removing the creation of the target namespace in the e2e's exercises that path)

commits squashed

PTAL @adambkaplan
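
For reference, a minimal sketch of the kind of target-namespace creation described here, treating an already-existing namespace as success; the helper name is illustrative and this is not necessarily the PR's exact code:

package controllers

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/errors"
    "sigs.k8s.io/controller-runtime/pkg/client"
)

// ensureTargetNamespace creates the ShipwrightBuild target namespace if it does not
// already exist, so the manifestival apply that follows has somewhere to land.
func ensureTargetNamespace(ctx context.Context, c client.Client, name string) error {
    ns := &corev1.Namespace{}
    ns.Name = name
    if err := c.Create(ctx, ns); err != nil && !errors.IsAlreadyExists(err) {
        return err
    }
    return nil
}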

@gabemontero
Member Author

curious, that unit test failure is the assertion you suggested @otaviof, o.Expect(err).To(o.BeNil(), o.MatchError(metav1.StatusReasonNotFound)), which has passed multiple times since I added it

a flake?

we need to update the perms in this repo so PR authors can retest ... I cannot currently @adambkaplan

can you do it ?

@otaviof
Member

otaviof commented Sep 3, 2021

curious, that unit test failure is the assertion you suggested @otaviof, o.Expect(err).To(o.BeNil(), o.MatchError(metav1.StatusReasonNotFound)), which has passed multiple times since I added it

a flake?

we need to update the perms in this repo so PR authors can retest ... I cannot currently @adambkaplan

can you do it ?

Might be a flake, indeed. And I second the need to be able to trigger CI again.

/lgtm

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Sep 3, 2021
Member

@adambkaplan adambkaplan left a comment

One nit with the use of the semver library, otherwise looks good.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: namespace-role
Member

@otaviof I think our CI didn't catch this because we aren't testing on a "real" cluster yet.

Comment on lines +10 to +12
- kind: ServiceAccount
  name: default
  namespace: system
Member

Filed #19

// replace v0 with v1 to make k8s version check happy
if strings.HasPrefix(value, "v0") {
    value = strings.ReplaceAll(value, "v0", "v1")
}
version, err := version.ParseSemantic(value)
Member

if list == nil || len(list.Items) == 0 {
    tektonOperatorCfg := &tektonoperatorv1alpha1.TektonConfig{}
    tektonOperatorCfg.Name = "config"
    tektonOperatorCfg.Spec.TargetNamespace = "tekton-pipelines"
Member

Filed #20

@adambkaplan
Member

Interesting test panic here:

   Test Panicked
    interface conversion: interface {} is *matchers.MatchErrorMatcher, not string
    /opt/hostedtoolcache/go/1.15.15/x64/src/runtime/iface.go:261

    Full Stack Trace
    github.com/onsi/gomega/internal/assertion.(*Assertion).buildDescription(0xc000601180, 0xc0008a5c10, 0x1, 0x1, 0x2f1, 0x0)
    	/home/runner/work/operator/operator/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:60 +0x187
    github.com/onsi/gomega/internal/assertion.(*Assertion).match(0xc000601180, 0x1ba2b80, 0x27114e0, 0x1, 0xc0008a5c10, 0x1, 0x1, 0xc0008a5c10)
    	/home/runner/work/operator/operator/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:78 +0x14f
    github.com/onsi/gomega/internal/assertion.(*Assertion).To(0xc000601180, 0x1ba2b80, 0x27114e0, 0xc0008a5c10, 0x1, 0x1, 0x1)
    	/home/runner/work/operator/operator/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:38 +0xc7
    github.com/shipwright-io/operator/controllers.glob..func1.2()
    	/home/runner/work/operator/operator/controllers/default_test.go:157 +0x407
    github.com/shipwright-io/operator/controllers.TestAPIs(0xc00047c900)
    	/home/runner/work/operator/operator/controllers/suite_test.go:46 +0xed
    testing.tRunner(0xc00047c900, 0x1a56a00)
    	/opt/hostedtoolcache/go/1.15.15/x64/src/testing/testing.go:1123 +0xef
    created by testing.(*T).Run
    	/opt/hostedtoolcache/go/1.15.15/x64/src/testing/testing.go:1168 +0x2b3

@gabemontero
Member Author

Interesting test panic here:


yeah that is what I noted in #18 (comment) @adambkaplan

we'll see what happens when I push the semver change in a few minutes and trigger new tests

test for TaskRun CRD presence, create/configure tekton operator as needed
README update
fix race on delete in integration default_test.go
add tektonconfig rbac
add namespace rbac;
create target namespace if not present
make bundle
@openshift-ci openshift-ci bot removed the lgtm Indicates that a PR is ready to be merged. label Sep 3, 2021
@gabemontero
Member Author

semver fix pushed @adambkaplan PTAL thanks

@gabemontero
Member Author

Interesting test panic here:


yeah that is what I noted in #18 (comment) @adambkaplan

we'll see what happens when I push the semver change in a few minutes and trigger new tests

yeah it ran clean this time @adambkaplan @otaviof

see https://github.com/shipwright-io/operator/pull/18/checks?check_run_id=3509444530
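
For future reference on the panic quoted above: gomega's To() treats everything after the first matcher as an optional description and formats it as a string when the assertion fails, which is consistent with the interface-conversion panic only showing up when the delete flake actually hits. A sketch of an equivalent check that keeps both conditions inside a single combined matcher (an illustration, not necessarily what the test ended up using):

package controllers

import (
    o "github.com/onsi/gomega"
    "k8s.io/apimachinery/pkg/api/errors"
)

// expectNilOrNotFound asserts that err is either nil or a Kubernetes NotFound error,
// keeping both conditions inside one combined matcher so nothing is passed in To's
// optional-description slot (the slot gomega formats as a string on failure, which is
// the likely source of the interface-conversion panic above).
func expectNilOrNotFound(err error) {
    o.Expect(err).To(o.Or(
        o.BeNil(),
        o.WithTransform(errors.IsNotFound, o.BeTrue()),
    ))
}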

Member

@adambkaplan adambkaplan left a comment

/approve

/lgtm

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Sep 3, 2021
@openshift-ci
Contributor

openshift-ci bot commented Sep 3, 2021

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: adambkaplan

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Sep 3, 2021
@openshift-merge-robot openshift-merge-robot merged commit 32476e2 into shipwright-io:main Sep 3, 2021
@gabemontero gabemontero deleted the auto-prov-tekton-olm branch September 3, 2021 19:50