
Regression testing #119

Open
KellyStathis opened this issue Jun 15, 2020 · 6 comments

Comments

@KellyStathis (Collaborator)

No description provided.

@axfelix (Owner) commented Jun 15, 2020

This would be relatively simple to implement if we had a non-production server that scheduled harvester runs and logged when results go missing. We do this in production to an extent already, but it should really be something that blocks commits to master.
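
For illustration, a rough sketch of such a scheduled check: it logs repositories with missing results and exits non-zero so a CI job could fail the build. The "repositories" and "records" table names are assumptions for the sketch, not the harvester's real schema.

```python
#!/usr/bin/env python3
"""Scheduled post-harvest check: log repositories that came back empty.

A minimal sketch only; it assumes the run writes to an SQLite database with
"repositories" and "records" tables keyed by repository_url, which may not
match the harvester's actual schema.
"""
import logging
import sqlite3
import sys

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")


def check_for_missing_results(db_path):
    """Return the list of configured repositories with zero harvested records."""
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            """
            SELECT r.repository_url, COUNT(rec.local_identifier)
            FROM repositories r
            LEFT JOIN records rec ON rec.repository_url = r.repository_url
            GROUP BY r.repository_url
            """
        ).fetchall()
    finally:
        conn.close()

    empty = [url for url, count in rows if count == 0]
    for url in empty:
        logging.warning("No records harvested from %s", url)
    return empty


if __name__ == "__main__":
    # A non-zero exit makes the result visible to cron/Jenkins, so the same
    # script could later be used to block a merge to master.
    sys.exit(1 if check_for_missing_results(sys.argv[1]) else 0)
```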

@KellyStathis (Collaborator, Author)

In theory, would we limit the testing to harvesting only certain repositories, or a limited number of records from each? I could see commit approval taking a while if it does a full run each time. I've never set up anything like this before, though!

@axfelix (Owner) commented Jun 15, 2020

I would want to see at least a representative sample, if not a truly full run, every time we try to push changes to master, yeah.

@axfelix (Owner) commented Jun 16, 2020

Best idea so far: use the existing Jenkins instance set up for Radiam/FRDR to do a full run against a fresh SQLite database and diff it against the previous master's full-run SQLite, every time something is merged into master. This would be fairly doable, I think, though it would take a bit of work.
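
A minimal sketch of what that diff step could look like, assuming each run database has a "records" table with repository_url and local_identifier columns (the actual schema may differ):

```python
#!/usr/bin/env python3
"""Diff two harvester SQLite runs and fail (exit 1) if records disappeared.

Sketch only: the "records" table and its column names are assumptions and
would need to be adjusted to the harvester's real schema.
"""
import sqlite3
import sys


def record_ids(db_path):
    """Return the set of (repository_url, local_identifier) pairs in a run."""
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            "SELECT repository_url, local_identifier FROM records"
        ).fetchall()
    finally:
        conn.close()
    return set(rows)


def main(old_db, new_db):
    old_ids = record_ids(old_db)
    new_ids = record_ids(new_db)

    missing = old_ids - new_ids   # present in the last master run, gone now
    added = new_ids - old_ids     # newly harvested records (informational)

    print(f"{len(added)} records added, {len(missing)} records missing")
    for repo, identifier in sorted(missing):
        print(f"MISSING: {repo} {identifier}")

    # Non-zero exit lets Jenkins mark the build as failed and block the merge.
    return 1 if missing else 0


if __name__ == "__main__":
    sys.exit(main(sys.argv[1], sys.argv[2]))
```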

@axfelix (Owner) commented Jul 30, 2020

FYI so I don't forget -- this is mostly set up and working on https://jenkins.frdr.ca:9443/login?from=%2Fjob%2Ffrdr_harvest%2F, but the CI hooks aren't firing. Need to look into that.

@axfelix (Owner) commented Aug 27, 2020

We should also check all of our local URLs to ensure they don't return 404s, which would help catch cases where our deletes aren't persisting to Globus properly. This should probably be a separate job: the Jenkins job linked above is set up to trigger on pull requests, while the delete checking should probably just run on a regular cron.
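
For reference, a rough cron-able sketch of that URL check. It assumes the public URL for each record can be read from an "item_url" column in the harvester's SQLite database; that column name is an assumption, not the real schema.

```python
#!/usr/bin/env python3
"""Cron job sketch: request every local record URL and report 404s.

Assumes a "records" table with an "item_url" column holding each record's
public URL; adjust to the harvester's actual schema.
"""
import sqlite3
import sys

import requests


def local_urls(db_path):
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute("SELECT item_url FROM records").fetchall()
    finally:
        conn.close()
    return [url for (url,) in rows if url]


def main(db_path):
    urls = local_urls(db_path)
    broken = []
    for url in urls:
        try:
            # HEAD keeps the check cheap; fall back to GET if a server rejects it.
            resp = requests.head(url, allow_redirects=True, timeout=30)
        except requests.RequestException as exc:
            print(f"ERROR {url}: {exc}")
            continue
        if resp.status_code == 404:
            broken.append(url)
            print(f"404 {url}")
    print(f"Checked {len(urls)} URLs, {len(broken)} returned 404")
    return 1 if broken else 0


if __name__ == "__main__":
    sys.exit(main(sys.argv[1]))
```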
