SoT: Testing / E2E Enhancement #4744

Open

jadudm opened this issue Mar 5, 2025 · 3 comments

jadudm (Contributor) commented Mar 5, 2025

Problem

The Cypress E2E tests currently use the API to check whether a record exists.

With the SoT introduction, we could end up with a picture that looks like this:

Submission ───────► SAC ────────► Dissemination────► API 1.1.0
     │                                                        
     │                                                        
     └───────────► SoT ─────────► API 2.0

The problem we now want to solve with our E2E tests is the following: how do we know that the exact same data exists in all of our endpoints, for all submissions, under both APIs? We suspect that adding a new testing action that runs a Python script to compare the endpoints may be more valuable to us than updating Cypress. (Or there may be other solutions.)
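To make that concrete, here is a minimal sketch of what such a comparison script could look like. The endpoint URLs, the report_id query parameter, and the X-Api-Key header are assumptions for illustration only, not the actual FAC API surface.

```python
# Hypothetical sketch: compare a single record across the 1.1.0 (SAC-backed)
# and 2.0 (SoT-backed) API endpoints. Endpoint paths, query parameters, and
# the API key header are illustrative assumptions, not the real API surface.
import os
import sys

import requests

API_V1 = "https://api.fac.gov/general"      # assumed 1.1.0 endpoint
API_V2 = "https://api.fac.gov/v2/general"   # assumed 2.0 endpoint


def fetch(base_url: str, report_id: str) -> list[dict]:
    """Fetch all rows for a single report_id from one API version."""
    resp = requests.get(
        base_url,
        params={"report_id": f"eq.{report_id}"},
        headers={"X-Api-Key": os.environ["FAC_API_KEY"]},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()


def compare(report_id: str) -> bool:
    """Return True when both API versions return identical rows."""
    rows_v1 = fetch(API_V1, report_id)
    rows_v2 = fetch(API_V2, report_id)
    if rows_v1 == rows_v2:
        return True
    print(f"Mismatch for {report_id}:", file=sys.stderr)
    print(
        f"  1.1.0 returned {len(rows_v1)} rows, 2.0 returned {len(rows_v2)} rows",
        file=sys.stderr,
    )
    return False


if __name__ == "__main__":
    results = [compare(rid) for rid in sys.argv[1:]]
    sys.exit(0 if all(results) else 1)
```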

The goal is to ultimately eliminate the SAC/1.1.0 pathway and leave only SoT/2.0, at which point we would rename "2.0" to "1.1.0", providing zero interruption in service to partners.

How did we discover this problem?

The single source of truth implementation is a complex change that impacts many areas of the app's codebase, so we need to ensure it will continue to operate normally and without loss of data.

Job Story(s)

When we switch the app over to use a single source of truth, we want to ensure continued app functionality so we can have full confidence in our implementation.

What are we planning to do about it?

We will create tests that compare submission data between the currently used SAC model and the new source of truth model. We will implement both real-time and out-of-band testing:

  • Real-time: Per-submission checks that run as part of the audit submission/dissemination process.
  • Out-of-band: A Django command that compares existing submissions in bulk (see the sketch after this list). A date range can be provided to limit the scope of the testing, which will allow us to set up a GitHub cron job to run a daily test on the previous day's submissions.
    • We will also create a command to compare data via API requests, as opposed to just querying the data models.
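A minimal sketch of what the out-of-band command could look like, assuming hypothetical model names (SingleAuditChecklist, SotGeneral), a hypothetical date field, and a placeholder field comparison; the real implementation would compare every disseminated field between the two representations.

```python
# Hypothetical sketch of the out-of-band comparison command. The model import
# paths, the submission_status filter, and the submission_date field are
# illustrative assumptions about the FAC codebase, not its actual schema.
from datetime import datetime

from django.core.management.base import BaseCommand

from audit.models import SingleAuditChecklist    # assumed SAC model path
from dissemination.models import SotGeneral      # assumed SoT model path


class Command(BaseCommand):
    help = "Compare disseminated SAC data against the SoT model, optionally limited by date range."

    def add_arguments(self, parser):
        parser.add_argument("--start-date", type=str, help="YYYY-MM-DD lower bound")
        parser.add_argument("--end-date", type=str, help="YYYY-MM-DD upper bound")

    def handle(self, *args, **options):
        sacs = SingleAuditChecklist.objects.filter(submission_status="disseminated")
        if options["start_date"]:
            sacs = sacs.filter(submission_date__gte=datetime.fromisoformat(options["start_date"]))
        if options["end_date"]:
            sacs = sacs.filter(submission_date__lte=datetime.fromisoformat(options["end_date"]))

        mismatches = 0
        for sac in sacs.iterator():
            sot = SotGeneral.objects.filter(report_id=sac.report_id).first()
            if sot is None or not self._records_match(sac, sot):
                mismatches += 1
                self.stderr.write(f"Mismatch for {sac.report_id}")

        self.stdout.write(f"Checked {sacs.count()} submissions, {mismatches} mismatches.")

    def _records_match(self, sac, sot) -> bool:
        # Placeholder comparison; the real command would compare every
        # disseminated field between the SAC and SoT representations.
        return sac.report_id == sot.report_id
```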

What are we not planning to do about it?

How will we measure success?

The single source of truth implementation will continue to be refined until errors are no longer encountered in our real-time and out-of-band testing.

Security Considerations

Required per CM-4.


Process checklist
  • Has a clear story statement
  • Can reasonably be done in a few days (otherwise, split this up!)
  • Shepherds have been identified
  • UX youexes all the things
  • Design designs all the things
  • Engineering engineers all the things
  • Meets acceptance criteria
  • Meets QASP conditions
  • Presented in a review
  • Includes screenshots or references to artifacts
  • Tagged with the sprint where it was finished
  • Archived

If there's UI...

  • Screen reader - Listen to the experience with a screen reader extension, ensure the information is presented in order
  • Keyboard navigation - Run through acceptance criteria with keyboard tabs, ensure it works.
  • Text scaling - Adjust viewport to 1280 pixels wide and zoom to 200%, ensure everything renders as expected. Document 400% zoom issues with USWDS if appropriate.
github-project-automation bot moved this to Triage in FAC on Mar 5, 2025
jadudm (Contributor, Author) commented Mar 5, 2025

In production, we want to verify every single submission. SACs should match the SoTs. So, we should be able to have two APIs live at the same time, and be testing live all the time.
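One way to check every submission as it is disseminated could be a small hook like the following; the function name, the compared fields, and the model attributes are illustrative assumptions, not the existing codebase.

```python
# Hypothetical sketch of a real-time check run at dissemination time. The
# field names compared here are assumptions; the intent is simply that every
# submission is verified against its SoT counterpart as it is disseminated.
import logging

logger = logging.getLogger(__name__)


def verify_against_sot(sac, sot_record) -> bool:
    """Compare one disseminated SAC against its SoT counterpart and log any drift."""
    differences = {
        field: (getattr(sac, field), getattr(sot_record, field))
        for field in ("report_id", "auditee_uei", "audit_year")  # assumed shared fields
        if getattr(sac, field) != getattr(sot_record, field)
    }
    if differences:
        logger.error("SAC/SoT mismatch for %s: %s", sac.report_id, differences)
        return False
    return True
```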

jadudm (Contributor, Author) commented Mar 6, 2025

@phildominguez-gsa , @anagradova , as a first step for this, it would be good for y'all to flesh out the ticket, and (either along the way or after you have a draft), bring it back to the team for some conversation, so it links up with the other work.

Ultimately, the question "how do we know that the exact same data exists in all of our endpoints for all submissions under both APIs?" is what we want to be able to answer with 100% confidence. We need to have complete confidence in the new infrastructure, and it seems (but might not be the case) that the APIs are a good way to validate this. If you have other ideas that you think are better, please suggest/explore them.

jadudm (Contributor, Author) commented Mar 10, 2025

@anagradova, @phildominguez-gsa, I know the two of you have come up with some good next steps. Feel free to story it out here, and 🐎.

jadudm moved this from Triage to In Progress in FAC on Mar 10, 2025
jadudm added the data label on Mar 12, 2025