
Command-line flag to dynamically generate default valid examples in place of invalid static examples #2383

Closed

Conversation

@ilanashapiro
Contributor

@ilanashapiro ilanashapiro commented Sep 12, 2023

Addresses #2384

Summary

This PR introduces a command-line flag, --defaultExamples (or --de), indicating that, when mocking a schema, if the static example to be returned in the response body is invalid (i.e. it produces validation errors), a valid example is dynamically generated and returned in its place.

Currently, when the --errors flag is omitted, Prism logs validation errors without returning them; in that case the request "flows to the actual API and returns what it returns." Valid examples can be dynamically generated with the --dynamic flag, which ignores all provided static examples and skips the validation process (--dynamic overrides --errors). However, there is currently no option to use the static example only if it is valid, and otherwise to log the validation errors and dynamically generate a valid example in place of the invalid one. This command-line flag introduces that behavior.
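The fallback logic the flag introduces can be sketched as follows. This is an illustrative sketch, not Prism's actual implementation: validate and generate_dynamic are hypothetical stand-ins (a toy required-properties check and a placeholder generator) for Prism's real validator and json-schema-faker.

```python
def validate(example, schema):
    """Toy validator: report required properties missing from the example."""
    return [f"missing required property: {p}"
            for p in schema.get("required", []) if p not in example]

def generate_dynamic(schema):
    """Toy generator: fill every required property with a placeholder value."""
    return {p: "generated" for p in schema.get("required", [])}

def pick_response_example(static_example, schema, default_examples=False):
    """Return the static example if it validates; with --defaultExamples set,
    fall back to a generated example instead of the invalid static one."""
    errors = validate(static_example, schema)
    if not errors:
        return static_example        # valid: return the static example as-is
    if default_examples:             # --defaultExamples: valid fallback
        return generate_dynamic(schema)
    return static_example            # current behavior: invalid example flows through
```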

Checklist

  • The basics
    • I tested these changes manually in my local or dev environment
  • Tests
    • Added or updated
    • N/A
  • Event Tracking
    • I added event tracking and followed the event tracking guidelines
    • N/A
  • Error Reporting
    • I reported errors and followed the error reporting guidelines
    • N/A

Screenshots
Consider the following schema:

openapi: 3.0.2
paths:
  /pets:
    get:
      responses:
        '200':
          description: OK
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Pet'
              examples:
                cat:
                  summary: An example of a cat
                  value:
                    id: "id"
                    photoUrls: [[]]
                    status: 123
                dog:
                  value: {}
                bird:
                  value:
                    id: 123
                    name: "bird"
                    photoUrls: ["url"]
                    status: "available"

components:
  schemas:
    Pet:
      type: object
      required:
        - name
        - photoUrls
      properties:
        id:
          type: integer
          format: int64
        name:
          type: string
          example: doggie
        photoUrls:
          type: array
          xml:
            name: photoUrl
            wrapped: true
          items:
            type: string
        status:
          type: string
          description: pet status in the store
          enum:
            - available
            - pending
            - sold

Note that the first two examples (cat, dog) are invalid, and the third (bird) is valid.
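To see concretely why cat and dog fail validation, here is a minimal hand-rolled check against the Pet schema above. It only covers required properties and top-level types; a real JSON Schema validator would also catch the nested photoUrls item type ([[]] is an array of arrays, not of strings) and the status enum.

```python
# Simplified shape of the Pet schema: required keys and expected Python types
PET_SCHEMA = {
    "required": ["name", "photoUrls"],
    "properties": {"id": int, "name": str, "photoUrls": list, "status": str},
}

def check_pet(example):
    """Collect simple validation errors: missing required keys and wrong types."""
    errors = [f"missing: {k}" for k in PET_SCHEMA["required"] if k not in example]
    errors += [f"wrong type: {k}"
               for k, t in PET_SCHEMA["properties"].items()
               if k in example and not isinstance(example[k], t)]
    return errors

cat = {"id": "id", "photoUrls": [[]], "status": 123}
dog = {}
bird = {"id": 123, "name": "bird", "photoUrls": ["url"], "status": "available"}
```

Running check_pet on the three examples flags cat (missing name, id and status have the wrong types) and dog (missing name and photoUrls), while bird passes cleanly.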

By mocking this schema with the --defaultExamples command line flag, we successfully get the following valid dynamically generated example in place of the invalid "cat" example that would be returned otherwise:
[screenshots omitted]

Without the --defaultExamples flag, we see how the invalid "cat" example gets returned:
[screenshots omitted]

With the --defaultExamples flag, the same behavior of dynamically generating valid examples holds for when we manually select an invalid example using the Prefer header:
[screenshots omitted]

However, if we select a valid example, it still gets returned successfully:
[screenshots omitted]
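Selecting a named example via the Prefer header, as in the screenshots above, looks like this (example names from the schema above; 4010 is Prism's default port):

```shell
# Request the invalid "cat" example; with --defaultExamples a dynamically
# generated valid body is returned in its place
curl -H "Prefer: example=cat" http://127.0.0.1:4010/pets

# Request the valid "bird" example; it is returned unchanged
curl -H "Prefer: example=bird" http://127.0.0.1:4010/pets
```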

@ilanashapiro ilanashapiro changed the title Feature/invalid examples Command-line flag to dynamically generate default valid examples in place of invalid static examples Sep 13, 2023
@ilanashapiro ilanashapiro marked this pull request as ready for review September 13, 2023 01:47
@ilanashapiro ilanashapiro requested a review from a team as a code owner September 13, 2023 01:47
@ilanashapiro ilanashapiro requested review from EdVinyard and removed request for a team September 13, 2023 01:47
@chohmann
Contributor

@ilanashapiro Thank you for your continued efforts for Prism! We appreciate that this is a difficult situation to run into, but do not feel that spec validation on its own is something that belongs in Prism. Prism's wheelhouse is validating requests against the spec people give it, not ensuring the spec provided is valid. We feel that spec validation on its own is better suited for other tools, such as our other open sourced tool, Spectral. We're going to close this PR, but thank you again for always putting effort into improving Prism!

@rattrayalex

Hey @chohmann, thanks for your consideration here. I work with Ilana and asked her to take this on; I'd like to put in an appeal to reconsider.

The purpose of this PR is not to perform spec validation, but to allow Prism to return only mock results that are valid. As a Prism user, that very much feels like something I should be able to ask Prism to do: it's quite painful to ask Prism to mock a server and then get invalid results, which can throw errors in many clients.

As a user, my only alternative to this would be to rewrite the OpenAPI spec to remove invalid results prior to passing it to Prism, but that's fairly onerous in contexts where the OpenAPI spec isn't mine. In fact, our team has actually done this work – we have a tool that removes invalid examples from OpenAPI specs – but we felt that it should be Prism's responsibility to be able to always return correct results.

As an alternative, a command-line flag to ignore all examples in responses could work for us too, but there's an obvious tradeoff there in that you throw out the baby (valid examples) with the bathwater (invalid examples).

Does that make sense? Thoughts?

@chohmann
Contributor

@rattrayalex thank you for your response!

As a user, my only alternative to this would be to rewrite the OpenAPI spec to remove invalid results prior to passing it to Prism

This may be true if you only want to mock in static mode (having Prism return the examples defined in the spec). However, if you don't trust the validity of the spec, you can always mock in dynamic mode and have Prism generate a response that matches the schema, which ensures the response is always valid.

As a user, if I've requested Prism to mock in static mode, and then Prism decides to mock in dynamic mode for some reason, unbeknownst to me, then I would think that was a bug. We would be more inclined for Prism to just notify the user in some way (e.g. via a log message) that the requested static example does not comply with the schema.
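The two modes Chelsea contrasts are selected on the Prism command line; the spec filename here is illustrative:

```shell
# Static mode (default): return the examples defined in the spec verbatim
prism mock petstore.yaml

# Dynamic mode: ignore examples and generate responses from the schema
prism mock --dynamic petstore.yaml
```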

@rattrayalex

rattrayalex commented Sep 22, 2023

Ah, thank you so much Chelsea! I don't think we knew about dynamic mode, or didn't realize it'd help with this case. Unfortunately, we seem to hit errors with it: #2081 (comment) and #2391

@rattrayalex

Actually, the docs on dynamic mode are a bit unclear – will it always generate different things on the fly, unlike static mode which will always use the same data?

I don't actually want the data returned to change on each run of prism, since we run these tests in CI and I'd rather have them be stable than slowly fuzz our code.

Perhaps what I really want is static mode, with an --ignoreExamples option, that just ignores all examples regardless of whether they're valid or invalid? Would that be something you'd accept a PR adding?

Apologies for my poor understanding of the current options & behavior!

@ilanashapiro
Contributor Author

@chohmann Hi Chelsea, just wanted to follow up, would you accept a PR for the --ignoreExamples flag that Alex discussed? Thanks!

@chohmann
Contributor

The team has decided that the --ignoreExamples flag is not appropriate. We don't expect to build this in a way that suits all of our customers' needs, or even yours, as the specification changes over time. For example, the addition of new properties will likely change many values for properties in the response, which would break your tests anyway.

As a user, my only alternative to this would be to rewrite the OpenAPI spec to remove invalid results prior to passing it to Prism, but that's fairly onerous in contexts where the OpenAPI spec isn't mine. In fact, our team has actually done this work – we have a tool that removes invalid examples from OpenAPI specs

Have you considered updating your team's tool to use a tool like json-schema-faker/json-schema-sampler to generate a valid example to replace the invalid one it is removing?

@rattrayalex

rattrayalex commented Sep 29, 2023

For example, the addition of new properties will likely change many values for properties in the response, so would break your tests anyways.

I'm not sure I understand? Empirically, adding properties does not break our tests (this happens regularly).

Have you considered updating your team's tool to use a tool like json-schema-faker/json-schema-sampler to generate a valid example to replace the invalid one it is removing?

I'm not sure you're quite understanding our use-case. We are mocking third party OpenAPI specs - not specs we control. We want to avoid editing them as much as possible. Generating a prism-specific version of third-party specs is not great for a variety of reasons - happy to elaborate if it'd be helpful.

My assumption was that mocking third-party specs is a common use-case for Prism users – is that incorrect?

@ryotrellim
Contributor

Hi @rattrayalex, I'm the product manager on @chohmann's team and helping to shepherd Prism. I'd like to better understand your use case and will be reaching out via email with a calendly link so we can connect.

Here's my 2 cents until we chat:

  • Mocking third-party specs is an excellent use case, but modifying them or changing how they act is as much your responsibility as it is Prism's. Which is to say I think it's out of scope for both teams.
  • However, if reliable static examples are critical to your business case, then I think a transformation middleware will be required so that Prism can work as intended.
  • I'm especially interested to learn about why static examples are a must-have for your CI/CD pipeline. It seems to me that testing for specific types returned by prism in dynamic mode would be more holistic.

@rattrayalex

Thanks @ryotrellim , I'll look out for an email! You can reach me at [email protected]

I may misunderstand the nature/design of static vs dynamic mode – I was surprised to see dynamic differ so strongly from static, since static mode also invents fake/mock data, but we encountered several bugs with dynamic mode.

Assuming dynamic mode tries to send different data every time (eg, 1, 0, -MaxInt, +MaxInt, etc for an int field), you're correct that the tests would get more coverage. However, I prefer to avoid "coverage across runs" in favor of "coverage within runs", so to speak. When I want to fuzz something, I do it in a dedicated fashion so that I can handle the edge-cases that are surfaced head-on.

In the realities of a software team, test failures that arise from randomness (and happen rarely) are just likely to get ignored or ultimately swallowed by an auto-retry, and may be a pain to repro.

Does that make sense?

@chohmann
Contributor

@rattrayalex after the meeting you had with @ryotrellim and company on Monday, we have a better understanding of your ask.

Perhaps what I really want is static mode, with an --ignoreExamples option, that just ignores all examples regardless of whether they're valid or invalid? Would that be something you'd accept a PR adding?

We would accept a PR for this approach.

And just to be specific this PR should:

  • introduce a new flag named --ignoreExamples
  • when the flag is present, and in static mode, treat the spec as if it doesn't have an example defined
  • continue to do what Prism already does in static mode when an example is not present, which is to return an example created by Prism itself rather than one generated with json-schema-faker
  • when the flag is present, and in dynamic mode, the flag is essentially ignored, since in dynamic mode, examples are not consulted and json-schema-faker is used to generate a response based on the schema defined in the spec
  • descriptive documentation about the flag and its purpose is added to docs/guides/01-mocking.md in the Static Response Generation section
  • helper text for the new flag is added to the CLI
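Under that spec, usage of the proposed flag would look like this (hypothetical until the PR lands):

```shell
# Proposed: static mode, but treat the spec as if no examples were defined
prism mock --ignoreExamples petstore.yaml

# In dynamic mode the flag is effectively a no-op, since examples are never
# consulted and json-schema-faker generates the response from the schema
prism mock --dynamic --ignoreExamples petstore.yaml
```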

@rattrayalex

Thank you Chelsea! @ilanashapiro is excited to get to work on those items soon!

@ilanashapiro
Contributor Author

PR #2408 is out!
