Bulk data for testing orgs with large amounts of data #3
Labels: documentation, Ready for Sprint, use case
Orgs expected to hold large amounts of data need fairly large data sets for testing. The details of the data do not matter a great deal, but the volume must be large enough to verify that triggers, flows, and similar automation have appropriate filters.

As a developer, I want to generate data sets large enough to use most or all of the storage in a partial or full sandbox, so that I can QA builds against large data volumes.
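One way to approach the use case above is a generator that emits synthetic records and streams them to CSV for a bulk load. This is a minimal, hypothetical sketch (the object name `Account` and the two fields are placeholder assumptions, not part of this project); scale the row count up until the target sandbox's storage quota is approached:

```python
import csv
import io

def generate_accounts(count):
    """Yield synthetic Account-like rows; values are placeholders,
    since the use case cares about volume rather than content."""
    for i in range(count):
        yield {"Name": f"Test Account {i}", "AnnualRevenue": (i % 100) * 1000}

def write_csv(rows, fieldnames):
    """Serialize rows to a CSV string suitable for a bulk-load tool."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

# Generate 10,000 rows; raise the count to fill a partial or full sandbox.
data = write_csv(generate_accounts(10_000), ["Name", "AnnualRevenue"])
print(data.count("\n"))  # header line plus 10,000 data rows
```

In practice the rows would be streamed to disk or to a bulk API in batches rather than built as one in-memory string, but the shape of the generator is the same.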