Use Case – Reset Data in Sandbox Org

Overview

In this advanced example scenario, we want to reset a training environment in a Salesforce sandbox org:

  • Delete all existing data by exporting and then hard-deleting the records of a set of objects in a specific order.
  • Upload a fresh initial data set from .csv files containing well-defined test/training data.

Prerequisites

  • A key file MyKeyfile.key has been created with New-SfEncryptionKeyFile, and the user's password + security token have been encrypted with ConvertTo-SfEncryptedString (see the sketch after this list).
  • Based on this, we can generate an authentication token for the target org in the $MyOrg variable via Get-SfAuthToken.
  • The user has been assigned the 'Bulk API Hard Delete' permission.
  • The sequence of object names in the delete job is ordered so that child records are deleted before their master records.
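
For reference, the one-time preparation of the key file and the encrypted password might look like the following sketch. The exact parameter names here are assumptions; check the help of the two commands for the actual syntax:

# Create a new encryption key file (-KeyFile parameter name is an assumption).
New-SfEncryptionKeyFile -KeyFile MyKeyfile.key

# Encrypt the user's password with the security token appended.
# The resulting string is what goes into $MyEncryptedPassword below.
ConvertTo-SfEncryptedString -KeyFile MyKeyfile.key 'MyPasswordMySecurityToken'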

Hard-Delete Existing Data

We will:

  1. Prepare the Auth Token from the encrypted password.
  2. Create the list of object names in the desired order of execution.
  3. Loop through the list to get all the record ids.
  4. Loop through the list to run the hard-delete job.

The corresponding script would look like this:

$MyEncryptedPassword = 'EncryptionResultFromPreviousStep'
$MyOrg = Get-SfAuthToken -Username MyUser@example.com -EncryptedString $MyEncryptedPassword -KeyFile MyKeyfile.key -InstanceUrl https://test.salesforce.com

$MyObjectsToDeleteList = @(
  'CampaignMember',
  'Campaign',
  'Lead',
  'Contact',
  'Account'
)

foreach($Object in $MyObjectsToDeleteList){ 
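  # Export all record Ids to <ObjectName>.csv as a backup, then hard-delete those records via the Bulk API.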
  Export-SfRecords $MyOrg $Object -Bulk Serial
  Remove-SfRecords $MyOrg $Object -Bulk SerialHardDelete
}

Notes on implicit behaviour involved here:

  • On the export, we do not provide a SOQL statement, so it defaults to SELECT Id FROM <ObjectName>.
  • Neither on the export nor on the remove operation do we provide a path to a .csv file. It always defaults to <ObjectName>.csv in the current directory.
  • For the remove operation we do not provide a .sdl mapping file, so one is auto-created from the column headers of the .csv file. This results in a mapping file named <ObjectName>.sdl with a single entry: Id=Id.
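
For illustration, the auto-created Account.sdl from this run would contain just that single mapping line:

# Account.sdl, auto-created from the single Id column of Account.csv
Id=Id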

Upsert Clean Set of Training Data

Assumptions:

  • Authentication as described above.
  • We will have a set of .csv files named by the object name: Account.csv, Contact.csv, Lead.csv, etc. Each .csv file follows these conventions:
    • An ExternalId__c custom field is used as the matching key for the upsert operation.
    • The .csv column names exactly match the field API names, so auto-creation of the mapping file will work.
    • Relationship lookups between records, e.g. Contact-to-Account assignments, use the proper relationship field syntax and refer to the ExternalId__c value of the respective target record (see the example below).
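
A minimal Contact.csv following these conventions might look like the sketch below. All values are illustrative, and the Account:ExternalId__c relationship column header is an assumption based on the common Data Loader relationship mapping syntax:

ExternalId__c,FirstName,LastName,Account:ExternalId__c
CON-0001,Anna,Smith,ACC-0001
CON-0002,Ben,Jones,ACC-0002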

The corresponding script would look like this (short syntax and command aliases are used here for illustration):

$MyEncryptedPassword = 'EncryptionResultFromPreviousStep'
$MyOrg = sfauth MyUser@example.com $MyEncryptedPassword -KeyFile MyKeyfile.key -InstanceUrl https://test.salesforce.com

$MyObjectsToUpsertList = @(
  'Account',
  'Contact',
  'Lead',
  'Campaign'
)

foreach($Object in $MyObjectsToUpsertList){ 
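  # Upsert <ObjectName>.csv into the org, matching existing records on ExternalId__c.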
  sfupsert $MyOrg $Object -ExternalIdField ExternalId__c -Bulk Serial
}

Notes on implicit behaviour involved here:

  • We do not provide a path to a .csv file. It always defaults to <ObjectName>.csv in the current directory.
  • We do not provide a .sdl mapping file, so one is auto-created from the column headers of the .csv file. This results in a mapping file named <ObjectName>.sdl that maps all fields 1:1 as provided by the column headers.
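
For the hypothetical Contact.csv sketched above, the auto-created Contact.sdl would map each column name to itself, roughly as follows. The backslash-escaped colon in the relationship line follows the Java properties conventions of .sdl files and is an assumption here:

# Contact.sdl, auto-created from the column headers of Contact.csv
ExternalId__c=ExternalId__c
FirstName=FirstName
LastName=LastName
Account\:ExternalId__c=Account\:ExternalId__c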