Global Enablement & Learning

Tune the Programming Run-Time

When: After platform changes

Review Tuning Programming Resources and Programs [Doc], and consider which of the actions it outlines may improve performance sufficiently to justify the effort and cost, specifically for your SAS Viya deployment.

Ways in which the SAS Viya Programming Run-time can be tuned for performance

  1. Provide faster and/or larger-capacity storage for temporary files and datasets (a short SAS sketch after this list shows how to find where a session writes them).

  2. Pre-start compute server pods, reducing the time to start compute sessions (one way to review your existing compute contexts is sketched after this list):

    • Two other changes are prerequisites for pre-starting a pool of available compute server pods. These changes may also be useful in their own right:
      • First, compute sessions running with a given compute context must run as a shared service account.
      • Second, compute sessions in that same context must be made reusable.
    • See Modifying Compute Resource Usage [Doc], which gives an overview of the steps in the context of the other tuning recommendations.
    • See Server Contexts: How To [Doc], which details the steps for both prerequisites and for configuring a pool of available compute servers, all on one page.
    • The SAS Communities Library post Add, update and remove compute context attributes with new pyviyatools [Blog] describes a pair of pyviyatools [GitHub] which can help you automate these customizations.
  3. Enable your programs to use multithreading, and tune the thread count (see the SAS example after this list).

  4. Adjust the CPU and memory requests and limits for launched SAS Programming Run-time pods (a sketch after this list shows how to check what a running session reports). See:

    Note: You should read both of the above sections of documentation. While they have identical section titles and discuss the same overall concept, they are quite distinct.

  5. Resize your Kubernetes cluster's Compute node pool to better suit the CPU and memory requirements of your organization's SAS Programming Run-time pods.

    Consider working with your solution architect and Kubernetes administrator to implement a cluster autoscaler for your compute node pool, so that additional nodes are provisioned in the node pool when it is under heavy load, and unnecessary nodes are removed when under light load. See the first part of The SAS Workload Management Approach to Autoscaling [Blog] for more on this.

    As the SAS administrator, you should be prepared to provide them with information about:

    • the number of SAS compute, connect and batch sessions/pods your deployment typically runs in a given period of time
    • how long each typically runs for
    • how much CPU and memory the sessions typically use, and
    • the variations you see in each of those typical values.

    Your solution architect and Kubernetes administrator would then be better able to optimize the size of your SAS Viya platform compute node pool by:

    • Adjusting the number of nodes in the Compute node pool
    • Adjusting the size of the nodes in the Compute node pool, in terms of number of CPUs and available memory each has

    This sort of sizing should, of course, be performed before initial deployment. But if your deployment's initial sizing was based on estimates of the workload, or your use of the SAS Viya platform has evolved since those estimates were made, real usage data may let you improve on it significantly.
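
Example sketches

For item 1, the following is a minimal SAS sketch, not taken from the documentation referenced above, showing one way to find out where a compute session writes its temporary files and to turn on detailed resource statistics so you can judge whether faster or larger temporary storage would help. The option names are standard SAS system options; where your deployment actually places WORK and UTILLOC depends on how it was configured.

```sas
/* Where is this session writing its temporary datasets? */
%put NOTE: WORK library path is %sysfunc(pathname(work));

/* Where are utility files (for example, from threaded sorts) written? */
proc options option=utilloc;
run;

/* Report detailed CPU, memory and I/O statistics for each step in the log,
   so you can see which steps are bound by temporary storage performance */
options fullstimer;
```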
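
For item 2, here is a minimal SAS sketch of one way to list the compute contexts defined in your deployment before you change any of them to use a shared run-as account, reusable sessions, and a pool of pre-started servers. The host name viya.example.com is a placeholder for your own ingress host, and the exact shape of the JSON response may vary by release; the attributes you would then set are the ones described in the Server Contexts: How To documentation referenced above.

```sas
/* List the compute contexts in this deployment so you can identify which
   context to make reusable and pre-started. Replace the host name with
   your own ingress host (the value shown is a placeholder). */
filename resp temp;

proc http
  url="https://viya.example.com/compute/contexts?limit=100"
  method="GET"
  oauth_bearer=sas_services   /* authenticate with the current session's token */
  out=resp;
  headers "Accept"="application/json";
run;

/* Read the JSON response and print the collection of contexts */
libname ctx json fileref=resp;

proc print data=ctx.items;
run;
```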
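
For item 3, this short SAS example enables threading and shows one threaded procedure in use. THREADS, CPUCOUNT and the PERFORMANCE option group are standard SAS system options; how many CPUs a session actually sees depends on the requests and limits discussed in item 4.

```sas
/* Review the current threading-related settings for this session */
proc options group=performance;
run;

/* Enable threading and let SAS use the CPUs actually available to the pod */
options threads cpucount=actual;

/* PROC SORT is one of the procedures that can exploit multiple threads */
proc sort data=sashelp.cars out=work.cars_sorted threads;
  by msrp;
run;
```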
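
Finally, a small sketch related to item 4: after adjusting pod requests and limits, you can check from inside a compute session which memory- and CPU-related values SAS itself is running with. How these SAS options map onto the pod's Kubernetes requests and limits depends on your configuration, so treat this as a sanity check rather than a definitive measurement.

```sas
/* What memory ceiling is this SAS session running with? */
%put NOTE: MEMSIZE is %sysfunc(getoption(memsize));

/* How many CPUs will threaded steps use by default? */
%put NOTE: CPUCOUNT is %sysfunc(getoption(cpucount));

/* Full list of memory-related options for this session */
proc options group=memory;
run;
```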

See also Performance Tuning for the SAS Viya Platform [Blog] and SAS Viya Platform Administration: Tuning [Doc].

Back to checklist