---
copyright:
  years: 2017, 2022
lastupdated: "2022-10-12"
keywords: storage classes, tiers, cost, buckets, location constraint, provisioning code, locationconstraint
subcollection: cloud-object-storage
---

{{site.data.keyword.attribute-definition-list}}

# Using storage classes
{: #classes}

Not all data is part of an active workload; archival data might sit untouched for long periods of time. For less active workloads, you can create buckets with different storage classes. Objects that are stored in these buckets incur charges on a different schedule than standard storage.
{: shortdesc}

This feature is not currently supported in {{site.data.keyword.cos_short}} for {{site.data.keyword.satelliteshort}}. Learn more.
{: note}

## What are the classes?
{: #classes-about}

You can choose from four storage classes:

- **Smart Tier** can be used for any workload, especially dynamic workloads where access patterns are unknown or difficult to predict. Smart Tier provides a simplified pricing structure and automatic cost optimization by classifying the data into "hot", "cool", and "cold" tiers based on monthly usage patterns. All data in the bucket is then billed at the lowest applicable rate. There are no threshold object sizes or storage periods, and there are no retrieval fees. For a detailed explanation of how it works, see the billing topic.
- **Standard** is used for active workloads, with no charge for data retrieved (other than the cost of the operational request itself).
- **Vault** is used for cool workloads where data is accessed less than once a month. An extra retrieval charge ($/GB) applies each time data is read. The service includes a minimum threshold for object size and storage period consistent with the intended use of this service for cooler, less-active data.
- **Cold Vault** is used for cold workloads where data is accessed once every 90 days or less often. A larger extra retrieval charge ($/GB) applies each time data is read. The service includes a longer minimum threshold for object size and storage period consistent with the intended use of this service for cold, inactive data.

Flex has been replaced by Smart Tier for dynamic workloads. Flex users can continue to manage their data in existing Flex buckets, although no new Flex buckets can be created. Existing users can reference pricing information here.
{: note}

For more information, see the pricing table at ibm.com{: external}.

The Active storage class is used only with One Rate plans, and cannot be used in Standard or Lite plans.
{: important}

For more information about how to create buckets with different storage classes, see the API reference.

For each storage class, billing is based on aggregated usage across all buckets at the instance level. For example, for Smart Tier, billing is based on usage across all Smart Tier buckets in a given instance, not on the individual buckets.
{: important}

## How do I create a bucket with a different storage class?
{: #classes-locationconstraint}

When you create a bucket in the console, a menu allows you to select the storage class.

When creating buckets programmatically, it is necessary to specify a `LocationConstraint` that corresponds with the endpoint used. Valid provisioning codes for `LocationConstraint` are listed in the following table.

| Location | Provisioning codes |
|----------|--------------------|
| US Geo | `us-standard` / `us-vault` / `us-cold` / `us-smart` |
| US East | `us-east-standard` / `us-east-vault` / `us-east-cold` / `us-east-smart` |
| US South | `us-south-standard` / `us-south-vault` / `us-south-cold` / `us-south-smart` |
| EU Geo | `eu-standard` / `eu-vault` / `eu-cold` / `eu-smart` |
| EU Great Britain | `eu-gb-standard` / `eu-gb-vault` / `eu-gb-cold` / `eu-gb-smart` |
| EU Germany | `eu-de-standard` / `eu-de-vault` / `eu-de-cold` / `eu-de-smart` |
| AP Geo | `ap-standard` / `ap-vault` / `ap-cold` / `ap-smart` |
| AP Tokyo | `jp-tok-standard` / `jp-tok-vault` / `jp-tok-cold` / `jp-tok-smart` |
| AP Osaka | `jp-osa-standard` / `jp-osa-vault` / `jp-osa-cold` / `jp-osa-smart` |
| AP Australia | `au-syd-standard` / `au-syd-vault` / `au-syd-cold` / `au-syd-smart` |
| CA Toronto | `ca-tor-standard` / `ca-tor-vault` / `ca-tor-cold` / `ca-tor-smart` |
| Amsterdam | `ams03-standard` / `ams03-vault` / `ams03-cold` / `ams03-smart` |
| Chennai | `che01-standard` / `che01-vault` / `che01-cold` / `che01-smart` |
| Mexico | `mex01-standard` / `mex01-vault` / `mex01-cold` / `mex01-smart` |
| Milan | `mil01-standard` / `mil01-vault` / `mil01-cold` / `mil01-smart` |
| Montréal | `mon01-standard` / `mon01-vault` / `mon01-cold` / `mon01-smart` |
| Paris | `par01-standard` / `par01-vault` / `par01-cold` / `par01-smart` |
| San Jose | `sjc04-standard` / `sjc04-vault` / `sjc04-cold` / `sjc04-smart` |
| São Paulo | `sao01-standard` / `sao01-vault` / `sao01-cold` / `sao01-smart` |
| Singapore | `sng01-standard` / `sng01-vault` / `sng01-cold` / `sng01-smart` |

For more information about endpoints, see Endpoints and storage locations.
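
To verify which provisioning code a bucket was created with, you can read back its location constraint. The following is a minimal Python sketch, assuming an existing client object named `cos` (as in the SDK examples below) and a hypothetical bucket name; `get_bucket_location` is a standard S3 API call that returns the bucket's `LocationConstraint`.

```python
def show_location_constraint(bucket_name):
    # The LocationConstraint (for example, "us-smart") is the provisioning
    # code supplied when the bucket was created.
    response = cos.meta.client.get_bucket_location(Bucket=bucket_name)
    print("Bucket: {0} LocationConstraint: {1}".format(
        bucket_name, response.get("LocationConstraint")))
```

{: codeblock} {: python}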

## Using the REST API, Libraries, and SDKs
{: #classes-sdk}

The IBM COS SDKs support creating buckets with a specific storage class by specifying a `LocationConstraint`. Select a language (curl, Java, JavaScript, Go, or Python) at the beginning of this page to view examples that use the appropriate COS SDK.

All code examples assume the existence of a client object named `cos` that can call the different methods. For details on creating clients, see the specific SDK guides.
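
For reference, the following is a minimal sketch of how such a client might be created with the Python SDK (`ibm_boto3`); the API key, service instance ID, and endpoint values are placeholders, and your configuration may differ. See the SDK guide for the authoritative setup steps.

```python
import ibm_boto3
from ibm_botocore.client import Config

# Placeholder credentials and endpoint - substitute values for your instance.
COS_API_KEY_ID = "<api-key>"
COS_SERVICE_INSTANCE_ID = "<resource-instance-id>"
COS_ENDPOINT = "<endpoint for your location>"

# Resource-style client; the examples below call methods on this object.
cos = ibm_boto3.resource(
    "s3",
    ibm_api_key_id=COS_API_KEY_ID,
    ibm_service_instance_id=COS_SERVICE_INSTANCE_ID,
    config=Config(signature_version="oauth"),
    endpoint_url=COS_ENDPOINT,
)
```

{: codeblock} {: python}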

### Create a bucket with a storage class

```java
public static void createBucket(String bucketName) {
    System.out.printf("Creating new bucket: %s\n", bucketName);
    // "us-vault" is the provisioning code (location plus storage class).
    cos.createBucket(bucketName, "us-vault");
    System.out.printf("Bucket: %s created!\n", bucketName);
}
```

{: codeblock} {: java}

```javascript
function createBucket(bucketName) {
    console.log(`Creating new bucket: ${bucketName}`);
    return cos.createBucket({
        Bucket: bucketName,
        CreateBucketConfiguration: {
            LocationConstraint: 'us-standard'
        }
    }).promise()
    .then(() => {
        console.log(`Bucket: ${bucketName} created!`);
    })
    .catch((e) => {
        console.error(`ERROR: ${e.code} - ${e.message}\n`);
    });
}
```

{: codeblock} {: javascript}

```python
def create_bucket(bucket_name):
    print("Creating new bucket: {0}".format(bucket_name))
    try:
        # COS_BUCKET_LOCATION is a provisioning code from the table above,
        # for example "us-vault".
        cos.Bucket(bucket_name).create(
            CreateBucketConfiguration={
                "LocationConstraint": COS_BUCKET_LOCATION
            }
        )
        print("Bucket: {0} created!".format(bucket_name))
    except ClientError as be:
        print("CLIENT ERROR: {0}\n".format(be))
    except Exception as e:
        print("Unable to create bucket: {0}".format(e))
```

{: codeblock} {: python}
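
As a usage sketch, a hypothetical call might look like the following; the bucket name is a placeholder, and `COS_BUCKET_LOCATION` can be any provisioning code from the table above.

```python
# Hypothetical values for illustration only.
COS_BUCKET_LOCATION = "eu-de-cold"

create_bucket("my-archive-bucket-example")
```

{: codeblock} {: python}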

```go
package main

import (
    "fmt"

    "github.com/IBM/ibm-cos-sdk-go/aws"
    "github.com/IBM/ibm-cos-sdk-go/aws/session"
    "github.com/IBM/ibm-cos-sdk-go/service/s3"
)

func main() {

    // Create client - "conf" is assumed to be an existing aws.Config
    // set up for your service instance (see the SDK guide).
    sess := session.Must(session.NewSession())
    client := s3.New(sess, conf)

    // Bucket Names
    newBucket := "<NEW_BUCKET_NAME>"

    // "us-cold" is the provisioning code (location plus storage class).
    input := &s3.CreateBucketInput{
        Bucket: aws.String(newBucket),
        CreateBucketConfiguration: &s3.CreateBucketConfiguration{
            LocationConstraint: aws.String("us-cold"),
        },
    }
    if _, err := client.CreateBucket(input); err != nil {
        fmt.Println(err)
    }

    d, _ := client.ListBuckets(&s3.ListBucketsInput{})
    fmt.Println(d)
}
```

{: codeblock} {: go}

curl -X "PUT" "https://(endpoint)/(bucket-name)"
 -H "Content-Type: text/plain; charset=utf-8"
 -H "Authorization: Bearer (token)"
 -H "ibm-service-instance-id: (resource-instance-id)"
 -d "<CreateBucketConfiguration>
       <LocationConstraint>(provisioning-code)</LocationConstraint>
     </CreateBucketConfiguration>"

{: codeblock} {: curl}

It isn't possible to change the storage class of a bucket once the bucket is created. If objects need to be reclassified, it's necessary to move the data to another bucket with the wanted storage class.
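
The following is a minimal Python sketch of such a move, assuming the `cos` client from the examples above, a destination bucket that was already created with the wanted storage class, and a hypothetical object key; verify the behavior against any versioning or retention settings in your environment.

```python
def move_object(src_bucket, dest_bucket, key):
    # Server-side copy into the bucket that has the wanted storage class ...
    cos.Object(dest_bucket, key).copy_from(
        CopySource={"Bucket": src_bucket, "Key": key}
    )
    # ... then remove the original so the data is stored only under the new class.
    cos.Object(src_bucket, key).delete()
    print("Moved {0} from {1} to {2}".format(key, src_bucket, dest_bucket))
```

{: codeblock} {: python}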