---
copyright:
  years: 2019
lastupdated: "2019-11-19"
keywords: object storage, go, sdk
subcollection: cloud-object-storage
---

{:new_window: target="_blank"}
{:external: target="_blank" .external}
{:shortdesc: .shortdesc}
{:codeblock: .codeblock}
{:pre: .pre}
{:screen: .screen}
{:tip: .tip}
{:important: .important}
{:note: .note}
{:download: .download}
{:http: .ph data-hd-programlang='http'}
{:javascript: .ph data-hd-programlang='javascript'}
{:java: .ph data-hd-programlang='java'}
{:python: .ph data-hd-programlang='python'}
{:go: .ph data-hd-programlang='go'}
{:faq: data-hd-content-type='faq'}
{:support: data-reuse='support'}
# Using Go
{: #using-go}

The {{site.data.keyword.cos_full}} SDK for Go provides features to make the most of {{site.data.keyword.cos_full_notm}}.
{: shortdesc}

The {{site.data.keyword.cos_full_notm}} SDK for Go is comprehensive, with many features and capabilities that exceed the scope and space of this guide. For detailed class and method documentation, see the Go API documentation. Source code can be found in the [GitHub repository](https://github.com/IBM/ibm-cos-sdk-go){: external}.
## Getting the SDK
{: #go-get-sdk}

Use `go get` to retrieve the SDK and add it to your `GOPATH` workspace or your project's Go module dependencies. The SDK requires a minimum version of Go 1.10 and a maximum version of Go 1.12. Future versions of Go will be supported once our quality control process has been completed.
```
go get github.com/IBM/ibm-cos-sdk-go
```
{: pre}
To update the SDK, use `go get -u` to retrieve the latest version of the SDK.
```
go get -u github.com/IBM/ibm-cos-sdk-go
```
{: pre}
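If you manage dependencies with Go modules, the SDK can also be pinned in your `go.mod` file. The following is a minimal sketch; the module path and version shown are illustrative placeholders, not authoritative values:

```
module example.com/my-cos-app

go 1.12

require github.com/IBM/ibm-cos-sdk-go v1.1.0 // version is illustrative only
```
{: codeblock}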
## Import packages
{: #go-import-packages}

After you have installed the SDK, import the packages that you require into your Go applications, as shown in the following example:
```go
import (
    "github.com/IBM/ibm-cos-sdk-go/aws"
    "github.com/IBM/ibm-cos-sdk-go/aws/credentials/ibmiam"
    "github.com/IBM/ibm-cos-sdk-go/aws/session"
    "github.com/IBM/ibm-cos-sdk-go/service/s3"
)
```
{: codeblock}
## Creating a client and sourcing credentials
{: #go-client-credentials}

To connect to {{site.data.keyword.cos_full_notm}}, create and configure a client by providing credential information (an API key and a service instance ID). These values can also be sourced automatically from a credentials file or from environment variables.

The credentials can be found by creating a Service Credential or by using the CLI.

Figure 1 shows an example of how to define environment variables in an application runtime in the {{site.data.keyword.cos_full_notm}} portal. The required variables are `IBM_API_KEY_ID`, containing the `apikey` from your Service Credential; `IBM_SERVICE_INSTANCE_ID`, holding the `resource_instance_id`, also from your Service Credential; and `IBM_AUTH_ENDPOINT`, with a value appropriate to your account, such as `https://iam.cloud.ibm.com/identity/token`. If you use environment variables to define your application credentials, use `WithCredentials(ibmiam.NewEnvCredentials(aws.NewConfig()))`, replacing the similar method used in the configuration example.

Figure 1. Environment Variables
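For example, a minimal configuration that sources credentials from those environment variables might look like the following sketch (the service endpoint is a placeholder):

```go
// Sketch: build a client configuration that reads IBM_API_KEY_ID,
// IBM_SERVICE_INSTANCE_ID, and IBM_AUTH_ENDPOINT from the environment.
conf := aws.NewConfig().
    WithEndpoint("<SERVICE_ENDPOINT>").
    WithCredentials(ibmiam.NewEnvCredentials(aws.NewConfig())).
    WithS3ForcePathStyle(true)
```
{: codeblock}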
### Initializing configuration
{: #go-init-config}

```go
// Constants for IBM COS values
const (
    apiKey            = "<API_KEY>"              // eg "0viPHOY7LbLNa9eLftrtHPpTjoGv6hbLD1QalRXikliJ"
    serviceInstanceID = "<RESOURCE_INSTANCE_ID>" // eg "crn:v1:bluemix:public:cloud-object-storage:global:a/<CREDENTIAL_ID_AS_GENERATED>:<SERVICE_ID_AS_GENERATED>::"
    authEndpoint      = "https://iam.cloud.ibm.com/identity/token"
    serviceEndpoint   = "<SERVICE_ENDPOINT>"     // eg "https://s3.us.cloud-object-storage.appdomain.cloud"
    bucketLocation    = "<LOCATION>"             // eg "us"
)

// Create config
conf := aws.NewConfig().
    WithRegion("us-standard").
    WithEndpoint(serviceEndpoint).
    WithCredentials(ibmiam.NewStaticCredentials(aws.NewConfig(), authEndpoint, apiKey, serviceInstanceID)).
    WithS3ForcePathStyle(true)
```
{: codeblock}
For more information about endpoints, see Endpoints and storage locations.
## Code Examples
{: #go-code-examples}

### Creating a new bucket
{: #go-new-bucket}

A list of valid provisioning codes for `LocationConstraint` can be referenced in the Storage Classes guide. Note that the sample uses the appropriate location constraint for Cold Vault storage based on the sample configuration; your locations and configuration may vary.
```go
func main() {
    // Create client
    sess := session.Must(session.NewSession())
    client := s3.New(sess, conf)

    // Bucket names
    newBucket := "<NEW_BUCKET_NAME>"
    newColdBucket := "<NEW_COLD_BUCKET_NAME>"

    // Create a bucket in the default location
    input := &s3.CreateBucketInput{
        Bucket: aws.String(newBucket),
    }
    client.CreateBucket(input)

    // Create a bucket with a Cold Vault location constraint
    input2 := &s3.CreateBucketInput{
        Bucket: aws.String(newColdBucket),
        CreateBucketConfiguration: &s3.CreateBucketConfiguration{
            LocationConstraint: aws.String("us-cold"),
        },
    }
    client.CreateBucket(input2)

    // List buckets to confirm creation
    d, _ := client.ListBuckets(&s3.ListBucketsInput{})
    fmt.Println(d)
}
```
{: codeblock}
### List available buckets
{: #go-list-buckets}
```go
func main() {
    // Create client
    sess := session.Must(session.NewSession())
    client := s3.New(sess, conf)

    // Call Function
    d, _ := client.ListBuckets(&s3.ListBucketsInput{})
    fmt.Println(d)
}
```
{: codeblock}
### Upload an object
{: #go-put-object}
```go
func main() {
    // Create client
    sess := session.Must(session.NewSession())
    client := s3.New(sess, conf)

    // Variables and sample content; replace as appropriate
    bucketName := "<BUCKET_NAME>"
    key := "<OBJECT_KEY>"
    content := bytes.NewReader([]byte("<CONTENT>"))

    input := s3.PutObjectInput{
        Bucket: aws.String(bucketName),
        Key:    aws.String(key),
        Body:   content,
    }

    // Call Function to upload (Put) an object
    result, _ := client.PutObject(&input)
    fmt.Println(result)
}
```
{: codeblock}
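The examples in this guide discard returned errors for brevity. In production code, check the `error` value from each call; a minimal sketch, using the `input` from the example above and the standard `log` package:

```go
// Sketch: handle the error returned by an SDK call instead of discarding it.
result, err := client.PutObject(&input)
if err != nil {
    log.Fatalf("unable to upload object: %v", err)
}
fmt.Println(result)
```
{: codeblock}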
### List items in a bucket (v2)
{: #go-list-objects-v2}
```go
func main() {
    // Create client
    sess := session.Must(session.NewSession())
    client := s3.New(sess, conf)

    // Bucket Name
    Bucket := "<BUCKET_NAME>"

    // Call Function
    Input := &s3.ListObjectsV2Input{
        Bucket: aws.String(Bucket),
    }
    l, e := client.ListObjectsV2(Input)
    fmt.Println(l)
    fmt.Println(e) // prints "<nil>" on success
}

// The response should be formatted like the following example:
// {
//     Contents: [{
//         ETag: "\"dbxxxxx53xxx7d06378204e3xxxxxx9f\"",
//         Key: "file1.json",
//         LastModified: 2019-10-15 22:22:52.62 +0000 UTC,
//         Size: 1045,
//         StorageClass: "STANDARD"
//     },{
//         ETag: "\"6e1xxxxx63xxxdefb440f72axxxxxxc2\"",
//         Key: "file2.json",
//         LastModified: 2019-10-15 23:08:10.074 +0000 UTC,
//         Size: 1045,
//         StorageClass: "STANDARD"
//     }],
//     Delimiter: "",
//     IsTruncated: false,
//     KeyCount: 2,
//     MaxKeys: 1000,
//     Name: "<BUCKET_NAME>",
//     Prefix: ""
// }
```
{: codeblock}
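When a bucket holds more objects than `MaxKeys`, the response is truncated (`IsTruncated: true`). The following is a sketch of paging through all results with the continuation token, assuming the client from the example above:

```go
// Sketch: page through all objects by following the continuation token.
input := &s3.ListObjectsV2Input{
    Bucket: aws.String("<BUCKET_NAME>"),
}
for {
    page, err := client.ListObjectsV2(input)
    if err != nil {
        break // handle the error appropriately in real code
    }
    for _, obj := range page.Contents {
        fmt.Println(*obj.Key)
    }
    if page.IsTruncated == nil || !*page.IsTruncated {
        break
    }
    input.ContinuationToken = page.NextContinuationToken
}
```
{: codeblock}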
### Get an object's contents
{: #go-get-object}
```go
func main() {
    // Create client
    sess := session.Must(session.NewSession())
    client := s3.New(sess, conf)

    // Variables: the bucket and object key must already exist
    bucketName := "<BUCKET_NAME>"
    key := "<OBJECT_KEY>"

    Input := s3.GetObjectInput{
        Bucket: aws.String(bucketName),
        Key:    aws.String(key),
    }

    // Call Function
    res, _ := client.GetObject(&Input)
    body, _ := ioutil.ReadAll(res.Body)
    fmt.Println(string(body))
}
```
{: codeblock}
### Delete an object
{: #go-delete-object}
```go
func main() {
    // Create client
    sess := session.Must(session.NewSession())
    client := s3.New(sess, conf)

    // Bucket Name
    bucket := "<BUCKET_NAME>"

    input := &s3.DeleteObjectInput{
        Bucket: aws.String(bucket),
        Key:    aws.String("<OBJECT_KEY>"),
    }
    d, _ := client.DeleteObject(input)
    fmt.Println(d)
}
```
{: codeblock}
### Delete multiple objects
{: #go-multidelete}
```go
func main() {
    // Create client
    sess := session.Must(session.NewSession())
    client := s3.New(sess, conf)

    // Bucket Name
    bucket := "<BUCKET_NAME>"

    input := &s3.DeleteObjectsInput{
        Bucket: aws.String(bucket),
        Delete: &s3.Delete{
            Objects: []*s3.ObjectIdentifier{
                {
                    Key: aws.String("<OBJECT_KEY1>"),
                },
                {
                    Key: aws.String("<OBJECT_KEY2>"),
                },
                {
                    Key: aws.String("<OBJECT_KEY3>"),
                },
            },
            Quiet: aws.Bool(false),
        },
    }
    d, _ := client.DeleteObjects(input)
    fmt.Println(d)
}
```
{: codeblock}
### Delete a bucket
{: #go-delete-bucket}
```go
func main() {
    // Bucket Name
    bucket := "<BUCKET_NAME>"

    // Create client
    sess := session.Must(session.NewSession())
    client := s3.New(sess, conf)

    input := &s3.DeleteBucketInput{
        Bucket: aws.String(bucket),
    }
    d, _ := client.DeleteBucket(input)
    fmt.Println(d)
}
```
{: codeblock}
### Run a multi-part upload
{: #go-multipart}
```go
func main() {
    // Variables
    bucket := "<BUCKET_NAME>"
    key := "<OBJECT_KEY>"
    content := bytes.NewReader([]byte("<CONTENT>"))

    input := s3.CreateMultipartUploadInput{
        Bucket: aws.String(bucket),
        Key:    aws.String(key),
    }

    // Create client
    sess := session.Must(session.NewSession())
    client := s3.New(sess, conf)

    // Step 1: start the multi-part upload
    upload, _ := client.CreateMultipartUpload(&input)

    // Step 2: upload a part, recording its ETag and part number
    uploadPartInput := s3.UploadPartInput{
        Bucket:     aws.String(bucket),
        Key:        aws.String(key),
        PartNumber: aws.Int64(int64(1)),
        UploadId:   upload.UploadId,
        Body:       content,
    }
    var completedParts []*s3.CompletedPart
    completedPart, _ := client.UploadPart(&uploadPartInput)
    completedParts = append(completedParts, &s3.CompletedPart{
        ETag:       completedPart.ETag,
        PartNumber: aws.Int64(int64(1)),
    })

    // Step 3: complete the upload with the list of parts
    completeMPUInput := s3.CompleteMultipartUploadInput{
        Bucket: aws.String(bucket),
        Key:    aws.String(key),
        MultipartUpload: &s3.CompletedMultipartUpload{
            Parts: completedParts,
        },
        UploadId: upload.UploadId,
    }
    d, _ := client.CompleteMultipartUpload(&completeMPUInput)
    fmt.Println(d)
}
```
{: codeblock}
## Using Key Protect
{: #go-examples-kp}

Key Protect can be added to a storage bucket to manage encryption keys. All data is encrypted in IBM COS, but Key Protect provides a centralized service for generating, rotating, and controlling access to encryption keys.
### Before you begin
{: #go-examples-kp-prereqs}

The following items are necessary to create a bucket with Key Protect enabled:

- A provisioned Key Protect service instance
- An available root key (either generated or imported)
### Retrieving the root key CRN
{: #go-examples-kp-root}

- Retrieve the instance ID for your Key Protect service.
- Use the Key Protect API to retrieve all your available keys. You can use either `curl` commands or an API REST client such as Postman to access the Key Protect API.
- Retrieve the CRN of the root key that you will use to enable Key Protect on your bucket. The CRN looks similar to the following example:

```
crn:v1:bluemix:public:kms:us-south:a/3d624cd74a0dea86ed8efe3101341742:90b6a1db-0fe1-4fe9-b91e-962c327df531:key:0bg3e33e-a866-50f2-b715-5cba2bc93234
```
{: screen}
### Creating a bucket with Key Protect enabled
{: #go-examples-kp-new-bucket}
```go
func main() {
    // Create client
    sess := session.Must(session.NewSession())
    client := s3.New(sess, conf)

    // Bucket Name
    newBucket := "<NEW_BUCKET_NAME>"
    fmt.Println("Creating new encrypted bucket:", newBucket)

    input := &s3.CreateBucketInput{
        Bucket:                      aws.String(newBucket),
        IBMSSEKPCustomerRootKeyCrn:  aws.String("<ROOT-KEY-CRN>"),
        IBMSSEKPEncryptionAlgorithm: aws.String("<ALGORITHM>"),
    }
    client.CreateBucket(input)

    // List Buckets
    d, _ := client.ListBuckets(&s3.ListBucketsInput{})
    fmt.Println(d)
}
```
{: codeblock}
Key Values

`<NEW_BUCKET_NAME>`
: The name of the new bucket.

`<ROOT-KEY-CRN>`
: The CRN of the root key that is obtained from the Key Protect service.

`<ALGORITHM>`
: The encryption algorithm that is used for new objects added to the bucket (the default is `AES256`).
### Using the transfer manager
{: #go-transfer}
```go
func main() {
    // Variables
    bucket := "<BUCKET_NAME>"
    key := "<OBJECT_KEY>"

    // Create client
    sess := session.Must(session.NewSession())
    client := s3.New(sess, conf)

    // Create an uploader with the S3 client and custom options
    uploader := s3manager.NewUploaderWithClient(client, func(u *s3manager.Uploader) {
        u.PartSize = 5 * 1024 * 1024 // 5MB per part
    })

    // Make a buffer of 15MB of random content
    buffer := make([]byte, 15*1024*1024)
    random := rand.New(rand.NewSource(time.Now().Unix()))
    random.Read(buffer)

    input := &s3manager.UploadInput{
        Bucket: aws.String(bucket),
        Key:    aws.String(key),
        Body:   io.ReadSeeker(bytes.NewReader(buffer)),
    }

    // Perform an upload
    d, _ := uploader.Upload(input)
    fmt.Println(d)

    // Perform an upload with options different from those in the Uploader
    f, _ := uploader.Upload(input, func(u *s3manager.Uploader) {
        u.PartSize = 10 * 1024 * 1024 // 10MB part size
        u.LeavePartsOnError = true    // Don't delete the parts if the upload fails
    })
    fmt.Println(f)
}
```
{: codeblock}
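The transfer manager can also download objects concurrently. The following is a minimal sketch, assuming the same client and using an in-memory buffer as the destination:

```go
// Sketch: download an object into memory with the transfer manager.
downloader := s3manager.NewDownloaderWithClient(client)
buf := aws.NewWriteAtBuffer([]byte{})
n, err := downloader.Download(buf, &s3.GetObjectInput{
    Bucket: aws.String("<BUCKET_NAME>"),
    Key:    aws.String("<OBJECT_KEY>"),
})
if err == nil {
    fmt.Println("downloaded", n, "bytes")
}
```
{: codeblock}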
### List buckets (extended)
{: #go-list-buckets-extended}
```go
func main() {
    // Create client
    sess := session.Must(session.NewSession())
    client := s3.New(sess, conf)

    input := new(s3.ListBucketsExtendedInput).SetMaxKeys(<MAX_KEYS>).SetMarker("<MARKER>").SetPrefix("<PREFIX>")
    output, _ := client.ListBucketsExtended(input)

    jsonBytes, _ := json.MarshalIndent(output, " ", " ")
    fmt.Println(string(jsonBytes))
}
```
{: codeblock}
Key Values

`<MAX_KEYS>`
: The maximum number of buckets to retrieve in the request.

`<MARKER>`
: The bucket name at which to start the listing (buckets before it are skipped).

`<PREFIX>`
: Only include buckets whose names start with this prefix.
#### List buckets (extended) with pagination
{: #go-list-buckets-extended-pagination}
```go
func main() {
    // Create client
    sess := session.Must(session.NewSession())
    client := s3.New(sess, conf)

    input := new(s3.ListBucketsExtendedInput).SetMaxKeys(<MAX_KEYS>).SetMarker("<MARKER>").SetPrefix("<PREFIX>")
    output, _ := client.ListBucketsExtended(input)
    for i, bucket := range output.Buckets {
        fmt.Println(i, "\t\t", *bucket.Name, "\t\t", *bucket.LocationConstraint, "\t\t", *bucket.CreationDate)
    }
}
```
{: codeblock}
Key Values

`<MAX_KEYS>`
: The maximum number of buckets to retrieve in the request.

`<MARKER>`
: The bucket name at which to start the listing (buckets before it are skipped).

`<PREFIX>`
: Only include buckets whose names start with this prefix.
## Archive Tier Support
{: #go-archive-tier-support}

You can automatically archive objects after a specified length of time or on a specified date. Once archived, a temporary copy of an object can be restored for access as needed. Note that restoring the temporary copy of an object can take up to 12 hours.

To use the example provided, supply your own configuration, including replacing `<apikey>` and other bracketed `<...>` values. Keep in mind that environment variables are more secure, and credentials should not be put in code that will be versioned.

An archive policy is set at the bucket level by calling the `PutBucketLifecycleConfiguration` method on a client instance. A newly added or modified archive policy applies to new objects uploaded and does not affect existing objects.
```go
func main() {
    // Create Client
    sess := session.Must(session.NewSession())
    client := s3.New(sess, conf)

    // PUT BUCKET LIFECYCLE CONFIGURATION
    // Replace <BUCKET_NAME> with the name of the bucket
    lInput := &s3.PutBucketLifecycleConfigurationInput{
        Bucket: aws.String("<BUCKET_NAME>"),
        LifecycleConfiguration: &s3.LifecycleConfiguration{
            Rules: []*s3.LifecycleRule{
                {
                    Status: aws.String("Enabled"),
                    Filter: &s3.LifecycleRuleFilter{},
                    ID:     aws.String("id3"),
                    Transitions: []*s3.Transition{
                        {
                            Days:         aws.Int64(5),
                            StorageClass: aws.String("Glacier"),
                        },
                    },
                },
            },
        },
    }
    l, e := client.PutBucketLifecycleConfiguration(lInput)
    fmt.Println(l) // should print empty braces
    fmt.Println(e) // should print <nil>

    // GET BUCKET LIFECYCLE CONFIGURATION
    gInput := &s3.GetBucketLifecycleConfigurationInput{
        Bucket: aws.String("<BUCKET_NAME>"),
    }
    g, e := client.GetBucketLifecycleConfiguration(gInput)
    fmt.Println(g)
    fmt.Println(e) // see response for results

    // RESTORE OBJECT
    // Replace <OBJECT_KEY> with the appropriate key
    rInput := &s3.RestoreObjectInput{
        Bucket: aws.String("<BUCKET_NAME>"),
        Key:    aws.String("<OBJECT_KEY>"),
        RestoreRequest: &s3.RestoreRequest{
            Days: aws.Int64(100),
            GlacierJobParameters: &s3.GlacierJobParameters{
                Tier: aws.String("Bulk"),
            },
        },
    }
    r, e := client.RestoreObject(rInput)
    fmt.Println(r)
    fmt.Println(e)
}
```
{: codeblock}
A typical response looks like the following example:

```
{
    Rules: [{
        Filter: {
        },
        ID: "id3",
        Status: "Enabled",
        Transitions: [{
            Days: 5,
            StorageClass: "GLACIER"
        }]
    }]
}
```
{: codeblock}
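To check whether a restore has completed, one option is a `HeadObject` request. The following is a hedged sketch, assuming the `Restore` field is populated while a restore is in progress or available:

```go
// Sketch: inspect the restore status of an archived object.
h, err := client.HeadObject(&s3.HeadObjectInput{
    Bucket: aws.String("<BUCKET_NAME>"),
    Key:    aws.String("<OBJECT_KEY>"),
})
if err == nil && h.Restore != nil {
    fmt.Println(*h.Restore) // for example: ongoing-request="false", expiry-date="..."
}
```
{: codeblock}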
## Immutable Object Storage
{: #go-immutable-object-storage}

Users can configure buckets with an Immutable Object Storage policy to prevent objects from being modified or deleted for a defined period of time. The retention period can be specified on a per-object basis, or objects can inherit a default retention period set on the bucket. It is also possible to set open-ended and permanent retention periods. Immutable Object Storage meets the rules set forth by the SEC governing record retention, and IBM Cloud administrators are unable to bypass these restrictions.

Immutable Object Storage does not currently support Aspera transfers via the SDK to upload objects or directories.
{: note}
```go
func main() {
    // Create Client
    sess := session.Must(session.NewSession())
    client := s3.New(sess, conf)

    // Create a bucket
    input := &s3.CreateBucketInput{
        Bucket: aws.String("<BUCKET_NAME>"),
    }
    d, e := client.CreateBucket(input)
    fmt.Println(d) // should print empty braces
    fmt.Println(e) // should print <nil>

    // PUT BUCKET PROTECTION CONFIGURATION
    pInput := &s3.PutBucketProtectionConfigurationInput{
        Bucket: aws.String("<BUCKET_NAME>"),
        ProtectionConfiguration: &s3.ProtectionConfiguration{
            DefaultRetention: &s3.BucketProtectionDefaultRetention{
                Days: aws.Int64(100),
            },
            MaximumRetention: &s3.BucketProtectionMaximumRetention{
                Days: aws.Int64(1000),
            },
            MinimumRetention: &s3.BucketProtectionMinimumRetention{
                Days: aws.Int64(10),
            },
            Status: aws.String("Retention"),
        },
    }
    p, e := client.PutBucketProtectionConfiguration(pInput)
    fmt.Println(p)
    fmt.Println(e) // see response for results

    // GET BUCKET PROTECTION CONFIGURATION
    gInput := &s3.GetBucketProtectionConfigurationInput{
        Bucket: aws.String("<BUCKET_NAME>"),
    }
    g, e := client.GetBucketProtectionConfiguration(gInput)
    fmt.Println(g)
    fmt.Println(e)
}
```
{: codeblock}
A typical response looks like the following example:

```
{
    ProtectionConfiguration: {
        DefaultRetention: {
            Days: 100
        },
        MaximumRetention: {
            Days: 1000
        },
        MinimumRetention: {
            Days: 10
        },
        Status: "COMPLIANCE"
    }
}
```
{: codeblock}
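Individual objects can also carry their own retention period. The following is a hedged sketch that uses the SDK's IBM protection extension fields; the `RetentionPeriod` parameter (in seconds) is an assumption based on the bucket protection support shown above, so verify it against the SDK's API documentation:

```go
// Sketch: upload an object with a per-object retention period (in seconds),
// overriding the bucket's default retention. RetentionPeriod is an assumed
// IBM extension field; verify against the SDK's API documentation.
input := &s3.PutObjectInput{
    Bucket:          aws.String("<BUCKET_NAME>"),
    Key:             aws.String("<OBJECT_KEY>"),
    Body:            bytes.NewReader([]byte("<CONTENT>")),
    RetentionPeriod: aws.Int64(86400), // one day
}
r, err := client.PutObject(input)
fmt.Println(r, err)
```
{: codeblock}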
### Creating a bucket to host a static website
{: #go-guide-hosted-static-website-create}

This operation requires permissions, as only the bucket owner is typically permitted to configure a bucket to host a static website. The parameters determine the default suffix served to visitors of the site, as well as an optional error document, included here to complete the example.
```go
func main() {
    // Create Client
    sess := session.Must(session.NewSession())
    client := s3.New(sess, conf)

    // Create a bucket
    input := &s3.CreateBucketInput{
        Bucket: aws.String("<BUCKET_NAME>"),
    }
    d, e := client.CreateBucket(input)
    fmt.Println(d) // should print empty braces
    fmt.Println(e) // should print <nil>

    // PUT BUCKET WEBSITE
    pInput := s3.PutBucketWebsiteInput{
        Bucket: aws.String("<BUCKET_NAME>"),
        WebsiteConfiguration: &s3.WebsiteConfiguration{
            IndexDocument: &s3.IndexDocument{
                Suffix: aws.String("index.html"),
            },
        },
    }
    pInput.WebsiteConfiguration.ErrorDocument = &s3.ErrorDocument{
        Key: aws.String("error.html"),
    }
    p, e := client.PutBucketWebsite(&pInput)
    fmt.Println(p)
    fmt.Println(e) // see response for results
}
```
{: codeblock}
## Next Steps
{: #go-next-steps}

If you haven't already, see the detailed class and method documentation available in the Go API documentation.