diff --git a/RELEASENOTES.md b/RELEASENOTES.md
index 55ba10d5..458848ee 100644
--- a/RELEASENOTES.md
+++ b/RELEASENOTES.md
@@ -1,5 +1,35 @@
# Release Notes for pyrax
+### 2013.10.24 - Version 1.6.0
+ - New:
+ - Added support for **Cloud Queues** (Marconi).
+ - Cloud Files:
+ - Fixed an issue where the `last_modified` datetime values for Cloud Files
+ storage_objects were returned inconsistently.
+ - Added ability to cache `temp_url_key`. GitHub #221
+ - Added ability to do partial downloads. GitHub #150
+ - Fixed an issue where calling `delete_object_in_seconds()` deleted existing
+ metadata. GitHub #135
+ - Cloud Databases:
+ - Added missing pagination parameters to several methods. GitHub #226
+ - Cloud DNS:
+ - Changed the `findall()` method to be case-insensitive.
+ - Fixed some error-handling issues. GitHub #219
+ - Auto Scale:
+ - Added code to force 'flavor' arguments to `str` type.
+ - Fixed creation/retrieval of webhooks with policy ID.
+ - Added several replacement methods for configurations.
+ - Load Balancers:
+ - Removed requirement that nodes be passed when creating a load balancer.
+ GitHub #222
+ - Testing:
+ - Improved the smoketest.py integration test script by adding more services.
+ - Fixed the smoketest to work when running in multiple regions that don't
+ all offer the same services.
+ - General:
+ - Refactored the `_create_body()` method from the `BaseClient` class to the
+ `BaseManager` class.
+
### 2013.10.04 - Version 1.5.1
- Pyrax in general:
- Moved the `get_limits()` behavior to the base client (Nate House)
diff --git a/docs/autoscaling.md b/docs/autoscaling.md
index e87c471f..50771c65 100644
--- a/docs/autoscaling.md
+++ b/docs/autoscaling.md
@@ -1,30 +1,30 @@
-# Autoscaling
+# Auto Scale
## Basic Concepts
-Autoscale is a service that enables you to scale your application by adding or removing servers based on monitoring events, a schedule, or arbitrary webhooks.
+Auto Scale is a service that enables you to scale your application by adding or removing servers based on monitoring events, a schedule, or arbitrary webhooks.
Please note that _this is a Rackspace-specific service_. It is not available in any other OpenStack cloud, so if you add it to your application, keep the code isolated if you need to run your application on non-Rackspace clouds.
-Autoscale functions by linking three services:
+Auto Scale functions by linking three services:
* Monitoring (such as Monitoring as a Service)
-* Autoscale API
+* Auto Scale API
* Servers and Load Balancers
## Workflow
-An Autoscaling group is monitored by Rackspace Cloud Monitoring. When Monitoring triggers an alarm for high utilization within the Autoscaling group, a webhook is triggered. The webhook calls the autoscale service, which consults a policy in accordance with the webhook. The policy determines how many additional Cloud Servers should be added or removed in accordance with the alarm.
+A _scaling group_ is monitored by Rackspace Cloud Monitoring. When Monitoring triggers an alarm for high utilization within the scaling group, a webhook is triggered. The webhook calls the Auto Scale service, which consults a policy in accordance with the webhook. The policy determines how many additional Cloud Servers should be added or removed in accordance with the alarm.
-Alarms may trigger scaling up or scaling down. Scale down events always remove the oldest server in the group.
+Alarms may trigger scaling up or scaling down. Scale-down events always remove the oldest server in the group.
Cooldowns allow you to ensure that you don't scale up or down too fast. When a scaling policy runs, both the scaling policy cooldown and the group cooldown start. Any additional requests to the group are discarded while the group cooldown is active. Any additional requests to the specific policy are discarded when the policy cooldown is active.
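The interaction of the two cooldowns described above can be sketched in a few lines. This is an illustration of the documented semantics only, not the actual Auto Scale implementation; the class and method names are hypothetical:

```python
import time

class CooldownGate:
    """Illustrative only: requests during an active cooldown are discarded."""

    def __init__(self, group_cooldown, policy_cooldown):
        self.group_cooldown = group_cooldown
        self.policy_cooldown = policy_cooldown
        self.group_last = None
        self.policy_last = {}

    def try_execute(self, policy_id, now=None):
        now = time.time() if now is None else now
        # Any request to the group is discarded while the group cooldown is active.
        if self.group_last is not None and now - self.group_last < self.group_cooldown:
            return False
        # Requests to a specific policy are discarded while that policy's cooldown is active.
        last = self.policy_last.get(policy_id)
        if last is not None and now - last < self.policy_cooldown:
            return False
        # When a policy runs, both the policy cooldown and the group cooldown start.
        self.group_last = now
        self.policy_last[policy_id] = now
        return True
```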
-It is important to remember that Autoscale does not configure anything within a server. This means that all images should be self-provisioning. It is up to you to make sure that your services are configured to function properly when the server is started. We recommend using something like Chef, Salt, or Puppet.
+It is important to remember that Auto Scale does not configure anything within a server. This means that all images should be self-provisioning. It is up to you to make sure that your services are configured to function properly when the server is started. We recommend using something like Chef, Salt, or Puppet.
-## Using Autoscaling in pyrax
-Once you have authenticated, you can reference the Autoscaling service via `pyrax.autoscale`. That is a lot to type over and over in your code, so it is easier if you include the following line at the beginning of your code:
+## Using Auto Scale in pyrax
+Once you have authenticated, you can reference the Auto Scale service via `pyrax.autoscale`. That is a lot to type again and again in your code, so it is easier if you include the following line at the beginning of your code:
au = pyrax.autoscale
@@ -32,10 +32,10 @@ Then you can simply use the alias `au` to reference the service. All of the code
## The Scaling Group
-The **Scaling Group** is the basic unit of Autoscaling. It determines the minimum and maximum number of servers that exist at any time for the group, the cooldown period between Autoscaling events, the configuration for each new server, the load balancer to add these servers to (optional), and any policies that are used for this group.
+The **scaling group** is the basic unit of Auto Scale. It determines the minimum and maximum number of servers that exist at any time for the group, the cooldown period between scaling events, the configuration for each new server, the load balancer to add these servers to (optional), and any policies that are used for this group.
### Listing Your Scaling Groups
-The `list()` method displays all the Scaling Groups currently defined in your account:
+The `list()` method displays all the scaling groups currently defined in your account:
print au.list()
@@ -50,7 +50,7 @@ This returns a list of `ScalingGroup` objects:
pendingCapacity=0, name=SecondTest, cooldown=90, metadata={},
min_entities=2, max_entities=5>]
-To see the [launch configuration](#launch-configuration) for a group, call the `get_launch_config()` method:
+To see the [launch configuration](#launch_configuration) for a group, call the `get_launch_config()` method:
groups = au.list()
group = groups[0]
@@ -87,8 +87,8 @@ The `active` key holds a list of the IDs of the servers created as part of this
Key | Represents
---- | ----
-**active_capacity** | The number of active servers that are part of this scaling group
-**desired_capacity** | The target number of servers for this scaling group, based on the combination of configuration settings and monitoring alarm responses
+**active_capacity** | The number of active servers that are part of this scaling group.
+**desired_capacity** | The target number of servers for this scaling group, based on the combination of configuration settings and monitoring alarm responses.
**pending_capacity** | The number of servers which are in the process of being created (when positive) or destroyed (when negative).
### Pausing a Scaling Group's Policies
@@ -113,9 +113,9 @@ To create a scaling group, you call the `create()` method of the client with the
disk_config="AUTO", metadata={"mykey": "myvalue"},
load_balancers=(1234, 80))
-This creates the Scaling Group with the name "MyScalingGroup", and returns a `ScalingGroup` object representing the new group. Since the `min_entities` is 2, it immediately creates 2 servers for the group, based on the image whose ID is in the variable `my_image_id`. When they are created, they are then added to the load balancer whose ID is `1234`, and receive requests on port 80.
+This creates the scaling group with the name "MyScalingGroup", and returns a `ScalingGroup` object representing the new group. Since the `min_entities` is 2, it immediately creates 2 servers for the group, based on the image whose ID is in the variable `my_image_id`. When they are created, they are then added to the load balancer whose ID is `1234`, and receive requests on port 80.
-Note that the `server_name` parameter represents a base string to which Autoscale prepends a 10-character prefix to create a unique name for each server. The prefix always begins with 'as' and is followed by 8 random hex digits. For example, if you set the server_name to 'testgroup', and the scaling group creates 3 servers, their names would look like these:
+Note that the `server_name` parameter represents a base string to which Auto Scale prepends a 10-character prefix to create a unique name for each server. The prefix always begins with 'as' and is followed by 8 random hex digits and a dash (-). For example, if you set the server_name to 'testgroup', and the scaling group creates 3 servers, their names would look like these:
as5defddd4-testgroup
as92e512fe-testgroup
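The naming scheme described above can be mimicked with a short sketch. This is not code from Auto Scale itself, just an illustration of the documented format (`as`, 8 random hex digits, a dash, then the base name):

```python
import random
import re

def scaled_server_name(base):
    """Build a name in the documented Auto Scale format: a 10-character
    prefix ('as' plus 8 random hex digits) followed by a dash and the base."""
    prefix = "as" + "".join(random.choice("0123456789abcdef") for _ in range(8))
    return "%s-%s" % (prefix, base)
```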
@@ -125,16 +125,16 @@ Note that the `server_name` parameter represents a base string to which Autoscal
Parameter | Required | Default | Notes
---- | ---- | ---- | ----
**name** | yes | |
-**cooldown** | yes | | Period in seconds after a scaling event in which further events are ignored
+**cooldown** | yes | | Period in seconds after a scaling event in which further events are ignored.
**min_entities** | yes | |
**max_entities** | yes | |
-**launch_config_type** | yes | | Only option currently is`launch_server`
-**flavor** | yes | | Flavor to use for each server that is launched
+**launch_config_type** | yes | | Only option currently is `launch_server`.
+**flavor** | yes | | Flavor to use for each server that is launched.
**server_name** | yes | | The base name for servers created by Auto Scale.
**image** | yes | | Either a Cloud Servers Image object, or its ID. This is the image that all new servers are created from.
**disk_config** | no | MANUAL | Determines if the server's disk is partitioned to the full size of the flavor ('AUTO') or just to the size of the image ('MANUAL').
**metadata** | no | | Arbitrary key-value pairs you want to associate with your servers.
-**personality** | no | | Small text files that are created on the new servers. _Personality_ is discussed in the [Rackspace Cloud Servers documentation](http://docs.rackspace.com/servers/api/v2/cs-devguide/content/Server_Personality-d1e2543.html)
+**personality** | no | | Small text files that are created on the new servers. _Personality_ is discussed in the [Rackspace Cloud Servers documentation](http://docs.rackspace.com/servers/api/v2/cs-devguide/content/Server_Personality-d1e2543.html).
**networks** | no | | The networks to which you want to attach new servers. See the [Create Servers documentation](http://docs.rackspace.com/servers/api/v2/cs-devguide/content/CreateServers.html) for the required format.
**load_balancers** | no | | Either a list of (id, port) tuples or a single such tuple, representing the load balancer(s) to add the new servers to.
**scaling_policies** | no | | You can define the scaling policies when you create the group, or add them later.
@@ -144,11 +144,11 @@ You can modify the settings for a scaling group by calling its `update()` method
sg.update(cooldown=120, max_entities=16)
-where `sg` is a reference to the scaling group. Similarly, you can make the call on the autoscale client itself, passing in the reference to the scaling group you wish to update:
+where `sg` is a reference to the scaling group. Similarly, you can make the call on the Auto Scale client itself, passing in the reference to the scaling group you wish to update:
au.update(sg, cooldown=120, max_entities=16)
-**Note**: If you pass any metadata values in this call, it must be the full set of metadata for the Scaling Group, since the underlying API call **overwrites** any existing metadata. If you simply wish to update an existing metadata key, or add a new key/value pair, you must call the `update_metadata(new_meta)` method instead. This call preserves your existing key/value pairs, and only updates it with your changes.
+**Note**: If you pass any metadata values in this call, it must be the full set of metadata for the scaling group, since the underlying API call **overwrites** any existing metadata. If you simply wish to update an existing metadata key, or add a new key/value pair, you must call the `update_metadata(new_meta)` method instead. This call preserves your existing key/value pairs, and only updates it with your changes.
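The overwrite-versus-merge distinction in the note above can be pictured with plain dictionaries. This illustrates the semantics only; it is not pyrax internals:

```python
existing = {"owner": "webteam", "env": "prod"}

# update(metadata=...) behaves like a replacement: the underlying API call
# overwrites the full metadata set, so any key you omit is lost.
replaced = {"env": "staging"}          # "owner" is gone

# update_metadata(new_meta) behaves like a merge: existing key/value pairs
# are preserved, and only the keys you pass are changed or added.
merged = dict(existing)
merged.update({"env": "staging"})
```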
### Deleting a Scaling Group
To remove a scaling group, call its `delete()` method:
@@ -166,10 +166,10 @@ Note: you cannot delete a scaling group that has active servers in it. You must
Once the servers are deleted you can then delete the scaling group.
-## Launch Configurations
+## Launch Configurations
Each scaling group has an associated **launch configuration**. This determines the properties of servers that are created in response to a scaling event.
-The `server_name` represents a base string to which Autoscale prepends a 10-character prefix. The prefix always begins with 'as' and is followed by 8 random hex digits. For example, if you set the `server_name` to 'testgroup', and the scaling group creates 3 servers, their names would look like these:
+The `server_name` represents a base string to which Auto Scale prepends a 10-character prefix. The prefix always begins with 'as' and is followed by 8 random hex digits and a dash (-). For example, if you set the `server_name` to 'testgroup', and the scaling group creates 3 servers, their names would look like these:
as5defddd4-testgroup
as92e512fe-testgroup
@@ -186,11 +186,11 @@ You can also modify the launch configuration for your scaling group by calling t
sg.update_launch_config(image=new_image_id)
-You may also make the call on the autoscale client itself, passing in the scaling group you want to modify:
+You may also make the call on the Auto Scale client itself, passing in the scaling group you want to modify:
au.update_launch_config(sg, image=new_image_id)
-**Note**: If you pass any metadata values in this call, it must be the full set of metadata for the Launch Configuration, since the underlying API call **overwrites** any existing metadata. If you simply wish to update an existing metadata key in your launch configuration, or add a new key/value pair, you must call the `update_launch_metadata()` method instead. This call preserves your existing key/value pairs, and only updates with your changes.
+**Note**: If you pass any metadata values in this call, it must be the full set of metadata for the launch configuration, since the underlying API call **overwrites** any existing metadata. If you simply wish to update an existing metadata key in your launch configuration, or add a new key/value pair, you must call the `update_launch_metadata()` method instead. This call preserves your existing key/value pairs, and only updates them with your changes.
## Policies
@@ -217,7 +217,7 @@ To add a policy to a scaling group, call the `add_policy()` method:
Parameter | Required | Default | Notes
---- | ---- | ---- | ----
**name** | yes | |
-**policy_type** | yes | | Only available type now is 'webhook'
+**policy_type** | yes | | Only available type now is 'webhook'.
**cooldown** | yes | | Period in seconds after a policy execution in which further events are ignored. This is separate from the overall cooldown for the scaling group.
**change** | yes | | Can be positive or negative, which makes this a scale-up or scale-down policy, respectively.
**is_percent** | no | False | Determines whether the value passed in the `change` parameter is interpreted as an absolute number or a percentage.
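As a rough sketch of how `change` and `is_percent` combine, the arithmetic might look like the following. This is illustrative only: the rounding behavior and clamping shown here are assumptions, and the service applies its own rules alongside the group's `min_entities`/`max_entities` bounds:

```python
def apply_change(current, change, is_percent=False, min_entities=0, max_entities=25):
    """Illustrative: compute a new target server count from a policy's change value."""
    if is_percent:
        # A percentage change is relative to the current count (rounding assumed).
        delta = int(round(current * change / 100.0))
    else:
        # An absolute change is applied directly; negative values scale down.
        delta = change
    # Clamp to the scaling group's configured bounds.
    return max(min_entities, min(max_entities, current + delta))
```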
@@ -227,7 +227,7 @@ You may update a policy at any time, passing in any or all of the above paramete
policy.update(cooldown=60, change=-3)
-You may also call the `update_policy()` method of either the scaling group for this policy, or the autoscale client itself. Either of the following two calls is equivalent to the call above:
+You may also call the `update_policy()` method of either the scaling group for this policy, or the Auto Scale client itself. Either of the following two calls is equivalent to the call above:
sg.update_policy(policy, cooldown=60, change=-3)
# or
@@ -286,7 +286,7 @@ You may update a webhook at any time to change either its name or its metadata:
webhook.update(name="something_new",
metadata={"owner": "webteam"})
-You may also call the `update_webhook()` method of either the policy for this webhook, or the scaling group for that policy, or the autoscale client itself. Any of the following calls is equivalent to the call above:
+You may also call the `update_webhook()` method of either the policy for this webhook, or the scaling group for that policy, or the Auto Scale client itself. Any of the following calls is equivalent to the call above:
policy.update_webhook(webhook, name="something_new",
metadata={"owner": "webteam"})
@@ -297,7 +297,7 @@ You may also call the `update_webhook()` method of either the policy for this we
au.update_webhook(sg, policy, webhook, name="something_new",
metadata={"owner": "webteam"})
-**Note**: If you pass any metadata values in this call, it must be the full set of metadata for the Webhook, since the underlying API call **overwrites** any existing metadata. If you simply wish to update an existing metadata key, or add a new key/value pair, you must call the `webhook.update_metadata(new_meta)` method instead (or the corresponding `au.update_webhook_metadata(sg, policy, webhook, new_meta)`). This call preserves your existing key/value pairs, and only updates it with your changes.
+**Note**: If you pass any metadata values in this call, it must be the full set of metadata for the webhook, since the underlying API call **overwrites** any existing metadata. If you simply wish to update an existing metadata key, or add a new key/value pair, you must call the `webhook.update_metadata(new_meta)` method instead (or the corresponding `au.update_webhook_metadata(sg, policy, webhook, new_meta)`). This call preserves your existing key/value pairs, and only updates it with your changes.
### Deleting a webhook
When you wish to remove a webhook, call its `delete()` method:
@@ -312,10 +312,3 @@ You can also call the `delete_webhook()` method of the webhook's policy, or the
# or
au.delete_webhook(sg, policy, webhook)
-
-
-
-
-
-
-
diff --git a/docs/cloud_loadbalancers.md b/docs/cloud_loadbalancers.md
index f104c3d4..c0c09636 100644
--- a/docs/cloud_loadbalancers.md
+++ b/docs/cloud_loadbalancers.md
@@ -9,7 +9,7 @@ Once you have authenticated and connected to the load balancer service, you can
For the sake of brevity and convenience, it is common to define abbreviated aliases for the modules. All the code in the document assumes that at the top of your script, you have added the following lines:
- clb = pyrax.cloudloadbalancers
+ clb = pyrax.cloud_loadbalancers
cs = pyrax.cloudservers
@@ -149,7 +149,7 @@ DNS_TCP | This protocol works with IPv6 and allows your DNS server to receive tr
DNS_UDP | This protocol works with IPv6 and allows your DNS server to receive traffic using UDP port 53.
FTP | The File Transfer Protocol defines how files are transported over the Internet. It is typically used when downloading or uploading files to or from web servers.
HTTP | The Hypertext Transfer Protocol defines how communications occur on the Internet between clients and web servers. For example, if you request a web page in your browser, HTTP defines how the web server fetches the page and returns it to your browser.
-HTTPS | The Hypertext Transfer Protocol over Secure Socket Layer (SSL) provides encrypted communication over the Internet. It securely verifies the authenticity of the web server you are communicating with.
+HTTPS | The Hypertext Transfer Protocol over Secure Socket Layer (SSL) provides encrypted communication over the Internet. It securely verifies the authenticity of the web server you are communicating with.
IMAPS | The Internet Message Application Protocol over Secure Socket Layer (SSL) defines how an email client, such as Microsoft Outlook, retrieves and transfers email messages with a mail server.
IMAPv2 | Version 2 of IMAPS.
IMAPv3 | Version 3 of IMAPS.
@@ -163,7 +163,7 @@ SFTP | The SSH File Transfer Protocol is a secure file transfer and management p
SMTP | The Simple Mail Transfer Protocol is used by electronic mail servers to send and receive email messages. Email clients use this protocol to relay messages to another computer or web server, but use IMAP or POP to send and receive messages.
TCP | The Transmission Control Protocol is a part of the Transport Layer protocol and is one of the core protocols of the Internet Protocol Suite. It provides a reliable, ordered delivery of a stream of bytes from one program on a computer to another program on another computer. Applications that require an ordered and reliable delivery of packets use this protocol.
TCP_CLIE (TCP_CLIENT_FIRST) | This protocol is similar to TCP, but is more efficient when a client is expected to write the data first.
-UDP | The User Datagram Protocol provides a datagram service that emphasizes speed over reliability, It works well with applications that provide security through other measures.
+UDP | The User Datagram Protocol provides a datagram service that emphasizes speed over reliability. It works well with applications that provide security through other measures.
UDP_STRE (UDP_STREAM) | This protocol is designed to stream media over networks and is built on top of UDP.
diff --git a/docs/cloud_servers.md b/docs/cloud_servers.md
index 74a09818..b96249e7 100644
--- a/docs/cloud_servers.md
+++ b/docs/cloud_servers.md
@@ -2,6 +2,8 @@
----
+*Note: pyrax works with OpenStack-based clouds. Rackspace's "First Generation" servers are based on a different API, and are not supported.*
+
## Listing Servers
Start by listing all the servers in your account:
diff --git a/docs/getting_started.md b/docs/getting_started.md
index c92c2ad0..61ae434e 100644
--- a/docs/getting_started.md
+++ b/docs/getting_started.md
@@ -4,7 +4,7 @@
## Getting Started With pyrax
-**pyrax** is the Python language binding for **OpenStack** and the **Rackspace Cloud**. By installing pyrax, you have the ability to build on any OpenStack cloud using standard Python objects and code.
+**pyrax** is the Python language binding for **OpenStack** and the **Rackspace Cloud**. By installing pyrax, you have the ability to build on any OpenStack cloud using standard Python objects and code. *Note: since pyrax works with the OpenStack API, it does not support Rackspace's "First Generation" Cloud Servers, which are based on a different technology.*
## Prerequisites
diff --git a/docs/html/____init_____8py.html b/docs/html/____init_____8py.html
index f426e001..36c61fc6 100644
--- a/docs/html/____init_____8py.html
+++ b/docs/html/____init_____8py.html
@@ -143,6 +143,8 @@
Pauses all execution of the policies for the specified scaling group.
+
+
+Pauses all execution of the policies for the specified scaling group.
+
+
+Gets a list of all domains, or optionally a page of domains.
-Gets a specific item.
-Adds an instance method to an object.
+
+
+
- def pyrax.utils.random_name
+ def pyrax.utils.random_ascii
(
length = 20
,
@@ -511,7 +545,27 @@
Generates a random name; useful for testing.
-
By default it will return an encoded string containing unicode values up to code point 1000. If you only need or want ASCII values, pass True to the ascii_only parameter.
+
Returns a string of the specified length containing only ASCII characters.
+
+
+
+
+
+
+
+
+
Generates a random name; useful for testing.
+
Returns an encoded string of the specified length containing unicode values up to code point 1000.
@@ -846,7 +900,7 @@
diff --git a/docs/html/namespacepyrax_1_1version.html b/docs/html/namespacepyrax_1_1version.html
index 5a542616..92c05a31 100644
--- a/docs/html/namespacepyrax_1_1version.html
+++ b/docs/html/namespacepyrax_1_1version.html
@@ -105,7 +105,7 @@
Variable Documentation
@@ -113,7 +113,7 @@
@@ -139,7 +139,7 @@
diff --git a/docs/html/namespaces.html b/docs/html/namespaces.html
index 1108c6ab..f680c0c1 100644
--- a/docs/html/namespaces.html
+++ b/docs/html/namespaces.html
@@ -101,6 +101,7 @@
pyrax::identity::keystone_identity
pyrax::identity::rax_identity
pyrax::manager
+ pyrax::queueing
pyrax::resource
pyrax::service_catalog
pyrax::utils
@@ -125,7 +126,7 @@
diff --git a/docs/html/queueing_8py.html b/docs/html/queueing_8py.html
new file mode 100644
index 00000000..dac0a1c6
--- /dev/null
+++ b/docs/html/queueing_8py.html
@@ -0,0 +1,143 @@
+
+
+
+
+
+pyrax: pyrax/queueing.py File Reference
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ pyrax
+
+
+ Python Bindings for the Rackspace Cloud
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/docs/html/search/all_5f.js b/docs/html/search/all_5f.js
index a4e32a64..7b04ad7a 100644
--- a/docs/html/search/all_5f.js
+++ b/docs/html/search/all_5f.js
@@ -5,7 +5,7 @@ var searchData=
['_5f_5feq_5f_5f',['__eq__',['../classpyrax_1_1cloudloadbalancers_1_1Node.html#a449f8fd74d358c0ad641b6c6d6917ba0',1,'pyrax::cloudloadbalancers::Node.__eq__()'],['../classpyrax_1_1resource_1_1BaseResource.html#a449f8fd74d358c0ad641b6c6d6917ba0',1,'pyrax::resource::BaseResource.__eq__()']]],
['_5f_5fexit_5f_5f',['__exit__',['../classpyrax_1_1utils_1_1SelfDeletingTempfile.html#a6de07022804200d0fb6383c0a237ee8e',1,'pyrax::utils::SelfDeletingTempfile.__exit__()'],['../classpyrax_1_1utils_1_1SelfDeletingTempDirectory.html#a6de07022804200d0fb6383c0a237ee8e',1,'pyrax::utils::SelfDeletingTempDirectory.__exit__()']]],
['_5f_5fgetattr_5f_5f',['__getattr__',['../classpyrax_1_1resource_1_1BaseResource.html#a0a990b3ec3889d40889daca9ee5e4695',1,'pyrax::resource::BaseResource']]],
- ['_5f_5finit_5f_5f',['__init__',['../classpyrax_1_1autoscale_1_1ScalingGroup.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::autoscale::ScalingGroup.__init__()'],['../classpyrax_1_1autoscale_1_1ScalingGroupManager.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::autoscale::ScalingGroupManager.__init__()'],['../classpyrax_1_1autoscale_1_1AutoScalePolicy.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::autoscale::AutoScalePolicy.__init__()'],['../classpyrax_1_1autoscale_1_1AutoScaleWebhook.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::autoscale::AutoScaleWebhook.__init__()'],['../classpyrax_1_1base__identity_1_1BaseAuth.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::base_identity::BaseAuth.__init__()'],['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cf_wrapper::client::CFClient.__init__()'],['../classpyrax_1_1cf__wrapper_1_1client_1_1Connection.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cf_wrapper::client::Connection.__init__()'],['../classpyrax_1_1cf__wrapper_1_1client_1_1FolderUploader.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cf_wrapper::client::FolderUploader.__init__()'],['../classpyrax_1_1cf__wrapper_1_1client_1_1BulkDeleter.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cf_wrapper::client::BulkDeleter.__init__()'],['../classpyrax_1_1cf__wrapper_1_1container_1_1Container.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cf_wrapper::container::Container.__init__()'],['../classpyrax_1_1cf__wrapper_1_1storage__object_1_1StorageObject.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cf_wrapper::storage_object::StorageObject.__init__()'],['../classpyrax_1_1client_1_1BaseClient.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::client::BaseClient.__init__()'],['../classpyrax_1_1cloudblockstorage_1_1CloudBlockStorageVolume.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cloudblockstorage::CloudBlockStorageVolume.__init__()'],['../classpyrax_1_1clouddatabases_1_1CloudDatabaseVolume.h
tml#ac775ee34451fdfa742b318538164070e',1,'pyrax::clouddatabases::CloudDatabaseVolume.__init__()'],['../classpyrax_1_1clouddatabases_1_1CloudDatabaseInstance.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::clouddatabases::CloudDatabaseInstance.__init__()'],['../classpyrax_1_1clouddns_1_1CloudDNSPTRRecord.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::clouddns::CloudDNSPTRRecord.__init__()'],['../classpyrax_1_1clouddns_1_1CloudDNSManager.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::clouddns::CloudDNSManager.__init__()'],['../classpyrax_1_1clouddns_1_1ResultsIterator.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::clouddns::ResultsIterator.__init__()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancer.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cloudloadbalancers::CloudLoadBalancer.__init__()'],['../classpyrax_1_1cloudloadbalancers_1_1Node.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cloudloadbalancers::Node.__init__()'],['../classpyrax_1_1cloudloadbalancers_1_1VirtualIP.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cloudloadbalancers::VirtualIP.__init__()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.__init__()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorCheck.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cloudmonitoring::CloudMonitorCheck.__init__()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorAlarm.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cloudmonitoring::CloudMonitorAlarm.__init__()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cloudmonitoring::CloudMonitorClient.__init__()'],['../classpyrax_1_1cloudnetworks_1_1CloudNetworkClient.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cloudnetworks::CloudNetworkClient.__init__()'],['../classpyrax_1_1exceptions_1_1AmbiguousEndpoints.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::ex
ceptions::AmbiguousEndpoints.__init__()'],['../classpyrax_1_1exceptions_1_1ClientException.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::exceptions::ClientException.__init__()'],['../classpyrax_1_1manager_1_1BaseManager.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::manager::BaseManager.__init__()'],['../classpyrax_1_1resource_1_1BaseResource.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::resource::BaseResource.__init__()'],['../classpyrax_1_1service__catalog_1_1ServiceCatalog.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::service_catalog::ServiceCatalog.__init__()'],['../classpyrax_1_1utils_1_1__WaitThread.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::utils::_WaitThread.__init__()']]],
+ ['_5f_5finit_5f_5f',['__init__',['../classpyrax_1_1autoscale_1_1ScalingGroup.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::autoscale::ScalingGroup.__init__()'],['../classpyrax_1_1autoscale_1_1ScalingGroupManager.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::autoscale::ScalingGroupManager.__init__()'],['../classpyrax_1_1autoscale_1_1AutoScalePolicy.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::autoscale::AutoScalePolicy.__init__()'],['../classpyrax_1_1autoscale_1_1AutoScaleWebhook.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::autoscale::AutoScaleWebhook.__init__()'],['../classpyrax_1_1base__identity_1_1BaseAuth.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::base_identity::BaseAuth.__init__()'],['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cf_wrapper::client::CFClient.__init__()'],['../classpyrax_1_1cf__wrapper_1_1client_1_1Connection.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cf_wrapper::client::Connection.__init__()'],['../classpyrax_1_1cf__wrapper_1_1client_1_1FolderUploader.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cf_wrapper::client::FolderUploader.__init__()'],['../classpyrax_1_1cf__wrapper_1_1client_1_1BulkDeleter.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cf_wrapper::client::BulkDeleter.__init__()'],['../classpyrax_1_1cf__wrapper_1_1container_1_1Container.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cf_wrapper::container::Container.__init__()'],['../classpyrax_1_1cf__wrapper_1_1storage__object_1_1StorageObject.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cf_wrapper::storage_object::StorageObject.__init__()'],['../classpyrax_1_1client_1_1BaseClient.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::client::BaseClient.__init__()'],['../classpyrax_1_1cloudblockstorage_1_1CloudBlockStorageVolume.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cloudblockstorage::CloudBlockStorageVolume.__init__()'],['../classpyrax_1_1clouddatabases_1_1CloudDatabaseVolume.h
tml#ac775ee34451fdfa742b318538164070e',1,'pyrax::clouddatabases::CloudDatabaseVolume.__init__()'],['../classpyrax_1_1clouddatabases_1_1CloudDatabaseInstance.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::clouddatabases::CloudDatabaseInstance.__init__()'],['../classpyrax_1_1clouddns_1_1CloudDNSPTRRecord.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::clouddns::CloudDNSPTRRecord.__init__()'],['../classpyrax_1_1clouddns_1_1CloudDNSManager.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::clouddns::CloudDNSManager.__init__()'],['../classpyrax_1_1clouddns_1_1ResultsIterator.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::clouddns::ResultsIterator.__init__()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancer.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cloudloadbalancers::CloudLoadBalancer.__init__()'],['../classpyrax_1_1cloudloadbalancers_1_1Node.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cloudloadbalancers::Node.__init__()'],['../classpyrax_1_1cloudloadbalancers_1_1VirtualIP.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cloudloadbalancers::VirtualIP.__init__()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.__init__()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorCheck.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cloudmonitoring::CloudMonitorCheck.__init__()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorAlarm.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cloudmonitoring::CloudMonitorAlarm.__init__()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cloudmonitoring::CloudMonitorClient.__init__()'],['../classpyrax_1_1cloudnetworks_1_1CloudNetworkClient.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cloudnetworks::CloudNetworkClient.__init__()'],['../classpyrax_1_1exceptions_1_1AmbiguousEndpoints.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::ex
ceptions::AmbiguousEndpoints.__init__()'],['../classpyrax_1_1exceptions_1_1ClientException.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::exceptions::ClientException.__init__()'],['../classpyrax_1_1manager_1_1BaseManager.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::manager::BaseManager.__init__()'],['../classpyrax_1_1queueing_1_1Queue.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::queueing::Queue.__init__()'],['../classpyrax_1_1resource_1_1BaseResource.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::resource::BaseResource.__init__()'],['../classpyrax_1_1service__catalog_1_1ServiceCatalog.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::service_catalog::ServiceCatalog.__init__()'],['../classpyrax_1_1utils_1_1__WaitThread.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::utils::_WaitThread.__init__()']]],
['_5f_5finit_5f_5f_2epy',['__init__.py',['../____init_____8py.html',1,'']]],
['_5f_5finit_5f_5f_2epy',['__init__.py',['../identity_2____init_____8py.html',1,'']]],
['_5f_5finit_5f_5f_2epy',['__init__.py',['../cf__wrapper_2____init_____8py.html',1,'']]],
diff --git a/docs/html/search/all_61.js b/docs/html/search/all_61.js
index 8c770391..820984dc 100644
--- a/docs/html/search/all_61.js
+++ b/docs/html/search/all_61.js
@@ -17,6 +17,7 @@ var searchData=
['add_5fvirtualip',['add_virtualip',['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancer.html#ab264d37d545101c96e50d1e2924724cc',1,'pyrax::cloudloadbalancers::CloudLoadBalancer.add_virtualip()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerManager.html#ab264d37d545101c96e50d1e2924724cc',1,'pyrax::cloudloadbalancers::CloudLoadBalancerManager.add_virtualip()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#ab264d37d545101c96e50d1e2924724cc',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.add_virtualip()']]],
['add_5fwebhook',['add_webhook',['../classpyrax_1_1autoscale_1_1ScalingGroup.html#ab26891c9d3c749645f264b371784898a',1,'pyrax::autoscale::ScalingGroup.add_webhook()'],['../classpyrax_1_1autoscale_1_1ScalingGroupManager.html#ab26891c9d3c749645f264b371784898a',1,'pyrax::autoscale::ScalingGroupManager.add_webhook()'],['../classpyrax_1_1autoscale_1_1AutoScalePolicy.html#ab26891c9d3c749645f264b371784898a',1,'pyrax::autoscale::AutoScalePolicy.add_webhook()'],['../classpyrax_1_1autoscale_1_1AutoScaleClient.html#ab26891c9d3c749645f264b371784898a',1,'pyrax::autoscale::AutoScaleClient.add_webhook()']]],
['address',['address',['../classpyrax_1_1cloudloadbalancers_1_1Node.html#ade5a18d52133ef21f211020ceb464c07',1,'pyrax::cloudloadbalancers::Node::address()'],['../classpyrax_1_1cloudloadbalancers_1_1VirtualIP.html#ade5a18d52133ef21f211020ceb464c07',1,'pyrax::cloudloadbalancers::VirtualIP.address()']]],
+ ['age',['age',['../classpyrax_1_1queueing_1_1QueueMessage.html#a043a7693e1d2d30988a0f821e0ab5f94',1,'pyrax::queueing::QueueMessage']]],
['algorithms',['algorithms',['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#aed666a925114f957f496874543d3f2a7',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient']]],
['all',['all',['../namespacepyrax_1_1manager.html#acedac857b1708c80eefe0a6c379bedec',1,'pyrax::manager']]],
['allowed_5fdomains',['allowed_domains',['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#a0a5c9a644fa7ae4b7a662ab33f00ed9c',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient']]],
@@ -27,6 +28,7 @@ var searchData=
['assure_5finstance',['assure_instance',['../namespacepyrax_1_1clouddatabases.html#a8f8b217553a66a79f0135899e14964b4',1,'pyrax::clouddatabases']]],
['assure_5floadbalancer',['assure_loadbalancer',['../namespacepyrax_1_1cloudloadbalancers.html#ac1fa7d34dba3d7cf4a26fdab6afe2d22',1,'pyrax::cloudloadbalancers']]],
['assure_5fparent',['assure_parent',['../namespacepyrax_1_1cloudloadbalancers.html#ae8088c141e57bdabed8a13866eed8cdf',1,'pyrax::cloudloadbalancers']]],
+ ['assure_5fqueue',['assure_queue',['../namespacepyrax_1_1queueing.html#acc3d3bba9230fe3dd67e9f470b0b445a',1,'pyrax::queueing']]],
['assure_5fsnapshot',['assure_snapshot',['../namespacepyrax_1_1cloudblockstorage.html#a8a930b1066981115404f992adb243aa2',1,'pyrax::cloudblockstorage']]],
['assure_5fvolume',['assure_volume',['../namespacepyrax_1_1cloudblockstorage.html#a68a6bb754146fc1cdfe07a97456d71e9',1,'pyrax::cloudblockstorage']]],
['att',['att',['../classpyrax_1_1utils_1_1__WaitThread.html#ac356deedcb6c8bb875aaedf10db0a455',1,'pyrax::utils::_WaitThread']]],
diff --git a/docs/html/search/all_62.js b/docs/html/search/all_62.js
index e51eef60..7148c29f 100644
--- a/docs/html/search/all_62.js
+++ b/docs/html/search/all_62.js
@@ -6,7 +6,9 @@ var searchData=
['baseauth',['BaseAuth',['../classpyrax_1_1base__identity_1_1BaseAuth.html',1,'pyrax::base_identity']]],
['baseclient',['BaseClient',['../classpyrax_1_1client_1_1BaseClient.html',1,'pyrax::client']]],
['basemanager',['BaseManager',['../classpyrax_1_1manager_1_1BaseManager.html',1,'pyrax::manager']]],
+ ['basequeuemanager',['BaseQueueManager',['../classpyrax_1_1queueing_1_1BaseQueueManager.html',1,'pyrax::queueing']]],
['baseresource',['BaseResource',['../classpyrax_1_1resource_1_1BaseResource.html',1,'pyrax::resource']]],
+ ['body',['body',['../classpyrax_1_1queueing_1_1QueueMessage.html#a14d48c2e9f05d0b03044eb45f308fcb0',1,'pyrax::queueing::QueueMessage']]],
['bulk_5fdelete',['bulk_delete',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#a31f701453fcf7a01b2af1e67d4d9a55b',1,'pyrax::cf_wrapper::client::CFClient']]],
['bulk_5fdelete_5finterval',['bulk_delete_interval',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#accd7312117a68637f8a9ad3f2b6968e6',1,'pyrax::cf_wrapper::client::CFClient']]],
['bulkdeleter',['BulkDeleter',['../classpyrax_1_1cf__wrapper_1_1client_1_1BulkDeleter.html',1,'pyrax::cf_wrapper::client']]]
diff --git a/docs/html/search/all_63.js b/docs/html/search/all_63.js
index d9c246ae..fd936ab6 100644
--- a/docs/html/search/all_63.js
+++ b/docs/html/search/all_63.js
@@ -3,6 +3,7 @@ var searchData=
['callback',['callback',['../classpyrax_1_1utils_1_1__WaitThread.html#adf568d8baca0701772280d0011e68a72',1,'pyrax::utils::_WaitThread']]],
['callstack',['callstack',['../namespacepyrax.html#ae78f359d64f9eeed6dc0df4a1102dca8',1,'pyrax']]],
['cancel_5ffolder_5fupload',['cancel_folder_upload',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#a621a0e61fdb17290b5e9f5dec5eb12fc',1,'pyrax::cf_wrapper::client::CFClient']]],
+ ['case_5finsensitive_5fupdate',['case_insensitive_update',['../namespacepyrax_1_1utils.html#af3249b7bd46bd8a85fda01ce6a90304e',1,'pyrax::utils']]],
['catalog',['catalog',['../classpyrax_1_1service__catalog_1_1ServiceCatalog.html#a6be3d84fa45e3612e5d22f1015b288af',1,'pyrax::service_catalog::ServiceCatalog']]],
['cdn_5fconnection',['cdn_connection',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#a9ebd30d3926b0fe246ca66e564caf148',1,'pyrax::cf_wrapper::client::CFClient.cdn_connection()'],['../classpyrax_1_1cf__wrapper_1_1client_1_1Connection.html#a9ebd30d3926b0fe246ca66e564caf148',1,'pyrax::cf_wrapper::client::Connection.cdn_connection()']]],
['cdn_5fenabled',['cdn_enabled',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#a15b27a248ed9133d6a84543a420fe2f9',1,'pyrax::cf_wrapper::client::CFClient.cdn_enabled()'],['../classpyrax_1_1cf__wrapper_1_1container_1_1Container.html#a2335e6e10f37188b2c023f9c5575c299',1,'pyrax::cf_wrapper::container::Container.cdn_enabled()']]],
@@ -25,12 +26,16 @@ var searchData=
['changes_5fsince',['changes_since',['../classpyrax_1_1clouddns_1_1CloudDNSDomain.html#ac31a3a61b9555672f6d51ae74ff22999',1,'pyrax::clouddns::CloudDNSDomain.changes_since()'],['../classpyrax_1_1clouddns_1_1CloudDNSManager.html#ac31a3a61b9555672f6d51ae74ff22999',1,'pyrax::clouddns::CloudDNSManager.changes_since()'],['../classpyrax_1_1clouddns_1_1CloudDNSClient.html#ac31a3a61b9555672f6d51ae74ff22999',1,'pyrax::clouddns::CloudDNSClient.changes_since()']]],
['check_5ftoken',['check_token',['../classpyrax_1_1base__identity_1_1BaseAuth.html#a89a6ef6961d772a79cba6292b7edf926',1,'pyrax::base_identity::BaseAuth']]],
['cidr',['cidr',['../classpyrax_1_1cloudnetworks_1_1CloudNetwork.html#a22262d03c6a363b1f41f4dffd3b1c797',1,'pyrax::cloudnetworks::CloudNetwork']]],
+ ['claim',['claim',['../classpyrax_1_1queueing_1_1QueueClaimManager.html#a3135f2c41e736cc621dd7943e9d34189',1,'pyrax::queueing::QueueClaimManager']]],
+ ['claim_5fid',['claim_id',['../classpyrax_1_1queueing_1_1QueueMessage.html#a4fba796dda2883012b75419f84e148ee',1,'pyrax::queueing::QueueMessage']]],
+ ['claim_5fmessages',['claim_messages',['../classpyrax_1_1queueing_1_1Queue.html#a4e661325a97751869d0b9b024bb73d20',1,'pyrax::queueing::Queue.claim_messages()'],['../classpyrax_1_1queueing_1_1QueueClient.html#a4e661325a97751869d0b9b024bb73d20',1,'pyrax::queueing::QueueClient.claim_messages()']]],
['classifiers',['classifiers',['../namespacesetup.html#a501bfc1867c9d0b5d91873982919a191',1,'setup']]],
['clear_5fcredentials',['clear_credentials',['../namespacepyrax.html#ac84933adaea04f7479d32c6a5cf6e028',1,'pyrax']]],
['clear_5ferror_5fpage',['clear_error_page',['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancer.html#aff0ccd9424518b3db68c4fbc3c99a710',1,'pyrax::cloudloadbalancers::CloudLoadBalancer.clear_error_page()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerManager.html#aff0ccd9424518b3db68c4fbc3c99a710',1,'pyrax::cloudloadbalancers::CloudLoadBalancerManager.clear_error_page()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#aff0ccd9424518b3db68c4fbc3c99a710',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.clear_error_page()']]],
['client',['client',['../classpyrax_1_1cf__wrapper_1_1client_1_1FolderUploader.html#ad5bc32b75da65fe60067f501a4bb6665',1,'pyrax::cf_wrapper::client::FolderUploader.client()'],['../classpyrax_1_1cf__wrapper_1_1client_1_1BulkDeleter.html#ad5bc32b75da65fe60067f501a4bb6665',1,'pyrax::cf_wrapper::client::BulkDeleter::client()'],['../classpyrax_1_1cf__wrapper_1_1container_1_1Container.html#ad5bc32b75da65fe60067f501a4bb6665',1,'pyrax::cf_wrapper::container::Container::client()'],['../classpyrax_1_1cf__wrapper_1_1storage__object_1_1StorageObject.html#ad5bc32b75da65fe60067f501a4bb6665',1,'pyrax::cf_wrapper::storage_object::StorageObject::client()']]],
['client_2epy',['client.py',['../cf__wrapper_2client_8py.html',1,'']]],
['client_2epy',['client.py',['../client_8py.html',1,'']]],
+ ['client_5fid',['client_id',['../classpyrax_1_1queueing_1_1QueueClient.html#a3880622ca383fee22fbbac18442bae32',1,'pyrax::queueing::QueueClient']]],
['clientexception',['ClientException',['../classpyrax_1_1exceptions_1_1ClientException.html',1,'pyrax::exceptions']]],
['cloud_5fblockstorage',['cloud_blockstorage',['../namespacepyrax.html#a7f4dc3b1da79f21103723f78b910c8a5',1,'pyrax']]],
['cloud_5fdatabases',['cloud_databases',['../namespacepyrax.html#af1a86dab674b703fc06491e66aacadb6',1,'pyrax']]],
@@ -99,9 +104,10 @@ var searchData=
['connect_5fto_5fcloud_5fnetworks',['connect_to_cloud_networks',['../namespacepyrax.html#af30f9f18e048c8f0e677808a1028d29a',1,'pyrax']]],
['connect_5fto_5fcloudfiles',['connect_to_cloudfiles',['../namespacepyrax.html#a34593b67ad113f95973c1c7a6546fa68',1,'pyrax']]],
['connect_5fto_5fcloudservers',['connect_to_cloudservers',['../namespacepyrax.html#a93dcb702dfed414cb32073e78fdff831',1,'pyrax']]],
+ ['connect_5fto_5fqueues',['connect_to_queues',['../namespacepyrax.html#ac8d659180e8fac04349063827198b196',1,'pyrax']]],
['connect_5fto_5fservices',['connect_to_services',['../namespacepyrax.html#a708483dfb93616381fb0ec9338ab5528',1,'pyrax']]],
- ['connection',['connection',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#a10275a078bd1abcbebc206cc5d19e18b',1,'pyrax::cf_wrapper::client::CFClient']]],
['connection',['Connection',['../classpyrax_1_1cf__wrapper_1_1client_1_1Connection.html',1,'pyrax::cf_wrapper::client']]],
+ ['connection',['connection',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#a10275a078bd1abcbebc206cc5d19e18b',1,'pyrax::cf_wrapper::client::CFClient']]],
['connection_5flogging',['connection_logging',['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancer.html#a863329ee2898f7d6212281bab325e419',1,'pyrax::cloudloadbalancers::CloudLoadBalancer']]],
['connection_5fretries',['CONNECTION_RETRIES',['../namespacepyrax_1_1cf__wrapper_1_1client.html#a19074fb0e7d33e5cb6f2fc49877e64c1',1,'pyrax::cf_wrapper::client']]],
['connection_5ftimeout',['CONNECTION_TIMEOUT',['../namespacepyrax_1_1cf__wrapper_1_1client.html#a32d68802775b5e101a259fb6f3edf5d7',1,'pyrax::cf_wrapper::client']]],
@@ -114,7 +120,7 @@ var searchData=
['cooldown',['cooldown',['../classpyrax_1_1autoscale_1_1ScalingGroup.html#a2268f26f552b5fffb9a0cb5fca09048b',1,'pyrax::autoscale::ScalingGroup.cooldown'],['../classpyrax_1_1autoscale_1_1ScalingGroup.html#a2268f26f552b5fffb9a0cb5fca09048b',1,'pyrax::autoscale::ScalingGroup.cooldown']]],
['copy',['copy',['../classpyrax_1_1cf__wrapper_1_1storage__object_1_1StorageObject.html#a2fa43c22b5f7af93ba8b4a56871f006a',1,'pyrax::cf_wrapper::storage_object::StorageObject']]],
['copy_5fobject',['copy_object',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#a7d45157fae0af819d907da6b00fa2378',1,'pyrax::cf_wrapper::client::CFClient.copy_object()'],['../classpyrax_1_1cf__wrapper_1_1container_1_1Container.html#a7d45157fae0af819d907da6b00fa2378',1,'pyrax::cf_wrapper::container::Container.copy_object()']]],
- ['create',['create',['../classpyrax_1_1client_1_1BaseClient.html#a5b7ef0221e471e99fa7f0a87a28ba1ea',1,'pyrax::client::BaseClient.create()'],['../classpyrax_1_1cloudblockstorage_1_1CloudBlockStorageSnapshotManager.html#a5b7ef0221e471e99fa7f0a87a28ba1ea',1,'pyrax::cloudblockstorage::CloudBlockStorageSnapshotManager.create()'],['../classpyrax_1_1cloudblockstorage_1_1CloudBlockStorageClient.html#a5b7ef0221e471e99fa7f0a87a28ba1ea',1,'pyrax::cloudblockstorage::CloudBlockStorageClient.create()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorNotificationManager.html#a5b7ef0221e471e99fa7f0a87a28ba1ea',1,'pyrax::cloudmonitoring::CloudMonitorNotificationManager.create()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorNotificationPlanManager.html#a5b7ef0221e471e99fa7f0a87a28ba1ea',1,'pyrax::cloudmonitoring::CloudMonitorNotificationPlanManager.create()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#a5b7ef0221e471e99fa7f0a87a28ba1ea',1,'pyrax::cloudmonitoring::CloudMonitorClient.create()'],['../classpyrax_1_1cloudnetworks_1_1CloudNetworkClient.html#a5b7ef0221e471e99fa7f0a87a28ba1ea',1,'pyrax::cloudnetworks::CloudNetworkClient.create()'],['../classpyrax_1_1manager_1_1BaseManager.html#a5b7ef0221e471e99fa7f0a87a28ba1ea',1,'pyrax::manager::BaseManager.create()']]],
+ ['create',['create',['../classpyrax_1_1client_1_1BaseClient.html#a5b7ef0221e471e99fa7f0a87a28ba1ea',1,'pyrax::client::BaseClient.create()'],['../classpyrax_1_1cloudblockstorage_1_1CloudBlockStorageSnapshotManager.html#a5b7ef0221e471e99fa7f0a87a28ba1ea',1,'pyrax::cloudblockstorage::CloudBlockStorageSnapshotManager.create()'],['../classpyrax_1_1cloudblockstorage_1_1CloudBlockStorageClient.html#a5b7ef0221e471e99fa7f0a87a28ba1ea',1,'pyrax::cloudblockstorage::CloudBlockStorageClient.create()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorNotificationManager.html#a5b7ef0221e471e99fa7f0a87a28ba1ea',1,'pyrax::cloudmonitoring::CloudMonitorNotificationManager.create()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorNotificationPlanManager.html#a5b7ef0221e471e99fa7f0a87a28ba1ea',1,'pyrax::cloudmonitoring::CloudMonitorNotificationPlanManager.create()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#a5b7ef0221e471e99fa7f0a87a28ba1ea',1,'pyrax::cloudmonitoring::CloudMonitorClient.create()'],['../classpyrax_1_1cloudnetworks_1_1CloudNetworkClient.html#a5b7ef0221e471e99fa7f0a87a28ba1ea',1,'pyrax::cloudnetworks::CloudNetworkClient.create()'],['../classpyrax_1_1manager_1_1BaseManager.html#a5b7ef0221e471e99fa7f0a87a28ba1ea',1,'pyrax::manager::BaseManager.create()'],['../classpyrax_1_1queueing_1_1QueueManager.html#a5b7ef0221e471e99fa7f0a87a28ba1ea',1,'pyrax::queueing::QueueManager.create()'],['../classpyrax_1_1queueing_1_1QueueClient.html#a5b7ef0221e471e99fa7f0a87a28ba1ea',1,'pyrax::queueing::QueueClient.create()']]],
['create_5falarm',['create_alarm',['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntity.html#a5e3e83b1715922337eb99e95373166e6',1,'pyrax::cloudmonitoring::CloudMonitorEntity.create_alarm()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntityManager.html#a5e3e83b1715922337eb99e95373166e6',1,'pyrax::cloudmonitoring::CloudMonitorEntityManager.create_alarm()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorCheck.html#a5e3e83b1715922337eb99e95373166e6',1,'pyrax::cloudmonitoring::CloudMonitorCheck.create_alarm()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#a5e3e83b1715922337eb99e95373166e6',1,'pyrax::cloudmonitoring::CloudMonitorClient.create_alarm()']]],
['create_5fcheck',['create_check',['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntityManager.html#a6fc873d6c66c1173dc63793fd2cc72d6',1,'pyrax::cloudmonitoring::CloudMonitorEntityManager.create_check()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#a6fc873d6c66c1173dc63793fd2cc72d6',1,'pyrax::cloudmonitoring::CloudMonitorClient.create_check()']]],
['create_5fcontainer',['create_container',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#a31b9196903b253dd2cd99dc4d7a0774e',1,'pyrax::cf_wrapper::client::CFClient']]],
diff --git a/docs/html/search/all_64.js b/docs/html/search/all_64.js
index 2e973192..4660d4c6 100644
--- a/docs/html/search/all_64.js
+++ b/docs/html/search/all_64.js
@@ -2,6 +2,7 @@ var searchData=
[
['data',['data',['../classpyrax_1_1clouddns_1_1CloudDNSRecord.html#a511ae0b1c13f95e5f08f1a0dd3da3d93',1,'pyrax::clouddns::CloudDNSRecord.data()'],['../classpyrax_1_1clouddns_1_1CloudDNSPTRRecord.html#a511ae0b1c13f95e5f08f1a0dd3da3d93',1,'pyrax::clouddns::CloudDNSPTRRecord.data()']]],
['date_5fformat',['DATE_FORMAT',['../namespacepyrax_1_1base__identity.html#a71e8d76a94f92b1959b2ea603c78df9e',1,'pyrax::base_identity.DATE_FORMAT()'],['../namespacepyrax_1_1cf__wrapper_1_1client.html#a71e8d76a94f92b1959b2ea603c78df9e',1,'pyrax::cf_wrapper::client.DATE_FORMAT()']]],
+ ['days_5f14',['DAYS_14',['../namespacepyrax_1_1queueing.html#af085154866e1b5a067335fc640e279bd',1,'pyrax::queueing']]],
['debug',['debug',['../namespacepyrax.html#a4c919e19877c5868fcd9f7662c236649',1,'pyrax']]],
['default_5fcdn_5fttl',['default_cdn_ttl',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#a15a3ad1a3abde8c5a3fede95c2b24c19',1,'pyrax::cf_wrapper::client::CFClient']]],
['default_5fdelay',['DEFAULT_DELAY',['../namespacepyrax_1_1clouddns.html#a0695d4ce7bb0b1de03ba3068cde8d89a',1,'pyrax::clouddns']]],
@@ -14,6 +15,7 @@ var searchData=
['delete_5falarm',['delete_alarm',['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntity.html#aa6f9a8547a22941467c661e1c3b83178',1,'pyrax::cloudmonitoring::CloudMonitorEntity.delete_alarm()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntityManager.html#aa6f9a8547a22941467c661e1c3b83178',1,'pyrax::cloudmonitoring::CloudMonitorEntityManager.delete_alarm()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#aa6f9a8547a22941467c661e1c3b83178',1,'pyrax::cloudmonitoring::CloudMonitorClient.delete_alarm()']]],
['delete_5fall_5fobjects',['delete_all_objects',['../classpyrax_1_1cf__wrapper_1_1container_1_1Container.html#a29c7d5f06a4fdc49b064a4352e8c8763',1,'pyrax::cf_wrapper::container::Container']]],
['delete_5fall_5fsnapshots',['delete_all_snapshots',['../classpyrax_1_1cloudblockstorage_1_1CloudBlockStorageVolume.html#aec8a47767c07e19ef3ad9f6b974240bb',1,'pyrax::cloudblockstorage::CloudBlockStorageVolume']]],
+ ['delete_5fby_5fids',['delete_by_ids',['../classpyrax_1_1queueing_1_1Queue.html#ab3f3c42b68d285259934117c1a8f65e5',1,'pyrax::queueing::Queue.delete_by_ids()'],['../classpyrax_1_1queueing_1_1QueueMessageManager.html#ab3f3c42b68d285259934117c1a8f65e5',1,'pyrax::queueing::QueueMessageManager.delete_by_ids()']]],
['delete_5fcheck',['delete_check',['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntity.html#a4efaacdc5baa3b0cf23478e90a16a5b9',1,'pyrax::cloudmonitoring::CloudMonitorEntity.delete_check()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntityManager.html#a4efaacdc5baa3b0cf23478e90a16a5b9',1,'pyrax::cloudmonitoring::CloudMonitorEntityManager.delete_check()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#a4efaacdc5baa3b0cf23478e90a16a5b9',1,'pyrax::cloudmonitoring::CloudMonitorClient.delete_check()']]],
['delete_5fconnection_5fthrottle',['delete_connection_throttle',['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancer.html#ae08266e5e13ddc6e99a972671e132d51',1,'pyrax::cloudloadbalancers::CloudLoadBalancer.delete_connection_throttle()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerManager.html#ae08266e5e13ddc6e99a972671e132d51',1,'pyrax::cloudloadbalancers::CloudLoadBalancerManager.delete_connection_throttle()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#ae08266e5e13ddc6e99a972671e132d51',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.delete_connection_throttle()']]],
['delete_5fcontainer',['delete_container',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#ac314b663a2e0552b6403f04558be2735',1,'pyrax::cf_wrapper::client::CFClient']]],
@@ -21,6 +23,8 @@ var searchData=
['delete_5fentity',['delete_entity',['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#a4f0f541b858158a007e2eca4daa25445',1,'pyrax::cloudmonitoring::CloudMonitorClient']]],
['delete_5fhealth_5fmonitor',['delete_health_monitor',['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancer.html#a856597bc9c0b9484f62e3d7ae78c3099',1,'pyrax::cloudloadbalancers::CloudLoadBalancer.delete_health_monitor()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerManager.html#a856597bc9c0b9484f62e3d7ae78c3099',1,'pyrax::cloudloadbalancers::CloudLoadBalancerManager.delete_health_monitor()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#a856597bc9c0b9484f62e3d7ae78c3099',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.delete_health_monitor()']]],
['delete_5fin_5fseconds',['delete_in_seconds',['../classpyrax_1_1cf__wrapper_1_1storage__object_1_1StorageObject.html#a4c205bc29b6fdece30a5bf28e85dfd3f',1,'pyrax::cf_wrapper::storage_object::StorageObject']]],
+ ['delete_5fmessage',['delete_message',['../classpyrax_1_1queueing_1_1Queue.html#acff825cd612230881431faa0b8f1ba64',1,'pyrax::queueing::Queue.delete_message()'],['../classpyrax_1_1queueing_1_1QueueClient.html#acff825cd612230881431faa0b8f1ba64',1,'pyrax::queueing::QueueClient.delete_message()']]],
+ ['delete_5fmessages_5fby_5fids',['delete_messages_by_ids',['../classpyrax_1_1queueing_1_1QueueClient.html#abaed1d07238dbd332c4609a83ea8dbff',1,'pyrax::queueing::QueueClient']]],
['delete_5fmetadata',['delete_metadata',['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancer.html#ab4b26663be70bc5c9a8d3bf3b473b328',1,'pyrax::cloudloadbalancers::CloudLoadBalancer.delete_metadata()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerManager.html#ab4b26663be70bc5c9a8d3bf3b473b328',1,'pyrax::cloudloadbalancers::CloudLoadBalancerManager.delete_metadata()'],['../classpyrax_1_1cloudloadbalancers_1_1Node.html#ab4b26663be70bc5c9a8d3bf3b473b328',1,'pyrax::cloudloadbalancers::Node.delete_metadata()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#ab4b26663be70bc5c9a8d3bf3b473b328',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.delete_metadata()']]],
['delete_5fmetadata_5ffor_5fnode',['delete_metadata_for_node',['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancer.html#a38317481667772b2f3483f5360bbb08f',1,'pyrax::cloudloadbalancers::CloudLoadBalancer.delete_metadata_for_node()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#a38317481667772b2f3483f5360bbb08f',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.delete_metadata_for_node()']]],
['delete_5fnode',['delete_node',['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancer.html#a616947868beb4492f797b21aeb320d90',1,'pyrax::cloudloadbalancers::CloudLoadBalancer.delete_node()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerManager.html#a616947868beb4492f797b21aeb320d90',1,'pyrax::cloudloadbalancers::CloudLoadBalancerManager.delete_node()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#a616947868beb4492f797b21aeb320d90',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.delete_node()']]],
@@ -61,5 +65,6 @@ var searchData=
['domainupdatefailed',['DomainUpdateFailed',['../classpyrax_1_1exceptions_1_1DomainUpdateFailed.html',1,'pyrax::exceptions']]],
['download',['download',['../classpyrax_1_1cf__wrapper_1_1storage__object_1_1StorageObject.html#a87785e5d44757bc4bd91c6694e751daa',1,'pyrax::cf_wrapper::storage_object::StorageObject']]],
['download_5fobject',['download_object',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#a4d536318080f1745a14922224b2a28b5',1,'pyrax::cf_wrapper::client::CFClient.download_object()'],['../classpyrax_1_1cf__wrapper_1_1container_1_1Container.html#a4d536318080f1745a14922224b2a28b5',1,'pyrax::cf_wrapper::container::Container.download_object()']]],
+ ['duplicatequeue',['DuplicateQueue',['../classpyrax_1_1exceptions_1_1DuplicateQueue.html',1,'pyrax::exceptions']]],
['duplicateuser',['DuplicateUser',['../classpyrax_1_1exceptions_1_1DuplicateUser.html',1,'pyrax::exceptions']]]
];
diff --git a/docs/html/search/all_66.js b/docs/html/search/all_66.js
index 74c1c491..bacde74d 100644
--- a/docs/html/search/all_66.js
+++ b/docs/html/search/all_66.js
@@ -4,6 +4,7 @@ var searchData=
['fault',['FAULT',['../namespacepyrax_1_1cf__wrapper_1_1container.html#a892f51831156b6ea326c363e4b10631a',1,'pyrax::cf_wrapper::container']]],
['fetch',['fetch',['../classpyrax_1_1cf__wrapper_1_1storage__object_1_1StorageObject.html#a271dcd2cab08dc966228cd3d12e7cfb7',1,'pyrax::cf_wrapper::storage_object::StorageObject']]],
['fetch_5fobject',['fetch_object',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#a1c274968ef395c2b88a4d67c24f30f8e',1,'pyrax::cf_wrapper::client::CFClient.fetch_object()'],['../classpyrax_1_1cf__wrapper_1_1container_1_1Container.html#a1c274968ef395c2b88a4d67c24f30f8e',1,'pyrax::cf_wrapper::container::Container.fetch_object()']]],
+ ['fetch_5fpartial',['fetch_partial',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#ad1e7a72f5953d9a66e17bbe53b38fdc2',1,'pyrax::cf_wrapper::client::CFClient']]],
['field_5fnames',['field_names',['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorCheckType.html#a0b9c1a726225d02b85a31d2fc7508352',1,'pyrax::cloudmonitoring::CloudMonitorCheckType']]],
['filenotfound',['FileNotFound',['../classpyrax_1_1exceptions_1_1FileNotFound.html',1,'pyrax::exceptions']]],
['find',['find',['../classpyrax_1_1client_1_1BaseClient.html#a01f90f57b7acd55e177611f5d0f7df23',1,'pyrax::client::BaseClient.find()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#a01f90f57b7acd55e177611f5d0f7df23',1,'pyrax::cloudmonitoring::CloudMonitorClient.find()'],['../classpyrax_1_1manager_1_1BaseManager.html#a01f90f57b7acd55e177611f5d0f7df23',1,'pyrax::manager::BaseManager.find()']]],
diff --git a/docs/html/search/all_67.js b/docs/html/search/all_67.js
index 8ddf9e7d..4431e314 100644
--- a/docs/html/search/all_67.js
+++ b/docs/html/search/all_67.js
@@ -1,6 +1,6 @@
var searchData=
[
- ['get',['get',['../classpyrax_1_1Settings.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::Settings.get()'],['../classpyrax_1_1autoscale_1_1AutoScalePolicy.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::autoscale::AutoScalePolicy.get()'],['../classpyrax_1_1autoscale_1_1AutoScaleWebhook.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::autoscale::AutoScaleWebhook.get()'],['../classpyrax_1_1cf__wrapper_1_1storage__object_1_1StorageObject.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::cf_wrapper::storage_object::StorageObject.get()'],['../classpyrax_1_1client_1_1BaseClient.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::client::BaseClient.get()'],['../classpyrax_1_1clouddatabases_1_1CloudDatabaseVolume.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::clouddatabases::CloudDatabaseVolume.get()'],['../classpyrax_1_1clouddatabases_1_1CloudDatabaseManager.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::clouddatabases::CloudDatabaseManager.get()'],['../classpyrax_1_1clouddatabases_1_1CloudDatabaseInstance.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::clouddatabases::CloudDatabaseInstance.get()'],['../classpyrax_1_1clouddns_1_1CloudDNSRecord.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::clouddns::CloudDNSRecord.get()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorCheck.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::cloudmonitoring::CloudMonitorCheck.get()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorAlarm.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::cloudmonitoring::CloudMonitorAlarm.get()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::cloudmonitoring::CloudMonitorClient.get()'],['../classpyrax_1_1cloudnetworks_1_1CloudNetwork.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::cloudnetworks::CloudNetwork.get()'],['../classpyrax_1_1manager_1_1BaseManager.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::manager::BaseManager.get()'],['../classpyrax_1_1resource_1_1BaseResource.
html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::resource::BaseResource.get()']]],
+ ['get',['get',['../classpyrax_1_1Settings.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::Settings.get()'],['../classpyrax_1_1autoscale_1_1AutoScalePolicy.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::autoscale::AutoScalePolicy.get()'],['../classpyrax_1_1autoscale_1_1AutoScaleWebhook.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::autoscale::AutoScaleWebhook.get()'],['../classpyrax_1_1cf__wrapper_1_1storage__object_1_1StorageObject.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::cf_wrapper::storage_object::StorageObject.get()'],['../classpyrax_1_1client_1_1BaseClient.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::client::BaseClient.get()'],['../classpyrax_1_1clouddatabases_1_1CloudDatabaseVolume.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::clouddatabases::CloudDatabaseVolume.get()'],['../classpyrax_1_1clouddatabases_1_1CloudDatabaseManager.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::clouddatabases::CloudDatabaseManager.get()'],['../classpyrax_1_1clouddatabases_1_1CloudDatabaseInstance.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::clouddatabases::CloudDatabaseInstance.get()'],['../classpyrax_1_1clouddns_1_1CloudDNSRecord.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::clouddns::CloudDNSRecord.get()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorCheck.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::cloudmonitoring::CloudMonitorCheck.get()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorAlarm.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::cloudmonitoring::CloudMonitorAlarm.get()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::cloudmonitoring::CloudMonitorClient.get()'],['../classpyrax_1_1cloudnetworks_1_1CloudNetwork.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::cloudnetworks::CloudNetwork.get()'],['../classpyrax_1_1manager_1_1BaseManager.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::manager::BaseManager.get()'],['../classpyrax_1_1queueing_1_1QueueManager.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::queueing::QueueManager.get()'],['../classpyrax_1_1resource_1_1BaseResource.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::resource::BaseResource.get()']]],
['get_5fabsolute_5flimits',['get_absolute_limits',['../classpyrax_1_1clouddns_1_1CloudDNSClient.html#af5584781384edbc4caa037fbc9749092',1,'pyrax::clouddns::CloudDNSClient']]],
['get_5faccess_5flist',['get_access_list',['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancer.html#a3aa1b2aa693e56eb9ff1535ca86c3c7c',1,'pyrax::cloudloadbalancers::CloudLoadBalancer.get_access_list()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerManager.html#a3aa1b2aa693e56eb9ff1535ca86c3c7c',1,'pyrax::cloudloadbalancers::CloudLoadBalancerManager.get_access_list()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#a3aa1b2aa693e56eb9ff1535ca86c3c7c',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.get_access_list()']]],
['get_5faccount',['get_account',['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#a4d751385e14b90deacafe71308ee04dc',1,'pyrax::cloudmonitoring::CloudMonitorClient']]],
@@ -11,6 +11,7 @@ var searchData=
['get_5fcheck',['get_check',['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntityManager.html#a663e1c9857cbc0bf083314e6404e976f',1,'pyrax::cloudmonitoring::CloudMonitorEntityManager.get_check()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#a663e1c9857cbc0bf083314e6404e976f',1,'pyrax::cloudmonitoring::CloudMonitorClient.get_check()']]],
['get_5fcheck_5ftype',['get_check_type',['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#a4ec84467f00cf7b6b321984347025d14',1,'pyrax::cloudmonitoring::CloudMonitorClient']]],
['get_5fchecksum',['get_checksum',['../namespacepyrax_1_1utils.html#a9e1881e14792f2c07dd799fd7b9d53d1',1,'pyrax::utils']]],
+ ['get_5fclaim',['get_claim',['../classpyrax_1_1queueing_1_1Queue.html#a015b0e04f4bfd93df60cdb650bed2186',1,'pyrax::queueing::Queue.get_claim()'],['../classpyrax_1_1queueing_1_1QueueClient.html#a015b0e04f4bfd93df60cdb650bed2186',1,'pyrax::queueing::QueueClient.get_claim()']]],
['get_5fconfiguration',['get_configuration',['../classpyrax_1_1autoscale_1_1ScalingGroup.html#a4765b6651625ae1a9e0b6170c0cfe447',1,'pyrax::autoscale::ScalingGroup.get_configuration()'],['../classpyrax_1_1autoscale_1_1ScalingGroupManager.html#a4765b6651625ae1a9e0b6170c0cfe447',1,'pyrax::autoscale::ScalingGroupManager.get_configuration()'],['../classpyrax_1_1autoscale_1_1AutoScaleClient.html#a4765b6651625ae1a9e0b6170c0cfe447',1,'pyrax::autoscale::AutoScaleClient.get_configuration()']]],
['get_5fconnection_5flogging',['get_connection_logging',['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerManager.html#ab259705d7fd79cee133de68e9b29846b',1,'pyrax::cloudloadbalancers::CloudLoadBalancerManager.get_connection_logging()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#ab259705d7fd79cee133de68e9b29846b',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.get_connection_logging()']]],
['get_5fconnection_5fthrottle',['get_connection_throttle',['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancer.html#a3caddfe1f948f22c3926b810485bd645',1,'pyrax::cloudloadbalancers::CloudLoadBalancer.get_connection_throttle()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerManager.html#a3caddfe1f948f22c3926b810485bd645',1,'pyrax::cloudloadbalancers::CloudLoadBalancerManager.get_connection_throttle()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#a3caddfe1f948f22c3926b810485bd645',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.get_connection_throttle()']]],
@@ -34,12 +35,14 @@ var searchData=
['get_5fextensions',['get_extensions',['../classpyrax_1_1base__identity_1_1BaseAuth.html#ab5a28d881fa99dd93c68bc1daf9e6710',1,'pyrax::base_identity::BaseAuth']]],
['get_5fflavor',['get_flavor',['../classpyrax_1_1clouddatabases_1_1CloudDatabaseClient.html#a5ef10110fd842db8ecae2798b08bdd5b',1,'pyrax::clouddatabases::CloudDatabaseClient']]],
['get_5fhealth_5fmonitor',['get_health_monitor',['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancer.html#a882537f0d9ea2f375dca9ff59170532d',1,'pyrax::cloudloadbalancers::CloudLoadBalancer.get_health_monitor()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerManager.html#a882537f0d9ea2f375dca9ff59170532d',1,'pyrax::cloudloadbalancers::CloudLoadBalancerManager.get_health_monitor()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#a882537f0d9ea2f375dca9ff59170532d',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.get_health_monitor()']]],
+ ['get_5fhome_5fdocument',['get_home_document',['../classpyrax_1_1queueing_1_1QueueClient.html#a97fc2eb61171be491fb16708763edf2c',1,'pyrax::queueing::QueueClient']]],
['get_5fhttp_5fdebug',['get_http_debug',['../namespacepyrax.html#a2766ee16854adf9575c29ac661f447fd',1,'pyrax']]],
['get_5fid',['get_id',['../namespacepyrax_1_1utils.html#a9cc7cce8ec3ad4b58c806254ca8ea58e',1,'pyrax::utils']]],
['get_5finfo',['get_info',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#a384aea54f97656a3742b60bae861bc34',1,'pyrax::cf_wrapper::client::CFClient']]],
['get_5flaunch_5fconfig',['get_launch_config',['../classpyrax_1_1autoscale_1_1ScalingGroup.html#ae370445e311ea96ab9b0bdfbc3eafa38',1,'pyrax::autoscale::ScalingGroup.get_launch_config()'],['../classpyrax_1_1autoscale_1_1ScalingGroupManager.html#ae370445e311ea96ab9b0bdfbc3eafa38',1,'pyrax::autoscale::ScalingGroupManager.get_launch_config()'],['../classpyrax_1_1autoscale_1_1AutoScaleClient.html#ae370445e311ea96ab9b0bdfbc3eafa38',1,'pyrax::autoscale::AutoScaleClient.get_launch_config()']]],
['get_5flimits',['get_limits',['../classpyrax_1_1client_1_1BaseClient.html#ab5ef84a0682afc9a357f6e76b15f1640',1,'pyrax::client::BaseClient.get_limits()'],['../classpyrax_1_1clouddatabases_1_1CloudDatabaseClient.html#ab5ef84a0682afc9a357f6e76b15f1640',1,'pyrax::clouddatabases::CloudDatabaseClient.get_limits()']]],
- ['get_5fmetadata',['get_metadata',['../classpyrax_1_1cf__wrapper_1_1container_1_1Container.html#abc9975b34823bfba383eb240ecde3783',1,'pyrax::cf_wrapper::container::Container.get_metadata()'],['../classpyrax_1_1cf__wrapper_1_1storage__object_1_1StorageObject.html#abc9975b34823bfba383eb240ecde3783',1,'pyrax::cf_wrapper::storage_object::StorageObject.get_metadata()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancer.html#abc9975b34823bfba383eb240ecde3783',1,'pyrax::cloudloadbalancers::CloudLoadBalancer.get_metadata()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerManager.html#abc9975b34823bfba383eb240ecde3783',1,'pyrax::cloudloadbalancers::CloudLoadBalancerManager.get_metadata()'],['../classpyrax_1_1cloudloadbalancers_1_1Node.html#abc9975b34823bfba383eb240ecde3783',1,'pyrax::cloudloadbalancers::Node.get_metadata()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#abc9975b34823bfba383eb240ecde3783',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.get_metadata()']]],
+ ['get_5fmessage',['get_message',['../classpyrax_1_1queueing_1_1Queue.html#a56468e8bf0910ac8be0def8886f1feae',1,'pyrax::queueing::Queue.get_message()'],['../classpyrax_1_1queueing_1_1QueueClient.html#a56468e8bf0910ac8be0def8886f1feae',1,'pyrax::queueing::QueueClient.get_message()']]],
+ ['get_5fmetadata',['get_metadata',['../classpyrax_1_1cf__wrapper_1_1container_1_1Container.html#abc9975b34823bfba383eb240ecde3783',1,'pyrax::cf_wrapper::container::Container.get_metadata()'],['../classpyrax_1_1cf__wrapper_1_1storage__object_1_1StorageObject.html#abc9975b34823bfba383eb240ecde3783',1,'pyrax::cf_wrapper::storage_object::StorageObject.get_metadata()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancer.html#abc9975b34823bfba383eb240ecde3783',1,'pyrax::cloudloadbalancers::CloudLoadBalancer.get_metadata()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerManager.html#abc9975b34823bfba383eb240ecde3783',1,'pyrax::cloudloadbalancers::CloudLoadBalancerManager.get_metadata()'],['../classpyrax_1_1cloudloadbalancers_1_1Node.html#abc9975b34823bfba383eb240ecde3783',1,'pyrax::cloudloadbalancers::Node.get_metadata()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#abc9975b34823bfba383eb240ecde3783',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.get_metadata()'],['../classpyrax_1_1queueing_1_1QueueManager.html#abc9975b34823bfba383eb240ecde3783',1,'pyrax::queueing::QueueManager.get_metadata()'],['../classpyrax_1_1queueing_1_1QueueClient.html#abc9975b34823bfba383eb240ecde3783',1,'pyrax::queueing::QueueClient.get_metadata()']]],
['get_5fmetadata_5ffor_5fnode',['get_metadata_for_node',['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancer.html#aebfb9c2caec7532153b2558fe14347cf',1,'pyrax::cloudloadbalancers::CloudLoadBalancer.get_metadata_for_node()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#aebfb9c2caec7532153b2558fe14347cf',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.get_metadata_for_node()']]],
['get_5fmetric_5fdata_5fpoints',['get_metric_data_points',['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntity.html#a58698bf0d5e72dc683fcf9384d110d69',1,'pyrax::cloudmonitoring::CloudMonitorEntity.get_metric_data_points()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntityManager.html#a58698bf0d5e72dc683fcf9384d110d69',1,'pyrax::cloudmonitoring::CloudMonitorEntityManager.get_metric_data_points()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorCheck.html#a58698bf0d5e72dc683fcf9384d110d69',1,'pyrax::cloudmonitoring::CloudMonitorCheck.get_metric_data_points()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#a58698bf0d5e72dc683fcf9384d110d69',1,'pyrax::cloudmonitoring::CloudMonitorClient.get_metric_data_points()']]],
['get_5fmonitoring_5fzone',['get_monitoring_zone',['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#a264e06827aca6792c0bb9342703b90f6',1,'pyrax::cloudmonitoring::CloudMonitorClient']]],
@@ -60,7 +63,7 @@ var searchData=
['get_5fsetting',['get_setting',['../namespacepyrax.html#a5dbd20ff4ad6c1590c1c4723852763da',1,'pyrax']]],
['get_5fssl_5ftermination',['get_ssl_termination',['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancer.html#adbf49e882fc9a7b55f38fd6213440d50',1,'pyrax::cloudloadbalancers::CloudLoadBalancer.get_ssl_termination()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerManager.html#adbf49e882fc9a7b55f38fd6213440d50',1,'pyrax::cloudloadbalancers::CloudLoadBalancerManager.get_ssl_termination()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#adbf49e882fc9a7b55f38fd6213440d50',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.get_ssl_termination()']]],
['get_5fstate',['get_state',['../classpyrax_1_1autoscale_1_1ScalingGroup.html#adc4030ba0851f007b904cc254c2cb489',1,'pyrax::autoscale::ScalingGroup.get_state()'],['../classpyrax_1_1autoscale_1_1ScalingGroupManager.html#adc4030ba0851f007b904cc254c2cb489',1,'pyrax::autoscale::ScalingGroupManager.get_state()'],['../classpyrax_1_1autoscale_1_1AutoScaleClient.html#adc4030ba0851f007b904cc254c2cb489',1,'pyrax::autoscale::AutoScaleClient.get_state()']]],
- ['get_5fstats',['get_stats',['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerManager.html#a46cdb2ba90b6fff4c5cf4d76ae0a1697',1,'pyrax::cloudloadbalancers::CloudLoadBalancerManager']]],
+ ['get_5fstats',['get_stats',['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerManager.html#a46cdb2ba90b6fff4c5cf4d76ae0a1697',1,'pyrax::cloudloadbalancers::CloudLoadBalancerManager.get_stats()'],['../classpyrax_1_1queueing_1_1QueueManager.html#a46cdb2ba90b6fff4c5cf4d76ae0a1697',1,'pyrax::queueing::QueueManager.get_stats()'],['../classpyrax_1_1queueing_1_1QueueClient.html#a46cdb2ba90b6fff4c5cf4d76ae0a1697',1,'pyrax::queueing::QueueClient.get_stats()']]],
['get_5fsubdomain_5fiterator',['get_subdomain_iterator',['../classpyrax_1_1clouddns_1_1CloudDNSClient.html#ac532463e7fb1cbb0a17181a816460adc',1,'pyrax::clouddns::CloudDNSClient']]],
['get_5ftemp_5furl',['get_temp_url',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#a26cd51b10d04d2f632fc6f12fa2e3b43',1,'pyrax::cf_wrapper::client::CFClient.get_temp_url()'],['../classpyrax_1_1cf__wrapper_1_1container_1_1Container.html#a26cd51b10d04d2f632fc6f12fa2e3b43',1,'pyrax::cf_wrapper::container::Container.get_temp_url()'],['../classpyrax_1_1cf__wrapper_1_1storage__object_1_1StorageObject.html#a26cd51b10d04d2f632fc6f12fa2e3b43',1,'pyrax::cf_wrapper::storage_object::StorageObject.get_temp_url()']]],
['get_5ftemp_5furl_5fkey',['get_temp_url_key',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#a80f3fd33724985de91d7930036344668',1,'pyrax::cf_wrapper::client::CFClient']]],
diff --git a/docs/html/search/all_68.js b/docs/html/search/all_68.js
index 88fd75a6..50faa835 100644
--- a/docs/html/search/all_68.js
+++ b/docs/html/search/all_68.js
@@ -3,6 +3,7 @@ var searchData=
['handle_5fswiftclient_5fexception',['handle_swiftclient_exception',['../namespacepyrax_1_1cf__wrapper_1_1client.html#aed48921b20ac7d5d1bf17dae8e2f971d',1,'pyrax::cf_wrapper::client']]],
['head',['head',['../classpyrax_1_1manager_1_1BaseManager.html#a6ffb8c9775dd06f2a95c5be870862051',1,'pyrax::manager::BaseManager']]],
['head_5fdate_5fformat',['HEAD_DATE_FORMAT',['../namespacepyrax_1_1cf__wrapper_1_1client.html#aea95a0010e586f71e6587a111cae730c',1,'pyrax::cf_wrapper::client']]],
+ ['href',['href',['../classpyrax_1_1queueing_1_1QueueMessage.html#aecfca4286e302d5d945be6fe76b99c86',1,'pyrax::queueing::QueueMessage.href()'],['../classpyrax_1_1queueing_1_1QueueClaim.html#ab8d8e60d0ff1588f6381ad0bef8ad4b7',1,'pyrax::queueing::QueueClaim.href()']]],
['http_5flog_5fdebug',['http_log_debug',['../classpyrax_1_1base__identity_1_1BaseAuth.html#a80b9e3606456b3ebde43de9500b1fcbb',1,'pyrax::base_identity::BaseAuth.http_log_debug()'],['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#a80b9e3606456b3ebde43de9500b1fcbb',1,'pyrax::cf_wrapper::client::CFClient.http_log_debug()'],['../classpyrax_1_1cf__wrapper_1_1client_1_1Connection.html#a80b9e3606456b3ebde43de9500b1fcbb',1,'pyrax::cf_wrapper::client::Connection::http_log_debug()'],['../classpyrax_1_1client_1_1BaseClient.html#a80b9e3606456b3ebde43de9500b1fcbb',1,'pyrax::client::BaseClient.http_log_debug()']]],
['http_5flog_5freq',['http_log_req',['../classpyrax_1_1client_1_1BaseClient.html#a1d196f692455d5ea3eafe3d08178b131',1,'pyrax::client::BaseClient']]],
['http_5flog_5fresp',['http_log_resp',['../classpyrax_1_1client_1_1BaseClient.html#ac3cd5495847543298c0440645432b5db',1,'pyrax::client::BaseClient']]],
diff --git a/docs/html/search/all_69.js b/docs/html/search/all_69.js
index e4e91fcc..a7f68205 100644
--- a/docs/html/search/all_69.js
+++ b/docs/html/search/all_69.js
@@ -1,6 +1,6 @@
var searchData=
[
- ['id',['id',['../classpyrax_1_1clouddns_1_1CloudDNSPTRRecord.html#acf2488b95c97e0378c9bf49de3b50f28',1,'pyrax::clouddns::CloudDNSPTRRecord.id()'],['../classpyrax_1_1cloudloadbalancers_1_1Node.html#acf2488b95c97e0378c9bf49de3b50f28',1,'pyrax::cloudloadbalancers::Node.id()'],['../classpyrax_1_1cloudloadbalancers_1_1VirtualIP.html#acf2488b95c97e0378c9bf49de3b50f28',1,'pyrax::cloudloadbalancers::VirtualIP.id()'],['../classpyrax_1_1cloudnetworks_1_1CloudNetwork.html#acf2488b95c97e0378c9bf49de3b50f28',1,'pyrax::cloudnetworks::CloudNetwork::id()'],['../classpyrax_1_1resource_1_1BaseResource.html#acf2488b95c97e0378c9bf49de3b50f28',1,'pyrax::resource::BaseResource::id()']]],
+ ['id',['id',['../classpyrax_1_1clouddns_1_1CloudDNSPTRRecord.html#acf2488b95c97e0378c9bf49de3b50f28',1,'pyrax::clouddns::CloudDNSPTRRecord.id()'],['../classpyrax_1_1cloudloadbalancers_1_1Node.html#acf2488b95c97e0378c9bf49de3b50f28',1,'pyrax::cloudloadbalancers::Node.id()'],['../classpyrax_1_1cloudloadbalancers_1_1VirtualIP.html#acf2488b95c97e0378c9bf49de3b50f28',1,'pyrax::cloudloadbalancers::VirtualIP.id()'],['../classpyrax_1_1cloudnetworks_1_1CloudNetwork.html#acf2488b95c97e0378c9bf49de3b50f28',1,'pyrax::cloudnetworks::CloudNetwork::id()'],['../classpyrax_1_1queueing_1_1QueueMessage.html#acf2488b95c97e0378c9bf49de3b50f28',1,'pyrax::queueing::QueueMessage::id()'],['../classpyrax_1_1queueing_1_1QueueClaim.html#acf2488b95c97e0378c9bf49de3b50f28',1,'pyrax::queueing::QueueClaim::id()'],['../classpyrax_1_1resource_1_1BaseResource.html#acf2488b95c97e0378c9bf49de3b50f28',1,'pyrax::resource::BaseResource::id()'],['../classpyrax_1_1queueing_1_1Queue.html#a35d1e2c2471fb3f05e6abe42bb74d25a',1,'pyrax::queueing::Queue.id'],['../classpyrax_1_1queueing_1_1Queue.html#a35d1e2c2471fb3f05e6abe42bb74d25a',1,'pyrax::queueing::Queue.id']]],
['identity',['identity',['../namespacepyrax.html#ab7fc5a23efc53b58e213cd4cdf931c9f',1,'pyrax']]],
['identityclassnotdefined',['IdentityClassNotDefined',['../classpyrax_1_1exceptions_1_1IdentityClassNotDefined.html',1,'pyrax::exceptions']]],
['ignore',['ignore',['../classpyrax_1_1cf__wrapper_1_1client_1_1FolderUploader.html#a0e575fb50e8e0cc27c24104a3ced5a5c',1,'pyrax::cf_wrapper::client::FolderUploader']]],
@@ -25,6 +25,7 @@ var searchData=
['invalidnodecondition',['InvalidNodeCondition',['../classpyrax_1_1exceptions_1_1InvalidNodeCondition.html',1,'pyrax::exceptions']]],
['invalidnodeparameters',['InvalidNodeParameters',['../classpyrax_1_1exceptions_1_1InvalidNodeParameters.html',1,'pyrax::exceptions']]],
['invalidptrrecord',['InvalidPTRRecord',['../classpyrax_1_1exceptions_1_1InvalidPTRRecord.html',1,'pyrax::exceptions']]],
+ ['invalidqueuename',['InvalidQueueName',['../classpyrax_1_1exceptions_1_1InvalidQueueName.html',1,'pyrax::exceptions']]],
['invalidsessionpersistencetype',['InvalidSessionPersistenceType',['../classpyrax_1_1exceptions_1_1InvalidSessionPersistenceType.html',1,'pyrax::exceptions']]],
['invalidsetting',['InvalidSetting',['../classpyrax_1_1exceptions_1_1InvalidSetting.html',1,'pyrax::exceptions']]],
['invalidsize',['InvalidSize',['../classpyrax_1_1exceptions_1_1InvalidSize.html',1,'pyrax::exceptions']]],
diff --git a/docs/html/search/all_6c.js b/docs/html/search/all_6c.js
index 3adbecd0..c4e0b9fb 100644
--- a/docs/html/search/all_6c.js
+++ b/docs/html/search/all_6c.js
@@ -2,8 +2,10 @@ var searchData=
[
['label',['label',['../classpyrax_1_1cloudnetworks_1_1CloudNetwork.html#a22f45a3cb4f074e609f58ebaeef0ecf9',1,'pyrax::cloudnetworks::CloudNetwork']]],
['last_5fmodified',['last_modified',['../classpyrax_1_1cf__wrapper_1_1storage__object_1_1StorageObject.html#aacadc30373e677c508e7b598fe832e32',1,'pyrax::cf_wrapper::storage_object::StorageObject']]],
- ['list',['list',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#a9b522b4ef7526bfa60b78fe735caa55c',1,'pyrax::cf_wrapper::client::CFClient.list()'],['../classpyrax_1_1client_1_1BaseClient.html#a9b522b4ef7526bfa60b78fe735caa55c',1,'pyrax::client::BaseClient.list()'],['../classpyrax_1_1clouddns_1_1CloudDNSManager.html#a9b522b4ef7526bfa60b78fe735caa55c',1,'pyrax::clouddns::CloudDNSManager.list()'],['../classpyrax_1_1clouddns_1_1CloudDNSClient.html#a9b522b4ef7526bfa60b78fe735caa55c',1,'pyrax::clouddns::CloudDNSClient.list()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#a9b522b4ef7526bfa60b78fe735caa55c',1,'pyrax::cloudmonitoring::CloudMonitorClient.list()'],['../classpyrax_1_1manager_1_1BaseManager.html#a9b522b4ef7526bfa60b78fe735caa55c',1,'pyrax::manager::BaseManager.list()']]],
+ ['list',['list',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#a9b522b4ef7526bfa60b78fe735caa55c',1,'pyrax::cf_wrapper::client::CFClient.list()'],['../classpyrax_1_1client_1_1BaseClient.html#a9b522b4ef7526bfa60b78fe735caa55c',1,'pyrax::client::BaseClient.list()'],['../classpyrax_1_1clouddns_1_1CloudDNSManager.html#a9b522b4ef7526bfa60b78fe735caa55c',1,'pyrax::clouddns::CloudDNSManager.list()'],['../classpyrax_1_1clouddns_1_1CloudDNSClient.html#a9b522b4ef7526bfa60b78fe735caa55c',1,'pyrax::clouddns::CloudDNSClient.list()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#a9b522b4ef7526bfa60b78fe735caa55c',1,'pyrax::cloudmonitoring::CloudMonitorClient.list()'],['../classpyrax_1_1manager_1_1BaseManager.html#a9b522b4ef7526bfa60b78fe735caa55c',1,'pyrax::manager::BaseManager.list()'],['../classpyrax_1_1queueing_1_1Queue.html#a9b522b4ef7526bfa60b78fe735caa55c',1,'pyrax::queueing::Queue.list()'],['../classpyrax_1_1queueing_1_1QueueMessageManager.html#a9b522b4ef7526bfa60b78fe735caa55c',1,'pyrax::queueing::QueueMessageManager.list()']]],
['list_5falarms',['list_alarms',['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntity.html#aba03029cc0b951e300763dfb7b374b8f',1,'pyrax::cloudmonitoring::CloudMonitorEntity.list_alarms()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntityManager.html#aba03029cc0b951e300763dfb7b374b8f',1,'pyrax::cloudmonitoring::CloudMonitorEntityManager.list_alarms()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#aba03029cc0b951e300763dfb7b374b8f',1,'pyrax::cloudmonitoring::CloudMonitorClient.list_alarms()']]],
+ ['list_5fby_5fclaim',['list_by_claim',['../classpyrax_1_1queueing_1_1Queue.html#ac7bd1cb861452925a16e74d9b6aafd43',1,'pyrax::queueing::Queue']]],
+ ['list_5fby_5fids',['list_by_ids',['../classpyrax_1_1queueing_1_1Queue.html#ae6ec4aa7bb7da740187a3039da271f47',1,'pyrax::queueing::Queue.list_by_ids()'],['../classpyrax_1_1queueing_1_1QueueMessageManager.html#ae6ec4aa7bb7da740187a3039da271f47',1,'pyrax::queueing::QueueMessageManager.list_by_ids()']]],
['list_5fcheck_5ftypes',['list_check_types',['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#a360142ff4cbb495a8e5ba41b9c810a8b',1,'pyrax::cloudmonitoring::CloudMonitorClient']]],
['list_5fchecks',['list_checks',['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntity.html#aedd0a1ae642c3f9e56df9bb5a1e1384c',1,'pyrax::cloudmonitoring::CloudMonitorEntity.list_checks()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntityManager.html#aedd0a1ae642c3f9e56df9bb5a1e1384c',1,'pyrax::cloudmonitoring::CloudMonitorEntityManager.list_checks()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#aedd0a1ae642c3f9e56df9bb5a1e1384c',1,'pyrax::cloudmonitoring::CloudMonitorClient.list_checks()']]],
['list_5fcontainer_5fsubdirs',['list_container_subdirs',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#a67a23cb99d99eb7027e487e506cf68f6',1,'pyrax::cf_wrapper::client::CFClient']]],
@@ -11,9 +13,13 @@ var searchData=
['list_5fcontainers_5finfo',['list_containers_info',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#ac56c2c448f1e4c975621d2921bfb1e9c',1,'pyrax::cf_wrapper::client::CFClient']]],
['list_5fcredentials',['list_credentials',['../classpyrax_1_1identity_1_1rax__identity_1_1RaxIdentity.html#a752ff9d933b98fdeb87b51a7409f6d4a',1,'pyrax::identity::rax_identity::RaxIdentity']]],
['list_5fdatabases',['list_databases',['../classpyrax_1_1clouddatabases_1_1CloudDatabaseInstance.html#ae180be633edf68a4a6599e8c64310c5f',1,'pyrax::clouddatabases::CloudDatabaseInstance.list_databases()'],['../classpyrax_1_1clouddatabases_1_1CloudDatabaseClient.html#ae180be633edf68a4a6599e8c64310c5f',1,'pyrax::clouddatabases::CloudDatabaseClient.list_databases()']]],
+ ['list_5fdate_5fformat',['LIST_DATE_FORMAT',['../namespacepyrax_1_1cf__wrapper_1_1client.html#abd01608caf4565408309af08113eb2b0',1,'pyrax::cf_wrapper::client']]],
['list_5fentities',['list_entities',['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#a03cd18f945d37d5cb1e0dc666b23ed3a',1,'pyrax::cloudmonitoring::CloudMonitorClient']]],
['list_5fenvironments',['list_environments',['../namespacepyrax.html#aaf4742684739d9a72d01f13093dc9a87',1,'pyrax']]],
['list_5fflavors',['list_flavors',['../classpyrax_1_1clouddatabases_1_1CloudDatabaseClient.html#ad54dc84b9febeac129b211c63c636612',1,'pyrax::clouddatabases::CloudDatabaseClient']]],
+ ['list_5fmessages',['list_messages',['../classpyrax_1_1queueing_1_1QueueClient.html#a2c448aaaca6c6858cb03494cef06f331',1,'pyrax::queueing::QueueClient']]],
+ ['list_5fmessages_5fby_5fclaim',['list_messages_by_claim',['../classpyrax_1_1queueing_1_1QueueClient.html#a67f1debfc0fcbb482b300f1eef48e31b',1,'pyrax::queueing::QueueClient']]],
+ ['list_5fmessages_5fby_5fids',['list_messages_by_ids',['../classpyrax_1_1queueing_1_1QueueClient.html#a0b7095bcaf881caa8bbe5b123a94982a',1,'pyrax::queueing::QueueClient']]],
['list_5fmethod',['list_method',['../classpyrax_1_1clouddns_1_1DomainResultsIterator.html#aef1de03914a0c0e4a1e69d43e72a4dbc',1,'pyrax::clouddns::DomainResultsIterator::list_method()'],['../classpyrax_1_1clouddns_1_1SubdomainResultsIterator.html#aef1de03914a0c0e4a1e69d43e72a4dbc',1,'pyrax::clouddns::SubdomainResultsIterator::list_method()'],['../classpyrax_1_1clouddns_1_1RecordResultsIterator.html#aef1de03914a0c0e4a1e69d43e72a4dbc',1,'pyrax::clouddns::RecordResultsIterator::list_method()']]],
['list_5fmetrics',['list_metrics',['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntity.html#af232f5b9467cd7214f217a3fe6d51473',1,'pyrax::cloudmonitoring::CloudMonitorEntity.list_metrics()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntityManager.html#af232f5b9467cd7214f217a3fe6d51473',1,'pyrax::cloudmonitoring::CloudMonitorEntityManager.list_metrics()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorCheck.html#af232f5b9467cd7214f217a3fe6d51473',1,'pyrax::cloudmonitoring::CloudMonitorCheck.list_metrics()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#af232f5b9467cd7214f217a3fe6d51473',1,'pyrax::cloudmonitoring::CloudMonitorClient.list_metrics()']]],
['list_5fmonitoring_5fzones',['list_monitoring_zones',['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#aa4b0e32f41ba11a5cd20dd43a9fc7c66',1,'pyrax::cloudmonitoring::CloudMonitorClient']]],
diff --git a/docs/html/search/all_6d.js b/docs/html/search/all_6d.js
index 22c01403..2bafd4d8 100644
--- a/docs/html/search/all_6d.js
+++ b/docs/html/search/all_6d.js
@@ -7,20 +7,24 @@ var searchData=
['management_5furl',['management_url',['../classpyrax_1_1client_1_1BaseClient.html#a0e7003a466834b21ae00b9640955da9f',1,'pyrax::client::BaseClient']]],
['manager',['manager',['../classpyrax_1_1clouddns_1_1ResultsIterator.html#a23416379944e641a8ad6bdbc95ef1859',1,'pyrax::clouddns::ResultsIterator::manager()'],['../classpyrax_1_1resource_1_1BaseResource.html#a23416379944e641a8ad6bdbc95ef1859',1,'pyrax::resource::BaseResource.manager()']]],
['manager_2epy',['manager.py',['../manager_8py.html',1,'']]],
+ ['marker_5fpat',['marker_pat',['../namespacepyrax_1_1queueing.html#a4ee1e020cf80b7e79c3f93360c95c811',1,'pyrax::queueing']]],
['match_5fpattern',['match_pattern',['../namespacepyrax_1_1utils.html#ab32790e8c29f35cd0c810dd86dedba2c',1,'pyrax::utils']]],
['max_5fentities',['max_entities',['../classpyrax_1_1autoscale_1_1ScalingGroup.html#a2a68ed3a21e0314dc3b7d6e1ac6db90d',1,'pyrax::autoscale::ScalingGroup.max_entities'],['../classpyrax_1_1autoscale_1_1ScalingGroup.html#a2a68ed3a21e0314dc3b7d6e1ac6db90d',1,'pyrax::autoscale::ScalingGroup.max_entities']]],
['max_5ffile_5fsize',['max_file_size',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#a38c5b8bfe2405c63c56e3dec85a057bc',1,'pyrax::cf_wrapper::client::CFClient']]],
['max_5fsize',['MAX_SIZE',['../namespacepyrax_1_1cloudblockstorage.html#a395b0fb68a5628e06819cb4aa43631fe',1,'pyrax::cloudblockstorage']]],
['message',['message',['../classpyrax_1_1exceptions_1_1ClientException.html#ab8140947611504abcb64a4c277effcf5',1,'pyrax::exceptions::ClientException.message()'],['../classpyrax_1_1exceptions_1_1BadRequest.html#ae1ed0d7a6f352c7ee3ad978429822c6f',1,'pyrax::exceptions::BadRequest.message()'],['../classpyrax_1_1exceptions_1_1Unauthorized.html#ae1ed0d7a6f352c7ee3ad978429822c6f',1,'pyrax::exceptions::Unauthorized.message()'],['../classpyrax_1_1exceptions_1_1Forbidden.html#ae1ed0d7a6f352c7ee3ad978429822c6f',1,'pyrax::exceptions::Forbidden.message()'],['../classpyrax_1_1exceptions_1_1NotFound.html#ae1ed0d7a6f352c7ee3ad978429822c6f',1,'pyrax::exceptions::NotFound.message()'],['../classpyrax_1_1exceptions_1_1NoUniqueMatch.html#ae1ed0d7a6f352c7ee3ad978429822c6f',1,'pyrax::exceptions::NoUniqueMatch.message()'],['../classpyrax_1_1exceptions_1_1OverLimit.html#ae1ed0d7a6f352c7ee3ad978429822c6f',1,'pyrax::exceptions::OverLimit.message()'],['../classpyrax_1_1exceptions_1_1HTTPNotImplemented.html#ae1ed0d7a6f352c7ee3ad978429822c6f',1,'pyrax::exceptions::HTTPNotImplemented.message()']]],
+ ['messages',['messages',['../classpyrax_1_1queueing_1_1QueueClaim.html#a7048605d09bb21159ccaab63402dc4e5',1,'pyrax::queueing::QueueClaim']]],
['metadata',['metadata',['../classpyrax_1_1autoscale_1_1ScalingGroup.html#a2fe8c1ec91a77ed22c86049d4ffef3f4',1,'pyrax::autoscale::ScalingGroup.metadata'],['../classpyrax_1_1autoscale_1_1ScalingGroup.html#a2fe8c1ec91a77ed22c86049d4ffef3f4',1,'pyrax::autoscale::ScalingGroup.metadata']]],
['method_5fdelete',['method_delete',['../classpyrax_1_1base__identity_1_1BaseAuth.html#a1132703a22def73f131d037165a971aa',1,'pyrax::base_identity::BaseAuth.method_delete()'],['../classpyrax_1_1client_1_1BaseClient.html#a1132703a22def73f131d037165a971aa',1,'pyrax::client::BaseClient.method_delete()']]],
['method_5fget',['method_get',['../classpyrax_1_1base__identity_1_1BaseAuth.html#ac1f6b6211af6452ff038fbb8a25f4822',1,'pyrax::base_identity::BaseAuth.method_get()'],['../classpyrax_1_1client_1_1BaseClient.html#ac1f6b6211af6452ff038fbb8a25f4822',1,'pyrax::client::BaseClient.method_get()']]],
['method_5fhead',['method_head',['../classpyrax_1_1base__identity_1_1BaseAuth.html#a2b66a305940ec13628995f2a93c55b89',1,'pyrax::base_identity::BaseAuth.method_head()'],['../classpyrax_1_1client_1_1BaseClient.html#a2b66a305940ec13628995f2a93c55b89',1,'pyrax::client::BaseClient.method_head()']]],
+ ['method_5fpatch',['method_patch',['../classpyrax_1_1client_1_1BaseClient.html#a8212f8a94b29f7c39e38a9b6d8741322',1,'pyrax::client::BaseClient']]],
['method_5fpost',['method_post',['../classpyrax_1_1base__identity_1_1BaseAuth.html#a248efd43b254ea67d11575531bad3247',1,'pyrax::base_identity::BaseAuth.method_post()'],['../classpyrax_1_1client_1_1BaseClient.html#a248efd43b254ea67d11575531bad3247',1,'pyrax::client::BaseClient.method_post()']]],
['method_5fput',['method_put',['../classpyrax_1_1base__identity_1_1BaseAuth.html#a49de945eec86f955f4ac7d1487dcf286',1,'pyrax::base_identity::BaseAuth.method_put()'],['../classpyrax_1_1client_1_1BaseClient.html#a49de945eec86f955f4ac7d1487dcf286',1,'pyrax::client::BaseClient.method_put()']]],
['min_5fentities',['min_entities',['../classpyrax_1_1autoscale_1_1ScalingGroup.html#a395e6beed5694f7d60ea9657d57e9ad0',1,'pyrax::autoscale::ScalingGroup.min_entities'],['../classpyrax_1_1autoscale_1_1ScalingGroup.html#a395e6beed5694f7d60ea9657d57e9ad0',1,'pyrax::autoscale::ScalingGroup.min_entities']]],
['min_5fsize',['MIN_SIZE',['../namespacepyrax_1_1cloudblockstorage.html#aaba5e7c5484ccde364fadc3e6a496b1f',1,'pyrax::cloudblockstorage']]],
['missingauthsettings',['MissingAuthSettings',['../classpyrax_1_1exceptions_1_1MissingAuthSettings.html',1,'pyrax::exceptions']]],
+ ['missingclaimparameters',['MissingClaimParameters',['../classpyrax_1_1exceptions_1_1MissingClaimParameters.html',1,'pyrax::exceptions']]],
['missingdnssettings',['MissingDNSSettings',['../classpyrax_1_1exceptions_1_1MissingDNSSettings.html',1,'pyrax::exceptions']]],
['missinghealthmonitorsettings',['MissingHealthMonitorSettings',['../classpyrax_1_1exceptions_1_1MissingHealthMonitorSettings.html',1,'pyrax::exceptions']]],
['missingloadbalancerparameters',['MissingLoadBalancerParameters',['../classpyrax_1_1exceptions_1_1MissingLoadBalancerParameters.html',1,'pyrax::exceptions']]],
@@ -31,5 +35,6 @@ var searchData=
['monitoringchecktargetnotspecified',['MonitoringCheckTargetNotSpecified',['../classpyrax_1_1exceptions_1_1MonitoringCheckTargetNotSpecified.html',1,'pyrax::exceptions']]],
['monitoringzonespollmissing',['MonitoringZonesPollMissing',['../classpyrax_1_1exceptions_1_1MonitoringZonesPollMissing.html',1,'pyrax::exceptions']]],
['move',['move',['../classpyrax_1_1cf__wrapper_1_1storage__object_1_1StorageObject.html#af348e10e4e4162711ceedc9841276eb4',1,'pyrax::cf_wrapper::storage_object::StorageObject']]],
- ['move_5fobject',['move_object',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#a536af175d76546af80d86eda21eaaccc',1,'pyrax::cf_wrapper::client::CFClient.move_object()'],['../classpyrax_1_1cf__wrapper_1_1container_1_1Container.html#a536af175d76546af80d86eda21eaaccc',1,'pyrax::cf_wrapper::container::Container.move_object()']]]
+ ['move_5fobject',['move_object',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#a536af175d76546af80d86eda21eaaccc',1,'pyrax::cf_wrapper::client::CFClient.move_object()'],['../classpyrax_1_1cf__wrapper_1_1container_1_1Container.html#a536af175d76546af80d86eda21eaaccc',1,'pyrax::cf_wrapper::container::Container.move_object()']]],
+ ['msg_5flimit',['MSG_LIMIT',['../namespacepyrax_1_1queueing.html#ae9142e29cab13b8a9c7b02ea91ba9695',1,'pyrax::queueing']]]
];
diff --git a/docs/html/search/all_6e.js b/docs/html/search/all_6e.js
index baa15792..74299353 100644
--- a/docs/html/search/all_6e.js
+++ b/docs/html/search/all_6e.js
@@ -1,6 +1,6 @@
var searchData=
[
- ['name',['name',['../classpyrax_1_1autoscale_1_1AutoScaleClient.html#a8ccf841cb59e451791bcb2e1ac4f1edc',1,'pyrax::autoscale::AutoScaleClient::name()'],['../classpyrax_1_1cf__wrapper_1_1container_1_1Container.html#ab74e6bf80237ddc4109968cedc58c151',1,'pyrax::cf_wrapper::container::Container.name()'],['../classpyrax_1_1cf__wrapper_1_1storage__object_1_1StorageObject.html#ab74e6bf80237ddc4109968cedc58c151',1,'pyrax::cf_wrapper::storage_object::StorageObject.name()'],['../classpyrax_1_1client_1_1BaseClient.html#a8ccf841cb59e451791bcb2e1ac4f1edc',1,'pyrax::client::BaseClient.name()'],['../classpyrax_1_1cloudblockstorage_1_1CloudBlockStorageSnapshot.html#ab74e6bf80237ddc4109968cedc58c151',1,'pyrax::cloudblockstorage::CloudBlockStorageSnapshot.name()'],['../classpyrax_1_1cloudblockstorage_1_1CloudBlockStorageVolume.html#ab74e6bf80237ddc4109968cedc58c151',1,'pyrax::cloudblockstorage::CloudBlockStorageVolume.name()'],['../classpyrax_1_1cloudblockstorage_1_1CloudBlockStorageClient.html#a8ccf841cb59e451791bcb2e1ac4f1edc',1,'pyrax::cloudblockstorage::CloudBlockStorageClient::name()'],['../classpyrax_1_1clouddatabases_1_1CloudDatabaseClient.html#a8ccf841cb59e451791bcb2e1ac4f1edc',1,'pyrax::clouddatabases::CloudDatabaseClient::name()'],['../classpyrax_1_1clouddns_1_1CloudDNSRecord.html#ab74e6bf80237ddc4109968cedc58c151',1,'pyrax::clouddns::CloudDNSRecord.name()'],['../classpyrax_1_1clouddns_1_1CloudDNSPTRRecord.html#ab74e6bf80237ddc4109968cedc58c151',1,'pyrax::clouddns::CloudDNSPTRRecord.name()'],['../classpyrax_1_1clouddns_1_1CloudDNSClient.html#a8ccf841cb59e451791bcb2e1ac4f1edc',1,'pyrax::clouddns::CloudDNSClient::name()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#a8ccf841cb59e451791bcb2e1ac4f1edc',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient::name()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#ab74e6bf80237ddc4109968cedc58c151',1,'pyrax::cloudmonitoring::CloudMonitorClient::name()'],['../classpyrax_1_1cloudnetworks_1_1C
loudNetwork.html#ab74e6bf80237ddc4109968cedc58c151',1,'pyrax::cloudnetworks::CloudNetwork.name()'],['../classpyrax_1_1cloudnetworks_1_1CloudNetworkClient.html#ab74e6bf80237ddc4109968cedc58c151',1,'pyrax::cloudnetworks::CloudNetworkClient::name()'],['../classpyrax_1_1utils_1_1SelfDeletingTempfile.html#ab74e6bf80237ddc4109968cedc58c151',1,'pyrax::utils::SelfDeletingTempfile::name()'],['../classpyrax_1_1utils_1_1SelfDeletingTempDirectory.html#ab74e6bf80237ddc4109968cedc58c151',1,'pyrax::utils::SelfDeletingTempDirectory::name()'],['../classpyrax_1_1autoscale_1_1ScalingGroup.html#a757840459670ee7692e00cf5ddc722d5',1,'pyrax::autoscale::ScalingGroup.name'],['../classpyrax_1_1autoscale_1_1ScalingGroup.html#a757840459670ee7692e00cf5ddc722d5',1,'pyrax::autoscale::ScalingGroup.name'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntity.html#a757840459670ee7692e00cf5ddc722d5',1,'pyrax::cloudmonitoring::CloudMonitorEntity.name()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorCheck.html#a757840459670ee7692e00cf5ddc722d5',1,'pyrax::cloudmonitoring::CloudMonitorCheck.name()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorZone.html#a757840459670ee7692e00cf5ddc722d5',1,'pyrax::cloudmonitoring::CloudMonitorZone.name()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorNotification.html#a757840459670ee7692e00cf5ddc722d5',1,'pyrax::cloudmonitoring::CloudMonitorNotification.name()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorNotificationType.html#a757840459670ee7692e00cf5ddc722d5',1,'pyrax::cloudmonitoring::CloudMonitorNotificationType.name()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorNotificationPlan.html#a757840459670ee7692e00cf5ddc722d5',1,'pyrax::cloudmonitoring::CloudMonitorNotificationPlan.name()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorAlarm.html#a757840459670ee7692e00cf5ddc722d5',1,'pyrax::cloudmonitoring::CloudMonitorAlarm.name()'],['../namespacesetup.html#a8ccf841cb59e451791bcb2e1ac4f1edc',1,'setup.name()']]],
+ ['name',['name',['../classpyrax_1_1autoscale_1_1AutoScaleClient.html#a8ccf841cb59e451791bcb2e1ac4f1edc',1,'pyrax::autoscale::AutoScaleClient::name()'],['../classpyrax_1_1cf__wrapper_1_1container_1_1Container.html#ab74e6bf80237ddc4109968cedc58c151',1,'pyrax::cf_wrapper::container::Container.name()'],['../classpyrax_1_1cf__wrapper_1_1storage__object_1_1StorageObject.html#ab74e6bf80237ddc4109968cedc58c151',1,'pyrax::cf_wrapper::storage_object::StorageObject.name()'],['../classpyrax_1_1client_1_1BaseClient.html#a8ccf841cb59e451791bcb2e1ac4f1edc',1,'pyrax::client::BaseClient.name()'],['../classpyrax_1_1cloudblockstorage_1_1CloudBlockStorageSnapshot.html#ab74e6bf80237ddc4109968cedc58c151',1,'pyrax::cloudblockstorage::CloudBlockStorageSnapshot.name()'],['../classpyrax_1_1cloudblockstorage_1_1CloudBlockStorageVolume.html#ab74e6bf80237ddc4109968cedc58c151',1,'pyrax::cloudblockstorage::CloudBlockStorageVolume.name()'],['../classpyrax_1_1cloudblockstorage_1_1CloudBlockStorageClient.html#a8ccf841cb59e451791bcb2e1ac4f1edc',1,'pyrax::cloudblockstorage::CloudBlockStorageClient::name()'],['../classpyrax_1_1clouddatabases_1_1CloudDatabaseClient.html#a8ccf841cb59e451791bcb2e1ac4f1edc',1,'pyrax::clouddatabases::CloudDatabaseClient::name()'],['../classpyrax_1_1clouddns_1_1CloudDNSRecord.html#ab74e6bf80237ddc4109968cedc58c151',1,'pyrax::clouddns::CloudDNSRecord.name()'],['../classpyrax_1_1clouddns_1_1CloudDNSPTRRecord.html#ab74e6bf80237ddc4109968cedc58c151',1,'pyrax::clouddns::CloudDNSPTRRecord.name()'],['../classpyrax_1_1clouddns_1_1CloudDNSClient.html#a8ccf841cb59e451791bcb2e1ac4f1edc',1,'pyrax::clouddns::CloudDNSClient::name()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#a8ccf841cb59e451791bcb2e1ac4f1edc',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient::name()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#ab74e6bf80237ddc4109968cedc58c151',1,'pyrax::cloudmonitoring::CloudMonitorClient::name()'],['../classpyrax_1_1cloudnetworks_1_1C
loudNetwork.html#ab74e6bf80237ddc4109968cedc58c151',1,'pyrax::cloudnetworks::CloudNetwork.name()'],['../classpyrax_1_1cloudnetworks_1_1CloudNetworkClient.html#ab74e6bf80237ddc4109968cedc58c151',1,'pyrax::cloudnetworks::CloudNetworkClient::name()'],['../classpyrax_1_1queueing_1_1Queue.html#ab74e6bf80237ddc4109968cedc58c151',1,'pyrax::queueing::Queue::name()'],['../classpyrax_1_1queueing_1_1QueueClient.html#a8ccf841cb59e451791bcb2e1ac4f1edc',1,'pyrax::queueing::QueueClient::name()'],['../classpyrax_1_1utils_1_1SelfDeletingTempfile.html#ab74e6bf80237ddc4109968cedc58c151',1,'pyrax::utils::SelfDeletingTempfile::name()'],['../classpyrax_1_1utils_1_1SelfDeletingTempDirectory.html#ab74e6bf80237ddc4109968cedc58c151',1,'pyrax::utils::SelfDeletingTempDirectory::name()'],['../classpyrax_1_1autoscale_1_1ScalingGroup.html#a757840459670ee7692e00cf5ddc722d5',1,'pyrax::autoscale::ScalingGroup.name'],['../classpyrax_1_1autoscale_1_1ScalingGroup.html#a757840459670ee7692e00cf5ddc722d5',1,'pyrax::autoscale::ScalingGroup.name'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntity.html#a757840459670ee7692e00cf5ddc722d5',1,'pyrax::cloudmonitoring::CloudMonitorEntity.name()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorCheck.html#a757840459670ee7692e00cf5ddc722d5',1,'pyrax::cloudmonitoring::CloudMonitorCheck.name()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorZone.html#a757840459670ee7692e00cf5ddc722d5',1,'pyrax::cloudmonitoring::CloudMonitorZone.name()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorNotification.html#a757840459670ee7692e00cf5ddc722d5',1,'pyrax::cloudmonitoring::CloudMonitorNotification.name()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorNotificationType.html#a757840459670ee7692e00cf5ddc722d5',1,'pyrax::cloudmonitoring::CloudMonitorNotificationType.name()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorNotificationPlan.html#a757840459670ee7692e00cf5ddc722d5',1,'pyrax::cloudmonitoring::CloudMonitorNotificationPlan.name()'],['../classpyrax_1_1c
loudmonitoring_1_1CloudMonitorAlarm.html#a757840459670ee7692e00cf5ddc722d5',1,'pyrax::cloudmonitoring::CloudMonitorAlarm.name()'],['../namespacesetup.html#a8ccf841cb59e451791bcb2e1ac4f1edc',1,'setup.name()']]],
['name_5fattr',['NAME_ATTR',['../classpyrax_1_1resource_1_1BaseResource.html#a74fac10a98253f8b0308159a33113ab9',1,'pyrax::resource::BaseResource']]],
['networkcidrinvalid',['NetworkCIDRInvalid',['../classpyrax_1_1exceptions_1_1NetworkCIDRInvalid.html',1,'pyrax::exceptions']]],
['networkcidrmalformed',['NetworkCIDRMalformed',['../classpyrax_1_1exceptions_1_1NetworkCIDRMalformed.html',1,'pyrax::exceptions']]],
diff --git a/docs/html/search/all_70.js b/docs/html/search/all_70.js
index dfcf9c06..0bc1c72a 100644
--- a/docs/html/search/all_70.js
+++ b/docs/html/search/all_70.js
@@ -25,11 +25,12 @@ var searchData=
['path',['path',['../namespacepyrax_1_1identity.html#ae6fc00af7c5b5a7c5f40ce6dc6b47d85',1,'pyrax::identity']]],
['pause',['pause',['../classpyrax_1_1autoscale_1_1ScalingGroup.html#ad87957c5b208fe27e24a5260f5ddbb95',1,'pyrax::autoscale::ScalingGroup.pause()'],['../classpyrax_1_1autoscale_1_1ScalingGroupManager.html#ad87957c5b208fe27e24a5260f5ddbb95',1,'pyrax::autoscale::ScalingGroupManager.pause()'],['../classpyrax_1_1autoscale_1_1AutoScaleClient.html#ad87957c5b208fe27e24a5260f5ddbb95',1,'pyrax::autoscale::AutoScaleClient.pause()']]],
['plug_5fhole_5fin_5fswiftclient_5fauth',['plug_hole_in_swiftclient_auth',['../namespacepyrax.html#a52520cf6c40b52d2b67faf9762accb18',1,'pyrax']]],
- ['plural_5fresponse_5fkey',['plural_response_key',['../classpyrax_1_1manager_1_1BaseManager.html#a692be54e20855ad7f41d446719f26491',1,'pyrax::manager::BaseManager']]],
+ ['plural_5fresponse_5fkey',['plural_response_key',['../classpyrax_1_1manager_1_1BaseManager.html#a692be54e20855ad7f41d446719f26491',1,'pyrax::manager::BaseManager.plural_response_key()'],['../classpyrax_1_1queueing_1_1QueueMessageManager.html#a692be54e20855ad7f41d446719f26491',1,'pyrax::queueing::QueueMessageManager::plural_response_key()']]],
['policies',['policies',['../classpyrax_1_1autoscale_1_1ScalingGroup.html#a20e4bd6dc33dd1f27f9b7eaed505f80e',1,'pyrax::autoscale::ScalingGroup']]],
['policy',['policy',['../classpyrax_1_1autoscale_1_1AutoScaleWebhook.html#ad986b73e9d5f47a623a9b6d773c25e34',1,'pyrax::autoscale::AutoScaleWebhook']]],
['policy_5fcount',['policy_count',['../classpyrax_1_1autoscale_1_1ScalingGroup.html#a221c84e10c0ca6bc4f240b139076bf4d',1,'pyrax::autoscale::ScalingGroup']]],
['port',['port',['../classpyrax_1_1cloudloadbalancers_1_1Node.html#af8fb0f45ee0195c7422a49e6a8d72369',1,'pyrax::cloudloadbalancers::Node']]],
+ ['post_5fmessage',['post_message',['../classpyrax_1_1queueing_1_1Queue.html#a6ccb26d9187a005281b0342f7eb995ca',1,'pyrax::queueing::Queue.post_message()'],['../classpyrax_1_1queueing_1_1QueueClient.html#a6ccb26d9187a005281b0342f7eb995ca',1,'pyrax::queueing::QueueClient.post_message()']]],
['priority',['priority',['../classpyrax_1_1clouddns_1_1CloudDNSRecord.html#a6a5183df4c54c3e28dc8dc704f2487d5',1,'pyrax::clouddns::CloudDNSRecord']]],
['projectid',['projectid',['../classpyrax_1_1client_1_1BaseClient.html#af6e68e7b4a48c30549646fd3d5ed1aae',1,'pyrax::client::BaseClient']]],
['protocolmismatch',['ProtocolMismatch',['../classpyrax_1_1exceptions_1_1ProtocolMismatch.html',1,'pyrax::exceptions']]],
@@ -45,6 +46,7 @@ var searchData=
['pypath',['pypath',['../namespacepyrax_1_1identity.html#a41a87c16270907c740784cfa25d0c46a',1,'pyrax::identity']]],
['pyrax',['pyrax',['../namespacepyrax.html',1,'']]],
['pyraxexception',['PyraxException',['../classpyrax_1_1exceptions_1_1PyraxException.html',1,'pyrax::exceptions']]],
+ ['queueing',['queueing',['../namespacepyrax_1_1queueing.html',1,'pyrax']]],
['rax_5fidentity',['rax_identity',['../namespacepyrax_1_1identity_1_1rax__identity.html',1,'pyrax::identity']]],
['resource',['resource',['../namespacepyrax_1_1resource.html',1,'pyrax']]],
['service_5fcatalog',['service_catalog',['../namespacepyrax_1_1service__catalog.html',1,'pyrax']]],
diff --git a/docs/html/search/all_71.html b/docs/html/search/all_71.html
new file mode 100644
index 00000000..b4dc1e6e
--- /dev/null
+++ b/docs/html/search/all_71.html
@@ -0,0 +1,25 @@
+
+
+
+
+
+
+
+
+
+
+Loading...
+
+
+
+Searching...
+
+No Matches
+
+
+
+
diff --git a/docs/html/search/all_71.js b/docs/html/search/all_71.js
new file mode 100644
index 00000000..1a640360
--- /dev/null
+++ b/docs/html/search/all_71.js
@@ -0,0 +1,14 @@
+var searchData=
+[
+ ['queue',['Queue',['../classpyrax_1_1queueing_1_1Queue.html',1,'pyrax::queueing']]],
+ ['queue_5fexists',['queue_exists',['../classpyrax_1_1queueing_1_1QueueClient.html#a4bc133ed8033eabfa1954379b1f10d9e',1,'pyrax::queueing::QueueClient']]],
+ ['queueclaim',['QueueClaim',['../classpyrax_1_1queueing_1_1QueueClaim.html',1,'pyrax::queueing']]],
+ ['queueclaimmanager',['QueueClaimManager',['../classpyrax_1_1queueing_1_1QueueClaimManager.html',1,'pyrax::queueing']]],
+ ['queueclient',['QueueClient',['../classpyrax_1_1queueing_1_1QueueClient.html',1,'pyrax::queueing']]],
+ ['queueclientidnotdefined',['QueueClientIDNotDefined',['../classpyrax_1_1exceptions_1_1QueueClientIDNotDefined.html',1,'pyrax::exceptions']]],
+ ['queueing_2epy',['queueing.py',['../queueing_8py.html',1,'']]],
+ ['queuemanager',['QueueManager',['../classpyrax_1_1queueing_1_1QueueManager.html',1,'pyrax::queueing']]],
+ ['queuemessage',['QueueMessage',['../classpyrax_1_1queueing_1_1QueueMessage.html',1,'pyrax::queueing']]],
+ ['queuemessagemanager',['QueueMessageManager',['../classpyrax_1_1queueing_1_1QueueMessageManager.html',1,'pyrax::queueing']]],
+ ['queues',['queues',['../namespacepyrax.html#a66c651710751cc178b5a5c0f029a9e8f',1,'pyrax']]]
+];
diff --git a/docs/html/search/all_72.js b/docs/html/search/all_72.js
index 90daf70f..00dea7f5 100644
--- a/docs/html/search/all_72.js
+++ b/docs/html/search/all_72.js
@@ -1,6 +1,7 @@
var searchData=
[
- ['random_5fname',['random_name',['../namespacepyrax_1_1utils.html#a15ae8eb19e0cbaf0281e950aebca0962',1,'pyrax::utils']]],
+ ['random_5fascii',['random_ascii',['../namespacepyrax_1_1utils.html#ad1dd5f67ceaa944c1f0bae698488632f',1,'pyrax::utils']]],
+ ['random_5funicode',['random_unicode',['../namespacepyrax_1_1utils.html#a22d977f9099b32cca29ae8f2bf85c738',1,'pyrax::utils']]],
['rax_5fidentity_2epy',['rax_identity.py',['../rax__identity_8py.html',1,'']]],
['raxidentity',['RaxIdentity',['../classpyrax_1_1identity_1_1rax__identity_1_1RaxIdentity.html',1,'pyrax::identity::rax_identity']]],
['read_5fconfig',['read_config',['../classpyrax_1_1Settings.html#a53943930dc298ed49aa0950b4898bb65',1,'pyrax::Settings']]],
@@ -8,12 +9,17 @@ var searchData=
['region',['region',['../classpyrax_1_1base__identity_1_1BaseAuth.html#a1b9edddb3735d131c67e9e824f07c402',1,'pyrax::base_identity::BaseAuth']]],
['region_5fname',['region_name',['../classpyrax_1_1client_1_1BaseClient.html#a326b5b91b887c67a677e2eb509b569b6',1,'pyrax::client::BaseClient']]],
['regions',['regions',['../classpyrax_1_1base__identity_1_1BaseAuth.html#a4c4786354df7358bf12c3c65069dd8b7',1,'pyrax::base_identity::BaseAuth.regions()'],['../namespacepyrax.html#a2b45bebec67926b49ea55f14eb0b8f8e',1,'pyrax.regions()']]],
+ ['release_5fclaim',['release_claim',['../classpyrax_1_1queueing_1_1Queue.html#a72572b2049988d314b2943909f1a3284',1,'pyrax::queueing::Queue.release_claim()'],['../classpyrax_1_1queueing_1_1QueueClient.html#a72572b2049988d314b2943909f1a3284',1,'pyrax::queueing::QueueClient.release_claim()']]],
['reload',['reload',['../classpyrax_1_1autoscale_1_1AutoScalePolicy.html#ac5c05266f4f3b5937cefb6a818fc6675',1,'pyrax::autoscale::AutoScalePolicy.reload()'],['../classpyrax_1_1autoscale_1_1AutoScaleWebhook.html#ac5c05266f4f3b5937cefb6a818fc6675',1,'pyrax::autoscale::AutoScaleWebhook.reload()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorCheck.html#ac5c05266f4f3b5937cefb6a818fc6675',1,'pyrax::cloudmonitoring::CloudMonitorCheck.reload()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorAlarm.html#ac5c05266f4f3b5937cefb6a818fc6675',1,'pyrax::cloudmonitoring::CloudMonitorAlarm.reload()'],['../classpyrax_1_1resource_1_1BaseResource.html#ac5c05266f4f3b5937cefb6a818fc6675',1,'pyrax::resource::BaseResource.reload()']]],
['remove_5fcontainer_5ffrom_5fcache',['remove_container_from_cache',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#a71217dd504682153d8dbbd8bd84eed56',1,'pyrax::cf_wrapper::client::CFClient']]],
['remove_5fcontainer_5fmetadata_5fkey',['remove_container_metadata_key',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#a91fcdf99119a696e0b626957628605bb',1,'pyrax::cf_wrapper::client::CFClient']]],
['remove_5ffrom_5fcache',['remove_from_cache',['../classpyrax_1_1cf__wrapper_1_1container_1_1Container.html#adf3d192d83c2f53a96b0bd594c15b84d',1,'pyrax::cf_wrapper::container::Container']]],
['remove_5fmetadata_5fkey',['remove_metadata_key',['../classpyrax_1_1cf__wrapper_1_1container_1_1Container.html#a01a67fdfd99526a56bef1e3c0a01a68d',1,'pyrax::cf_wrapper::container::Container.remove_metadata_key()'],['../classpyrax_1_1cf__wrapper_1_1storage__object_1_1StorageObject.html#a01a67fdfd99526a56bef1e3c0a01a68d',1,'pyrax::cf_wrapper::storage_object::StorageObject.remove_metadata_key()']]],
['remove_5fobject_5fmetadata_5fkey',['remove_object_metadata_key',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#ac3072e2b9369e7435f660ff89ae85aa5',1,'pyrax::cf_wrapper::client::CFClient']]],
+ ['replace',['replace',['../classpyrax_1_1autoscale_1_1ScalingGroupManager.html#ab285a64f9f3fb8e2efb536e440b34466',1,'pyrax::autoscale::ScalingGroupManager.replace()'],['../classpyrax_1_1autoscale_1_1AutoScaleClient.html#ab285a64f9f3fb8e2efb536e440b34466',1,'pyrax::autoscale::AutoScaleClient.replace()']]],
+ ['replace_5flaunch_5fconfig',['replace_launch_config',['../classpyrax_1_1autoscale_1_1ScalingGroupManager.html#a851fe41ce6befc7dd1b0a3f98328bbda',1,'pyrax::autoscale::ScalingGroupManager.replace_launch_config()'],['../classpyrax_1_1autoscale_1_1AutoScaleClient.html#a851fe41ce6befc7dd1b0a3f98328bbda',1,'pyrax::autoscale::AutoScaleClient.replace_launch_config()']]],
+ ['replace_5fpolicy',['replace_policy',['../classpyrax_1_1autoscale_1_1ScalingGroupManager.html#a4aea046272a0a459df2cc40ba2ee4d21',1,'pyrax::autoscale::ScalingGroupManager.replace_policy()'],['../classpyrax_1_1autoscale_1_1AutoScaleClient.html#a4aea046272a0a459df2cc40ba2ee4d21',1,'pyrax::autoscale::AutoScaleClient.replace_policy()']]],
+ ['replace_5fwebhook',['replace_webhook',['../classpyrax_1_1autoscale_1_1ScalingGroupManager.html#a652340a0267b6f56e5416b97b367c5fc',1,'pyrax::autoscale::ScalingGroupManager.replace_webhook()'],['../classpyrax_1_1autoscale_1_1AutoScaleClient.html#a652340a0267b6f56e5416b97b367c5fc',1,'pyrax::autoscale::AutoScaleClient.replace_webhook()']]],
['request',['request',['../classpyrax_1_1client_1_1BaseClient.html#a7ea72716d3813b3d175a880ff91eca73',1,'pyrax::client::BaseClient']]],
['request_5fid',['request_id',['../classpyrax_1_1exceptions_1_1ClientException.html#a24b613add05b03f7af1be9c4dab66d59',1,'pyrax::exceptions::ClientException']]],
['required_5ffield_5fnames',['required_field_names',['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorCheckType.html#a6962c65252e3f5bfc1c2084ce219cd68',1,'pyrax::cloudmonitoring::CloudMonitorCheckType']]],
diff --git a/docs/html/search/all_73.js b/docs/html/search/all_73.js
index a547ef1a..0ef84890 100644
--- a/docs/html/search/all_73.js
+++ b/docs/html/search/all_73.js
@@ -33,7 +33,7 @@ var searchData=
['set_5fenvironment',['set_environment',['../namespacepyrax.html#a083408c25f486a8aa2fab4a17a587f26',1,'pyrax']]],
['set_5ferror_5fpage',['set_error_page',['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancer.html#a9e8cf14c622fff01da294f2c2352fd92',1,'pyrax::cloudloadbalancers::CloudLoadBalancer.set_error_page()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerManager.html#a9e8cf14c622fff01da294f2c2352fd92',1,'pyrax::cloudloadbalancers::CloudLoadBalancerManager.set_error_page()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#a9e8cf14c622fff01da294f2c2352fd92',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.set_error_page()']]],
['set_5fhttp_5fdebug',['set_http_debug',['../namespacepyrax.html#a9d6ec1abac4bb1602676632e096ac945',1,'pyrax']]],
- ['set_5fmetadata',['set_metadata',['../classpyrax_1_1cf__wrapper_1_1container_1_1Container.html#a72a400370828e749dd0d63a622a95ee2',1,'pyrax::cf_wrapper::container::Container.set_metadata()'],['../classpyrax_1_1cf__wrapper_1_1storage__object_1_1StorageObject.html#a72a400370828e749dd0d63a622a95ee2',1,'pyrax::cf_wrapper::storage_object::StorageObject.set_metadata()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancer.html#a72a400370828e749dd0d63a622a95ee2',1,'pyrax::cloudloadbalancers::CloudLoadBalancer.set_metadata()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerManager.html#a72a400370828e749dd0d63a622a95ee2',1,'pyrax::cloudloadbalancers::CloudLoadBalancerManager.set_metadata()'],['../classpyrax_1_1cloudloadbalancers_1_1Node.html#a72a400370828e749dd0d63a622a95ee2',1,'pyrax::cloudloadbalancers::Node.set_metadata()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#a72a400370828e749dd0d63a622a95ee2',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.set_metadata()']]],
+ ['set_5fmetadata',['set_metadata',['../classpyrax_1_1cf__wrapper_1_1container_1_1Container.html#a72a400370828e749dd0d63a622a95ee2',1,'pyrax::cf_wrapper::container::Container.set_metadata()'],['../classpyrax_1_1cf__wrapper_1_1storage__object_1_1StorageObject.html#a72a400370828e749dd0d63a622a95ee2',1,'pyrax::cf_wrapper::storage_object::StorageObject.set_metadata()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancer.html#a72a400370828e749dd0d63a622a95ee2',1,'pyrax::cloudloadbalancers::CloudLoadBalancer.set_metadata()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerManager.html#a72a400370828e749dd0d63a622a95ee2',1,'pyrax::cloudloadbalancers::CloudLoadBalancerManager.set_metadata()'],['../classpyrax_1_1cloudloadbalancers_1_1Node.html#a72a400370828e749dd0d63a622a95ee2',1,'pyrax::cloudloadbalancers::Node.set_metadata()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#a72a400370828e749dd0d63a622a95ee2',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.set_metadata()'],['../classpyrax_1_1queueing_1_1QueueManager.html#a72a400370828e749dd0d63a622a95ee2',1,'pyrax::queueing::QueueManager.set_metadata()'],['../classpyrax_1_1queueing_1_1QueueClient.html#a72a400370828e749dd0d63a622a95ee2',1,'pyrax::queueing::QueueClient.set_metadata()']]],
['set_5fmetadata_5ffor_5fnode',['set_metadata_for_node',['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancer.html#a40aea8e89a0b3f7a146312305263dd8e',1,'pyrax::cloudloadbalancers::CloudLoadBalancer.set_metadata_for_node()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#a40aea8e89a0b3f7a146312305263dd8e',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.set_metadata_for_node()']]],
['set_5fobject_5fmetadata',['set_object_metadata',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#a684f0273a86b04ee46bf5a7304df768d',1,'pyrax::cf_wrapper::client::CFClient']]],
['set_5fsession_5fpersistence',['set_session_persistence',['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerManager.html#a407ba440587e1dc45a9c75ced3f6ea8c',1,'pyrax::cloudloadbalancers::CloudLoadBalancerManager.set_session_persistence()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#a407ba440587e1dc45a9c75ced3f6ea8c',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.set_session_persistence()']]],
diff --git a/docs/html/search/all_74.js b/docs/html/search/all_74.js
index 01256969..b89e53df 100644
--- a/docs/html/search/all_74.js
+++ b/docs/html/search/all_74.js
@@ -13,6 +13,6 @@ var searchData=
['token',['token',['../classpyrax_1_1base__identity_1_1BaseAuth.html#a623ef6987ef3bd185c07b28b13e46d34',1,'pyrax::base_identity::BaseAuth.token()'],['../classpyrax_1_1base__identity_1_1BaseAuth.html#a87da3d8264af1c9427605148f20dd9c4',1,'pyrax::base_identity::BaseAuth.token()']]],
['total_5fbytes',['total_bytes',['../classpyrax_1_1cf__wrapper_1_1container_1_1Container.html#a15d97ed7e27cf9263c3bc520f95e1d82',1,'pyrax::cf_wrapper::container::Container.total_bytes()'],['../classpyrax_1_1cf__wrapper_1_1storage__object_1_1StorageObject.html#a15d97ed7e27cf9263c3bc520f95e1d82',1,'pyrax::cf_wrapper::storage_object::StorageObject.total_bytes()']]],
['trace',['trace',['../namespacepyrax_1_1utils.html#a1ef4c4162762c60a00cf44b5969127c5',1,'pyrax::utils']]],
- ['ttl',['ttl',['../classpyrax_1_1cf__wrapper_1_1client_1_1FolderUploader.html#a24139992a63da93bef33b5c8e6adc8bf',1,'pyrax::cf_wrapper::client::FolderUploader.ttl()'],['../classpyrax_1_1clouddns_1_1CloudDNSRecord.html#a24139992a63da93bef33b5c8e6adc8bf',1,'pyrax::clouddns::CloudDNSRecord.ttl()'],['../classpyrax_1_1clouddns_1_1CloudDNSPTRRecord.html#a24139992a63da93bef33b5c8e6adc8bf',1,'pyrax::clouddns::CloudDNSPTRRecord.ttl()']]],
+ ['ttl',['ttl',['../classpyrax_1_1cf__wrapper_1_1client_1_1FolderUploader.html#a24139992a63da93bef33b5c8e6adc8bf',1,'pyrax::cf_wrapper::client::FolderUploader.ttl()'],['../classpyrax_1_1clouddns_1_1CloudDNSRecord.html#a24139992a63da93bef33b5c8e6adc8bf',1,'pyrax::clouddns::CloudDNSRecord.ttl()'],['../classpyrax_1_1clouddns_1_1CloudDNSPTRRecord.html#a24139992a63da93bef33b5c8e6adc8bf',1,'pyrax::clouddns::CloudDNSPTRRecord.ttl()'],['../classpyrax_1_1queueing_1_1QueueMessage.html#a24139992a63da93bef33b5c8e6adc8bf',1,'pyrax::queueing::QueueMessage.ttl()']]],
['type',['type',['../classpyrax_1_1clouddns_1_1CloudDNSRecord.html#a7aead736a07eaf25623ad7bfa1f0ee2d',1,'pyrax::clouddns::CloudDNSRecord.type()'],['../classpyrax_1_1clouddns_1_1CloudDNSPTRRecord.html#a7aead736a07eaf25623ad7bfa1f0ee2d',1,'pyrax::clouddns::CloudDNSPTRRecord::type()'],['../classpyrax_1_1cloudloadbalancers_1_1Node.html#a7aead736a07eaf25623ad7bfa1f0ee2d',1,'pyrax::cloudloadbalancers::Node.type()'],['../classpyrax_1_1cloudloadbalancers_1_1VirtualIP.html#a7aead736a07eaf25623ad7bfa1f0ee2d',1,'pyrax::cloudloadbalancers::VirtualIP::type()']]]
];
diff --git a/docs/html/search/all_75.js b/docs/html/search/all_75.js
index 33156dd0..9fa775a4 100644
--- a/docs/html/search/all_75.js
+++ b/docs/html/search/all_75.js
@@ -6,9 +6,10 @@ var searchData=
['unauthenticated',['unauthenticated',['../namespacepyrax_1_1utils.html#a662924ed2118b3ba66f1d1521a7c2b40',1,'pyrax::utils']]],
['unauthorized',['Unauthorized',['../classpyrax_1_1exceptions_1_1Unauthorized.html',1,'pyrax::exceptions']]],
['unicodepatherror',['UnicodePathError',['../classpyrax_1_1exceptions_1_1UnicodePathError.html',1,'pyrax::exceptions']]],
- ['update',['update',['../classpyrax_1_1autoscale_1_1ScalingGroup.html#abe52b977c101e59f342489ed18140819',1,'pyrax::autoscale::ScalingGroup.update()'],['../classpyrax_1_1autoscale_1_1ScalingGroupManager.html#abe52b977c101e59f342489ed18140819',1,'pyrax::autoscale::ScalingGroupManager.update()'],['../classpyrax_1_1autoscale_1_1AutoScalePolicy.html#abe52b977c101e59f342489ed18140819',1,'pyrax::autoscale::AutoScalePolicy.update()'],['../classpyrax_1_1autoscale_1_1AutoScaleWebhook.html#abe52b977c101e59f342489ed18140819',1,'pyrax::autoscale::AutoScaleWebhook.update()'],['../classpyrax_1_1autoscale_1_1AutoScaleClient.html#abe52b977c101e59f342489ed18140819',1,'pyrax::autoscale::AutoScaleClient.update()'],['../classpyrax_1_1clouddns_1_1CloudDNSRecord.html#abe52b977c101e59f342489ed18140819',1,'pyrax::clouddns::CloudDNSRecord.update()'],['../classpyrax_1_1clouddns_1_1CloudDNSDomain.html#abe52b977c101e59f342489ed18140819',1,'pyrax::clouddns::CloudDNSDomain.update()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancer.html#abe52b977c101e59f342489ed18140819',1,'pyrax::cloudloadbalancers::CloudLoadBalancer.update()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerManager.html#abe52b977c101e59f342489ed18140819',1,'pyrax::cloudloadbalancers::CloudLoadBalancerManager.update()'],['../classpyrax_1_1cloudloadbalancers_1_1Node.html#abe52b977c101e59f342489ed18140819',1,'pyrax::cloudloadbalancers::Node.update()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#abe52b977c101e59f342489ed18140819',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.update()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntity.html#abe52b977c101e59f342489ed18140819',1,'pyrax::cloudmonitoring::CloudMonitorEntity.update()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorCheck.html#abe52b977c101e59f342489ed18140819',1,'pyrax::cloudmonitoring::CloudMonitorCheck.update()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorNotification.html#abe52b977c101e59f342489e
d18140819',1,'pyrax::cloudmonitoring::CloudMonitorNotification.update()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorAlarm.html#abe52b977c101e59f342489ed18140819',1,'pyrax::cloudmonitoring::CloudMonitorAlarm.update()']]],
+ ['update',['update',['../classpyrax_1_1autoscale_1_1ScalingGroup.html#abe52b977c101e59f342489ed18140819',1,'pyrax::autoscale::ScalingGroup.update()'],['../classpyrax_1_1autoscale_1_1ScalingGroupManager.html#abe52b977c101e59f342489ed18140819',1,'pyrax::autoscale::ScalingGroupManager.update()'],['../classpyrax_1_1autoscale_1_1AutoScalePolicy.html#abe52b977c101e59f342489ed18140819',1,'pyrax::autoscale::AutoScalePolicy.update()'],['../classpyrax_1_1autoscale_1_1AutoScaleWebhook.html#abe52b977c101e59f342489ed18140819',1,'pyrax::autoscale::AutoScaleWebhook.update()'],['../classpyrax_1_1autoscale_1_1AutoScaleClient.html#abe52b977c101e59f342489ed18140819',1,'pyrax::autoscale::AutoScaleClient.update()'],['../classpyrax_1_1clouddns_1_1CloudDNSRecord.html#abe52b977c101e59f342489ed18140819',1,'pyrax::clouddns::CloudDNSRecord.update()'],['../classpyrax_1_1clouddns_1_1CloudDNSDomain.html#abe52b977c101e59f342489ed18140819',1,'pyrax::clouddns::CloudDNSDomain.update()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancer.html#abe52b977c101e59f342489ed18140819',1,'pyrax::cloudloadbalancers::CloudLoadBalancer.update()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerManager.html#abe52b977c101e59f342489ed18140819',1,'pyrax::cloudloadbalancers::CloudLoadBalancerManager.update()'],['../classpyrax_1_1cloudloadbalancers_1_1Node.html#abe52b977c101e59f342489ed18140819',1,'pyrax::cloudloadbalancers::Node.update()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#abe52b977c101e59f342489ed18140819',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.update()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntity.html#abe52b977c101e59f342489ed18140819',1,'pyrax::cloudmonitoring::CloudMonitorEntity.update()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorCheck.html#abe52b977c101e59f342489ed18140819',1,'pyrax::cloudmonitoring::CloudMonitorCheck.update()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorNotification.html#abe52b977c101e59f342489e
d18140819',1,'pyrax::cloudmonitoring::CloudMonitorNotification.update()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorAlarm.html#abe52b977c101e59f342489ed18140819',1,'pyrax::cloudmonitoring::CloudMonitorAlarm.update()'],['../classpyrax_1_1queueing_1_1QueueClaimManager.html#abe52b977c101e59f342489ed18140819',1,'pyrax::queueing::QueueClaimManager.update()']]],
['update_5falarm',['update_alarm',['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntity.html#a5890867a408848b989089ddf26d20a18',1,'pyrax::cloudmonitoring::CloudMonitorEntity.update_alarm()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntityManager.html#a5890867a408848b989089ddf26d20a18',1,'pyrax::cloudmonitoring::CloudMonitorEntityManager.update_alarm()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#a5890867a408848b989089ddf26d20a18',1,'pyrax::cloudmonitoring::CloudMonitorClient.update_alarm()']]],
['update_5fcheck',['update_check',['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntityManager.html#a31db8ea7ae986d187cad68055d066a20',1,'pyrax::cloudmonitoring::CloudMonitorEntityManager.update_check()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#a31db8ea7ae986d187cad68055d066a20',1,'pyrax::cloudmonitoring::CloudMonitorClient.update_check()']]],
+ ['update_5fclaim',['update_claim',['../classpyrax_1_1queueing_1_1Queue.html#ac20bf3b335c30af755690d5d618b5d1e',1,'pyrax::queueing::Queue.update_claim()'],['../classpyrax_1_1queueing_1_1QueueClient.html#ac20bf3b335c30af755690d5d618b5d1e',1,'pyrax::queueing::QueueClient.update_claim()']]],
['update_5fdomain',['update_domain',['../classpyrax_1_1clouddns_1_1CloudDNSManager.html#a8ee67d6346234099ee06b44af2cb229b',1,'pyrax::clouddns::CloudDNSManager.update_domain()'],['../classpyrax_1_1clouddns_1_1CloudDNSClient.html#a8ee67d6346234099ee06b44af2cb229b',1,'pyrax::clouddns::CloudDNSClient.update_domain()']]],
['update_5fentity',['update_entity',['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntityManager.html#a6677002dbfa8807383168cfaa74670c8',1,'pyrax::cloudmonitoring::CloudMonitorEntityManager.update_entity()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#a6677002dbfa8807383168cfaa74670c8',1,'pyrax::cloudmonitoring::CloudMonitorClient.update_entity()']]],
['update_5fexc',['update_exc',['../namespacepyrax_1_1utils.html#aa76e326dca328bf4b12f55a0b94be973',1,'pyrax::utils']]],
diff --git a/docs/html/search/classes_62.js b/docs/html/search/classes_62.js
index bf7c657a..ae027290 100644
--- a/docs/html/search/classes_62.js
+++ b/docs/html/search/classes_62.js
@@ -4,6 +4,7 @@ var searchData=
['baseauth',['BaseAuth',['../classpyrax_1_1base__identity_1_1BaseAuth.html',1,'pyrax::base_identity']]],
['baseclient',['BaseClient',['../classpyrax_1_1client_1_1BaseClient.html',1,'pyrax::client']]],
['basemanager',['BaseManager',['../classpyrax_1_1manager_1_1BaseManager.html',1,'pyrax::manager']]],
+ ['basequeuemanager',['BaseQueueManager',['../classpyrax_1_1queueing_1_1BaseQueueManager.html',1,'pyrax::queueing']]],
['baseresource',['BaseResource',['../classpyrax_1_1resource_1_1BaseResource.html',1,'pyrax::resource']]],
['bulkdeleter',['BulkDeleter',['../classpyrax_1_1cf__wrapper_1_1client_1_1BulkDeleter.html',1,'pyrax::cf_wrapper::client']]]
];
diff --git a/docs/html/search/classes_64.js b/docs/html/search/classes_64.js
index 545f2d09..18b327ca 100644
--- a/docs/html/search/classes_64.js
+++ b/docs/html/search/classes_64.js
@@ -10,5 +10,6 @@ var searchData=
['domainrecordupdatefailed',['DomainRecordUpdateFailed',['../classpyrax_1_1exceptions_1_1DomainRecordUpdateFailed.html',1,'pyrax::exceptions']]],
['domainresultsiterator',['DomainResultsIterator',['../classpyrax_1_1clouddns_1_1DomainResultsIterator.html',1,'pyrax::clouddns']]],
['domainupdatefailed',['DomainUpdateFailed',['../classpyrax_1_1exceptions_1_1DomainUpdateFailed.html',1,'pyrax::exceptions']]],
+ ['duplicatequeue',['DuplicateQueue',['../classpyrax_1_1exceptions_1_1DuplicateQueue.html',1,'pyrax::exceptions']]],
['duplicateuser',['DuplicateUser',['../classpyrax_1_1exceptions_1_1DuplicateUser.html',1,'pyrax::exceptions']]]
];
diff --git a/docs/html/search/classes_69.js b/docs/html/search/classes_69.js
index e69f7403..a6ed8bcf 100644
--- a/docs/html/search/classes_69.js
+++ b/docs/html/search/classes_69.js
@@ -16,6 +16,7 @@ var searchData=
['invalidnodecondition',['InvalidNodeCondition',['../classpyrax_1_1exceptions_1_1InvalidNodeCondition.html',1,'pyrax::exceptions']]],
['invalidnodeparameters',['InvalidNodeParameters',['../classpyrax_1_1exceptions_1_1InvalidNodeParameters.html',1,'pyrax::exceptions']]],
['invalidptrrecord',['InvalidPTRRecord',['../classpyrax_1_1exceptions_1_1InvalidPTRRecord.html',1,'pyrax::exceptions']]],
+ ['invalidqueuename',['InvalidQueueName',['../classpyrax_1_1exceptions_1_1InvalidQueueName.html',1,'pyrax::exceptions']]],
['invalidsessionpersistencetype',['InvalidSessionPersistenceType',['../classpyrax_1_1exceptions_1_1InvalidSessionPersistenceType.html',1,'pyrax::exceptions']]],
['invalidsetting',['InvalidSetting',['../classpyrax_1_1exceptions_1_1InvalidSetting.html',1,'pyrax::exceptions']]],
['invalidsize',['InvalidSize',['../classpyrax_1_1exceptions_1_1InvalidSize.html',1,'pyrax::exceptions']]],
diff --git a/docs/html/search/classes_6d.js b/docs/html/search/classes_6d.js
index 409bda76..76adec4d 100644
--- a/docs/html/search/classes_6d.js
+++ b/docs/html/search/classes_6d.js
@@ -1,6 +1,7 @@
var searchData=
[
['missingauthsettings',['MissingAuthSettings',['../classpyrax_1_1exceptions_1_1MissingAuthSettings.html',1,'pyrax::exceptions']]],
+ ['missingclaimparameters',['MissingClaimParameters',['../classpyrax_1_1exceptions_1_1MissingClaimParameters.html',1,'pyrax::exceptions']]],
['missingdnssettings',['MissingDNSSettings',['../classpyrax_1_1exceptions_1_1MissingDNSSettings.html',1,'pyrax::exceptions']]],
['missinghealthmonitorsettings',['MissingHealthMonitorSettings',['../classpyrax_1_1exceptions_1_1MissingHealthMonitorSettings.html',1,'pyrax::exceptions']]],
['missingloadbalancerparameters',['MissingLoadBalancerParameters',['../classpyrax_1_1exceptions_1_1MissingLoadBalancerParameters.html',1,'pyrax::exceptions']]],
diff --git a/docs/html/search/classes_71.html b/docs/html/search/classes_71.html
new file mode 100644
index 00000000..80a4fbb8
--- /dev/null
+++ b/docs/html/search/classes_71.html
@@ -0,0 +1,25 @@
+
+
+
+
+
+
+
+
+
+
+Loading...
+
+
+
+Searching...
+
+No Matches
+
+
+
+
diff --git a/docs/html/search/classes_71.js b/docs/html/search/classes_71.js
new file mode 100644
index 00000000..ddbc3b4b
--- /dev/null
+++ b/docs/html/search/classes_71.js
@@ -0,0 +1,11 @@
+var searchData=
+[
+ ['queue',['Queue',['../classpyrax_1_1queueing_1_1Queue.html',1,'pyrax::queueing']]],
+ ['queueclaim',['QueueClaim',['../classpyrax_1_1queueing_1_1QueueClaim.html',1,'pyrax::queueing']]],
+ ['queueclaimmanager',['QueueClaimManager',['../classpyrax_1_1queueing_1_1QueueClaimManager.html',1,'pyrax::queueing']]],
+ ['queueclient',['QueueClient',['../classpyrax_1_1queueing_1_1QueueClient.html',1,'pyrax::queueing']]],
+ ['queueclientidnotdefined',['QueueClientIDNotDefined',['../classpyrax_1_1exceptions_1_1QueueClientIDNotDefined.html',1,'pyrax::exceptions']]],
+ ['queuemanager',['QueueManager',['../classpyrax_1_1queueing_1_1QueueManager.html',1,'pyrax::queueing']]],
+ ['queuemessage',['QueueMessage',['../classpyrax_1_1queueing_1_1QueueMessage.html',1,'pyrax::queueing']]],
+ ['queuemessagemanager',['QueueMessageManager',['../classpyrax_1_1queueing_1_1QueueMessageManager.html',1,'pyrax::queueing']]]
+];
diff --git a/docs/html/search/files_71.html b/docs/html/search/files_71.html
new file mode 100644
index 00000000..bb4ccc7c
--- /dev/null
+++ b/docs/html/search/files_71.html
@@ -0,0 +1,25 @@
+
+
+
+
+
+
+
+
+
+
+Loading...
+
+
+
+Searching...
+
+No Matches
+
+
+
+
diff --git a/docs/html/search/files_71.js b/docs/html/search/files_71.js
new file mode 100644
index 00000000..a4558aec
--- /dev/null
+++ b/docs/html/search/files_71.js
@@ -0,0 +1,4 @@
+var searchData=
+[
+ ['queueing_2epy',['queueing.py',['../queueing_8py.html',1,'']]]
+];
diff --git a/docs/html/search/functions_5f.js b/docs/html/search/functions_5f.js
index 11355cfe..12e2a40f 100644
--- a/docs/html/search/functions_5f.js
+++ b/docs/html/search/functions_5f.js
@@ -4,7 +4,7 @@ var searchData=
['_5f_5feq_5f_5f',['__eq__',['../classpyrax_1_1cloudloadbalancers_1_1Node.html#a449f8fd74d358c0ad641b6c6d6917ba0',1,'pyrax::cloudloadbalancers::Node.__eq__()'],['../classpyrax_1_1resource_1_1BaseResource.html#a449f8fd74d358c0ad641b6c6d6917ba0',1,'pyrax::resource::BaseResource.__eq__()']]],
['_5f_5fexit_5f_5f',['__exit__',['../classpyrax_1_1utils_1_1SelfDeletingTempfile.html#a6de07022804200d0fb6383c0a237ee8e',1,'pyrax::utils::SelfDeletingTempfile.__exit__()'],['../classpyrax_1_1utils_1_1SelfDeletingTempDirectory.html#a6de07022804200d0fb6383c0a237ee8e',1,'pyrax::utils::SelfDeletingTempDirectory.__exit__()']]],
['_5f_5fgetattr_5f_5f',['__getattr__',['../classpyrax_1_1resource_1_1BaseResource.html#a0a990b3ec3889d40889daca9ee5e4695',1,'pyrax::resource::BaseResource']]],
- ['_5f_5finit_5f_5f',['__init__',['../classpyrax_1_1autoscale_1_1ScalingGroup.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::autoscale::ScalingGroup.__init__()'],['../classpyrax_1_1autoscale_1_1ScalingGroupManager.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::autoscale::ScalingGroupManager.__init__()'],['../classpyrax_1_1autoscale_1_1AutoScalePolicy.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::autoscale::AutoScalePolicy.__init__()'],['../classpyrax_1_1autoscale_1_1AutoScaleWebhook.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::autoscale::AutoScaleWebhook.__init__()'],['../classpyrax_1_1base__identity_1_1BaseAuth.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::base_identity::BaseAuth.__init__()'],['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cf_wrapper::client::CFClient.__init__()'],['../classpyrax_1_1cf__wrapper_1_1client_1_1Connection.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cf_wrapper::client::Connection.__init__()'],['../classpyrax_1_1cf__wrapper_1_1client_1_1FolderUploader.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cf_wrapper::client::FolderUploader.__init__()'],['../classpyrax_1_1cf__wrapper_1_1client_1_1BulkDeleter.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cf_wrapper::client::BulkDeleter.__init__()'],['../classpyrax_1_1cf__wrapper_1_1container_1_1Container.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cf_wrapper::container::Container.__init__()'],['../classpyrax_1_1cf__wrapper_1_1storage__object_1_1StorageObject.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cf_wrapper::storage_object::StorageObject.__init__()'],['../classpyrax_1_1client_1_1BaseClient.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::client::BaseClient.__init__()'],['../classpyrax_1_1cloudblockstorage_1_1CloudBlockStorageVolume.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cloudblockstorage::CloudBlockStorageVolume.__init__()'],['../classpyrax_1_1clouddatabases_1_1CloudDatabaseVolume.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::clouddatabases::CloudDatabaseVolume.__init__()'],['../classpyrax_1_1clouddatabases_1_1CloudDatabaseInstance.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::clouddatabases::CloudDatabaseInstance.__init__()'],['../classpyrax_1_1clouddns_1_1CloudDNSPTRRecord.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::clouddns::CloudDNSPTRRecord.__init__()'],['../classpyrax_1_1clouddns_1_1CloudDNSManager.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::clouddns::CloudDNSManager.__init__()'],['../classpyrax_1_1clouddns_1_1ResultsIterator.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::clouddns::ResultsIterator.__init__()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancer.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cloudloadbalancers::CloudLoadBalancer.__init__()'],['../classpyrax_1_1cloudloadbalancers_1_1Node.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cloudloadbalancers::Node.__init__()'],['../classpyrax_1_1cloudloadbalancers_1_1VirtualIP.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cloudloadbalancers::VirtualIP.__init__()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.__init__()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorCheck.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cloudmonitoring::CloudMonitorCheck.__init__()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorAlarm.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cloudmonitoring::CloudMonitorAlarm.__init__()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cloudmonitoring::CloudMonitorClient.__init__()'],['../classpyrax_1_1cloudnetworks_1_1CloudNetworkClient.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cloudnetworks::CloudNetworkClient.__init__()'],['../classpyrax_1_1exceptions_1_1AmbiguousEndpoints.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::exceptions::AmbiguousEndpoints.__init__()'],['../classpyrax_1_1exceptions_1_1ClientException.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::exceptions::ClientException.__init__()'],['../classpyrax_1_1manager_1_1BaseManager.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::manager::BaseManager.__init__()'],['../classpyrax_1_1resource_1_1BaseResource.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::resource::BaseResource.__init__()'],['../classpyrax_1_1service__catalog_1_1ServiceCatalog.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::service_catalog::ServiceCatalog.__init__()'],['../classpyrax_1_1utils_1_1__WaitThread.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::utils::_WaitThread.__init__()']]],
+ ['_5f_5finit_5f_5f',['__init__',['../classpyrax_1_1autoscale_1_1ScalingGroup.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::autoscale::ScalingGroup.__init__()'],['../classpyrax_1_1autoscale_1_1ScalingGroupManager.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::autoscale::ScalingGroupManager.__init__()'],['../classpyrax_1_1autoscale_1_1AutoScalePolicy.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::autoscale::AutoScalePolicy.__init__()'],['../classpyrax_1_1autoscale_1_1AutoScaleWebhook.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::autoscale::AutoScaleWebhook.__init__()'],['../classpyrax_1_1base__identity_1_1BaseAuth.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::base_identity::BaseAuth.__init__()'],['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cf_wrapper::client::CFClient.__init__()'],['../classpyrax_1_1cf__wrapper_1_1client_1_1Connection.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cf_wrapper::client::Connection.__init__()'],['../classpyrax_1_1cf__wrapper_1_1client_1_1FolderUploader.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cf_wrapper::client::FolderUploader.__init__()'],['../classpyrax_1_1cf__wrapper_1_1client_1_1BulkDeleter.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cf_wrapper::client::BulkDeleter.__init__()'],['../classpyrax_1_1cf__wrapper_1_1container_1_1Container.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cf_wrapper::container::Container.__init__()'],['../classpyrax_1_1cf__wrapper_1_1storage__object_1_1StorageObject.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cf_wrapper::storage_object::StorageObject.__init__()'],['../classpyrax_1_1client_1_1BaseClient.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::client::BaseClient.__init__()'],['../classpyrax_1_1cloudblockstorage_1_1CloudBlockStorageVolume.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cloudblockstorage::CloudBlockStorageVolume.__init__()'],['../classpyrax_1_1clouddatabases_1_1CloudDatabaseVolume.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::clouddatabases::CloudDatabaseVolume.__init__()'],['../classpyrax_1_1clouddatabases_1_1CloudDatabaseInstance.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::clouddatabases::CloudDatabaseInstance.__init__()'],['../classpyrax_1_1clouddns_1_1CloudDNSPTRRecord.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::clouddns::CloudDNSPTRRecord.__init__()'],['../classpyrax_1_1clouddns_1_1CloudDNSManager.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::clouddns::CloudDNSManager.__init__()'],['../classpyrax_1_1clouddns_1_1ResultsIterator.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::clouddns::ResultsIterator.__init__()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancer.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cloudloadbalancers::CloudLoadBalancer.__init__()'],['../classpyrax_1_1cloudloadbalancers_1_1Node.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cloudloadbalancers::Node.__init__()'],['../classpyrax_1_1cloudloadbalancers_1_1VirtualIP.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cloudloadbalancers::VirtualIP.__init__()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.__init__()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorCheck.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cloudmonitoring::CloudMonitorCheck.__init__()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorAlarm.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cloudmonitoring::CloudMonitorAlarm.__init__()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cloudmonitoring::CloudMonitorClient.__init__()'],['../classpyrax_1_1cloudnetworks_1_1CloudNetworkClient.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::cloudnetworks::CloudNetworkClient.__init__()'],['../classpyrax_1_1exceptions_1_1AmbiguousEndpoints.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::exceptions::AmbiguousEndpoints.__init__()'],['../classpyrax_1_1exceptions_1_1ClientException.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::exceptions::ClientException.__init__()'],['../classpyrax_1_1manager_1_1BaseManager.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::manager::BaseManager.__init__()'],['../classpyrax_1_1queueing_1_1Queue.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::queueing::Queue.__init__()'],['../classpyrax_1_1resource_1_1BaseResource.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::resource::BaseResource.__init__()'],['../classpyrax_1_1service__catalog_1_1ServiceCatalog.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::service_catalog::ServiceCatalog.__init__()'],['../classpyrax_1_1utils_1_1__WaitThread.html#ac775ee34451fdfa742b318538164070e',1,'pyrax::utils::_WaitThread.__init__()']]],
['_5f_5fiter_5f_5f',['__iter__',['../classpyrax_1_1clouddns_1_1ResultsIterator.html#a3009f152864dea4eb5e89cd94143d563',1,'pyrax::clouddns::ResultsIterator']]],
['_5f_5fne_5f_5f',['__ne__',['../classpyrax_1_1cloudloadbalancers_1_1Node.html#ad69df72a6bf0be3525fe45cd2f77f343',1,'pyrax::cloudloadbalancers::Node']]],
['_5f_5fnonzero_5f_5f',['__nonzero__',['../classpyrax_1_1cf__wrapper_1_1container_1_1Fault.html#a14f4a7f4cbfde7254adc73da3b2de9a5',1,'pyrax::cf_wrapper::container::Fault']]],
diff --git a/docs/html/search/functions_61.js b/docs/html/search/functions_61.js
index f118dc9e..7d451cdc 100644
--- a/docs/html/search/functions_61.js
+++ b/docs/html/search/functions_61.js
@@ -20,6 +20,7 @@ var searchData=
['assure_5finstance',['assure_instance',['../namespacepyrax_1_1clouddatabases.html#a8f8b217553a66a79f0135899e14964b4',1,'pyrax::clouddatabases']]],
['assure_5floadbalancer',['assure_loadbalancer',['../namespacepyrax_1_1cloudloadbalancers.html#ac1fa7d34dba3d7cf4a26fdab6afe2d22',1,'pyrax::cloudloadbalancers']]],
['assure_5fparent',['assure_parent',['../namespacepyrax_1_1cloudloadbalancers.html#ae8088c141e57bdabed8a13866eed8cdf',1,'pyrax::cloudloadbalancers']]],
+ ['assure_5fqueue',['assure_queue',['../namespacepyrax_1_1queueing.html#acc3d3bba9230fe3dd67e9f470b0b445a',1,'pyrax::queueing']]],
['assure_5fsnapshot',['assure_snapshot',['../namespacepyrax_1_1cloudblockstorage.html#a8a930b1066981115404f992adb243aa2',1,'pyrax::cloudblockstorage']]],
['assure_5fvolume',['assure_volume',['../namespacepyrax_1_1cloudblockstorage.html#a68a6bb754146fc1cdfe07a97456d71e9',1,'pyrax::cloudblockstorage']]],
['attach_5fto_5finstance',['attach_to_instance',['../classpyrax_1_1cloudblockstorage_1_1CloudBlockStorageVolume.html#a7a946d1fbbaba8aa5fbad457498e2db4',1,'pyrax::cloudblockstorage::CloudBlockStorageVolume.attach_to_instance()'],['../classpyrax_1_1cloudblockstorage_1_1CloudBlockStorageClient.html#a7a946d1fbbaba8aa5fbad457498e2db4',1,'pyrax::cloudblockstorage::CloudBlockStorageClient.attach_to_instance()']]],
diff --git a/docs/html/search/functions_63.js b/docs/html/search/functions_63.js
index d6af59fe..43367bb3 100644
--- a/docs/html/search/functions_63.js
+++ b/docs/html/search/functions_63.js
@@ -1,6 +1,7 @@
var searchData=
[
['cancel_5ffolder_5fupload',['cancel_folder_upload',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#a621a0e61fdb17290b5e9f5dec5eb12fc',1,'pyrax::cf_wrapper::client::CFClient']]],
+ ['case_5finsensitive_5fupdate',['case_insensitive_update',['../namespacepyrax_1_1utils.html#af3249b7bd46bd8a85fda01ce6a90304e',1,'pyrax::utils']]],
['cdn_5fenabled',['cdn_enabled',['../classpyrax_1_1cf__wrapper_1_1container_1_1Container.html#a2335e6e10f37188b2c023f9c5575c299',1,'pyrax::cf_wrapper::container::Container']]],
['cdn_5frequest',['cdn_request',['../classpyrax_1_1cf__wrapper_1_1client_1_1Connection.html#ad1c79a1e8a4bbf3b8286ca87c62e6da5',1,'pyrax::cf_wrapper::client::Connection']]],
['change_5fcontent_5ftype',['change_content_type',['../classpyrax_1_1cf__wrapper_1_1storage__object_1_1StorageObject.html#a9ff9b500eb0d5020f41595749b3b636f',1,'pyrax::cf_wrapper::storage_object::StorageObject']]],
@@ -9,6 +10,8 @@ var searchData=
['change_5fuser_5fpassword',['change_user_password',['../classpyrax_1_1clouddatabases_1_1CloudDatabaseUserManager.html#a3d7d0bcb779f81d9803cb5afffed69a3',1,'pyrax::clouddatabases::CloudDatabaseUserManager.change_user_password()'],['../classpyrax_1_1clouddatabases_1_1CloudDatabaseInstance.html#a3d7d0bcb779f81d9803cb5afffed69a3',1,'pyrax::clouddatabases::CloudDatabaseInstance.change_user_password()'],['../classpyrax_1_1clouddatabases_1_1CloudDatabaseClient.html#a3d7d0bcb779f81d9803cb5afffed69a3',1,'pyrax::clouddatabases::CloudDatabaseClient.change_user_password()']]],
['changes_5fsince',['changes_since',['../classpyrax_1_1clouddns_1_1CloudDNSDomain.html#ac31a3a61b9555672f6d51ae74ff22999',1,'pyrax::clouddns::CloudDNSDomain.changes_since()'],['../classpyrax_1_1clouddns_1_1CloudDNSManager.html#ac31a3a61b9555672f6d51ae74ff22999',1,'pyrax::clouddns::CloudDNSManager.changes_since()'],['../classpyrax_1_1clouddns_1_1CloudDNSClient.html#ac31a3a61b9555672f6d51ae74ff22999',1,'pyrax::clouddns::CloudDNSClient.changes_since()']]],
['check_5ftoken',['check_token',['../classpyrax_1_1base__identity_1_1BaseAuth.html#a89a6ef6961d772a79cba6292b7edf926',1,'pyrax::base_identity::BaseAuth']]],
+ ['claim',['claim',['../classpyrax_1_1queueing_1_1QueueClaimManager.html#a3135f2c41e736cc621dd7943e9d34189',1,'pyrax::queueing::QueueClaimManager']]],
+ ['claim_5fmessages',['claim_messages',['../classpyrax_1_1queueing_1_1Queue.html#a4e661325a97751869d0b9b024bb73d20',1,'pyrax::queueing::Queue.claim_messages()'],['../classpyrax_1_1queueing_1_1QueueClient.html#a4e661325a97751869d0b9b024bb73d20',1,'pyrax::queueing::QueueClient.claim_messages()']]],
['clear_5fcredentials',['clear_credentials',['../namespacepyrax.html#ac84933adaea04f7479d32c6a5cf6e028',1,'pyrax']]],
['clear_5ferror_5fpage',['clear_error_page',['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancer.html#aff0ccd9424518b3db68c4fbc3c99a710',1,'pyrax::cloudloadbalancers::CloudLoadBalancer.clear_error_page()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerManager.html#aff0ccd9424518b3db68c4fbc3c99a710',1,'pyrax::cloudloadbalancers::CloudLoadBalancerManager.clear_error_page()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#aff0ccd9424518b3db68c4fbc3c99a710',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.clear_error_page()']]],
['coerce_5fstring_5fto_5flist',['coerce_string_to_list',['../namespacepyrax_1_1utils.html#a399c6b9652f5150282c3edb32d4d05f3',1,'pyrax::utils']]],
@@ -21,11 +24,12 @@ var searchData=
['connect_5fto_5fcloud_5fnetworks',['connect_to_cloud_networks',['../namespacepyrax.html#af30f9f18e048c8f0e677808a1028d29a',1,'pyrax']]],
['connect_5fto_5fcloudfiles',['connect_to_cloudfiles',['../namespacepyrax.html#a34593b67ad113f95973c1c7a6546fa68',1,'pyrax']]],
['connect_5fto_5fcloudservers',['connect_to_cloudservers',['../namespacepyrax.html#a93dcb702dfed414cb32073e78fdff831',1,'pyrax']]],
+ ['connect_5fto_5fqueues',['connect_to_queues',['../namespacepyrax.html#ac8d659180e8fac04349063827198b196',1,'pyrax']]],
['connect_5fto_5fservices',['connect_to_services',['../namespacepyrax.html#a708483dfb93616381fb0ec9338ab5528',1,'pyrax']]],
['cooldown',['cooldown',['../classpyrax_1_1autoscale_1_1ScalingGroup.html#a2268f26f552b5fffb9a0cb5fca09048b',1,'pyrax::autoscale::ScalingGroup.cooldown'],['../classpyrax_1_1autoscale_1_1ScalingGroup.html#a2268f26f552b5fffb9a0cb5fca09048b',1,'pyrax::autoscale::ScalingGroup.cooldown']]],
['copy',['copy',['../classpyrax_1_1cf__wrapper_1_1storage__object_1_1StorageObject.html#a2fa43c22b5f7af93ba8b4a56871f006a',1,'pyrax::cf_wrapper::storage_object::StorageObject']]],
['copy_5fobject',['copy_object',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#a7d45157fae0af819d907da6b00fa2378',1,'pyrax::cf_wrapper::client::CFClient.copy_object()'],['../classpyrax_1_1cf__wrapper_1_1container_1_1Container.html#a7d45157fae0af819d907da6b00fa2378',1,'pyrax::cf_wrapper::container::Container.copy_object()']]],
- ['create',['create',['../classpyrax_1_1client_1_1BaseClient.html#a5b7ef0221e471e99fa7f0a87a28ba1ea',1,'pyrax::client::BaseClient.create()'],['../classpyrax_1_1cloudblockstorage_1_1CloudBlockStorageSnapshotManager.html#a5b7ef0221e471e99fa7f0a87a28ba1ea',1,'pyrax::cloudblockstorage::CloudBlockStorageSnapshotManager.create()'],['../classpyrax_1_1cloudblockstorage_1_1CloudBlockStorageClient.html#a5b7ef0221e471e99fa7f0a87a28ba1ea',1,'pyrax::cloudblockstorage::CloudBlockStorageClient.create()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorNotificationManager.html#a5b7ef0221e471e99fa7f0a87a28ba1ea',1,'pyrax::cloudmonitoring::CloudMonitorNotificationManager.create()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorNotificationPlanManager.html#a5b7ef0221e471e99fa7f0a87a28ba1ea',1,'pyrax::cloudmonitoring::CloudMonitorNotificationPlanManager.create()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#a5b7ef0221e471e99fa7f0a87a28ba1ea',1,'pyrax::cloudmonitoring::CloudMonitorClient.create()'],['../classpyrax_1_1cloudnetworks_1_1CloudNetworkClient.html#a5b7ef0221e471e99fa7f0a87a28ba1ea',1,'pyrax::cloudnetworks::CloudNetworkClient.create()'],['../classpyrax_1_1manager_1_1BaseManager.html#a5b7ef0221e471e99fa7f0a87a28ba1ea',1,'pyrax::manager::BaseManager.create()']]],
+ ['create',['create',['../classpyrax_1_1client_1_1BaseClient.html#a5b7ef0221e471e99fa7f0a87a28ba1ea',1,'pyrax::client::BaseClient.create()'],['../classpyrax_1_1cloudblockstorage_1_1CloudBlockStorageSnapshotManager.html#a5b7ef0221e471e99fa7f0a87a28ba1ea',1,'pyrax::cloudblockstorage::CloudBlockStorageSnapshotManager.create()'],['../classpyrax_1_1cloudblockstorage_1_1CloudBlockStorageClient.html#a5b7ef0221e471e99fa7f0a87a28ba1ea',1,'pyrax::cloudblockstorage::CloudBlockStorageClient.create()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorNotificationManager.html#a5b7ef0221e471e99fa7f0a87a28ba1ea',1,'pyrax::cloudmonitoring::CloudMonitorNotificationManager.create()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorNotificationPlanManager.html#a5b7ef0221e471e99fa7f0a87a28ba1ea',1,'pyrax::cloudmonitoring::CloudMonitorNotificationPlanManager.create()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#a5b7ef0221e471e99fa7f0a87a28ba1ea',1,'pyrax::cloudmonitoring::CloudMonitorClient.create()'],['../classpyrax_1_1cloudnetworks_1_1CloudNetworkClient.html#a5b7ef0221e471e99fa7f0a87a28ba1ea',1,'pyrax::cloudnetworks::CloudNetworkClient.create()'],['../classpyrax_1_1manager_1_1BaseManager.html#a5b7ef0221e471e99fa7f0a87a28ba1ea',1,'pyrax::manager::BaseManager.create()'],['../classpyrax_1_1queueing_1_1QueueManager.html#a5b7ef0221e471e99fa7f0a87a28ba1ea',1,'pyrax::queueing::QueueManager.create()'],['../classpyrax_1_1queueing_1_1QueueClient.html#a5b7ef0221e471e99fa7f0a87a28ba1ea',1,'pyrax::queueing::QueueClient.create()']]],
['create_5falarm',['create_alarm',['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntity.html#a5e3e83b1715922337eb99e95373166e6',1,'pyrax::cloudmonitoring::CloudMonitorEntity.create_alarm()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntityManager.html#a5e3e83b1715922337eb99e95373166e6',1,'pyrax::cloudmonitoring::CloudMonitorEntityManager.create_alarm()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorCheck.html#a5e3e83b1715922337eb99e95373166e6',1,'pyrax::cloudmonitoring::CloudMonitorCheck.create_alarm()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#a5e3e83b1715922337eb99e95373166e6',1,'pyrax::cloudmonitoring::CloudMonitorClient.create_alarm()']]],
['create_5fcheck',['create_check',['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntityManager.html#a6fc873d6c66c1173dc63793fd2cc72d6',1,'pyrax::cloudmonitoring::CloudMonitorEntityManager.create_check()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#a6fc873d6c66c1173dc63793fd2cc72d6',1,'pyrax::cloudmonitoring::CloudMonitorClient.create_check()']]],
['create_5fcontainer',['create_container',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#a31b9196903b253dd2cd99dc4d7a0774e',1,'pyrax::cf_wrapper::client::CFClient']]],
diff --git a/docs/html/search/functions_64.js b/docs/html/search/functions_64.js
index ebd12278..597d5f13 100644
--- a/docs/html/search/functions_64.js
+++ b/docs/html/search/functions_64.js
@@ -6,6 +6,7 @@ var searchData=
['delete_5falarm',['delete_alarm',['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntity.html#aa6f9a8547a22941467c661e1c3b83178',1,'pyrax::cloudmonitoring::CloudMonitorEntity.delete_alarm()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntityManager.html#aa6f9a8547a22941467c661e1c3b83178',1,'pyrax::cloudmonitoring::CloudMonitorEntityManager.delete_alarm()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#aa6f9a8547a22941467c661e1c3b83178',1,'pyrax::cloudmonitoring::CloudMonitorClient.delete_alarm()']]],
['delete_5fall_5fobjects',['delete_all_objects',['../classpyrax_1_1cf__wrapper_1_1container_1_1Container.html#a29c7d5f06a4fdc49b064a4352e8c8763',1,'pyrax::cf_wrapper::container::Container']]],
['delete_5fall_5fsnapshots',['delete_all_snapshots',['../classpyrax_1_1cloudblockstorage_1_1CloudBlockStorageVolume.html#aec8a47767c07e19ef3ad9f6b974240bb',1,'pyrax::cloudblockstorage::CloudBlockStorageVolume']]],
+ ['delete_5fby_5fids',['delete_by_ids',['../classpyrax_1_1queueing_1_1Queue.html#ab3f3c42b68d285259934117c1a8f65e5',1,'pyrax::queueing::Queue.delete_by_ids()'],['../classpyrax_1_1queueing_1_1QueueMessageManager.html#ab3f3c42b68d285259934117c1a8f65e5',1,'pyrax::queueing::QueueMessageManager.delete_by_ids()']]],
['delete_5fcheck',['delete_check',['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntity.html#a4efaacdc5baa3b0cf23478e90a16a5b9',1,'pyrax::cloudmonitoring::CloudMonitorEntity.delete_check()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntityManager.html#a4efaacdc5baa3b0cf23478e90a16a5b9',1,'pyrax::cloudmonitoring::CloudMonitorEntityManager.delete_check()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#a4efaacdc5baa3b0cf23478e90a16a5b9',1,'pyrax::cloudmonitoring::CloudMonitorClient.delete_check()']]],
['delete_5fconnection_5fthrottle',['delete_connection_throttle',['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancer.html#ae08266e5e13ddc6e99a972671e132d51',1,'pyrax::cloudloadbalancers::CloudLoadBalancer.delete_connection_throttle()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerManager.html#ae08266e5e13ddc6e99a972671e132d51',1,'pyrax::cloudloadbalancers::CloudLoadBalancerManager.delete_connection_throttle()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#ae08266e5e13ddc6e99a972671e132d51',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.delete_connection_throttle()']]],
['delete_5fcontainer',['delete_container',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#ac314b663a2e0552b6403f04558be2735',1,'pyrax::cf_wrapper::client::CFClient']]],
@@ -13,6 +14,8 @@ var searchData=
['delete_5fentity',['delete_entity',['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#a4f0f541b858158a007e2eca4daa25445',1,'pyrax::cloudmonitoring::CloudMonitorClient']]],
['delete_5fhealth_5fmonitor',['delete_health_monitor',['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancer.html#a856597bc9c0b9484f62e3d7ae78c3099',1,'pyrax::cloudloadbalancers::CloudLoadBalancer.delete_health_monitor()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerManager.html#a856597bc9c0b9484f62e3d7ae78c3099',1,'pyrax::cloudloadbalancers::CloudLoadBalancerManager.delete_health_monitor()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#a856597bc9c0b9484f62e3d7ae78c3099',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.delete_health_monitor()']]],
['delete_5fin_5fseconds',['delete_in_seconds',['../classpyrax_1_1cf__wrapper_1_1storage__object_1_1StorageObject.html#a4c205bc29b6fdece30a5bf28e85dfd3f',1,'pyrax::cf_wrapper::storage_object::StorageObject']]],
+ ['delete_5fmessage',['delete_message',['../classpyrax_1_1queueing_1_1Queue.html#acff825cd612230881431faa0b8f1ba64',1,'pyrax::queueing::Queue.delete_message()'],['../classpyrax_1_1queueing_1_1QueueClient.html#acff825cd612230881431faa0b8f1ba64',1,'pyrax::queueing::QueueClient.delete_message()']]],
+ ['delete_5fmessages_5fby_5fids',['delete_messages_by_ids',['../classpyrax_1_1queueing_1_1QueueClient.html#abaed1d07238dbd332c4609a83ea8dbff',1,'pyrax::queueing::QueueClient']]],
['delete_5fmetadata',['delete_metadata',['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancer.html#ab4b26663be70bc5c9a8d3bf3b473b328',1,'pyrax::cloudloadbalancers::CloudLoadBalancer.delete_metadata()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerManager.html#ab4b26663be70bc5c9a8d3bf3b473b328',1,'pyrax::cloudloadbalancers::CloudLoadBalancerManager.delete_metadata()'],['../classpyrax_1_1cloudloadbalancers_1_1Node.html#ab4b26663be70bc5c9a8d3bf3b473b328',1,'pyrax::cloudloadbalancers::Node.delete_metadata()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#ab4b26663be70bc5c9a8d3bf3b473b328',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.delete_metadata()']]],
['delete_5fmetadata_5ffor_5fnode',['delete_metadata_for_node',['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancer.html#a38317481667772b2f3483f5360bbb08f',1,'pyrax::cloudloadbalancers::CloudLoadBalancer.delete_metadata_for_node()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#a38317481667772b2f3483f5360bbb08f',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.delete_metadata_for_node()']]],
['delete_5fnode',['delete_node',['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancer.html#a616947868beb4492f797b21aeb320d90',1,'pyrax::cloudloadbalancers::CloudLoadBalancer.delete_node()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerManager.html#a616947868beb4492f797b21aeb320d90',1,'pyrax::cloudloadbalancers::CloudLoadBalancerManager.delete_node()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#a616947868beb4492f797b21aeb320d90',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.delete_node()']]],
diff --git a/docs/html/search/functions_66.js b/docs/html/search/functions_66.js
index 3113f01f..b90d076d 100644
--- a/docs/html/search/functions_66.js
+++ b/docs/html/search/functions_66.js
@@ -1,6 +1,7 @@
var searchData=
[
['fetch_5fobject',['fetch_object',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#a1c274968ef395c2b88a4d67c24f30f8e',1,'pyrax::cf_wrapper::client::CFClient.fetch_object()'],['../classpyrax_1_1cf__wrapper_1_1container_1_1Container.html#a1c274968ef395c2b88a4d67c24f30f8e',1,'pyrax::cf_wrapper::container::Container.fetch_object()']]],
+ ['fetch_5fpartial',['fetch_partial',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#ad1e7a72f5953d9a66e17bbe53b38fdc2',1,'pyrax::cf_wrapper::client::CFClient']]],
['field_5fnames',['field_names',['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorCheckType.html#a0b9c1a726225d02b85a31d2fc7508352',1,'pyrax::cloudmonitoring::CloudMonitorCheckType']]],
['find',['find',['../classpyrax_1_1client_1_1BaseClient.html#a01f90f57b7acd55e177611f5d0f7df23',1,'pyrax::client::BaseClient.find()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#a01f90f57b7acd55e177611f5d0f7df23',1,'pyrax::cloudmonitoring::CloudMonitorClient.find()'],['../classpyrax_1_1manager_1_1BaseManager.html#a01f90f57b7acd55e177611f5d0f7df23',1,'pyrax::manager::BaseManager.find()']]],
['find_5fall_5fchecks',['find_all_checks',['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntityManager.html#a85522a4725d32d0924f3196f59f91d92',1,'pyrax::cloudmonitoring::CloudMonitorEntityManager.find_all_checks()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#a85522a4725d32d0924f3196f59f91d92',1,'pyrax::cloudmonitoring::CloudMonitorClient.find_all_checks()']]],
diff --git a/docs/html/search/functions_67.js b/docs/html/search/functions_67.js
index 14a12ab0..621ec434 100644
--- a/docs/html/search/functions_67.js
+++ b/docs/html/search/functions_67.js
@@ -1,6 +1,6 @@
var searchData=
[
- ['get',['get',['../classpyrax_1_1Settings.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::Settings.get()'],['../classpyrax_1_1autoscale_1_1AutoScalePolicy.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::autoscale::AutoScalePolicy.get()'],['../classpyrax_1_1autoscale_1_1AutoScaleWebhook.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::autoscale::AutoScaleWebhook.get()'],['../classpyrax_1_1cf__wrapper_1_1storage__object_1_1StorageObject.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::cf_wrapper::storage_object::StorageObject.get()'],['../classpyrax_1_1client_1_1BaseClient.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::client::BaseClient.get()'],['../classpyrax_1_1clouddatabases_1_1CloudDatabaseVolume.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::clouddatabases::CloudDatabaseVolume.get()'],['../classpyrax_1_1clouddatabases_1_1CloudDatabaseManager.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::clouddatabases::CloudDatabaseManager.get()'],['../classpyrax_1_1clouddatabases_1_1CloudDatabaseInstance.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::clouddatabases::CloudDatabaseInstance.get()'],['../classpyrax_1_1clouddns_1_1CloudDNSRecord.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::clouddns::CloudDNSRecord.get()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorCheck.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::cloudmonitoring::CloudMonitorCheck.get()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorAlarm.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::cloudmonitoring::CloudMonitorAlarm.get()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::cloudmonitoring::CloudMonitorClient.get()'],['../classpyrax_1_1cloudnetworks_1_1CloudNetwork.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::cloudnetworks::CloudNetwork.get()'],['../classpyrax_1_1manager_1_1BaseManager.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::manager::BaseManager.get()'],['../classpyrax_1_1resource_1_1BaseResource.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::resource::BaseResource.get()']]],
+ ['get',['get',['../classpyrax_1_1Settings.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::Settings.get()'],['../classpyrax_1_1autoscale_1_1AutoScalePolicy.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::autoscale::AutoScalePolicy.get()'],['../classpyrax_1_1autoscale_1_1AutoScaleWebhook.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::autoscale::AutoScaleWebhook.get()'],['../classpyrax_1_1cf__wrapper_1_1storage__object_1_1StorageObject.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::cf_wrapper::storage_object::StorageObject.get()'],['../classpyrax_1_1client_1_1BaseClient.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::client::BaseClient.get()'],['../classpyrax_1_1clouddatabases_1_1CloudDatabaseVolume.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::clouddatabases::CloudDatabaseVolume.get()'],['../classpyrax_1_1clouddatabases_1_1CloudDatabaseManager.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::clouddatabases::CloudDatabaseManager.get()'],['../classpyrax_1_1clouddatabases_1_1CloudDatabaseInstance.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::clouddatabases::CloudDatabaseInstance.get()'],['../classpyrax_1_1clouddns_1_1CloudDNSRecord.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::clouddns::CloudDNSRecord.get()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorCheck.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::cloudmonitoring::CloudMonitorCheck.get()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorAlarm.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::cloudmonitoring::CloudMonitorAlarm.get()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::cloudmonitoring::CloudMonitorClient.get()'],['../classpyrax_1_1cloudnetworks_1_1CloudNetwork.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::cloudnetworks::CloudNetwork.get()'],['../classpyrax_1_1manager_1_1BaseManager.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::manager::BaseManager.get()'],['../classpyrax_1_1queueing_1_1QueueManager.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::queueing::QueueManager.get()'],['../classpyrax_1_1resource_1_1BaseResource.html#a444a1328efb32d5d9d2dcb2efe855d3b',1,'pyrax::resource::BaseResource.get()']]],
['get_5fabsolute_5flimits',['get_absolute_limits',['../classpyrax_1_1clouddns_1_1CloudDNSClient.html#af5584781384edbc4caa037fbc9749092',1,'pyrax::clouddns::CloudDNSClient']]],
['get_5faccess_5flist',['get_access_list',['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancer.html#a3aa1b2aa693e56eb9ff1535ca86c3c7c',1,'pyrax::cloudloadbalancers::CloudLoadBalancer.get_access_list()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerManager.html#a3aa1b2aa693e56eb9ff1535ca86c3c7c',1,'pyrax::cloudloadbalancers::CloudLoadBalancerManager.get_access_list()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#a3aa1b2aa693e56eb9ff1535ca86c3c7c',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.get_access_list()']]],
['get_5faccount',['get_account',['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#a4d751385e14b90deacafe71308ee04dc',1,'pyrax::cloudmonitoring::CloudMonitorClient']]],
@@ -11,6 +11,7 @@ var searchData=
['get_5fcheck',['get_check',['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntityManager.html#a663e1c9857cbc0bf083314e6404e976f',1,'pyrax::cloudmonitoring::CloudMonitorEntityManager.get_check()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#a663e1c9857cbc0bf083314e6404e976f',1,'pyrax::cloudmonitoring::CloudMonitorClient.get_check()']]],
['get_5fcheck_5ftype',['get_check_type',['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#a4ec84467f00cf7b6b321984347025d14',1,'pyrax::cloudmonitoring::CloudMonitorClient']]],
['get_5fchecksum',['get_checksum',['../namespacepyrax_1_1utils.html#a9e1881e14792f2c07dd799fd7b9d53d1',1,'pyrax::utils']]],
+ ['get_5fclaim',['get_claim',['../classpyrax_1_1queueing_1_1Queue.html#a015b0e04f4bfd93df60cdb650bed2186',1,'pyrax::queueing::Queue.get_claim()'],['../classpyrax_1_1queueing_1_1QueueClient.html#a015b0e04f4bfd93df60cdb650bed2186',1,'pyrax::queueing::QueueClient.get_claim()']]],
['get_5fconfiguration',['get_configuration',['../classpyrax_1_1autoscale_1_1ScalingGroup.html#a4765b6651625ae1a9e0b6170c0cfe447',1,'pyrax::autoscale::ScalingGroup.get_configuration()'],['../classpyrax_1_1autoscale_1_1ScalingGroupManager.html#a4765b6651625ae1a9e0b6170c0cfe447',1,'pyrax::autoscale::ScalingGroupManager.get_configuration()'],['../classpyrax_1_1autoscale_1_1AutoScaleClient.html#a4765b6651625ae1a9e0b6170c0cfe447',1,'pyrax::autoscale::AutoScaleClient.get_configuration()']]],
['get_5fconnection_5flogging',['get_connection_logging',['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerManager.html#ab259705d7fd79cee133de68e9b29846b',1,'pyrax::cloudloadbalancers::CloudLoadBalancerManager.get_connection_logging()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#ab259705d7fd79cee133de68e9b29846b',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.get_connection_logging()']]],
['get_5fconnection_5fthrottle',['get_connection_throttle',['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancer.html#a3caddfe1f948f22c3926b810485bd645',1,'pyrax::cloudloadbalancers::CloudLoadBalancer.get_connection_throttle()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerManager.html#a3caddfe1f948f22c3926b810485bd645',1,'pyrax::cloudloadbalancers::CloudLoadBalancerManager.get_connection_throttle()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#a3caddfe1f948f22c3926b810485bd645',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.get_connection_throttle()']]],
@@ -33,12 +34,14 @@ var searchData=
['get_5fextensions',['get_extensions',['../classpyrax_1_1base__identity_1_1BaseAuth.html#ab5a28d881fa99dd93c68bc1daf9e6710',1,'pyrax::base_identity::BaseAuth']]],
['get_5fflavor',['get_flavor',['../classpyrax_1_1clouddatabases_1_1CloudDatabaseClient.html#a5ef10110fd842db8ecae2798b08bdd5b',1,'pyrax::clouddatabases::CloudDatabaseClient']]],
['get_5fhealth_5fmonitor',['get_health_monitor',['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancer.html#a882537f0d9ea2f375dca9ff59170532d',1,'pyrax::cloudloadbalancers::CloudLoadBalancer.get_health_monitor()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerManager.html#a882537f0d9ea2f375dca9ff59170532d',1,'pyrax::cloudloadbalancers::CloudLoadBalancerManager.get_health_monitor()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#a882537f0d9ea2f375dca9ff59170532d',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.get_health_monitor()']]],
+ ['get_5fhome_5fdocument',['get_home_document',['../classpyrax_1_1queueing_1_1QueueClient.html#a97fc2eb61171be491fb16708763edf2c',1,'pyrax::queueing::QueueClient']]],
['get_5fhttp_5fdebug',['get_http_debug',['../namespacepyrax.html#a2766ee16854adf9575c29ac661f447fd',1,'pyrax']]],
['get_5fid',['get_id',['../namespacepyrax_1_1utils.html#a9cc7cce8ec3ad4b58c806254ca8ea58e',1,'pyrax::utils']]],
['get_5finfo',['get_info',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#a384aea54f97656a3742b60bae861bc34',1,'pyrax::cf_wrapper::client::CFClient']]],
['get_5flaunch_5fconfig',['get_launch_config',['../classpyrax_1_1autoscale_1_1ScalingGroup.html#ae370445e311ea96ab9b0bdfbc3eafa38',1,'pyrax::autoscale::ScalingGroup.get_launch_config()'],['../classpyrax_1_1autoscale_1_1ScalingGroupManager.html#ae370445e311ea96ab9b0bdfbc3eafa38',1,'pyrax::autoscale::ScalingGroupManager.get_launch_config()'],['../classpyrax_1_1autoscale_1_1AutoScaleClient.html#ae370445e311ea96ab9b0bdfbc3eafa38',1,'pyrax::autoscale::AutoScaleClient.get_launch_config()']]],
['get_5flimits',['get_limits',['../classpyrax_1_1client_1_1BaseClient.html#ab5ef84a0682afc9a357f6e76b15f1640',1,'pyrax::client::BaseClient.get_limits()'],['../classpyrax_1_1clouddatabases_1_1CloudDatabaseClient.html#ab5ef84a0682afc9a357f6e76b15f1640',1,'pyrax::clouddatabases::CloudDatabaseClient.get_limits()']]],
- ['get_5fmetadata',['get_metadata',['../classpyrax_1_1cf__wrapper_1_1container_1_1Container.html#abc9975b34823bfba383eb240ecde3783',1,'pyrax::cf_wrapper::container::Container.get_metadata()'],['../classpyrax_1_1cf__wrapper_1_1storage__object_1_1StorageObject.html#abc9975b34823bfba383eb240ecde3783',1,'pyrax::cf_wrapper::storage_object::StorageObject.get_metadata()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancer.html#abc9975b34823bfba383eb240ecde3783',1,'pyrax::cloudloadbalancers::CloudLoadBalancer.get_metadata()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerManager.html#abc9975b34823bfba383eb240ecde3783',1,'pyrax::cloudloadbalancers::CloudLoadBalancerManager.get_metadata()'],['../classpyrax_1_1cloudloadbalancers_1_1Node.html#abc9975b34823bfba383eb240ecde3783',1,'pyrax::cloudloadbalancers::Node.get_metadata()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#abc9975b34823bfba383eb240ecde3783',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.get_metadata()']]],
+ ['get_5fmessage',['get_message',['../classpyrax_1_1queueing_1_1Queue.html#a56468e8bf0910ac8be0def8886f1feae',1,'pyrax::queueing::Queue.get_message()'],['../classpyrax_1_1queueing_1_1QueueClient.html#a56468e8bf0910ac8be0def8886f1feae',1,'pyrax::queueing::QueueClient.get_message()']]],
+ ['get_5fmetadata',['get_metadata',['../classpyrax_1_1cf__wrapper_1_1container_1_1Container.html#abc9975b34823bfba383eb240ecde3783',1,'pyrax::cf_wrapper::container::Container.get_metadata()'],['../classpyrax_1_1cf__wrapper_1_1storage__object_1_1StorageObject.html#abc9975b34823bfba383eb240ecde3783',1,'pyrax::cf_wrapper::storage_object::StorageObject.get_metadata()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancer.html#abc9975b34823bfba383eb240ecde3783',1,'pyrax::cloudloadbalancers::CloudLoadBalancer.get_metadata()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerManager.html#abc9975b34823bfba383eb240ecde3783',1,'pyrax::cloudloadbalancers::CloudLoadBalancerManager.get_metadata()'],['../classpyrax_1_1cloudloadbalancers_1_1Node.html#abc9975b34823bfba383eb240ecde3783',1,'pyrax::cloudloadbalancers::Node.get_metadata()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#abc9975b34823bfba383eb240ecde3783',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.get_metadata()'],['../classpyrax_1_1queueing_1_1QueueManager.html#abc9975b34823bfba383eb240ecde3783',1,'pyrax::queueing::QueueManager.get_metadata()'],['../classpyrax_1_1queueing_1_1QueueClient.html#abc9975b34823bfba383eb240ecde3783',1,'pyrax::queueing::QueueClient.get_metadata()']]],
['get_5fmetadata_5ffor_5fnode',['get_metadata_for_node',['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancer.html#aebfb9c2caec7532153b2558fe14347cf',1,'pyrax::cloudloadbalancers::CloudLoadBalancer.get_metadata_for_node()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#aebfb9c2caec7532153b2558fe14347cf',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.get_metadata_for_node()']]],
['get_5fmetric_5fdata_5fpoints',['get_metric_data_points',['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntity.html#a58698bf0d5e72dc683fcf9384d110d69',1,'pyrax::cloudmonitoring::CloudMonitorEntity.get_metric_data_points()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntityManager.html#a58698bf0d5e72dc683fcf9384d110d69',1,'pyrax::cloudmonitoring::CloudMonitorEntityManager.get_metric_data_points()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorCheck.html#a58698bf0d5e72dc683fcf9384d110d69',1,'pyrax::cloudmonitoring::CloudMonitorCheck.get_metric_data_points()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#a58698bf0d5e72dc683fcf9384d110d69',1,'pyrax::cloudmonitoring::CloudMonitorClient.get_metric_data_points()']]],
['get_5fmonitoring_5fzone',['get_monitoring_zone',['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#a264e06827aca6792c0bb9342703b90f6',1,'pyrax::cloudmonitoring::CloudMonitorClient']]],
@@ -59,7 +62,7 @@ var searchData=
['get_5fsetting',['get_setting',['../namespacepyrax.html#a5dbd20ff4ad6c1590c1c4723852763da',1,'pyrax']]],
['get_5fssl_5ftermination',['get_ssl_termination',['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancer.html#adbf49e882fc9a7b55f38fd6213440d50',1,'pyrax::cloudloadbalancers::CloudLoadBalancer.get_ssl_termination()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerManager.html#adbf49e882fc9a7b55f38fd6213440d50',1,'pyrax::cloudloadbalancers::CloudLoadBalancerManager.get_ssl_termination()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#adbf49e882fc9a7b55f38fd6213440d50',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.get_ssl_termination()']]],
['get_5fstate',['get_state',['../classpyrax_1_1autoscale_1_1ScalingGroup.html#adc4030ba0851f007b904cc254c2cb489',1,'pyrax::autoscale::ScalingGroup.get_state()'],['../classpyrax_1_1autoscale_1_1ScalingGroupManager.html#adc4030ba0851f007b904cc254c2cb489',1,'pyrax::autoscale::ScalingGroupManager.get_state()'],['../classpyrax_1_1autoscale_1_1AutoScaleClient.html#adc4030ba0851f007b904cc254c2cb489',1,'pyrax::autoscale::AutoScaleClient.get_state()']]],
- ['get_5fstats',['get_stats',['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerManager.html#a46cdb2ba90b6fff4c5cf4d76ae0a1697',1,'pyrax::cloudloadbalancers::CloudLoadBalancerManager']]],
+ ['get_5fstats',['get_stats',['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerManager.html#a46cdb2ba90b6fff4c5cf4d76ae0a1697',1,'pyrax::cloudloadbalancers::CloudLoadBalancerManager.get_stats()'],['../classpyrax_1_1queueing_1_1QueueManager.html#a46cdb2ba90b6fff4c5cf4d76ae0a1697',1,'pyrax::queueing::QueueManager.get_stats()'],['../classpyrax_1_1queueing_1_1QueueClient.html#a46cdb2ba90b6fff4c5cf4d76ae0a1697',1,'pyrax::queueing::QueueClient.get_stats()']]],
['get_5fsubdomain_5fiterator',['get_subdomain_iterator',['../classpyrax_1_1clouddns_1_1CloudDNSClient.html#ac532463e7fb1cbb0a17181a816460adc',1,'pyrax::clouddns::CloudDNSClient']]],
['get_5ftemp_5furl',['get_temp_url',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#a26cd51b10d04d2f632fc6f12fa2e3b43',1,'pyrax::cf_wrapper::client::CFClient.get_temp_url()'],['../classpyrax_1_1cf__wrapper_1_1container_1_1Container.html#a26cd51b10d04d2f632fc6f12fa2e3b43',1,'pyrax::cf_wrapper::container::Container.get_temp_url()'],['../classpyrax_1_1cf__wrapper_1_1storage__object_1_1StorageObject.html#a26cd51b10d04d2f632fc6f12fa2e3b43',1,'pyrax::cf_wrapper::storage_object::StorageObject.get_temp_url()']]],
['get_5ftemp_5furl_5fkey',['get_temp_url_key',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#a80f3fd33724985de91d7930036344668',1,'pyrax::cf_wrapper::client::CFClient']]],
diff --git a/docs/html/search/functions_69.js b/docs/html/search/functions_69.js
index a9b5d61c..dd8304e0 100644
--- a/docs/html/search/functions_69.js
+++ b/docs/html/search/functions_69.js
@@ -1,5 +1,6 @@
var searchData=
[
+ ['id',['id',['../classpyrax_1_1queueing_1_1Queue.html#a35d1e2c2471fb3f05e6abe42bb74d25a',1,'pyrax::queueing::Queue.id'],['../classpyrax_1_1queueing_1_1Queue.html#a35d1e2c2471fb3f05e6abe42bb74d25a',1,'pyrax::queueing::Queue.id']]],
['import_5fclass',['import_class',['../namespacepyrax_1_1utils.html#a7d8b82b8246b4f514abfb4b6cd19aeb1',1,'pyrax::utils']]],
['import_5fdomain',['import_domain',['../classpyrax_1_1clouddns_1_1CloudDNSManager.html#a836e9097c126d4a1f02466e69a008071',1,'pyrax::clouddns::CloudDNSManager.import_domain()'],['../classpyrax_1_1clouddns_1_1CloudDNSClient.html#a836e9097c126d4a1f02466e69a008071',1,'pyrax::clouddns::CloudDNSClient.import_domain()']]],
['is_5fisolated',['is_isolated',['../classpyrax_1_1cloudnetworks_1_1CloudNetwork.html#a87870f821e842943d8c6dce12a334519',1,'pyrax::cloudnetworks::CloudNetwork']]],
diff --git a/docs/html/search/functions_6c.js b/docs/html/search/functions_6c.js
index 5bf5aded..d55ecdca 100644
--- a/docs/html/search/functions_6c.js
+++ b/docs/html/search/functions_6c.js
@@ -1,7 +1,9 @@
var searchData=
[
- ['list',['list',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#a9b522b4ef7526bfa60b78fe735caa55c',1,'pyrax::cf_wrapper::client::CFClient.list()'],['../classpyrax_1_1client_1_1BaseClient.html#a9b522b4ef7526bfa60b78fe735caa55c',1,'pyrax::client::BaseClient.list()'],['../classpyrax_1_1clouddns_1_1CloudDNSManager.html#a9b522b4ef7526bfa60b78fe735caa55c',1,'pyrax::clouddns::CloudDNSManager.list()'],['../classpyrax_1_1clouddns_1_1CloudDNSClient.html#a9b522b4ef7526bfa60b78fe735caa55c',1,'pyrax::clouddns::CloudDNSClient.list()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#a9b522b4ef7526bfa60b78fe735caa55c',1,'pyrax::cloudmonitoring::CloudMonitorClient.list()'],['../classpyrax_1_1manager_1_1BaseManager.html#a9b522b4ef7526bfa60b78fe735caa55c',1,'pyrax::manager::BaseManager.list()']]],
+ ['list',['list',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#a9b522b4ef7526bfa60b78fe735caa55c',1,'pyrax::cf_wrapper::client::CFClient.list()'],['../classpyrax_1_1client_1_1BaseClient.html#a9b522b4ef7526bfa60b78fe735caa55c',1,'pyrax::client::BaseClient.list()'],['../classpyrax_1_1clouddns_1_1CloudDNSManager.html#a9b522b4ef7526bfa60b78fe735caa55c',1,'pyrax::clouddns::CloudDNSManager.list()'],['../classpyrax_1_1clouddns_1_1CloudDNSClient.html#a9b522b4ef7526bfa60b78fe735caa55c',1,'pyrax::clouddns::CloudDNSClient.list()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#a9b522b4ef7526bfa60b78fe735caa55c',1,'pyrax::cloudmonitoring::CloudMonitorClient.list()'],['../classpyrax_1_1manager_1_1BaseManager.html#a9b522b4ef7526bfa60b78fe735caa55c',1,'pyrax::manager::BaseManager.list()'],['../classpyrax_1_1queueing_1_1Queue.html#a9b522b4ef7526bfa60b78fe735caa55c',1,'pyrax::queueing::Queue.list()'],['../classpyrax_1_1queueing_1_1QueueMessageManager.html#a9b522b4ef7526bfa60b78fe735caa55c',1,'pyrax::queueing::QueueMessageManager.list()']]],
['list_5falarms',['list_alarms',['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntity.html#aba03029cc0b951e300763dfb7b374b8f',1,'pyrax::cloudmonitoring::CloudMonitorEntity.list_alarms()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntityManager.html#aba03029cc0b951e300763dfb7b374b8f',1,'pyrax::cloudmonitoring::CloudMonitorEntityManager.list_alarms()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#aba03029cc0b951e300763dfb7b374b8f',1,'pyrax::cloudmonitoring::CloudMonitorClient.list_alarms()']]],
+ ['list_5fby_5fclaim',['list_by_claim',['../classpyrax_1_1queueing_1_1Queue.html#ac7bd1cb861452925a16e74d9b6aafd43',1,'pyrax::queueing::Queue']]],
+ ['list_5fby_5fids',['list_by_ids',['../classpyrax_1_1queueing_1_1Queue.html#ae6ec4aa7bb7da740187a3039da271f47',1,'pyrax::queueing::Queue.list_by_ids()'],['../classpyrax_1_1queueing_1_1QueueMessageManager.html#ae6ec4aa7bb7da740187a3039da271f47',1,'pyrax::queueing::QueueMessageManager.list_by_ids()']]],
['list_5fcheck_5ftypes',['list_check_types',['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#a360142ff4cbb495a8e5ba41b9c810a8b',1,'pyrax::cloudmonitoring::CloudMonitorClient']]],
['list_5fchecks',['list_checks',['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntity.html#aedd0a1ae642c3f9e56df9bb5a1e1384c',1,'pyrax::cloudmonitoring::CloudMonitorEntity.list_checks()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntityManager.html#aedd0a1ae642c3f9e56df9bb5a1e1384c',1,'pyrax::cloudmonitoring::CloudMonitorEntityManager.list_checks()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#aedd0a1ae642c3f9e56df9bb5a1e1384c',1,'pyrax::cloudmonitoring::CloudMonitorClient.list_checks()']]],
['list_5fcontainer_5fsubdirs',['list_container_subdirs',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#a67a23cb99d99eb7027e487e506cf68f6',1,'pyrax::cf_wrapper::client::CFClient']]],
@@ -12,6 +14,9 @@ var searchData=
['list_5fentities',['list_entities',['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#a03cd18f945d37d5cb1e0dc666b23ed3a',1,'pyrax::cloudmonitoring::CloudMonitorClient']]],
['list_5fenvironments',['list_environments',['../namespacepyrax.html#aaf4742684739d9a72d01f13093dc9a87',1,'pyrax']]],
['list_5fflavors',['list_flavors',['../classpyrax_1_1clouddatabases_1_1CloudDatabaseClient.html#ad54dc84b9febeac129b211c63c636612',1,'pyrax::clouddatabases::CloudDatabaseClient']]],
+ ['list_5fmessages',['list_messages',['../classpyrax_1_1queueing_1_1QueueClient.html#a2c448aaaca6c6858cb03494cef06f331',1,'pyrax::queueing::QueueClient']]],
+ ['list_5fmessages_5fby_5fclaim',['list_messages_by_claim',['../classpyrax_1_1queueing_1_1QueueClient.html#a67f1debfc0fcbb482b300f1eef48e31b',1,'pyrax::queueing::QueueClient']]],
+ ['list_5fmessages_5fby_5fids',['list_messages_by_ids',['../classpyrax_1_1queueing_1_1QueueClient.html#a0b7095bcaf881caa8bbe5b123a94982a',1,'pyrax::queueing::QueueClient']]],
['list_5fmetrics',['list_metrics',['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntity.html#af232f5b9467cd7214f217a3fe6d51473',1,'pyrax::cloudmonitoring::CloudMonitorEntity.list_metrics()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntityManager.html#af232f5b9467cd7214f217a3fe6d51473',1,'pyrax::cloudmonitoring::CloudMonitorEntityManager.list_metrics()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorCheck.html#af232f5b9467cd7214f217a3fe6d51473',1,'pyrax::cloudmonitoring::CloudMonitorCheck.list_metrics()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#af232f5b9467cd7214f217a3fe6d51473',1,'pyrax::cloudmonitoring::CloudMonitorClient.list_metrics()']]],
['list_5fmonitoring_5fzones',['list_monitoring_zones',['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#aa4b0e32f41ba11a5cd20dd43a9fc7c66',1,'pyrax::cloudmonitoring::CloudMonitorClient']]],
['list_5fnext_5fpage',['list_next_page',['../classpyrax_1_1clouddns_1_1CloudDNSManager.html#acc587f3fc63273a1236d635f463edb91',1,'pyrax::clouddns::CloudDNSManager.list_next_page()'],['../classpyrax_1_1clouddns_1_1CloudDNSClient.html#acc587f3fc63273a1236d635f463edb91',1,'pyrax::clouddns::CloudDNSClient.list_next_page()']]],
diff --git a/docs/html/search/functions_6d.js b/docs/html/search/functions_6d.js
index 3af19053..0f9bc69d 100644
--- a/docs/html/search/functions_6d.js
+++ b/docs/html/search/functions_6d.js
@@ -10,6 +10,7 @@ var searchData=
['method_5fdelete',['method_delete',['../classpyrax_1_1base__identity_1_1BaseAuth.html#a1132703a22def73f131d037165a971aa',1,'pyrax::base_identity::BaseAuth.method_delete()'],['../classpyrax_1_1client_1_1BaseClient.html#a1132703a22def73f131d037165a971aa',1,'pyrax::client::BaseClient.method_delete()']]],
['method_5fget',['method_get',['../classpyrax_1_1base__identity_1_1BaseAuth.html#ac1f6b6211af6452ff038fbb8a25f4822',1,'pyrax::base_identity::BaseAuth.method_get()'],['../classpyrax_1_1client_1_1BaseClient.html#ac1f6b6211af6452ff038fbb8a25f4822',1,'pyrax::client::BaseClient.method_get()']]],
['method_5fhead',['method_head',['../classpyrax_1_1base__identity_1_1BaseAuth.html#a2b66a305940ec13628995f2a93c55b89',1,'pyrax::base_identity::BaseAuth.method_head()'],['../classpyrax_1_1client_1_1BaseClient.html#a2b66a305940ec13628995f2a93c55b89',1,'pyrax::client::BaseClient.method_head()']]],
+ ['method_5fpatch',['method_patch',['../classpyrax_1_1client_1_1BaseClient.html#a8212f8a94b29f7c39e38a9b6d8741322',1,'pyrax::client::BaseClient']]],
['method_5fpost',['method_post',['../classpyrax_1_1base__identity_1_1BaseAuth.html#a248efd43b254ea67d11575531bad3247',1,'pyrax::base_identity::BaseAuth.method_post()'],['../classpyrax_1_1client_1_1BaseClient.html#a248efd43b254ea67d11575531bad3247',1,'pyrax::client::BaseClient.method_post()']]],
['method_5fput',['method_put',['../classpyrax_1_1base__identity_1_1BaseAuth.html#a49de945eec86f955f4ac7d1487dcf286',1,'pyrax::base_identity::BaseAuth.method_put()'],['../classpyrax_1_1client_1_1BaseClient.html#a49de945eec86f955f4ac7d1487dcf286',1,'pyrax::client::BaseClient.method_put()']]],
['min_5fentities',['min_entities',['../classpyrax_1_1autoscale_1_1ScalingGroup.html#a395e6beed5694f7d60ea9657d57e9ad0',1,'pyrax::autoscale::ScalingGroup.min_entities'],['../classpyrax_1_1autoscale_1_1ScalingGroup.html#a395e6beed5694f7d60ea9657d57e9ad0',1,'pyrax::autoscale::ScalingGroup.min_entities']]],
diff --git a/docs/html/search/functions_70.js b/docs/html/search/functions_70.js
index 6e4a09cb..e55cadbc 100644
--- a/docs/html/search/functions_70.js
+++ b/docs/html/search/functions_70.js
@@ -4,6 +4,7 @@ var searchData=
['pause',['pause',['../classpyrax_1_1autoscale_1_1ScalingGroup.html#ad87957c5b208fe27e24a5260f5ddbb95',1,'pyrax::autoscale::ScalingGroup.pause()'],['../classpyrax_1_1autoscale_1_1ScalingGroupManager.html#ad87957c5b208fe27e24a5260f5ddbb95',1,'pyrax::autoscale::ScalingGroupManager.pause()'],['../classpyrax_1_1autoscale_1_1AutoScaleClient.html#ad87957c5b208fe27e24a5260f5ddbb95',1,'pyrax::autoscale::AutoScaleClient.pause()']]],
['plug_5fhole_5fin_5fswiftclient_5fauth',['plug_hole_in_swiftclient_auth',['../namespacepyrax.html#a52520cf6c40b52d2b67faf9762accb18',1,'pyrax']]],
['policy_5fcount',['policy_count',['../classpyrax_1_1autoscale_1_1ScalingGroup.html#a221c84e10c0ca6bc4f240b139076bf4d',1,'pyrax::autoscale::ScalingGroup']]],
+ ['post_5fmessage',['post_message',['../classpyrax_1_1queueing_1_1Queue.html#a6ccb26d9187a005281b0342f7eb995ca',1,'pyrax::queueing::Queue.post_message()'],['../classpyrax_1_1queueing_1_1QueueClient.html#a6ccb26d9187a005281b0342f7eb995ca',1,'pyrax::queueing::QueueClient.post_message()']]],
['projectid',['projectid',['../classpyrax_1_1client_1_1BaseClient.html#af6e68e7b4a48c30549646fd3d5ed1aae',1,'pyrax::client::BaseClient']]],
['protocols',['protocols',['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#a7ffc66e16cbb4a549e7874dce9d61e6e',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient']]],
['purge',['purge',['../classpyrax_1_1cf__wrapper_1_1storage__object_1_1StorageObject.html#a24447386d5cd26a6a61fd75fd19842c7',1,'pyrax::cf_wrapper::storage_object::StorageObject']]],
diff --git a/docs/html/search/functions_71.html b/docs/html/search/functions_71.html
new file mode 100644
index 00000000..af909d4a
--- /dev/null
+++ b/docs/html/search/functions_71.html
@@ -0,0 +1,25 @@
+
+
+
+
+
+
+
+
+
+
+Loading...
+
+
+
+Searching...
+
+No Matches
+
+
+
+
diff --git a/docs/html/search/functions_71.js b/docs/html/search/functions_71.js
new file mode 100644
index 00000000..333ac3b8
--- /dev/null
+++ b/docs/html/search/functions_71.js
@@ -0,0 +1,4 @@
+var searchData=
+[
+ ['queue_5fexists',['queue_exists',['../classpyrax_1_1queueing_1_1QueueClient.html#a4bc133ed8033eabfa1954379b1f10d9e',1,'pyrax::queueing::QueueClient']]]
+];
diff --git a/docs/html/search/functions_72.js b/docs/html/search/functions_72.js
index 4559416c..58aad181 100644
--- a/docs/html/search/functions_72.js
+++ b/docs/html/search/functions_72.js
@@ -1,12 +1,18 @@
var searchData=
[
- ['random_5fname',['random_name',['../namespacepyrax_1_1utils.html#a15ae8eb19e0cbaf0281e950aebca0962',1,'pyrax::utils']]],
+ ['random_5fascii',['random_ascii',['../namespacepyrax_1_1utils.html#ad1dd5f67ceaa944c1f0bae698488632f',1,'pyrax::utils']]],
+ ['random_5funicode',['random_unicode',['../namespacepyrax_1_1utils.html#a22d977f9099b32cca29ae8f2bf85c738',1,'pyrax::utils']]],
['read_5fconfig',['read_config',['../classpyrax_1_1Settings.html#a53943930dc298ed49aa0950b4898bb65',1,'pyrax::Settings']]],
+ ['release_5fclaim',['release_claim',['../classpyrax_1_1queueing_1_1Queue.html#a72572b2049988d314b2943909f1a3284',1,'pyrax::queueing::Queue.release_claim()'],['../classpyrax_1_1queueing_1_1QueueClient.html#a72572b2049988d314b2943909f1a3284',1,'pyrax::queueing::QueueClient.release_claim()']]],
['remove_5fcontainer_5ffrom_5fcache',['remove_container_from_cache',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#a71217dd504682153d8dbbd8bd84eed56',1,'pyrax::cf_wrapper::client::CFClient']]],
['remove_5fcontainer_5fmetadata_5fkey',['remove_container_metadata_key',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#a91fcdf99119a696e0b626957628605bb',1,'pyrax::cf_wrapper::client::CFClient']]],
['remove_5ffrom_5fcache',['remove_from_cache',['../classpyrax_1_1cf__wrapper_1_1container_1_1Container.html#adf3d192d83c2f53a96b0bd594c15b84d',1,'pyrax::cf_wrapper::container::Container']]],
['remove_5fmetadata_5fkey',['remove_metadata_key',['../classpyrax_1_1cf__wrapper_1_1container_1_1Container.html#a01a67fdfd99526a56bef1e3c0a01a68d',1,'pyrax::cf_wrapper::container::Container.remove_metadata_key()'],['../classpyrax_1_1cf__wrapper_1_1storage__object_1_1StorageObject.html#a01a67fdfd99526a56bef1e3c0a01a68d',1,'pyrax::cf_wrapper::storage_object::StorageObject.remove_metadata_key()']]],
['remove_5fobject_5fmetadata_5fkey',['remove_object_metadata_key',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#ac3072e2b9369e7435f660ff89ae85aa5',1,'pyrax::cf_wrapper::client::CFClient']]],
+ ['replace',['replace',['../classpyrax_1_1autoscale_1_1ScalingGroupManager.html#ab285a64f9f3fb8e2efb536e440b34466',1,'pyrax::autoscale::ScalingGroupManager.replace()'],['../classpyrax_1_1autoscale_1_1AutoScaleClient.html#ab285a64f9f3fb8e2efb536e440b34466',1,'pyrax::autoscale::AutoScaleClient.replace()']]],
+ ['replace_5flaunch_5fconfig',['replace_launch_config',['../classpyrax_1_1autoscale_1_1ScalingGroupManager.html#a851fe41ce6befc7dd1b0a3f98328bbda',1,'pyrax::autoscale::ScalingGroupManager.replace_launch_config()'],['../classpyrax_1_1autoscale_1_1AutoScaleClient.html#a851fe41ce6befc7dd1b0a3f98328bbda',1,'pyrax::autoscale::AutoScaleClient.replace_launch_config()']]],
+ ['replace_5fpolicy',['replace_policy',['../classpyrax_1_1autoscale_1_1ScalingGroupManager.html#a4aea046272a0a459df2cc40ba2ee4d21',1,'pyrax::autoscale::ScalingGroupManager.replace_policy()'],['../classpyrax_1_1autoscale_1_1AutoScaleClient.html#a4aea046272a0a459df2cc40ba2ee4d21',1,'pyrax::autoscale::AutoScaleClient.replace_policy()']]],
+ ['replace_5fwebhook',['replace_webhook',['../classpyrax_1_1autoscale_1_1ScalingGroupManager.html#a652340a0267b6f56e5416b97b367c5fc',1,'pyrax::autoscale::ScalingGroupManager.replace_webhook()'],['../classpyrax_1_1autoscale_1_1AutoScaleClient.html#a652340a0267b6f56e5416b97b367c5fc',1,'pyrax::autoscale::AutoScaleClient.replace_webhook()']]],
['request',['request',['../classpyrax_1_1client_1_1BaseClient.html#a7ea72716d3813b3d175a880ff91eca73',1,'pyrax::client::BaseClient']]],
['required_5ffield_5fnames',['required_field_names',['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorCheckType.html#a6962c65252e3f5bfc1c2084ce219cd68',1,'pyrax::cloudmonitoring::CloudMonitorCheckType']]],
['reset_5ftimings',['reset_timings',['../classpyrax_1_1client_1_1BaseClient.html#ac8ac904919fa009e510cc25731bc20f8',1,'pyrax::client::BaseClient']]],
diff --git a/docs/html/search/functions_73.js b/docs/html/search/functions_73.js
index c33ed457..9eca5f8b 100644
--- a/docs/html/search/functions_73.js
+++ b/docs/html/search/functions_73.js
@@ -18,7 +18,7 @@ var searchData=
['set_5fenvironment',['set_environment',['../namespacepyrax.html#a083408c25f486a8aa2fab4a17a587f26',1,'pyrax']]],
['set_5ferror_5fpage',['set_error_page',['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancer.html#a9e8cf14c622fff01da294f2c2352fd92',1,'pyrax::cloudloadbalancers::CloudLoadBalancer.set_error_page()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerManager.html#a9e8cf14c622fff01da294f2c2352fd92',1,'pyrax::cloudloadbalancers::CloudLoadBalancerManager.set_error_page()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#a9e8cf14c622fff01da294f2c2352fd92',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.set_error_page()']]],
['set_5fhttp_5fdebug',['set_http_debug',['../namespacepyrax.html#a9d6ec1abac4bb1602676632e096ac945',1,'pyrax']]],
- ['set_5fmetadata',['set_metadata',['../classpyrax_1_1cf__wrapper_1_1container_1_1Container.html#a72a400370828e749dd0d63a622a95ee2',1,'pyrax::cf_wrapper::container::Container.set_metadata()'],['../classpyrax_1_1cf__wrapper_1_1storage__object_1_1StorageObject.html#a72a400370828e749dd0d63a622a95ee2',1,'pyrax::cf_wrapper::storage_object::StorageObject.set_metadata()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancer.html#a72a400370828e749dd0d63a622a95ee2',1,'pyrax::cloudloadbalancers::CloudLoadBalancer.set_metadata()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerManager.html#a72a400370828e749dd0d63a622a95ee2',1,'pyrax::cloudloadbalancers::CloudLoadBalancerManager.set_metadata()'],['../classpyrax_1_1cloudloadbalancers_1_1Node.html#a72a400370828e749dd0d63a622a95ee2',1,'pyrax::cloudloadbalancers::Node.set_metadata()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#a72a400370828e749dd0d63a622a95ee2',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.set_metadata()']]],
+ ['set_5fmetadata',['set_metadata',['../classpyrax_1_1cf__wrapper_1_1container_1_1Container.html#a72a400370828e749dd0d63a622a95ee2',1,'pyrax::cf_wrapper::container::Container.set_metadata()'],['../classpyrax_1_1cf__wrapper_1_1storage__object_1_1StorageObject.html#a72a400370828e749dd0d63a622a95ee2',1,'pyrax::cf_wrapper::storage_object::StorageObject.set_metadata()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancer.html#a72a400370828e749dd0d63a622a95ee2',1,'pyrax::cloudloadbalancers::CloudLoadBalancer.set_metadata()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerManager.html#a72a400370828e749dd0d63a622a95ee2',1,'pyrax::cloudloadbalancers::CloudLoadBalancerManager.set_metadata()'],['../classpyrax_1_1cloudloadbalancers_1_1Node.html#a72a400370828e749dd0d63a622a95ee2',1,'pyrax::cloudloadbalancers::Node.set_metadata()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#a72a400370828e749dd0d63a622a95ee2',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.set_metadata()'],['../classpyrax_1_1queueing_1_1QueueManager.html#a72a400370828e749dd0d63a622a95ee2',1,'pyrax::queueing::QueueManager.set_metadata()'],['../classpyrax_1_1queueing_1_1QueueClient.html#a72a400370828e749dd0d63a622a95ee2',1,'pyrax::queueing::QueueClient.set_metadata()']]],
['set_5fmetadata_5ffor_5fnode',['set_metadata_for_node',['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancer.html#a40aea8e89a0b3f7a146312305263dd8e',1,'pyrax::cloudloadbalancers::CloudLoadBalancer.set_metadata_for_node()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#a40aea8e89a0b3f7a146312305263dd8e',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.set_metadata_for_node()']]],
['set_5fobject_5fmetadata',['set_object_metadata',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#a684f0273a86b04ee46bf5a7304df768d',1,'pyrax::cf_wrapper::client::CFClient']]],
['set_5fsession_5fpersistence',['set_session_persistence',['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerManager.html#a407ba440587e1dc45a9c75ced3f6ea8c',1,'pyrax::cloudloadbalancers::CloudLoadBalancerManager.set_session_persistence()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#a407ba440587e1dc45a9c75ced3f6ea8c',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.set_session_persistence()']]],
diff --git a/docs/html/search/functions_75.js b/docs/html/search/functions_75.js
index ec250468..3add706d 100644
--- a/docs/html/search/functions_75.js
+++ b/docs/html/search/functions_75.js
@@ -2,9 +2,10 @@ var searchData=
[
['unauthenticate',['unauthenticate',['../classpyrax_1_1base__identity_1_1BaseAuth.html#a143a677947f3597dc3cc3055a0eae66e',1,'pyrax::base_identity::BaseAuth.unauthenticate()'],['../classpyrax_1_1client_1_1BaseClient.html#a143a677947f3597dc3cc3055a0eae66e',1,'pyrax::client::BaseClient.unauthenticate()']]],
['unauthenticated',['unauthenticated',['../namespacepyrax_1_1utils.html#a662924ed2118b3ba66f1d1521a7c2b40',1,'pyrax::utils']]],
- ['update',['update',['../classpyrax_1_1autoscale_1_1ScalingGroup.html#abe52b977c101e59f342489ed18140819',1,'pyrax::autoscale::ScalingGroup.update()'],['../classpyrax_1_1autoscale_1_1ScalingGroupManager.html#abe52b977c101e59f342489ed18140819',1,'pyrax::autoscale::ScalingGroupManager.update()'],['../classpyrax_1_1autoscale_1_1AutoScalePolicy.html#abe52b977c101e59f342489ed18140819',1,'pyrax::autoscale::AutoScalePolicy.update()'],['../classpyrax_1_1autoscale_1_1AutoScaleWebhook.html#abe52b977c101e59f342489ed18140819',1,'pyrax::autoscale::AutoScaleWebhook.update()'],['../classpyrax_1_1autoscale_1_1AutoScaleClient.html#abe52b977c101e59f342489ed18140819',1,'pyrax::autoscale::AutoScaleClient.update()'],['../classpyrax_1_1clouddns_1_1CloudDNSRecord.html#abe52b977c101e59f342489ed18140819',1,'pyrax::clouddns::CloudDNSRecord.update()'],['../classpyrax_1_1clouddns_1_1CloudDNSDomain.html#abe52b977c101e59f342489ed18140819',1,'pyrax::clouddns::CloudDNSDomain.update()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancer.html#abe52b977c101e59f342489ed18140819',1,'pyrax::cloudloadbalancers::CloudLoadBalancer.update()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerManager.html#abe52b977c101e59f342489ed18140819',1,'pyrax::cloudloadbalancers::CloudLoadBalancerManager.update()'],['../classpyrax_1_1cloudloadbalancers_1_1Node.html#abe52b977c101e59f342489ed18140819',1,'pyrax::cloudloadbalancers::Node.update()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#abe52b977c101e59f342489ed18140819',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.update()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntity.html#abe52b977c101e59f342489ed18140819',1,'pyrax::cloudmonitoring::CloudMonitorEntity.update()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorCheck.html#abe52b977c101e59f342489ed18140819',1,'pyrax::cloudmonitoring::CloudMonitorCheck.update()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorNotification.html#abe52b977c101e59f342489ed18140819',1,'pyrax::cloudmonitoring::CloudMonitorNotification.update()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorAlarm.html#abe52b977c101e59f342489ed18140819',1,'pyrax::cloudmonitoring::CloudMonitorAlarm.update()']]],
+ ['update',['update',['../classpyrax_1_1autoscale_1_1ScalingGroup.html#abe52b977c101e59f342489ed18140819',1,'pyrax::autoscale::ScalingGroup.update()'],['../classpyrax_1_1autoscale_1_1ScalingGroupManager.html#abe52b977c101e59f342489ed18140819',1,'pyrax::autoscale::ScalingGroupManager.update()'],['../classpyrax_1_1autoscale_1_1AutoScalePolicy.html#abe52b977c101e59f342489ed18140819',1,'pyrax::autoscale::AutoScalePolicy.update()'],['../classpyrax_1_1autoscale_1_1AutoScaleWebhook.html#abe52b977c101e59f342489ed18140819',1,'pyrax::autoscale::AutoScaleWebhook.update()'],['../classpyrax_1_1autoscale_1_1AutoScaleClient.html#abe52b977c101e59f342489ed18140819',1,'pyrax::autoscale::AutoScaleClient.update()'],['../classpyrax_1_1clouddns_1_1CloudDNSRecord.html#abe52b977c101e59f342489ed18140819',1,'pyrax::clouddns::CloudDNSRecord.update()'],['../classpyrax_1_1clouddns_1_1CloudDNSDomain.html#abe52b977c101e59f342489ed18140819',1,'pyrax::clouddns::CloudDNSDomain.update()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancer.html#abe52b977c101e59f342489ed18140819',1,'pyrax::cloudloadbalancers::CloudLoadBalancer.update()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerManager.html#abe52b977c101e59f342489ed18140819',1,'pyrax::cloudloadbalancers::CloudLoadBalancerManager.update()'],['../classpyrax_1_1cloudloadbalancers_1_1Node.html#abe52b977c101e59f342489ed18140819',1,'pyrax::cloudloadbalancers::Node.update()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#abe52b977c101e59f342489ed18140819',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient.update()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntity.html#abe52b977c101e59f342489ed18140819',1,'pyrax::cloudmonitoring::CloudMonitorEntity.update()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorCheck.html#abe52b977c101e59f342489ed18140819',1,'pyrax::cloudmonitoring::CloudMonitorCheck.update()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorNotification.html#abe52b977c101e59f342489ed18140819',1,'pyrax::cloudmonitoring::CloudMonitorNotification.update()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorAlarm.html#abe52b977c101e59f342489ed18140819',1,'pyrax::cloudmonitoring::CloudMonitorAlarm.update()'],['../classpyrax_1_1queueing_1_1QueueClaimManager.html#abe52b977c101e59f342489ed18140819',1,'pyrax::queueing::QueueClaimManager.update()']]],
['update_5falarm',['update_alarm',['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntity.html#a5890867a408848b989089ddf26d20a18',1,'pyrax::cloudmonitoring::CloudMonitorEntity.update_alarm()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntityManager.html#a5890867a408848b989089ddf26d20a18',1,'pyrax::cloudmonitoring::CloudMonitorEntityManager.update_alarm()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#a5890867a408848b989089ddf26d20a18',1,'pyrax::cloudmonitoring::CloudMonitorClient.update_alarm()']]],
['update_5fcheck',['update_check',['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntityManager.html#a31db8ea7ae986d187cad68055d066a20',1,'pyrax::cloudmonitoring::CloudMonitorEntityManager.update_check()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#a31db8ea7ae986d187cad68055d066a20',1,'pyrax::cloudmonitoring::CloudMonitorClient.update_check()']]],
+ ['update_5fclaim',['update_claim',['../classpyrax_1_1queueing_1_1Queue.html#ac20bf3b335c30af755690d5d618b5d1e',1,'pyrax::queueing::Queue.update_claim()'],['../classpyrax_1_1queueing_1_1QueueClient.html#ac20bf3b335c30af755690d5d618b5d1e',1,'pyrax::queueing::QueueClient.update_claim()']]],
['update_5fdomain',['update_domain',['../classpyrax_1_1clouddns_1_1CloudDNSManager.html#a8ee67d6346234099ee06b44af2cb229b',1,'pyrax::clouddns::CloudDNSManager.update_domain()'],['../classpyrax_1_1clouddns_1_1CloudDNSClient.html#a8ee67d6346234099ee06b44af2cb229b',1,'pyrax::clouddns::CloudDNSClient.update_domain()']]],
['update_5fentity',['update_entity',['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorEntityManager.html#a6677002dbfa8807383168cfaa74670c8',1,'pyrax::cloudmonitoring::CloudMonitorEntityManager.update_entity()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#a6677002dbfa8807383168cfaa74670c8',1,'pyrax::cloudmonitoring::CloudMonitorClient.update_entity()']]],
['update_5fexc',['update_exc',['../namespacepyrax_1_1utils.html#aa76e326dca328bf4b12f55a0b94be973',1,'pyrax::utils']]],
diff --git a/docs/html/search/namespaces_70.js b/docs/html/search/namespaces_70.js
index 63d008cd..177e0770 100644
--- a/docs/html/search/namespaces_70.js
+++ b/docs/html/search/namespaces_70.js
@@ -17,6 +17,7 @@ var searchData=
['keystone_5fidentity',['keystone_identity',['../namespacepyrax_1_1identity_1_1keystone__identity.html',1,'pyrax::identity']]],
['manager',['manager',['../namespacepyrax_1_1manager.html',1,'pyrax']]],
['pyrax',['pyrax',['../namespacepyrax.html',1,'']]],
+ ['queueing',['queueing',['../namespacepyrax_1_1queueing.html',1,'pyrax']]],
['rax_5fidentity',['rax_identity',['../namespacepyrax_1_1identity_1_1rax__identity.html',1,'pyrax::identity']]],
['resource',['resource',['../namespacepyrax_1_1resource.html',1,'pyrax']]],
['service_5fcatalog',['service_catalog',['../namespacepyrax_1_1service__catalog.html',1,'pyrax']]],
diff --git a/docs/html/search/search.js b/docs/html/search/search.js
index 70d744be..854bb278 100644
--- a/docs/html/search/search.js
+++ b/docs/html/search/search.js
@@ -7,12 +7,12 @@
var indexSectionsWithContent =
{
- 0: "0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000010111111111011111101111110000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
- 1: "0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000010111111011010111101111100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
+ 0: "0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000010111111111011111111111110000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
+ 1: "0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000010111111011010111111111100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
2: "0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001000000000000100100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
- 3: "0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000010111010000010100001101100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
- 4: "0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000010111111111011111101111010000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
- 5: "0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000010111111111011111101111110000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
+ 3: "0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000010111010000010100011101100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
+ 4: "0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000010111111111011111111111010000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
+ 5: "0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000010111111111011111111111110000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
6: "0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001111010001010000101000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"
};
diff --git a/docs/html/search/variables_61.js b/docs/html/search/variables_61.js
index e870bae6..d926a1b1 100644
--- a/docs/html/search/variables_61.js
+++ b/docs/html/search/variables_61.js
@@ -3,6 +3,7 @@ var searchData=
['account_5fmeta_5fprefix',['account_meta_prefix',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#afc65c059b6cbdda2bf91e6f5b6608608',1,'pyrax::cf_wrapper::client::CFClient']]],
['add_5frecord',['add_record',['../classpyrax_1_1clouddns_1_1CloudDNSDomain.html#ad1ce0203df93b6f58971458de77f264e',1,'pyrax::clouddns::CloudDNSDomain.add_record()'],['../classpyrax_1_1clouddns_1_1CloudDNSClient.html#ad1ce0203df93b6f58971458de77f264e',1,'pyrax::clouddns::CloudDNSClient.add_record()']]],
['address',['address',['../classpyrax_1_1cloudloadbalancers_1_1Node.html#ade5a18d52133ef21f211020ceb464c07',1,'pyrax::cloudloadbalancers::Node::address()'],['../classpyrax_1_1cloudloadbalancers_1_1VirtualIP.html#ade5a18d52133ef21f211020ceb464c07',1,'pyrax::cloudloadbalancers::VirtualIP.address()']]],
+ ['age',['age',['../classpyrax_1_1queueing_1_1QueueMessage.html#a043a7693e1d2d30988a0f821e0ab5f94',1,'pyrax::queueing::QueueMessage']]],
['api',['api',['../classpyrax_1_1manager_1_1BaseManager.html#a6da13b696f737097e0146e47cc0d0985',1,'pyrax::manager::BaseManager']]],
['api_5fdate_5fpattern',['API_DATE_PATTERN',['../namespacepyrax_1_1base__identity.html#abc3f00c435af9a25ee98b025c8df847e',1,'pyrax::base_identity']]],
['att',['att',['../classpyrax_1_1utils_1_1__WaitThread.html#ac356deedcb6c8bb875aaedf10db0a455',1,'pyrax::utils::_WaitThread']]],
diff --git a/docs/html/search/variables_62.js b/docs/html/search/variables_62.js
index 4b6aba97..7d187e7b 100644
--- a/docs/html/search/variables_62.js
+++ b/docs/html/search/variables_62.js
@@ -1,5 +1,6 @@
var searchData=
[
['base_5fpath',['base_path',['../classpyrax_1_1cf__wrapper_1_1client_1_1FolderUploader.html#a80ae992797a306309f835d904c520c28',1,'pyrax::cf_wrapper::client::FolderUploader']]],
+ ['body',['body',['../classpyrax_1_1queueing_1_1QueueMessage.html#a14d48c2e9f05d0b03044eb45f308fcb0',1,'pyrax::queueing::QueueMessage']]],
['bulk_5fdelete_5finterval',['bulk_delete_interval',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#accd7312117a68637f8a9ad3f2b6968e6',1,'pyrax::cf_wrapper::client::CFClient']]]
];
diff --git a/docs/html/search/variables_63.js b/docs/html/search/variables_63.js
index 463f25c1..31fb82de 100644
--- a/docs/html/search/variables_63.js
+++ b/docs/html/search/variables_63.js
@@ -8,8 +8,10 @@ var searchData=
['cdn_5fmeta_5fprefix',['cdn_meta_prefix',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#a2952143e08d5da585f27b1b2271c03be',1,'pyrax::cf_wrapper::client::CFClient']]],
['cdn_5furl',['cdn_url',['../classpyrax_1_1cf__wrapper_1_1client_1_1Connection.html#af8614020022a9bd864339bd42b8fceb4',1,'pyrax::cf_wrapper::client::Connection']]],
['cidr',['cidr',['../classpyrax_1_1cloudnetworks_1_1CloudNetwork.html#a22262d03c6a363b1f41f4dffd3b1c797',1,'pyrax::cloudnetworks::CloudNetwork']]],
+ ['claim_5fid',['claim_id',['../classpyrax_1_1queueing_1_1QueueMessage.html#a4fba796dda2883012b75419f84e148ee',1,'pyrax::queueing::QueueMessage']]],
['classifiers',['classifiers',['../namespacesetup.html#a501bfc1867c9d0b5d91873982919a191',1,'setup']]],
['client',['client',['../classpyrax_1_1cf__wrapper_1_1client_1_1FolderUploader.html#ad5bc32b75da65fe60067f501a4bb6665',1,'pyrax::cf_wrapper::client::FolderUploader.client()'],['../classpyrax_1_1cf__wrapper_1_1client_1_1BulkDeleter.html#ad5bc32b75da65fe60067f501a4bb6665',1,'pyrax::cf_wrapper::client::BulkDeleter::client()'],['../classpyrax_1_1cf__wrapper_1_1container_1_1Container.html#ad5bc32b75da65fe60067f501a4bb6665',1,'pyrax::cf_wrapper::container::Container::client()'],['../classpyrax_1_1cf__wrapper_1_1storage__object_1_1StorageObject.html#ad5bc32b75da65fe60067f501a4bb6665',1,'pyrax::cf_wrapper::storage_object::StorageObject::client()']]],
+ ['client_5fid',['client_id',['../classpyrax_1_1queueing_1_1QueueClient.html#a3880622ca383fee22fbbac18442bae32',1,'pyrax::queueing::QueueClient']]],
['cloud_5fblockstorage',['cloud_blockstorage',['../namespacepyrax.html#a7f4dc3b1da79f21103723f78b910c8a5',1,'pyrax']]],
['cloud_5fdatabases',['cloud_databases',['../namespacepyrax.html#af1a86dab674b703fc06491e66aacadb6',1,'pyrax']]],
['cloud_5fdns',['cloud_dns',['../namespacepyrax.html#acb0f91693d36d52270ed91fd2e919fcc',1,'pyrax']]],
diff --git a/docs/html/search/variables_64.js b/docs/html/search/variables_64.js
index 18ddbeb1..9c403f02 100644
--- a/docs/html/search/variables_64.js
+++ b/docs/html/search/variables_64.js
@@ -2,6 +2,7 @@ var searchData=
[
['data',['data',['../classpyrax_1_1clouddns_1_1CloudDNSRecord.html#a511ae0b1c13f95e5f08f1a0dd3da3d93',1,'pyrax::clouddns::CloudDNSRecord.data()'],['../classpyrax_1_1clouddns_1_1CloudDNSPTRRecord.html#a511ae0b1c13f95e5f08f1a0dd3da3d93',1,'pyrax::clouddns::CloudDNSPTRRecord.data()']]],
['date_5fformat',['DATE_FORMAT',['../namespacepyrax_1_1base__identity.html#a71e8d76a94f92b1959b2ea603c78df9e',1,'pyrax::base_identity.DATE_FORMAT()'],['../namespacepyrax_1_1cf__wrapper_1_1client.html#a71e8d76a94f92b1959b2ea603c78df9e',1,'pyrax::cf_wrapper::client.DATE_FORMAT()']]],
+ ['days_5f14',['DAYS_14',['../namespacepyrax_1_1queueing.html#af085154866e1b5a067335fc640e279bd',1,'pyrax::queueing']]],
['debug',['debug',['../namespacepyrax.html#a4c919e19877c5868fcd9f7662c236649',1,'pyrax']]],
['default_5fcdn_5fttl',['default_cdn_ttl',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#a15a3ad1a3abde8c5a3fede95c2b24c19',1,'pyrax::cf_wrapper::client::CFClient']]],
['default_5fdelay',['DEFAULT_DELAY',['../namespacepyrax_1_1clouddns.html#a0695d4ce7bb0b1de03ba3068cde8d89a',1,'pyrax::clouddns']]],
diff --git a/docs/html/search/variables_68.js b/docs/html/search/variables_68.js
index b3592b9e..e94c3045 100644
--- a/docs/html/search/variables_68.js
+++ b/docs/html/search/variables_68.js
@@ -1,6 +1,7 @@
var searchData=
[
['head_5fdate_5fformat',['HEAD_DATE_FORMAT',['../namespacepyrax_1_1cf__wrapper_1_1client.html#aea95a0010e586f71e6587a111cae730c',1,'pyrax::cf_wrapper::client']]],
+ ['href',['href',['../classpyrax_1_1queueing_1_1QueueMessage.html#aecfca4286e302d5d945be6fe76b99c86',1,'pyrax::queueing::QueueMessage.href()'],['../classpyrax_1_1queueing_1_1QueueClaim.html#ab8d8e60d0ff1588f6381ad0bef8ad4b7',1,'pyrax::queueing::QueueClaim.href()']]],
['http_5flog_5fdebug',['http_log_debug',['../classpyrax_1_1base__identity_1_1BaseAuth.html#a80b9e3606456b3ebde43de9500b1fcbb',1,'pyrax::base_identity::BaseAuth.http_log_debug()'],['../classpyrax_1_1cf__wrapper_1_1client_1_1Connection.html#a80b9e3606456b3ebde43de9500b1fcbb',1,'pyrax::cf_wrapper::client::Connection::http_log_debug()'],['../classpyrax_1_1client_1_1BaseClient.html#a80b9e3606456b3ebde43de9500b1fcbb',1,'pyrax::client::BaseClient.http_log_debug()']]],
['http_5fstatus',['http_status',['../classpyrax_1_1exceptions_1_1BadRequest.html#adc16a229378030a650cd7e8d6dc16677',1,'pyrax::exceptions::BadRequest::http_status()'],['../classpyrax_1_1exceptions_1_1Unauthorized.html#adc16a229378030a650cd7e8d6dc16677',1,'pyrax::exceptions::Unauthorized::http_status()'],['../classpyrax_1_1exceptions_1_1Forbidden.html#adc16a229378030a650cd7e8d6dc16677',1,'pyrax::exceptions::Forbidden::http_status()'],['../classpyrax_1_1exceptions_1_1NotFound.html#adc16a229378030a650cd7e8d6dc16677',1,'pyrax::exceptions::NotFound::http_status()'],['../classpyrax_1_1exceptions_1_1NoUniqueMatch.html#adc16a229378030a650cd7e8d6dc16677',1,'pyrax::exceptions::NoUniqueMatch::http_status()'],['../classpyrax_1_1exceptions_1_1OverLimit.html#adc16a229378030a650cd7e8d6dc16677',1,'pyrax::exceptions::OverLimit::http_status()'],['../classpyrax_1_1exceptions_1_1HTTPNotImplemented.html#adc16a229378030a650cd7e8d6dc16677',1,'pyrax::exceptions::HTTPNotImplemented::http_status()']]],
['human_5fid',['HUMAN_ID',['../classpyrax_1_1resource_1_1BaseResource.html#a494d50825f5dfc848a56d8a568c16172',1,'pyrax::resource::BaseResource']]]
diff --git a/docs/html/search/variables_69.js b/docs/html/search/variables_69.js
index 44d62579..cd13aa28 100644
--- a/docs/html/search/variables_69.js
+++ b/docs/html/search/variables_69.js
@@ -1,6 +1,6 @@
var searchData=
[
- ['id',['id',['../classpyrax_1_1clouddns_1_1CloudDNSPTRRecord.html#acf2488b95c97e0378c9bf49de3b50f28',1,'pyrax::clouddns::CloudDNSPTRRecord.id()'],['../classpyrax_1_1cloudloadbalancers_1_1Node.html#acf2488b95c97e0378c9bf49de3b50f28',1,'pyrax::cloudloadbalancers::Node.id()'],['../classpyrax_1_1cloudloadbalancers_1_1VirtualIP.html#acf2488b95c97e0378c9bf49de3b50f28',1,'pyrax::cloudloadbalancers::VirtualIP.id()'],['../classpyrax_1_1cloudnetworks_1_1CloudNetwork.html#acf2488b95c97e0378c9bf49de3b50f28',1,'pyrax::cloudnetworks::CloudNetwork::id()'],['../classpyrax_1_1resource_1_1BaseResource.html#acf2488b95c97e0378c9bf49de3b50f28',1,'pyrax::resource::BaseResource::id()']]],
+ ['id',['id',['../classpyrax_1_1clouddns_1_1CloudDNSPTRRecord.html#acf2488b95c97e0378c9bf49de3b50f28',1,'pyrax::clouddns::CloudDNSPTRRecord.id()'],['../classpyrax_1_1cloudloadbalancers_1_1Node.html#acf2488b95c97e0378c9bf49de3b50f28',1,'pyrax::cloudloadbalancers::Node.id()'],['../classpyrax_1_1cloudloadbalancers_1_1VirtualIP.html#acf2488b95c97e0378c9bf49de3b50f28',1,'pyrax::cloudloadbalancers::VirtualIP.id()'],['../classpyrax_1_1cloudnetworks_1_1CloudNetwork.html#acf2488b95c97e0378c9bf49de3b50f28',1,'pyrax::cloudnetworks::CloudNetwork::id()'],['../classpyrax_1_1queueing_1_1QueueMessage.html#acf2488b95c97e0378c9bf49de3b50f28',1,'pyrax::queueing::QueueMessage::id()'],['../classpyrax_1_1queueing_1_1QueueClaim.html#acf2488b95c97e0378c9bf49de3b50f28',1,'pyrax::queueing::QueueClaim::id()'],['../classpyrax_1_1resource_1_1BaseResource.html#acf2488b95c97e0378c9bf49de3b50f28',1,'pyrax::resource::BaseResource::id()']]],
['identity',['identity',['../namespacepyrax.html#ab7fc5a23efc53b58e213cd4cdf931c9f',1,'pyrax']]],
['ignore',['ignore',['../classpyrax_1_1cf__wrapper_1_1client_1_1FolderUploader.html#a0e575fb50e8e0cc27c24104a3ced5a5c',1,'pyrax::cf_wrapper::client::FolderUploader']]],
['in_5fsetup',['in_setup',['../namespacepyrax.html#aee18dc40b08b618d107a4dc24a9e0459',1,'pyrax']]],
diff --git a/docs/html/search/variables_6c.js b/docs/html/search/variables_6c.js
index 4d4a9595..b1dc3b10 100644
--- a/docs/html/search/variables_6c.js
+++ b/docs/html/search/variables_6c.js
@@ -2,5 +2,6 @@ var searchData=
[
['label',['label',['../classpyrax_1_1cloudnetworks_1_1CloudNetwork.html#a22f45a3cb4f074e609f58ebaeef0ecf9',1,'pyrax::cloudnetworks::CloudNetwork']]],
['last_5fmodified',['last_modified',['../classpyrax_1_1cf__wrapper_1_1storage__object_1_1StorageObject.html#aacadc30373e677c508e7b598fe832e32',1,'pyrax::cf_wrapper::storage_object::StorageObject']]],
+ ['list_5fdate_5fformat',['LIST_DATE_FORMAT',['../namespacepyrax_1_1cf__wrapper_1_1client.html#abd01608caf4565408309af08113eb2b0',1,'pyrax::cf_wrapper::client']]],
['list_5fmethod',['list_method',['../classpyrax_1_1clouddns_1_1DomainResultsIterator.html#aef1de03914a0c0e4a1e69d43e72a4dbc',1,'pyrax::clouddns::DomainResultsIterator::list_method()'],['../classpyrax_1_1clouddns_1_1SubdomainResultsIterator.html#aef1de03914a0c0e4a1e69d43e72a4dbc',1,'pyrax::clouddns::SubdomainResultsIterator::list_method()'],['../classpyrax_1_1clouddns_1_1RecordResultsIterator.html#aef1de03914a0c0e4a1e69d43e72a4dbc',1,'pyrax::clouddns::RecordResultsIterator::list_method()']]]
];
diff --git a/docs/html/search/variables_6d.js b/docs/html/search/variables_6d.js
index feda84a5..216492ca 100644
--- a/docs/html/search/variables_6d.js
+++ b/docs/html/search/variables_6d.js
@@ -2,8 +2,11 @@ var searchData=
[
['management_5furl',['management_url',['../classpyrax_1_1client_1_1BaseClient.html#a0e7003a466834b21ae00b9640955da9f',1,'pyrax::client::BaseClient']]],
['manager',['manager',['../classpyrax_1_1clouddns_1_1ResultsIterator.html#a23416379944e641a8ad6bdbc95ef1859',1,'pyrax::clouddns::ResultsIterator::manager()'],['../classpyrax_1_1resource_1_1BaseResource.html#a23416379944e641a8ad6bdbc95ef1859',1,'pyrax::resource::BaseResource.manager()']]],
+ ['marker_5fpat',['marker_pat',['../namespacepyrax_1_1queueing.html#a4ee1e020cf80b7e79c3f93360c95c811',1,'pyrax::queueing']]],
['max_5ffile_5fsize',['max_file_size',['../classpyrax_1_1cf__wrapper_1_1client_1_1CFClient.html#a38c5b8bfe2405c63c56e3dec85a057bc',1,'pyrax::cf_wrapper::client::CFClient']]],
['max_5fsize',['MAX_SIZE',['../namespacepyrax_1_1cloudblockstorage.html#a395b0fb68a5628e06819cb4aa43631fe',1,'pyrax::cloudblockstorage']]],
['message',['message',['../classpyrax_1_1exceptions_1_1ClientException.html#ab8140947611504abcb64a4c277effcf5',1,'pyrax::exceptions::ClientException.message()'],['../classpyrax_1_1exceptions_1_1BadRequest.html#ae1ed0d7a6f352c7ee3ad978429822c6f',1,'pyrax::exceptions::BadRequest.message()'],['../classpyrax_1_1exceptions_1_1Unauthorized.html#ae1ed0d7a6f352c7ee3ad978429822c6f',1,'pyrax::exceptions::Unauthorized.message()'],['../classpyrax_1_1exceptions_1_1Forbidden.html#ae1ed0d7a6f352c7ee3ad978429822c6f',1,'pyrax::exceptions::Forbidden.message()'],['../classpyrax_1_1exceptions_1_1NotFound.html#ae1ed0d7a6f352c7ee3ad978429822c6f',1,'pyrax::exceptions::NotFound.message()'],['../classpyrax_1_1exceptions_1_1NoUniqueMatch.html#ae1ed0d7a6f352c7ee3ad978429822c6f',1,'pyrax::exceptions::NoUniqueMatch.message()'],['../classpyrax_1_1exceptions_1_1OverLimit.html#ae1ed0d7a6f352c7ee3ad978429822c6f',1,'pyrax::exceptions::OverLimit.message()'],['../classpyrax_1_1exceptions_1_1HTTPNotImplemented.html#ae1ed0d7a6f352c7ee3ad978429822c6f',1,'pyrax::exceptions::HTTPNotImplemented.message()']]],
- ['min_5fsize',['MIN_SIZE',['../namespacepyrax_1_1cloudblockstorage.html#aaba5e7c5484ccde364fadc3e6a496b1f',1,'pyrax::cloudblockstorage']]]
+ ['messages',['messages',['../classpyrax_1_1queueing_1_1QueueClaim.html#a7048605d09bb21159ccaab63402dc4e5',1,'pyrax::queueing::QueueClaim']]],
+ ['min_5fsize',['MIN_SIZE',['../namespacepyrax_1_1cloudblockstorage.html#aaba5e7c5484ccde364fadc3e6a496b1f',1,'pyrax::cloudblockstorage']]],
+ ['msg_5flimit',['MSG_LIMIT',['../namespacepyrax_1_1queueing.html#ae9142e29cab13b8a9c7b02ea91ba9695',1,'pyrax::queueing']]]
];
diff --git a/docs/html/search/variables_6e.js b/docs/html/search/variables_6e.js
index 8ea6c711..a8ff5269 100644
--- a/docs/html/search/variables_6e.js
+++ b/docs/html/search/variables_6e.js
@@ -1,6 +1,6 @@
var searchData=
[
- ['name',['name',['../classpyrax_1_1autoscale_1_1AutoScaleClient.html#a8ccf841cb59e451791bcb2e1ac4f1edc',1,'pyrax::autoscale::AutoScaleClient::name()'],['../classpyrax_1_1cf__wrapper_1_1container_1_1Container.html#ab74e6bf80237ddc4109968cedc58c151',1,'pyrax::cf_wrapper::container::Container.name()'],['../classpyrax_1_1cf__wrapper_1_1storage__object_1_1StorageObject.html#ab74e6bf80237ddc4109968cedc58c151',1,'pyrax::cf_wrapper::storage_object::StorageObject.name()'],['../classpyrax_1_1client_1_1BaseClient.html#a8ccf841cb59e451791bcb2e1ac4f1edc',1,'pyrax::client::BaseClient.name()'],['../classpyrax_1_1cloudblockstorage_1_1CloudBlockStorageClient.html#a8ccf841cb59e451791bcb2e1ac4f1edc',1,'pyrax::cloudblockstorage::CloudBlockStorageClient::name()'],['../classpyrax_1_1clouddatabases_1_1CloudDatabaseClient.html#a8ccf841cb59e451791bcb2e1ac4f1edc',1,'pyrax::clouddatabases::CloudDatabaseClient::name()'],['../classpyrax_1_1clouddns_1_1CloudDNSRecord.html#ab74e6bf80237ddc4109968cedc58c151',1,'pyrax::clouddns::CloudDNSRecord.name()'],['../classpyrax_1_1clouddns_1_1CloudDNSPTRRecord.html#ab74e6bf80237ddc4109968cedc58c151',1,'pyrax::clouddns::CloudDNSPTRRecord.name()'],['../classpyrax_1_1clouddns_1_1CloudDNSClient.html#a8ccf841cb59e451791bcb2e1ac4f1edc',1,'pyrax::clouddns::CloudDNSClient::name()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#a8ccf841cb59e451791bcb2e1ac4f1edc',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient::name()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#ab74e6bf80237ddc4109968cedc58c151',1,'pyrax::cloudmonitoring::CloudMonitorClient::name()'],['../classpyrax_1_1cloudnetworks_1_1CloudNetworkClient.html#ab74e6bf80237ddc4109968cedc58c151',1,'pyrax::cloudnetworks::CloudNetworkClient::name()'],['../classpyrax_1_1utils_1_1SelfDeletingTempfile.html#ab74e6bf80237ddc4109968cedc58c151',1,'pyrax::utils::SelfDeletingTempfile::name()'],['../classpyrax_1_1utils_1_1SelfDeletingTempDirectory.html#ab74e6bf80237ddc4109968cedc5
8c151',1,'pyrax::utils::SelfDeletingTempDirectory::name()'],['../namespacesetup.html#a8ccf841cb59e451791bcb2e1ac4f1edc',1,'setup.name()']]],
+ ['name',['name',['../classpyrax_1_1autoscale_1_1AutoScaleClient.html#a8ccf841cb59e451791bcb2e1ac4f1edc',1,'pyrax::autoscale::AutoScaleClient::name()'],['../classpyrax_1_1cf__wrapper_1_1container_1_1Container.html#ab74e6bf80237ddc4109968cedc58c151',1,'pyrax::cf_wrapper::container::Container.name()'],['../classpyrax_1_1cf__wrapper_1_1storage__object_1_1StorageObject.html#ab74e6bf80237ddc4109968cedc58c151',1,'pyrax::cf_wrapper::storage_object::StorageObject.name()'],['../classpyrax_1_1client_1_1BaseClient.html#a8ccf841cb59e451791bcb2e1ac4f1edc',1,'pyrax::client::BaseClient.name()'],['../classpyrax_1_1cloudblockstorage_1_1CloudBlockStorageClient.html#a8ccf841cb59e451791bcb2e1ac4f1edc',1,'pyrax::cloudblockstorage::CloudBlockStorageClient::name()'],['../classpyrax_1_1clouddatabases_1_1CloudDatabaseClient.html#a8ccf841cb59e451791bcb2e1ac4f1edc',1,'pyrax::clouddatabases::CloudDatabaseClient::name()'],['../classpyrax_1_1clouddns_1_1CloudDNSRecord.html#ab74e6bf80237ddc4109968cedc58c151',1,'pyrax::clouddns::CloudDNSRecord.name()'],['../classpyrax_1_1clouddns_1_1CloudDNSPTRRecord.html#ab74e6bf80237ddc4109968cedc58c151',1,'pyrax::clouddns::CloudDNSPTRRecord.name()'],['../classpyrax_1_1clouddns_1_1CloudDNSClient.html#a8ccf841cb59e451791bcb2e1ac4f1edc',1,'pyrax::clouddns::CloudDNSClient::name()'],['../classpyrax_1_1cloudloadbalancers_1_1CloudLoadBalancerClient.html#a8ccf841cb59e451791bcb2e1ac4f1edc',1,'pyrax::cloudloadbalancers::CloudLoadBalancerClient::name()'],['../classpyrax_1_1cloudmonitoring_1_1CloudMonitorClient.html#ab74e6bf80237ddc4109968cedc58c151',1,'pyrax::cloudmonitoring::CloudMonitorClient::name()'],['../classpyrax_1_1cloudnetworks_1_1CloudNetworkClient.html#ab74e6bf80237ddc4109968cedc58c151',1,'pyrax::cloudnetworks::CloudNetworkClient::name()'],['../classpyrax_1_1queueing_1_1Queue.html#ab74e6bf80237ddc4109968cedc58c151',1,'pyrax::queueing::Queue::name()'],['../classpyrax_1_1queueing_1_1QueueClient.html#a8ccf841cb59e451791bcb2e1ac4f1edc',1,'pyrax::queueing::QueueCli
ent::name()'],['../classpyrax_1_1utils_1_1SelfDeletingTempfile.html#ab74e6bf80237ddc4109968cedc58c151',1,'pyrax::utils::SelfDeletingTempfile::name()'],['../classpyrax_1_1utils_1_1SelfDeletingTempDirectory.html#ab74e6bf80237ddc4109968cedc58c151',1,'pyrax::utils::SelfDeletingTempDirectory::name()'],['../namespacesetup.html#a8ccf841cb59e451791bcb2e1ac4f1edc',1,'setup.name()']]],
['name_5fattr',['NAME_ATTR',['../classpyrax_1_1resource_1_1BaseResource.html#a74fac10a98253f8b0308159a33113ab9',1,'pyrax::resource::BaseResource']]],
['next_5furi',['next_uri',['../classpyrax_1_1clouddns_1_1ResultsIterator.html#a52d2d557c979c7d47d2f6733f9bb30c7',1,'pyrax::clouddns::ResultsIterator']]],
['no_5fsuch_5fcontainer_5fpattern',['no_such_container_pattern',['../namespacepyrax_1_1cf__wrapper_1_1client.html#ab3150aa95b9e341f9b4c3f11214a9097',1,'pyrax::cf_wrapper::client']]],
diff --git a/docs/html/search/variables_70.js b/docs/html/search/variables_70.js
index 98d3b194..84a4f791 100644
--- a/docs/html/search/variables_70.js
+++ b/docs/html/search/variables_70.js
@@ -5,7 +5,7 @@ var searchData=
['parent',['parent',['../classpyrax_1_1cloudloadbalancers_1_1Node.html#a457d913bff1ebc8671c1eca1c9d5fc03',1,'pyrax::cloudloadbalancers::Node.parent()'],['../classpyrax_1_1cloudloadbalancers_1_1VirtualIP.html#a457d913bff1ebc8671c1eca1c9d5fc03',1,'pyrax::cloudloadbalancers::VirtualIP.parent()']]],
['password',['password',['../classpyrax_1_1base__identity_1_1BaseAuth.html#ac0d6a26a6e1c25921ff65ba7790ee92d',1,'pyrax::base_identity::BaseAuth.password()'],['../classpyrax_1_1base__identity_1_1BaseAuth.html#a9dbb300e28bc21c8dab41b01883918eb',1,'pyrax::base_identity::BaseAuth.password()'],['../classpyrax_1_1identity_1_1rax__identity_1_1RaxIdentity.html#a9dbb300e28bc21c8dab41b01883918eb',1,'pyrax::identity::rax_identity::RaxIdentity.password()']]],
['path',['path',['../namespacepyrax_1_1identity.html#ae6fc00af7c5b5a7c5f40ce6dc6b47d85',1,'pyrax::identity']]],
- ['plural_5fresponse_5fkey',['plural_response_key',['../classpyrax_1_1manager_1_1BaseManager.html#a692be54e20855ad7f41d446719f26491',1,'pyrax::manager::BaseManager']]],
+ ['plural_5fresponse_5fkey',['plural_response_key',['../classpyrax_1_1manager_1_1BaseManager.html#a692be54e20855ad7f41d446719f26491',1,'pyrax::manager::BaseManager.plural_response_key()'],['../classpyrax_1_1queueing_1_1QueueMessageManager.html#a692be54e20855ad7f41d446719f26491',1,'pyrax::queueing::QueueMessageManager::plural_response_key()']]],
['policies',['policies',['../classpyrax_1_1autoscale_1_1ScalingGroup.html#a20e4bd6dc33dd1f27f9b7eaed505f80e',1,'pyrax::autoscale::ScalingGroup']]],
['policy',['policy',['../classpyrax_1_1autoscale_1_1AutoScaleWebhook.html#ad986b73e9d5f47a623a9b6d773c25e34',1,'pyrax::autoscale::AutoScaleWebhook']]],
['port',['port',['../classpyrax_1_1cloudloadbalancers_1_1Node.html#af8fb0f45ee0195c7422a49e6a8d72369',1,'pyrax::cloudloadbalancers::Node']]],
diff --git a/docs/html/search/variables_71.html b/docs/html/search/variables_71.html
new file mode 100644
index 00000000..cceeff20
--- /dev/null
+++ b/docs/html/search/variables_71.html
@@ -0,0 +1,25 @@
+
+
+
+
+
+
+
+
+
+
Loading...
+
+
+
Searching...
+
No Matches
+
+
+
+
diff --git a/docs/html/search/variables_71.js b/docs/html/search/variables_71.js
new file mode 100644
index 00000000..697e92da
--- /dev/null
+++ b/docs/html/search/variables_71.js
@@ -0,0 +1,4 @@
+var searchData=
+[
+ ['queues',['queues',['../namespacepyrax.html#a66c651710751cc178b5a5c0f029a9e8f',1,'pyrax']]]
+];
diff --git a/docs/html/search/variables_74.js b/docs/html/search/variables_74.js
index 1da3075e..9ea7dd53 100644
--- a/docs/html/search/variables_74.js
+++ b/docs/html/search/variables_74.js
@@ -8,6 +8,6 @@ var searchData=
['token',['token',['../classpyrax_1_1base__identity_1_1BaseAuth.html#a623ef6987ef3bd185c07b28b13e46d34',1,'pyrax::base_identity::BaseAuth.token()'],['../classpyrax_1_1base__identity_1_1BaseAuth.html#a87da3d8264af1c9427605148f20dd9c4',1,'pyrax::base_identity::BaseAuth.token()']]],
['total_5fbytes',['total_bytes',['../classpyrax_1_1cf__wrapper_1_1container_1_1Container.html#a15d97ed7e27cf9263c3bc520f95e1d82',1,'pyrax::cf_wrapper::container::Container.total_bytes()'],['../classpyrax_1_1cf__wrapper_1_1storage__object_1_1StorageObject.html#a15d97ed7e27cf9263c3bc520f95e1d82',1,'pyrax::cf_wrapper::storage_object::StorageObject.total_bytes()']]],
['trace',['trace',['../namespacepyrax_1_1utils.html#a1ef4c4162762c60a00cf44b5969127c5',1,'pyrax::utils']]],
- ['ttl',['ttl',['../classpyrax_1_1cf__wrapper_1_1client_1_1FolderUploader.html#a24139992a63da93bef33b5c8e6adc8bf',1,'pyrax::cf_wrapper::client::FolderUploader.ttl()'],['../classpyrax_1_1clouddns_1_1CloudDNSRecord.html#a24139992a63da93bef33b5c8e6adc8bf',1,'pyrax::clouddns::CloudDNSRecord.ttl()'],['../classpyrax_1_1clouddns_1_1CloudDNSPTRRecord.html#a24139992a63da93bef33b5c8e6adc8bf',1,'pyrax::clouddns::CloudDNSPTRRecord.ttl()']]],
+ ['ttl',['ttl',['../classpyrax_1_1cf__wrapper_1_1client_1_1FolderUploader.html#a24139992a63da93bef33b5c8e6adc8bf',1,'pyrax::cf_wrapper::client::FolderUploader.ttl()'],['../classpyrax_1_1clouddns_1_1CloudDNSRecord.html#a24139992a63da93bef33b5c8e6adc8bf',1,'pyrax::clouddns::CloudDNSRecord.ttl()'],['../classpyrax_1_1clouddns_1_1CloudDNSPTRRecord.html#a24139992a63da93bef33b5c8e6adc8bf',1,'pyrax::clouddns::CloudDNSPTRRecord.ttl()'],['../classpyrax_1_1queueing_1_1QueueMessage.html#a24139992a63da93bef33b5c8e6adc8bf',1,'pyrax::queueing::QueueMessage.ttl()']]],
['type',['type',['../classpyrax_1_1clouddns_1_1CloudDNSRecord.html#a7aead736a07eaf25623ad7bfa1f0ee2d',1,'pyrax::clouddns::CloudDNSRecord.type()'],['../classpyrax_1_1clouddns_1_1CloudDNSPTRRecord.html#a7aead736a07eaf25623ad7bfa1f0ee2d',1,'pyrax::clouddns::CloudDNSPTRRecord::type()'],['../classpyrax_1_1cloudloadbalancers_1_1Node.html#a7aead736a07eaf25623ad7bfa1f0ee2d',1,'pyrax::cloudloadbalancers::Node.type()'],['../classpyrax_1_1cloudloadbalancers_1_1VirtualIP.html#a7aead736a07eaf25623ad7bfa1f0ee2d',1,'pyrax::cloudloadbalancers::VirtualIP::type()']]]
];
diff --git a/docs/html/utils_8py.html b/docs/html/utils_8py.html
index c73686cb..27a0ff75 100644
--- a/docs/html/utils_8py.html
+++ b/docs/html/utils_8py.html
@@ -102,8 +102,10 @@
Convenience method for executing operating system commands.
def get_checksum
Returns the MD5 checksum in hex for the given content.
-def random_name
- Generates a random name; useful for testing.
+def random_unicode
+ Generates a random name; useful for testing.
+def random_ascii
+ Generates a random name; useful for testing.
def coerce_string_to_list
For parameters that can take either a single string or a list of strings, this function will ensure that the result is a list containing the passed values.
def folder_size
@@ -130,6 +132,8 @@
Compares `nm` with the supplied patterns, and returns True if it matches at least one.
def update_exc
Adds additional text to an exception's error message.
+def case_insensitive_update
+ Given two dicts, updates the first one with the second, but considers keys that are identical except for case to be the same.
def env
Returns the first environment variable set if none are non-empty, defaults to "" or keyword arg default.
def unauthenticated
@@ -164,7 +168,7 @@
diff --git a/docs/html/version_8py.html b/docs/html/version_8py.html
index 76b5865d..f5f2c6ca 100644
--- a/docs/html/version_8py.html
+++ b/docs/html/version_8py.html
@@ -88,7 +88,7 @@
namespace pyrax::version
Variables
-string version = "1.5.1"
+string version = "1.6.0"
@@ -108,7 +108,7 @@
diff --git a/docs/queues.md b/docs/queues.md
new file mode 100644
index 00000000..c0209e2f
--- /dev/null
+++ b/docs/queues.md
@@ -0,0 +1,123 @@
+# Queues
+
+## Basic Concepts
+Queues is an open source, scalable, and highly available message and notifications service, based on the OpenStack Marconi project. Users of this service can create and manage a producer-consumer or a publisher-subscriber model. Unlimited queues and messages give users the flexibility they need to create powerful web applications in the cloud.
+
+It consists of a few basic components: queues, messages, claims, and statistics. In the producer-consumer model, users create queues in which producers, or servers, can post messages. Workers, or consumers, can then claim those messages and delete them after they complete the actions associated with the messages. A single claim can contain multiple messages, and administrators can query claims for status.
+
+In the publisher-subscriber model, messages are posted to a queue as in the producer-consumer model, but messages are never claimed. Instead, subscribers, or watchers, send GET requests to pull all messages that have been posted since their last request. In this model, a message remains in the queue, unclaimed, until the message's time to live (TTL) has expired.
+
+In both of these models, administrators can get queue statistics that display the most recent and oldest messages, the number of unclaimed messages, and more.
+
+
+## Using Queues in pyrax
+Once you have authenticated, you can reference the Queues service via `pyrax.queues`. To make your coding easier, include the following line at the beginning of your code:
+
+ pq = pyrax.queues
+
+Then you can simply use the alias `pq` to reference the service. All of the code samples in this document assume that `pq` has been defined this way.
+
+
+## Client ID
+Cloud Queues requires that every client accessing queues have a unique **Client ID**. This Client ID must be a UUID string in its canonical form. Example: "3381af92-2b9e-11e3-b191-71861300734c".
+
+If you aren't familiar with UUIDs, Python provides a module in the Standard Library for working with them. Here is the code for creating a UUID string compatible with Cloud Queues:
+
+ import uuid
+ my_client_id = str(uuid.uuid4())
+
+Once you have your ID, you need to make it available to pyrax. There are two ways:
+
+1) After authenticating, but before calling any Cloud Queues methods, set it directly:
+
+ pq.client_id = my_client_id
+
+2) Export it to an environment variable named `CLOUD_QUEUES_ID`, either in your .bashrc, or by doing it explicitly:
+
+ export CLOUD_QUEUES_ID='3381af92-2b9e-11e3-b191-71861300734c'
+
+If you try to use any of the Cloud Queues methods without setting this value, a `QueueClientIDNotDefined` exception is raised.
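Putting the two options together, here is a minimal sketch using only the standard library; the pyrax calls themselves are omitted, and the regex for the canonical form is illustrative:

```python
import os
import re
import uuid

# Generate a Client ID in canonical UUID form, as Cloud Queues requires.
my_client_id = str(uuid.uuid4())

# Canonical form: five lowercase hex groups of lengths 8-4-4-4-12.
CANONICAL_UUID = re.compile(
        r"^[0-9a-f]{8}-(?:[0-9a-f]{4}-){3}[0-9a-f]{12}$")
assert CANONICAL_UUID.match(my_client_id)

# Option 2 from the text: expose the ID through the environment variable
# that pyrax looks for, so it is picked up automatically.
os.environ["CLOUD_QUEUES_ID"] = my_client_id
```

With option 1 you would instead assign the string directly to `pq.client_id` after authenticating.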
+
+
+## Creating a Queue
+Queues require a unique name. If you try to create a queue with a name that already exists, a `DuplicateQueue` exception is raised. The command to create a queue is:
+
+ queue = pq.create("my_unique_queue")
+
+There is currently no way to list existing queues, so if you need to determine whether a queue by a specific name exists, call:
+
+ exists = pq.queue_exists("name_to_check")
+
+This call returns `True` or `False`, depending on the existence of a queue with the given name.
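The existence check supports a simple "ensure queue" pattern. The sketch below models it against a toy in-memory registry rather than the real service; the function names deliberately mirror the pyrax calls, but this is not pyrax code:

```python
# Toy registry standing in for the Cloud Queues service.
existing = set()

def queue_exists(name):
    return name in existing

def create(name):
    if name in existing:
        # Mirrors the DuplicateQueue exception described above.
        raise ValueError("DuplicateQueue: %s" % name)
    existing.add(name)
    return name

def ensure_queue(name):
    # Create the queue only if it does not already exist.
    if not queue_exists(name):
        return create(name)
    return name

ensure_queue("my_unique_queue")
ensure_queue("my_unique_queue")  # second call is a safe no-op
```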
+
+
+## Posting a Message to a Queue
+Messages can be any type of data, as long as they do not exceed 256 KB in length. The message body can be a simple value, a chunk of XML, a list of JSON values, or anything else. pyrax handles the JSON-encoding required to post the message.
+
+You need to specify the queue you wish to post to. This can be either the name of the queue, or a `Queue` object. If you already have a `Queue` object reference, you can call its `post_message()` method directly. The call is:
+
+ msg = pq.post_message(queue, body[, ttl])
+ # or
+ msg = queue.post_message(body[, ttl])
+
+Note that there is an optional `ttl` parameter, for specifying the **TTL**, or **Time To Live** for the message. If specified, the value of ttl must be between 60 and 1209600 seconds (14 days). If not specified, a default of 14 days is used.
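The TTL rules above can be encoded in a small helper. This is a hypothetical function for illustration (`MIN_TTL`, `MAX_TTL`, and `normalize_ttl` are not part of pyrax; the service performs its own validation):

```python
MIN_TTL = 60           # one minute
MAX_TTL = 1209600      # 14 days, the documented maximum and default

def normalize_ttl(ttl=None):
    """Apply the documented default and range check for a message TTL."""
    if ttl is None:
        return MAX_TTL  # unspecified: default to 14 days
    if not MIN_TTL <= ttl <= MAX_TTL:
        raise ValueError(
                "ttl must be between %d and %d seconds" % (MIN_TTL, MAX_TTL))
    return ttl
```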
+
+
+## Listing Messages in a Queue
+To get a listing of messages in a queue, you need the queue name or a `Queue` object reference. If you have a `Queue` object, you can call its `list()` method directly. The call is:
+
+ msgs = pq.list_messages(queue[, echo=False][, include_claimed=False]
+ [, marker=None][, limit=None])
+ # or
+ msgs = queue.list([echo=False][, include_claimed=False]
+ [, marker=None][, limit=None])
+
+The optional parameters and their effects are:
+
+Parameter | Default | Effect
+---- | ---- | ----
+**echo** | False | When True, your own messages are included.
+**include_claimed** | False | By default, only unclaimed messages are returned. Pass this as True to get all messages, claimed or not.
+**marker** | None | Used for pagination. Normally this should not be needed, as the `list()` methods handle this for you.
+**limit** | 10 | The maximum number of messages to return. Note that you may receive fewer than the specified limit if there aren't that many available messages in the queue.
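The marker/limit semantics in the table can be illustrated locally. This sketch shows the pagination pattern the parameters describe, not the Cloud Queues wire protocol:

```python
def page(messages, marker=None, limit=10):
    """Return (chunk, next_marker) from a list of (msg_id, body) pairs."""
    ids = [m[0] for m in messages]
    # Resume just after the marker, or from the start if none was given.
    start = ids.index(marker) + 1 if marker in ids else 0
    chunk = messages[start:start + limit]
    next_marker = chunk[-1][0] if chunk else None
    return chunk, next_marker

msgs = [("m%02d" % i, "body %d" % i) for i in range(25)]
first, marker = page(msgs)                  # m00..m09
second, marker = page(msgs, marker=marker)  # m10..m19
last, marker = page(msgs, marker=marker)    # the remaining 5
```

This is why `marker` normally needs no attention: each call hands back the marker to pass to the next one.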
+
+
+## Claiming Messages in a Queue
+Claiming messages is how a worker processing a queue marks messages as being handled by that worker, preventing two workers from processing the same message.
+
+To claim messages you need the queue name or a `Queue` object reference. If you have a `Queue` object, you can call its `claim()` method directly. When claiming messages you must not only specify the queue, but also provide a TTL and a grace period. You may also specify a limit on the number of messages to claim. The call is:
+
+ pq.claim_messages(queue, ttl, grace_period[, limit])
+
+An explanation of the parameters of this call follows:
+
+Parameter | Default | Notes
+---- | ---- | ----
+**queue** | | Either the name of the queue to claim messages from, or the corresponding `Queue` object.
+**ttl** | | The ttl attribute specifies how long the server waits before releasing the claim. The ttl value must be between 60 and 43200 seconds (12 hours).
+**grace_period** | | The grace_period attribute specifies the message grace period in seconds. The value of the grace period must be between 60 and 43200 seconds (12 hours). To deal with workers that have stopped responding (for up to 1209600 seconds or 14 days, including claim lifetime), the server extends the lifetime of claimed messages to be at least as long as the lifetime of the claim itself, plus the specified grace period. If a claimed message would normally live longer than the grace period, its expiration is not adjusted.
+**limit** | 10 | The number of messages to claim. The maximum number of messages you may claim at once is 20.
+
+If there are no messages to claim, the method returns `None`. When you create a successful claim, a `QueueClaim` object is returned that has a `messages` attribute. This is a list of `QueueMessage` objects representing the claimed messages. You can iterate through this list to process the messages, and once the message has been processed, call its `delete()` method to remove it from the queue to ensure that it is not processed more than once.
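The grace-period rule in the table above reduces to simple arithmetic. This sketch encodes that rule; the function name is illustrative, not a pyrax API:

```python
def effective_message_ttl(message_ttl, claim_ttl, grace_period):
    """A claimed message lives at least as long as the claim TTL plus the
    grace period; an already-longer message TTL is left unchanged."""
    return max(message_ttl, claim_ttl + grace_period)

short_lived = effective_message_ttl(300, 3600, 60)    # extended to 3660
long_lived = effective_message_ttl(86400, 3600, 60)   # stays at 86400
```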
+
+
+## Renewing a Claim
+Once a claim has been made, if the TTL and grace period expire, the claim is automatically released and the messages are made available for others to claim. If you have a long-running process and want to ensure that this does not happen mid-process, update the claim with a new TTL, a new grace period, or both. Updating resets the age of the claim, restarting its TTL. To update a claim, call:
+
+ pq.update_claim(queue, claim[, ttl=None][, grace_period=None])
+ # or
+ queue.update_claim(claim[, ttl=None][, grace_period=None])
+
+
+## Refreshing a Claim
+If you have a `QueueClaim` object, keep in mind that it is not a live window into the status of the claim; rather, it is a snapshot of the claim at the time the object was created. To refresh it with the latest information, call its `reload()` method. This refreshes all of its attributes with the most current status of the claim.
+
+
+## Releasing a Claim
+If you have a claim on several messages and must abandon processing of those messages for any reason, you should release the claim so that those messages can be processed by other workers as soon as possible, instead of waiting for the claim's TTL to expire. When you release a claim, the claimed messages are immediately made available in the queue for other workers to claim. To release a claim, call:
+
+ pq.release_claim(queue, claim)
+ # or
+ queue.release_claim(claim)
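The whole lifecycle described above (claim a batch, delete what you finish, release what you abandon) can be walked through with a toy in-memory model. This captures the semantics only; real code goes through `pq` or `Queue` objects:

```python
queue = ["task-%d" % i for i in range(5)]

def claim(q, limit=10):
    # Claimed messages remain in the queue; they are only withheld from
    # other workers until deleted or released.
    return q[:limit]

def delete(q, msg):
    q.remove(msg)  # processing finished: remove the message for good

def release(q, claimed):
    # Nothing to put back: the messages never left the queue, they
    # simply become claimable again immediately.
    pass

claimed = claim(queue, limit=3)
for msg in claimed[:2]:
    delete(queue, msg)       # processed two of the three
release(queue, claimed[2:])  # abandon the rest for other workers
```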
+
+
diff --git a/pyrax/__init__.py b/pyrax/__init__.py
index 9d9ade73..2c493ffc 100755
--- a/pyrax/__init__.py
+++ b/pyrax/__init__.py
@@ -69,6 +69,7 @@
from clouddns import CloudDNSClient
from cloudnetworks import CloudNetworkClient
from cloudmonitoring import CloudMonitorClient
+ from queueing import QueueClient
except ImportError:
# See if this is the result of the importing of version.py in setup.py
callstack = inspect.stack()
@@ -90,6 +91,7 @@
cloud_networks = None
cloud_monitoring = None
autoscale = None
+queues = None
# Default region for all services. Can be individually overridden if needed
default_region = None
# Encoding to use when working with non-ASCII names
@@ -118,6 +120,7 @@
"compute:network": CloudNetworkClient,
"monitor": CloudMonitorClient,
"autoscale": AutoScaleClient,
+ "queues": QueueClient,
}
@@ -531,7 +534,7 @@ def clear_credentials():
"""De-authenticate by clearing all the names back to None."""
global identity, regions, services, cloudservers, cloudfiles
global cloud_loadbalancers, cloud_databases, cloud_blockstorage, cloud_dns
- global cloud_networks, cloud_monitoring, autoscale
+ global cloud_networks, cloud_monitoring, autoscale, queues
identity = None
regions = tuple()
services = tuple()
@@ -544,6 +547,7 @@ def clear_credentials():
cloud_networks = None
cloud_monitoring = None
autoscale = None
+ queues = None
def _make_agent_name(base):
@@ -561,7 +565,7 @@ def connect_to_services(region=None):
"""Establishes authenticated connections to the various cloud APIs."""
global cloudservers, cloudfiles, cloud_loadbalancers, cloud_databases
global cloud_blockstorage, cloud_dns, cloud_networks, cloud_monitoring
- global autoscale
+ global autoscale, queues
cloudservers = connect_to_cloudservers(region=region)
cloudfiles = connect_to_cloudfiles(region=region)
cloud_loadbalancers = connect_to_cloud_loadbalancers(region=region)
@@ -571,6 +575,7 @@ def connect_to_services(region=None):
cloud_networks = connect_to_cloud_networks(region=region)
cloud_monitoring = connect_to_cloud_monitoring(region=region)
autoscale = connect_to_autoscale(region=region)
+ queues = connect_to_queues(region=region)
def _get_service_endpoint(svc, region=None, public=True):
@@ -723,6 +728,12 @@ def connect_to_autoscale(region=None):
service_type="autoscale", region=region)
+def connect_to_queues(region=None):
+ """Creates a client for working with Queues."""
+ return _create_client(ep_name="queues",
+ service_type="queues", region=region)
+
+
def get_http_debug():
return _http_debug
@@ -735,7 +746,7 @@ def set_http_debug(val):
identity.http_log_debug = val
for svc in (cloudservers, cloudfiles, cloud_loadbalancers,
cloud_blockstorage, cloud_databases, cloud_dns, cloud_networks,
- autoscale):
+ autoscale, queues):
if svc is not None:
svc.http_log_debug = val
if not val:
diff --git a/pyrax/autoscale.py b/pyrax/autoscale.py
index b7511017..e61fa1eb 100644
--- a/pyrax/autoscale.py
+++ b/pyrax/autoscale.py
@@ -328,6 +328,20 @@ def get_configuration(self, scaling_group):
return resp_body.get("groupConfiguration")
+ def replace(self, scaling_group, name, cooldown, min_entities,
+ max_entities, metadata=None):
+ """
+ Replace an existing ScalingGroup configuration. All of the attributes
+ must be specified. If you wish to delete any of the optional
+ attributes, pass them in as None.
+ """
+ body = self._create_group_config_body(name, cooldown, min_entities,
+ max_entities, metadata=metadata)
+ group_id = utils.get_id(scaling_group)
+ uri = "/%s/%s/config" % (self.uri_base, group_id)
+ resp, resp_body = self.api.method_put(uri, body=body)
+
+
def update(self, scaling_group, name=None, cooldown=None,
min_entities=None, max_entities=None, metadata=None):
"""
@@ -392,6 +406,25 @@ def get_launch_config(self, scaling_group):
return ret
+ def replace_launch_config(self, scaling_group, launch_config_type,
+ server_name, image, flavor, disk_config=None, metadata=None,
+ personality=None, networks=None, load_balancers=None,
+ key_name=None):
+ """
+ Replace an existing launch configuration. All of the attributes must be
+ specified. If you wish to delete any of the optional attributes, pass
+ them in as None.
+ """
+ group_id = utils.get_id(scaling_group)
+ uri = "/%s/%s/launch" % (self.uri_base, group_id)
+ body = self._create_launch_config_body(
+ launch_config_type=launch_config_type, server_name=server_name,
+ image=image, flavor=flavor, disk_config=disk_config,
+ metadata=metadata, personality=personality, networks=networks,
+ load_balancers=load_balancers, key_name=key_name)
+ resp, resp_body = self.api.method_put(uri, body=body)
+
+
def update_launch_config(self, scaling_group, server_name=None, image=None,
flavor=None, disk_config=None, metadata=None, personality=None,
networks=None, load_balancers=None, key_name=None):
@@ -414,7 +447,7 @@ def update_launch_config(self, scaling_group, server_name=None, image=None,
"server": {
"name": server_name or srv_args.get("name"),
"imageRef": image or srv_args.get("imageRef"),
- "flavorRef": flavor or srv_args.get("flavorRef"),
+ "flavorRef": "%s" % flavor or srv_args.get("flavorRef"),
"OS-DCF:diskConfig": disk_config or
srv_args.get("OS-DCF:diskConfig"),
"personality": personality or
@@ -425,8 +458,9 @@ def update_launch_config(self, scaling_group, server_name=None, image=None,
"loadBalancers": load_balancers or lb_args,
},
}
- if key_name is not None:
- body["args"]["server"]["key_name"] = key_name
+ key_name = key_name or srv_args.get("key_name")
+ if key_name:
+ body["args"]["server"] = key_name
resp, resp_body = self.api.method_put(uri, body=body)
return None
@@ -452,6 +486,18 @@ def add_policy(self, scaling_group, name, policy_type, cooldown,
'is_percent' is True, in which case it is treated as a percentage.
"""
uri = "/%s/%s/policies" % (self.uri_base, utils.get_id(scaling_group))
+ body = self._create_policy_body(name, policy_type, cooldown,
+ change=change, is_percent=is_percent,
+ desired_capacity=desired_capacity, args=args)
+ # "body" needs to be a list
+ body = [body]
+ resp, resp_body = self.api.method_post(uri, body=body)
+ pol_info = resp_body.get("policies")[0]
+ return AutoScalePolicy(self, pol_info, scaling_group)
+
+
+ def _create_policy_body(self, name, policy_type, cooldown, change=None,
+ is_percent=None, desired_capacity=None, args=None):
body = {"name": name, "cooldown": cooldown, "type": policy_type}
if change is not None:
if is_percent:
@@ -462,11 +508,7 @@ def add_policy(self, scaling_group, name, policy_type, cooldown,
body["desiredCapacity"] = desired_capacity
if args is not None:
body["args"] = args
- # "body" needs to be a list
- body = [body]
- resp, resp_body = self.api.method_post(uri, body=body)
- pol_info = resp_body.get("policies")[0]
- return AutoScalePolicy(self, pol_info, scaling_group)
+ return body
def list_policies(self, scaling_group):
@@ -490,6 +532,23 @@ def get_policy(self, scaling_group, policy):
return AutoScalePolicy(self, data, scaling_group)
+ def replace_policy(self, scaling_group, policy, name,
+ policy_type, cooldown, change=None, is_percent=False,
+ desired_capacity=None, args=None):
+ """
+ Replace an existing policy. All of the attributes must be specified. If
+ you wish to delete any of the optional attributes, pass them in as
+ None.
+ """
+ policy_id = utils.get_id(policy)
+ group_id = utils.get_id(scaling_group)
+ uri = "/%s/%s/policies/%s" % (self.uri_base, group_id, policy_id)
+ body = self._create_policy_body(name=name, policy_type=policy_type,
+ cooldown=cooldown, change=change, is_percent=is_percent,
+ desired_capacity=desired_capacity, args=args)
+ resp, resp_body = self.api.method_put(uri, body=body)
+
+
def update_policy(self, scaling_group, policy, name=None, policy_type=None,
cooldown=None, change=None, is_percent=False,
desired_capacity=None, args=None):
@@ -521,6 +580,7 @@ def update_policy(self, scaling_group, policy, name=None, policy_type=None,
body["change"] = policy.change
elif getattr(policy, 'desiredCapacity', None) is not None:
body["desiredCapacity"] = policy.desiredCapacity
+ args = args or getattr(policy, 'args', None)
if args is not None:
body["args"] = args
resp, resp_body = self.api.method_put(uri, body=body)
@@ -545,6 +605,13 @@ def delete_policy(self, scaling_group, policy):
utils.get_id(scaling_group), utils.get_id(policy))
resp, resp_body = self.api.method_delete(uri)
+ def _create_webhook_body(self, name, metadata=None):
+ if metadata is None:
+ # If updating a group with existing metadata, metadata MUST be
+ # passed. Leaving it out causes Otter to return 400.
+ metadata = {}
+ body = {"name": name, "metadata": metadata}
+ return body
def add_webhook(self, scaling_group, policy, name, metadata=None):
"""
@@ -552,14 +619,12 @@ def add_webhook(self, scaling_group, policy, name, metadata=None):
"""
uri = "/%s/%s/policies/%s/webhooks" % (self.uri_base,
utils.get_id(scaling_group), utils.get_id(policy))
- body = {"name": name}
- if metadata is not None:
- body["metadata"] = metadata
+ body = self._create_webhook_body(name, metadata=metadata)
# "body" needs to be a list
body = [body]
resp, resp_body = self.api.method_post(uri, body=body)
data = resp_body.get("webhooks")[0]
- return AutoScaleWebhook(self, data, policy)
+ return AutoScaleWebhook(self, data, policy, scaling_group)
def list_webhooks(self, scaling_group, policy):
@@ -569,7 +634,7 @@ def list_webhooks(self, scaling_group, policy):
uri = "/%s/%s/policies/%s/webhooks" % (self.uri_base,
utils.get_id(scaling_group), utils.get_id(policy))
resp, resp_body = self.api.method_get(uri)
- return [AutoScaleWebhook(self, data, policy)
+ return [AutoScaleWebhook(self, data, policy, scaling_group)
for data in resp_body.get("webhooks", [])]
@@ -582,7 +647,24 @@ def get_webhook(self, scaling_group, policy, webhook):
utils.get_id(webhook))
resp, resp_body = self.api.method_get(uri)
data = resp_body.get("webhook")
- return AutoScaleWebhook(self, data, policy)
+ return AutoScaleWebhook(self, data, policy, scaling_group)
+
+
+ def replace_webhook(self, scaling_group, policy, webhook, name,
+ metadata=None):
+ """
+ Replace an existing webhook. All of the attributes must be specified.
+ If you wish to delete any of the optional attributes, pass them in as
+ None.
+ """
+ uri = "/%s/%s/policies/%s/webhooks/%s" % (self.uri_base,
+ utils.get_id(scaling_group), utils.get_id(policy),
+ utils.get_id(webhook))
+ body = self._create_webhook_body(name, metadata=metadata)
+ resp, resp_body = self.api.method_put(uri, body=body)
def update_webhook(self, scaling_group, policy, webhook, name=None,
@@ -681,15 +763,47 @@ def _create_body(self, name, cooldown, min_entities, max_entities,
if networks is None:
# Default to ServiceNet only
networks = [{"uuid": SERVICE_NET_ID}]
- if load_balancers is None:
- load_balancers = []
if scaling_policies is None:
scaling_policies = []
+ group_config = self._create_group_config_body(name, cooldown,
+ min_entities, max_entities, metadata=group_metadata)
+ launch_config = self._create_launch_config_body(launch_config_type,
+ server_name, image, flavor, disk_config=disk_config,
+ metadata=metadata, personality=personality, networks=networks,
+ load_balancers=load_balancers, key_name=key_name)
+ body = {
+ "groupConfiguration": group_config,
+ "launchConfiguration": launch_config,
+ "scalingPolicies": scaling_policies,
+ }
+ return body
+
+
+ def _create_group_config_body(self, name, cooldown, min_entities,
+ max_entities, metadata=None):
+ if metadata is None:
+ # If updating a group with existing metadata, metadata MUST be
+ # passed. Leaving it out causes Otter to return 400.
+ metadata = {}
+ body = {
+ "name": name,
+ "cooldown": cooldown,
+ "minEntities": min_entities,
+ "maxEntities": max_entities,
+ "metadata": metadata,
+ }
+ return body
+
+
+ def _create_launch_config_body(self, launch_config_type,
+ server_name, image, flavor, disk_config=None, metadata=None,
+ personality=None, networks=None, load_balancers=None,
+ key_name=None):
server_args = {
- "flavorRef": flavor,
- "name": server_name,
- "imageRef": utils.get_id(image),
- }
+ "flavorRef": "%s" % flavor,
+ "name": server_name,
+ "imageRef": utils.get_id(image),
+ }
if metadata is not None:
server_args["metadata"] = metadata
if personality is not None:
@@ -700,25 +814,12 @@ def _create_body(self, name, cooldown, min_entities, max_entities,
server_args["OS-DCF:diskConfig"] = disk_config
if key_name is not None:
server_args["key_name"] = key_name
+ if load_balancers is None:
+ load_balancers = []
load_balancer_args = self._resolve_lbs(load_balancers)
- body = {"groupConfiguration": {
- "name": name,
- "cooldown": cooldown,
- "minEntities": min_entities,
- "maxEntities": max_entities,
- },
- "launchConfiguration": {
- "type": launch_config_type,
- "args": {
- "server": server_args,
- "loadBalancers": load_balancer_args,
- },
- },
- "scalingPolicies": scaling_policies,
- }
- if group_metadata is not None:
- body["groupConfiguration"]["metadata"] = group_metadata
- return body
+ return {"type": launch_config_type,
+ "args": {"server": server_args,
+ "loadBalancers": load_balancer_args}}
@@ -814,10 +915,10 @@ def delete_webhook(self, webhook):
class AutoScaleWebhook(BaseResource):
- def __init__(self, manager, info, policy, *args, **kwargs):
+ def __init__(self, manager, info, policy, scaling_group, *args, **kwargs):
super(AutoScaleWebhook, self).__init__(manager, info, *args, **kwargs)
if not isinstance(policy, AutoScalePolicy):
- policy = manager.get_policy(policy)
+ policy = manager.get_policy(scaling_group, policy)
self.policy = policy
self._non_display = ["links", "policy"]
@@ -889,11 +990,22 @@ def resume(self, scaling_group):
return self._manager.resume(scaling_group)
- def update(self, scaling_group, name=None, cooldown=None,
- min_entities=None, max_entities=None, metadata=None):
+ def replace(self, scaling_group, name, cooldown, min_entities,
+ max_entities, metadata=None):
"""
- Updates an existing ScalingGroup. One or more of the attributes can
- be specified.
+ Replace an existing ScalingGroup configuration. All of the attributes
+ must be specified. If you wish to delete any of the optional
+ attributes, pass them in as None.
+ """
+ return self._manager.replace(scaling_group, name, cooldown,
+ min_entities, max_entities, metadata=metadata)
+
+
+ def update(self, scaling_group, name=None, cooldown=None, min_entities=None,
+ max_entities=None, metadata=None):
+ """
+ Updates an existing ScalingGroup. One or more of the attributes can be
+ specified.
NOTE: if you specify metadata, it will *replace* any existing metadata.
If you want to add to it, you either need to pass the complete dict of
@@ -926,6 +1038,22 @@ def get_launch_config(self, scaling_group):
return self._manager.get_launch_config(scaling_group)
+ def replace_launch_config(self, scaling_group, launch_config_type,
+ server_name, image, flavor, disk_config=None, metadata=None,
+ personality=None, networks=None, load_balancers=None,
+ key_name=None):
+ """
+ Replace an existing launch configuration. All of the attributes must be
+ specified. If you wish to delete any of the optional attributes, pass
+ them in as None.
+ """
+ return self._manager.replace_launch_config(scaling_group,
+ launch_config_type, server_name, image, flavor,
+ disk_config=disk_config, metadata=metadata,
+ personality=personality, networks=networks,
+ load_balancers=load_balancers, key_name=key_name)
+
+
def update_launch_config(self, scaling_group, server_name=None, image=None,
flavor=None, disk_config=None, metadata=None, personality=None,
networks=None, load_balancers=None, key_name=None):
@@ -978,6 +1106,19 @@ def get_policy(self, scaling_group, policy):
return self._manager.get_policy(scaling_group, policy)
+ def replace_policy(self, scaling_group, policy, name,
+ policy_type, cooldown, change=None, is_percent=False,
+ desired_capacity=None, args=None):
+ """
+ Replace an existing policy. All of the attributes must be specified. If
+ you wish to delete any of the optional attributes, pass them in as
+ None.
+ """
+ return self._manager.replace_policy(scaling_group, policy, name,
+ policy_type, cooldown, change=change, is_percent=is_percent,
+ desired_capacity=desired_capacity, args=args)
+
+
def update_policy(self, scaling_group, policy, name=None, policy_type=None,
cooldown=None, change=None, is_percent=False,
desired_capacity=None, args=None):
@@ -987,8 +1128,8 @@ def update_policy(self, scaling_group, policy, name=None, policy_type=None,
"""
return self._manager.update_policy(scaling_group, policy, name=name,
policy_type=policy_type, cooldown=cooldown, change=change,
- is_percent=is_percent,
- desired_capacity=desired_capacity, args=args)
+ is_percent=is_percent, desired_capacity=desired_capacity,
+ args=args)
def execute_policy(self, scaling_group, policy):
@@ -1029,6 +1170,17 @@ def get_webhook(self, scaling_group, policy, webhook):
return self._manager.get_webhook(scaling_group, policy, webhook)
+ def replace_webhook(self, scaling_group, policy, webhook, name,
+ metadata=None):
+ """
+ Replace an existing webhook. All of the attributes must be specified.
+ If you wish to delete any of the optional attributes, pass them in as
+ None.
+ """
+ return self._manager.replace_webhook(scaling_group, policy, webhook,
+ name, metadata=metadata)
+
+
def update_webhook(self, scaling_group, policy, webhook, name=None,
metadata=None):
"""
diff --git a/pyrax/cf_wrapper/client.py b/pyrax/cf_wrapper/client.py
index 904b2a8c..b77ac46f 100644
--- a/pyrax/cf_wrapper/client.py
+++ b/pyrax/cf_wrapper/client.py
@@ -33,6 +33,11 @@
EARLY_DATE_STR = "1900-01-01T00:00:00"
DATE_FORMAT = "%Y-%m-%dT%H:%M:%S"
HEAD_DATE_FORMAT = "%a, %d %b %Y %H:%M:%S %Z"
+# Format of last_modified in list responses, reverse engineered from sample
+# responses at
+# http://docs.rackspace.com/files/api/v1/cf-devguide/content/
+# Serialized_List_Output-d1e1460.html
+LIST_DATE_FORMAT = "%Y-%m-%dT%H:%M:%S.%f"
CONNECTION_TIMEOUT = 20
CONNECTION_RETRIES = 5
AUTH_ATTEMPTS = 2
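The difference between the two timestamp formats (list responses carry fractional seconds) can be checked in isolation; the sample value is hypothetical:

```python
import datetime

DATE_FORMAT = "%Y-%m-%dT%H:%M:%S"          # format returned by GET/HEAD
LIST_DATE_FORMAT = "%Y-%m-%dT%H:%M:%S.%f"  # list responses add microseconds

lm = "2013-10-24T15:44:27.134"  # hypothetical last_modified from a listing
dttm = datetime.datetime.strptime(lm, LIST_DATE_FORMAT)
print(dttm.strftime(DATE_FORMAT))  # 2013-10-24T15:44:27
```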
@@ -44,23 +49,24 @@
etag_failed_pattern = re.compile(etag_fail_pat)
+def _close_swiftclient_conn(conn):
+ """Swiftclient often leaves the connection open."""
+ try:
+ conn.http_conn[1].close()
+ except Exception:
+ pass
+
+
def handle_swiftclient_exception(fnc):
@wraps(fnc)
def _wrapped(self, *args, **kwargs):
attempts = 0
clt_url = self.connection.url
- def close_swiftclient_conn(conn):
- """Swiftclient often leaves the connection open."""
- try:
- conn.http_conn[1].close()
- except Exception:
- pass
-
while attempts < AUTH_ATTEMPTS:
attempts += 1
try:
- close_swiftclient_conn(self.connection)
+ _close_swiftclient_conn(self.connection)
ret = fnc(self, *args, **kwargs)
return ret
except _swift_client.ClientException as e:
@@ -94,6 +100,34 @@ def close_swiftclient_conn(conn):
return _wrapped
+def _convert_head_object_last_modified_to_local(lm_str):
+ # Need to convert last modified time to a datetime object.
+ # Times are returned in default locale format, so we need to read
+ # them as such, no matter what the locale setting may be.
+ orig_locale = locale.getlocale(locale.LC_TIME)
+ locale.setlocale(locale.LC_TIME, (None, None))
+ try:
+ tm_tuple = time.strptime(lm_str, HEAD_DATE_FORMAT)
+ finally:
+ locale.setlocale(locale.LC_TIME, orig_locale)
+ dttm = datetime.datetime.fromtimestamp(time.mktime(tm_tuple))
+ # Now convert it back to the format returned by GETting the object.
+ dtstr = dttm.strftime(DATE_FORMAT)
+ return dtstr
+
+
+def _convert_list_last_modified_to_local(attdict):
+ if 'last_modified' in attdict:
+ attdict = attdict.copy()
+ list_date_format_with_tz = LIST_DATE_FORMAT + ' %Z'
+ last_modified_utc = attdict['last_modified'] + ' UTC'
+ tm_tuple = time.strptime(last_modified_utc,
+ list_date_format_with_tz)
+ dttm = datetime.datetime.fromtimestamp(time.mktime(tm_tuple))
+ attdict['last_modified'] = dttm.strftime(DATE_FORMAT)
+
+ return attdict
+
class CFClient(object):
"""
@@ -112,6 +146,7 @@ class CFClient(object):
cdn_enabled = False
default_cdn_ttl = 86400
_container_cache = {}
+ _cached_temp_url_key = ""
# Upload size limit
max_file_size = 5368709119 # 5GB - 1
# Folder upload status dict. Each upload will generate its own UUID key.
@@ -211,17 +246,23 @@ def set_account_metadata(self, metadata, clear=False,
curr_meta = self.get_account_metadata()
for ckey in curr_meta:
new_meta[ckey] = ""
- new_meta.update(massaged)
+ utils.case_insensitive_update(new_meta, massaged)
self.connection.post_account(new_meta, response_dict=extra_info)
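`utils.case_insensitive_update` itself is not shown in this diff; the following is a guess at the contract these call sites rely on (a key that differs only by case replaces the existing entry rather than leaving both behind):

```python
def case_insensitive_update(dct1, dct2):
    # Update dct1 with dct2, treating keys that differ only by case as
    # the same key. A sketch; the real pyrax.utils code may differ.
    lowered = {key.lower(): key for key in dct1}
    for key, val in dct2.items():
        existing = lowered.get(key.lower())
        if existing is not None:
            del dct1[existing]
        dct1[key] = val
        lowered[key.lower()] = key

new_meta = {"x-account-meta-foo": ""}
case_insensitive_update(new_meta, {"X-Account-Meta-Foo": "bar"})
print(new_meta)  # {'X-Account-Meta-Foo': 'bar'}
```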
@handle_swiftclient_exception
- def get_temp_url_key(self):
+ def get_temp_url_key(self, cached=True):
"""
Returns the current TempURL key, or None if it has not been set.
+
+ By default the value returned is cached. To force an API call to get
+ the current value on the server, pass `cached=False`.
"""
- key = "%stemp-url-key" % self.account_meta_prefix.lower()
- meta = self.get_account_metadata().get(key)
+ meta = self._cached_temp_url_key
+ if not cached or not meta:
+ key = "%stemp-url-key" % self.account_meta_prefix.lower()
+ meta = self.get_account_metadata().get(key)
+ self._cached_temp_url_key = meta
return meta
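The caching behavior can be modeled without a live connection; the `fetch` callable below stands in for the `get_account_metadata()` round-trip:

```python
class TempURLKeyCache(object):
    """Minimal sketch of the pattern added to get_temp_url_key(): return
    the cached value unless cached=False or nothing is cached yet."""
    def __init__(self, fetch):
        self._fetch = fetch          # callable that would hit the API
        self._cached_temp_url_key = ""
        self.api_calls = 0

    def get_temp_url_key(self, cached=True):
        key = self._cached_temp_url_key
        if not cached or not key:
            self.api_calls += 1
            key = self._fetch()
            self._cached_temp_url_key = key
        return key

cache = TempURLKeyCache(lambda: "s3cret")
cache.get_temp_url_key()              # first call hits the "API"
cache.get_temp_url_key()              # served from the cache
print(cache.api_calls)                # 1
cache.get_temp_url_key(cached=False)  # forces a refresh
print(cache.api_calls)                # 2
```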
@@ -238,15 +279,21 @@ def set_temp_url_key(self, key=None):
key = uuid.uuid4().hex
meta = {"Temp-Url-Key": key}
self.set_account_metadata(meta)
+ self._cached_temp_url_key = key
- def get_temp_url(self, container, obj, seconds, method="GET"):
+ def get_temp_url(self, container, obj, seconds, method="GET", key=None,
+ cached=True):
"""
Given a storage object in a container, returns a URL that can be used
to access that object. The URL will expire after `seconds` seconds.
The only methods supported are GET and PUT. Anything else will raise
an InvalidTemporaryURLMethod exception.
+
+ If you have your Temporary URL key, you can pass it in directly and
+ potentially save an API call to retrieve it. If you don't pass in the
+ key, and don't wish to use any cached value, pass `cached=False`.
"""
cname = self._resolve_name(container)
oname = self._resolve_name(obj)
@@ -254,7 +301,8 @@ def get_temp_url(self, container, obj, seconds, method="GET"):
if mod_method not in ("GET", "PUT"):
raise exc.InvalidTemporaryURLMethod("Method must be either 'GET' "
"or 'PUT'; received '%s'." % method)
- key = self.get_temp_url_key()
+ if not key:
+ key = self.get_temp_url_key(cached=cached)
if not key:
raise exc.MissingTemporaryURLKey("You must set the key for "
"Temporary URLs before you can generate them. This is "
@@ -285,11 +333,12 @@ def delete_object_in_seconds(self, cont, obj, seconds,
Sets the object in the specified container to be deleted after the
specified number of seconds.
"""
- cname = self._resolve_name(cont)
- oname = self._resolve_name(obj)
- headers = {"X-Delete-After": seconds}
- self.connection.post_object(cname, oname, headers=headers,
- response_dict=extra_info)
+ meta = {"X-Delete-After": str(seconds)}
+ self.set_object_metadata(cont, obj, meta, prefix="")
@handle_swiftclient_exception
@@ -336,7 +385,7 @@ def set_container_metadata(self, container, metadata, clear=False,
curr_meta = self.get_container_metadata(cname)
for ckey in curr_meta:
new_meta[ckey] = ""
- new_meta.update(massaged)
+ utils.case_insensitive_update(new_meta, massaged)
self.connection.post_container(cname, new_meta,
response_dict=extra_info)
@@ -444,7 +493,7 @@ def set_object_metadata(self, container, obj, metadata, clear=False,
if not clear:
obj_meta = self.get_object_metadata(cname, oname)
new_meta = self._massage_metakeys(obj_meta, self.object_meta_prefix)
- new_meta.update(massaged)
+ utils.case_insensitive_update(new_meta, massaged)
# Remove any empty values, since the object metadata API will
# store them.
to_pop = []
@@ -566,17 +615,8 @@ def get_object(self, container, obj):
cname = self._resolve_name(container)
oname = self._resolve_name(obj)
obj_info = self.connection.head_object(cname, oname)
- # Need to convert last modified time to a datetime object.
- # Times are returned in default locale format, so we need to read
- # them as such, no matter what the locale setting may be.
lm_str = obj_info["last-modified"]
- orig_locale = locale.getlocale(locale.LC_TIME)
- locale.setlocale(locale.LC_TIME, (None, None))
- tm_tuple = time.strptime(lm_str, HEAD_DATE_FORMAT)
- locale.setlocale(locale.LC_TIME, orig_locale)
- dttm = datetime.datetime.fromtimestamp(time.mktime(tm_tuple))
- # Now convert it back to the format returned by GETting the object.
- dtstr = dttm.strftime(DATE_FORMAT)
+ dtstr = _convert_head_object_last_modified_to_local(lm_str)
obj = StorageObject(self, self.get_container(container),
name=oname, content_type=obj_info["content-type"],
total_bytes=int(obj_info["content-length"]),
@@ -980,7 +1020,7 @@ def _should_abort_folder_upload(self, upload_key):
@handle_swiftclient_exception
def fetch_object(self, container, obj, include_meta=False,
- chunk_size=None, extra_info=None):
+ chunk_size=None, size=None, extra_info=None):
"""
Fetches the object from storage.
@@ -990,6 +1030,10 @@ def fetch_object(self, container, obj, include_meta=False,
Note: if 'chunk_size' is defined, you must fully read the object's
contents before making another request.
+ If 'size' is specified, only the first 'size' bytes of the object will
+ be returned. If the object is smaller than 'size', the entire object is
+ returned.
+
When 'include_meta' is True, what is returned from this method is a
2-tuple:
Element 0: a dictionary containing metadata about the file.
@@ -1009,6 +1053,18 @@ def fetch_object(self, container, obj, include_meta=False,
return data
+ @handle_swiftclient_exception
+ def fetch_partial(self, container, obj, size):
+ """
+ Returns the first 'size' bytes of an object. If the object is smaller
+ than the specified 'size' value, the entire object is returned.
+ """
+ gen = self.fetch_object(container, obj, chunk_size=size)
+ ret = gen.next()
+ _close_swiftclient_conn(self.connection)
+ return ret
+
+
@handle_swiftclient_exception
def download_object(self, container, obj, directory, structure=True):
"""
@@ -1082,7 +1138,10 @@ def get_container_objects(self, container, marker=None, limit=None,
limit=limit, prefix=prefix, delimiter=delimiter,
full_listing=full_listing)
cont = self.get_container(cname)
- return [StorageObject(self, container=cont, attdict=obj) for obj in objs
+ return [StorageObject(self,
+ container=cont,
+ attdict=_convert_list_last_modified_to_local(obj))
+ for obj in objs
if "name" in obj]
diff --git a/pyrax/client.py b/pyrax/client.py
index 0445f6ea..8d109f4c 100644
--- a/pyrax/client.py
+++ b/pyrax/client.py
@@ -80,7 +80,10 @@ def _configure_manager(self):
# The next 6 methods are simple pass-through to the manager.
def list(self, limit=None, marker=None):
- """Returns a list of all resources."""
+ """
+ Returns a list of resource objects. Pagination is supported through the
+ optional 'marker' and 'limit' parameters.
+ """
return self._manager.list(limit=limit, marker=marker)
@@ -149,7 +152,6 @@ def http_log_req(self, args, kwargs):
"""
if not self.http_log_debug:
return
-
string_parts = ["curl -i"]
for element in args:
if element in ("GET", "POST", "PUT", "DELETE", "HEAD"):
@@ -176,6 +178,18 @@ def http_log_resp(self, resp, body):
self._logger.debug("RESP: %s %s\n", resp, body)
+ def _add_custom_headers(self, dct):
+ """
+ Clients for some services must add headers that are required for that
+ service. This is a hook method to allow for such customization.
+
+ If a client needs to add a special header, the 'dct' parameter is a
+ dictionary of headers. Add the header(s) and their values as key/value
+ pairs to the 'dct'.
+ """
+ pass
+
+
def request(self, *args, **kwargs):
"""
Formats the request into a dict representing the headers
@@ -187,6 +201,8 @@ def request(self, *args, **kwargs):
if "body" in kwargs:
kwargs["headers"]["Content-Type"] = "application/json"
kwargs["body"] = json.dumps(kwargs["body"])
+ # Allow subclasses to add their own headers
+ self._add_custom_headers(kwargs["headers"])
self.http_log_req(args, kwargs)
resp, body = super(BaseClient, self).request(*args, **kwargs)
self.http_log_resp(resp, body)
@@ -233,11 +249,11 @@ def _api_request(self, uri, method, **kwargs):
if pos < 2:
# Don't escape the scheme or netloc
continue
- parsed[pos] = urllib.quote(parsed[pos], safe="/.?&=")
+ parsed[pos] = urllib.quote(parsed[pos], safe="/.?&=,")
safe_uri = urlparse.urlunparse(parsed)
else:
safe_uri = "%s%s" % (self.management_url,
- urllib.quote(uri, safe="/.?&="))
+ urllib.quote(uri, safe="/.?&=,"))
# Perform the request once. If we get a 401 back then it
# might be because the auth token expired, so try to
# re-authenticate and try again. If it still fails, bail.
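The effect of adding ',' to the safe characters, shown with Python 3's `urllib.parse.quote` for brevity (the code above uses the equivalent Python 2 `urllib.quote`):

```python
from urllib.parse import quote  # urllib.quote in the Python 2 code above

uri = "/v1/flavors?id=2,3,4"
escaped = quote(uri, safe="/.?&=")     # old safe set: commas become %2C
preserved = quote(uri, safe="/.?&=,")  # new safe set: commas pass through
print(escaped)    # /v1/flavors?id=2%2C3%2C4
print(preserved)  # /v1/flavors?id=2,3,4
```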
@@ -282,6 +298,11 @@ def method_delete(self, uri, **kwargs):
return self._api_request(uri, "DELETE", **kwargs)
+ def method_patch(self, uri, **kwargs):
+ """Method used to make PATCH requests."""
+ return self._api_request(uri, "PATCH", **kwargs)
+
+
def authenticate(self):
"""
Handles all aspects of authentication against the cloud provider.
diff --git a/pyrax/clouddatabases.py b/pyrax/clouddatabases.py
index 05c9021b..efe25aa9 100644
--- a/pyrax/clouddatabases.py
+++ b/pyrax/clouddatabases.py
@@ -241,14 +241,14 @@ def get(self):
self.volume = CloudDatabaseVolume(self, self.volume)
- def list_databases(self):
+ def list_databases(self, limit=None, marker=None):
"""Returns a list of the names of all databases for this instance."""
- return self._database_manager.list()
+ return self._database_manager.list(limit=limit, marker=marker)
- def list_users(self):
+ def list_users(self, limit=None, marker=None):
"""Returns a list of the names of all users for this instance."""
- return self._user_manager.list()
+ return self._user_manager.list(limit=limit, marker=marker)
def get_user(self, name):
@@ -521,9 +521,9 @@ def _configure_manager(self):
@assure_instance
- def list_databases(self, instance):
+ def list_databases(self, instance, limit=None, marker=None):
"""Returns all databases for the specified instance."""
- return instance.list_databases()
+ return instance.list_databases(limit=limit, marker=marker)
@assure_instance
@@ -551,9 +551,9 @@ def delete_database(self, instance, name):
@assure_instance
- def list_users(self, instance):
+ def list_users(self, instance, limit=None, marker=None):
"""Returns all users for the specified instance."""
- return instance.list_users()
+ return instance.list_users(limit=limit, marker=marker)
@assure_instance
@@ -653,9 +653,9 @@ def get_limits(self):
raise NotImplementedError("Limits are not available for Cloud Databases")
- def list_flavors(self):
+ def list_flavors(self, limit=None, marker=None):
"""Returns a list of all available Flavors."""
- return self._flavor_manager.list()
+ return self._flavor_manager.list(limit=limit, marker=marker)
def get_flavor(self, flavor_id):
diff --git a/pyrax/clouddns.py b/pyrax/clouddns.py
index e996305f..53febde2 100644
--- a/pyrax/clouddns.py
+++ b/pyrax/clouddns.py
@@ -483,15 +483,15 @@ def _process_async_error(self, resp_body, error_class):
"""
def _fmt_error(err):
# Remove the cumbersome Java-esque message
- details = err["details"].replace("\n", " ")
+ details = err.get("details", "").replace("\n", " ")
if not details:
- details = err["message"]
- return "%s (%s)" % (details, err["code"])
+ details = err.get("message", "")
+ return "%s (%s)" % (details, err.get("code", ""))
- error = resp_body["error"]
+ error = resp_body.get("error", "")
if "failedItems" in error:
# Multi-error response
- faults = error["failedItems"]["faults"]
+ faults = error.get("failedItems", {}).get("faults", [])
msgs = [_fmt_error(fault) for fault in faults]
msg = "\n".join(msgs)
else:
@@ -554,11 +554,11 @@ def findall(self, **kwargs):
"""
if (len(kwargs) == 1) and ("name" in kwargs):
# Filtering on name; use the more efficient method.
- nm = kwargs["name"]
+ nm = kwargs["name"].lower()
uri = "/%s?name=%s" % (self.uri_base, nm)
matches = self._list(uri, list_all=True)
return [match for match in matches
- if match.name == nm]
+ if match.name.lower() == nm]
else:
return super(CloudDNSManager, self).findall(**kwargs)
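The new case-insensitive comparison, reduced to its essentials:

```python
def findall_by_name(names, name):
    # Same normalization the patched findall() applies: lower-case both
    # the query and each candidate before comparing.
    nm = name.lower()
    return [n for n in names if n.lower() == nm]

records = ["Example.com", "EXAMPLE.COM", "other.org"]
print(findall_by_name(records, "example.COM"))
# ['Example.com', 'EXAMPLE.COM']
```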
@@ -871,9 +871,9 @@ def _resolve_device_type(self, device):
"""
try:
from tests.unit import fakes
- server_types = (pyrax.CloudServer, fakes.FakeServer,
- fakes.FakeDNSDevice)
- lb_types = (CloudLoadBalancer, fakes.FakeLoadBalancer)
+ server_types = (pyrax.CloudServer, fakes.FakeServer)
+ lb_types = (CloudLoadBalancer, fakes.FakeLoadBalancer,
+ fakes.FakeDNSDevice)
except ImportError:
# Not running with tests
server_types = (pyrax.CloudServer, )
diff --git a/pyrax/cloudloadbalancers.py b/pyrax/cloudloadbalancers.py
index 34a264b1..516f78fc 100644
--- a/pyrax/cloudloadbalancers.py
+++ b/pyrax/cloudloadbalancers.py
@@ -471,11 +471,10 @@ def _create_body(self, name, port=None, protocol=None, nodes=None,
"""
Used to create the dict required to create a load balancer instance.
"""
- required = (nodes, virtual_ips, port, protocol)
+ required = (virtual_ips, port, protocol)
if not all(required):
raise exc.MissingLoadBalancerParameters("Load Balancer creation "
- "requires at least one node, one virtual IP, "
- "a protocol, and a port.")
+ "requires at least one virtual IP, a protocol, and a port.")
nodes = utils.coerce_string_to_list(nodes)
virtual_ips = utils.coerce_string_to_list(virtual_ips)
bad_conditions = [node.condition for node in nodes
diff --git a/pyrax/cloudnetworks.py b/pyrax/cloudnetworks.py
index 88654c86..5f54322f 100644
--- a/pyrax/cloudnetworks.py
+++ b/pyrax/cloudnetworks.py
@@ -105,7 +105,17 @@ class CloudNetworkManager(BaseManager):
"""
Does nothing special, but is used in testing.
"""
- pass
+ def _create_body(self, name, label=None, cidr=None):
+ """
+ Used to create the dict required to create a network. Accepts either
+ 'label' or 'name' as the keyword parameter for the label attribute.
+ """
+ label = label or name
+ body = {"network": {
+ "label": label,
+ "cidr": cidr,
+ }}
+ return body
@@ -131,19 +141,6 @@ def _configure_manager(self):
response_key="network", uri_base="os-networksv2")
- def _create_body(self, name, label=None, cidr=None):
- """
- Used to create the dict required to create a network. Accepts either
- 'label' or 'name' as the keyword parameter for the label attribute.
- """
- label = label or name
- body = {"network": {
- "label": label,
- "cidr": cidr,
- }}
- return body
-
-
def create(self, label=None, name=None, cidr=None):
"""
Wraps the basic create() call to handle specific failures.
diff --git a/pyrax/exceptions.py b/pyrax/exceptions.py
index 5c2bd628..13235f90 100644
--- a/pyrax/exceptions.py
+++ b/pyrax/exceptions.py
@@ -68,6 +68,9 @@ class DomainRecordUpdateFailed(PyraxException):
class DomainUpdateFailed(PyraxException):
pass
+class DuplicateQueue(PyraxException):
+ pass
+
class DuplicateUser(PyraxException):
pass
@@ -146,6 +149,9 @@ class InvalidNodeParameters(PyraxException):
class InvalidPTRRecord(PyraxException):
pass
+class InvalidQueueName(PyraxException):
+ pass
+
class InvalidSessionPersistenceType(PyraxException):
pass
@@ -173,6 +179,9 @@ class InvalidVolumeResize(PyraxException):
class MissingAuthSettings(PyraxException):
pass
+class MissingClaimParameters(PyraxException):
+ pass
+
class MissingDNSSettings(PyraxException):
pass
@@ -263,6 +272,9 @@ class PTRRecordDeletionFailed(PyraxException):
class PTRRecordUpdateFailed(PyraxException):
pass
+class QueueClientIDNotDefined(PyraxException):
+ pass
+
class ServiceNotAvailable(PyraxException):
pass
diff --git a/pyrax/manager.py b/pyrax/manager.py
index 80a39402..48e18a1b 100644
--- a/pyrax/manager.py
+++ b/pyrax/manager.py
@@ -58,8 +58,16 @@ def __init__(self, api, resource_class=None, response_key=None,
self.uri_base = uri_base
- def list(self, limit=None, marker=None):
- """Gets a list of all items."""
+ def list(self, limit=None, marker=None, return_raw=False):
+ """
+ Returns a list of resource objects. Pagination is supported through the
+ optional 'marker' and 'limit' parameters.
+
+ Some APIs do not follow the typical pattern in their responses, and the
+ BaseManager subclasses will have to parse the raw response to get the
+ desired information. For those cases, pass 'return_raw=True', and the
+ response and response_body will be returned unprocessed.
+ """
uri = "/%s" % self.uri_base
pagination_items = []
if limit is not None:
@@ -69,7 +77,7 @@ def list(self, limit=None, marker=None):
pagination = "&".join(pagination_items)
if pagination:
uri = "%s?%s" % (uri, pagination)
- return self._list(uri)
+ return self._list(uri, return_raw=return_raw)
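The query-string assembly can be exercised standalone; the `marker` branch below mirrors the `limit` branch, which is elided in the hunk above:

```python
def build_list_uri(uri_base, limit=None, marker=None):
    # Same pagination assembly as BaseManager.list(); marker handling is
    # assumed to be symmetric with limit.
    uri = "/%s" % uri_base
    pagination_items = []
    if limit is not None:
        pagination_items.append("limit=%s" % limit)
    if marker is not None:
        pagination_items.append("marker=%s" % marker)
    pagination = "&".join(pagination_items)
    if pagination:
        uri = "%s?%s" % (uri, pagination)
    return uri

print(build_list_uri("instances", limit=20, marker="abc"))
# /instances?limit=20&marker=abc
print(build_list_uri("instances"))
# /instances
```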
def head(self, item):
@@ -123,7 +131,7 @@ def delete(self, item):
return self._delete(uri)
- def _list(self, uri, obj_class=None, body=None):
+ def _list(self, uri, obj_class=None, body=None, return_raw=False):
"""
Handles the communication with the API when getting
a full listing of the resources managed by this class.
@@ -132,7 +140,8 @@ def _list(self, uri, obj_class=None, body=None):
resp, resp_body = self.api.method_post(uri, body=body)
else:
resp, resp_body = self.api.method_get(uri)
-
+ if return_raw:
+ return (resp, resp_body)
if obj_class is None:
obj_class = self.resource_class
diff --git a/pyrax/queueing.py b/pyrax/queueing.py
new file mode 100644
index 00000000..d5c87f30
--- /dev/null
+++ b/pyrax/queueing.py
@@ -0,0 +1,718 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+# Copyright 2012 Rackspace
+
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from functools import wraps
+import json
+import os
+import re
+import urlparse
+
+import pyrax
+from pyrax.client import BaseClient
+import pyrax.exceptions as exc
+from pyrax.manager import BaseManager
+from pyrax.resource import BaseResource
+import pyrax.utils as utils
+
+# The default TTL for messages is 14 days, expressed in seconds.
+DAYS_14 = 1209600
+# The hard-coded maximum number of messages returned in a single call.
+MSG_LIMIT = 10
+# Pattern for extracting the marker value from an href link.
+marker_pat = re.compile(r".+\bmarker=(\d+).*")
+
+
+def _parse_marker(body):
+ marker = None
+ links = body.get("links", [])
+ next_links = [link for link in links if link.get("rel") == "next"]
+ try:
+ next_link = next_links[0]["href"]
+ except IndexError:
+ next_link = ""
+ mtch = marker_pat.match(next_link)
+ if mtch:
+ marker = mtch.groups()[0]
+ return marker
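A quick standalone check of the marker-extraction pattern (the href is hypothetical):

```python
import re

# Same pattern used above to pull the next-page marker out of the
# "next" link in a listing response.
marker_pat = re.compile(r".+\bmarker=(\d+).*")

def parse_marker(body):
    marker = None
    links = body.get("links", [])
    next_links = [link for link in links if link.get("rel") == "next"]
    try:
        next_link = next_links[0]["href"]
    except IndexError:
        next_link = ""
    mtch = marker_pat.match(next_link)
    if mtch:
        marker = mtch.groups()[0]
    return marker

body = {"links": [{"rel": "next",
                   "href": "/v1/queues/demo/messages?marker=6244&limit=10"}]}
print(parse_marker(body))  # 6244
print(parse_marker({}))    # None
```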
+
+
+def assure_queue(fnc):
+ """
+ Converts a queue ID or name passed as the 'queue' parameter to a Queue
+ object.
+ """
+ @wraps(fnc)
+ def _wrapped(self, queue, *args, **kwargs):
+ if not isinstance(queue, Queue):
+ # Must be the ID
+ queue = self._manager.get(queue)
+ return fnc(self, queue, *args, **kwargs)
+ return _wrapped
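The decorator's resolution step, sketched with a stub in place of the real `self._manager.get()` lookup:

```python
from functools import wraps

class Queue(object):
    def __init__(self, qid):
        self.id = qid

def assure_queue(fnc):
    # Same shape as the decorator above; Queue(queue) stands in for the
    # real self._manager.get(queue) API call.
    @wraps(fnc)
    def _wrapped(self, queue, *args, **kwargs):
        if not isinstance(queue, Queue):
            queue = Queue(queue)
        return fnc(self, queue, *args, **kwargs)
    return _wrapped

class Client(object):
    @assure_queue
    def queue_id(self, queue):
        return queue.id

print(Client().queue_id("abc123"))     # abc123
print(Client().queue_id(Queue("q1")))  # q1
```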
+
+
+
+class BaseQueueManager(BaseManager):
+ """
+ This class handles the common ways in which the Queues API deviates
+ from the standard patterns that the regular base classes expect.
+ """
+ def _list(self, uri, obj_class=None, body=None, return_raw=False):
+ try:
+ return super(BaseQueueManager, self)._list(uri, obj_class=obj_class,
+ body=body, return_raw=return_raw)
+ except (exc.NotFound, AttributeError):
+ return []
+
+
+
+class Queue(BaseResource):
+ """
+ This class represents a Queue.
+ """
+ def __init__(self, manager, info, key=None, loaded=False):
+ # Queues are often returned with no info
+ info = info or {"queue": {}}
+ super(Queue, self).__init__(manager, info, key=key, loaded=loaded)
+ self._repr_properties = ["id"]
+ self._message_manager = QueueMessageManager(self.manager.api,
+ resource_class=QueueMessage, response_key="",
+ plural_response_key="messages",
+ uri_base="queues/%s/messages" % self.id)
+ self._claim_manager = QueueClaimManager(self.manager.api,
+ resource_class=QueueClaim, response_key="",
+ plural_response_key="claims",
+ uri_base="queues/%s/claims" % self.id)
+ self._claim_manager._message_manager = self._message_manager
+
+
+ def get_message(self, msg_id):
+ """
+ Returns the message whose ID matches the supplied msg_id from this
+ queue.
+ """
+ return self._message_manager.get(msg_id)
+
+
+ def delete_message(self, msg_id):
+ """
+ Deletes the message whose ID matches the supplied msg_id from this
+ queue.
+ """
+ return self._message_manager.delete(msg_id)
+
+
+ def list(self, include_claimed=False, echo=False, marker=None, limit=None):
+ """
+ Returns a list of messages for this queue.
+
+ By default only unclaimed messages are returned; if you want claimed
+ messages included, pass `include_claimed=True`. Also, the requester's
+ own messages are not returned by default; if you want them included,
+ pass `echo=True`.
+
+ The 'marker' and 'limit' parameters are used to control pagination of
+ results. 'Marker' is the ID of the last message returned, while 'limit'
+ controls the number of messages returned per request (default=20).
+ """
+ return self._message_manager.list(include_claimed=include_claimed,
+ echo=echo, marker=marker, limit=limit)
+
+
+ def list_by_ids(self, ids):
+ """
+ If you wish to retrieve a list of messages from this queue and know the
+ IDs of those messages, you can pass in a list of those IDs, and only
+ the matching messages will be returned. This avoids pulling down all
+ the messages in a queue and filtering on the client side.
+ """
+ return self._message_manager.list_by_ids(ids)
+
+
+ def delete_by_ids(self, ids):
+ """
+ Deletes the messages whose IDs are passed in from this queue.
+ """
+ return self._message_manager.delete_by_ids(ids)
+
+
+ def list_by_claim(self, claim):
+ """
+ Returns a list of all the messages from this queue that have been
+ claimed by the specified claim. The claim can be either a claim ID or a
+ QueueClaim object.
+ """
+ if not isinstance(claim, QueueClaim):
+ claim = self._claim_manager.get(claim)
+ return claim.messages
+
+
+ def post_message(self, body, ttl=None):
+ """
+ Create a message in this queue. The 'ttl' parameter defaults to 14 days
+ if not specified.
+ """
+ return self._message_manager.create(body, ttl=ttl)
+
+
+ def claim_messages(self, ttl, grace, count=None):
+ """
+ Claims up to `count` unclaimed messages from this queue. If count is
+ not specified, the default is to claim 10 messages.
+
+ The `ttl` parameter specifies how long the server should wait before
+ releasing the claim. The ttl value MUST be between 60 and 43200 seconds.
+
+ The `grace` parameter is the message grace period in seconds. The value
+ of grace MUST be between 60 and 43200 seconds. The server extends the
+ lifetime of claimed messages to be at least as long as the lifetime of
+ the claim itself, plus a specified grace period to deal with crashed
+ workers (up to 1209600 or 14 days including claim lifetime). If a
+ claimed message would normally live longer than the grace period, its
+ expiration will not be adjusted.
+
+ Returns a QueueClaim object, whose 'messages' attribute contains the
+ list of QueueMessage objects representing the claimed messages.
+ """
+ return self._claim_manager.claim(ttl, grace, count=count)
+
+
+ def get_claim(self, claim):
+ """
+ Returns a QueueClaim object with information about the specified claim.
+ If no such claim exists, a NotFound exception is raised.
+ """
+ return self._claim_manager.get(claim)
+
+
+ def update_claim(self, claim, ttl=None, grace=None):
+ """
+ Updates the specified claim with either a new TTL or grace period, or
+ both.
+ """
+ return self._claim_manager.update(claim, ttl=ttl, grace=grace)
+
+
+ def release_claim(self, claim):
+ """
+ Releases the specified claim and makes any messages previously claimed
+ by this claim available for processing by other workers.
+ """
+ return self._claim_manager.delete(claim)
+
+
+ @property
+ def id(self):
+ return self.name
+
+ @id.setter
+ def id(self, val):
+ self.name = val
+
+
+
+class QueueMessage(BaseResource):
+ """
+ This class represents a Message posted to a Queue.
+ """
+ id = None
+ age = None
+ body = None
+ href = None
+ ttl = None
+ claim_id = None
+
+
+ def _add_details(self, info):
+ """
+ The 'id' and 'claim_id' attributes are not supplied directly, but
+ included as part of the 'href' value.
+ """
+ super(QueueMessage, self)._add_details(info)
+ if self.href is None:
+ return
+ parsed = urlparse.urlparse(self.href)
+ self.id = parsed.path.rsplit("/", 1)[-1]
+ query = parsed.query
+ if query:
+ self.claim_id = query.split("claim_id=")[-1]
+
+
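A portable sketch of the href parsing done in `_add_details()` above: the message ID is the last path segment, and the claim ID, if any, rides in the query string. The `href` value below is hypothetical, chosen only to match the shape the API returns:

```python
try:
    from urllib.parse import urlparse  # Python 3
except ImportError:
    from urlparse import urlparse      # Python 2

def ids_from_href(href):
    """Return (message_id, claim_id) parsed from a message href."""
    parsed = urlparse(href)
    # The ID is the final path segment of the href.
    msg_id = parsed.path.rsplit("/", 1)[-1]
    # The claim ID, when present, appears as a claim_id query parameter.
    claim_id = parsed.query.split("claim_id=")[-1] if parsed.query else None
    return msg_id, claim_id

# Hypothetical href of the shape Cloud Queues returns.
href = "/v1/queues/demo/messages/51db6f78c508f17d?claim_id=51db7067821e727d"
```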
+
+class QueueClaim(BaseResource):
+ """
+ This class represents a Claim for a Message posted by a consumer.
+ """
+ id = None
+ messages = None
+ href = ""
+
+ def _add_details(self, info):
+ """
+ The 'id' attribute is not supplied directly, but included as part of
+ the 'href' value. Also, convert the dicts for messages into
+ QueueMessage objects.
+ """
+ msg_dicts = info.pop("messages", [])
+ super(QueueClaim, self)._add_details(info)
+ parsed = urlparse.urlparse(self.href)
+ self.id = parsed.path.rsplit("/", 1)[-1]
+ self.messages = [QueueMessage(self.manager._message_manager, item)
+ for item in msg_dicts]
+
+
+
+class QueueMessageManager(BaseQueueManager):
+ """
+ Manager class for a Queue Message.
+ """
+ def _create_body(self, msg, ttl=None):
+ """
+ Used to create the dict required to create a new message.
+ """
+ if ttl is None:
+ ttl = DAYS_14
+ body = [{"ttl": ttl,
+ "body": msg,
+ }]
+ return body
+
+
+ def list(self, include_claimed=False, echo=False, marker=None, limit=None):
+ """
+ Need to form the URI differently, so we can't use the default list().
+ """
+ return self._iterate_list(include_claimed=include_claimed, echo=echo,
+ marker=marker, limit=limit)
+
+
+ def _iterate_list(self, include_claimed, echo, marker, limit):
+ """
+ Recursive method to work around the hard limit of 10 items per call.
+ """
+ ret = []
+ if limit is None:
+ limit = MSG_LIMIT
+ this_limit = min(MSG_LIMIT, limit)
+ limit = max(0, (limit - this_limit))
+ uri = "/%s?include_claimed=%s&echo=%s" % (self.uri_base,
+ json.dumps(include_claimed), json.dumps(echo))
+ qs_parts = []
+ if marker is not None:
+ qs_parts.append("marker=%s" % marker)
+ if this_limit is not None:
+ qs_parts.append("limit=%s" % this_limit)
+ if qs_parts:
+ uri = "%s&%s" % (uri, "&".join(qs_parts))
+ resp, resp_body = self._list(uri, return_raw=True)
+ if not resp_body:
+ return ret
+ messages = resp_body.get(self.plural_response_key, [])
+ ret = [QueueMessage(manager=self, info=item) for item in messages]
+ marker = _parse_marker(resp_body)
+
+ if limit and marker:
+ ret.extend(self._iterate_list(include_claimed, echo, marker, limit))
+ return ret
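The key arithmetic in `_iterate_list()` is how a requested limit gets split into per-request limits no larger than the hard cap. This sketch isolates just that splitting (the real method also stops early when the server returns no pagination marker):

```python
MSG_LIMIT = 10  # the hard per-request cap, as defined above

def chunk_limits(total):
    """Return the sequence of per-request limits used to satisfy a
    requested total, mirroring the min/max logic in _iterate_list()."""
    chunks = []
    remaining = MSG_LIMIT if total is None else total
    while remaining > 0:
        this_limit = min(MSG_LIMIT, remaining)
        chunks.append(this_limit)
        remaining -= this_limit
    return chunks
```

So a caller asking for 25 messages results in requests of 10, 10, and 5.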
+
+
+ def list_by_ids(self, ids):
+ """
+ If you wish to retrieve a list of messages from this queue and know the
+ IDs of those messages, you can pass in a list of those IDs, and only
+ the matching messages will be returned. This avoids pulling down all
+ the messages in a queue and filtering on the client side.
+ """
+ ids = utils.coerce_string_to_list(ids)
+ uri = "/%s?ids=%s" % (self.uri_base, ",".join(ids))
+ # The API is not consistent in how it returns message lists: this call
+ # returns a bare list rather than a dict, so temporarily clear the
+ # plural_response_key as a workaround.
+ curr_prkey = self.plural_response_key
+ self.plural_response_key = ""
+ ret = self._list(uri)
+ self.plural_response_key = curr_prkey
+ return ret
+
+
+ def delete_by_ids(self, ids):
+ """
+ Deletes the messages whose IDs are passed in from this queue.
+ """
+ ids = utils.coerce_string_to_list(ids)
+ uri = "/%s?ids=%s" % (self.uri_base, ",".join(ids))
+ return self.api.method_delete(uri)
+
+
+
+class QueueClaimManager(BaseQueueManager):
+ """
+ Manager class for Queue Claims.
+ """
+ def claim(self, ttl, grace, count=None):
+ """
+ Claims up to `count` unclaimed messages from this queue. If count is
+ not specified, the default is to claim 10 messages.
+
+ The `ttl` parameter specifies how long the server should wait before
+ releasing the claim. The ttl value MUST be between 60 and 43200 seconds.
+
+ The `grace` parameter is the message grace period in seconds. The value
+ of grace MUST be between 60 and 43200 seconds. The server extends the
+ lifetime of claimed messages to be at least as long as the lifetime of
+ the claim itself, plus a specified grace period to deal with crashed
+ workers (up to 1209600 or 14 days including claim lifetime). If a
+ claimed message would normally live longer than the grace period, its
+ expiration will not be adjusted.
+
+ Returns a QueueClaim object, whose 'messages' attribute contains the
+ list of QueueMessage objects representing the claimed messages.
+ """
+ if count is None:
+ qs = ""
+ else:
+ qs = "?limit=%s" % count
+ uri = "/%s%s" % (self.uri_base, qs)
+ body = {"ttl": ttl,
+ "grace": grace,
+ }
+ resp, resp_body = self.api.method_post(uri, body=body)
+ if resp.status == 204:
+ # Nothing available to claim
+ return None
+ # Get the claim ID from the first message in the list.
+ href = resp_body[0]["href"]
+ claim_id = href.split("claim_id=")[-1]
+ return self.get(claim_id)
+
+
+ def update(self, claim, ttl=None, grace=None):
+ """
+ Updates the specified claim with either a new TTL or grace period, or
+ both.
+ """
+ body = {}
+ if ttl is not None:
+ body["ttl"] = ttl
+ if grace is not None:
+ body["grace"] = grace
+ if not body:
+ raise exc.MissingClaimParameters("You must supply a value for "
+ "'ttl' or 'grace' when calling 'update()'")
+ uri = "/%s/%s" % (self.uri_base, utils.get_id(claim))
+ resp, resp_body = self.api.method_patch(uri, body=body)
+
+
+
+class QueueManager(BaseQueueManager):
+ """
+ Manager class for a Queue.
+ """
+ def _create_body(self, name, metadata=None):
+ """
+ Used to create the dict required to create a new queue.
+ """
+ if metadata is None:
+ body = {}
+ else:
+ body = {"metadata": metadata}
+ return body
+
+
+ def get(self, id_):
+ """
+ Need to customize, since Queues are not returned with normal response
+ bodies.
+ """
+ if self.api.queue_exists(id_):
+ return Queue(self, {"queue": {"name": id_, "id_": id_}}, key="queue")
+ raise exc.NotFound("The queue '%s' does not exist." % id_)
+
+
+ def create(self, name):
+ uri = "/%s/%s" % (self.uri_base, name)
+ resp, resp_body = self.api.method_put(uri)
+ if resp.status == 201:
+ return Queue(self, {"name": name})
+ elif resp.status == 400:
+ # Most likely an invalid name
+ raise exc.InvalidQueueName("Queue names must not exceed 64 bytes "
+ "in length, and are limited to US-ASCII letters, digits, "
+ "underscores, and hyphens. Submitted: '%s'." % name)
+
+
+ def get_stats(self, queue):
+ """
+ Returns the message stats for the specified queue.
+ """
+ uri = "/%s/%s/stats" % (self.uri_base, utils.get_id(queue))
+ resp, resp_body = self.api.method_get(uri)
+ return resp_body.get("messages")
+
+
+ def get_metadata(self, queue):
+ """
+ Returns the metadata for the specified queue.
+ """
+ uri = "/%s/%s/metadata" % (self.uri_base, utils.get_id(queue))
+ resp, resp_body = self.api.method_get(uri)
+ return resp_body
+
+
+ def set_metadata(self, queue, metadata, clear=False):
+ """
+ Accepts a dictionary and adds that to the specified queue's metadata.
+ If the 'clear' argument is passed as True, any existing metadata is
+ replaced with the new metadata.
+ """
+ uri = "/%s/%s/metadata" % (self.uri_base, utils.get_id(queue))
+ if clear:
+ curr = {}
+ else:
+ curr = self.get_metadata(queue)
+ curr.update(metadata)
+ resp, resp_body = self.api.method_put(uri, body=curr)
+
+
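The merge semantics of `set_metadata()` above can be isolated with plain dicts: with `clear=True` the new metadata replaces the old wholesale; otherwise the two are merged, with new keys winning. A minimal sketch (the helper name is illustrative, not part of pyrax):

```python
def merged_metadata(current, new, clear=False):
    """Return the metadata dict that would be PUT to the queue,
    mirroring the clear/update logic in QueueManager.set_metadata()."""
    base = {} if clear else dict(current)
    base.update(new)
    return base
```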
+
+class QueueClient(BaseClient):
+ """
+ This is the primary class for interacting with Cloud Queues.
+ """
+ name = "Cloud Queues"
+ client_id = None
+
+ def _configure_manager(self):
+ """
+ Create the manager to handle queues.
+ """
+ self._manager = QueueManager(self,
+ resource_class=Queue, response_key="queue",
+ uri_base="queues")
+
+
+ def _add_custom_headers(self, dct):
+ """
+ Add the Client-ID header required by Cloud Queues
+ """
+ if self.client_id is None:
+ self.client_id = os.environ.get("CLOUD_QUEUES_ID")
+ if not self.client_id:
+ raise exc.QueueClientIDNotDefined("You must supply a client ID to "
+ "work with Queues.")
+ dct["Client-ID"] = self.client_id
+
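The Client-ID resolution above follows a simple precedence: an explicitly set ID wins, the `CLOUD_QUEUES_ID` environment variable is the fallback, and anything else is an error. A standalone sketch of that lookup (the function name is illustrative; the real code raises `QueueClientIDNotDefined`):

```python
import os

def resolve_client_id(explicit=None, env_var="CLOUD_QUEUES_ID"):
    """Return the Client-ID to send with queue requests, preferring an
    explicitly supplied value over the environment."""
    client_id = explicit or os.environ.get(env_var)
    if not client_id:
        raise ValueError("You must supply a client ID to work with Queues.")
    return client_id
```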
+
+ def get_home_document(self):
+ """
+ You should never need to use this method; it is included for
+ completeness. It is meant for API clients that need to explore the API
+ with no prior knowledge. That knowledge is already built into the SDK,
+ so you should never have to work at such a basic level; all of the
+ functionality is exposed through normal Python methods in the client.
+
+ If you are curious about the 'Home Document' concept, here is the
+ explanation from the Cloud Queues documentation:
+
+ The entire API is discoverable from a single starting point - the home
+ document. You do not need to know any more than this one URI in order
+ to explore the entire API. This document is cacheable.
+
+ The home document lets you write clients using a "follow-your-nose"
+ style so clients do not have to construct their own URLs. You can click
+ through and view the JSON doc in your browser.
+
+ For more information about home documents, see
+ http://tools.ietf.org/html/draft-nottingham-json-home-02.
+ """
+ uri = self.management_url.rsplit("/", 1)[0]
+ return self.method_get(uri)
+
+
+ def queue_exists(self, name):
+ """
+ Returns True or False, depending on the existence of the named queue.
+ """
+ try:
+ self._manager.head(name)
+ return True
+ except exc.NotFound:
+ return False
+
+
+ def create(self, name):
+ """
+ Cloud Queues differs from other services in that it uses a queue's name
+ as its ID. For create(), then, we need to check whether a queue with
+ that name already exists, and raise an exception if it does. If not,
+ create the queue and return a reference object for it.
+ """
+ if self.queue_exists(name):
+ raise exc.DuplicateQueue("The queue '%s' already exists." % name)
+ return self._manager.create(name)
+
+
+ def get_stats(self, queue):
+ """
+ Returns the message stats for the specified queue.
+ """
+ return self._manager.get_stats(queue)
+
+
+ def get_metadata(self, queue):
+ """
+ Returns the metadata for the specified queue.
+ """
+ return self._manager.get_metadata(queue)
+
+
+ def set_metadata(self, queue, metadata, clear=False):
+ """
+ Accepts a dictionary and adds that to the specified queue's metadata.
+ If the 'clear' argument is passed as True, any existing metadata is
+ replaced with the new metadata.
+ """
+ return self._manager.set_metadata(queue, metadata, clear=clear)
+
+
+ @assure_queue
+ def get_message(self, queue, msg_id):
+ """
+ Returns the message whose ID matches the supplied msg_id from the
+ specified queue.
+ """
+ return queue.get_message(msg_id)
+
+
+ @assure_queue
+ def delete_message(self, queue, msg_id):
+ """
+ Deletes the message whose ID matches the supplied msg_id from the
+ specified queue.
+ """
+ return queue.delete_message(msg_id)
+
+
+ @assure_queue
+ def list_messages(self, queue, include_claimed=False, echo=False,
+ marker=None, limit=None):
+ """
+ Returns a list of messages for the specified queue.
+
+ By default only unclaimed messages are returned; if you want claimed
+ messages included, pass `include_claimed=True`. Also, the requester's
+ own messages are not returned by default; if you want them included,
+ pass `echo=True`.
+
+ The 'marker' and 'limit' parameters are used to control pagination of
+ results. 'Marker' is the ID of the last message returned, while 'limit'
+ controls the number of messages returned per request (default=20).
+ """
+ return queue.list(include_claimed=include_claimed, echo=echo,
+ marker=marker, limit=limit)
+
+
+ @assure_queue
+ def list_messages_by_ids(self, queue, ids):
+ """
+ If you wish to retrieve a list of messages from a queue and know the
+ IDs of those messages, you can pass in a list of those IDs, and only
+ the matching messages will be returned. This avoids pulling down all
+ the messages in a queue and filtering on the client side.
+ """
+ return queue.list_by_ids(ids)
+
+
+ @assure_queue
+ def delete_messages_by_ids(self, queue, ids):
+ """
+ Deletes the messages whose IDs are passed in from the specified queue.
+ """
+ return queue.delete_by_ids(ids)
+
+
+ @assure_queue
+ def list_messages_by_claim(self, queue, claim):
+ """
+ Returns a list of all the messages from the specified queue that have
+ been claimed by the specified claim. The claim can be either a claim ID
+ or a QueueClaim object.
+ """
+ return queue.list_by_claim(claim)
+
+
+ @assure_queue
+ def post_message(self, queue, body, ttl=None):
+ """
+ Create a message in the specified queue. The 'ttl' parameter defaults
+ to 14 days if not specified.
+ """
+ return queue.post_message(body, ttl=ttl)
+
+
+ @assure_queue
+ def claim_messages(self, queue, ttl, grace, count=None):
+ """
+ Claims up to `count` unclaimed messages from the specified queue. If
+ count is not specified, the default is to claim 10 messages.
+
+ The `ttl` parameter specifies how long the server should wait before
+ releasing the claim. The ttl value MUST be between 60 and 43200 seconds.
+
+ The `grace` parameter is the message grace period in seconds. The value
+ of grace MUST be between 60 and 43200 seconds. The server extends the
+ lifetime of claimed messages to be at least as long as the lifetime of
+ the claim itself, plus a specified grace period to deal with crashed
+ workers (up to 1209600 or 14 days including claim lifetime). If a
+ claimed message would normally live longer than the grace period, its
+ expiration will not be adjusted.
+
+ Returns a QueueClaim object, whose 'messages' attribute contains the
+ list of QueueMessage objects representing the claimed messages.
+ """
+ return queue.claim_messages(ttl, grace, count=count)
+
+
+ @assure_queue
+ def get_claim(self, queue, claim):
+ """
+ Returns a QueueClaim object with information about the specified claim.
+ If no such claim exists, a NotFound exception is raised.
+ """
+ return queue.get_claim(claim)
+
+
+ @assure_queue
+ def update_claim(self, queue, claim, ttl=None, grace=None):
+ """
+ Updates the specified claim with either a new TTL or grace period, or
+ both.
+ """
+ return queue.update_claim(claim, ttl=ttl, grace=grace)
+
+
+ @assure_queue
+ def release_claim(self, queue, claim):
+ """
+ Releases the specified claim and makes any messages previously claimed
+ by this claim available for processing by other workers.
+ """
+ return queue.release_claim(claim)
diff --git a/pyrax/utils.py b/pyrax/utils.py
index f29ecc3f..e81e6d40 100644
--- a/pyrax/utils.py
+++ b/pyrax/utils.py
@@ -152,24 +152,35 @@ def safe_update(txt):
return md.hexdigest()
-def random_name(length=20, ascii_only=False):
+def _join_chars(chars, length):
+ """
+ Used by the random character functions.
+ """
+ mult = (length / len(chars)) + 1
+ mult_chars = chars * mult
+ return "".join(random.sample(mult_chars, length))
+
+
+def random_unicode(length=20):
"""
Generates a random name; useful for testing.
- By default it will return an encoded string containing
- unicode values up to code point 1000. If you only
- need or want ASCII values, pass True to the
- ascii_only parameter.
+ Returns an encoded string of the specified length containing unicode values
+ up to code point 1000.
"""
- if ascii_only:
- base_chars = string.ascii_letters
- else:
- def get_char():
- return unichr(random.randint(32, 1000))
- base_chars = u"".join([get_char() for ii in xrange(length)])
- mult = (length / len(base_chars)) + 1
- chars = base_chars * mult
- return "".join(random.sample(chars, length))
+ def get_char():
+ return unichr(random.randint(32, 1000))
+ chars = u"".join([get_char() for ii in xrange(length)])
+ return _join_chars(chars, length)
+
+
+def random_ascii(length=20):
+ """
+ Generates a random name; useful for testing.
+
+ Returns a string of the specified length containing only ASCII characters.
+ """
+ return _join_chars(string.ascii_letters, length)
def coerce_string_to_list(val):
@@ -525,6 +536,19 @@ def update_exc(exc, msg, before=True, separator="\n"):
return exc
+def case_insensitive_update(dct1, dct2):
+ """
+ Given two dicts, updates the first one with the second, but considers keys
+ that are identical except for case to be the same.
+
+ No return value; this function modifies dct1 in place, like the
+ update() method.
+ """
+ lowkeys = dict([(key.lower(), key) for key in dct1])
+ for key, val in dct2.items():
+ d1_key = lowkeys.get(key.lower(), key)
+ dct1[d1_key] = val
+
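To make the behavior concrete, here is a reproduction of the helper above applied to HTTP-style headers, where case-insensitive key matching is the typical use. The sample header values are illustrative:

```python
def case_insensitive_update(dct1, dct2):
    # Reproduction of the helper above, for illustration: keys that
    # differ only in case are treated as the same key.
    lowkeys = dict((key.lower(), key) for key in dct1)
    for key, val in dct2.items():
        dct1[lowkeys.get(key.lower(), key)] = val

headers = {"Content-Type": "text/plain", "X-Auth-Token": "old"}
case_insensitive_update(headers, {"content-type": "application/json",
        "Accept": "application/json"})
```

Note that `"content-type"` updated the existing `"Content-Type"` entry rather than adding a second, differently-cased key, while `"Accept"` was added as new.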
+
def env(*args, **kwargs):
"""
Returns the first environment variable set
diff --git a/pyrax/version.py b/pyrax/version.py
index 79c216f2..3b166917 100644
--- a/pyrax/version.py
+++ b/pyrax/version.py
@@ -1,4 +1,4 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
-version = "1.5.1"
+version = "1.6.0"
diff --git a/samples/cloudfiles/temporary_url.py b/samples/cloudfiles/temporary_url.py
index 1309947e..4866e354 100644
--- a/samples/cloudfiles/temporary_url.py
+++ b/samples/cloudfiles/temporary_url.py
@@ -28,9 +28,9 @@
pyrax.set_credential_file(creds_file)
cf = pyrax.cloudfiles
-cont_name = pyrax.utils.random_name(8, ascii_only=True)
+cont_name = pyrax.utils.random_ascii(8)
cont = cf.create_container(cont_name)
-oname = pyrax.utils.random_name(8, ascii_only=True)
+oname = pyrax.utils.random_ascii(8)
ipsum = """Import integration functools test dunder object explicit. Method
integration mercurial unit import. Future integration decorator pypy method
tuple unit pycon. Django raspberrypi mercurial 2to3 cython scipy. Cython
diff --git a/samples/cloudservers/create_server.py b/samples/cloudservers/create_server.py
index 9fbabea5..e64c19a0 100644
--- a/samples/cloudservers/create_server.py
+++ b/samples/cloudservers/create_server.py
@@ -23,7 +23,7 @@
creds_file = os.path.expanduser("~/.rackspace_cloud_credentials")
pyrax.set_credential_file(creds_file)
cs = pyrax.cloudservers
-server_name = pyrax.utils.random_name(8, ascii_only=True)
+server_name = pyrax.utils.random_ascii(8)
ubu_image = [img for img in cs.images.list()
if "12.04" in img.name][0]
diff --git a/samples/queueing/claim_messages.py b/samples/queueing/claim_messages.py
new file mode 100644
index 00000000..d76a80dc
--- /dev/null
+++ b/samples/queueing/claim_messages.py
@@ -0,0 +1,108 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+# Copyright 2013 Rackspace
+
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import os
+import pyrax
+import pyrax.exceptions as exc
+
+pyrax.set_setting("identity_type", "rackspace")
+creds_file = os.path.expanduser("~/.rackspace_cloud_credentials")
+pyrax.set_credential_file(creds_file)
+pq = pyrax.queues
+
+queues = pq.list()
+if not queues:
+ print "There are no queues available. Please create one before proceeding."
+ exit()
+
+if len(queues) == 1:
+ queue = queues[0]
+ print "Only one queue available; using '%s'." % queue.name
+else:
+ print "Queues:"
+ for pos, queue in enumerate(queues):
+ print "%s - %s" % (pos, queue.name)
+ snum = raw_input("Enter the number of the queue you wish to claim "
+ "messages from: ")
+ if not snum:
+ exit()
+ try:
+ num = int(snum)
+ except ValueError:
+ print "'%s' is not a valid number." % snum
+ exit()
+ if not 0 <= num < len(queues):
+ print "'%s' is not a valid queue number." % snum
+ exit()
+ queue = queues[num]
+
+sttl = raw_input("Enter a TTL for the claim: ")
+if not sttl:
+ print "A TTL value is required."
+ exit()
+else:
+ try:
+ ttl = int(sttl)
+ if not 60 <= ttl <= 43200:
+ ttl = max(min(ttl, 43200), 60)
+ print ("TTL values must be between 60 and 43200 seconds; changing "
+ "it to '%s'." % ttl)
+ except ValueError:
+ print "'%s' is not a valid number." % sttl
+ exit()
+
+sgrace = raw_input("Enter a grace period for the claim: ")
+if not sgrace:
+ print "A value for the grace period is required."
+ exit()
+else:
+ try:
+ grace = int(sgrace)
+ if not 60 <= grace <= 43200:
+ grace = max(min(grace, 43200), 60)
+ print ("Grace values must be between 60 and 43200 seconds; changing "
+ "it to '%s'." % grace)
+ except ValueError:
+ print "'%s' is not a valid number." % sgrace
+ exit()
+
+scount = raw_input("Enter the number of messages to claim (max=20), or press "
+ "Enter for the default of 10: ")
+if not scount:
+ count = None
+else:
+ try:
+ count = int(scount)
+ except ValueError:
+ print "'%s' is not a valid number." % scount
+ exit()
+
+claim = pq.claim_messages(queue, ttl, grace, count=count)
+if not claim:
+ print "There were no messages available to claim."
+ exit()
+num_msgs = len(claim.messages)
+print
+print "You have successfully claimed %s messages." % num_msgs
+print "Claim ID:", claim.id
+for msg in claim.messages:
+ print "Age:", msg.age
+ print "Body:", msg.body
+ print
diff --git a/samples/queueing/create_queue.py b/samples/queueing/create_queue.py
new file mode 100644
index 00000000..27634ab3
--- /dev/null
+++ b/samples/queueing/create_queue.py
@@ -0,0 +1,37 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+# Copyright 2013 Rackspace
+
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import os
+import pyrax
+import pyrax.exceptions as exc
+
+pyrax.set_setting("identity_type", "rackspace")
+creds_file = os.path.expanduser("~/.rackspace_cloud_credentials")
+pyrax.set_credential_file(creds_file)
+pq = pyrax.queues
+
+name = raw_input("Enter the name for your queue: ")
+if not name:
+ exit()
+
+try:
+ queue = pq.create(name)
+ msg = "The queue '%s' has been created." % queue.name
+except exc.DuplicateQueue:
+ msg = "A queue with the name '%s' already exists." % name
+print msg
diff --git a/samples/queueing/delete_message.py b/samples/queueing/delete_message.py
new file mode 100644
index 00000000..14c03f80
--- /dev/null
+++ b/samples/queueing/delete_message.py
@@ -0,0 +1,76 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+# Copyright 2013 Rackspace
+
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import os
+import pyrax
+import pyrax.exceptions as exc
+
+pyrax.set_setting("identity_type", "rackspace")
+creds_file = os.path.expanduser("~/.rackspace_cloud_credentials")
+pyrax.set_credential_file(creds_file)
+pq = pyrax.queues
+
+queues = pq.list()
+if not queues:
+ print "There are no queues available. Please create one before proceeding."
+ exit()
+
+if len(queues) == 1:
+ queue = queues[0]
+ print "Only one queue available; using '%s'." % queue.name
+else:
+ print "Queues:"
+ for pos, queue in enumerate(queues):
+ print "%s - %s" % (pos, queue.name)
+ snum = raw_input("Enter the number of the queue you wish to list messages "
+ "from: ")
+ if not snum:
+ exit()
+ try:
+ num = int(snum)
+ except ValueError:
+ print "'%s' is not a valid number." % snum
+ exit()
+ if not 0 <= num < len(queues):
+ print "'%s' is not a valid queue number." % snum
+ exit()
+ queue = queues[num]
+echo = claimed = True
+msgs = pq.list_messages(queue, echo=echo, include_claimed=claimed)
+if not msgs:
+ print "There are no messages available in this queue."
+ exit()
+for pos, msg in enumerate(msgs):
+ msg.get()
+ print pos, "- ID:", msg.id, msg.claim_id, "Body='%s'" % msg.body[:80]
+snum = raw_input("Enter the number of the message you wish to delete: ")
+if not snum:
+ print "No message selected; exiting."
+ exit()
+try:
+ num = int(snum)
+except ValueError:
+ print "'%s' is not a valid number." % snum
+ exit()
+if not 0 <= num < len(msgs):
+ print "'%s' is not a valid message number." % snum
+ exit()
+msg_to_delete = msgs[num]
+msg_to_delete.delete()
+print
+print "The message has been deleted."
diff --git a/samples/queueing/delete_messages.py b/samples/queueing/delete_messages.py
new file mode 100644
index 00000000..8f9b6590
--- /dev/null
+++ b/samples/queueing/delete_messages.py
@@ -0,0 +1,77 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+# Copyright 2013 Rackspace
+
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import os
+import pyrax
+import pyrax.exceptions as exc
+
+pyrax.set_setting("identity_type", "rackspace")
+creds_file = os.path.expanduser("~/.rackspace_cloud_credentials")
+pyrax.set_credential_file(creds_file)
+pq = pyrax.queues
+
+queues = pq.list()
+if not queues:
+ print "There are no queues available. Please create one before proceeding."
+ exit()
+
+if len(queues) == 1:
+ queue = queues[0]
+ print "Only one queue available; using '%s'." % queue.name
+else:
+ print "Queues:"
+ for pos, queue in enumerate(queues):
+ print "%s - %s" % (pos, queue.name)
+ snum = raw_input("Enter the number of the queue you wish to list messages "
+ "from: ")
+ if not snum:
+ exit()
+ try:
+ num = int(snum)
+ except ValueError:
+ print "'%s' is not a valid number." % snum
+ exit()
+ if not 0 <= num < len(queues):
+ print "'%s' is not a valid queue number." % snum
+ exit()
+ queue = queues[num]
+echo = claimed = True
+msgs = pq.list_messages(queue, echo=echo, include_claimed=claimed)
+if not msgs:
+ print "There are no messages available in this queue."
+ exit()
+for pos, msg in enumerate(msgs):
+ msg.get()
+ print pos, "- ID:", msg.id, msg.claim_id, "Body='%s'" % msg.body[:80]
+snums = raw_input("Enter one or more numbers of the messages you wish to "
+ "delete, separated by spaces: ")
+if not snums:
+ print "No messages selected; exiting."
+ exit()
+nums = [int(num) for num in snums.split()]
+ids = [msg.id for pos, msg in enumerate(msgs) if pos in nums]
+if not ids:
+ print "No messages match your selections; exiting."
+ exit()
+print "DEL", pq.delete_messages_by_ids(queue, ids)
+del_msgs = [msg for msg in msgs if msg.id in ids]
+
+print
+print "The following messages were deleted:"
+for del_msg in del_msgs:
+ print del_msg.id, "Body='%s'" % del_msg.body
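The sample above maps the user's entered positions back to message IDs. A minimal stand-alone sketch of that mapping, using `enumerate()` so each message's position is available directly (the `Message` class here is a hypothetical stand-in for a pyrax `QueueMessage`, carrying only the attributes the sketch needs; written so it also runs under Python 3):

```python
# Stand-in for a pyrax QueueMessage -- only the attributes this sketch needs.
class Message(object):
    def __init__(self, msg_id, body):
        self.id = msg_id
        self.body = body


def ids_for_positions(msgs, positions):
    # enumerate() yields (position, message) pairs, so no per-message
    # index lookup is needed; out-of-range positions are simply ignored.
    wanted = set(positions)
    return [msg.id for pos, msg in enumerate(msgs) if pos in wanted]


msgs = [Message("a1", "first"), Message("b2", "second"), Message("c3", "third")]
print(ids_for_positions(msgs, [0, 2]))  # ['a1', 'c3']
```

The resulting ID list is what a call like `delete_messages_by_ids()` expects.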
diff --git a/samples/queueing/delete_queue.py b/samples/queueing/delete_queue.py
new file mode 100644
index 00000000..847fef6d
--- /dev/null
+++ b/samples/queueing/delete_queue.py
@@ -0,0 +1,48 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+# Copyright 2013 Rackspace
+
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import os
+import pyrax
+import pyrax.exceptions as exc
+
+pyrax.set_setting("identity_type", "rackspace")
+creds_file = os.path.expanduser("~/.rackspace_cloud_credentials")
+pyrax.set_credential_file(creds_file)
+pq = pyrax.queues
+
+queues = pq.list()
+if not queues:
+ print "There are no queues to delete."
+ exit()
+
+print "Queues:"
+for pos, queue in enumerate(queues):
+ print "%s - %s" % (pos, queue.name)
+snum = raw_input("Enter the number of the queue to delete: ")
+if not snum:
+ exit()
+try:
+ num = int(snum)
+except ValueError:
+ print "'%s' is not a valid number." % snum
+ exit()
+if not 0 <= num < len(queues):
+ print "'%s' is not a valid queue number." % snum
+ exit()
+pq.delete(queues[num])
+print "Queue '%s' has been deleted." % queues[num].name
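The same prompt-then-validate pattern (empty input, non-numeric input, out-of-range input) recurs in every sample above. A sketch of it factored into one helper — the name `parse_selection` is illustrative, not a pyrax API, and the snippet is written to run under Python 3 as well:

```python
def parse_selection(raw, count):
    """Validate a numeric menu entry against `count` items.

    Returns (index, None) on success, or (None, reason) on failure,
    mirroring the three checks the samples perform inline.
    """
    raw = raw.strip()
    if not raw:
        return None, "No selection entered."
    try:
        num = int(raw)
    except ValueError:
        return None, "'%s' is not a valid number." % raw
    if not 0 <= num < count:
        return None, "'%s' is not a valid queue number." % raw
    return num, None


print(parse_selection("2", 5))  # (2, None)
print(parse_selection("9", 5))  # (None, "'9' is not a valid queue number.")
```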
diff --git a/samples/queueing/list_claims.py b/samples/queueing/list_claims.py
new file mode 100644
index 00000000..ed2b9672
--- /dev/null
+++ b/samples/queueing/list_claims.py
@@ -0,0 +1,64 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+# Copyright 2013 Rackspace
+
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import os
+import pyrax
+import pyrax.exceptions as exc
+
+
+print "Sorry, this hasn't been implemented yet."
+exit()
+
+pyrax.set_setting("identity_type", "rackspace")
+creds_file = os.path.expanduser("~/.rackspace_cloud_credentials")
+pyrax.set_credential_file(creds_file)
+pq = pyrax.queues
+
+queues = pq.list()
+if not queues:
+    print "There are no queues available. Please create one before proceeding."
+ exit()
+
+if len(queues) == 1:
+ queue = queues[0]
+ print "Only one queue available; using '%s'." % queue.name
+else:
+ print "Queues:"
+ for pos, queue in enumerate(queues):
+ print "%s - %s" % (pos, queue.name)
+ snum = raw_input("Enter the number of the queue you wish to list messages "
+ "from: ")
+ if not snum:
+ exit()
+ try:
+ num = int(snum)
+ except ValueError:
+ print "'%s' is not a valid number." % snum
+ exit()
+ if not 0 <= num < len(queues):
+ print "'%s' is not a valid queue number." % snum
+ exit()
+ queue = queues[num]
+claims = pq.list_claims(queue)
+if not claims:
+ print "There are no claims available in this queue."
+ exit()
+for claim in claims:
+ print "ID:", claim.id
+ print claim
+ print
diff --git a/samples/queueing/list_messages.py b/samples/queueing/list_messages.py
new file mode 100644
index 00000000..05c2fd02
--- /dev/null
+++ b/samples/queueing/list_messages.py
@@ -0,0 +1,71 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+# Copyright 2013 Rackspace
+
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import os
+import pyrax
+import pyrax.exceptions as exc
+
+pyrax.set_setting("identity_type", "rackspace")
+creds_file = os.path.expanduser("~/.rackspace_cloud_credentials")
+pyrax.set_credential_file(creds_file)
+pq = pyrax.queues
+
+queues = pq.list()
+if not queues:
+    print "There are no queues available. Please create one before proceeding."
+ exit()
+
+if len(queues) == 1:
+ queue = queues[0]
+ print "Only one queue available; using '%s'." % queue.name
+else:
+ print "Queues:"
+ for pos, queue in enumerate(queues):
+ print "%s - %s" % (pos, queue.name)
+ snum = raw_input("Enter the number of the queue you wish to list messages "
+ "from: ")
+ if not snum:
+ exit()
+ try:
+ num = int(snum)
+ except ValueError:
+ print "'%s' is not a valid number." % snum
+ exit()
+ if not 0 <= num < len(queues):
+ print "'%s' is not a valid queue number." % snum
+ exit()
+ queue = queues[num]
+echo = claimed = False
+secho = raw_input("Do you want to include your own messages? [y/N] ")
+if secho:
+    echo = secho in ("Y", "y")
+sclaimed = raw_input("Do you want to include claimed messages? [y/N] ")
+if sclaimed:
+    claimed = sclaimed in ("Y", "y")
+
+msgs = pq.list_messages(queue, echo=echo, include_claimed=claimed)
+if not msgs:
+ print "There are no messages available in this queue."
+ exit()
+for msg in msgs:
+ print "ID:", msg.id
+ print "Age:", msg.age
+ print "TTL:", msg.ttl
+ print "Claim ID:", msg.claim_id
+ print "Body:", msg.body
+ print
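The `[y/N]` prompts above hinge on a subtle Python detail: `answer in ("Y", "y")` is tuple membership, while `answer in ("Yy")` would be a substring test against the string `"Yy"` (one that the empty string also passes). A sketch of the check as a reusable helper — `parse_yes_no` is an illustrative name, not part of pyrax:

```python
def parse_yes_no(answer, default=False):
    # The tuple ("y", "yes") makes this an exact-match membership test;
    # a bare string like "Yy" would instead match any substring of it.
    answer = answer.strip().lower()
    if not answer:
        return default
    return answer in ("y", "yes")


print(parse_yes_no("Y"))  # True
print(parse_yes_no(""))   # False (the default)
```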
diff --git a/samples/queueing/list_queues.py b/samples/queueing/list_queues.py
new file mode 100644
index 00000000..c740f1ea
--- /dev/null
+++ b/samples/queueing/list_queues.py
@@ -0,0 +1,37 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+# Copyright 2013 Rackspace
+
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import os
+import pyrax
+
+pyrax.set_setting("identity_type", "rackspace")
+creds_file = os.path.expanduser("~/.rackspace_cloud_credentials")
+pyrax.set_credential_file(creds_file)
+pq = pyrax.queues
+
+queues = pq.list()
+if not queues:
+ print "No queues have been created."
+ exit()
+num_queues = len(queues)
+if num_queues == 1:
+ print "There is one queue defined:"
+else:
+    print "There are %s queues defined:" % num_queues
+for queue in queues:
+ print " %s" % queue.name
diff --git a/samples/queueing/post_message.py b/samples/queueing/post_message.py
new file mode 100644
index 00000000..8f25cdde
--- /dev/null
+++ b/samples/queueing/post_message.py
@@ -0,0 +1,65 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+# Copyright 2013 Rackspace
+
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import os
+import pyrax
+import pyrax.exceptions as exc
+
+pyrax.set_setting("identity_type", "rackspace")
+creds_file = os.path.expanduser("~/.rackspace_cloud_credentials")
+pyrax.set_credential_file(creds_file)
+pq = pyrax.queues
+
+queues = pq.list()
+if not queues:
+ print "There are no queues to post to. Please create one before proceeding."
+ exit()
+
+if len(queues) == 1:
+ queue = queues[0]
+ print "Only one queue available; using '%s'." % queue.name
+else:
+ print "Queues:"
+ for pos, queue in enumerate(queues):
+ print "%s - %s" % (pos, queue.name)
+ snum = raw_input("Enter the number of the queue you wish to post a message "
+ "to: ")
+ if not snum:
+ exit()
+ try:
+ num = int(snum)
+ except ValueError:
+ print "'%s' is not a valid number." % snum
+ exit()
+ if not 0 <= num < len(queues):
+ print "'%s' is not a valid queue number." % snum
+ exit()
+ queue = queues[num]
+msg = raw_input("Enter the message to post: ")
+sttl = raw_input("Enter a TTL for the message, or just press Enter for the "
+ "default of 14 days: ")
+if not sttl:
+ ttl = None
+else:
+ try:
+ ttl = int(sttl)
+ except ValueError:
+ print "'%s' is not a valid number." % sttl
+ exit()
+pq.post_message(queue, msg, ttl=ttl)
+print "Your message has been posted."
diff --git a/tests/integrated/smoketest.py b/tests/integrated/smoketest.py
index 9a49555e..36e396f2 100644
--- a/tests/integrated/smoketest.py
+++ b/tests/integrated/smoketest.py
@@ -30,6 +30,9 @@ def __init__(self, region):
self.clb = pyrax.cloud_loadbalancers
self.dns = pyrax.cloud_dns
self.cnw = pyrax.cloud_networks
+ self.cmn = pyrax.cloud_monitoring
+ self.au = pyrax.autoscale
+ self.pq = pyrax.queues
self.services = ({"service": self.cs, "name": "Cloud Servers"},
{"service": self.cf, "name": "Cloud Files"},
{"service": self.cbs, "name": "Cloud Block Storage"},
@@ -37,6 +40,9 @@ def __init__(self, region):
{"service": self.clb, "name": "Cloud Load Balancers"},
{"service": self.dns, "name": "Cloud DNS"},
{"service": self.cnw, "name": "Cloud Networks"},
+ {"service": self.cmn, "name": "Cloud Monitoring"},
+ {"service": self.au, "name": "Auto Scale"},
+ {"service": self.pq, "name": "Cloud Queues"},
)
def auth(self, region):
@@ -61,29 +67,33 @@ def check_services(self):
print
def run_tests(self):
- services = pyrax.services
- if "compute" in services:
+ if self.cs:
print "Running 'compute' tests..."
self.cs_list_flavors()
self.cs_list_images()
self.cs_create_server()
self.cs_reboot_server()
self.cs_list_servers()
+
+ if self.cnw:
+ print "Running 'network' tests..."
try:
self.cnw_create_network()
self.cnw_list_networks()
except exc.NotFound:
# Networking not supported
- print " - Networking not supported"
+ print " - Networking not supported."
+ except exc.NetworkCountExceeded:
+ print " - Too many networks already exist."
- if "database" in services:
+ if self.cdb:
print "Running 'database' tests..."
self.cdb_list_flavors()
self.cdb_create_instance()
self.cdb_create_db()
self.cdb_create_user()
- if "object_store" in services:
+ if self.cf:
print "Running 'object_store' tests..."
self.cf_create_container()
self.cf_list_containers()
@@ -91,12 +101,31 @@ def run_tests(self):
self.cf_make_container_private()
self.cf_upload_file()
- if "load_balancer" in services:
+ if self.clb:
print "Running 'load_balancer' tests..."
self.lb_list()
self.lb_create()
-
+ if self.dns:
+ print "Running 'DNS' tests..."
+ self.dns_list()
+ self.dns_create_domain()
+ self.dns_create_record()
+
+ if self.cmn:
+ if not self.smoke_server:
+ print "Server not available; skipping Monitoring tests."
+ return
+ self.cmn_create_entity()
+ self.cmn_list_check_types()
+ self.cmn_list_monitoring_zones()
+ self.cmn_create_check()
+ self.cmn_create_notification()
+ self.cmn_create_notification_plan()
+ self.cmn_create_alarm()
+
+
+ ## Specific tests start here ##
def cs_list_flavors(self):
print "Listing Flavors:",
self.cs_flavors = self.cs.list_flavors()
@@ -194,7 +223,10 @@ def cs_list_servers(self):
def cdb_list_flavors(self):
print "Listing Database Flavors:",
- self.cdb_flavors = self.cdb.list_flavors()
+ try:
+ self.cdb_flavors = self.cdb.list_flavors()
+ except Exception as e:
+ self.cdb_flavors = None
if self.cdb_flavors:
print
for flavor in self.cdb_flavors:
@@ -205,6 +237,11 @@ def cdb_list_flavors(self):
print
def cdb_create_instance(self):
+ if not self.cdb_flavors:
+ # Skip this test
+ print "Skipping database instance creation..."
+ self.smoke_instance = None
+ return
print "Creating database instance..."
self.smoke_instance = self.cdb.create("SMOKETEST_DB_INSTANCE",
flavor=self.cdb_flavors[0], volume=1)
@@ -220,6 +257,10 @@ def cdb_create_instance(self):
print
def cdb_create_db(self):
+ if not self.smoke_instance:
+ # Skip this test
+ print "Skipping database creation..."
+ return
print "Creating database..."
self.smoke_db = self.smoke_instance.create_database("SMOKETEST_DB")
self.cleanup_items.append(self.smoke_db)
@@ -232,6 +273,10 @@ def cdb_create_db(self):
print
def cdb_create_user(self):
+ if not self.smoke_instance:
+ # Skip this test
+ print "Skipping database user creation..."
+ return
print "Creating database user..."
self.smoke_user = self.smoke_instance.create_user("SMOKETEST_USER",
"SMOKETEST_PW", database_names=[self.smoke_db])
@@ -291,7 +336,7 @@ def cf_make_container_private(self):
def cf_upload_file(self):
print "Uploading a Cloud Files object..."
cont = self.smoke_cont
- text = pyrax.utils.random_name(1024)
+ text = pyrax.utils.random_unicode(1024)
obj = cont.store_object("SMOKETEST_OBJECT", text)
# Make sure it is deleted before the container
self.cleanup_items.insert(0, obj)
@@ -327,6 +372,143 @@ def lb_create(self):
print "FAIL!"
self.failures.append("LOAD_BALANCERS")
+ def dns_list(self):
+ print "Listing DNS Domains..."
+ doms = self.dns.list()
+ if not doms:
+ print " - No domains to list!"
+ else:
+ for dns in doms:
+ print " -", dns.name
+
+ def dns_create_domain(self):
+ print "Creating a DNS Domain..."
+ domain_name = "SMOKETEST.example.edu"
+ try:
+ dom = self.dns.create(name=domain_name,
+ emailAddress="sample@example.edu", ttl=900,
+ comment="SMOKETEST sample domain")
+ print "Success!"
+ self.cleanup_items.append(dom)
+ except exc.DomainCreationFailed:
+ print "FAIL!"
+ self.failures.append("DNS DOMAIN CREATION")
+
+ def dns_create_record(self):
+ print "Creating a DNS Record..."
+ domain_name = "SMOKETEST.example.edu"
+ try:
+ dom = self.dns.find(name=domain_name)
+ except exc.NotFound:
+ print "Smoketest domain not found; skipping record test."
+ self.failures.append("DNS RECORD CREATION")
+ return
+ a_rec = {"type": "A",
+ "name": domain_name,
+ "data": "1.2.3.4",
+ "ttl": 6000}
+ try:
+ recs = dom.add_records(a_rec)
+ print "Success!"
+ # No need to cleanup, since domain deletion also deletes the recs.
+ # self.cleanup_items.extend(recs)
+ except exc.DomainRecordAdditionFailed:
+ print "FAIL!"
+ self.failures.append("DNS RECORD CREATION")
+
+ def cmn_list_check_types(self):
+ print "Listing Monitoring Check Types..."
+ cts = self.cmn.list_check_types()
+ for ct in cts:
+ print " -", ct.id, ct.type
+ print
+
+ def cmn_list_monitoring_zones(self):
+ print "Listing Monitoring Zones..."
+ zones = self.cmn.list_monitoring_zones()
+ for zone in zones:
+ print " -", zone.id, zone.name
+ print
+
+ def cmn_create_entity(self):
+ print "Creating a Monitoring Entity..."
+ srv = self.smoke_server
+ ip = srv.networks["public"][0]
+ try:
+ self.smoke_entity = self.cmn.create_entity(name="SMOKETEST_entity",
+ ip_addresses={"main": ip})
+ self.cleanup_items.append(self.smoke_entity)
+ print "Success!"
+ except Exception:
+ print "FAIL!"
+ self.smoke_entity = None
+ self.failures.append("MONITORING CREATE ENTITY")
+ print
+
+ def cmn_create_check(self):
+ print "Creating a Monitoring Check..."
+ ent = self.smoke_entity
+ alias = ent.ip_addresses.keys()[0]
+ try:
+ self.smoke_check = self.cmn.create_check(ent,
+ label="SMOKETEST_check", check_type="remote.ping",
+ details={"count": 5}, monitoring_zones_poll=["mzdfw"],
+ period=60, timeout=20, target_alias=alias)
+ print "Success!"
+ self.cleanup_items.append(self.smoke_check)
+ except Exception:
+ print "FAIL!"
+ self.smoke_check = None
+ self.failures.append("MONITORING CREATE CHECK")
+ print
+
+ def cmn_create_notification(self):
+ print "Creating a Monitoring Notification..."
+ email = "smoketest@example.com"
+ try:
+ self.smoke_notification = self.cmn.create_notification("email",
+ label="smoketest", details={"address": email})
+ print "Success!"
+ self.cleanup_items.append(self.smoke_notification)
+ except Exception:
+ print "FAIL!"
+ self.smoke_notification = None
+ self.failures.append("MONITORING CREATE NOTIFICATION")
+ print
+
+ def cmn_create_notification_plan(self):
+ if not self.smoke_notification:
+ print ("No monitoring notification found; skipping notification "
+ "creation...")
+ return
+ print "Creating a Monitoring Notification Plan..."
+ try:
+ self.smoke_notification_plan = self.cmn.create_notification_plan(
+ label="smoketest plan", ok_state=self.smoke_notification)
+ print "Success!"
+ self.cleanup_items.append(self.smoke_notification_plan)
+ except Exception as e:
+ print "FAIL!", e
+ self.smoke_notification_plan = None
+ self.failures.append("MONITORING CREATE NOTIFICATION PLAN")
+ print
+
+ def cmn_create_alarm(self):
+ if not self.smoke_notification_plan:
+ print "No monitoring plan found; skipping alarm creation..."
+ return
+ print "Creating a Monitoring Alarm..."
+ try:
+ self.smoke_alarm = self.cmn.create_alarm(self.smoke_entity,
+ self.smoke_check, self.smoke_notification_plan,
+ label="smoke alarm")
+ print "Success!"
+ self.cleanup_items.append(self.smoke_alarm)
+ except Exception:
+ print "FAIL!"
+ self.failures.append("MONITORING CREATE ALARM")
+ print
+
def cleanup(self):
print "Cleaning up..."
@@ -338,7 +520,10 @@ def cleanup(self):
print item.name
except AttributeError:
print item
-
+ except exc.NotFound:
+ # Some items are deleted along with others (e.g., DNS records
+ # when a domain is deleted), so don't complain.
+ pass
except Exception as e:
print "Could not delete '%s': %s" % (item, e)
diff --git a/tests/unit/.testtimes.json b/tests/unit/.testtimes.json
new file mode 100644
index 00000000..91b55e1c
--- /dev/null
+++ b/tests/unit/.testtimes.json
@@ -0,0 +1 @@
+[[9.298324584960938e-05, "test_add_method"], [9.679794311523438e-05, "test_add_method_no_name"], [3.1948089599609375e-05, "test_env"], [3.0994415283203125e-05, "test_folder_size_bad_folder"], [0.002016782760620117, "test_folder_size_ignore_list"], [0.0019488334655761719, "test_folder_size_ignore_string"], [0.001611948013305664, "test_folder_size_no_ignore"], [8.416175842285156e-05, "test_get_checksum_from_binary"], [0.0013799667358398438, "test_get_checksum_from_file"], [4.887580871582031e-05, "test_get_checksum_from_string"], [5.221366882324219e-05, "test_get_checksum_from_unicode"], [0.04255390167236328, "test_get_checksum_from_unicode_alt_encoding"], [0.00015091896057128906, "test_get_id"], [0.00015282630920410156, "test_get_name"], [1.0967254638671875e-05, "test_import_class"], [1.0013580322265625e-05, "test_isunauthenticated"], [2.384185791015625e-05, "test_match_pattern"], [0.0006420612335205078, "test_random_unicode"], [0.03811287879943848, "test_runproc"], [0.00020503997802734375, "test_safe_issubclass_bad"], [2.5987625122070312e-05, "test_safe_issubclass_good"], [0.00030612945556640625, "test_self_deleting_temp_directory"], [0.00022602081298828125, "test_self_deleting_temp_file"], [0.0005660057067871094, "test_slugify"], [0.0001571178436279297, "test_time_string_date"], [0.020350933074951172, "test_time_string_date_obj"], [0.0023310184478759766, "test_time_string_datetime"], [7.104873657226562e-05, "test_time_string_datetime_add_tz"], [7.605552673339844e-05, "test_time_string_datetime_hide_tz"], [4.601478576660156e-05, "test_time_string_datetime_show_tz"], [6.9141387939453125e-06, "test_time_string_empty"], [6.198883056640625e-05, "test_time_string_invalid"], [1.1920928955078125e-05, "test_unauthenticated"], [0.00025010108947753906, "test_update_exc"], [0.0011680126190185547, "test_wait_for_build"], [0.0005540847778320312, "test_wait_until"], [0.001271963119506836, "test_wait_until_callback"], [0.0004899501800537109, "test_wait_until_fail"], 
[9.799003601074219e-05, "test_add_method"], [0.00016379356384277344, "test_add_method_no_name"], [3.2901763916015625e-05, "test_env"], [2.6941299438476562e-05, "test_folder_size_bad_folder"], [0.0018258094787597656, "test_folder_size_ignore_list"], [0.0017931461334228516, "test_folder_size_ignore_string"], [0.0016651153564453125, "test_folder_size_no_ignore"], [8.296966552734375e-05, "test_get_checksum_from_binary"], [0.0014481544494628906, "test_get_checksum_from_file"], [4.7206878662109375e-05, "test_get_checksum_from_string"], [5.1021575927734375e-05, "test_get_checksum_from_unicode"], [0.003670215606689453, "test_get_checksum_from_unicode_alt_encoding"], [0.00013899803161621094, "test_get_id"], [0.0001499652862548828, "test_get_name"], [2.4080276489257812e-05, "test_import_class"], [2.2172927856445312e-05, "test_isunauthenticated"], [3.600120544433594e-05, "test_match_pattern"], [0.0009670257568359375, "test_random_unicode"], [0.004940986633300781, "test_runproc"], [0.00019693374633789062, "test_safe_issubclass_bad"], [2.7894973754882812e-05, "test_safe_issubclass_good"], [0.000370025634765625, "test_self_deleting_temp_directory"], [0.0002391338348388672, "test_self_deleting_temp_file"], [0.0009739398956298828, "test_slugify"], [0.0001659393310546875, "test_time_string_date"], [2.6941299438476562e-05, "test_time_string_date_obj"], [0.0016129016876220703, "test_time_string_datetime"], [6.198883056640625e-05, "test_time_string_datetime_add_tz"], [6.794929504394531e-05, "test_time_string_datetime_hide_tz"], [5.698204040527344e-05, "test_time_string_datetime_show_tz"], [4.0531158447265625e-06, "test_time_string_empty"], [3.910064697265625e-05, "test_time_string_invalid"], [1.1920928955078125e-05, "test_unauthenticated"], [0.0002570152282714844, "test_update_exc"], [0.0015079975128173828, "test_wait_for_build"], [0.0004608631134033203, "test_wait_until"], [0.0015289783477783203, "test_wait_until_callback"], [0.0003800392150878906, "test_wait_until_fail"], 
[9.608268737792969e-05, "test_add_method"], [0.00010800361633300781, "test_add_method_no_name"], [3.695487976074219e-05, "test_env"], [3.910064697265625e-05, "test_folder_size_bad_folder"], [0.0018758773803710938, "test_folder_size_ignore_list"], [0.0017428398132324219, "test_folder_size_ignore_string"], [0.0016450881958007812, "test_folder_size_no_ignore"], [8.20159912109375e-05, "test_get_checksum_from_binary"], [0.001425027847290039, "test_get_checksum_from_file"], [3.886222839355469e-05, "test_get_checksum_from_string"], [5.602836608886719e-05, "test_get_checksum_from_unicode"], [0.0014820098876953125, "test_get_checksum_from_unicode_alt_encoding"], [0.00015997886657714844, "test_get_id"], [0.0001461505889892578, "test_get_name"], [1.1920928955078125e-05, "test_import_class"], [1.0013580322265625e-05, "test_isunauthenticated"], [2.7179718017578125e-05, "test_match_pattern"], [0.0017287731170654297, "test_random_unicode"], [0.005548954010009766, "test_runproc"], [0.00020194053649902344, "test_safe_issubclass_bad"], [1.7881393432617188e-05, "test_safe_issubclass_good"], [0.00028514862060546875, "test_self_deleting_temp_directory"], [0.0002639293670654297, "test_self_deleting_temp_file"], [0.00020599365234375, "test_slugify"], [0.00015807151794433594, "test_time_string_date"], [2.8133392333984375e-05, "test_time_string_date_obj"], [0.0015549659729003906, "test_time_string_datetime"], [5.602836608886719e-05, "test_time_string_datetime_add_tz"], [7.295608520507812e-05, "test_time_string_datetime_hide_tz"], [5.1021575927734375e-05, "test_time_string_datetime_show_tz"], [1.0967254638671875e-05, "test_time_string_empty"], [5.316734313964844e-05, "test_time_string_invalid"], [1.2874603271484375e-05, "test_unauthenticated"], [0.00024819374084472656, "test_update_exc"], [0.0012221336364746094, "test_wait_for_build"], [0.0005648136138916016, "test_wait_until"], [0.0013408660888671875, "test_wait_until_callback"], [0.00036406517028808594, "test_wait_until_fail"]]
\ No newline at end of file
diff --git a/tests/unit/base_test.py b/tests/unit/base_test.py
new file mode 100644
index 00000000..be934623
--- /dev/null
+++ b/tests/unit/base_test.py
@@ -0,0 +1,36 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+import json
+import time
+import unittest
+
+
+TIMING_FILE = ".testtimes.json"
+
+
+class BaseTest(unittest.TestCase):
+ def __init__(self, *args, **kwargs):
+ super(BaseTest, self).__init__(*args, **kwargs)
+ self.timing = False
+ # Create the output file if it doesn't exist
+ with open(TIMING_FILE, "a") as jj:
+ pass
+
+ def setUp(self):
+ if self.timing:
+ self.begun = time.time()
+ super(BaseTest, self).setUp()
+
+ def tearDown(self):
+ if self.timing:
+ elapsed = time.time() - self.begun
+ with open(TIMING_FILE, "r") as jj:
+ try:
+ times = json.load(jj)
+ except ValueError:
+ times = []
+ times.append((elapsed, self._testMethodName))
+ with open(TIMING_FILE, "w") as jj:
+ json.dump(times, jj)
+ super(BaseTest, self).tearDown()
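The `tearDown` above appends each test's elapsed time to a shared JSON file, recovering with an empty list when the file is empty or missing. A stand-alone sketch of that read-append-write cycle (the helper name `record_timing` is illustrative; it runs under Python 3 and uses a temp directory rather than the real `.testtimes.json`):

```python
import json
import os
import tempfile


def record_timing(path, test_name, elapsed):
    # Fall back to an empty list when the file is missing or holds no
    # valid JSON yet -- the same recovery tearDown's ValueError handler
    # provides -- then rewrite the whole file with the new entry added.
    try:
        with open(path) as fh:
            times = json.load(fh)
    except (IOError, OSError, ValueError):
        times = []
    times.append([elapsed, test_name])
    with open(path, "w") as fh:
        json.dump(times, fh)


path = os.path.join(tempfile.mkdtemp(), "times.json")
record_timing(path, "test_one", 0.012)
record_timing(path, "test_two", 0.034)
```

Note that rewriting the whole file on every test keeps the format simple at the cost of O(n) work per test, which is fine at unit-test-suite scale.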
diff --git a/tests/unit/fakes.py b/tests/unit/fakes.py
index 044dbffb..df4a3abb 100644
--- a/tests/unit/fakes.py
+++ b/tests/unit/fakes.py
@@ -42,16 +42,13 @@
from pyrax.cloudnetworks import CloudNetworkClient
from pyrax.cloudmonitoring import CloudMonitorClient
from pyrax.cloudmonitoring import CloudMonitorEntity
-from pyrax.cloudmonitoring import CloudMonitorNotificationManager
-from pyrax.cloudmonitoring import CloudMonitorNotificationPlanManager
-from pyrax.cloudmonitoring import CloudMonitorEntityManager
from pyrax.cloudmonitoring import CloudMonitorCheck
-from pyrax.cloudmonitoring import CloudMonitorCheckType
-from pyrax.cloudmonitoring import CloudMonitorZone
from pyrax.cloudmonitoring import CloudMonitorNotification
-from pyrax.cloudmonitoring import CloudMonitorNotificationType
-from pyrax.cloudmonitoring import CloudMonitorNotificationPlan
-from pyrax.cloudmonitoring import CloudMonitorAlarm
+from pyrax.queueing import Queue
+from pyrax.queueing import QueueClaim
+from pyrax.queueing import QueueMessage
+from pyrax.queueing import QueueClient
+from pyrax.queueing import QueueManager
import pyrax.exceptions as exc
from pyrax.identity.rax_identity import RaxIdentity
@@ -120,7 +117,7 @@ def __init__(self, client, container, name=None, total_bytes=None,
class FakeServer(object):
- id = utils.random_name()
+ id = utils.random_unicode()
class FakeService(object):
@@ -132,7 +129,7 @@ def __init__(self, *args, **kwargs):
self.Node = FakeNode
self.VirtualIP = FakeVirtualIP
self.loadbalancers = FakeLoadBalancer()
- self.id = utils.random_name()
+ self.id = utils.random_unicode()
def authenticate(self):
pass
@@ -256,7 +253,7 @@ def set_password(self, *args, **kwargs):
class FakeEntity(object):
def __init__(self, *args, **kwargs):
- self.id = utils.random_name()
+ self.id = utils.random_unicode()
def get(self, *args, **kwargs):
pass
@@ -275,7 +272,7 @@ def __init__(self, instance, *args, **kwargs):
class FakeDatabaseInstance(CloudDatabaseInstance):
def __init__(self, *args, **kwargs):
- self.id = utils.random_name()
+ self.id = utils.random_unicode()
self.manager = FakeManager()
self.manager.api = FakeDatabaseClient()
self._database_manager = CloudDatabaseDatabaseManager(
@@ -299,43 +296,6 @@ def __init__(self, *args, **kwargs):
"fakepassword", *args, **kwargs)
-class FakeDNSClient(CloudDNSClient):
- def __init__(self, *args, **kwargs):
- super(FakeDNSClient, self).__init__("fakeuser",
- "fakepassword", *args, **kwargs)
-
-
-class FakeDNSManager(CloudDNSManager):
- def __init__(self, api=None, *args, **kwargs):
- if api is None:
- api = FakeDNSClient()
- super(FakeDNSManager, self).__init__(api, *args, **kwargs)
- self.resource_class = FakeDNSDomain
- self.response_key = "domain"
- self.plural_response_key = "domains"
- self.uri_base = "domains"
-
-
-class FakeDNSDomain(CloudDNSDomain):
- def __init__(self, *args, **kwargs):
- self.id = utils.random_name(ascii_only=True)
- self.name = utils.random_name()
- self.manager = FakeDNSManager()
-
-
-class FakeDNSRecord(CloudDNSRecord):
- def __init__(self, mgr, info, *args, **kwargs):
- super(FakeDNSRecord, self).__init__(mgr, info, *args, **kwargs)
-
-
-class FakeDNSPTRRecord(CloudDNSPTRRecord):
- pass
-
-class FakeDNSDevice(FakeEntity):
- def __init__(self, *args, **kwargs):
- self.id = utils.random_name()
-
-
class FakeNovaVolumeClient(BaseClient):
def __init__(self, *args, **kwargs):
pass
@@ -350,15 +310,15 @@ def __init__(self, api=None, *args, **kwargs):
class FakeBlockStorageVolume(CloudBlockStorageVolume):
def __init__(self, *args, **kwargs):
- volname = utils.random_name(8)
- self.id = utils.random_name()
+ volname = utils.random_unicode(8)
+ self.id = utils.random_unicode()
self.manager = FakeBlockStorageManager()
self._nova_volumes = FakeNovaVolumeClient()
class FakeBlockStorageSnapshot(CloudBlockStorageSnapshot):
def __init__(self, *args, **kwargs):
- self.id = utils.random_name()
+ self.id = utils.random_unicode()
self.manager = FakeManager()
self.status = "available"
@@ -386,10 +346,10 @@ def __init__(self, api=None, *args, **kwargs):
class FakeLoadBalancer(CloudLoadBalancer):
def __init__(self, name=None, info=None, *args, **kwargs):
- name = name or utils.random_name(ascii_only=True)
+ name = name or utils.random_ascii()
info = info or {"fake": "fake"}
super(FakeLoadBalancer, self).__init__(name, info, *args, **kwargs)
- self.id = utils.random_name(ascii_only=True)
+ self.id = utils.random_ascii()
self.port = random.randint(1, 256)
self.manager = FakeLoadBalancerManager()
@@ -402,7 +362,7 @@ def __init__(self, address=None, port=None, condition=None, weight=None,
if port is None:
port = 80
if id is None:
- id = utils.random_name()
+ id = utils.random_unicode()
super(FakeNode, self).__init__(address=address, port=port,
condition=condition, weight=weight, status=status,
parent=parent, type=type, id=id)
@@ -414,7 +374,7 @@ class FakeVirtualIP(VirtualIP):
class FakeStatusChanger(object):
check_count = 0
- id = utils.random_name()
+ id = utils.random_unicode()
@property
def status(self):
@@ -424,6 +384,44 @@ def status(self):
return "ready"
+class FakeDNSClient(CloudDNSClient):
+    def __init__(self, *args, **kwargs):
+        super(FakeDNSClient, self).__init__("fakeuser",
+                "fakepassword", *args, **kwargs)
+
+
+class FakeDNSManager(CloudDNSManager):
+    def __init__(self, api=None, *args, **kwargs):
+        if api is None:
+            api = FakeDNSClient()
+        super(FakeDNSManager, self).__init__(api, *args, **kwargs)
+        self.resource_class = FakeDNSDomain
+        self.response_key = "domain"
+        self.plural_response_key = "domains"
+        self.uri_base = "domains"
+
+
+class FakeDNSDomain(CloudDNSDomain):
+    def __init__(self, *args, **kwargs):
+        self.id = utils.random_ascii()
+        self.name = utils.random_unicode()
+        self.manager = FakeDNSManager()
+
+
+class FakeDNSRecord(CloudDNSRecord):
+    def __init__(self, mgr, info, *args, **kwargs):
+        super(FakeDNSRecord, self).__init__(mgr, info, *args, **kwargs)
+
+
+class FakeDNSPTRRecord(CloudDNSPTRRecord):
+    pass
+
+
+class FakeDNSDevice(FakeLoadBalancer):
+    def __init__(self, *args, **kwargs):
+        self.id = utils.random_unicode()
+
+
class FakeCloudNetworkClient(CloudNetworkClient):
def __init__(self, *args, **kwargs):
super(FakeCloudNetworkClient, self).__init__("fakeuser",
@@ -433,7 +431,7 @@ def __init__(self, *args, **kwargs):
class FakeCloudNetwork(CloudNetwork):
def __init__(self, *args, **kwargs):
info = kwargs.pop("info", {"fake": "fake"})
- label = kwargs.pop("label", kwargs.pop("name", utils.random_name()))
+ label = kwargs.pop("label", kwargs.pop("name", utils.random_unicode()))
info["label"] = label
super(FakeCloudNetwork, self).__init__(manager=None, info=info, *args,
**kwargs)
@@ -449,13 +447,13 @@ def __init__(self, *args, **kwargs):
class FakeAutoScalePolicy(AutoScalePolicy):
def __init__(self, *args, **kwargs):
super(FakeAutoScalePolicy, self).__init__(*args, **kwargs)
- self.id = utils.random_name(ascii_only=True)
+ self.id = utils.random_ascii()
class FakeAutoScaleWebhook(AutoScaleWebhook):
def __init__(self, *args, **kwargs):
super(FakeAutoScaleWebhook, self).__init__(*args, **kwargs)
- self.id = utils.random_name(ascii_only=True)
+ self.id = utils.random_ascii()
class FakeScalingGroupManager(ScalingGroupManager):
@@ -463,16 +461,16 @@ def __init__(self, api=None, *args, **kwargs):
if api is None:
api = FakeAutoScaleClient()
super(FakeScalingGroupManager, self).__init__(api, *args, **kwargs)
- self.id = utils.random_name(ascii_only=True)
+ self.id = utils.random_ascii()
class FakeScalingGroup(ScalingGroup):
def __init__(self, name=None, info=None, *args, **kwargs):
- name = name or utils.random_name(ascii_only=True)
+ name = name or utils.random_ascii()
info = info or {"fake": "fake", "scalingPolicies": []}
self.groupConfiguration = {}
super(FakeScalingGroup, self).__init__(name, info, *args, **kwargs)
- self.id = utils.random_name(ascii_only=True)
+ self.id = utils.random_ascii()
self.name = name
self.manager = FakeScalingGroupManager()
@@ -489,7 +487,7 @@ def __init__(self, *args, **kwargs):
super(FakeCloudMonitorEntity, self).__init__(FakeManager(), info=info,
*args, **kwargs)
self.manager.api = FakeCloudMonitorClient()
- self.id = utils.random_name()
+ self.id = utils.random_unicode()
class FakeCloudMonitorCheck(CloudMonitorCheck):
@@ -509,6 +507,48 @@ def __init__(self, *args, **kwargs):
self.id = uuid.uuid4()
+class FakeQueue(Queue):
+    def __init__(self, *args, **kwargs):
+        info = kwargs.pop("info", {"fake": "fake"})
+        info["name"] = utils.random_unicode()
+        mgr = kwargs.pop("manager", FakeQueueManager())
+        super(FakeQueue, self).__init__(manager=mgr, info=info, *args, **kwargs)
+
+
+class FakeQueueClaim(QueueClaim):
+    def __init__(self, *args, **kwargs):
+        info = kwargs.pop("info", {"fake": "fake"})
+        info["name"] = utils.random_unicode()
+        mgr = kwargs.pop("manager", FakeQueueManager())
+        super(FakeQueueClaim, self).__init__(manager=mgr, info=info, *args,
+                **kwargs)
+
+
+class FakeQueueMessage(QueueMessage):
+    def __init__(self, *args, **kwargs):
+        id_ = utils.random_unicode()
+        href = "http://example.com/%s" % id_
+        info = kwargs.pop("info", {"href": href})
+        info["name"] = utils.random_unicode()
+        mgr = kwargs.pop("manager", FakeQueueManager())
+        super(FakeQueueMessage, self).__init__(manager=mgr, info=info, *args,
+                **kwargs)
+
+
+class FakeQueueClient(QueueClient):
+    def __init__(self, *args, **kwargs):
+        super(FakeQueueClient, self).__init__("fakeuser",
+                "fakepassword", *args, **kwargs)
+
+
+class FakeQueueManager(QueueManager):
+    def __init__(self, api=None, *args, **kwargs):
+        if api is None:
+            api = FakeQueueClient()
+        super(FakeQueueManager, self).__init__(api, *args, **kwargs)
+        self.id = utils.random_ascii()
+
+
class FakeIdentity(RaxIdentity):
"""Class that returns canned authentication responses."""
def __init__(self, *args, **kwargs):
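The fakes above replace the old `utils.random_name()` helper with the more explicit `random_ascii()`/`random_unicode()` pair, so each fake states whether it needs a pure-ASCII identifier or a name that exercises unicode handling. As a rough sketch of what such helpers can look like (hypothetical implementations; pyrax's actual `utils` module may differ):

```python
import random
import string


def random_ascii(length=20):
    """Return a random string of ASCII letters, suitable for fake IDs."""
    return "".join(random.choice(string.ascii_letters) for _ in range(length))


def random_unicode(length=20):
    """Return a random string mixing ASCII with non-ASCII characters,
    so tests exercise unicode code paths."""
    chars = string.ascii_letters + "áéíóúüñß"
    return "".join(random.choice(chars) for _ in range(length))
```

Splitting the helper this way lets tests that feed values into URLs or headers (where unicode would need encoding) ask for ASCII explicitly, while everything else defaults to unicode-bearing names.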
diff --git a/tests/unit/test_autoscale.py b/tests/unit/test_autoscale.py
index 9005fe2b..958b665e 100644
--- a/tests/unit/test_autoscale.py
+++ b/tests/unit/test_autoscale.py
@@ -35,8 +35,8 @@ def tearDown(self):
def test_make_policies(self):
sg = self.scaling_group
- p1 = utils.random_name()
- p2 = utils.random_name()
+ p1 = utils.random_unicode()
+ p2 = utils.random_unicode()
sg.scalingPolicies = [{"name": p1}, {"name": p2}]
sg._make_policies()
self.assertEqual(len(sg.policies), 2)
@@ -69,11 +69,11 @@ def test_update(self):
sg = self.scaling_group
mgr = sg.manager
mgr.update = Mock()
- name = utils.random_name()
- cooldown = utils.random_name()
- min_entities = utils.random_name()
- max_entities = utils.random_name()
- metadata = utils.random_name()
+ name = utils.random_unicode()
+ cooldown = utils.random_unicode()
+ min_entities = utils.random_unicode()
+ max_entities = utils.random_unicode()
+ metadata = utils.random_unicode()
sg.update(name=name, cooldown=cooldown, min_entities=min_entities,
max_entities=max_entities, metadata=metadata)
mgr.update.assert_called_once_with(sg, name=name, cooldown=cooldown,
@@ -84,7 +84,7 @@ def test_update_metadata(self):
sg = self.scaling_group
mgr = sg.manager
mgr.update_metadata = Mock()
- metadata = utils.random_name()
+ metadata = utils.random_unicode()
sg.update_metadata(metadata)
mgr.update_metadata.assert_called_once_with(sg, metadata=metadata)
@@ -106,15 +106,15 @@ def test_update_launch_config(self):
sg = self.scaling_group
mgr = sg.manager
mgr.update_launch_config = Mock()
- server_name = utils.random_name()
- flavor = utils.random_name()
- image = utils.random_name()
- disk_config = utils.random_name()
- metadata = utils.random_name()
- personality = utils.random_name()
- networks = utils.random_name()
- load_balancers = utils.random_name()
- key_name = utils.random_name()
+ server_name = utils.random_unicode()
+ flavor = utils.random_unicode()
+ image = utils.random_unicode()
+ disk_config = utils.random_unicode()
+ metadata = utils.random_unicode()
+ personality = utils.random_unicode()
+ networks = utils.random_unicode()
+ load_balancers = utils.random_unicode()
+ key_name = utils.random_unicode()
sg.update_launch_config(server_name=server_name, flavor=flavor,
image=image, disk_config=disk_config, metadata=metadata,
personality=personality, networks=networks,
@@ -129,20 +129,20 @@ def test_update_launch_metadata(self):
sg = self.scaling_group
mgr = sg.manager
mgr.update_launch_metadata = Mock()
- metadata = utils.random_name()
+ metadata = utils.random_unicode()
sg.update_launch_metadata(metadata)
mgr.update_launch_metadata.assert_called_once_with(sg, metadata)
def test_add_policy(self):
sg = self.scaling_group
mgr = sg.manager
- name = utils.random_name()
- policy_type = utils.random_name()
- cooldown = utils.random_name()
- change = utils.random_name()
- is_percent = utils.random_name()
- desired_capacity = utils.random_name()
- args = utils.random_name()
+ name = utils.random_unicode()
+ policy_type = utils.random_unicode()
+ cooldown = utils.random_unicode()
+ change = utils.random_unicode()
+ is_percent = utils.random_unicode()
+ desired_capacity = utils.random_unicode()
+ args = utils.random_unicode()
mgr.add_policy = Mock()
sg.add_policy(name, policy_type, cooldown, change,
is_percent=is_percent, desired_capacity=desired_capacity,
@@ -161,7 +161,7 @@ def test_list_policies(self):
def test_get_policy(self):
sg = self.scaling_group
mgr = sg.manager
- pol = utils.random_name()
+ pol = utils.random_unicode()
mgr.get_policy = Mock()
sg.get_policy(pol)
mgr.get_policy.assert_called_once_with(sg, pol)
@@ -169,14 +169,14 @@ def test_get_policy(self):
def test_update_policy(self):
sg = self.scaling_group
mgr = sg.manager
- policy = utils.random_name()
- name = utils.random_name()
- policy_type = utils.random_name()
- cooldown = utils.random_name()
- change = utils.random_name()
- desired_capacity = utils.random_name()
- is_percent = utils.random_name()
- args = utils.random_name()
+ policy = utils.random_unicode()
+ name = utils.random_unicode()
+ policy_type = utils.random_unicode()
+ cooldown = utils.random_unicode()
+ change = utils.random_unicode()
+ desired_capacity = utils.random_unicode()
+ is_percent = utils.random_unicode()
+ args = utils.random_unicode()
mgr.update_policy = Mock()
sg.update_policy(policy, name=name, policy_type=policy_type,
cooldown=cooldown, change=change, is_percent=is_percent,
@@ -189,7 +189,7 @@ def test_update_policy(self):
def test_execute_policy(self):
sg = self.scaling_group
mgr = sg.manager
- pol = utils.random_name()
+ pol = utils.random_unicode()
mgr.execute_policy = Mock()
sg.execute_policy(pol)
mgr.execute_policy.assert_called_once_with(scaling_group=sg,
@@ -198,7 +198,7 @@ def test_execute_policy(self):
def test_delete_policy(self):
sg = self.scaling_group
mgr = sg.manager
- pol = utils.random_name()
+ pol = utils.random_unicode()
mgr.delete_policy = Mock()
sg.delete_policy(pol)
mgr.delete_policy.assert_called_once_with(scaling_group=sg,
@@ -207,9 +207,9 @@ def test_delete_policy(self):
def test_add_webhook(self):
sg = self.scaling_group
mgr = sg.manager
- pol = utils.random_name()
- name = utils.random_name()
- metadata = utils.random_name()
+ pol = utils.random_unicode()
+ name = utils.random_unicode()
+ metadata = utils.random_unicode()
mgr.add_webhook = Mock()
sg.add_webhook(pol, name, metadata=metadata)
mgr.add_webhook.assert_called_once_with(sg, pol, name,
@@ -218,7 +218,7 @@ def test_add_webhook(self):
def test_list_webhooks(self):
sg = self.scaling_group
mgr = sg.manager
- pol = utils.random_name()
+ pol = utils.random_unicode()
mgr.list_webhooks = Mock()
sg.list_webhooks(pol)
mgr.list_webhooks.assert_called_once_with(sg, pol)
@@ -226,10 +226,10 @@ def test_list_webhooks(self):
def test_update_webhook(self):
sg = self.scaling_group
mgr = sg.manager
- pol = utils.random_name()
- hook = utils.random_name()
- name = utils.random_name()
- metadata = utils.random_name()
+ pol = utils.random_unicode()
+ hook = utils.random_unicode()
+ name = utils.random_unicode()
+ metadata = utils.random_unicode()
mgr.update_webhook = Mock()
sg.update_webhook(pol, hook, name=name, metadata=metadata)
mgr.update_webhook.assert_called_once_with(scaling_group=sg, policy=pol,
@@ -238,9 +238,9 @@ def test_update_webhook(self):
def test_update_webhook_metadata(self):
sg = self.scaling_group
mgr = sg.manager
- pol = utils.random_name()
- hook = utils.random_name()
- metadata = utils.random_name()
+ pol = utils.random_unicode()
+ hook = utils.random_unicode()
+ metadata = utils.random_unicode()
mgr.update_webhook_metadata = Mock()
sg.update_webhook_metadata(pol, hook, metadata=metadata)
mgr.update_webhook_metadata.assert_called_once_with(sg, pol, hook,
@@ -249,8 +249,8 @@ def test_update_webhook_metadata(self):
def test_delete_webhook(self):
sg = self.scaling_group
mgr = sg.manager
- pol = utils.random_name()
- hook = utils.random_name()
+ pol = utils.random_unicode()
+ hook = utils.random_unicode()
mgr.delete_webhook = Mock()
sg.delete_webhook(pol, hook)
mgr.delete_webhook.assert_called_once_with(sg, pol, hook)
@@ -264,8 +264,8 @@ def test_policy_count(self):
def test_name(self):
sg = self.scaling_group
- name = utils.random_name()
- newname = utils.random_name()
+ name = utils.random_unicode()
+ newname = utils.random_unicode()
sg.groupConfiguration = {"name": name}
self.assertEqual(sg.name, name)
sg.name = newname
@@ -273,8 +273,8 @@ def test_name(self):
def test_cooldown(self):
sg = self.scaling_group
- cooldown = utils.random_name()
- newcooldown = utils.random_name()
+ cooldown = utils.random_unicode()
+ newcooldown = utils.random_unicode()
sg.groupConfiguration = {"cooldown": cooldown}
self.assertEqual(sg.cooldown, cooldown)
sg.cooldown = newcooldown
@@ -282,8 +282,8 @@ def test_cooldown(self):
def test_metadata(self):
sg = self.scaling_group
- metadata = utils.random_name()
- newmetadata = utils.random_name()
+ metadata = utils.random_unicode()
+ newmetadata = utils.random_unicode()
sg.groupConfiguration = {"metadata": metadata}
self.assertEqual(sg.metadata, metadata)
sg.metadata = newmetadata
@@ -291,8 +291,8 @@ def test_metadata(self):
def test_min_entities(self):
sg = self.scaling_group
- min_entities = utils.random_name()
- newmin_entities = utils.random_name()
+ min_entities = utils.random_unicode()
+ newmin_entities = utils.random_unicode()
sg.groupConfiguration = {"minEntities": min_entities}
self.assertEqual(sg.min_entities, min_entities)
sg.min_entities = newmin_entities
@@ -300,8 +300,8 @@ def test_min_entities(self):
def test_max_entities(self):
sg = self.scaling_group
- max_entities = utils.random_name()
- newmax_entities = utils.random_name()
+ max_entities = utils.random_unicode()
+ newmax_entities = utils.random_unicode()
sg.groupConfiguration = {"maxEntities": max_entities}
self.assertEqual(sg.max_entities, max_entities)
sg.max_entities = newmax_entities
@@ -310,12 +310,12 @@ def test_max_entities(self):
def test_mgr_get_state(self):
sg = self.scaling_group
mgr = sg.manager
- id1 = utils.random_name()
- id2 = utils.random_name()
- ac = utils.random_name()
- dc = utils.random_name()
- pc = utils.random_name()
- paused = utils.random_name()
+ id1 = utils.random_unicode()
+ id2 = utils.random_unicode()
+ ac = utils.random_unicode()
+ dc = utils.random_unicode()
+ pc = utils.random_unicode()
+ paused = utils.random_unicode()
statedict = {"group": {
"active": [{"id": id1}, {"id": id2}],
"activeCapacity": ac,
@@ -354,7 +354,7 @@ def test_mgr_get_configuration(self):
sg = self.scaling_group
mgr = sg.manager
uri = "/%s/%s/config" % (mgr.uri_base, sg.id)
- conf = utils.random_name()
+ conf = utils.random_unicode()
resp_body = {"groupConfiguration": conf}
mgr.api.method_get = Mock(return_value=(None, resp_body))
ret = mgr.get_configuration(sg)
@@ -366,11 +366,11 @@ def test_mgr_update(self):
mgr = sg.manager
mgr.get = Mock(return_value=sg)
uri = "/%s/%s/config" % (mgr.uri_base, sg.id)
- sg.name = utils.random_name()
- sg.cooldown = utils.random_name()
- sg.min_entities = utils.random_name()
- sg.max_entities = utils.random_name()
- metadata = utils.random_name()
+ sg.name = utils.random_unicode()
+ sg.cooldown = utils.random_unicode()
+ sg.min_entities = utils.random_unicode()
+ sg.max_entities = utils.random_unicode()
+ metadata = utils.random_unicode()
mgr.api.method_put = Mock(return_value=(None, None))
expected_body = {"name": sg.name,
"cooldown": sg.cooldown,
@@ -381,6 +381,32 @@ def test_mgr_update(self):
mgr.update(sg.id, metadata=metadata)
mgr.api.method_put.assert_called_once_with(uri, body=expected_body)
+    def test_mgr_replace(self):
+        sg = self.scaling_group
+        mgr = sg.manager
+        mgr.get = Mock(return_value=sg)
+        uri = "/%s/%s/config" % (mgr.uri_base, sg.id)
+        sg.name = utils.random_unicode()
+        sg.cooldown = utils.random_unicode()
+        sg.min_entities = utils.random_unicode()
+        sg.max_entities = utils.random_unicode()
+        metadata = utils.random_unicode()
+
+        new_name = utils.random_unicode()
+        new_cooldown = utils.random_unicode()
+        new_min = utils.random_unicode()
+        new_max = utils.random_unicode()
+        mgr.api.method_put = Mock(return_value=(None, None))
+        expected_body = {
+            "name": new_name,
+            "cooldown": new_cooldown,
+            "minEntities": new_min,
+            "maxEntities": new_max,
+            "metadata": {}
+        }
+        mgr.replace(sg.id, new_name, new_cooldown, new_min, new_max)
+        mgr.api.method_put.assert_called_once_with(uri, body=expected_body)
+
def test_mgr_update_metadata(self):
sg = self.scaling_group
mgr = sg.manager
@@ -396,16 +422,16 @@ def test_mgr_update_metadata(self):
def test_mgr_get_launch_config(self):
sg = self.scaling_group
mgr = sg.manager
- typ = utils.random_name()
- lbs = utils.random_name()
- name = utils.random_name()
- flv = utils.random_name()
- img = utils.random_name()
- dconfig = utils.random_name()
- metadata = utils.random_name()
- personality = utils.random_name()
- networks = utils.random_name()
- key_name = utils.random_name()
+ typ = utils.random_unicode()
+ lbs = utils.random_unicode()
+ name = utils.random_unicode()
+ flv = utils.random_unicode()
+ img = utils.random_unicode()
+ dconfig = utils.random_unicode()
+ metadata = utils.random_unicode()
+ personality = utils.random_unicode()
+ networks = utils.random_unicode()
+ key_name = utils.random_unicode()
launchdict = {"launchConfiguration": {
"type": typ,
"args": {
@@ -445,15 +471,15 @@ def test_mgr_update_launch_config(self):
sg = self.scaling_group
mgr = sg.manager
mgr.get = Mock(return_value=sg)
- typ = utils.random_name()
- lbs = utils.random_name()
- name = utils.random_name()
- flv = utils.random_name()
- img = utils.random_name()
- dconfig = utils.random_name()
- metadata = utils.random_name()
- personality = utils.random_name()
- networks = utils.random_name()
+ typ = utils.random_unicode()
+ lbs = utils.random_unicode()
+ name = utils.random_unicode()
+ flv = utils.random_unicode()
+ img = utils.random_unicode()
+ dconfig = utils.random_unicode()
+ metadata = utils.random_unicode()
+ personality = utils.random_unicode()
+ networks = utils.random_unicode()
sg.launchConfiguration = {}
body = {"type": "launch_server",
"args": {
@@ -476,6 +502,88 @@ def test_mgr_update_launch_config(self):
personality=personality, networks=networks, load_balancers=lbs)
mgr.api.method_put.assert_called_once_with(uri, body=body)
+    def test_mgr_update_launch_config_key_name(self):
+        sg = self.scaling_group
+        mgr = sg.manager
+        mgr.get = Mock(return_value=sg)
+        typ = utils.random_unicode()
+        lbs = utils.random_unicode()
+        name = utils.random_unicode()
+        flv = utils.random_unicode()
+        img = utils.random_unicode()
+        dconfig = utils.random_unicode()
+        metadata = utils.random_unicode()
+        personality = utils.random_unicode()
+        networks = utils.random_unicode()
+        key_name = utils.random_unicode()
+        sg.launchConfiguration = {}
+        body = {"type": "launch_server",
+                "args": {
+                    "server": key_name,
+                    "loadBalancers": lbs,
+                },
+            }
+        mgr.api.method_put = Mock(return_value=(None, None))
+        uri = "/%s/%s/launch" % (mgr.uri_base, sg.id)
+        mgr.update_launch_config(sg.id, server_name=name, flavor=flv, image=img,
+                disk_config=dconfig, metadata=metadata,
+                personality=personality, networks=networks, load_balancers=lbs,
+                key_name=key_name)
+        mgr.api.method_put.assert_called_once_with(uri, body=body)
+
+    def test_mgr_replace_launch_config(self):
+        sg = self.scaling_group
+        mgr = sg.manager
+        mgr.get = Mock(return_value=sg)
+        typ = utils.random_unicode()
+        lbs = utils.random_unicode()
+        name = utils.random_unicode()
+        flv = utils.random_unicode()
+        img = utils.random_unicode()
+        dconfig = utils.random_unicode()
+        metadata = utils.random_unicode()
+        personality = utils.random_unicode()
+        networks = utils.random_unicode()
+
+        sg.launchConfiguration = {
+            "type": typ,
+            "args": {
+                "server": {
+                    "name": name,
+                    "imageRef": img,
+                    "flavorRef": flv,
+                    "OS-DCF:diskConfig": dconfig,
+                    "personality": personality,
+                    "networks": networks,
+                    "metadata": metadata,
+                },
+                "loadBalancers": lbs,
+            },
+        }
+        new_typ = utils.random_unicode()
+        new_name = utils.random_unicode()
+        new_flv = utils.random_unicode()
+        new_img = utils.random_unicode()
+
+        expected = {
+            "type": new_typ,
+            "args": {
+                "server": {
+                    "name": new_name,
+                    "imageRef": new_img,
+                    "flavorRef": new_flv,
+                },
+                "loadBalancers": []
+            }
+        }
+
+        mgr.api.method_put = Mock(return_value=(None, None))
+        uri = "/%s/%s/launch" % (mgr.uri_base, sg.id)
+
+        mgr.replace_launch_config(sg.id, launch_config_type=new_typ,
+                server_name=new_name, flavor=new_flv, image=new_img)
+        mgr.api.method_put.assert_called_once_with(uri, body=expected)
+
def test_mgr_update_launch_metadata(self):
sg = self.scaling_group
mgr = sg.manager
@@ -495,10 +603,10 @@ def test_mgr_add_policy(self):
ret_body = {"policies": [{}]}
mgr.api.method_post = Mock(return_value=(None, ret_body))
uri = "/%s/%s/policies" % (mgr.uri_base, sg.id)
- name = utils.random_name()
- ptype = utils.random_name()
- cooldown = utils.random_name()
- change = utils.random_name()
+ name = utils.random_unicode()
+ ptype = utils.random_unicode()
+ cooldown = utils.random_unicode()
+ change = utils.random_unicode()
for is_percent in (True, False):
post_body = {"name": name, "cooldown": cooldown, "type": ptype}
if is_percent:
@@ -510,22 +618,50 @@ def test_mgr_add_policy(self):
mgr.api.method_post.assert_called_with(uri, body=[post_body])
self.assert_(isinstance(ret, AutoScalePolicy))
+    def test_mgr_create_policy_body(self):
+        sg = self.scaling_group
+        mgr = sg.manager
+        name = utils.random_unicode()
+        ptype = utils.random_unicode()
+        cooldown = utils.random_unicode()
+        desired_capacity = utils.random_unicode()
+        args = utils.random_unicode()
+        change = utils.random_unicode()
+        expected_pct = {"name": name,
+                "cooldown": cooldown,
+                "type": ptype,
+                "desiredCapacity": desired_capacity,
+                "args": args
+                }
+        expected_nopct = expected_pct.copy()
+        expected_pct["changePercent"] = change
+        expected_nopct["change"] = change
+        ret_pct = mgr._create_policy_body(name, ptype, cooldown, change=change,
+                is_percent=True, desired_capacity=desired_capacity, args=args)
+        ret_nopct = mgr._create_policy_body(name, ptype, cooldown,
+                change=change, is_percent=False,
+                desired_capacity=desired_capacity, args=args)
+        self.assertEqual(ret_nopct, expected_nopct)
+        self.assertEqual(ret_pct, expected_pct)
+
def test_mgr_add_policy_desired_capacity(self):
sg = self.scaling_group
mgr = sg.manager
ret_body = {"policies": [{}]}
mgr.api.method_post = Mock(return_value=(None, ret_body))
uri = "/%s/%s/policies" % (mgr.uri_base, sg.id)
- name = utils.random_name()
- ptype = utils.random_name()
- cooldown = utils.random_name()
- desired_capacity = utils.random_name()
- post_body = {"name": name,
- "cooldown": cooldown,
- "type": ptype,
- "desiredCapacity": desired_capacity}
+ name = utils.random_unicode()
+ ptype = utils.random_unicode()
+ cooldown = utils.random_unicode()
+ desired_capacity = utils.random_unicode()
+ post_body = {
+ "name": name,
+ "cooldown": cooldown,
+ "type": ptype,
+ "desiredCapacity": desired_capacity,
+ }
ret = mgr.add_policy(sg, name, ptype, cooldown,
- desired_capacity=desired_capacity)
+ desired_capacity=desired_capacity)
mgr.api.method_post.assert_called_with(uri, body=[post_body])
self.assert_(isinstance(ret, AutoScalePolicy))
@@ -541,7 +677,7 @@ def test_mgr_list_policies(self):
def test_mgr_get_policy(self):
sg = self.scaling_group
mgr = sg.manager
- pol = utils.random_name()
+ pol = utils.random_unicode()
ret_body = {"policy": {}}
uri = "/%s/%s/policies/%s" % (mgr.uri_base, sg.id, pol)
mgr.api.method_get = Mock(return_value=(None, ret_body))
@@ -549,15 +685,47 @@ def test_mgr_get_policy(self):
self.assert_(isinstance(ret, AutoScalePolicy))
mgr.api.method_get.assert_called_once_with(uri)
+    def test_mgr_replace_policy(self):
+        sg = self.scaling_group
+        mgr = sg.manager
+        pol_id = utils.random_unicode()
+        info = {
+            "name": utils.random_unicode(),
+            "type": utils.random_unicode(),
+            "cooldown": utils.random_unicode(),
+            "change": utils.random_unicode(),
+            "args": utils.random_unicode(),
+        }
+        policy = fakes.FakeAutoScalePolicy(mgr, info, sg)
+        mgr.get_policy = Mock(return_value=policy)
+
+        new_name = utils.random_unicode()
+        new_type = utils.random_unicode()
+        new_cooldown = utils.random_unicode()
+        new_change_percent = utils.random_unicode()
+
+        mgr.api.method_put = Mock(return_value=(None, None))
+        uri = "/%s/%s/policies/%s" % (mgr.uri_base, sg.id, pol_id)
+        expected = {
+            "name": new_name,
+            "type": new_type,
+            "cooldown": new_cooldown,
+            "changePercent": new_change_percent,
+        }
+        ret = mgr.replace_policy(sg, pol_id, name=new_name,
+                policy_type=new_type, cooldown=new_cooldown,
+                change=new_change_percent, is_percent=True)
+        mgr.api.method_put.assert_called_with(uri, body=expected)
+
def test_mgr_update_policy(self):
sg = self.scaling_group
mgr = sg.manager
- pol = utils.random_name()
- name = utils.random_name()
- ptype = utils.random_name()
- cooldown = utils.random_name()
- change = utils.random_name()
- args = utils.random_name()
+ pol = utils.random_unicode()
+ name = utils.random_unicode()
+ ptype = utils.random_unicode()
+ cooldown = utils.random_unicode()
+ change = utils.random_unicode()
+ args = utils.random_unicode()
mgr.get_policy = Mock(return_value=fakes.FakeAutoScalePolicy(mgr, {},
sg))
mgr.api.method_put = Mock(return_value=(None, None))
@@ -577,16 +745,16 @@ def test_mgr_update_policy(self):
def test_mgr_update_policy_desired_to_desired(self):
sg = self.scaling_group
mgr = sg.manager
- pol = utils.random_name()
- name = utils.random_name()
- ptype = utils.random_name()
- cooldown = utils.random_name()
- change = utils.random_name()
- args = utils.random_name()
+ pol = utils.random_unicode()
+ name = utils.random_unicode()
+ ptype = utils.random_unicode()
+ cooldown = utils.random_unicode()
+ change = utils.random_unicode()
+ args = utils.random_unicode()
new_desired_capacity = 10
old_info = {"desiredCapacity": 0}
mgr.get_policy = Mock(
- return_value=fakes.FakeAutoScalePolicy(mgr, old_info, sg))
+ return_value=fakes.FakeAutoScalePolicy(mgr, old_info, sg))
mgr.api.method_put = Mock(return_value=(None, None))
uri = "/%s/%s/policies/%s" % (mgr.uri_base, sg.id, pol)
put_body = {"name": name, "cooldown": cooldown, "type": ptype,
@@ -598,16 +766,16 @@ def test_mgr_update_policy_desired_to_desired(self):
def test_mgr_update_policy_change_to_desired(self):
sg = self.scaling_group
mgr = sg.manager
- pol = utils.random_name()
- name = utils.random_name()
- ptype = utils.random_name()
- cooldown = utils.random_name()
- change = utils.random_name()
- args = utils.random_name()
+ pol = utils.random_unicode()
+ name = utils.random_unicode()
+ ptype = utils.random_unicode()
+ cooldown = utils.random_unicode()
+ change = utils.random_unicode()
+ args = utils.random_unicode()
new_desired_capacity = 10
old_info = {"change": -1}
mgr.get_policy = Mock(
- return_value=fakes.FakeAutoScalePolicy(mgr, old_info, sg))
+ return_value=fakes.FakeAutoScalePolicy(mgr, old_info, sg))
mgr.api.method_put = Mock(return_value=(None, None))
uri = "/%s/%s/policies/%s" % (mgr.uri_base, sg.id, pol)
put_body = {"name": name, "cooldown": cooldown, "type": ptype,
@@ -619,16 +787,16 @@ def test_mgr_update_policy_change_to_desired(self):
def test_mgr_update_policy_desired_to_change(self):
sg = self.scaling_group
mgr = sg.manager
- pol = utils.random_name()
- name = utils.random_name()
- ptype = utils.random_name()
- cooldown = utils.random_name()
- change = utils.random_name()
- args = utils.random_name()
+ pol = utils.random_unicode()
+ name = utils.random_unicode()
+ ptype = utils.random_unicode()
+ cooldown = utils.random_unicode()
+ change = utils.random_unicode()
+ args = utils.random_unicode()
new_change = 1
old_info = {"desiredCapacity": 0}
mgr.get_policy = Mock(
- return_value=fakes.FakeAutoScalePolicy(mgr, old_info, sg))
+ return_value=fakes.FakeAutoScalePolicy(mgr, old_info, sg))
mgr.api.method_put = Mock(return_value=(None, None))
uri = "/%s/%s/policies/%s" % (mgr.uri_base, sg.id, pol)
put_body = {"name": name, "cooldown": cooldown, "type": ptype,
@@ -640,19 +808,21 @@ def test_mgr_update_policy_desired_to_change(self):
def test_mgr_update_policy_maintain_desired_capacity(self):
sg = self.scaling_group
mgr = sg.manager
- pol = utils.random_name()
- name = utils.random_name()
- ptype = utils.random_name()
- cooldown = utils.random_name()
- change = utils.random_name()
- args = utils.random_name()
- new_name = utils.random_name()
+ pol = utils.random_unicode()
+ name = utils.random_unicode()
+ ptype = utils.random_unicode()
+ cooldown = utils.random_unicode()
+ change = utils.random_unicode()
+ args = utils.random_unicode()
+ new_name = utils.random_unicode()
old_capacity = 0
- old_info = {"type": ptype,
- "desiredCapacity": old_capacity,
- "cooldown": cooldown}
+ old_info = {
+ "type": ptype,
+ "desiredCapacity": old_capacity,
+ "cooldown": cooldown,
+ }
mgr.get_policy = Mock(
- return_value=fakes.FakeAutoScalePolicy(mgr, old_info, sg))
+ return_value=fakes.FakeAutoScalePolicy(mgr, old_info, sg))
mgr.api.method_put = Mock(return_value=(None, None))
uri = "/%s/%s/policies/%s" % (mgr.uri_base, sg.id, pol)
put_body = {"name": new_name, "cooldown": cooldown, "type": ptype,
@@ -663,19 +833,21 @@ def test_mgr_update_policy_maintain_desired_capacity(self):
def test_mgr_update_policy_maintain_is_percent(self):
sg = self.scaling_group
mgr = sg.manager
- pol = utils.random_name()
- name = utils.random_name()
- ptype = utils.random_name()
- cooldown = utils.random_name()
- change = utils.random_name()
- args = utils.random_name()
- new_name = utils.random_name()
+ pol = utils.random_unicode()
+ name = utils.random_unicode()
+ ptype = utils.random_unicode()
+ cooldown = utils.random_unicode()
+ change = utils.random_unicode()
+ args = utils.random_unicode()
+ new_name = utils.random_unicode()
old_percent = 10
- old_info = {"type": ptype,
- "changePercent": old_percent,
- "cooldown": cooldown}
+ old_info = {
+ "type": ptype,
+ "changePercent": old_percent,
+ "cooldown": cooldown,
+ }
mgr.get_policy = Mock(
- return_value=fakes.FakeAutoScalePolicy(mgr, old_info, sg))
+ return_value=fakes.FakeAutoScalePolicy(mgr, old_info, sg))
mgr.api.method_put = Mock(return_value=(None, None))
uri = "/%s/%s/policies/%s" % (mgr.uri_base, sg.id, pol)
put_body = {"name": new_name, "cooldown": cooldown, "type": ptype,
@@ -686,7 +858,7 @@ def test_mgr_update_policy_maintain_is_percent(self):
def test_mgr_execute_policy(self):
sg = self.scaling_group
mgr = sg.manager
- pol = utils.random_name()
+ pol = utils.random_unicode()
uri = "/%s/%s/policies/%s/execute" % (mgr.uri_base, sg.id, pol)
mgr.api.method_post = Mock(return_value=(None, None))
mgr.execute_policy(sg, pol)
@@ -695,7 +867,7 @@ def test_mgr_execute_policy(self):
def test_mgr_delete_policy(self):
sg = self.scaling_group
mgr = sg.manager
- pol = utils.random_name()
+ pol = utils.random_unicode()
uri = "/%s/%s/policies/%s" % (mgr.uri_base, sg.id, pol)
mgr.api.method_delete = Mock(return_value=(None, None))
mgr.delete_policy(sg, pol)
@@ -704,24 +876,19 @@ def test_mgr_delete_policy(self):
def test_mgr_add_webhook(self):
sg = self.scaling_group
mgr = sg.manager
- pol = utils.random_name()
+ pol = utils.random_unicode()
ret_body = {"webhooks": [{}]}
mgr.api.method_post = Mock(return_value=(None, ret_body))
uri = "/%s/%s/policies/%s/webhooks" % (mgr.uri_base, sg.id, pol)
mgr.get_policy = Mock(return_value=fakes.FakeAutoScalePolicy(mgr, {},
sg))
- name = utils.random_name()
- metadata = utils.random_name()
+ name = utils.random_unicode()
+ metadata = utils.random_unicode()
post_body = {"name": name, "metadata": metadata}
ret = mgr.add_webhook(sg, pol, name, metadata=metadata)
mgr.api.method_post.assert_called_with(uri, body=[post_body])
self.assert_(isinstance(ret, AutoScaleWebhook))
-
-
-
-
-
def test_mgr_list_webhooks(self):
sg = self.scaling_group
mgr = sg.manager
@@ -738,7 +905,7 @@ def test_mgr_get_webhook(self):
sg = self.scaling_group
mgr = sg.manager
pol = fakes.FakeAutoScalePolicy(mgr, {}, sg)
- hook = utils.random_name()
+ hook = utils.random_unicode()
ret_body = {"webhook": {}}
uri = "/%s/%s/policies/%s/webhooks/%s" % (mgr.uri_base, sg.id, pol.id,
hook)
@@ -747,14 +914,32 @@ def test_mgr_get_webhook(self):
self.assert_(isinstance(ret, AutoScaleWebhook))
mgr.api.method_get.assert_called_once_with(uri)
+    def test_mgr_replace_webhook(self):
+        sg = self.scaling_group
+        mgr = sg.manager
+        pol = fakes.FakeAutoScalePolicy(mgr, {}, sg)
+        hook = utils.random_unicode()
+        info = {"name": utils.random_unicode(),
+                "metadata": utils.random_unicode()}
+        hook_obj = fakes.FakeAutoScaleWebhook(mgr, info, pol, sg)
+        new_name = utils.random_unicode()
+        new_metadata = utils.random_unicode()
+        mgr.get_webhook = Mock(return_value=hook_obj)
+        mgr.api.method_put = Mock(return_value=(None, None))
+        uri = "/%s/%s/policies/%s/webhooks/%s" % (mgr.uri_base, sg.id, pol.id,
+                hook)
+        expected = {"name": new_name, "metadata": {}}
+        ret = mgr.replace_webhook(sg, pol, hook, name=new_name)
+        mgr.api.method_put.assert_called_with(uri, body=expected)
+
def test_mgr_update_webhook(self):
sg = self.scaling_group
mgr = sg.manager
pol = fakes.FakeAutoScalePolicy(mgr, {}, sg)
- hook = utils.random_name()
- hook_obj = fakes.FakeAutoScaleWebhook(mgr, {}, pol)
- name = utils.random_name()
- metadata = utils.random_name()
+ hook = utils.random_unicode()
+ hook_obj = fakes.FakeAutoScaleWebhook(mgr, {}, pol, sg)
+ name = utils.random_unicode()
+ metadata = utils.random_unicode()
mgr.get_webhook = Mock(return_value=hook_obj)
mgr.api.method_put = Mock(return_value=(None, None))
uri = "/%s/%s/policies/%s/webhooks/%s" % (mgr.uri_base, sg.id, pol.id,
@@ -767,8 +952,8 @@ def test_mgr_update_webhook_metadata(self):
sg = self.scaling_group
mgr = sg.manager
pol = fakes.FakeAutoScalePolicy(mgr, {}, sg)
- hook = utils.random_name()
- hook_obj = fakes.FakeAutoScaleWebhook(mgr, {}, pol)
+ hook = utils.random_unicode()
+ hook_obj = fakes.FakeAutoScaleWebhook(mgr, {}, pol, sg)
hook_obj.metadata = {"orig": "orig"}
metadata = {"new": "new"}
expected = hook_obj.metadata.copy()
@@ -785,8 +970,8 @@ def test_mgr_delete_webhook(self):
sg = self.scaling_group
mgr = sg.manager
pol = fakes.FakeAutoScalePolicy(mgr, {}, sg)
- hook = utils.random_name()
- hook_obj = fakes.FakeAutoScaleWebhook(mgr, {}, pol)
+ hook = utils.random_unicode()
+ hook_obj = fakes.FakeAutoScaleWebhook(mgr, {}, pol, sg)
uri = "/%s/%s/policies/%s/webhooks/%s" % (mgr.uri_base, sg.id, pol.id,
hook)
mgr.api.method_delete = Mock(return_value=(None, None))
@@ -797,8 +982,8 @@ def test_mgr_delete_webhook(self):
def test_mgr_resolve_lbs_dict(self):
sg = self.scaling_group
mgr = sg.manager
- key = utils.random_name()
- val = utils.random_name()
+ key = utils.random_unicode()
+ val = utils.random_unicode()
lb_dict = {key: val}
ret = mgr._resolve_lbs(lb_dict)
self.assertEqual(ret, [lb_dict])
@@ -808,8 +993,7 @@ def test_mgr_resolve_lbs_clb(self):
mgr = sg.manager
clb = fakes.FakeLoadBalancer(None, {})
ret = mgr._resolve_lbs(clb)
- expected = {"loadBalancerId": clb.id,
- "port": clb.port}
+ expected = {"loadBalancerId": clb.id, "port": clb.port}
self.assertEqual(ret, [expected])
def test_mgr_resolve_lbs_id(self):
@@ -824,8 +1008,7 @@ def get(self, *args, **kwargs):
pyrax.cloud_loadbalancers = PyrCLB()
ret = mgr._resolve_lbs("fakeid")
- expected = {"loadBalancerId": clb.id,
- "port": clb.port}
+ expected = {"loadBalancerId": clb.id, "port": clb.port}
self.assertEqual(ret, [expected])
pyrax.cloud_loadbalancers = sav
@@ -839,17 +1022,16 @@ def test_mgr_resolve_lbs_id_fail(self):
def test_mgr_create_body(self):
sg = self.scaling_group
mgr = sg.manager
- name = utils.random_name()
- cooldown = utils.random_name()
- min_entities = utils.random_name()
- max_entities = utils.random_name()
- launch_config_type = utils.random_name()
- flavor = utils.random_name()
- server_name = utils.random_name()
- image = utils.random_name()
- group_metadata = utils.random_name()
- server_metadata = utils.random_name()
- key_name = utils.random_name()
+ name = utils.random_unicode()
+ cooldown = utils.random_unicode()
+ min_entities = utils.random_unicode()
+ max_entities = utils.random_unicode()
+ launch_config_type = utils.random_unicode()
+ flavor = utils.random_unicode()
+ server_name = utils.random_unicode()
+ image = utils.random_unicode()
+ group_metadata = utils.random_unicode()
+ key_name = utils.random_unicode()
expected = {
"groupConfiguration": {
"cooldown": cooldown,
@@ -864,7 +1046,7 @@ def test_mgr_create_body(self):
"OS-DCF:diskConfig": "AUTO",
"flavorRef": flavor,
"imageRef": image,
- "metadata": server_metadata,
+ "metadata": {},
"name": server_name,
"networks": [{"uuid": SERVICE_NET_ID}],
"personality": [],
@@ -876,7 +1058,7 @@ def test_mgr_create_body(self):
self.maxDiff = 1000000
ret = mgr._create_body(name, cooldown, min_entities, max_entities,
launch_config_type, server_name, image, flavor,
- disk_config=None, metadata=server_metadata, personality=None,
+ disk_config=None, metadata=None, personality=None,
networks=None, load_balancers=None, scaling_policies=None,
group_metadata=group_metadata, key_name=key_name)
self.assertEqual(ret, expected)
@@ -908,13 +1090,13 @@ def test_policy_update(self):
sg = self.scaling_group
mgr = sg.manager
pol = fakes.FakeAutoScalePolicy(mgr, {}, sg)
- name = utils.random_name()
- policy_type = utils.random_name()
- cooldown = utils.random_name()
- change = utils.random_name()
- is_percent = utils.random_name()
- desired_capacity = utils.random_name()
- args = utils.random_name()
+ name = utils.random_unicode()
+ policy_type = utils.random_unicode()
+ cooldown = utils.random_unicode()
+ change = utils.random_unicode()
+ is_percent = utils.random_unicode()
+ desired_capacity = utils.random_unicode()
+ args = utils.random_unicode()
mgr.update_policy = Mock()
pol.update(name=name, policy_type=policy_type, cooldown=cooldown,
change=change, is_percent=is_percent,
@@ -937,8 +1119,8 @@ def test_policy_add_webhook(self):
mgr = sg.manager
pol = fakes.FakeAutoScalePolicy(mgr, {}, sg)
mgr.add_webhook = Mock()
- name = utils.random_name()
- metadata = utils.random_name()
+ name = utils.random_unicode()
+ metadata = utils.random_unicode()
pol.add_webhook(name, metadata=metadata)
mgr.add_webhook.assert_called_once_with(sg, pol, name,
metadata=metadata)
@@ -955,7 +1137,7 @@ def test_policy_get_webhook(self):
sg = self.scaling_group
mgr = sg.manager
pol = fakes.FakeAutoScalePolicy(mgr, {}, sg)
- hook = utils.random_name()
+ hook = utils.random_unicode()
mgr.get_webhook = Mock()
pol.get_webhook(hook)
mgr.get_webhook.assert_called_once_with(sg, pol, hook)
@@ -964,9 +1146,9 @@ def test_policy_update_webhook(self):
sg = self.scaling_group
mgr = sg.manager
pol = fakes.FakeAutoScalePolicy(mgr, {}, sg)
- hook = utils.random_name()
- name = utils.random_name()
- metadata = utils.random_name()
+ hook = utils.random_unicode()
+ name = utils.random_unicode()
+ metadata = utils.random_unicode()
mgr.update_webhook = Mock()
pol.update_webhook(hook, name=name, metadata=metadata)
mgr.update_webhook.assert_called_once_with(sg, policy=pol, webhook=hook,
@@ -976,8 +1158,8 @@ def test_policy_update_webhook_metadata(self):
sg = self.scaling_group
mgr = sg.manager
pol = fakes.FakeAutoScalePolicy(mgr, {}, sg)
- hook = utils.random_name()
- metadata = utils.random_name()
+ hook = utils.random_unicode()
+ metadata = utils.random_unicode()
mgr.update_webhook_metadata = Mock()
pol.update_webhook_metadata(hook, metadata=metadata)
mgr.update_webhook_metadata.assert_called_once_with(sg, pol, hook,
@@ -987,7 +1169,7 @@ def test_policy_delete_webhook(self):
sg = self.scaling_group
mgr = sg.manager
pol = fakes.FakeAutoScalePolicy(mgr, {}, sg)
- hook = utils.random_name()
+ hook = utils.random_unicode()
mgr.delete_webhook = Mock()
pol.delete_webhook(hook)
mgr.delete_webhook.assert_called_once_with(sg, pol, hook)
@@ -996,7 +1178,7 @@ def test_webhook_get(self):
sg = self.scaling_group
mgr = sg.manager
pol = fakes.FakeAutoScalePolicy(mgr, {}, sg)
- hook = fakes.FakeAutoScaleWebhook(mgr, {}, pol)
+ hook = fakes.FakeAutoScaleWebhook(mgr, {}, pol, sg)
pol.get_webhook = Mock()
hook.get()
pol.get_webhook.assert_called_once_with(hook)
@@ -1005,9 +1187,9 @@ def test_webhook_update(self):
sg = self.scaling_group
mgr = sg.manager
pol = fakes.FakeAutoScalePolicy(mgr, {}, sg)
- hook = fakes.FakeAutoScaleWebhook(mgr, {}, pol)
- name = utils.random_name()
- metadata = utils.random_name()
+ hook = fakes.FakeAutoScaleWebhook(mgr, {}, pol, sg)
+ name = utils.random_unicode()
+ metadata = utils.random_unicode()
pol.update_webhook = Mock()
hook.update(name=name, metadata=metadata)
pol.update_webhook.assert_called_once_with(hook, name=name,
@@ -1017,8 +1199,8 @@ def test_webhook_update_metadata(self):
sg = self.scaling_group
mgr = sg.manager
pol = fakes.FakeAutoScalePolicy(mgr, {}, sg)
- hook = fakes.FakeAutoScaleWebhook(mgr, {}, pol)
- metadata = utils.random_name()
+ hook = fakes.FakeAutoScaleWebhook(mgr, {}, pol, sg)
+ metadata = utils.random_unicode()
pol.update_webhook_metadata = Mock()
hook.update_metadata(metadata=metadata)
pol.update_webhook_metadata.assert_called_once_with(hook,
@@ -1028,7 +1210,7 @@ def test_webhook_delete(self):
sg = self.scaling_group
mgr = sg.manager
pol = fakes.FakeAutoScalePolicy(mgr, {}, sg)
- hook = fakes.FakeAutoScaleWebhook(mgr, {}, pol)
+ hook = fakes.FakeAutoScaleWebhook(mgr, {}, pol, sg)
pol.delete_webhook = Mock()
hook.delete()
pol.delete_webhook.assert_called_once_with(hook)
@@ -1057,15 +1239,30 @@ def test_clt_resume(self):
clt.resume(sg)
mgr.resume.assert_called_once_with(sg)
+ def test_clt_replace(self):
+ clt = fakes.FakeAutoScaleClient()
+ mgr = clt._manager
+ sg = self.scaling_group
+ name = utils.random_unicode()
+ cooldown = utils.random_unicode()
+ min_entities = utils.random_unicode()
+ max_entities = utils.random_unicode()
+ metadata = utils.random_unicode()
+ mgr.replace = Mock()
+ clt.replace(sg, name, cooldown, min_entities, max_entities,
+ metadata=metadata)
+ mgr.replace.assert_called_once_with(sg, name, cooldown, min_entities,
+ max_entities, metadata=metadata)
+
def test_clt_update(self):
clt = fakes.FakeAutoScaleClient()
mgr = clt._manager
sg = self.scaling_group
- name = utils.random_name()
- cooldown = utils.random_name()
- min_entities = utils.random_name()
- max_entities = utils.random_name()
- metadata = utils.random_name()
+ name = utils.random_unicode()
+ cooldown = utils.random_unicode()
+ min_entities = utils.random_unicode()
+ max_entities = utils.random_unicode()
+ metadata = utils.random_unicode()
mgr.update = Mock()
clt.update(sg, name=name, cooldown=cooldown, min_entities=min_entities,
max_entities=max_entities, metadata=metadata)
@@ -1077,7 +1274,7 @@ def test_clt_update_metadata(self):
clt = fakes.FakeAutoScaleClient()
mgr = clt._manager
sg = self.scaling_group
- metadata = utils.random_name()
+ metadata = utils.random_unicode()
mgr.update_metadata = Mock()
clt.update_metadata(sg, metadata)
mgr.update_metadata.assert_called_once_with(sg, metadata)
@@ -1098,25 +1295,49 @@ def test_clt_get_launch_config(self):
clt.get_launch_config(sg)
mgr.get_launch_config.assert_called_once_with(sg)
+ def test_clt_replace_launch_config(self):
+ clt = fakes.FakeAutoScaleClient()
+ mgr = clt._manager
+ sg = self.scaling_group
+ mgr.replace_launch_config = Mock()
+ launch_config_type = utils.random_unicode()
+ server_name = utils.random_unicode()
+ image = utils.random_unicode()
+ flavor = utils.random_unicode()
+ disk_config = utils.random_unicode()
+ metadata = utils.random_unicode()
+ personality = utils.random_unicode()
+ networks = utils.random_unicode()
+ load_balancers = utils.random_unicode()
+ key_name = utils.random_unicode()
+ clt.replace_launch_config(sg, launch_config_type, server_name, image,
+ flavor, disk_config=disk_config, metadata=metadata,
+ personality=personality, networks=networks,
+ load_balancers=load_balancers, key_name=key_name)
+ mgr.replace_launch_config.assert_called_once_with(sg,
+ launch_config_type, server_name, image, flavor,
+ disk_config=disk_config, metadata=metadata,
+ personality=personality, networks=networks,
+ load_balancers=load_balancers, key_name=key_name)
+
def test_clt_update_launch_config(self):
clt = fakes.FakeAutoScaleClient()
mgr = clt._manager
sg = self.scaling_group
mgr.update_launch_config = Mock()
- server_name = utils.random_name()
- flavor = utils.random_name()
- image = utils.random_name()
- disk_config = utils.random_name()
- metadata = utils.random_name()
- personality = utils.random_name()
- networks = utils.random_name()
- load_balancers = utils.random_name()
- key_name = utils.random_name()
+ server_name = utils.random_unicode()
+ flavor = utils.random_unicode()
+ image = utils.random_unicode()
+ disk_config = utils.random_unicode()
+ metadata = utils.random_unicode()
+ personality = utils.random_unicode()
+ networks = utils.random_unicode()
+ load_balancers = utils.random_unicode()
+ key_name = utils.random_unicode()
clt.update_launch_config(sg, server_name=server_name, flavor=flavor,
image=image, disk_config=disk_config, metadata=metadata,
personality=personality, networks=networks,
- load_balancers=load_balancers,
- key_name=key_name)
+ load_balancers=load_balancers, key_name=key_name)
mgr.update_launch_config.assert_called_once_with(sg,
server_name=server_name, flavor=flavor, image=image,
disk_config=disk_config, metadata=metadata,
@@ -1128,7 +1349,7 @@ def test_clt_update_launch_metadata(self):
mgr = clt._manager
sg = self.scaling_group
mgr.update_launch_metadata = Mock()
- metadata = utils.random_name()
+ metadata = utils.random_unicode()
clt.update_launch_metadata(sg, metadata)
mgr.update_launch_metadata.assert_called_once_with(sg, metadata)
@@ -1136,13 +1357,13 @@ def test_clt_add_policy(self):
clt = fakes.FakeAutoScaleClient()
mgr = clt._manager
sg = self.scaling_group
- name = utils.random_name()
- policy_type = utils.random_name()
- cooldown = utils.random_name()
- change = utils.random_name()
- is_percent = utils.random_name()
- desired_capacity = utils.random_name()
- args = utils.random_name()
+ name = utils.random_unicode()
+ policy_type = utils.random_unicode()
+ cooldown = utils.random_unicode()
+ change = utils.random_unicode()
+ is_percent = utils.random_unicode()
+ desired_capacity = utils.random_unicode()
+ args = utils.random_unicode()
mgr.add_policy = Mock()
clt.add_policy(sg, name, policy_type, cooldown, change,
is_percent=is_percent, desired_capacity=desired_capacity,
@@ -1163,23 +1384,43 @@ def test_clt_get_policy(self):
clt = fakes.FakeAutoScaleClient()
mgr = clt._manager
sg = self.scaling_group
- pol = utils.random_name()
+ pol = utils.random_unicode()
mgr.get_policy = Mock()
clt.get_policy(sg, pol)
mgr.get_policy.assert_called_once_with(sg, pol)
+ def test_clt_replace_policy(self):
+ clt = fakes.FakeAutoScaleClient()
+ mgr = clt._manager
+ sg = self.scaling_group
+ pol = utils.random_unicode()
+ name = utils.random_unicode()
+ policy_type = utils.random_unicode()
+ cooldown = utils.random_unicode()
+ change = utils.random_unicode()
+ is_percent = utils.random_unicode()
+ desired_capacity = utils.random_unicode()
+ args = utils.random_unicode()
+ mgr.replace_policy = Mock()
+ clt.replace_policy(sg, pol, name, policy_type, cooldown, change=change,
+ is_percent=is_percent, desired_capacity=desired_capacity,
+ args=args)
+ mgr.replace_policy.assert_called_once_with(sg, pol, name, policy_type,
+ cooldown, change=change, is_percent=is_percent,
+ desired_capacity=desired_capacity, args=args)
+
def test_clt_update_policy(self):
clt = fakes.FakeAutoScaleClient()
mgr = clt._manager
sg = self.scaling_group
- pol = utils.random_name()
- name = utils.random_name()
- policy_type = utils.random_name()
- cooldown = utils.random_name()
- change = utils.random_name()
- is_percent = utils.random_name()
- desired_capacity = utils.random_name()
- args = utils.random_name()
+ pol = utils.random_unicode()
+ name = utils.random_unicode()
+ policy_type = utils.random_unicode()
+ cooldown = utils.random_unicode()
+ change = utils.random_unicode()
+ is_percent = utils.random_unicode()
+ desired_capacity = utils.random_unicode()
+ args = utils.random_unicode()
mgr.update_policy = Mock()
clt.update_policy(sg, pol, name=name, policy_type=policy_type,
cooldown=cooldown, change=change, is_percent=is_percent,
@@ -1193,7 +1434,7 @@ def test_clt_execute_policy(self):
clt = fakes.FakeAutoScaleClient()
mgr = clt._manager
sg = self.scaling_group
- pol = utils.random_name()
+ pol = utils.random_unicode()
mgr.execute_policy = Mock()
clt.execute_policy(sg, pol)
mgr.execute_policy.assert_called_once_with(scaling_group=sg, policy=pol)
@@ -1202,7 +1443,7 @@ def test_clt_delete_policy(self):
clt = fakes.FakeAutoScaleClient()
mgr = clt._manager
sg = self.scaling_group
- pol = utils.random_name()
+ pol = utils.random_unicode()
mgr.delete_policy = Mock()
clt.delete_policy(sg, pol)
mgr.delete_policy.assert_called_once_with(scaling_group=sg, policy=pol)
@@ -1211,18 +1452,19 @@ def test_clt_add_webhook(self):
clt = fakes.FakeAutoScaleClient()
mgr = clt._manager
sg = self.scaling_group
- pol = utils.random_name()
- name = utils.random_name()
- metadata = utils.random_name()
+ pol = utils.random_unicode()
+ name = utils.random_unicode()
+ metadata = utils.random_unicode()
mgr.add_webhook = Mock()
clt.add_webhook(sg, pol, name, metadata=metadata)
- mgr.add_webhook.assert_called_once_with(sg, pol, name, metadata=metadata)
+ mgr.add_webhook.assert_called_once_with(sg, pol, name,
+ metadata=metadata)
def test_clt_list_webhooks(self):
clt = fakes.FakeAutoScaleClient()
mgr = clt._manager
sg = self.scaling_group
- pol = utils.random_name()
+ pol = utils.random_unicode()
mgr.list_webhooks = Mock()
clt.list_webhooks(sg, pol)
mgr.list_webhooks.assert_called_once_with(sg, pol)
@@ -1231,20 +1473,33 @@ def test_clt_get_webhook(self):
clt = fakes.FakeAutoScaleClient()
mgr = clt._manager
sg = self.scaling_group
- pol = utils.random_name()
- hook = utils.random_name()
+ pol = utils.random_unicode()
+ hook = utils.random_unicode()
mgr.get_webhook = Mock()
clt.get_webhook(sg, pol, hook)
mgr.get_webhook.assert_called_once_with(sg, pol, hook)
+ def test_clt_replace_webhook(self):
+ clt = fakes.FakeAutoScaleClient()
+ mgr = clt._manager
+ sg = self.scaling_group
+ pol = utils.random_unicode()
+ hook = utils.random_unicode()
+ name = utils.random_unicode()
+ metadata = utils.random_unicode()
+ mgr.replace_webhook = Mock()
+ clt.replace_webhook(sg, pol, hook, name, metadata=metadata)
+ mgr.replace_webhook.assert_called_once_with(sg, pol, hook, name,
+ metadata=metadata)
+
def test_clt_update_webhook(self):
clt = fakes.FakeAutoScaleClient()
mgr = clt._manager
sg = self.scaling_group
- pol = utils.random_name()
- hook = utils.random_name()
- name = utils.random_name()
- metadata = utils.random_name()
+ pol = utils.random_unicode()
+ hook = utils.random_unicode()
+ name = utils.random_unicode()
+ metadata = utils.random_unicode()
mgr.update_webhook = Mock()
clt.update_webhook(sg, pol, hook, name=name, metadata=metadata)
mgr.update_webhook.assert_called_once_with(scaling_group=sg, policy=pol,
@@ -1254,9 +1509,9 @@ def test_clt_update_webhook_metadata(self):
clt = fakes.FakeAutoScaleClient()
mgr = clt._manager
sg = self.scaling_group
- pol = utils.random_name()
- hook = utils.random_name()
- metadata = utils.random_name()
+ pol = utils.random_unicode()
+ hook = utils.random_unicode()
+ metadata = utils.random_unicode()
mgr.update_webhook_metadata = Mock()
clt.update_webhook_metadata(sg, pol, hook, metadata)
mgr.update_webhook_metadata.assert_called_once_with(sg, pol, hook,
@@ -1266,8 +1521,8 @@ def test_clt_delete_webhook(self):
clt = fakes.FakeAutoScaleClient()
mgr = clt._manager
sg = self.scaling_group
- pol = utils.random_name()
- hook = utils.random_name()
+ pol = utils.random_unicode()
+ hook = utils.random_unicode()
mgr.delete_webhook = Mock()
clt.delete_webhook(sg, pol, hook)
mgr.delete_webhook.assert_called_once_with(sg, pol, hook)
diff --git a/tests/unit/test_cf_client.py b/tests/unit/test_cf_client.py
index 8b090411..9ed02246 100644
--- a/tests/unit/test_cf_client.py
+++ b/tests/unit/test_cf_client.py
@@ -5,6 +5,7 @@
import os
import random
import unittest
+import uuid
from mock import ANY, patch
from mock import MagicMock as Mock
@@ -45,8 +46,8 @@ def setUp(self):
pyrax.connect_to_cloudfiles(region="FAKE")
self.client = pyrax.cloudfiles
self.client._container_cache = {}
- self.cont_name = utils.random_name(ascii_only=True)
- self.obj_name = utils.random_name(ascii_only=True)
+ self.cont_name = utils.random_ascii()
+ self.obj_name = utils.random_ascii()
self.fake_object = FakeStorageObject(self.client, self.cont_name,
self.obj_name)
@@ -81,7 +82,7 @@ def test_set_account_metadata(self):
def test_set_account_metadata_prefix(self):
client = self.client
client.connection.post_account = Mock()
- prefix = utils.random_name()
+ prefix = utils.random_unicode()
client.set_account_metadata({"newkey": "newval"}, prefix=prefix)
client.connection.post_account.assert_called_with(
{"%snewkey" % prefix: "newval"}, response_dict=None)
@@ -114,12 +115,29 @@ def test_set_temp_url_key(self):
client = self.client
sav = client.set_account_metadata
client.set_account_metadata = Mock()
- key = utils.random_name()
+ key = utils.random_unicode()
exp = {"Temp-Url-Key": key}
client.set_temp_url_key(key)
client.set_account_metadata.assert_called_once_with(exp)
client.set_account_metadata = sav
+ def test_set_temp_url_key_generated(self):
+ client = self.client
+ sav = client.set_account_metadata
+ client.set_account_metadata = Mock()
+ key = utils.random_ascii()
+ sav_uu = uuid.uuid4
+
+ class FakeUUID(object):
+ hex = key
+
+ uuid.uuid4 = Mock(return_value=FakeUUID())
+ exp = {"Temp-Url-Key": key}
+ client.set_temp_url_key()
+ client.set_account_metadata.assert_called_once_with(exp)
+ client.set_account_metadata = sav
+ uuid.uuid4 = sav_uu
+
def test_get_temp_url_key(self):
client = self.client
client.connection.head_account = Mock()
@@ -127,17 +145,24 @@ def test_get_temp_url_key(self):
"x-account-meta-foo": "yes", "some-other-key": "no"}
meta = client.get_temp_url_key()
self.assertIsNone(meta)
- nm = utils.random_name()
+ nm = utils.random_unicode()
client.connection.head_account.return_value = {
"x-account-meta-temp-url-key": nm, "some-other-key": "no"}
meta = client.get_temp_url_key()
self.assertEqual(meta, nm)
+ def test_get_temp_url_key_cached(self):
+ client = self.client
+ key = utils.random_unicode()
+ client._cached_temp_url_key = key
+ meta = client.get_temp_url_key()
+ self.assertEqual(meta, key)
+
def test_get_temp_url(self):
client = self.client
- nm = utils.random_name(ascii_only=True)
- cname = utils.random_name(ascii_only=True)
- oname = utils.random_name(ascii_only=True)
+ nm = utils.random_ascii()
+ cname = utils.random_ascii()
+ oname = utils.random_ascii()
client.connection.head_account = Mock()
client.connection.head_account.return_value = {
"x-account-meta-temp-url-key": nm, "some-other-key": "no"}
@@ -147,11 +172,19 @@ def test_get_temp_url(self):
self.assert_("?temp_url_sig=" in ret)
self.assert_("&temp_url_expires=" in ret)
+ def test_get_temp_url_bad_method(self):
+ client = self.client
+ nm = utils.random_ascii()
+ cname = utils.random_ascii()
+ oname = utils.random_ascii()
+ self.assertRaises(exc.InvalidTemporaryURLMethod, client.get_temp_url,
+ cname, oname, seconds=120, method="INVALID")
+
def test_get_temp_url_windows(self):
client = self.client
- nm = "%s\\" % utils.random_name(ascii_only=True)
- cname = "\\%s\\" % utils.random_name(ascii_only=True)
- oname = utils.random_name(ascii_only=True)
+ nm = "%s\\" % utils.random_ascii()
+ cname = "\\%s\\" % utils.random_ascii()
+ oname = utils.random_ascii()
client.connection.head_account = Mock()
client.connection.head_account.return_value = {
"x-account-meta-temp-url-key": nm, "some-other-key": "no"}
@@ -160,9 +193,9 @@ def test_get_temp_url_windows(self):
def test_get_temp_url_unicode(self):
client = self.client
- nm = utils.random_name(ascii_only=False)
- cname = utils.random_name(ascii_only=True)
- oname = utils.random_name(ascii_only=True)
+ nm = utils.random_unicode()
+ cname = utils.random_ascii()
+ oname = utils.random_ascii()
client.connection.head_account = Mock()
client.connection.head_account.return_value = {
"x-account-meta-temp-url-key": nm, "some-other-key": "no"}
@@ -172,8 +205,8 @@ def test_get_temp_url_unicode(self):
def test_get_temp_url_missing_key(self):
client = self.client
- cname = utils.random_name(ascii_only=True)
- oname = utils.random_name(ascii_only=True)
+ cname = utils.random_ascii()
+ oname = utils.random_ascii()
client.connection.head_account = Mock()
client.connection.head_account.return_value = {"some-other-key": "no"}
self.assertRaises(exc.MissingTemporaryURLKey, client.get_temp_url,
@@ -207,7 +240,7 @@ def test_set_container_metadata(self):
def test_set_container_metadata_prefix(self):
client = self.client
client.connection.post_container = Mock()
- prefix = utils.random_name()
+ prefix = utils.random_unicode()
client.set_container_metadata(self.cont_name, {"newkey": "newval"},
prefix=prefix)
client.connection.post_container.assert_called_with(self.cont_name,
@@ -261,7 +294,7 @@ def test_set_object_metadata_prefix(self):
client.connection.head_object.return_value = {
"X-Object-Meta-Foo": "yes", "Some-Other-Key": "no"}
client.connection.post_object = Mock()
- prefix = utils.random_name()
+ prefix = utils.random_unicode()
client.set_object_metadata(self.cont_name, self.obj_name,
{"newkey": "newval", "emptykey": ""}, prefix=prefix)
client.connection.post_object.assert_called_with(self.cont_name,
@@ -376,7 +409,7 @@ def test_delete_container(self):
def test_remove_object_from_cache(self):
client = self.client
client.connection.head_container = Mock()
- nm = utils.random_name()
+ nm = utils.random_unicode()
client._container_cache = {nm: object()}
client.remove_container_from_cache(nm)
self.assertEqual(client._container_cache, {})
@@ -415,7 +448,7 @@ def test_bulk_delete(self):
sav = client.bulk_delete_interval
client.bulk_delete_interval = 0.001
container = self.cont_name
- obj_names = [utils.random_name()]
+ obj_names = [utils.random_unicode()]
ret = client.bulk_delete(container, obj_names, async=False)
self.assertTrue(isinstance(ret, dict))
client.bulk_delete_interval = sav
@@ -425,7 +458,7 @@ def test_bulk_delete(self):
def test_bulk_delete_async(self):
client = self.client
container = self.cont_name
- obj_names = [utils.random_name()]
+ obj_names = [utils.random_unicode()]
ret = client.bulk_delete(container, obj_names, async=True)
self.assertTrue(isinstance(ret, FakeBulkDeleter))
@@ -441,19 +474,23 @@ def test_get_object(self):
obj = client.get_object(self.cont_name, "o1")
self.assertEqual(obj.name, "o1")
+ def random_non_us_locale(self):
+ nonUS_locales = ("de_DE", "fr_FR", "hu_HU", "ja_JP", "nl_NL", "pl_PL",
+ "pt_BR", "pt_PT", "ro_RO", "ru_RU", "zh_CN", "zh_HK",
+ "zh_TW")
+ return random.choice(nonUS_locales)
+
@patch('pyrax.cf_wrapper.client.Container', new=FakeContainer)
def test_get_object_locale(self):
client = self.client
orig_locale = locale.getlocale(locale.LC_TIME)
- nonUS_locales = ("de_DE", "fr_FR", "hu_HU", "ja_JP", "nl_NL", "pl_PL",
- "pt_BR", "pt_PT", "ro_RO", "ru_RU", "zh_CN", "zh_HK", "zh_TW")
- new_locale = random.choice(nonUS_locales)
+ new_locale = self.random_non_us_locale()
try:
locale.setlocale(locale.LC_TIME, new_locale)
except Exception:
# Travis CI seems to have a problem with setting locale, so
# just skip this.
- return
+ self.skipTest("Could not set locale to %s" % new_locale)
client.connection.head_container = Mock()
client.connection.head_object = Mock(return_value=fake_attdict)
obj = client.get_object(self.cont_name, "fake")
@@ -614,7 +651,7 @@ def test_upload_folder_with_files(self):
client.upload_file = Mock()
client.connection.head_container = Mock()
client.connection.put_container = Mock()
- cont_name = utils.random_name()
+ cont_name = utils.random_unicode()
cont = client.create_container(cont_name)
gobj = client.get_object
client.get_object = Mock(return_value=self.fake_object)
@@ -656,7 +693,7 @@ def test_sync_folder_to_container(self):
clt.connection.put_container = Mock()
clt.connection.head_object = Mock(return_value=fake_attdict)
clt.get_container_objects = Mock(return_value=[])
- cont_name = utils.random_name(8)
+ cont_name = utils.random_unicode(8)
cont = clt.create_container(cont_name)
num_files = 7
with utils.SelfDeletingTempDirectory() as tmpdir:
@@ -677,7 +714,7 @@ def test_sync_folder_to_container_hidden(self):
clt.connection.put_container = Mock()
clt.connection.head_object = Mock(return_value=fake_attdict)
clt.get_container_objects = Mock(return_value=[])
- cont_name = utils.random_name(8)
+ cont_name = utils.random_unicode(8)
cont = clt.create_container(cont_name)
num_vis_files = 4
num_hid_files = 4
@@ -704,7 +741,7 @@ def test_sync_folder_to_container_nested(self):
clt.connection.put_container = Mock()
clt.connection.head_object = Mock(return_value=fake_attdict)
clt.get_container_objects = Mock(return_value=[])
- cont_name = utils.random_name(8)
+ cont_name = utils.random_unicode(8)
cont = clt.create_container(cont_name)
num_files = 3
num_nested_files = 6
@@ -803,12 +840,20 @@ def test_fetch_object(self):
client.connection.get_object.assert_called_with(ANY, ANY,
resp_chunk_size=ANY, response_dict=response)
+ def test_fetch_partial(self):
+ client = self.client
+ cont = utils.random_unicode()
+ obj = utils.random_unicode()
+ size = random.randint(1, 1000)
+ client.fetch_object = Mock()
+ client.fetch_partial(cont, obj, size)
+ client.fetch_object.assert_called_once_with(cont, obj, chunk_size=size)
+
@patch('pyrax.cf_wrapper.client.Container', new=FakeContainer)
def test_download_object(self):
client = self.client
sav_fetch = client.fetch_object
- client.fetch_object = Mock(return_value=utils.random_name(
- ascii_only=True))
+ client.fetch_object = Mock(return_value=utils.random_ascii())
sav_isdir = os.path.isdir
os.path.isdir = Mock(return_value=True)
nm = "one/two/three/four.txt"
@@ -884,6 +929,49 @@ def test_get_container_objects(self):
self.assertEqual(len(objs), 2)
self.assertEqual(objs[0].container.name, self.cont_name)
+ @patch('pyrax.cf_wrapper.client.Container', new=FakeContainer)
+ def test_get_container_objects_locale(self):
+ client = self.client
+
+ orig_locale = locale.getlocale(locale.LC_TIME)
+ try:
+ # Set locale to Great Britain because we know that DST was active
+ # there at 2013-10-21T01:02:03.123456 UTC
+ locale.setlocale(locale.LC_TIME, 'en_GB')
+ except Exception:
+ # Travis CI seems to have a problem with setting locale, so
+ # just skip this.
+ self.skipTest("Could not set locale to en_GB")
+
+ client.connection.head_container = Mock()
+ dct = [
+ {
+ "name": "o1",
+ "bytes": 111,
+ "last_modified": "2013-01-01T01:02:03.123456",
+ },
+ {
+ "name": "o2",
+ "bytes": 2222,
+ "last_modified": "2013-10-21T01:02:03.123456",
+ },
+ ]
+ client.connection.get_container = Mock(return_value=({}, dct))
+ objs = client.get_container_objects(self.cont_name)
+
+ self.assertEqual(len(objs), 2)
+ self.assertEqual(objs[0].container.name, self.cont_name)
+ self.assertEqual(objs[0].name, "o1")
+ self.assertEqual(objs[0].last_modified, "2013-01-01T01:02:03")
+ self.assertEqual(objs[1].name, "o2")
+ # Note that the hour here is 1 greater than the hour returned by the
+ # server in last_modified. This is because they are in different
+ # timezones - the server returns the time in UTC (no DST), but the
+ # local client timezone as of 2013-10-21 is BST (daylight saving).
+ self.assertEqual(objs[1].last_modified, "2013-10-21T02:02:03")
+
+ locale.setlocale(locale.LC_TIME, orig_locale)
+
@patch('pyrax.cf_wrapper.client.Container', new=FakeContainer)
def test_get_container_object_names(self):
client = self.client
@@ -1140,7 +1228,7 @@ def test_handle_swiftclient_exception_others(self):
def test_bulk_deleter(self):
client = self.client
container = self.cont_name
- object_names = utils.random_name()
+ object_names = utils.random_unicode()
bd = FakeBulkDeleter(client, container, object_names)
self.assertEqual(bd.client, client)
self.assertEqual(bd.container, container)
@@ -1149,29 +1237,29 @@ def test_bulk_deleter(self):
def test_bulk_deleter_run(self):
client = self.client
container = self.cont_name
- object_names = utils.random_name()
+ object_names = utils.random_unicode()
bd = FakeBulkDeleter(client, container, object_names)
class FakeConn(object):
pass
class FakePath(object):
- path = utils.random_name()
+ path = utils.random_unicode()
class FakeResp(object):
- status = utils.random_name()
- reason = utils.random_name()
+ status = utils.random_unicode()
+ reason = utils.random_unicode()
fpath = FakePath()
conn = FakeConn()
resp = FakeResp()
# Need to make these ASCII, since some characters will confuse the
# splitlines() call.
- num_del = utils.random_name(ascii_only=True)
- num_not_found = utils.random_name(ascii_only=True)
- status = utils.random_name(ascii_only=True)
- errors = utils.random_name(ascii_only=True)
- useless = utils.random_name(ascii_only=True)
+ num_del = utils.random_ascii()
+ num_not_found = utils.random_ascii()
+ status = utils.random_ascii()
+ errors = utils.random_ascii()
+ useless = utils.random_ascii()
fake_read = """Number Deleted: %s
Number Not Found: %s
Response Status: %s
diff --git a/tests/unit/test_cf_container.py b/tests/unit/test_cf_container.py
index 47b0060f..80505ddd 100644
--- a/tests/unit/test_cf_container.py
+++ b/tests/unit/test_cf_container.py
@@ -48,9 +48,9 @@ def setUp(self):
pyrax.connect_to_cloudfiles()
self.client = pyrax.cloudfiles
self.client.connection.head_container = Mock()
- self.cont_name = utils.random_name(ascii_only=True)
+ self.cont_name = utils.random_ascii()
self.container = self.client.get_container(self.cont_name)
- self.obj_name = utils.random_name(ascii_only=True)
+ self.obj_name = utils.random_ascii()
self.fake_object = FakeStorageObject(self.client, self.cont_name,
self.obj_name)
self.client._container_cache = {}
@@ -170,11 +170,11 @@ def test_list_subdirs(self):
cont = self.container
clt = cont.client
clt.list_container_subdirs = Mock()
- marker = utils.random_name()
- limit = utils.random_name()
- prefix = utils.random_name()
- delimiter = utils.random_name()
- full_listing = utils.random_name()
+ marker = utils.random_unicode()
+ limit = utils.random_unicode()
+ prefix = utils.random_unicode()
+ delimiter = utils.random_unicode()
+ full_listing = utils.random_unicode()
cont.list_subdirs(marker=marker, limit=limit, prefix=prefix,
delimiter=delimiter, full_listing=full_listing)
clt.list_container_subdirs.assert_called_once_with(cont.name,
@@ -246,7 +246,7 @@ def test_delete(self):
def test_fetch_object(self):
cont = self.container
cont.client.fetch_object = Mock()
- oname = utils.random_name(ascii_only=True)
+ oname = utils.random_ascii()
incmeta = random.choice((True, False))
csize = random.randint(0, 1000)
cont.fetch_object(oname, include_meta=incmeta, chunk_size=csize)
@@ -256,8 +256,8 @@ def test_fetch_object(self):
def test_download_object(self):
cont = self.container
cont.client.download_object = Mock()
- oname = utils.random_name(ascii_only=True)
- dname = utils.random_name(ascii_only=True)
+ oname = utils.random_ascii()
+ dname = utils.random_ascii()
stru = random.choice((True, False))
cont.download_object(oname, dname, structure=stru)
cont.client.download_object.assert_called_once_with(cont, oname,
@@ -282,7 +282,7 @@ def test_set_metadata(self):
def test_set_metadata_prefix(self):
cont = self.container
cont.client.connection.post_container = Mock()
- prefix = utils.random_name()
+ prefix = utils.random_unicode()
cont.set_metadata({"newkey": "newval"}, prefix=prefix)
cont.client.connection.post_container.assert_called_with(cont.name,
{"%snewkey" % prefix: "newval"}, response_dict=None)
@@ -290,7 +290,7 @@ def test_set_metadata_prefix(self):
def test_remove_metadata_key(self):
cont = self.container
cont.client.remove_container_metadata_key = Mock()
- key = utils.random_name()
+ key = utils.random_unicode()
cont.remove_metadata_key(key)
cont.client.remove_container_metadata_key.assert_called_once_with(cont,
key)
@@ -339,10 +339,10 @@ def test_make_private(self):
def test_copy_object(self):
cont = self.container
cont.client.copy_object = Mock()
- obj = utils.random_name()
- new_cont = utils.random_name()
- new_name = utils.random_name()
- extra_info = utils.random_name()
+ obj = utils.random_unicode()
+ new_cont = utils.random_unicode()
+ new_name = utils.random_unicode()
+ extra_info = utils.random_unicode()
cont.copy_object(obj, new_cont, new_obj_name=new_name,
extra_info=extra_info)
cont.client.copy_object.assert_called_once_with(cont, obj, new_cont,
@@ -351,10 +351,10 @@ def test_copy_object(self):
def test_move_object(self):
cont = self.container
cont.client.move_object = Mock()
- obj = utils.random_name()
- new_cont = utils.random_name()
- new_name = utils.random_name()
- extra_info = utils.random_name()
+ obj = utils.random_unicode()
+ new_cont = utils.random_unicode()
+ new_name = utils.random_unicode()
+ extra_info = utils.random_unicode()
cont.move_object(obj, new_cont, new_obj_name=new_name,
extra_info=extra_info)
cont.client.move_object.assert_called_once_with(cont, obj, new_cont,
@@ -369,9 +369,9 @@ def test_change_object_content_type(self):
def test_get_temp_url(self):
cont = self.container
- nm = utils.random_name(ascii_only=True)
+ nm = utils.random_ascii()
sav = cont.name
- cont.name = utils.random_name(ascii_only=True)
+ cont.name = utils.random_ascii()
cont.client.get_temp_url = Mock()
secs = random.randint(1, 1000)
cont.get_temp_url(nm, seconds=secs)
@@ -381,17 +381,16 @@ def test_get_temp_url(self):
def test_delete_object_in_seconds(self):
cont = self.container
- cont.client.connection.post_object = Mock()
+ cont.client.delete_object_in_seconds = Mock()
secs = random.randint(1, 1000)
- obj_name = utils.random_name(ascii_only=True)
- cont.delete_object_in_seconds(obj_name, seconds=secs)
- cont.client.connection.post_object.assert_called_with(cont.name,
- obj_name, headers={'X-Delete-After': secs},
- response_dict=None)
+ obj_name = utils.random_ascii()
+ cont.delete_object_in_seconds(obj_name, secs)
+ cont.client.delete_object_in_seconds.assert_called_once_with(cont,
+ obj_name, secs)
- nm = utils.random_name(ascii_only=True)
+ nm = utils.random_ascii()
sav = cont.name
- cont.name = utils.random_name(ascii_only=True)
+ cont.name = utils.random_ascii()
cont.client.get_temp_url = Mock()
secs = random.randint(1, 1000)
cont.get_temp_url(nm, seconds=secs)
diff --git a/tests/unit/test_cf_storage_object.py b/tests/unit/test_cf_storage_object.py
index b41c5b86..0f52d287 100644
--- a/tests/unit/test_cf_storage_object.py
+++ b/tests/unit/test_cf_storage_object.py
@@ -71,10 +71,10 @@ def tearDown(self):
pyrax.connect_to_cloud_blockstorage = octcbs
def test_init(self):
- cname = utils.random_name()
- oname = utils.random_name()
- ctype = utils.random_name()
- etag = utils.random_name()
+ cname = utils.random_unicode()
+ oname = utils.random_unicode()
+ ctype = utils.random_unicode()
+ etag = utils.random_unicode()
tbytes = random.randint(0, 1000)
lmod = random.randint(0, 1000)
cont = FakeContainer(self.client, cname, 0, 0)
@@ -131,7 +131,7 @@ def test_get(self):
def test_download(self):
obj = self.storage_object
obj.client.download_object = Mock()
- dname = utils.random_name()
+ dname = utils.random_unicode()
stru = random.choice((True, False))
obj.download(dname, structure=stru)
obj.client.download_object.assert_called_once_with(obj.container, obj,
@@ -177,7 +177,7 @@ def test_set_metadata_prefix(self):
obj = self.storage_object
obj.client.connection.post_object = Mock()
obj.client.connection.head_object = Mock(return_value={})
- prefix = utils.random_name()
+ prefix = utils.random_unicode()
obj.set_metadata({"newkey": "newval"}, prefix=prefix)
obj.client.connection.post_object.assert_called_with(obj.container.name,
obj.name, {"%snewkey" % prefix: "newval"},
@@ -195,9 +195,9 @@ def test_copy(self):
obj = self.storage_object
cont = obj.container
cont.copy_object = Mock()
- new_cont = utils.random_name()
- new_name = utils.random_name()
- extra_info = utils.random_name()
+ new_cont = utils.random_unicode()
+ new_name = utils.random_unicode()
+ extra_info = utils.random_unicode()
obj.copy(new_cont, new_obj_name=new_name, extra_info=extra_info)
cont.copy_object.assert_called_once_with(obj, new_cont,
new_obj_name=new_name, extra_info=extra_info)
@@ -206,9 +206,9 @@ def test_move(self):
obj = self.storage_object
cont = obj.container
cont.move_object = Mock()
- new_cont = utils.random_name()
- new_name = utils.random_name()
- extra_info = utils.random_name()
+ new_cont = utils.random_unicode()
+ new_name = utils.random_unicode()
+ extra_info = utils.random_unicode()
obj.move(new_cont, new_obj_name=new_name, extra_info=extra_info)
cont.move_object.assert_called_once_with(obj, new_cont,
new_obj_name=new_name, extra_info=extra_info)
@@ -234,8 +234,7 @@ def test_delete_in_seconds(self):
secs = random.randint(1, 1000)
obj.delete_in_seconds(seconds=secs)
obj.client.connection.post_object.assert_called_with(obj.container.name,
- obj.name, headers={'X-Delete-After': secs},
- response_dict=None)
+ obj.name, {'X-Delete-After': "%s" % secs}, response_dict=None)
def test_repr(self):
obj = self.storage_object
diff --git a/tests/unit/test_client.py b/tests/unit/test_client.py
index c0702ae4..5716610f 100644
--- a/tests/unit/test_client.py
+++ b/tests/unit/test_client.py
@@ -133,7 +133,7 @@ def test_reset_timings(self):
def test_get_limits(self):
clt = self.client
- data = utils.random_name()
+ data = utils.random_unicode()
clt.method_get = Mock(return_value=(None, data))
ret = clt.get_limits()
self.assertEqual(ret, data)
diff --git a/tests/unit/test_cloud_blockstorage.py b/tests/unit/test_cloud_blockstorage.py
index 422c7d5e..9b002afa 100644
--- a/tests/unit/test_cloud_blockstorage.py
+++ b/tests/unit/test_cloud_blockstorage.py
@@ -103,7 +103,7 @@ def test_create_volume(self):
def test_attach_to_instance(self):
vol = self.volume
inst = fakes.FakeServer()
- mp = utils.random_name()
+ mp = utils.random_unicode()
vol._nova_volumes.create_server_volume = Mock(return_value=vol)
vol.attach_to_instance(inst, mp)
vol._nova_volumes.create_server_volume.assert_called_once_with(inst.id,
@@ -112,7 +112,7 @@ def test_attach_to_instance(self):
def test_attach_to_instance_fail(self):
vol = self.volume
inst = fakes.FakeServer()
- mp = utils.random_name()
+ mp = utils.random_unicode()
vol._nova_volumes.create_server_volume = Mock(
side_effect=Exception("test"))
self.assertRaises(exc.VolumeAttachmentFailed, vol.attach_to_instance,
@@ -120,8 +120,8 @@ def test_attach_to_instance_fail(self):
def test_detach_from_instance(self):
vol = self.volume
- srv_id = utils.random_name()
- att_id = utils.random_name()
+ srv_id = utils.random_unicode()
+ att_id = utils.random_unicode()
vol.attachments = [{"server_id": srv_id, "id": att_id}]
vol._nova_volumes.delete_server_volume = Mock()
vol.detach()
@@ -130,8 +130,8 @@ def test_detach_from_instance(self):
def test_detach_from_instance_fail(self):
vol = self.volume
- srv_id = utils.random_name()
- att_id = utils.random_name()
+ srv_id = utils.random_unicode()
+ att_id = utils.random_unicode()
vol.attachments = [{"server_id": srv_id, "id": att_id}]
vol._nova_volumes.delete_server_volume = Mock(
side_effect=Exception("test"))
@@ -140,8 +140,8 @@ def test_detach_from_instance_fail(self):
def test_create_snapshot(self):
vol = self.volume
vol.manager.create_snapshot = Mock()
- name = utils.random_name()
- desc = utils.random_name()
+ name = utils.random_unicode()
+ desc = utils.random_unicode()
vol.create_snapshot(name=name, description=desc, force=False)
vol.manager.create_snapshot.assert_called_once_with(volume=vol,
name=name, description=desc, force=False)
@@ -151,8 +151,8 @@ def test_create_snapshot_bad_request(self):
sav = BaseManager.create
BaseManager.create = Mock(side_effect=exc.BadRequest(
"Invalid volume: must be available"))
- name = utils.random_name()
- desc = utils.random_name()
+ name = utils.random_unicode()
+ desc = utils.random_unicode()
self.assertRaises(exc.VolumeNotAvailable, vol.create_snapshot,
name=name, description=desc, force=False)
BaseManager.create = sav
@@ -161,8 +161,8 @@ def test_create_snapshot_bad_request_other(self):
vol = self.volume
vol.manager.api.create_snapshot = Mock(side_effect=exc.BadRequest(
"Some other message"))
- name = utils.random_name()
- desc = utils.random_name()
+ name = utils.random_unicode()
+ desc = utils.random_unicode()
self.assertRaises(exc.BadRequest, vol.create_snapshot, name=name,
description=desc, force=False)
@@ -195,12 +195,12 @@ def test_create_volume_bad_size(self):
def test_create_body_volume(self):
mgr = self.client._manager
size = random.randint(MIN_SIZE, MAX_SIZE)
- name = utils.random_name()
- snapshot_id = utils.random_name()
+ name = utils.random_unicode()
+ snapshot_id = utils.random_unicode()
display_description = None
volume_type = None
metadata = None
- availability_zone = utils.random_name()
+ availability_zone = utils.random_unicode()
fake_body = {"volume": {
"size": size,
"snapshot_id": snapshot_id,
@@ -218,12 +218,12 @@ def test_create_body_volume(self):
def test_create_body_volume_defaults(self):
mgr = self.client._manager
size = random.randint(MIN_SIZE, MAX_SIZE)
- name = utils.random_name()
- snapshot_id = utils.random_name()
- display_description = utils.random_name()
- volume_type = utils.random_name()
+ name = utils.random_unicode()
+ snapshot_id = utils.random_unicode()
+ display_description = utils.random_unicode()
+ volume_type = utils.random_unicode()
metadata = {}
- availability_zone = utils.random_name()
+ availability_zone = utils.random_unicode()
fake_body = {"volume": {
"size": size,
"snapshot_id": snapshot_id,
@@ -241,8 +241,8 @@ def test_create_body_volume_defaults(self):
def test_create_body_snapshot(self):
mgr = self.client._snapshot_manager
vol = self.volume
- name = utils.random_name()
- display_description = utils.random_name()
+ name = utils.random_unicode()
+ display_description = utils.random_unicode()
force = True
fake_body = {"snapshot": {
"display_name": name,
@@ -258,7 +258,7 @@ def test_client_attach_to_instance(self):
clt = self.client
vol = self.volume
inst = fakes.FakeServer()
- mp = utils.random_name()
+ mp = utils.random_unicode()
vol.attach_to_instance = Mock()
clt.attach_to_instance(vol, inst, mp)
vol.attach_to_instance.assert_called_once_with(inst, mp)
@@ -297,8 +297,8 @@ def test_client_delete_volume_force(self):
def test_client_create_snapshot(self):
clt = self.client
vol = self.volume
- name = utils.random_name()
- description = utils.random_name()
+ name = utils.random_unicode()
+ description = utils.random_unicode()
clt._snapshot_manager.create = Mock()
clt.create_snapshot(vol, name=name, description=description,
force=True)
@@ -308,8 +308,8 @@ def test_client_create_snapshot(self):
def test_client_create_snapshot_not_available(self):
clt = self.client
vol = self.volume
- name = utils.random_name()
- description = utils.random_name()
+ name = utils.random_unicode()
+ description = utils.random_unicode()
cli_exc = exc.ClientException(409, "Request conflicts with in-progress")
sav = BaseManager.create
BaseManager.create = Mock(side_effect=cli_exc)
@@ -344,37 +344,37 @@ def test_snapshot_delete_retry(self):
def test_volume_name_property(self):
vol = self.volume
- nm = utils.random_name()
+ nm = utils.random_unicode()
vol.display_name = nm
self.assertEqual(vol.name, vol.display_name)
- nm = utils.random_name()
+ nm = utils.random_unicode()
vol.name = nm
self.assertEqual(vol.name, vol.display_name)
def test_volume_description_property(self):
vol = self.volume
- nm = utils.random_name()
+ nm = utils.random_unicode()
vol.display_description = nm
self.assertEqual(vol.description, vol.display_description)
- nm = utils.random_name()
+ nm = utils.random_unicode()
vol.description = nm
self.assertEqual(vol.description, vol.display_description)
def test_snapshot_name_property(self):
snap = self.snapshot
- nm = utils.random_name()
+ nm = utils.random_unicode()
snap.display_name = nm
self.assertEqual(snap.name, snap.display_name)
- nm = utils.random_name()
+ nm = utils.random_unicode()
snap.name = nm
self.assertEqual(snap.name, snap.display_name)
def test_snapshot_description_property(self):
snap = self.snapshot
- nm = utils.random_name()
+ nm = utils.random_unicode()
snap.display_description = nm
self.assertEqual(snap.description, snap.display_description)
- nm = utils.random_name()
+ nm = utils.random_unicode()
snap.description = nm
self.assertEqual(snap.description, snap.display_description)
diff --git a/tests/unit/test_cloud_databases.py b/tests/unit/test_cloud_databases.py
index 8e4baaad..2f945b6e 100644
--- a/tests/unit/test_cloud_databases.py
+++ b/tests/unit/test_cloud_databases.py
@@ -16,7 +16,7 @@
import pyrax.exceptions as exc
import pyrax.utils as utils
-from tests.unit import fakes
+import fakes
example_uri = "http://example.com"
@@ -59,16 +59,21 @@ def test_list_databases(self):
inst = self.instance
sav = inst._database_manager.list
inst._database_manager.list = Mock()
- inst.list_databases()
- inst._database_manager.list.assert_called_once_with()
+ limit = utils.random_unicode()
+ marker = utils.random_unicode()
+ inst.list_databases(limit=limit, marker=marker)
+ inst._database_manager.list.assert_called_once_with(limit=limit,
+ marker=marker)
inst._database_manager.list = sav
def test_list_users(self):
inst = self.instance
sav = inst._user_manager.list
inst._user_manager.list = Mock()
- inst.list_users()
- inst._user_manager.list.assert_called_once_with()
+ limit = utils.random_unicode()
+ marker = utils.random_unicode()
+ inst.list_users(limit=limit, marker=marker)
+ inst._user_manager.list.assert_called_once_with(limit=limit, marker=marker)
inst._user_manager.list = sav
def test_get_database(self):
@@ -135,7 +140,7 @@ def test_delete_user(self):
def test_enable_root_user(self):
inst = self.instance
- pw = utils.random_name()
+ pw = utils.random_unicode()
fake_body = {"user": {"password": pw}}
inst.manager.api.method_post = Mock(return_value=(None, fake_body))
ret = inst.enable_root_user()
@@ -160,7 +165,7 @@ def test_restart(self):
def test_resize(self):
inst = self.instance
- flavor_ref = utils.random_name()
+ flavor_ref = utils.random_unicode()
inst.manager.api._get_flavor_ref = Mock(return_value=flavor_ref)
fake_body = {"flavorRef": flavor_ref}
inst.manager.action = Mock()
@@ -227,10 +232,12 @@ def test_list_databases_for_instance(self):
clt = self.client
inst = self.instance
sav = inst.list_databases
+ limit = utils.random_unicode()
+ marker = utils.random_unicode()
inst.list_databases = Mock(return_value=["db"])
- ret = clt.list_databases(inst)
+ ret = clt.list_databases(inst, limit=limit, marker=marker)
self.assertEqual(ret, ["db"])
- inst.list_databases.assert_called_once_with()
+ inst.list_databases.assert_called_once_with(limit=limit, marker=marker)
inst.list_databases = sav
@patch("pyrax.manager.BaseManager", new=fakes.FakeManager)
@@ -239,20 +246,28 @@ def test_create_database_for_instance(self):
inst = self.instance
sav = inst.create_database
inst.create_database = Mock(return_value=["db"])
- nm = utils.random_name()
+ nm = utils.random_unicode()
ret = clt.create_database(inst, nm)
self.assertEqual(ret, ["db"])
inst.create_database.assert_called_once_with(nm,
character_set=None, collate=None)
inst.create_database = sav
+ def test_clt_get_database(self):
+ clt = self.client
+ inst = self.instance
+ inst.get_database = Mock()
+ nm = utils.random_unicode()
+ clt.get_database(inst, nm)
+ inst.get_database.assert_called_once_with(nm)
+
@patch("pyrax.manager.BaseManager", new=fakes.FakeManager)
def test_delete_database_for_instance(self):
clt = self.client
inst = self.instance
sav = inst.delete_database
inst.delete_database = Mock()
- nm = utils.random_name()
+ nm = utils.random_unicode()
clt.delete_database(inst, nm)
inst.delete_database.assert_called_once_with(nm)
inst.delete_database = sav
@@ -262,10 +277,12 @@ def test_list_users_for_instance(self):
clt = self.client
inst = self.instance
sav = inst.list_users
+ limit = utils.random_unicode()
+ marker = utils.random_unicode()
inst.list_users = Mock(return_value=["user"])
- ret = clt.list_users(inst)
+ ret = clt.list_users(inst, limit=limit, marker=marker)
self.assertEqual(ret, ["user"])
- inst.list_users.assert_called_once_with()
+ inst.list_users.assert_called_once_with(limit=limit, marker=marker)
inst.list_users = sav
def test_create_user_for_instance(self):
@@ -273,8 +290,8 @@ def test_create_user_for_instance(self):
inst = self.instance
sav = inst.create_user
inst.create_user = Mock()
- nm = utils.random_name()
- pw = utils.random_name()
+ nm = utils.random_unicode()
+ pw = utils.random_unicode()
ret = clt.create_user(inst, nm, pw, ["db"])
inst.create_user.assert_called_once_with(name=nm, password=pw,
database_names=["db"])
@@ -286,7 +303,7 @@ def test_delete_user_for_instance(self):
inst = self.instance
sav = inst.delete_user
inst.delete_user = Mock()
- nm = utils.random_name()
+ nm = utils.random_unicode()
clt.delete_user(inst, nm)
inst.delete_user.assert_called_once_with(nm)
inst.delete_user = sav
@@ -317,14 +334,14 @@ def test_get_user_by_client(self):
inst = self.instance
sav = inst.get_user
inst.get_user = Mock()
- fakeuser = utils.random_name()
+ fakeuser = utils.random_unicode()
clt.get_user(inst, fakeuser)
inst.get_user.assert_called_once_with(fakeuser)
inst.get_user = sav
def test_get_user(self):
inst = self.instance
- good_name = utils.random_name()
+ good_name = utils.random_unicode()
user = fakes.FakeDatabaseUser(manager=None, info={"name": good_name})
inst._user_manager.get = Mock(return_value=user)
returned = inst.get_user(good_name)
@@ -332,7 +349,7 @@ def test_get_user(self):
def test_get_user_fail(self):
inst = self.instance
- bad_name = utils.random_name()
+ bad_name = utils.random_unicode()
inst._user_manager.get = Mock(side_effect=exc.NoSuchDatabaseUser())
self.assertRaises(exc.NoSuchDatabaseUser, inst.get_user, bad_name)
@@ -340,8 +357,8 @@ def test_get_db_names(self):
inst = self.instance
mgr = inst._user_manager
mgr.instance = inst
- dbname1 = utils.random_name(ascii_only=True)
- dbname2 = utils.random_name(ascii_only=True)
+ dbname1 = utils.random_ascii()
+ dbname2 = utils.random_ascii()
sav = inst.list_databases
inst.list_databases = Mock(return_value=((dbname1, dbname2)))
resp = mgr._get_db_names(dbname1)
@@ -352,8 +369,8 @@ def test_get_db_names_not_strict(self):
inst = self.instance
mgr = inst._user_manager
mgr.instance = inst
- dbname1 = utils.random_name(ascii_only=True)
- dbname2 = utils.random_name(ascii_only=True)
+ dbname1 = utils.random_ascii()
+ dbname2 = utils.random_ascii()
sav = inst.list_databases
inst.list_databases = Mock(return_value=((dbname1, dbname2)))
resp = mgr._get_db_names("BAD", strict=False)
@@ -364,8 +381,8 @@ def test_get_db_names_fail(self):
inst = self.instance
mgr = inst._user_manager
mgr.instance = inst
- dbname1 = utils.random_name(ascii_only=True)
- dbname2 = utils.random_name(ascii_only=True)
+ dbname1 = utils.random_ascii()
+ dbname2 = utils.random_ascii()
sav = inst.list_databases
inst.list_databases = Mock(return_value=((dbname1, dbname2)))
self.assertRaises(exc.NoSuchDatabase, mgr._get_db_names, "BAD")
@@ -373,8 +390,8 @@ def test_get_db_names_fail(self):
def test_change_user_password(self):
inst = self.instance
- fakeuser = utils.random_name()
- newpass = utils.random_name()
+ fakeuser = utils.random_unicode()
+ newpass = utils.random_unicode()
resp = fakes.FakeResponse()
resp.status = 202
inst._user_manager.api.method_put = Mock(return_value=(resp, {}))
@@ -384,8 +401,8 @@ def test_change_user_password(self):
def test_list_user_access(self):
inst = self.instance
- dbname1 = utils.random_name(ascii_only=True)
- dbname2 = utils.random_name(ascii_only=True)
+ dbname1 = utils.random_ascii()
+ dbname2 = utils.random_ascii()
acc = {"databases": [{"name": dbname1}, {"name": dbname2}]}
inst._user_manager.api.method_get = Mock(return_value=(None, acc))
db_list = inst.list_user_access("fakeuser")
@@ -394,8 +411,8 @@ def test_list_user_access(self):
def test_grant_user_access(self):
inst = self.instance
- fakeuser = utils.random_name(ascii_only=True)
- dbname1 = utils.random_name(ascii_only=True)
+ fakeuser = utils.random_ascii()
+ dbname1 = utils.random_ascii()
inst._user_manager.api.method_put = Mock(return_value=(None, None))
inst.grant_user_access(fakeuser, dbname1, strict=False)
inst._user_manager.api.method_put.assert_called_once_with(
@@ -404,13 +421,58 @@ def test_grant_user_access(self):
def test_revoke_user_access(self):
inst = self.instance
- fakeuser = utils.random_name(ascii_only=True)
- dbname1 = utils.random_name(ascii_only=True)
+ fakeuser = utils.random_ascii()
+ dbname1 = utils.random_ascii()
inst._user_manager.api.method_delete = Mock(return_value=(None, None))
inst.revoke_user_access(fakeuser, dbname1, strict=False)
inst._user_manager.api.method_delete.assert_called_once_with(
"/None/%s/databases/%s" % (fakeuser, dbname1))
+ def test_clt_change_user_password(self):
+ clt = self.client
+ inst = self.instance
+ inst.change_user_password = Mock()
+ user = utils.random_unicode()
+ pw = utils.random_unicode()
+ clt.change_user_password(inst, user, pw)
+ inst.change_user_password.assert_called_once_with(user, pw)
+
+ def test_clt_list_user_access(self):
+ clt = self.client
+ inst = self.instance
+ inst.list_user_access = Mock()
+ user = utils.random_unicode()
+ clt.list_user_access(inst, user)
+ inst.list_user_access.assert_called_once_with(user)
+
+ def test_clt_grant_user_access(self):
+ clt = self.client
+ inst = self.instance
+ inst.grant_user_access = Mock()
+ user = utils.random_unicode()
+ db_names = utils.random_unicode()
+ clt.grant_user_access(inst, user, db_names)
+ inst.grant_user_access.assert_called_once_with(user, db_names,
+ strict=True)
+
+ def test_clt_revoke_user_access(self):
+ clt = self.client
+ inst = self.instance
+ inst.revoke_user_access = Mock()
+ user = utils.random_unicode()
+ db_names = utils.random_unicode()
+ clt.revoke_user_access(inst, user, db_names)
+ inst.revoke_user_access.assert_called_once_with(user, db_names,
+ strict=True)
+
+ def test_clt_restart(self):
+ clt = self.client
+ inst = self.instance
+ inst.restart = Mock()
+ clt.restart(inst)
+ inst.restart.assert_called_once_with()
+
+
@patch("pyrax.manager.BaseManager", new=fakes.FakeManager)
def test_resize_for_instance(self):
clt = self.client
@@ -421,12 +483,18 @@ def test_resize_for_instance(self):
inst.resize.assert_called_once_with("flavor")
inst.resize = sav
+ def test_get_limits(self):
+ self.assertRaises(NotImplementedError, self.client.get_limits)
+
@patch("pyrax.manager.BaseManager", new=fakes.FakeManager)
def test_list_flavors(self):
clt = self.client
clt._flavor_manager.list = Mock()
- clt.list_flavors()
- clt._flavor_manager.list.assert_called_once_with()
+ limit = utils.random_unicode()
+ marker = utils.random_unicode()
+ clt.list_flavors(limit=limit, marker=marker)
+ clt._flavor_manager.list.assert_called_once_with(limit=limit,
+ marker=marker)
@patch("pyrax.manager.BaseManager", new=fakes.FakeManager)
def test_get_flavor(self):
@@ -523,7 +591,7 @@ def test_get_flavor_ref_not_found(self):
@patch("pyrax.manager.BaseManager", new=fakes.FakeManager)
def test_create_body_db(self):
mgr = self.instance._database_manager
- nm = utils.random_name()
+ nm = utils.random_unicode()
ret = mgr._create_body(nm, character_set="CS", collate="CO")
expected = {"databases": [
{"name": nm,
@@ -535,8 +603,8 @@ def test_create_body_db(self):
def test_create_body_user(self):
inst = self.instance
mgr = inst._user_manager
- nm = utils.random_name()
- pw = utils.random_name()
+ nm = utils.random_unicode()
+ pw = utils.random_unicode()
ret = mgr._create_body(nm, password=pw, database_names=[])
expected = {"users": [
{"name": nm,
@@ -547,7 +615,7 @@ def test_create_body_user(self):
@patch("pyrax.manager.BaseManager", new=fakes.FakeManager)
def test_create_body_flavor(self):
clt = self.client
- nm = utils.random_name()
+ nm = utils.random_unicode()
sav = clt._get_flavor_ref
clt._get_flavor_ref = Mock(return_value=example_uri)
ret = clt._manager._create_body(nm)
diff --git a/tests/unit/test_cloud_dns.py b/tests/unit/test_cloud_dns.py
index dce6ee13..43127507 100644
--- a/tests/unit/test_cloud_dns.py
+++ b/tests/unit/test_cloud_dns.py
@@ -23,7 +23,7 @@
import pyrax.exceptions as exc
import pyrax.utils as utils
-from tests.unit import fakes
+import fakes
example_uri = "http://example.com"
@@ -107,9 +107,9 @@ def test_reset_paging_body(self):
mgr._paging["domain"]["total_entries"] = 99
mgr._paging["domain"]["next_uri"] = "FAKE"
exp_entries = random.randint(100, 200)
- uri_string_next = utils.random_name()
+ uri_string_next = utils.random_unicode()
next_uri = "%s/domains/%s" % (example_uri, uri_string_next)
- uri_string_prev = utils.random_name()
+ uri_string_prev = utils.random_unicode()
prev_uri = "%s/domains/%s" % (example_uri, uri_string_prev)
body = {"totalEntries": exp_entries,
"links": [
@@ -135,7 +135,7 @@ def test_get_pagination_qs(self):
def test_manager_list(self):
clt = self.client
mgr = clt._manager
- fake_name = utils.random_name()
+ fake_name = utils.random_unicode()
ret_body = {"domains": [{"name": fake_name}]}
clt.method_get = Mock(return_value=({}, ret_body))
ret = clt.list()
@@ -144,9 +144,9 @@ def test_manager_list(self):
def test_manager_list_all(self):
clt = self.client
mgr = clt._manager
- fake_name = utils.random_name()
+ fake_name = utils.random_unicode()
ret_body = {"domains": [{"name": fake_name}]}
- uri_string_next = utils.random_name()
+ uri_string_next = utils.random_unicode()
next_uri = "%s/domains/%s" % (example_uri, uri_string_next)
mgr.count = 0
@@ -308,7 +308,7 @@ def test_manager_findall_default(self):
def test_create_body(self):
mgr = self.client._manager
- fake_name = utils.random_name()
+ fake_name = utils.random_unicode()
body = mgr._create_body(fake_name, "fake@fake.com")
self.assertEqual(body["domains"][0]["name"], fake_name)
@@ -430,7 +430,7 @@ def test_changes_since(self):
def test_export_domain(self):
clt = self.client
dom = self.domain
- export = utils.random_name()
+ export = utils.random_unicode()
clt._manager._async_call = Mock(return_value=({}, {"contents": export}))
ret = clt.export_domain(dom)
uri = "/domains/%s/export" % dom.id
@@ -441,7 +441,7 @@ def test_export_domain(self):
def test_import_domain(self):
clt = self.client
mgr = clt._manager
- data = utils.random_name()
+ data = utils.random_unicode()
mgr._async_call = Mock(return_value=({}, "fake"))
req_body = {"domains": [{
"contentType": "BIND_9",
@@ -461,7 +461,7 @@ def test_update_domain(self):
dom = self.domain
mgr = clt._manager
emailAddress = None
- comment = utils.random_name()
+ comment = utils.random_unicode()
ttl = 666
mgr._async_call = Mock(return_value=({}, "fake"))
uri = "/domains/%s" % utils.get_id(dom)
@@ -542,7 +542,7 @@ def test_search_records_params(self):
mgr = clt._manager
dom = self.domain
typ = "A"
- nm = utils.random_name()
+ nm = utils.random_unicode()
data = "0.0.0.0"
clt.method_get = Mock(return_value=({}, {}))
uri = "/domains/%s/records?type=%s&name=%s&data=%s" % (
@@ -555,7 +555,7 @@ def test_find_record(self):
mgr = clt._manager
dom = self.domain
typ = "A"
- nm = utils.random_name()
+ nm = utils.random_unicode()
data = "0.0.0.0"
ret_body = {"records": [{
"accountId": "728829",
@@ -576,7 +576,7 @@ def test_find_record_not_found(self):
mgr = clt._manager
dom = self.domain
typ = "A"
- nm = utils.random_name()
+ nm = utils.random_unicode()
data = "0.0.0.0"
ret_body = {"records": []}
clt.method_get = Mock(return_value=({}, ret_body))
@@ -590,7 +590,7 @@ def test_find_record_not_unique(self):
mgr = clt._manager
dom = self.domain
typ = "A"
- nm = utils.random_name()
+ nm = utils.random_unicode()
data = "0.0.0.0"
ret_body = {"records": [{
"accountId": "728829",
@@ -629,8 +629,8 @@ def test_get_record(self):
clt = self.client
mgr = clt._manager
dom = self.domain
- nm = utils.random_name()
- rec_id = utils.random_name()
+ nm = utils.random_unicode()
+ rec_id = utils.random_unicode()
rec_dict = {"id": rec_id, "name": nm}
mgr.api.method_get = Mock(return_value=(None, rec_dict))
ret = clt.get_record(dom, rec_id)
@@ -641,8 +641,8 @@ def test_update_record(self):
clt = self.client
mgr = clt._manager
dom = self.domain
- nm = utils.random_name()
- rec = fakes.FakeDNSRecord(mgr, {"id": utils.random_name(),
+ nm = utils.random_unicode()
+ rec = fakes.FakeDNSRecord(mgr, {"id": utils.random_unicode(),
"name": nm})
ttl = 9999
data = "0.0.0.0"
@@ -658,7 +658,7 @@ def test_delete_record(self):
clt = self.client
mgr = clt._manager
dom = self.domain
- rec = CloudDNSRecord(mgr, {"id": utils.random_name()})
+ rec = CloudDNSRecord(mgr, {"id": utils.random_unicode()})
mgr._async_call = Mock(return_value=({}, {}))
uri = "/domains/%s/records/%s" % (utils.get_id(dom), utils.get_id(rec))
clt.delete_record(dom, rec)
@@ -671,7 +671,7 @@ def test_resolve_device_type(self):
mgr = clt._manager
device = fakes.FakeDNSDevice()
typ = mgr._resolve_device_type(device)
- self.assertEqual(typ, "server")
+ self.assertEqual(typ, "loadbalancer")
device = fakes.FakeLoadBalancer()
typ = mgr._resolve_device_type(device)
self.assertEqual(typ, "loadbalancer")
@@ -683,19 +683,6 @@ def test_resolve_device_type_invalid(self):
self.assertRaises(exc.InvalidDeviceType, mgr._resolve_device_type,
device)
- def test_get_ptr_details_server(self):
- clt = self.client
- mgr = clt._manager
- dvc = fakes.FakeDNSDevice()
- dvc_type = "server"
- sav = pyrax._get_service_endpoint
- pyrax._get_service_endpoint = Mock(return_value=example_uri)
- expected_href = "%s/servers/%s" % (example_uri, dvc.id)
- href, svc_name = mgr._get_ptr_details(dvc, dvc_type)
- self.assertEqual(svc_name, "cloudServersOpenStack")
- self.assertEqual(href, expected_href)
- pyrax._get_service_endpoint = sav
-
def test_get_ptr_details_lb(self):
clt = self.client
mgr = clt._manager
@@ -757,7 +744,7 @@ def test_update_ptr_record(self):
dvc = fakes.FakeDNSDevice()
href = "%s/%s" % (example_uri, dvc.id)
svc_name = "cloudServersOpenStack"
- ptr_record = fakes.FakeDNSPTRRecord({"id": utils.random_name()})
+ ptr_record = fakes.FakeDNSPTRRecord({"id": utils.random_unicode()})
ttl = 9999
data = "0.0.0.0"
long_comment = "x" * 200
@@ -793,7 +780,7 @@ def test_delete_ptr_records(self):
def test_get_absolute_limits(self):
clt = self.client
- rand_limit = utils.random_name()
+ rand_limit = utils.random_unicode()
resp = {"limits": {"absolute": rand_limit}}
clt.method_get = Mock(return_value=({}, resp))
ret = clt.get_absolute_limits()
@@ -832,7 +819,7 @@ def test_iter_next(self):
def test_iter_items_first_fetch(self):
clt = self.client
mgr = clt._manager
- fake_name = utils.random_name()
+ fake_name = utils.random_unicode()
ret_body = {"domains": [{"name": fake_name}]}
clt.method_get = Mock(return_value=({}, ret_body))
res_iter = DomainResultsIterator(mgr)
@@ -843,7 +830,7 @@ def test_iter_items_first_fetch(self):
def test_iter_items_next_fetch(self):
clt = self.client
mgr = clt._manager
- fake_name = utils.random_name()
+ fake_name = utils.random_unicode()
ret_body = {"domains": [{"name": fake_name}]}
clt.method_get = Mock(return_value=({}, ret_body))
res_iter = DomainResultsIterator(mgr)
diff --git a/tests/unit/test_cloud_loadbalancers.py b/tests/unit/test_cloud_loadbalancers.py
index be4c59b3..b6b425a4 100644
--- a/tests/unit/test_cloud_loadbalancers.py
+++ b/tests/unit/test_cloud_loadbalancers.py
@@ -134,9 +134,9 @@ def test_client_update_lb(self):
lb = self.loadbalancer
mgr = clt._manager
mgr.update = Mock()
- name = utils.random_name()
- algorithm = utils.random_name()
- timeout = utils.random_name()
+ name = utils.random_unicode()
+ algorithm = utils.random_unicode()
+ timeout = utils.random_unicode()
clt.update(lb, name=name, algorithm=algorithm, timeout=timeout)
mgr.update.assert_called_once_with(lb, name=name, algorithm=algorithm,
protocol=None, halfClosed=None, port=None, timeout=timeout)
@@ -145,9 +145,9 @@ def test_lb_update_lb(self):
lb = self.loadbalancer
mgr = lb.manager
mgr.update = Mock()
- name = utils.random_name()
- algorithm = utils.random_name()
- timeout = utils.random_name()
+ name = utils.random_unicode()
+ algorithm = utils.random_unicode()
+ timeout = utils.random_unicode()
lb.update(name=name, algorithm=algorithm, timeout=timeout)
mgr.update.assert_called_once_with(lb, name=name, algorithm=algorithm,
protocol=None, halfClosed=None, port=None, timeout=timeout)
@@ -156,9 +156,9 @@ def test_mgr_update_lb(self):
lb = self.loadbalancer
mgr = lb.manager
mgr.api.method_put = Mock(return_value=(None, None))
- name = utils.random_name()
- algorithm = utils.random_name()
- timeout = utils.random_name()
+ name = utils.random_unicode()
+ algorithm = utils.random_unicode()
+ timeout = utils.random_unicode()
mgr.update(lb, name=name, algorithm=algorithm, timeout=timeout)
exp_uri = "/loadbalancers/%s" % lb.id
exp_body = {"loadBalancer": {"name": name, "algorithm": algorithm,
@@ -1164,7 +1164,7 @@ def test_client_create_body(self):
fake_connectionLogging = True
fake_connectionThrottle = True
fake_healthMonitor = object()
- fake_metadata = {"fake": utils.random_name()}
+ fake_metadata = {"fake": utils.random_unicode()}
fake_timeout = 42
fake_sessionPersistence = True
expected = {"loadBalancer": {
@@ -1211,7 +1211,7 @@ def test_bad_node_condition(self):
fake_connectionLogging = True
fake_connectionThrottle = True
fake_healthMonitor = object()
- fake_metadata = {"fake": utils.random_name()}
+ fake_metadata = {"fake": utils.random_unicode()}
fake_timeout = 42
fake_sessionPersistence = True
self.assertRaises(exc.InvalidNodeCondition, mgr._create_body,
@@ -1240,7 +1240,7 @@ def test_missing_lb_parameters(self):
fake_connectionLogging = True
fake_connectionThrottle = True
fake_healthMonitor = object()
- fake_metadata = {"fake": utils.random_name()}
+ fake_metadata = {"fake": utils.random_unicode()}
fake_timeout = 42
fake_sessionPersistence = True
self.assertRaises(exc.MissingLoadBalancerParameters, mgr._create_body,
@@ -1264,7 +1264,7 @@ def test_client_get_usage(self):
def test_client_allowed_domains(self):
clt = self.client
- fake_name = utils.random_name()
+ fake_name = utils.random_unicode()
fake_body = {"allowedDomains": [{"allowedDomain":
{"name": fake_name}}]}
clt.method_get = Mock(return_value=({}, fake_body))
@@ -1278,7 +1278,7 @@ def test_client_allowed_domains(self):
def test_client_algorithms(self):
clt = self.client
- fake_name = utils.random_name()
+ fake_name = utils.random_unicode()
fake_body = {"algorithms": [{"name": fake_name}]}
clt.method_get = Mock(return_value=({}, fake_body))
ret = clt.algorithms
@@ -1291,7 +1291,7 @@ def test_client_algorithms(self):
def test_client_protocols(self):
clt = self.client
- fake_name = utils.random_name()
+ fake_name = utils.random_unicode()
fake_body = {"protocols": [{"name": fake_name}]}
clt.method_get = Mock(return_value=({}, fake_body))
ret = clt.protocols
diff --git a/tests/unit/test_cloud_monitoring.py b/tests/unit/test_cloud_monitoring.py
index 32960fff..fb071412 100644
--- a/tests/unit/test_cloud_monitoring.py
+++ b/tests/unit/test_cloud_monitoring.py
@@ -37,7 +37,7 @@ def tearDown(self):
self.client = None
def test_params_to_dict(self):
- val = utils.random_name()
+ val = utils.random_unicode()
local = {"foo": val, "bar": None, "baz": True}
params = ("foo", "bar")
expected = {"foo": val}
@@ -47,8 +47,8 @@ def test_params_to_dict(self):
def test_entity_update(self):
ent = self.entity
ent.manager.update_entity = Mock()
- agent = utils.random_name()
- metadata = {"fake": utils.random_name()}
+ agent = utils.random_unicode()
+ metadata = {"fake": utils.random_unicode()}
ent.update(agent=agent, metadata=metadata)
ent.manager.update_entity.assert_called_once_with(ent, agent=agent,
metadata=metadata)
@@ -62,27 +62,27 @@ def test_entity_list_checks(self):
def test_entity_delete_check(self):
ent = self.entity
ent.manager.delete_check = Mock()
- check = utils.random_name()
+ check = utils.random_unicode()
ent.delete_check(check)
ent.manager.delete_check.assert_called_once_with(ent, check)
def test_entity_list_metrics(self):
ent = self.entity
ent.manager.list_metrics = Mock()
- check = utils.random_name()
+ check = utils.random_unicode()
ent.list_metrics(check)
ent.manager.list_metrics.assert_called_once_with(ent, check)
def test_entity_get_metric_data_points(self):
ent = self.entity
ent.manager.get_metric_data_points = Mock()
- check = utils.random_name()
- metric = utils.random_name()
- start = utils.random_name()
- end = utils.random_name()
- points = utils.random_name()
- resolution = utils.random_name()
- stats = utils.random_name()
+ check = utils.random_unicode()
+ metric = utils.random_unicode()
+ start = utils.random_unicode()
+ end = utils.random_unicode()
+ points = utils.random_unicode()
+ resolution = utils.random_unicode()
+ stats = utils.random_unicode()
ent.get_metric_data_points(check, metric, start, end, points=points,
resolution=resolution, stats=stats)
ent.manager.get_metric_data_points.assert_called_once_with(ent, check,
@@ -92,13 +92,13 @@ def test_entity_get_metric_data_points(self):
def test_entity_create_alarm(self):
ent = self.entity
ent.manager.create_alarm = Mock()
- check = utils.random_name()
- np = utils.random_name()
- criteria = utils.random_name()
+ check = utils.random_unicode()
+ np = utils.random_unicode()
+ criteria = utils.random_unicode()
disabled = random.choice((True, False))
- label = utils.random_name()
- name = utils.random_name()
- metadata = utils.random_name()
+ label = utils.random_unicode()
+ name = utils.random_unicode()
+ metadata = utils.random_unicode()
ent.create_alarm(check, np, criteria=criteria, disabled=disabled,
label=label, name=name, metadata=metadata)
ent.manager.create_alarm.assert_called_once_with(ent, check, np,
@@ -108,12 +108,12 @@ def test_entity_create_alarm(self):
def test_entity_update_alarm(self):
ent = self.entity
ent.manager.update_alarm = Mock()
- alarm = utils.random_name()
- criteria = utils.random_name()
+ alarm = utils.random_unicode()
+ criteria = utils.random_unicode()
disabled = random.choice((True, False))
- label = utils.random_name()
- name = utils.random_name()
- metadata = utils.random_name()
+ label = utils.random_unicode()
+ name = utils.random_unicode()
+ metadata = utils.random_unicode()
ent.update_alarm(alarm, criteria=criteria, disabled=disabled,
label=label, name=name, metadata=metadata)
ent.manager.update_alarm.assert_called_once_with(ent, alarm,
@@ -129,32 +129,32 @@ def test_entity_list_alarms(self):
def test_entity_get_alarm(self):
ent = self.entity
ent.manager.get_alarm = Mock()
- alarm = utils.random_name()
+ alarm = utils.random_unicode()
ent.get_alarm(alarm)
ent.manager.get_alarm.assert_called_once_with(ent, alarm)
def test_entity_delete_alarm(self):
ent = self.entity
ent.manager.delete_alarm = Mock()
- alarm = utils.random_name()
+ alarm = utils.random_unicode()
ent.delete_alarm(alarm)
ent.manager.delete_alarm.assert_called_once_with(ent, alarm)
def test_entity_name(self):
ent = self.entity
- ent.label = utils.random_name()
+ ent.label = utils.random_unicode()
self.assertEqual(ent.label, ent.name)
def test_notif_manager_create(self):
clt = self.client
mgr = clt._notification_manager
clt.method_post = Mock(
- return_value=({"x-object-id": utils.random_name()}, None))
+ return_value=({"x-object-id": utils.random_unicode()}, None))
mgr.get = Mock()
- ntyp = utils.random_name()
- label = utils.random_name()
- name = utils.random_name()
- details = utils.random_name()
+ ntyp = utils.random_unicode()
+ label = utils.random_unicode()
+ name = utils.random_unicode()
+ details = utils.random_unicode()
exp_uri = "/%s" % mgr.uri_base
exp_body = {"label": label or name, "type": ntyp, "details": details}
mgr.create(ntyp, label=label, name=name, details=details)
@@ -164,8 +164,8 @@ def test_notif_manager_test_notification_existing(self):
clt = self.client
mgr = clt._notification_manager
clt.method_post = Mock(return_value=(None, None))
- ntf = utils.random_name()
- details = utils.random_name()
+ ntf = utils.random_unicode()
+ details = utils.random_unicode()
exp_uri = "/%s/%s/test" % (mgr.uri_base, ntf)
exp_body = None
mgr.test_notification(notification=ntf, details=details)
@@ -175,8 +175,8 @@ def test_notif_manager_test_notification(self):
clt = self.client
mgr = clt._notification_manager
clt.method_post = Mock(return_value=(None, None))
- ntyp = utils.random_name()
- details = utils.random_name()
+ ntyp = utils.random_unicode()
+ details = utils.random_unicode()
exp_uri = "/test-notification"
exp_body = {"type": ntyp, "details": details}
mgr.test_notification(notification_type=ntyp, details=details)
@@ -187,8 +187,8 @@ def test_notif_manager_update_notification(self):
mgr = clt._notification_manager
clt.method_put = Mock(return_value=(None, None))
ntf = fakes.FakeCloudMonitorNotification()
- ntf.type = utils.random_name()
- details = utils.random_name()
+ ntf.type = utils.random_unicode()
+ details = utils.random_unicode()
exp_uri = "/%s/%s" % (mgr.uri_base, ntf.id)
exp_body = {"type": ntf.type, "details": details}
mgr.update_notification(ntf, details)
@@ -199,8 +199,8 @@ def test_notif_manager_update_notification_id(self):
mgr = clt._notification_manager
clt.method_put = Mock(return_value=(None, None))
ntf = fakes.FakeCloudMonitorNotification()
- ntf.type = utils.random_name()
- details = utils.random_name()
+ ntf.type = utils.random_unicode()
+ details = utils.random_unicode()
mgr.get = Mock(return_value=ntf)
exp_uri = "/%s/%s" % (mgr.uri_base, ntf.id)
exp_body = {"type": ntf.type, "details": details}
@@ -210,7 +210,7 @@ def test_notif_manager_update_notification_id(self):
def test_notif_manager_list_types(self):
clt = self.client
mgr = clt._notification_manager
- id_ = utils.random_name()
+ id_ = utils.random_unicode()
ret_body = {"values": [{"id": id_}]}
clt.method_get = Mock(return_value=(None, ret_body))
ret = mgr.list_types()
@@ -223,7 +223,7 @@ def test_notif_manager_list_types(self):
def test_notif_manager_get_type(self):
clt = self.client
mgr = clt._notification_manager
- id_ = utils.random_name()
+ id_ = utils.random_unicode()
ret_body = {"id": id_}
clt.method_get = Mock(return_value=(None, ret_body))
ret = mgr.get_type(id_)
@@ -236,15 +236,15 @@ def test_notif_plan_manager_create(self):
clt = self.client
mgr = clt._notification_plan_manager
clt.method_post = Mock(
- return_value=({"x-object-id": utils.random_name()}, None))
+ return_value=({"x-object-id": utils.random_unicode()}, None))
mgr.get = Mock()
- label = utils.random_name()
- name = utils.random_name()
- crit = utils.random_name()
+ label = utils.random_unicode()
+ name = utils.random_unicode()
+ crit = utils.random_unicode()
# Make the OK an object rather than a straight ID.
ok = fakes.FakeEntity()
- ok_id = ok.id = utils.random_name()
- warn = utils.random_name()
+ ok_id = ok.id = utils.random_unicode()
+ warn = utils.random_unicode()
exp_uri = "/%s" % mgr.uri_base
exp_body = {"label": label or name, "critical_state": [crit],
"ok_state": [ok.id], "warning_state": [warn]}
@@ -257,8 +257,8 @@ def test_entity_mgr_update_entity(self):
clt = self.client
mgr = clt._entity_manager
clt.method_put = Mock(return_value=(None, None))
- agent = utils.random_name()
- metadata = utils.random_name()
+ agent = utils.random_unicode()
+ metadata = utils.random_unicode()
exp_uri = "/%s/%s" % (mgr.uri_base, ent.id)
exp_body = {"agent_id": agent, "metadata": metadata}
mgr.update_entity(ent, agent, metadata)
@@ -268,7 +268,7 @@ def test_entity_mgr_list_checks(self):
ent = self.entity
clt = self.client
mgr = clt._entity_manager
- id_ = utils.random_name()
+ id_ = utils.random_unicode()
ret_body = {"values": [{"id": id_}]}
clt.method_get = Mock(return_value=(None, ret_body))
ret = mgr.list_checks(ent)
@@ -288,18 +288,18 @@ def test_entity_mgr_create_check_test_debug(self, cmc):
ent = self.entity
clt = self.client
mgr = clt._entity_manager
- label = utils.random_name()
- name = utils.random_name()
- check_type = utils.random_name()
- details = utils.random_name()
- disabled = utils.random_name()
- metadata = utils.random_name()
- monitoring_zones_poll = utils.random_name()
- timeout = utils.random_name()
- period = utils.random_name()
- target_alias = utils.random_name()
- target_hostname = utils.random_name()
- target_receiver = utils.random_name()
+ label = utils.random_unicode()
+ name = utils.random_unicode()
+ check_type = utils.random_unicode()
+ details = utils.random_unicode()
+ disabled = utils.random_unicode()
+ metadata = utils.random_unicode()
+ monitoring_zones_poll = utils.random_unicode()
+ timeout = utils.random_unicode()
+ period = utils.random_unicode()
+ target_alias = utils.random_unicode()
+ target_hostname = utils.random_unicode()
+ target_receiver = utils.random_unicode()
test_only = True
include_debug = True
fake_resp = {"x-object-id": {}, "status": "201"}
@@ -327,18 +327,18 @@ def test_entity_mgr_create_check_test_no_debug(self, cmc):
ent = self.entity
clt = self.client
mgr = clt._entity_manager
- label = utils.random_name()
- name = utils.random_name()
- check_type = utils.random_name()
- details = utils.random_name()
- disabled = utils.random_name()
- metadata = utils.random_name()
- monitoring_zones_poll = utils.random_name()
- timeout = utils.random_name()
- period = utils.random_name()
- target_alias = utils.random_name()
- target_hostname = utils.random_name()
- target_receiver = utils.random_name()
+ label = utils.random_unicode()
+ name = utils.random_unicode()
+ check_type = utils.random_unicode()
+ details = utils.random_unicode()
+ disabled = utils.random_unicode()
+ metadata = utils.random_unicode()
+ monitoring_zones_poll = utils.random_unicode()
+ timeout = utils.random_unicode()
+ period = utils.random_unicode()
+ target_alias = utils.random_unicode()
+ target_hostname = utils.random_unicode()
+ target_receiver = utils.random_unicode()
test_only = True
include_debug = False
fake_resp = {"x-object-id": {}, "status": "201"}
@@ -366,18 +366,18 @@ def test_entity_mgr_create_check(self, cmc):
ent = self.entity
clt = self.client
mgr = clt._entity_manager
- label = utils.random_name()
- name = utils.random_name()
- check_type = utils.random_name()
- details = utils.random_name()
- disabled = utils.random_name()
- metadata = utils.random_name()
- monitoring_zones_poll = utils.random_name()
- timeout = utils.random_name()
- period = utils.random_name()
- target_alias = utils.random_name()
- target_hostname = utils.random_name()
- target_receiver = utils.random_name()
+ label = utils.random_unicode()
+ name = utils.random_unicode()
+ check_type = utils.random_unicode()
+ details = utils.random_unicode()
+ disabled = utils.random_unicode()
+ metadata = utils.random_unicode()
+ monitoring_zones_poll = utils.random_unicode()
+ timeout = utils.random_unicode()
+ period = utils.random_unicode()
+ target_alias = utils.random_unicode()
+ target_hostname = utils.random_unicode()
+ target_receiver = utils.random_unicode()
test_only = False
include_debug = False
fake_resp = {"x-object-id": {}, "status": "201"}
@@ -479,18 +479,18 @@ def test_entity_mgr_update_check(self):
clt = self.client
mgr = clt._entity_manager
chk = fakes.FakeCloudMonitorCheck(entity=ent)
- label = utils.random_name()
- name = utils.random_name()
- check_type = utils.random_name()
- details = utils.random_name()
- disabled = utils.random_name()
- metadata = utils.random_name()
- monitoring_zones_poll = utils.random_name()
- timeout = utils.random_name()
- period = utils.random_name()
- target_alias = utils.random_name()
- target_hostname = utils.random_name()
- target_receiver = utils.random_name()
+ label = utils.random_unicode()
+ name = utils.random_unicode()
+ check_type = utils.random_unicode()
+ details = utils.random_unicode()
+ disabled = utils.random_unicode()
+ metadata = utils.random_unicode()
+ monitoring_zones_poll = utils.random_unicode()
+ timeout = utils.random_unicode()
+ period = utils.random_unicode()
+ target_alias = utils.random_unicode()
+ target_hostname = utils.random_unicode()
+ target_receiver = utils.random_unicode()
test_only = False
include_debug = False
clt.method_put = Mock(return_value=(None, None))
@@ -511,7 +511,7 @@ def test_entity_mgr_update_check_failed_validation(self):
ent = self.entity
clt = self.client
mgr = clt._entity_manager
- id_ = utils.random_name()
+ id_ = utils.random_unicode()
chk = fakes.FakeCloudMonitorCheck(info={"id": id_}, entity=ent)
err = exc.BadRequest(400)
err.message = "Validation error"
@@ -524,7 +524,7 @@ def test_entity_mgr_update_check_failed_validation_other(self):
ent = self.entity
clt = self.client
mgr = clt._entity_manager
- id_ = utils.random_name()
+ id_ = utils.random_unicode()
chk = fakes.FakeCloudMonitorCheck(info={"id": id_}, entity=ent)
err = exc.BadRequest(400)
err.message = "Another error"
@@ -537,7 +537,7 @@ def test_entity_mgr_get_check(self):
ent = self.entity
clt = self.client
mgr = clt._entity_manager
- id_ = utils.random_name()
+ id_ = utils.random_unicode()
ret_body = {"id": id_}
clt.method_get = Mock(return_value=(None, ret_body))
ret = mgr.get_check(ent, id_)
@@ -550,7 +550,7 @@ def test_entity_mgr_delete_check(self):
ent = self.entity
clt = self.client
mgr = clt._entity_manager
- id_ = utils.random_name()
+ id_ = utils.random_unicode()
clt.method_delete = Mock(return_value=(None, None))
ret = mgr.delete_check(ent, id_)
exp_uri = "/%s/%s/checks/%s" % (mgr.uri_base, ent.id, id_)
@@ -560,9 +560,9 @@ def test_entity_mgr_list_metrics(self):
ent = self.entity
clt = self.client
mgr = clt._entity_manager
- id_ = utils.random_name()
- met1 = utils.random_name()
- met2 = utils.random_name()
+ id_ = utils.random_unicode()
+ met1 = utils.random_unicode()
+ met2 = utils.random_unicode()
ret_body = {"values": [{"name": met1}, {"name": met2}]}
clt.method_get = Mock(return_value=(None, ret_body))
ret = mgr.list_metrics(ent, id_)
@@ -591,9 +591,9 @@ def test_entity_mgr_get_metric_data_points(self):
ent = self.entity
clt = self.client
mgr = clt._entity_manager
- chk_id = utils.random_name()
- metric = utils.random_name()
- points = utils.random_name()
+ chk_id = utils.random_unicode()
+ metric = utils.random_unicode()
+ points = utils.random_unicode()
resolution = "FULL"
end = datetime.datetime.now()
start = end - datetime.timedelta(days=7)
@@ -611,7 +611,7 @@ def test_entity_mgr_get_metric_data_points(self):
start_stamp, end_stamp, points, resolution, stats[0], stats[1])
exp_uri = "/%s/%s/checks/%s/metrics/%s/plot?%s" % (mgr.uri_base, ent.id,
chk_id, metric, exp_qp)
- vals = utils.random_name()
+ vals = utils.random_unicode()
ret_body = {"values": vals}
clt.method_get = Mock(return_value=(None, ret_body))
ret = mgr.get_metric_data_points(ent, chk_id, metric, start, end,
@@ -623,9 +623,9 @@ def test_entity_mgr_get_metric_data_points_invalid_request(self):
ent = self.entity
clt = self.client
mgr = clt._entity_manager
- chk_id = utils.random_name()
- metric = utils.random_name()
- points = utils.random_name()
+ chk_id = utils.random_unicode()
+ metric = utils.random_unicode()
+ points = utils.random_unicode()
resolution = "FULL"
end = datetime.datetime.now()
start = end - datetime.timedelta(days=7)
@@ -641,9 +641,9 @@ def test_entity_mgr_get_metric_data_points_invalid_request_other(self):
ent = self.entity
clt = self.client
mgr = clt._entity_manager
- chk_id = utils.random_name()
- metric = utils.random_name()
- points = utils.random_name()
+ chk_id = utils.random_unicode()
+ metric = utils.random_unicode()
+ points = utils.random_unicode()
resolution = "FULL"
end = datetime.datetime.now()
start = end - datetime.timedelta(days=7)
@@ -659,14 +659,14 @@ def test_entity_mgr_create_alarm(self):
ent = self.entity
clt = self.client
mgr = clt._entity_manager
- check = utils.random_name()
- np = utils.random_name()
- criteria = utils.random_name()
+ check = utils.random_unicode()
+ np = utils.random_unicode()
+ criteria = utils.random_unicode()
disabled = random.choice((True, False))
- label = utils.random_name()
- name = utils.random_name()
- metadata = utils.random_name()
- obj_id = utils.random_name()
+ label = utils.random_unicode()
+ name = utils.random_unicode()
+ metadata = utils.random_unicode()
+ obj_id = utils.random_unicode()
resp = ({"status": "201", "x-object-id": {}}, None)
clt.method_post = Mock(return_value=resp)
mgr.get_alarm = Mock()
@@ -683,12 +683,12 @@ def test_entity_mgr_update_alarm(self):
clt = self.client
mgr = clt._entity_manager
clt.method_put = Mock(return_value=(None, None))
- alarm = utils.random_name()
- criteria = utils.random_name()
+ alarm = utils.random_unicode()
+ criteria = utils.random_unicode()
disabled = random.choice((True, False))
- label = utils.random_name()
- name = utils.random_name()
- metadata = utils.random_name()
+ label = utils.random_unicode()
+ name = utils.random_unicode()
+ metadata = utils.random_unicode()
exp_uri = "/%s/%s/alarms/%s" % (mgr.uri_base, ent.id, alarm)
exp_body = {"criteria": criteria, "disabled": disabled, "label": label,
"metadata": metadata}
@@ -700,7 +700,7 @@ def test_entity_mgr_list_alarms(self):
ent = self.entity
clt = self.client
mgr = clt._entity_manager
- id_ = utils.random_name()
+ id_ = utils.random_unicode()
ret_body = {"values": [{"id": id_}]}
clt.method_get = Mock(return_value=(None, ret_body))
exp_uri = "/%s/%s/alarms" % (mgr.uri_base, ent.id)
@@ -714,7 +714,7 @@ def test_entity_mgr_get_alarm(self):
ent = self.entity
clt = self.client
mgr = clt._entity_manager
- id_ = utils.random_name()
+ id_ = utils.random_unicode()
ret_body = {"id": id_}
clt.method_get = Mock(return_value=(None, ret_body))
ret = mgr.get_alarm(ent, id_)
@@ -727,7 +727,7 @@ def test_entity_mgr_delete_alarm(self):
ent = self.entity
clt = self.client
mgr = clt._entity_manager
- id_ = utils.random_name()
+ id_ = utils.random_unicode()
clt.method_delete = Mock(return_value=(None, None))
ret = mgr.delete_alarm(ent, id_)
exp_uri = "/%s/%s/alarms/%s" % (mgr.uri_base, ent.id, id_)
@@ -738,7 +738,7 @@ def test_check(self):
clt = self.client
mgr = clt._entity_manager
mgr.get = Mock(return_value=ent)
- id_ = utils.random_name()
+ id_ = utils.random_unicode()
chk = CloudMonitorCheck(mgr, info={"id": id_}, entity="fake")
self.assertEqual(chk.manager, mgr)
self.assertEqual(chk.id, id_)
@@ -748,9 +748,9 @@ def test_check_name(self):
ent = self.entity
clt = self.client
mgr = clt._entity_manager
- id_ = utils.random_name()
+ id_ = utils.random_unicode()
chk = CloudMonitorCheck(mgr, info={"id": id_}, entity=ent)
- nm = utils.random_name()
+ nm = utils.random_unicode()
chk.label = nm
self.assertEqual(chk.name, nm)
@@ -758,7 +758,7 @@ def test_check_get_reload(self):
ent = self.entity
clt = self.client
mgr = clt._entity_manager
- id_ = utils.random_name()
+ id_ = utils.random_unicode()
chk = CloudMonitorCheck(mgr, info={"id": id_}, entity=ent)
info = chk._info
mgr.get_check = Mock(return_value=chk)
@@ -769,20 +769,20 @@ def test_check_update(self):
ent = self.entity
clt = self.client
mgr = clt._entity_manager
- id_ = utils.random_name()
+ id_ = utils.random_unicode()
chk = CloudMonitorCheck(mgr, info={"id": id_}, entity=ent)
mgr.update_check = Mock()
- label = utils.random_name()
- name = utils.random_name()
- check_type = utils.random_name()
- disabled = utils.random_name()
- metadata = utils.random_name()
- monitoring_zones_poll = utils.random_name()
- timeout = utils.random_name()
- period = utils.random_name()
- target_alias = utils.random_name()
- target_hostname = utils.random_name()
- target_receiver = utils.random_name()
+ label = utils.random_unicode()
+ name = utils.random_unicode()
+ check_type = utils.random_unicode()
+ disabled = utils.random_unicode()
+ metadata = utils.random_unicode()
+ monitoring_zones_poll = utils.random_unicode()
+ timeout = utils.random_unicode()
+ period = utils.random_unicode()
+ target_alias = utils.random_unicode()
+ target_hostname = utils.random_unicode()
+ target_receiver = utils.random_unicode()
chk.update(label=label, name=name, disabled=disabled,
metadata=metadata, monitoring_zones_poll=monitoring_zones_poll,
timeout=timeout, period=period, target_alias=target_alias,
@@ -799,7 +799,7 @@ def test_check_delete(self):
ent = self.entity
clt = self.client
mgr = clt._entity_manager
- id_ = utils.random_name()
+ id_ = utils.random_unicode()
chk = CloudMonitorCheck(mgr, info={"id": id_}, entity=ent)
mgr.delete_check = Mock()
chk.delete()
@@ -809,7 +809,7 @@ def test_check_list_metrics(self):
ent = self.entity
clt = self.client
mgr = clt._entity_manager
- id_ = utils.random_name()
+ id_ = utils.random_unicode()
chk = CloudMonitorCheck(mgr, info={"id": id_}, entity=ent)
mgr.list_metrics = Mock()
chk.list_metrics()
@@ -819,15 +819,15 @@ def test_check_get_metric_data_points(self):
ent = self.entity
clt = self.client
mgr = clt._entity_manager
- id_ = utils.random_name()
+ id_ = utils.random_unicode()
chk = CloudMonitorCheck(mgr, info={"id": id_}, entity=ent)
mgr.get_metric_data_points = Mock()
- metric = utils.random_name()
- start = utils.random_name()
- end = utils.random_name()
- points = utils.random_name()
- resolution = utils.random_name()
- stats = utils.random_name()
+ metric = utils.random_unicode()
+ start = utils.random_unicode()
+ end = utils.random_unicode()
+ points = utils.random_unicode()
+ resolution = utils.random_unicode()
+ stats = utils.random_unicode()
chk.get_metric_data_points(metric, start, end, points=points,
resolution=resolution, stats=stats)
mgr.get_metric_data_points.assert_called_once_with(ent, chk, metric,
@@ -837,15 +837,15 @@ def test_check_create_alarm(self):
ent = self.entity
clt = self.client
mgr = clt._entity_manager
- id_ = utils.random_name()
+ id_ = utils.random_unicode()
chk = CloudMonitorCheck(mgr, info={"id": id_}, entity=ent)
mgr.create_alarm = Mock()
- notification_plan = utils.random_name()
- criteria = utils.random_name()
- disabled = utils.random_name()
- label = utils.random_name()
- name = utils.random_name()
- metadata = utils.random_name()
+ notification_plan = utils.random_unicode()
+ criteria = utils.random_unicode()
+ disabled = utils.random_unicode()
+ label = utils.random_unicode()
+ name = utils.random_unicode()
+ metadata = utils.random_unicode()
chk.create_alarm(notification_plan, criteria=criteria,
disabled=disabled, label=label, name=name, metadata=metadata)
mgr.create_alarm.assert_called_once_with(ent, chk, notification_plan,
@@ -856,7 +856,7 @@ def test_checktype_field_names(self):
ent = self.entity
clt = self.client
mgr = clt._entity_manager
- id_ = utils.random_name()
+ id_ = utils.random_unicode()
flds = [{"optional": True, "name": "fake_opt",
"description": "Optional Field"},
{"optional": False, "name": "fake_req",
@@ -870,8 +870,8 @@ def test_zone_name(self):
ent = self.entity
clt = self.client
mgr = clt._entity_manager
- id_ = utils.random_name()
- nm = utils.random_name()
+ id_ = utils.random_unicode()
+ nm = utils.random_unicode()
cmz = CloudMonitorZone(mgr, info={"id": id_, "label": nm})
self.assertEqual(cmz.label, nm)
self.assertEqual(cmz.name, nm)
@@ -880,8 +880,8 @@ def test_notification_name(self):
ent = self.entity
clt = self.client
mgr = clt._entity_manager
- id_ = utils.random_name()
- nm = utils.random_name()
+ id_ = utils.random_unicode()
+ nm = utils.random_unicode()
cnot = CloudMonitorNotification(mgr, info={"id": id_, "label": nm})
self.assertEqual(cnot.label, nm)
self.assertEqual(cnot.name, nm)
@@ -890,9 +890,9 @@ def test_notification_update(self):
ent = self.entity
clt = self.client
mgr = clt._entity_manager
- id_ = utils.random_name()
- nm = utils.random_name()
- details = utils.random_name()
+ id_ = utils.random_unicode()
+ nm = utils.random_unicode()
+ details = utils.random_unicode()
cnot = CloudMonitorNotification(mgr, info={"id": id_, "label": nm})
mgr.update_notification = Mock()
cnot.update(details)
@@ -902,8 +902,8 @@ def test_notification_type_name(self):
ent = self.entity
clt = self.client
mgr = clt._entity_manager
- id_ = utils.random_name()
- nm = utils.random_name()
+ id_ = utils.random_unicode()
+ nm = utils.random_unicode()
cntyp = CloudMonitorNotificationType(mgr, info={"id": id_, "label": nm})
self.assertEqual(cntyp.label, nm)
self.assertEqual(cntyp.name, nm)
@@ -912,8 +912,8 @@ def test_notification_plan_name(self):
ent = self.entity
clt = self.client
mgr = clt._entity_manager
- id_ = utils.random_name()
- nm = utils.random_name()
+ id_ = utils.random_unicode()
+ nm = utils.random_unicode()
cpln = CloudMonitorNotificationPlan(mgr, info={"id": id_, "label": nm})
self.assertEqual(cpln.label, nm)
self.assertEqual(cpln.name, nm)
@@ -922,8 +922,8 @@ def test_alarm(self):
ent = self.entity
clt = self.client
mgr = clt._entity_manager
- id_ = utils.random_name()
- nm = utils.random_name()
+ id_ = utils.random_unicode()
+ nm = utils.random_unicode()
mgr.get = Mock(return_value=ent)
alm = CloudMonitorAlarm(mgr, info={"id": id_, "label": nm},
entity="fake")
@@ -933,8 +933,8 @@ def test_alarm_name(self):
ent = self.entity
clt = self.client
mgr = clt._entity_manager
- id_ = utils.random_name()
- nm = utils.random_name()
+ id_ = utils.random_unicode()
+ nm = utils.random_unicode()
alm = CloudMonitorAlarm(mgr, info={"id": id_, "label": nm},
entity=ent)
self.assertEqual(alm.label, nm)
@@ -944,15 +944,15 @@ def test_alarm_update(self):
ent = self.entity
clt = self.client
mgr = clt._entity_manager
- id_ = utils.random_name()
- nm = utils.random_name()
+ id_ = utils.random_unicode()
+ nm = utils.random_unicode()
alm = CloudMonitorAlarm(mgr, info={"id": id_, "label": nm},
entity=ent)
- criteria = utils.random_name()
- disabled = utils.random_name()
- label = utils.random_name()
- name = utils.random_name()
- metadata = utils.random_name()
+ criteria = utils.random_unicode()
+ disabled = utils.random_unicode()
+ label = utils.random_unicode()
+ name = utils.random_unicode()
+ metadata = utils.random_unicode()
ent.update_alarm = Mock()
alm.update(criteria=criteria, disabled=disabled, label=label,
name=name, metadata=metadata)
@@ -963,8 +963,8 @@ def test_alarm_get_reload(self):
ent = self.entity
clt = self.client
mgr = clt._entity_manager
- id_ = utils.random_name()
- nm = utils.random_name()
+ id_ = utils.random_unicode()
+ nm = utils.random_unicode()
alm = CloudMonitorAlarm(mgr, info={"id": id_, "label": nm},
entity=ent)
info = alm._info
@@ -974,8 +974,8 @@ def test_alarm_get_reload(self):
def test_clt_get_account(self):
clt = self.client
- rsp = utils.random_name()
- rb = utils.random_name()
+ rsp = utils.random_unicode()
+ rb = utils.random_unicode()
clt.method_get = Mock(return_value=((rsp, rb)))
ret = clt.get_account()
clt.method_get.assert_called_once_with("/account")
@@ -983,8 +983,8 @@ def test_clt_get_account(self):
def test_clt_get_limits(self):
clt = self.client
- rsp = utils.random_name()
- rb = utils.random_name()
+ rsp = utils.random_unicode()
+ rb = utils.random_unicode()
clt.method_get = Mock(return_value=((rsp, rb)))
ret = clt.get_limits()
clt.method_get.assert_called_once_with("/limits")
@@ -992,8 +992,8 @@ def test_clt_get_limits(self):
def test_clt_get_audits(self):
clt = self.client
- rsp = utils.random_name()
- rb = utils.random_name()
+ rsp = utils.random_unicode()
+ rb = utils.random_unicode()
clt.method_get = Mock(return_value=((rsp, {"values": rb})))
ret = clt.get_audits()
clt.method_get.assert_called_once_with("/audits")
@@ -1001,7 +1001,7 @@ def test_clt_get_audits(self):
def test_clt_list_entities(self):
clt = self.client
- ents = utils.random_name()
+ ents = utils.random_unicode()
clt._entity_manager.list = Mock(return_value=ents)
ret = clt.list_entities()
clt._entity_manager.list.assert_called_once_with()
@@ -1019,15 +1019,15 @@ def test_clt_create_entity(self):
clt = self.client
ent = self.entity
mgr = clt._entity_manager
- obj_id = utils.random_name()
+ obj_id = utils.random_unicode()
resp = {"status": "201", "x-object-id": obj_id}
mgr.create = Mock(return_value=resp)
clt.get_entity = Mock(return_value=ent)
- label = utils.random_name()
- name = utils.random_name()
- agent = utils.random_name()
- ip_addresses = utils.random_name()
- metadata = utils.random_name()
+ label = utils.random_unicode()
+ name = utils.random_unicode()
+ agent = utils.random_unicode()
+ ip_addresses = utils.random_unicode()
+ metadata = utils.random_unicode()
ret = clt.create_entity(label=label, name=name, agent=agent,
ip_addresses=ip_addresses, metadata=metadata)
mgr.create.assert_called_once_with(label=label, name=name, agent=agent,
@@ -1040,10 +1040,10 @@ def test_clt_update_entity(self):
clt = self.client
ent = self.entity
mgr = clt._entity_manager
- obj_id = utils.random_name()
+ obj_id = utils.random_unicode()
mgr.update_entity = Mock()
- agent = utils.random_name()
- metadata = utils.random_name()
+ agent = utils.random_unicode()
+ metadata = utils.random_unicode()
clt.update_entity(ent, agent=agent, metadata=metadata)
mgr.update_entity.assert_called_once_with(ent, agent=agent,
metadata=metadata)
@@ -1060,7 +1060,7 @@ def test_clt_list_check_types(self):
clt = self.client
ent = self.entity
mgr = clt._check_type_manager
- cts = utils.random_name()
+ cts = utils.random_unicode()
mgr.list = Mock(return_value=cts)
ret = clt.list_check_types()
mgr.list.assert_called_once_with()
@@ -1070,7 +1070,7 @@ def test_clt_get_check_type(self):
clt = self.client
ent = self.entity
mgr = clt._check_type_manager
- ct = utils.random_name()
+ ct = utils.random_unicode()
mgr.get = Mock(return_value=ct)
ret = clt.get_check_type("fake")
mgr.get.assert_called_once_with("fake")
@@ -1080,7 +1080,7 @@ def test_clt_list_checks(self):
clt = self.client
ent = self.entity
mgr = clt._entity_manager
- chks = utils.random_name()
+ chks = utils.random_unicode()
mgr.list_checks = Mock(return_value=chks)
ret = clt.list_checks(ent)
mgr.list_checks.assert_called_once_with(ent)
@@ -1090,20 +1090,20 @@ def test_clt_create_check(self):
clt = self.client
ent = self.entity
mgr = clt._entity_manager
- label = utils.random_name()
- name = utils.random_name()
- check_type = utils.random_name()
- disabled = utils.random_name()
- metadata = utils.random_name()
- details = utils.random_name()
- monitoring_zones_poll = utils.random_name()
- timeout = utils.random_name()
- period = utils.random_name()
- target_alias = utils.random_name()
- target_hostname = utils.random_name()
- target_receiver = utils.random_name()
+ label = utils.random_unicode()
+ name = utils.random_unicode()
+ check_type = utils.random_unicode()
+ disabled = utils.random_unicode()
+ metadata = utils.random_unicode()
+ details = utils.random_unicode()
+ monitoring_zones_poll = utils.random_unicode()
+ timeout = utils.random_unicode()
+ period = utils.random_unicode()
+ target_alias = utils.random_unicode()
+ target_hostname = utils.random_unicode()
+ target_receiver = utils.random_unicode()
rand_bool = random.choice((True, False))
- answer = utils.random_name()
+ answer = utils.random_unicode()
mgr.create_check = Mock(return_value=answer)
ret = clt.create_check(ent, label=label, name=name,
check_type=check_type, disabled=disabled, metadata=metadata,
@@ -1125,8 +1125,8 @@ def test_clt_get_check(self):
clt = self.client
ent = self.entity
mgr = clt._entity_manager
- answer = utils.random_name()
- chk = utils.random_name()
+ answer = utils.random_unicode()
+ chk = utils.random_unicode()
mgr.get_check = Mock(return_value=answer)
ret = clt.get_check(ent, chk)
mgr.get_check.assert_called_once_with(ent, chk)
@@ -1136,7 +1136,7 @@ def test_clt_find_all_checks(self):
clt = self.client
ent = self.entity
mgr = clt._entity_manager
- answer = utils.random_name()
+ answer = utils.random_unicode()
mgr.find_all_checks = Mock(return_value=answer)
ret = clt.find_all_checks(ent, foo="fake", bar="fake")
mgr.find_all_checks.assert_called_once_with(ent, foo="fake", bar="fake")
@@ -1146,17 +1146,17 @@ def test_clt_update_check(self):
clt = self.client
ent = self.entity
mgr = clt._entity_manager
- chk = utils.random_name()
- label = utils.random_name()
- name = utils.random_name()
- disabled = utils.random_name()
- metadata = utils.random_name()
- monitoring_zones_poll = utils.random_name()
- timeout = utils.random_name()
- period = utils.random_name()
- target_alias = utils.random_name()
- target_hostname = utils.random_name()
- target_receiver = utils.random_name()
+ chk = utils.random_unicode()
+ label = utils.random_unicode()
+ name = utils.random_unicode()
+ disabled = utils.random_unicode()
+ metadata = utils.random_unicode()
+ monitoring_zones_poll = utils.random_unicode()
+ timeout = utils.random_unicode()
+ period = utils.random_unicode()
+ target_alias = utils.random_unicode()
+ target_hostname = utils.random_unicode()
+ target_receiver = utils.random_unicode()
mgr.update_check = Mock()
clt.update_check(ent, chk, label=label, name=name, disabled=disabled,
metadata=metadata, monitoring_zones_poll=monitoring_zones_poll,
@@ -1174,7 +1174,7 @@ def test_clt_delete_check(self):
clt = self.client
ent = self.entity
mgr = clt._entity_manager
- chk = utils.random_name()
+ chk = utils.random_unicode()
mgr.delete_check = Mock()
clt.delete_check(ent, chk)
mgr.delete_check.assert_called_once_with(ent, chk)
@@ -1183,8 +1183,8 @@ def test_clt_list_metrics(self):
clt = self.client
ent = self.entity
mgr = clt._entity_manager
- chk = utils.random_name()
- answer = utils.random_name()
+ chk = utils.random_unicode()
+ answer = utils.random_unicode()
mgr.list_metrics = Mock(return_value=answer)
ret = clt.list_metrics(ent, chk)
mgr.list_metrics.assert_called_once_with(ent, chk)
@@ -1194,15 +1194,15 @@ def test_clt_get_metric_data_points(self):
clt = self.client
ent = self.entity
mgr = clt._entity_manager
- chk = utils.random_name()
- answer = utils.random_name()
+ chk = utils.random_unicode()
+ answer = utils.random_unicode()
mgr.get_metric_data_points = Mock(return_value=answer)
- metric = utils.random_name()
- start = utils.random_name()
- end = utils.random_name()
- points = utils.random_name()
- resolution = utils.random_name()
- stats = utils.random_name()
+ metric = utils.random_unicode()
+ start = utils.random_unicode()
+ end = utils.random_unicode()
+ points = utils.random_unicode()
+ resolution = utils.random_unicode()
+ stats = utils.random_unicode()
ret = clt.get_metric_data_points(ent, chk, metric, start, end,
points=points, resolution=resolution, stats=stats)
mgr.get_metric_data_points.assert_called_once_with(ent, chk, metric,
@@ -1213,7 +1213,7 @@ def test_clt_list_notifications(self):
clt = self.client
ent = self.entity
mgr = clt._notification_manager
- answer = utils.random_name()
+ answer = utils.random_unicode()
mgr.list = Mock(return_value=answer)
ret = clt.list_notifications()
mgr.list.assert_called_once_with()
@@ -1223,8 +1223,8 @@ def test_clt_get_notification(self):
clt = self.client
ent = self.entity
mgr = clt._notification_manager
- answer = utils.random_name()
- notif_id = utils.random_name()
+ answer = utils.random_unicode()
+ notif_id = utils.random_unicode()
mgr.get = Mock(return_value=answer)
ret = clt.get_notification(notif_id)
mgr.get.assert_called_once_with(notif_id)
@@ -1234,11 +1234,11 @@ def test_clt_test_notification(self):
clt = self.client
ent = self.entity
mgr = clt._notification_manager
- answer = utils.random_name()
+ answer = utils.random_unicode()
mgr.test_notification = Mock(return_value=answer)
- notification = utils.random_name()
- ntyp = utils.random_name()
- details = utils.random_name()
+ notification = utils.random_unicode()
+ ntyp = utils.random_unicode()
+ details = utils.random_unicode()
ret = clt.test_notification(notification=notification,
notification_type=ntyp, details=details)
mgr.test_notification.assert_called_once_with(notification=notification,
@@ -1249,12 +1249,12 @@ def test_clt_create_notification(self):
clt = self.client
ent = self.entity
mgr = clt._notification_manager
- answer = utils.random_name()
+ answer = utils.random_unicode()
mgr.create = Mock(return_value=answer)
- ntyp = utils.random_name()
- label = utils.random_name()
- name = utils.random_name()
- details = utils.random_name()
+ ntyp = utils.random_unicode()
+ label = utils.random_unicode()
+ name = utils.random_unicode()
+ details = utils.random_unicode()
ret = clt.create_notification(ntyp, label=label, name=name,
details=details)
mgr.create.assert_called_once_with(ntyp, label=label, name=name,
@@ -1265,10 +1265,10 @@ def test_clt_update_notification(self):
clt = self.client
ent = self.entity
mgr = clt._notification_manager
- answer = utils.random_name()
+ answer = utils.random_unicode()
mgr.update_notification = Mock(return_value=answer)
- notification = utils.random_name()
- details = utils.random_name()
+ notification = utils.random_unicode()
+ details = utils.random_unicode()
ret = clt.update_notification(notification, details)
mgr.update_notification.assert_called_once_with(notification, details)
self.assertEqual(ret, answer)
@@ -1277,9 +1277,9 @@ def test_clt_delete_notification(self):
clt = self.client
ent = self.entity
mgr = clt._notification_manager
- answer = utils.random_name()
+ answer = utils.random_unicode()
mgr.delete = Mock(return_value=answer)
- notification = utils.random_name()
+ notification = utils.random_unicode()
ret = clt.delete_notification(notification)
mgr.delete.assert_called_once_with(notification)
self.assertEqual(ret, answer)
@@ -1288,13 +1288,13 @@ def test_clt_create_notification_plan(self):
clt = self.client
ent = self.entity
mgr = clt._notification_plan_manager
- answer = utils.random_name()
+ answer = utils.random_unicode()
mgr.create = Mock(return_value=answer)
- label = utils.random_name()
- name = utils.random_name()
- critical_state = utils.random_name()
- ok_state = utils.random_name()
- warning_state = utils.random_name()
+ label = utils.random_unicode()
+ name = utils.random_unicode()
+ critical_state = utils.random_unicode()
+ ok_state = utils.random_unicode()
+ warning_state = utils.random_unicode()
ret = clt.create_notification_plan(label=label, name=name,
critical_state=critical_state, ok_state=ok_state,
warning_state=warning_state)
@@ -1307,7 +1307,7 @@ def test_clt_list_notification_plans(self):
clt = self.client
ent = self.entity
mgr = clt._notification_plan_manager
- answer = utils.random_name()
+ answer = utils.random_unicode()
mgr.list = Mock(return_value=answer)
ret = clt.list_notification_plans()
mgr.list.assert_called_once_with()
@@ -1317,8 +1317,8 @@ def test_clt_get_notification_plan(self):
clt = self.client
ent = self.entity
mgr = clt._notification_plan_manager
- answer = utils.random_name()
- nplan_id = utils.random_name()
+ answer = utils.random_unicode()
+ nplan_id = utils.random_unicode()
mgr.get = Mock(return_value=answer)
ret = clt.get_notification_plan(nplan_id)
mgr.get.assert_called_once_with(nplan_id)
@@ -1328,9 +1328,9 @@ def test_clt_delete_notification_plan(self):
clt = self.client
ent = self.entity
mgr = clt._notification_plan_manager
- answer = utils.random_name()
+ answer = utils.random_unicode()
mgr.delete = Mock(return_value=answer)
- notification_plan = utils.random_name()
+ notification_plan = utils.random_unicode()
ret = clt.delete_notification_plan(notification_plan)
mgr.delete.assert_called_once_with(notification_plan)
self.assertEqual(ret, answer)
@@ -1339,7 +1339,7 @@ def test_clt_list_alarms(self):
clt = self.client
ent = self.entity
mgr = clt._entity_manager
- alms = utils.random_name()
+ alms = utils.random_unicode()
mgr.list_alarms = Mock(return_value=alms)
ret = clt.list_alarms(ent)
mgr.list_alarms.assert_called_once_with(ent)
@@ -1349,8 +1349,8 @@ def test_clt_get_alarm(self):
clt = self.client
ent = self.entity
mgr = clt._entity_manager
- answer = utils.random_name()
- alm = utils.random_name()
+ answer = utils.random_unicode()
+ alm = utils.random_unicode()
mgr.get_alarm = Mock(return_value=answer)
ret = clt.get_alarm(ent, alm)
mgr.get_alarm.assert_called_once_with(ent, alm)
@@ -1360,14 +1360,14 @@ def test_clt_create_alarm(self):
clt = self.client
ent = self.entity
mgr = clt._entity_manager
- chk = utils.random_name()
- nplan = utils.random_name()
- criteria = utils.random_name()
- disabled = utils.random_name()
- label = utils.random_name()
- name = utils.random_name()
- metadata = utils.random_name()
- answer = utils.random_name()
+ chk = utils.random_unicode()
+ nplan = utils.random_unicode()
+ criteria = utils.random_unicode()
+ disabled = utils.random_unicode()
+ label = utils.random_unicode()
+ name = utils.random_unicode()
+ metadata = utils.random_unicode()
+ answer = utils.random_unicode()
mgr.create_alarm = Mock(return_value=answer)
ret = clt.create_alarm(ent, chk, nplan, criteria=criteria,
disabled=disabled, label=label, name=name, metadata=metadata)
@@ -1380,13 +1380,13 @@ def test_clt_update_alarm(self):
clt = self.client
ent = self.entity
mgr = clt._entity_manager
- alm = utils.random_name()
- criteria = utils.random_name()
- disabled = utils.random_name()
- label = utils.random_name()
- name = utils.random_name()
- metadata = utils.random_name()
- answer = utils.random_name()
+ alm = utils.random_unicode()
+ criteria = utils.random_unicode()
+ disabled = utils.random_unicode()
+ label = utils.random_unicode()
+ name = utils.random_unicode()
+ metadata = utils.random_unicode()
+ answer = utils.random_unicode()
mgr.update_alarm = Mock(return_value=answer)
ret = clt.update_alarm(ent, alm, criteria=criteria, disabled=disabled,
label=label, name=name, metadata=metadata)
@@ -1398,7 +1398,7 @@ def test_clt_delete_alarm(self):
clt = self.client
ent = self.entity
mgr = clt._entity_manager
- alm = utils.random_name()
+ alm = utils.random_unicode()
mgr.delete_alarm = Mock()
clt.delete_alarm(ent, alm)
mgr.delete_alarm.assert_called_once_with(ent, alm)
@@ -1407,7 +1407,7 @@ def test_clt_list_notification_types(self):
clt = self.client
ent = self.entity
mgr = clt._notification_manager
- typs = utils.random_name()
+ typs = utils.random_unicode()
mgr.list_types = Mock(return_value=typs)
ret = clt.list_notification_types()
mgr.list_types.assert_called_once_with()
@@ -1417,8 +1417,8 @@ def test_clt_get_notification_type(self):
clt = self.client
ent = self.entity
mgr = clt._notification_manager
- answer = utils.random_name()
- nt_id = utils.random_name()
+ answer = utils.random_unicode()
+ nt_id = utils.random_unicode()
mgr.get_type = Mock(return_value=answer)
ret = clt.get_notification_type(nt_id)
mgr.get_type.assert_called_once_with(nt_id)
@@ -1428,7 +1428,7 @@ def test_clt_list_monitoring_zones(self):
clt = self.client
ent = self.entity
mgr = clt._monitoring_zone_manager
- typs = utils.random_name()
+ typs = utils.random_unicode()
mgr.list = Mock(return_value=typs)
ret = clt.list_monitoring_zones()
mgr.list.assert_called_once_with()
@@ -1438,8 +1438,8 @@ def test_clt_get_monitoring_zone(self):
clt = self.client
ent = self.entity
mgr = clt._monitoring_zone_manager
- answer = utils.random_name()
- mz_id = utils.random_name()
+ answer = utils.random_unicode()
+ mz_id = utils.random_unicode()
mgr.get = Mock(return_value=answer)
ret = clt.get_monitoring_zone(mz_id)
mgr.get.assert_called_once_with(mz_id)
@@ -1471,11 +1471,11 @@ def test_clt_findall(self):
def test_clt_create_body(self):
mgr = self.client._entity_manager
- label = utils.random_name()
- name = utils.random_name()
- agent = utils.random_name()
- ip_addresses = utils.random_name()
- metadata = utils.random_name()
+ label = utils.random_unicode()
+ name = utils.random_unicode()
+ agent = utils.random_unicode()
+ ip_addresses = utils.random_unicode()
+ metadata = utils.random_unicode()
expected = {"label": label, "ip_addresses": ip_addresses,
"agent_id": agent, "metadata": metadata}
ret = mgr._create_body(name, label=label, agent=agent,
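The sweep above from `random_name()` to `random_unicode()` means test fixtures now carry non-ASCII data, so byte/text coercion bugs surface in the unit tests rather than in production. A minimal sketch of such a helper (hypothetical: the real `pyrax.utils.random_unicode()` may choose characters and default length differently):

```python
# -*- coding: utf-8 -*-
import random
import string

def random_unicode(length=10):
    """Return random text mixing ASCII letters with non-ASCII code points.

    Hypothetical sketch of a pyrax.utils-style helper; the actual
    implementation may differ. The non-ASCII characters are what make
    naive str() coercions in the code under test fail loudly.
    """
    non_ascii = u"\u00e9\u00fc\u00f1\u3042\u0416"  # é ü ñ あ Ж
    pool = string.ascii_letters + non_ascii
    return u"".join(random.choice(pool) for _ in range(length))
```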
diff --git a/tests/unit/test_cloud_networks.py b/tests/unit/test_cloud_networks.py
index d90f7cf9..7243669f 100644
--- a/tests/unit/test_cloud_networks.py
+++ b/tests/unit/test_cloud_networks.py
@@ -78,15 +78,16 @@ def test_create_manager(self):
self.assertTrue(isinstance(clt._manager, CloudNetworkManager))
def test_create_body(self):
- nm = utils.random_name()
+ mgr = self.client._manager
+ nm = utils.random_unicode()
expected = {"network": {"label": nm, "cidr": example_cidr}}
- returned = self.client._create_body(name=nm, cidr=example_cidr)
+ returned = mgr._create_body(name=nm, cidr=example_cidr)
self.assertEqual(expected, returned)
def test_create(self):
clt = self.client
clt._manager.create = Mock(return_value=fakes.FakeCloudNetwork())
- nm = utils.random_name()
+ nm = utils.random_unicode()
new = clt.create(label=nm, cidr=example_cidr)
clt._manager.create.assert_called_once_with(label=nm, name=None,
cidr=example_cidr)
@@ -96,7 +97,7 @@ def test_create_fail_count(self):
err = exc.BadRequest(400)
err.message = "Request failed: too many networks."
clt._manager.create = Mock(side_effect=err)
- nm = utils.random_name()
+ nm = utils.random_unicode()
self.assertRaises(exc.NetworkCountExceeded, clt.create, label=nm,
cidr=example_cidr)
@@ -105,7 +106,7 @@ def test_create_fail_cidr(self):
err = exc.BadRequest(400)
err.message = "CIDR does not contain enough addresses."
clt._manager.create = Mock(side_effect=err)
- nm = utils.random_name()
+ nm = utils.random_unicode()
self.assertRaises(exc.NetworkCIDRInvalid, clt.create, label=nm,
cidr=example_cidr)
@@ -114,7 +115,7 @@ def test_create_fail_cidr_malformed(self):
err = exc.BadRequest(400)
err.message = "CIDR is malformed."
clt._manager.create = Mock(side_effect=err)
- nm = utils.random_name()
+ nm = utils.random_unicode()
self.assertRaises(exc.NetworkCIDRMalformed, clt.create, label=nm,
cidr=example_cidr)
@@ -123,7 +124,7 @@ def test_create_fail_other(self):
err = exc.BadRequest(400)
err.message = "Something strange happened."
clt._manager.create = Mock(side_effect=err)
- nm = utils.random_name()
+ nm = utils.random_unicode()
self.assertRaises(exc.BadRequest, clt.create, label=nm,
cidr=example_cidr)
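Note that `test_create_body` above now calls `mgr._create_body()` instead of `self.client._create_body()`, reflecting the move of `_create_body()` from `BaseClient` to `BaseManager`. A simplified sketch of the manager-side builder consistent with the asserted body (class name and signature are illustrative, not the real pyrax code):

```python
class NetworkManagerSketch(object):
    """Hypothetical, simplified manager exposing the _create_body() that
    the network test above now targets directly on the manager."""

    def _create_body(self, name, label=None, cidr=None):
        # label falls back to name, matching the expected dict in the test
        return {"network": {"label": label or name, "cidr": cidr}}

mgr = NetworkManagerSketch()
body = mgr._create_body(name="testnet", cidr="192.168.0.0/24")
# body == {"network": {"label": "testnet", "cidr": "192.168.0.0/24"}}
```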
diff --git a/tests/unit/test_identity.py b/tests/unit/test_identity.py
index 640b38a5..ed4ee9f0 100644
--- a/tests/unit/test_identity.py
+++ b/tests/unit/test_identity.py
@@ -62,8 +62,8 @@ def test_init(self):
def test_auth_with_token_name(self):
for cls in self.id_classes.values():
ident = cls()
- tok = utils.random_name()
- nm = utils.random_name()
+ tok = utils.random_unicode()
+ nm = utils.random_unicode()
resp = fakes.FakeIdentityResponse()
# Need to stuff this into the standard response
sav = resp.content["access"]["user"]["name"]
@@ -80,8 +80,8 @@ def test_auth_with_token_name(self):
def test_auth_with_token_id(self):
for cls in self.id_classes.values():
ident = cls()
- tok = utils.random_name()
- tenant_id = utils.random_name()
+ tok = utils.random_unicode()
+ tenant_id = utils.random_unicode()
resp = fakes.FakeIdentityResponse()
# Need to stuff this into the standard response
sav = resp.content["access"]["token"]["tenant"]["id"]
@@ -98,8 +98,8 @@ def test_auth_with_token_id(self):
def test_auth_with_token_id_auth_fail(self):
for cls in self.id_classes.values():
ident = cls()
- tok = utils.random_name()
- tenant_id = utils.random_name()
+ tok = utils.random_unicode()
+ tenant_id = utils.random_unicode()
resp = fakes.FakeIdentityResponse()
resp.status_code = 401
ident.method_post = Mock(return_value=resp)
@@ -109,8 +109,8 @@ def test_auth_with_token_id_auth_fail(self):
def test_auth_with_token_id_auth_fail_general(self):
for cls in self.id_classes.values():
ident = cls()
- tok = utils.random_name()
- tenant_id = utils.random_name()
+ tok = utils.random_unicode()
+ tenant_id = utils.random_unicode()
resp = fakes.FakeIdentityResponse()
resp.status_code = 499
resp.reason = "fake"
@@ -123,13 +123,13 @@ def test_auth_with_token_missing(self):
for cls in self.id_classes.values():
ident = cls()
self.assertRaises(exc.MissingAuthSettings, ident.auth_with_token,
- utils.random_name())
+ utils.random_unicode())
def test_auth_with_token_rax(self):
ident = self.rax_identity_class()
- mid = utils.random_name()
- oid = utils.random_name()
- token = utils.random_name()
+ mid = utils.random_unicode()
+ oid = utils.random_unicode()
+ token = utils.random_unicode()
class FakeResp(object):
info = None
@@ -304,7 +304,7 @@ def test_rax_endpoints(self):
def test_auth_token(self):
for cls in self.id_classes.values():
ident = cls()
- test_token = utils.random_name()
+ test_token = utils.random_unicode()
ident.token = test_token
self.assertEqual(ident.auth_token, test_token)
@@ -321,10 +321,10 @@ def test_regions(self):
def test_http_methods(self):
ident = self.base_identity_class()
ident._call = Mock()
- uri = utils.random_name()
- dkv = utils.random_name()
+ uri = utils.random_unicode()
+ dkv = utils.random_unicode()
data = {dkv: dkv}
- hkv = utils.random_name()
+ hkv = utils.random_unicode()
headers = {hkv: hkv}
std_headers = True
ident.method_get(uri, admin=False, data=data, headers=headers,
@@ -354,15 +354,15 @@ def test_call(self):
requests.post = Mock()
sav_debug = ident.http_log_debug
ident.http_log_debug = True
- uri = "https://%s/%s" % (utils.random_name(), utils.random_name())
+ uri = "https://%s/%s" % (utils.random_unicode(), utils.random_unicode())
sav_stdout = sys.stdout
out = StringIO.StringIO()
sys.stdout = out
utils.add_method(ident, lambda self: "", "_get_auth_endpoint")
- dkv = utils.random_name()
+ dkv = utils.random_unicode()
data = {dkv: dkv}
jdata = json.dumps(data)
- hkv = utils.random_name()
+ hkv = utils.random_unicode()
headers = {hkv: hkv}
for std_headers in (True, False):
expected_headers = ident._standard_headers() if std_headers else {}
@@ -415,21 +415,21 @@ def test_find_user(self):
resp = fakes.FakeIdentityResponse()
resp.response_type = "users"
ident.method_get = Mock(return_value=resp)
- fake_uri = utils.random_name()
+ fake_uri = utils.random_unicode()
ret = ident._find_user(fake_uri)
self.assert_(isinstance(ret, pyrax.rax_identity.User))
def test_find_user_by_name(self):
ident = self.rax_identity_class()
ident._find_user = Mock()
- fake_name = utils.random_name()
+ fake_name = utils.random_unicode()
ret = ident.find_user_by_name(fake_name)
ident._find_user.assert_called_with("users?name=%s" % fake_name)
def test_find_user_by_id(self):
ident = self.rax_identity_class()
ident._find_user = Mock()
- fake_id = utils.random_name()
+ fake_id = utils.random_unicode()
ret = ident.find_user_by_id(fake_id)
ident._find_user.assert_called_with("users/%s" % fake_id)
@@ -438,9 +438,9 @@ def test_create_user(self):
resp = fakes.FakeIdentityResponse()
resp.response_type = "users"
ident.method_post = Mock(return_value=resp)
- fake_name = utils.random_name()
- fake_email = utils.random_name()
- fake_password = utils.random_name()
+ fake_name = utils.random_unicode()
+ fake_email = utils.random_unicode()
+ fake_password = utils.random_unicode()
ident.create_user(fake_name, fake_email, fake_password)
cargs = ident.method_post.call_args
self.assertEqual(len(cargs), 2)
@@ -454,11 +454,11 @@ def test_update_user(self):
resp = fakes.FakeIdentityResponse()
resp.response_type = "users"
ident.method_put = Mock(return_value=resp)
- fake_name = utils.random_name()
- fake_email = utils.random_name()
- fake_username = utils.random_name()
- fake_uid = utils.random_name()
- fake_region = utils.random_name()
+ fake_name = utils.random_unicode()
+ fake_email = utils.random_unicode()
+ fake_username = utils.random_unicode()
+ fake_uid = utils.random_unicode()
+ fake_region = utils.random_unicode()
fake_enabled = random.choice((True, False))
ident.update_user(fake_name, email=fake_email, username=fake_username,
uid=fake_uid, defaultRegion=fake_region, enabled=fake_enabled)
@@ -475,14 +475,14 @@ def test_update_user(self):
def test_find_user_by_name(self):
ident = self.rax_identity_class()
ident._find_user = Mock()
- fake_name = utils.random_name()
+ fake_name = utils.random_unicode()
ret = ident.find_user_by_name(fake_name)
ident._find_user.assert_called_with("users?name=%s" % fake_name)
def test_find_user_by_id(self):
ident = self.rax_identity_class()
ident._find_user = Mock()
- fake_id = utils.random_name()
+ fake_id = utils.random_unicode()
ret = ident.find_user_by_id(fake_id)
ident._find_user.assert_called_with("users/%s" % fake_id)
@@ -492,7 +492,7 @@ def test_find_user_fail(self):
resp.response_type = "users"
resp.status_code = 404
ident.method_get = Mock(return_value=resp)
- fake_uri = utils.random_name()
+ fake_uri = utils.random_unicode()
ret = ident._find_user(fake_uri)
self.assertIsNone(ret)
@@ -502,9 +502,9 @@ def test_create_user(self):
resp = fakes.FakeIdentityResponse()
resp.response_type = "users"
ident.method_post = Mock(return_value=resp)
- fake_name = utils.random_name()
- fake_email = utils.random_name()
- fake_password = utils.random_name()
+ fake_name = utils.random_unicode()
+ fake_email = utils.random_unicode()
+ fake_password = utils.random_unicode()
ident.create_user(fake_name, fake_email, fake_password)
cargs = ident.method_post.call_args
self.assertEqual(len(cargs), 2)
@@ -520,9 +520,9 @@ def test_create_user_not_authorized(self):
resp.response_type = "users"
resp.status_code = 401
ident.method_post = Mock(return_value=resp)
- fake_name = utils.random_name()
- fake_email = utils.random_name()
- fake_password = utils.random_name()
+ fake_name = utils.random_unicode()
+ fake_email = utils.random_unicode()
+ fake_password = utils.random_unicode()
self.assertRaises(exc.AuthorizationFailure, ident.create_user,
fake_name, fake_email, fake_password)
@@ -535,9 +535,9 @@ def test_create_user_bad_email(self):
resp.text = json.dumps(
{"badRequest": {"message": "Expecting valid email address"}})
ident.method_post = Mock(return_value=resp)
- fake_name = utils.random_name()
- fake_email = utils.random_name()
- fake_password = utils.random_name()
+ fake_name = utils.random_unicode()
+ fake_email = utils.random_unicode()
+ fake_password = utils.random_unicode()
self.assertRaises(exc.InvalidEmail, ident.create_user,
fake_name, fake_email, fake_password)
@@ -548,9 +548,9 @@ def test_create_user_not_found(self):
resp.response_type = "users"
resp.status_code = 404
ident.method_post = Mock(return_value=resp)
- fake_name = utils.random_name()
- fake_email = utils.random_name()
- fake_password = utils.random_name()
+ fake_name = utils.random_unicode()
+ fake_email = utils.random_unicode()
+ fake_password = utils.random_unicode()
self.assertRaises(exc.AuthorizationFailure, ident.create_user,
fake_name, fake_email, fake_password)
@@ -560,11 +560,11 @@ def test_update_user(self):
resp = fakes.FakeIdentityResponse()
resp.response_type = "users"
ident.method_put = Mock(return_value=resp)
- fake_name = utils.random_name()
- fake_email = utils.random_name()
- fake_username = utils.random_name()
- fake_uid = utils.random_name()
- fake_region = utils.random_name()
+ fake_name = utils.random_unicode()
+ fake_email = utils.random_unicode()
+ fake_username = utils.random_unicode()
+ fake_uid = utils.random_unicode()
+ fake_region = utils.random_unicode()
fake_enabled = random.choice((True, False))
kwargs = {"email": fake_email, "username": fake_username,
"uid": fake_uid, "enabled": fake_enabled}
@@ -587,7 +587,7 @@ def test_delete_user(self):
resp = fakes.FakeIdentityResponse()
resp.response_type = "users"
ident.method_delete = Mock(return_value=resp)
- fake_name = utils.random_name()
+ fake_name = utils.random_unicode()
ident.delete_user(fake_name)
cargs = ident.method_delete.call_args
self.assertEqual(len(cargs), 2)
@@ -600,7 +600,7 @@ def test_delete_user_fail(self):
resp.response_type = "users"
resp.status_code = 404
ident.method_delete = Mock(return_value=resp)
- fake_name = utils.random_name()
+ fake_name = utils.random_unicode()
self.assertRaises(exc.UserNotFound, ident.delete_user, fake_name)
def test_list_roles_for_user(self):
@@ -620,7 +620,7 @@ def test_list_roles_for_user(self):
def test_list_credentials(self):
ident = self.rax_identity_class()
ident.method_get = Mock()
- fake_name = utils.random_name()
+ fake_name = utils.random_unicode()
ident.list_credentials(fake_name)
cargs = ident.method_get.call_args
called_uri = cargs[0][0]
@@ -630,7 +630,7 @@ def test_list_credentials(self):
def test_get_user_credentials(self):
ident = self.rax_identity_class()
ident.method_get = Mock()
- fake_name = utils.random_name()
+ fake_name = utils.random_unicode()
ident.get_user_credentials(fake_name)
cargs = ident.method_get.call_args
called_uri = cargs[0][0]
@@ -639,7 +639,7 @@ def test_get_user_credentials(self):
def test_get_keystone_endpoint(self):
ident = self.keystone_identity_class()
- fake_ep = utils.random_name()
+ fake_ep = utils.random_unicode()
sav_setting = pyrax.get_setting
pyrax.get_setting = Mock(return_value=fake_ep)
ep = ident._get_auth_endpoint()
@@ -800,8 +800,8 @@ def test_create_tenant(self):
resp = fakes.FakeIdentityResponse()
resp.response_type = "tenant"
ident.method_post = Mock(return_value=resp)
- fake_name = utils.random_name()
- fake_desc = utils.random_name()
+ fake_name = utils.random_unicode()
+ fake_desc = utils.random_unicode()
tenant = ident.create_tenant(fake_name, description=fake_desc)
self.assert_(isinstance(tenant, base_identity.Tenant))
cargs = ident.method_post.call_args
@@ -817,9 +817,9 @@ def test_update_tenant(self):
resp = fakes.FakeIdentityResponse()
resp.response_type = "tenant"
ident.method_put = Mock(return_value=resp)
- fake_id = utils.random_name()
- fake_name = utils.random_name()
- fake_desc = utils.random_name()
+ fake_id = utils.random_unicode()
+ fake_name = utils.random_unicode()
+ fake_desc = utils.random_unicode()
tenant = ident.update_tenant(fake_id, name=fake_name,
description=fake_desc)
self.assert_(isinstance(tenant, base_identity.Tenant))
@@ -836,7 +836,7 @@ def test_delete_tenant(self):
resp = fakes.FakeIdentityResponse()
resp.response_type = "tenant"
ident.method_delete = Mock(return_value=resp)
- fake_id = utils.random_name()
+ fake_id = utils.random_unicode()
ident.delete_tenant(fake_id)
ident.method_delete.assert_called_with("tenants/%s" % fake_id)
@@ -847,7 +847,7 @@ def test_delete_tenantfail(self):
resp.response_type = "tenant"
resp.status_code = 404
ident.method_delete = Mock(return_value=resp)
- fake_id = utils.random_name()
+ fake_id = utils.random_unicode()
self.assertRaises(exc.TenantNotFound, ident.delete_tenant, fake_id)
def test_parse_api_time_us(self):
diff --git a/tests/unit/test_manager.py b/tests/unit/test_manager.py
index 7a9d2fd3..1a53fc34 100644
--- a/tests/unit/test_manager.py
+++ b/tests/unit/test_manager.py
@@ -33,7 +33,7 @@ def test_list(self):
mgr._list = Mock()
mgr.uri_base = "test"
mgr.list()
- mgr._list.assert_called_once_with("/test")
+ mgr._list.assert_called_once_with("/test", return_raw=False)
mgr._list = sav
def test_list_paged(self):
@@ -45,7 +45,7 @@ def test_list_paged(self):
fake_marker = random.randint(100, 200)
mgr.list(limit=fake_limit, marker=fake_marker)
expected_uri = "/test?limit=%s&marker=%s" % (fake_limit, fake_marker)
- mgr._list.assert_called_once_with(expected_uri)
+ mgr._list.assert_called_once_with(expected_uri, return_raw=False)
mgr._list = sav
def test_head(self):
@@ -86,7 +86,7 @@ def test_create(self):
mgr._create = Mock()
mgr.uri_base = "test"
mgr._create_body = Mock(return_value="body")
- nm = utils.random_name()
+ nm = utils.random_unicode()
mgr.create(nm)
mgr._create.assert_called_once_with("/test", "body", return_none=False,
return_raw=False, return_response=False)
diff --git a/tests/unit/test_module.py b/tests/unit/test_module.py
index ab2f1b1c..9686dd73 100644
--- a/tests/unit/test_module.py
+++ b/tests/unit/test_module.py
@@ -92,8 +92,8 @@ def test_settings_get(self):
def test_settings_get_from_env(self):
pyrax.settings._settings = {"default": {}}
pyrax.settings.env_dct = {"identity_type": "fake"}
- typ = utils.random_name()
- ident = utils.random_name()
+ typ = utils.random_unicode()
+ ident = utils.random_unicode()
sav_env = os.environ
sav_imp = pyrax._import_identity
pyrax._import_identity = Mock(return_value=ident)
@@ -190,8 +190,8 @@ class FakeKeyring(object):
def test_auth_with_token(self):
pyrax.authenticated = False
- tok = utils.random_name()
- tname = utils.random_name()
+ tok = utils.random_unicode()
+ tname = utils.random_unicode()
pyrax.auth_with_token(tok, tenant_name=tname)
self.assertTrue(pyrax.identity.authenticated)
self.assertEqual(pyrax.identity.token, tok)
@@ -262,19 +262,19 @@ def test_set_region_setting(self):
def test_safe_region(self):
# Pass direct
- reg = utils.random_name()
+ reg = utils.random_unicode()
ret = pyrax._safe_region(reg)
self.assertEqual(reg, ret)
# From config setting
orig_reg = pyrax.get_setting("region")
- reg = utils.random_name()
+ reg = utils.random_unicode()
pyrax.set_setting("region", reg)
ret = pyrax._safe_region()
self.assertEqual(reg, ret)
# Identity default
pyrax.set_setting("region", None)
orig_defreg = pyrax.identity.get_default_region
- reg = utils.random_name()
+ reg = utils.random_unicode()
pyrax.identity.get_default_region = Mock(return_value=reg)
ret = pyrax._safe_region()
self.assertEqual(reg, ret)
diff --git a/tests/unit/test_queues.py b/tests/unit/test_queues.py
new file mode 100644
index 00000000..4eefed03
--- /dev/null
+++ b/tests/unit/test_queues.py
@@ -0,0 +1,687 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+import os
+import random
+import unittest
+
+from mock import patch
+from mock import MagicMock as Mock
+
+import pyrax
+import pyrax.queueing
+from pyrax.queueing import BaseQueueManager
+from pyrax.queueing import Queue
+from pyrax.queueing import QueueClaim
+from pyrax.queueing import QueueClaimManager
+from pyrax.queueing import QueueClient
+from pyrax.queueing import QueueManager
+from pyrax.queueing import QueueMessage
+from pyrax.queueing import QueueMessageManager
+from pyrax.queueing import assure_queue
+from pyrax.queueing import _parse_marker
+
+import pyrax.exceptions as exc
+import pyrax.utils as utils
+
+import fakes
+
+
+def _safe_id():
+    """
+    Return a random ASCII string with the URL-delimiter characters "#;/?"
+    removed. The random string helpers can emit these characters, and when
+    such a value is embedded in an HREF, urlparse splits the URL
+    incorrectly, so tests that parse IDs out of HREFs use this helper.
+    """
+ val = utils.random_ascii()
+ for bad in "#;/?":
+ val = val.replace(bad, "")
+ return val
+
+
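As an aside, the reason `_safe_id()` strips `#;/?` can be seen directly with `urlparse`: those characters are URL delimiters, so a random ID containing one of them is split across URL components when embedded in an href. A minimal illustration (not part of the patch):

```python
# Illustration of why _safe_id() strips "#;/?": urlparse treats these
# characters as URL delimiters, so an ID containing one of them is
# mis-split when embedded in an href.
try:
    from urllib.parse import urlparse  # Python 3
except ImportError:
    from urlparse import urlparse      # Python 2, pyrax's era

good = urlparse("http://example.com/abc123")
bad = urlparse("http://example.com/abc#123")

print(good.path)     # /abc123 -- the whole ID survives in the path
print(bad.path)      # /abc    -- the ID is truncated...
print(bad.fragment)  # 123     -- ...with the rest in the fragment
```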
+class QueuesTest(unittest.TestCase):
+ def __init__(self, *args, **kwargs):
+ super(QueuesTest, self).__init__(*args, **kwargs)
+
+ def setUp(self):
+ self.client = fakes.FakeQueueClient()
+ self.client._manager = fakes.FakeQueueManager(self.client)
+ self.queue = fakes.FakeQueue()
+ self.queue.manager = self.client._manager
+
+ def tearDown(self):
+ pass
+
+ def test_parse_marker(self):
+ fake_marker = "%s" % random.randint(10000, 100000)
+ href = "http://example.com/foo?marker=%s" % fake_marker
+ body = {"links": [
+ {"rel": "next", "href": href},
+ {"rel": "bogus", "href": "fake"},
+ ]}
+ ret = _parse_marker(body)
+ self.assertEqual(ret, fake_marker)
+
+ def test_parse_marker_no_next(self):
+ body = {"links": [
+ {"rel": "bogus", "href": "fake"},
+ ]}
+ ret = _parse_marker(body)
+ self.assertIsNone(ret)
+
+ def test_parse_marker_fail(self):
+ fake_marker = "%s" % random.randint(10000, 100000)
+ href = "http://example.com/foo?not_valid=%s" % fake_marker
+ body = {"links": [
+ {"rel": "next", "href": href},
+ {"rel": "bogus", "href": "fake"},
+ ]}
+ ret = _parse_marker(body)
+ self.assertIsNone(ret)
+
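The three tests above pin down `_parse_marker`'s contract: return the `marker` query parameter of the link whose `rel` is `"next"`, and `None` when there is no such link or no such parameter. A minimal sketch of a function meeting that contract (an illustration of the behavior under test, not pyrax's actual implementation):

```python
# Sketch of the contract the three _parse_marker tests exercise:
# extract the "marker" query parameter from the rel == "next" link.
try:
    from urllib.parse import urlparse, parse_qs  # Python 3
except ImportError:
    from urlparse import urlparse, parse_qs      # Python 2

def parse_marker_sketch(body):
    for link in body.get("links", []):
        if link.get("rel") == "next":
            qs = parse_qs(urlparse(link["href"]).query)
            markers = qs.get("marker")
            # No "marker" parameter on the next link -> None.
            return markers[0] if markers else None
    # No "next" link at all -> None.
    return None

body = {"links": [{"rel": "next",
                   "href": "http://example.com/foo?marker=12345"}]}
print(parse_marker_sketch(body))  # 12345
```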
+ def test_assure_queue(self):
+ @assure_queue
+ def test(self, queue):
+ return queue
+ clt = self.client
+ q = self.queue
+ clt._manager.get = Mock(return_value=q)
+ ret = test(clt, q.id)
+ self.assertEqual(ret, q)
+
+ def test_base_list(self):
+ clt = self.client
+ mgr = clt._manager
+ mgr.api.method_get = Mock(side_effect=exc.NotFound(""))
+ uri = utils.random_unicode()
+ ret = mgr.list(uri)
+ self.assertEqual(ret, [])
+
+ def test_queue_get_message(self):
+ q = self.queue
+ q._message_manager.get = Mock()
+ msgid = utils.random_unicode()
+ q.get_message(msgid)
+ q._message_manager.get.assert_called_once_with(msgid)
+
+ def test_queue_delete_message(self):
+ q = self.queue
+ q._message_manager.delete = Mock()
+ msgid = utils.random_unicode()
+ q.delete_message(msgid)
+ q._message_manager.delete.assert_called_once_with(msgid)
+
+ def test_queue_list(self):
+ q = self.queue
+ q._message_manager.list = Mock()
+ include_claimed = utils.random_unicode()
+ echo = utils.random_unicode()
+ marker = utils.random_unicode()
+ limit = utils.random_unicode()
+ q.list(include_claimed=include_claimed, echo=echo, marker=marker,
+ limit=limit)
+ q._message_manager.list.assert_called_once_with(
+ include_claimed=include_claimed, echo=echo, marker=marker,
+ limit=limit)
+
+ def test_queue_list_by_ids(self):
+ q = self.queue
+ q._message_manager.list_by_ids = Mock()
+ ids = utils.random_unicode()
+ q.list_by_ids(ids)
+ q._message_manager.list_by_ids.assert_called_once_with(ids)
+
+ def test_queue_delete_by_ids(self):
+ q = self.queue
+ q._message_manager.delete_by_ids = Mock()
+ ids = utils.random_unicode()
+ q.delete_by_ids(ids)
+ q._message_manager.delete_by_ids.assert_called_once_with(ids)
+
+ def test_queue_list_by_claim(self):
+ q = self.queue
+ qclaim = fakes.FakeQueueClaim()
+ q._claim_manager.get = Mock(return_value=qclaim)
+ claim_id = utils.random_unicode()
+ ret = q.list_by_claim(claim_id)
+ self.assertEqual(ret, qclaim.messages)
+
+ def test_queue_post_message(self):
+ q = self.queue
+ q._message_manager.create = Mock()
+ body = utils.random_unicode()
+ ttl = utils.random_unicode()
+ q.post_message(body, ttl=ttl)
+ q._message_manager.create.assert_called_once_with(body, ttl=ttl)
+
+ def test_queue_claim_messages(self):
+ q = self.queue
+ q._claim_manager.claim = Mock()
+ ttl = utils.random_unicode()
+ grace = utils.random_unicode()
+ count = random.randint(1, 9)
+ q.claim_messages(ttl, grace, count=count)
+ q._claim_manager.claim.assert_called_once_with(ttl, grace, count=count)
+
+ def test_queue_get_claim(self):
+ q = self.queue
+ q._claim_manager.get = Mock()
+ claim = utils.random_unicode()
+ q.get_claim(claim)
+ q._claim_manager.get.assert_called_once_with(claim)
+
+ def test_queue_update_claim(self):
+ q = self.queue
+ q._claim_manager.update = Mock()
+ claim = utils.random_unicode()
+ ttl = utils.random_unicode()
+ grace = utils.random_unicode()
+ q.update_claim(claim, ttl=ttl, grace=grace)
+ q._claim_manager.update.assert_called_once_with(claim, ttl=ttl,
+ grace=grace)
+
+ def test_queue_release_claim(self):
+ q = self.queue
+ q._claim_manager.delete = Mock()
+ claim = utils.random_unicode()
+ q.release_claim(claim)
+ q._claim_manager.delete.assert_called_once_with(claim)
+
+ def test_queue_id_property(self):
+ q = self.queue
+ val = utils.random_unicode()
+ q.name = val
+ self.assertEqual(q.id, val)
+ val = utils.random_unicode()
+ q.id = val
+ self.assertEqual(q.name, val)
+
+ def test_msg_add_details(self):
+ id_ = _safe_id()
+ claim_id = utils.random_unicode()
+ age = utils.random_unicode()
+ body = utils.random_unicode()
+ ttl = utils.random_unicode()
+ href = "http://example.com/%s" % id_
+ info = {"href": href,
+ "age": age,
+ "body": body,
+ "ttl": ttl,
+ }
+ msg = QueueMessage(manager=None, info=info)
+ self.assertEqual(msg.id, id_)
+ self.assertIsNone(msg.claim_id)
+ self.assertEqual(msg.age, age)
+ self.assertEqual(msg.body, body)
+ self.assertEqual(msg.ttl, ttl)
+ self.assertEqual(msg.href, href)
+
+ def test_msg_add_details_claim(self):
+ id_ = _safe_id()
+ claim_id = _safe_id()
+ age = utils.random_unicode()
+ body = utils.random_unicode()
+ ttl = utils.random_unicode()
+ href = "http://example.com/%s?claim_id=%s" % (id_, claim_id)
+ info = {"href": href,
+ "age": age,
+ "body": body,
+ "ttl": ttl,
+ }
+ msg = QueueMessage(manager=None, info=info)
+ self.assertEqual(msg.id, id_)
+ self.assertEqual(msg.claim_id, claim_id)
+
+ def test_msg_add_details_no_href(self):
+ id_ = utils.random_unicode()
+ claim_id = utils.random_unicode()
+ age = utils.random_unicode()
+ body = utils.random_unicode()
+ ttl = utils.random_unicode()
+ href = None
+ info = {"href": href,
+ "age": age,
+ "body": body,
+ "ttl": ttl,
+ }
+ msg = QueueMessage(manager=None, info=info)
+ self.assertIsNone(msg.id)
+ self.assertIsNone(msg.claim_id)
+
+ def test_claim(self):
+ msgs = []
+ num = random.randint(1, 9)
+ for ii in range(num):
+ msg_id = utils.random_unicode()
+ claim_id = utils.random_unicode()
+ age = utils.random_unicode()
+ body = utils.random_unicode()
+ ttl = utils.random_unicode()
+ href = "http://example.com/%s" % msg_id
+ info = {"href": href,
+ "age": age,
+ "body": body,
+ "ttl": ttl,
+ }
+ msgs.append(info)
+ id_ = _safe_id()
+ href = "http://example.com/%s" % id_
+ info = {"href": href,
+ "messages": msgs,
+ }
+ mgr = fakes.FakeQueueManager()
+ mgr._message_manager = fakes.FakeQueueManager()
+ clm = QueueClaim(manager=mgr, info=info)
+ self.assertEqual(clm.id, id_)
+ self.assertEqual(len(clm.messages), num)
+
+ def test_queue_msg_mgr_create_body(self):
+ q = self.queue
+ mgr = q._message_manager
+ msg = utils.random_unicode()
+ ret = mgr._create_body(msg)
+ self.assertTrue(isinstance(ret, list))
+ self.assertEqual(len(ret), 1)
+ dct = ret[0]
+ self.assertTrue(isinstance(dct, dict))
+ self.assertEqual(dct["body"], msg)
+ self.assertEqual(dct["ttl"], pyrax.queueing.DAYS_14)
+
+ def test_queue_msg_mgr_list(self):
+ q = self.queue
+ mgr = q._message_manager
+ include_claimed = random.choice((True, False))
+ echo = random.choice((True, False))
+ marker = utils.random_unicode()
+ limit = random.randint(15, 35)
+ rbody = {"links": [], "messages": [{"href": "fake"}]}
+ pyrax.queueing._parse_marker = Mock(return_value="fake")
+ mgr._list = Mock(return_value=(None, rbody))
+ msgs = mgr.list(include_claimed=include_claimed, echo=echo,
+ marker=marker, limit=limit)
+
+ def test_queue_msg_mgr_no_limit_or_body(self):
+ q = self.queue
+ mgr = q._message_manager
+ include_claimed = random.choice((True, False))
+ echo = random.choice((True, False))
+ marker = utils.random_unicode()
+ pyrax.queueing._parse_marker = Mock(return_value="fake")
+ mgr._list = Mock(return_value=(None, None))
+ msgs = mgr.list(include_claimed=include_claimed, echo=echo,
+ marker=marker)
+
+ def test_queue_msg_mgr_list_by_ids(self):
+ q = self.queue
+ mgr = q._message_manager
+ mgr._list = Mock()
+ id1 = utils.random_unicode()
+ id2 = utils.random_unicode()
+ mgr.list_by_ids([id1, id2])
+ expected = "/%s?ids=%s" % (mgr.uri_base, ",".join([id1, id2]))
+ mgr._list.assert_called_once_with(expected)
+
+ def test_queue_msg_mgr_delete_by_ids(self):
+ q = self.queue
+ mgr = q._message_manager
+ mgr.api.method_delete = Mock()
+ id1 = utils.random_unicode()
+ id2 = utils.random_unicode()
+ mgr.delete_by_ids([id1, id2])
+ expected = "/%s?ids=%s" % (mgr.uri_base, ",".join([id1, id2]))
+ mgr.api.method_delete.assert_called_once_with(expected)
+
+ def test_queue_claim_mgr_claim(self):
+ q = self.queue
+ mgr = q._claim_manager
+ ttl = utils.random_unicode()
+ grace = utils.random_unicode()
+ count = utils.random_unicode()
+ claim_id = utils.random_unicode()
+ rbody = [{"href": "http://example.com/foo?claim_id=%s" % claim_id}]
+ mgr.api.method_post = Mock(return_value=(fakes.FakeResponse(), rbody))
+ mgr.get = Mock()
+ exp_uri = "/%s?limit=%s" % (mgr.uri_base, count)
+ exp_body = {"ttl": ttl, "grace": grace}
+ mgr.claim(ttl, grace, count=count)
+ mgr.api.method_post.assert_called_once_with(exp_uri, body=exp_body)
+ mgr.get.assert_called_once_with(claim_id)
+
+ def test_queue_claim_mgr_claim_no_count(self):
+ q = self.queue
+ mgr = q._claim_manager
+ ttl = utils.random_unicode()
+ grace = utils.random_unicode()
+ claim_id = utils.random_unicode()
+ rbody = [{"href": "http://example.com/foo?claim_id=%s" % claim_id}]
+ mgr.api.method_post = Mock(return_value=(fakes.FakeResponse(), rbody))
+ mgr.get = Mock()
+ exp_uri = "/%s" % mgr.uri_base
+ exp_body = {"ttl": ttl, "grace": grace}
+ mgr.claim(ttl, grace)
+ mgr.api.method_post.assert_called_once_with(exp_uri, body=exp_body)
+ mgr.get.assert_called_once_with(claim_id)
+
+ def test_queue_claim_mgr_claim_empty(self):
+ q = self.queue
+ mgr = q._claim_manager
+ ttl = utils.random_unicode()
+ grace = utils.random_unicode()
+ claim_id = utils.random_unicode()
+ rbody = [{"href": "http://example.com/foo?claim_id=%s" % claim_id}]
+ resp = fakes.FakeResponse()
+ resp.status = 204
+ mgr.api.method_post = Mock(return_value=(resp, rbody))
+ mgr.get = Mock()
+ exp_uri = "/%s" % mgr.uri_base
+ exp_body = {"ttl": ttl, "grace": grace}
+ mgr.claim(ttl, grace)
+ mgr.api.method_post.assert_called_once_with(exp_uri, body=exp_body)
+
+ def test_queue_claim_mgr_update(self):
+ q = self.queue
+ mgr = q._claim_manager
+ claim = utils.random_unicode()
+ ttl = utils.random_unicode()
+ grace = utils.random_unicode()
+ mgr.api.method_patch = Mock(return_value=(None, None))
+ exp_uri = "/%s/%s" % (mgr.uri_base, claim)
+ exp_body = {"ttl": ttl, "grace": grace}
+ mgr.update(claim, ttl=ttl, grace=grace)
+ mgr.api.method_patch.assert_called_once_with(exp_uri, body=exp_body)
+
+ def test_queue_claim_mgr_update_missing(self):
+ q = self.queue
+ mgr = q._claim_manager
+ claim = utils.random_unicode()
+ self.assertRaises(exc.MissingClaimParameters, mgr.update, claim)
+
+ def test_queue_mgr_create_body(self):
+ clt = self.client
+ mgr = clt._manager
+ name = utils.random_unicode()
+ metadata = utils.random_unicode()
+ ret = mgr._create_body(name, metadata=metadata)
+ self.assertEqual(ret, {"metadata": metadata})
+
+ def test_queue_mgr_create_body_no_meta(self):
+ clt = self.client
+ mgr = clt._manager
+ name = utils.random_unicode()
+ ret = mgr._create_body(name)
+ self.assertEqual(ret, {})
+
+ def test_queue_mgr_get(self):
+ clt = self.client
+ mgr = clt._manager
+ id_ = utils.random_unicode()
+ mgr.api.queue_exists = Mock(return_value=True)
+ q = mgr.get(id_)
+ self.assertTrue(isinstance(q, Queue))
+ self.assertEqual(q.name, id_)
+
+ def test_queue_mgr_get_not_found(self):
+ clt = self.client
+ mgr = clt._manager
+ id_ = utils.random_unicode()
+ mgr.api.queue_exists = Mock(return_value=False)
+ self.assertRaises(exc.NotFound, mgr.get, id_)
+
+ def test_queue_mgr_create(self):
+ clt = self.client
+ mgr = clt._manager
+ name = utils.random_unicode()
+ exp_uri = "/%s/%s" % (mgr.uri_base, name)
+ resp = fakes.FakeResponse()
+ resp.status = 201
+ mgr.api.method_put = Mock(return_value=(resp, None))
+ q = mgr.create(name)
+ self.assertTrue(isinstance(q, Queue))
+ self.assertEqual(q.name, name)
+
+ def test_queue_mgr_create_invalid(self):
+ clt = self.client
+ mgr = clt._manager
+ name = utils.random_unicode()
+ exp_uri = "/%s/%s" % (mgr.uri_base, name)
+ resp = fakes.FakeResponse()
+ resp.status = 400
+ mgr.api.method_put = Mock(return_value=(resp, None))
+ self.assertRaises(exc.InvalidQueueName, mgr.create, name)
+
+ def test_queue_mgr_get_stats(self):
+ clt = self.client
+ mgr = clt._manager
+ q = utils.random_unicode()
+ exp_uri = "/%s/%s/stats" % (mgr.uri_base, q)
+ msgs = utils.random_unicode()
+ rbody = {"messages": msgs}
+ mgr.api.method_get = Mock(return_value=(None, rbody))
+ ret = mgr.get_stats(q)
+ self.assertEqual(ret, msgs)
+ mgr.api.method_get.assert_called_once_with(exp_uri)
+
+ def test_queue_mgr_get_metadata(self):
+ clt = self.client
+ mgr = clt._manager
+ q = utils.random_unicode()
+ exp_uri = "/%s/%s/metadata" % (mgr.uri_base, q)
+ rbody = utils.random_unicode()
+ mgr.api.method_get = Mock(return_value=(None, rbody))
+ ret = mgr.get_metadata(q)
+ self.assertEqual(ret, rbody)
+ mgr.api.method_get.assert_called_once_with(exp_uri)
+
+ def test_queue_mgr_set_metadata_clear(self):
+ clt = self.client
+ mgr = clt._manager
+ q = utils.random_unicode()
+ exp_uri = "/%s/%s/metadata" % (mgr.uri_base, q)
+ val = utils.random_unicode()
+ metadata = {"new": val}
+ mgr.api.method_put = Mock(return_value=(None, None))
+ ret = mgr.set_metadata(q, metadata, clear=True)
+ mgr.api.method_put.assert_called_once_with(exp_uri, body=metadata)
+
+ def test_queue_mgr_set_metadata_no_clear(self):
+ clt = self.client
+ mgr = clt._manager
+ q = utils.random_unicode()
+ exp_uri = "/%s/%s/metadata" % (mgr.uri_base, q)
+ val = utils.random_unicode()
+ metadata = {"new": val}
+ old_val = utils.random_unicode()
+        old_metadata = {"old": old_val}
+ exp_body = old_metadata
+ exp_body.update(metadata)
+ mgr.api.method_put = Mock(return_value=(None, None))
+ mgr.get_metadata = Mock(return_value=old_metadata)
+ ret = mgr.set_metadata(q, metadata, clear=False)
+ mgr.api.method_put.assert_called_once_with(exp_uri, body=exp_body)
+
+ def test_clt_add_custom_headers(self):
+ clt = self.client
+ dct = {}
+ client_id = utils.random_unicode()
+ sav = os.environ.get
+ os.environ.get = Mock(return_value=client_id)
+ clt._add_custom_headers(dct)
+ self.assertEqual(dct, {"Client-ID": client_id})
+ os.environ.get = sav
+
+ def test_clt_add_custom_headers_fail(self):
+ clt = self.client
+ dct = {}
+ sav = os.environ.get
+ os.environ.get = Mock(return_value=None)
+ self.assertRaises(exc.QueueClientIDNotDefined, clt._add_custom_headers,
+ dct)
+ os.environ.get = sav
+
+ def test_clt_get_home_document(self):
+ clt = self.client
+ parts = [_safe_id() for ii in range(4)]
+ clt.management_url = "/".join(parts)
+ exp_uri = "/".join(parts[:-1])
+ clt.method_get = Mock()
+ clt.get_home_document()
+ clt.method_get.assert_called_once_with(exp_uri)
+
+ def test_clt_queue_exists(self):
+ clt = self.client
+ clt._manager.head = Mock()
+ name = utils.random_unicode()
+ ret = clt.queue_exists(name)
+ self.assertTrue(ret)
+ clt._manager.head.assert_called_once_with(name)
+
+ def test_clt_queue_not_exists(self):
+ clt = self.client
+ clt._manager.head = Mock(side_effect=exc.NotFound(""))
+ name = utils.random_unicode()
+ ret = clt.queue_exists(name)
+ self.assertFalse(ret)
+ clt._manager.head.assert_called_once_with(name)
+
+ def test_clt_create(self):
+ clt = self.client
+ clt.queue_exists = Mock(return_value=False)
+ clt._manager.create = Mock()
+ name = utils.random_unicode()
+ clt.create(name)
+ clt._manager.create.assert_called_once_with(name)
+
+ def test_clt_create_dupe(self):
+ clt = self.client
+ clt.queue_exists = Mock(return_value=True)
+ name = utils.random_unicode()
+ self.assertRaises(exc.DuplicateQueue, clt.create, name)
+
+ def test_clt_get_stats(self):
+ clt = self.client
+ clt._manager.get_stats = Mock()
+ q = utils.random_unicode()
+ clt.get_stats(q)
+ clt._manager.get_stats.assert_called_once_with(q)
+
+ def test_clt_get_metadata(self):
+ clt = self.client
+ clt._manager.get_metadata = Mock()
+ q = utils.random_unicode()
+ clt.get_metadata(q)
+ clt._manager.get_metadata.assert_called_once_with(q)
+
+ def test_clt_set_metadata(self):
+ clt = self.client
+ clt._manager.set_metadata = Mock()
+ q = utils.random_unicode()
+ metadata = utils.random_unicode()
+ clear = random.choice((True, False))
+ clt.set_metadata(q, metadata, clear=clear)
+ clt._manager.set_metadata.assert_called_once_with(q, metadata,
+ clear=clear)
+
+ def test_clt_get_message(self):
+ clt = self.client
+ q = self.queue
+ msg_id = utils.random_unicode()
+ q.get_message = Mock()
+ clt.get_message(q, msg_id)
+ q.get_message.assert_called_once_with(msg_id)
+
+ def test_clt_delete_message(self):
+ clt = self.client
+ q = self.queue
+ msg_id = utils.random_unicode()
+ q.delete_message = Mock()
+ clt.delete_message(q, msg_id)
+ q.delete_message.assert_called_once_with(msg_id)
+
+ def test_clt_list_messages(self):
+ clt = self.client
+ q = self.queue
+ include_claimed = utils.random_unicode()
+ echo = utils.random_unicode()
+ marker = utils.random_unicode()
+ limit = utils.random_unicode()
+ q.list = Mock()
+ clt.list_messages(q, include_claimed=include_claimed, echo=echo,
+ marker=marker, limit=limit)
+ q.list.assert_called_once_with(include_claimed=include_claimed,
+ echo=echo, marker=marker, limit=limit)
+
+ def test_clt_list_messages_by_ids(self):
+ clt = self.client
+ q = self.queue
+ ids = utils.random_unicode()
+ q.list_by_ids = Mock()
+ clt.list_messages_by_ids(q, ids)
+ q.list_by_ids.assert_called_once_with(ids)
+
+ def test_clt_delete_messages_by_ids(self):
+ clt = self.client
+ q = self.queue
+ ids = utils.random_unicode()
+ q.delete_by_ids = Mock()
+ clt.delete_messages_by_ids(q, ids)
+ q.delete_by_ids.assert_called_once_with(ids)
+
+ def test_clt_list_messages_by_claim(self):
+ clt = self.client
+ q = self.queue
+ claim = utils.random_unicode()
+ q.list_by_claim = Mock()
+ clt.list_messages_by_claim(q, claim)
+ q.list_by_claim.assert_called_once_with(claim)
+
+ def test_clt_post_message(self):
+ clt = self.client
+ q = self.queue
+ body = utils.random_unicode()
+ ttl = utils.random_unicode()
+ q.post_message = Mock()
+ clt.post_message(q, body, ttl=ttl)
+ q.post_message.assert_called_once_with(body, ttl=ttl)
+
+ def test_clt_claim_messages(self):
+ clt = self.client
+ q = self.queue
+ ttl = utils.random_unicode()
+ grace = utils.random_unicode()
+ count = utils.random_unicode()
+ q.claim_messages = Mock()
+ clt.claim_messages(q, ttl, grace, count=count)
+ q.claim_messages.assert_called_once_with(ttl, grace, count=count)
+
+ def test_clt_get_claim(self):
+ clt = self.client
+ q = self.queue
+ claim = utils.random_unicode()
+ q.get_claim = Mock()
+ clt.get_claim(q, claim)
+ q.get_claim.assert_called_once_with(claim)
+
+ def test_clt_update_claim(self):
+ clt = self.client
+ q = self.queue
+ claim = utils.random_unicode()
+ ttl = utils.random_unicode()
+ grace = utils.random_unicode()
+ q.update_claim = Mock()
+ clt.update_claim(q, claim, ttl=ttl, grace=grace)
+ q.update_claim.assert_called_once_with(claim, ttl=ttl, grace=grace)
+
+ def test_clt_release_claim(self):
+ clt = self.client
+ q = self.queue
+ claim = utils.random_unicode()
+ q.release_claim = Mock()
+ clt.release_claim(q, claim)
+ q.release_claim.assert_called_once_with(claim)
+
+
+if __name__ == "__main__":
+ unittest.main()
diff --git a/tests/unit/test_resource.py b/tests/unit/test_resource.py
index 64def9f0..c405af5c 100644
--- a/tests/unit/test_resource.py
+++ b/tests/unit/test_resource.py
@@ -22,7 +22,7 @@ def _create_dummy_resource(self):
mgr = fakes.FakeManager()
info = {"name": "test_resource",
"size": 42,
- "id": utils.random_name()}
+ "id": utils.random_unicode()}
return resource.BaseResource(mgr, info)
def setUp(self):
@@ -92,7 +92,7 @@ def test_get(self):
rsc.__getattr__ = Mock()
sav_mgr = rsc.manager.get
ent = fakes.FakeEntity
- new_att = utils.random_name(ascii_only=True)
+ new_att = utils.random_ascii()
ent._info = {new_att: None}
rsc.manager.get = Mock(return_value=ent)
rsc.get()
diff --git a/tests/unit/test_utils.py b/tests/unit/test_utils.py
index 9b67399d..fc2ca8a0 100644
--- a/tests/unit/test_utils.py
+++ b/tests/unit/test_utils.py
@@ -79,7 +79,7 @@ def test_get_checksum_from_unicode_alt_encoding(self):
self.assertEqual(expected, received)
def test_get_checksum_from_binary(self):
-# test = utils.random_name()
+# test = utils.random_unicode()
# test = open("tests/unit/python-logo.png", "rb").read()
test = fakes.png_file
md = hashlib.md5()
@@ -101,9 +101,9 @@ def test_get_checksum_from_file(self):
received = utils.get_checksum(testfile)
self.assertEqual(expected, received)
- def test_random_name(self):
+ def test_random_unicode(self):
testlen = random.randint(50, 500)
- nm = utils.random_name(testlen)
+ nm = utils.random_unicode(testlen)
self.assertEqual(len(nm), testlen)
def test_folder_size_bad_folder(self):
@@ -159,6 +159,17 @@ def fake_method(self):
self.assertTrue(hasattr(obj, "fake_method"))
self.assertTrue(callable(obj.fake_method))
+ def test_case_insensitive_update(self):
+ k1 = utils.random_ascii()
+ k2 = utils.random_ascii()
+ k2up = k2.upper()
+ k3 = utils.random_ascii()
+ d1 = {k1: "fake", k2up: "fake"}
+ d2 = {k2: "NEW", k3: "NEW"}
+ expected = {k1: "fake", k2up: "NEW", k3: "NEW"}
+ utils.case_insensitive_update(d1, d2)
+ self.assertEqual(d1, expected)
+
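The new `test_case_insensitive_update` implies that `utils.case_insensitive_update(d1, d2)` merges `d2` into `d1`, matching keys case-insensitively and preserving the key casing already present in `d1`. A sketch consistent with the test's expectations (`case_insensitive_update_sketch` is a hypothetical stand-in, not the pyrax helper):

```python
# Sketch of a case-insensitive dict merge matching the test above:
# keys in d2 that match existing d1 keys (ignoring case) update the
# existing entry, keeping d1's casing; unmatched keys are added as-is.
def case_insensitive_update_sketch(d1, d2):
    # Map each lowercased d1 key back to its original casing.
    lowered = dict((k.lower(), k) for k in d1)
    for key, val in d2.items():
        d1[lowered.get(key.lower(), key)] = val

d1 = {"Content-Type": "fake"}
d2 = {"content-type": "NEW", "X-Extra": "NEW"}
case_insensitive_update_sketch(d1, d2)
print(d1)  # {'Content-Type': 'NEW', 'X-Extra': 'NEW'}
```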
def test_env(self):
args = ("foo", "bar")
ret = utils.env(*args)
@@ -238,13 +249,13 @@ def test_wait_for_build(self):
sav = utils.wait_until
utils.wait_until = Mock()
obj = fakes.FakeEntity()
- att = utils.random_name()
- desired = utils.random_name()
- callback = utils.random_name()
- interval = utils.random_name()
- attempts = utils.random_name()
- verbose = utils.random_name()
- verbose_atts = utils.random_name()
+ att = utils.random_unicode()
+ desired = utils.random_unicode()
+ callback = utils.random_unicode()
+ interval = utils.random_unicode()
+ attempts = utils.random_unicode()
+ verbose = utils.random_unicode()
+ verbose_atts = utils.random_unicode()
utils.wait_for_build(obj, att, desired, callback, interval, attempts,
verbose, verbose_atts)
utils.wait_until.assert_called_once_with(obj, att, desired,
@@ -305,7 +316,7 @@ def test_match_pattern(self):
self.assertFalse(utils.match_pattern("some.good", ignore_pat))
def test_get_id(self):
- target = utils.random_name()
+ target = utils.random_unicode()
class ObjWithID(object):
id = target
@@ -317,7 +328,7 @@ class ObjWithID(object):
self.assertEqual(utils.get_id(plain), plain)
def test_get_name(self):
- nm = utils.random_name()
+ nm = utils.random_unicode()
class ObjWithName(object):
name = nm
@@ -333,8 +344,8 @@ def test_import_class(self):
self.assertTrue(ret is fakes.FakeManager)
def test_update_exc(self):
- msg1 = utils.random_name()
- msg2 = utils.random_name()
+ msg1 = utils.random_unicode()
+ msg2 = utils.random_unicode()
err = exc.PyraxException(400)
err.message = msg1
sep = random.choice(("!", "@", "#", "$"))
diff --git a/tests/unit/testtimes.json b/tests/unit/testtimes.json
new file mode 100644
index 00000000..5720bbe8
--- /dev/null
+++ b/tests/unit/testtimes.json
@@ -0,0 +1 @@
+[[0.0001239776611328125, "test_add_method"], [0.0001361370086669922, "test_add_method_no_name"], [3.910064697265625e-05, "test_env"], [3.0994415283203125e-05, "test_folder_size_bad_folder"], [0.0023241043090820312, "test_folder_size_ignore_list"], [0.0023190975189208984, "test_folder_size_ignore_string"], [0.0017161369323730469, "test_folder_size_no_ignore"], [9.393692016601562e-05, "test_get_checksum_from_binary"], [0.0014240741729736328, "test_get_checksum_from_file"], [4.482269287109375e-05, "test_get_checksum_from_string"], [5.7220458984375e-05, "test_get_checksum_from_unicode"], [0.001325845718383789, "test_get_checksum_from_unicode_alt_encoding"], [0.0001850128173828125, "test_get_id"], [0.0001761913299560547, "test_get_name"], [1.4066696166992188e-05, "test_import_class"], [1.4066696166992188e-05, "test_isunauthenticated"], [3.0040740966796875e-05, "test_match_pattern"], [0.0018579959869384766, "test_random_unicode"], [0.005424976348876953, "test_runproc"], [0.00025391578674316406, "test_safe_issubclass_bad"], [1.9073486328125e-05, "test_safe_issubclass_good"], [0.00031685829162597656, "test_self_deleting_temp_directory"], [0.00023221969604492188, "test_self_deleting_temp_file"], [0.00017905235290527344, "test_slugify"], [0.00017189979553222656, "test_time_string_date"], [3.314018249511719e-05, "test_time_string_date_obj"], [0.0031440258026123047, "test_time_string_datetime"], [8.797645568847656e-05, "test_time_string_datetime_add_tz"], [7.915496826171875e-05, "test_time_string_datetime_hide_tz"], [5.888938903808594e-05, "test_time_string_datetime_show_tz"], [8.106231689453125e-06, "test_time_string_empty"], [6.29425048828125e-05, "test_time_string_invalid"], [2.384185791015625e-05, "test_unauthenticated"], [0.00037407875061035156, "test_update_exc"], [0.0017931461334228516, "test_wait_for_build"], [0.0006711483001708984, "test_wait_until"], [0.0017261505126953125, "test_wait_until_callback"], [0.0006098747253417969, "test_wait_until_fail"]]
\ No newline at end of file