diff --git a/LICENSES/vendor/github.com/docker/distribution/LICENSE b/LICENSES/vendor/github.com/docker/distribution/LICENSE
new file mode 100644
index 0000000000000..ed0cac55e15b3
--- /dev/null
+++ b/LICENSES/vendor/github.com/docker/distribution/LICENSE
@@ -0,0 +1,206 @@
+= vendor/github.com/docker/distribution licensed under: =
+
+Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "{}"
+      replaced with your own identifying information. (Don't include
+      the brackets!)  The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright {yyyy} {name of copyright owner}
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+
+= vendor/github.com/docker/distribution/LICENSE d2794c0df5b907fdace235a619d80314
diff --git a/LICENSES/vendor/github.com/docker/docker/LICENSE b/LICENSES/vendor/github.com/docker/docker/LICENSE
new file mode 100644
index 0000000000000..48c33574e4cf5
--- /dev/null
+++ b/LICENSES/vendor/github.com/docker/docker/LICENSE
@@ -0,0 +1,195 @@
+= vendor/github.com/docker/docker licensed under: =
+
+
+                                 Apache License
+                           Version 2.0, January 2004
+                        https://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   Copyright 2013-2018 Docker, Inc.
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       https://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+= vendor/github.com/docker/docker/LICENSE 4859e97a9c7780e77972d989f0823f28
diff --git a/LICENSES/vendor/github.com/docker/go-connections/LICENSE b/LICENSES/vendor/github.com/docker/go-connections/LICENSE
new file mode 100644
index 0000000000000..08061a0926b9e
--- /dev/null
+++ b/LICENSES/vendor/github.com/docker/go-connections/LICENSE
@@ -0,0 +1,195 @@
+= vendor/github.com/docker/go-connections licensed under: =
+
+
+                                 Apache License
+                           Version 2.0, January 2004
+                        https://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   Copyright 2015 Docker, Inc.
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       https://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+= vendor/github.com/docker/go-connections/LICENSE 04424bc6f5a5be60691b9824d65c2ad8
diff --git a/LICENSES/vendor/github.com/opencontainers/image-spec/LICENSE b/LICENSES/vendor/github.com/opencontainers/image-spec/LICENSE
new file mode 100644
index 0000000000000..b4ccc319f0249
--- /dev/null
+++ b/LICENSES/vendor/github.com/opencontainers/image-spec/LICENSE
@@ -0,0 +1,195 @@
+= vendor/github.com/opencontainers/image-spec licensed under: =
+
+
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   Copyright 2016 The Linux Foundation.
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+= vendor/github.com/opencontainers/image-spec/LICENSE 27ef03aa2da6e424307f102e8b42621d
diff --git a/go.mod b/go.mod
index 361fbe94f2c5d..0c76f66394db4 100644
--- a/go.mod
+++ b/go.mod
@@ -159,6 +159,9 @@ require (
 	github.com/coreos/go-semver v0.3.1 // indirect
 	github.com/davecgh/go-spew v1.1.1 // indirect
 	github.com/daviddengcn/go-colortext v1.0.0 // indirect
+	github.com/docker/distribution v2.8.2+incompatible // indirect
+	github.com/docker/docker v20.10.24+incompatible // indirect
+	github.com/docker/go-connections v0.4.0 // indirect
 	github.com/dustin/go-humanize v1.0.1 // indirect
 	github.com/euank/go-kmsg-parser v2.0.0+incompatible // indirect
 	github.com/exponent-io/jsonpath v0.0.0-20151013193312-d6023ce2651d // indirect
@@ -206,6 +209,7 @@ require (
 	github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00 // indirect
 	github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f // indirect
 	github.com/opencontainers/go-digest v1.0.0 // indirect
+	github.com/opencontainers/image-spec v1.0.2 // indirect
 	github.com/opencontainers/runtime-spec v1.0.3-0.20220909204839-494a5a6aca78 // indirect
 	github.com/peterbourgon/diskv v2.0.1+incompatible // indirect
 	github.com/pquerna/cachecontrol v0.1.0 // indirect
diff --git a/go.sum b/go.sum
index 4e7df5a3f675b..a72ee6b8a1924 100644
--- a/go.sum
+++ b/go.sum
@@ -643,6 +643,7 @@ github.com/mohae/deepcopy v0.0.0-20170603005431-491d3605edfb h1:e+l77LJOEqXTIQih
 github.com/mohae/deepcopy v0.0.0-20170603005431-491d3605edfb/go.mod h1:TaXosZuwdSHYgviHp1DAtfrULt5eUgsSMsZf+YrPgl8=
 github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00 h1:n6/2gBQ3RWajuToeY6ZtZTIKv2v7ThUy5KKusIT0yc0=
 github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00/go.mod h1:Pm3mSP3c5uWn86xMLZ5Sa7JB9GsEZySvHYXCTK4E9q4=
+github.com/morikuni/aec v1.0.0 h1:nP9CBfwrvYnBRgY6qfDQkygYDmYwOilePFkwzv4dU8A=
 github.com/morikuni/aec v1.0.0/go.mod h1:BbKIizmSmc5MMPqRYbxO4ZU0S0+P200+tUnFx7PXmsc=
 github.com/mrunalp/fileutils v0.5.1 h1:F+S7ZlNKnrwHfSwdlgNSkKo67ReVf8o9fel6C3dkm/Q=
 github.com/mrunalp/fileutils v0.5.1/go.mod h1:M1WthSahJixYnrXQl/DFQuteStB1weuxD2QJNHXfbSQ=
@@ -1326,6 +1327,7 @@ gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C
 gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
 gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
 gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
+gotest.tools/v3 v3.0.3 h1:4AuOwCGf4lLR9u3YOe2awrHygurzhO/HeQ6laiA6Sx0=
 gotest.tools/v3 v3.0.3/go.mod h1:Z7Lb0S5l+klDB31fvDQX8ss/FlKDxtlFlw3Oa8Ymbl8=
 honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
 honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
diff --git a/vendor/github.com/docker/distribution/LICENSE b/vendor/github.com/docker/distribution/LICENSE
new file mode 100644
index 0000000000000..e06d2081865a7
--- /dev/null
+++ b/vendor/github.com/docker/distribution/LICENSE
@@ -0,0 +1,202 @@
+Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "{}"
+      replaced with your own identifying information. (Don't include
+      the brackets!)  The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright {yyyy} {name of copyright owner}
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
diff --git a/vendor/github.com/docker/distribution/digestset/set.go b/vendor/github.com/docker/distribution/digestset/set.go
new file mode 100644
index 0000000000000..71327dca72091
--- /dev/null
+++ b/vendor/github.com/docker/distribution/digestset/set.go
@@ -0,0 +1,247 @@
+package digestset
+
+import (
+	"errors"
+	"sort"
+	"strings"
+	"sync"
+
+	digest "github.com/opencontainers/go-digest"
+)
+
+var (
+	// ErrDigestNotFound is used when a matching digest
+	// could not be found in a set.
+	ErrDigestNotFound = errors.New("digest not found")
+
+	// ErrDigestAmbiguous is used when multiple digests
+	// are found in a set. None of the matching digests
+	// should be considered valid matches.
+	ErrDigestAmbiguous = errors.New("ambiguous digest string")
+)
+
+// Set is used to hold a unique set of digests which
+// may be easily referenced by a string
+// representation of the digest as well as short representation.
+// The uniqueness of the short representation is based on other
+// digests in the set. If digests are omitted from this set,
+// collisions in a larger set may not be detected, therefore it
+// is important to always do short representation lookups on
+// the complete set of digests. To mitigate collisions, an
+// appropriately long short code should be used.
+type Set struct {
+	mutex   sync.RWMutex
+	entries digestEntries
+}
+
+// NewSet creates an empty set of digests
+// which may have digests added.
+func NewSet() *Set {
+	return &Set{
+		entries: digestEntries{},
+	}
+}
+
+// checkShortMatch checks whether two digests match as either whole
+// values or short values. This function does not test equality,
+// rather whether the second value could match against the first
+// value.
+func checkShortMatch(alg digest.Algorithm, hex, shortAlg, shortHex string) bool {
+	if len(hex) == len(shortHex) {
+		if hex != shortHex {
+			return false
+		}
+		if len(shortAlg) > 0 && string(alg) != shortAlg {
+			return false
+		}
+	} else if !strings.HasPrefix(hex, shortHex) {
+		return false
+	} else if len(shortAlg) > 0 && string(alg) != shortAlg {
+		return false
+	}
+	return true
+}
+
+// Lookup looks for a digest matching the given string representation.
+// If no digests could be found ErrDigestNotFound will be returned
+// with an empty digest value. If multiple matches are found
+// ErrDigestAmbiguous will be returned with an empty digest value.
+func (dst *Set) Lookup(d string) (digest.Digest, error) {
+	dst.mutex.RLock()
+	defer dst.mutex.RUnlock()
+	if len(dst.entries) == 0 {
+		return "", ErrDigestNotFound
+	}
+	var (
+		searchFunc func(int) bool
+		alg        digest.Algorithm
+		hex        string
+	)
+	dgst, err := digest.Parse(d)
+	if err == digest.ErrDigestInvalidFormat {
+		hex = d
+		searchFunc = func(i int) bool {
+			return dst.entries[i].val >= d
+		}
+	} else {
+		hex = dgst.Hex()
+		alg = dgst.Algorithm()
+		searchFunc = func(i int) bool {
+			if dst.entries[i].val == hex {
+				return dst.entries[i].alg >= alg
+			}
+			return dst.entries[i].val >= hex
+		}
+	}
+	idx := sort.Search(len(dst.entries), searchFunc)
+	if idx == len(dst.entries) || !checkShortMatch(dst.entries[idx].alg, dst.entries[idx].val, string(alg), hex) {
+		return "", ErrDigestNotFound
+	}
+	if dst.entries[idx].alg == alg && dst.entries[idx].val == hex {
+		return dst.entries[idx].digest, nil
+	}
+	if idx+1 < len(dst.entries) && checkShortMatch(dst.entries[idx+1].alg, dst.entries[idx+1].val, string(alg), hex) {
+		return "", ErrDigestAmbiguous
+	}
+
+	return dst.entries[idx].digest, nil
+}
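The prefix resolution inside Lookup can be sketched standalone with the same sort.Search pattern over a sorted slice of hex strings. The helper below (`lookupPrefix` is a hypothetical name, not part of this package) shows the core idea: binary-search to the first candidate, then check the next neighbour to detect ambiguity.

```go
package main

import (
	"errors"
	"fmt"
	"sort"
	"strings"
)

var (
	errNotFound  = errors.New("not found")
	errAmbiguous = errors.New("ambiguous prefix")
)

// lookupPrefix finds the unique element of the sorted slice vals that
// starts with prefix, mirroring how Set.Lookup resolves short digests.
func lookupPrefix(vals []string, prefix string) (string, error) {
	idx := sort.Search(len(vals), func(i int) bool { return vals[i] >= prefix })
	if idx == len(vals) || !strings.HasPrefix(vals[idx], prefix) {
		return "", errNotFound
	}
	// A second neighbour sharing the prefix makes the lookup ambiguous.
	if idx+1 < len(vals) && strings.HasPrefix(vals[idx+1], prefix) {
		return "", errAmbiguous
	}
	return vals[idx], nil
}

func main() {
	vals := []string{"7cc4b5ae", "7cd01234", "beefcafe"}
	v, err := lookupPrefix(vals, "7cc")
	fmt.Println(v, err) // 7cc4b5ae <nil>
	_, err = lookupPrefix(vals, "7c")
	fmt.Println(err) // ambiguous prefix
}
```

Because the slice is sorted, all values sharing a prefix are adjacent, so checking one neighbour after the binary search is sufficient.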
+
+// Add adds the given digest to the set. An error will be returned
+// if the given digest is invalid. If the digest already exists in the
+// set, this operation will be a no-op.
+func (dst *Set) Add(d digest.Digest) error {
+	if err := d.Validate(); err != nil {
+		return err
+	}
+	dst.mutex.Lock()
+	defer dst.mutex.Unlock()
+	entry := &digestEntry{alg: d.Algorithm(), val: d.Hex(), digest: d}
+	searchFunc := func(i int) bool {
+		if dst.entries[i].val == entry.val {
+			return dst.entries[i].alg >= entry.alg
+		}
+		return dst.entries[i].val >= entry.val
+	}
+	idx := sort.Search(len(dst.entries), searchFunc)
+	if idx == len(dst.entries) {
+		dst.entries = append(dst.entries, entry)
+		return nil
+	} else if dst.entries[idx].digest == d {
+		return nil
+	}
+
+	entries := append(dst.entries, nil)
+	copy(entries[idx+1:], entries[idx:len(entries)-1])
+	entries[idx] = entry
+	dst.entries = entries
+	return nil
+}
+
+// Remove removes the given digest from the set. An error will be
+// returned if the given digest is invalid. If the digest does
+// not exist in the set, this operation will be a no-op.
+func (dst *Set) Remove(d digest.Digest) error {
+	if err := d.Validate(); err != nil {
+		return err
+	}
+	dst.mutex.Lock()
+	defer dst.mutex.Unlock()
+	entry := &digestEntry{alg: d.Algorithm(), val: d.Hex(), digest: d}
+	searchFunc := func(i int) bool {
+		if dst.entries[i].val == entry.val {
+			return dst.entries[i].alg >= entry.alg
+		}
+		return dst.entries[i].val >= entry.val
+	}
+	idx := sort.Search(len(dst.entries), searchFunc)
+	// Not found if idx is past the end or the entry at idx is not this digest
+	if idx == len(dst.entries) || dst.entries[idx].digest != d {
+		return nil
+	}
+
+	entries := dst.entries
+	copy(entries[idx:], entries[idx+1:])
+	entries = entries[:len(entries)-1]
+	dst.entries = entries
+
+	return nil
+}
+
+// All returns all the digests in the set
+func (dst *Set) All() []digest.Digest {
+	dst.mutex.RLock()
+	defer dst.mutex.RUnlock()
+	retValues := make([]digest.Digest, len(dst.entries))
+	for i := range dst.entries {
+		retValues[i] = dst.entries[i].digest
+	}
+
+	return retValues
+}
+
+// ShortCodeTable returns a map of Digest to unique short codes. The
+// length parameter is the minimum prefix length; the maximum length
+// may be the entire digest value if uniqueness cannot be achieved
+// without the full value. This function makes short codes as short
+// as possible while keeping them unique.
+func ShortCodeTable(dst *Set, length int) map[digest.Digest]string {
+	dst.mutex.RLock()
+	defer dst.mutex.RUnlock()
+	m := make(map[digest.Digest]string, len(dst.entries))
+	l := length
+	resetIdx := 0
+	for i := 0; i < len(dst.entries); i++ {
+		var short string
+		extended := true
+		for extended {
+			extended = false
+			if len(dst.entries[i].val) <= l {
+				short = dst.entries[i].digest.String()
+			} else {
+				short = dst.entries[i].val[:l]
+				for j := i + 1; j < len(dst.entries); j++ {
+					if checkShortMatch(dst.entries[j].alg, dst.entries[j].val, "", short) {
+						if j > resetIdx {
+							resetIdx = j
+						}
+						extended = true
+					} else {
+						break
+					}
+				}
+				if extended {
+					l++
+				}
+			}
+		}
+		m[dst.entries[i].digest] = short
+		if i >= resetIdx {
+			l = length
+		}
+	}
+	return m
+}
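The idea behind ShortCodeTable can be illustrated standalone: in a sorted slice, a value's minimal unique prefix only has to be one byte longer than its longest common prefix with either sorted neighbour. The sketch below (hypothetical `shortCodes` helper, simplified to ignore algorithm prefixes) computes that directly.

```go
package main

import "fmt"

// shortCodes returns, for each value in the sorted slice vals, the
// shortest prefix of at least minLen bytes that no other value shares.
// Sorting guarantees the closest matches are the two neighbours.
func shortCodes(vals []string, minLen int) map[string]string {
	common := func(a, b string) int {
		n := 0
		for n < len(a) && n < len(b) && a[n] == b[n] {
			n++
		}
		return n
	}
	m := make(map[string]string, len(vals))
	for i, v := range vals {
		l := minLen
		if i > 0 && common(v, vals[i-1]) >= l {
			l = common(v, vals[i-1]) + 1
		}
		if i+1 < len(vals) && common(v, vals[i+1]) >= l {
			l = common(v, vals[i+1]) + 1
		}
		if l > len(v) {
			l = len(v)
		}
		m[v] = v[:l]
	}
	return m
}

func main() {
	fmt.Println(shortCodes([]string{"aabb", "aacc", "bbdd"}, 2))
	// map[aabb:aab aacc:aac bbdd:bb]
}
```

The vendored implementation reaches the same result with a forward scan and a reset index instead of pairwise comparisons, which lets it extend a shared prefix length across a whole run of colliding entries.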
+
+type digestEntry struct {
+	alg    digest.Algorithm
+	val    string
+	digest digest.Digest
+}
+
+type digestEntries []*digestEntry
+
+func (d digestEntries) Len() int {
+	return len(d)
+}
+
+func (d digestEntries) Less(i, j int) bool {
+	if d[i].val != d[j].val {
+		return d[i].val < d[j].val
+	}
+	return d[i].alg < d[j].alg
+}
+
+func (d digestEntries) Swap(i, j int) {
+	d[i], d[j] = d[j], d[i]
+}
diff --git a/vendor/github.com/docker/distribution/reference/helpers.go b/vendor/github.com/docker/distribution/reference/helpers.go
new file mode 100644
index 0000000000000..978df7eabbf19
--- /dev/null
+++ b/vendor/github.com/docker/distribution/reference/helpers.go
@@ -0,0 +1,42 @@
+package reference
+
+import "path"
+
+// IsNameOnly returns true if reference only contains a repo name.
+func IsNameOnly(ref Named) bool {
+	if _, ok := ref.(NamedTagged); ok {
+		return false
+	}
+	if _, ok := ref.(Canonical); ok {
+		return false
+	}
+	return true
+}
+
+// FamiliarName returns the familiar name string
+// for the given Named reference, familiarizing it if needed.
+func FamiliarName(ref Named) string {
+	if nn, ok := ref.(normalizedNamed); ok {
+		return nn.Familiar().Name()
+	}
+	return ref.Name()
+}
+
+// FamiliarString returns the familiar string representation
+// for the given reference, familiarizing if needed.
+func FamiliarString(ref Reference) string {
+	if nn, ok := ref.(normalizedNamed); ok {
+		return nn.Familiar().String()
+	}
+	return ref.String()
+}
+
+// FamiliarMatch reports whether ref matches the specified pattern.
+// See https://godoc.org/path#Match for supported patterns.
+func FamiliarMatch(pattern string, ref Reference) (bool, error) {
+	matched, err := path.Match(pattern, FamiliarString(ref))
+	if namedRef, isNamed := ref.(Named); isNamed && !matched {
+		matched, _ = path.Match(pattern, FamiliarName(namedRef))
+	}
+	return matched, err
+}
diff --git a/vendor/github.com/docker/distribution/reference/normalize.go b/vendor/github.com/docker/distribution/reference/normalize.go
new file mode 100644
index 0000000000000..b3dfb7a6d7e12
--- /dev/null
+++ b/vendor/github.com/docker/distribution/reference/normalize.go
@@ -0,0 +1,199 @@
+package reference
+
+import (
+	"errors"
+	"fmt"
+	"strings"
+
+	"github.com/docker/distribution/digestset"
+	"github.com/opencontainers/go-digest"
+)
+
+var (
+	legacyDefaultDomain = "index.docker.io"
+	defaultDomain       = "docker.io"
+	officialRepoName    = "library"
+	defaultTag          = "latest"
+)
+
+// normalizedNamed represents a name which has been
+// normalized and has a familiar form. A familiar name
+// is what is used in Docker UI. An example normalized
+// name is "docker.io/library/ubuntu" and corresponding
+// familiar name of "ubuntu".
+type normalizedNamed interface {
+	Named
+	Familiar() Named
+}
+
+// ParseNormalizedNamed parses a string into a named reference
+// transforming a familiar name from Docker UI to a fully
+// qualified reference. If the value may be an identifier,
+// use ParseAnyReference instead.
+func ParseNormalizedNamed(s string) (Named, error) {
+	if ok := anchoredIdentifierRegexp.MatchString(s); ok {
+		return nil, fmt.Errorf("invalid repository name (%s), cannot specify 64-byte hexadecimal strings", s)
+	}
+	domain, remainder := splitDockerDomain(s)
+	var remoteName string
+	if tagSep := strings.IndexRune(remainder, ':'); tagSep > -1 {
+		remoteName = remainder[:tagSep]
+	} else {
+		remoteName = remainder
+	}
+	if strings.ToLower(remoteName) != remoteName {
+		return nil, errors.New("invalid reference format: repository name must be lowercase")
+	}
+
+	ref, err := Parse(domain + "/" + remainder)
+	if err != nil {
+		return nil, err
+	}
+	named, isNamed := ref.(Named)
+	if !isNamed {
+		return nil, fmt.Errorf("reference %s has no name", ref.String())
+	}
+	return named, nil
+}
+
+// ParseDockerRef normalizes the image reference following the Docker convention,
+// and is provided mainly for backward compatibility.
+// The returned reference is either tagged or digested. If the reference contains
+// both a tag and a digest, the digested form is returned, e.g. docker.io/library/busybox:latest@
+// sha256:7cc4b5aefd1d0cadf8d97d4350462ba51c694ebca145b08d7d41b41acc8db5aa is returned as
+// docker.io/library/busybox@sha256:7cc4b5aefd1d0cadf8d97d4350462ba51c694ebca145b08d7d41b41acc8db5aa.
+func ParseDockerRef(ref string) (Named, error) {
+	named, err := ParseNormalizedNamed(ref)
+	if err != nil {
+		return nil, err
+	}
+	if _, ok := named.(NamedTagged); ok {
+		if canonical, ok := named.(Canonical); ok {
+			// The reference is both tagged and digested, only
+			// return digested.
+			newNamed, err := WithName(canonical.Name())
+			if err != nil {
+				return nil, err
+			}
+			newCanonical, err := WithDigest(newNamed, canonical.Digest())
+			if err != nil {
+				return nil, err
+			}
+			return newCanonical, nil
+		}
+	}
+	return TagNameOnly(named), nil
+}
+
+// splitDockerDomain splits a repository name into domain and remote-name
+// strings. If no valid domain is found, the default domain is used. The
+// repository name must already be validated by the caller.
+func splitDockerDomain(name string) (domain, remainder string) {
+	i := strings.IndexRune(name, '/')
+	if i == -1 || (!strings.ContainsAny(name[:i], ".:") && name[:i] != "localhost") {
+		domain, remainder = defaultDomain, name
+	} else {
+		domain, remainder = name[:i], name[i+1:]
+	}
+	if domain == legacyDefaultDomain {
+		domain = defaultDomain
+	}
+	if domain == defaultDomain && !strings.ContainsRune(remainder, '/') {
+		remainder = officialRepoName + "/" + remainder
+	}
+	return
+}
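The normalization rules above can be exercised with a small stdlib sketch that mirrors splitDockerDomain (same logic, hypothetical standalone `splitDomain` name): a first path segment only counts as a registry domain if it contains `.` or `:` or equals `localhost`.

```go
package main

import (
	"fmt"
	"strings"
)

// splitDomain mirrors splitDockerDomain: fill in the default domain
// when the first segment does not look like a registry, map the legacy
// default domain, and add the "library/" prefix for official images.
func splitDomain(name string) (string, string) {
	domain, remainder := "docker.io", name
	if i := strings.IndexRune(name, '/'); i != -1 &&
		(strings.ContainsAny(name[:i], ".:") || name[:i] == "localhost") {
		domain, remainder = name[:i], name[i+1:]
	}
	if domain == "index.docker.io" {
		domain = "docker.io"
	}
	if domain == "docker.io" && !strings.ContainsRune(remainder, '/') {
		remainder = "library/" + remainder
	}
	return domain, remainder
}

func main() {
	fmt.Println(splitDomain("ubuntu"))                        // docker.io library/ubuntu
	fmt.Println(splitDomain("registry.example.com/team/app")) // registry.example.com team/app
	fmt.Println(splitDomain("localhost/app"))                 // localhost app
}
```

Note the subtlety this encodes: in `myuser/app` the first segment is a namespace on the default registry, not a domain, precisely because it has no `.` or `:`.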
+
+// familiarizeName returns a shortened version of the name familiar
+// to the Docker UI. Familiar names have the default domain
+// "docker.io" and the "library/" repository prefix removed.
+// For example, "docker.io/library/redis" will have the familiar
+// name "redis" and "docker.io/dmcgowan/myapp" will be "dmcgowan/myapp".
+// Returns a familiarized named only reference.
+func familiarizeName(named namedRepository) repository {
+	repo := repository{
+		domain: named.Domain(),
+		path:   named.Path(),
+	}
+
+	if repo.domain == defaultDomain {
+		repo.domain = ""
+		// Handle official repositories which have the pattern "library/<official repo name>"
+		if split := strings.Split(repo.path, "/"); len(split) == 2 && split[0] == officialRepoName {
+			repo.path = split[1]
+		}
+	}
+	return repo
+}
+
+func (r reference) Familiar() Named {
+	return reference{
+		namedRepository: familiarizeName(r.namedRepository),
+		tag:             r.tag,
+		digest:          r.digest,
+	}
+}
+
+func (r repository) Familiar() Named {
+	return familiarizeName(r)
+}
+
+func (t taggedReference) Familiar() Named {
+	return taggedReference{
+		namedRepository: familiarizeName(t.namedRepository),
+		tag:             t.tag,
+	}
+}
+
+func (c canonicalReference) Familiar() Named {
+	return canonicalReference{
+		namedRepository: familiarizeName(c.namedRepository),
+		digest:          c.digest,
+	}
+}
+
+// TagNameOnly adds the default tag "latest" to a reference if it only has
+// a repo name.
+func TagNameOnly(ref Named) Named {
+	if IsNameOnly(ref) {
+		namedTagged, err := WithTag(ref, defaultTag)
+		if err != nil {
+			// Default tag must be valid, to create a NamedTagged
+			// type with non-validated input the WithTag function
+			// should be used instead
+			panic(err)
+		}
+		return namedTagged
+	}
+	return ref
+}
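TagNameOnly's behavior can be sketched at the string level (hypothetical `tagIfNameOnly` helper, not part of this package): append the default tag only when the reference carries neither a tag nor a digest, taking care that a registry port is not mistaken for a tag.

```go
package main

import (
	"fmt"
	"strings"
)

// tagIfNameOnly mirrors TagNameOnly at the string level: append the
// default tag only when the reference has neither a tag (a ':' after
// the last '/') nor a digest ('@').
func tagIfNameOnly(ref string) string {
	if strings.ContainsRune(ref, '@') {
		return ref // digested references are left alone
	}
	lastSlash := strings.LastIndex(ref, "/")
	if strings.ContainsRune(ref[lastSlash+1:], ':') {
		return ref // already tagged
	}
	return ref + ":latest"
}

func main() {
	fmt.Println(tagIfNameOnly("ubuntu"))             // ubuntu:latest
	fmt.Println(tagIfNameOnly("localhost:5000/app")) // localhost:5000/app:latest
	fmt.Println(tagIfNameOnly("ubuntu:18.04"))       // ubuntu:18.04
}
```

The real function works on typed references instead of strings, which is why it can simply type-assert for NamedTagged and Canonical rather than scanning for separators.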
+
+// ParseAnyReference parses a reference string as a possible identifier,
+// full digest, or familiar name.
+func ParseAnyReference(ref string) (Reference, error) {
+	if ok := anchoredIdentifierRegexp.MatchString(ref); ok {
+		return digestReference("sha256:" + ref), nil
+	}
+	if dgst, err := digest.Parse(ref); err == nil {
+		return digestReference(dgst), nil
+	}
+
+	return ParseNormalizedNamed(ref)
+}
+
+// ParseAnyReferenceWithSet parses a reference string as a possible short
+// identifier to be matched in a digest set, a full digest, or familiar name.
+func ParseAnyReferenceWithSet(ref string, ds *digestset.Set) (Reference, error) {
+	if ok := anchoredShortIdentifierRegexp.MatchString(ref); ok {
+		dgst, err := ds.Lookup(ref)
+		if err == nil {
+			return digestReference(dgst), nil
+		}
+	} else {
+		if dgst, err := digest.Parse(ref); err == nil {
+			return digestReference(dgst), nil
+		}
+	}
+
+	return ParseNormalizedNamed(ref)
+}
diff --git a/vendor/github.com/docker/distribution/reference/reference.go b/vendor/github.com/docker/distribution/reference/reference.go
new file mode 100644
index 0000000000000..b7cd00b0d68e2
--- /dev/null
+++ b/vendor/github.com/docker/distribution/reference/reference.go
@@ -0,0 +1,433 @@
+// Package reference provides a general type to represent any way of referencing images within the registry.
+// Its main purpose is to abstract tags and digests (content-addressable hash).
+//
+// Grammar
+//
+//	reference                       := name [ ":" tag ] [ "@" digest ]
+//	name                            := [domain '/'] path-component ['/' path-component]*
+//	domain                          := domain-component ['.' domain-component]* [':' port-number]
+//	domain-component                := /([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9-]*[a-zA-Z0-9])/
+//	port-number                     := /[0-9]+/
+//	path-component                  := alpha-numeric [separator alpha-numeric]*
+//	alpha-numeric                   := /[a-z0-9]+/
+//	separator                       := /[_.]|__|[-]*/
+//
+//	tag                             := /[\w][\w.-]{0,127}/
+//
+//	digest                          := digest-algorithm ":" digest-hex
+//	digest-algorithm                := digest-algorithm-component [ digest-algorithm-separator digest-algorithm-component ]*
+//	digest-algorithm-separator      := /[+._-]/
+//	digest-algorithm-component      := /[A-Za-z][A-Za-z0-9]*/
+//	digest-hex                      := /[0-9a-fA-F]{32,}/ ; At least 128 bit digest value
+//
+//	identifier                      := /[a-f0-9]{64}/
+//	short-identifier                := /[a-f0-9]{6,64}/
+package reference
+
+import (
+	"errors"
+	"fmt"
+	"strings"
+
+	"github.com/opencontainers/go-digest"
+)
+
+const (
+	// NameTotalLengthMax is the maximum total number of characters in a repository name.
+	NameTotalLengthMax = 255
+)
+
+var (
+	// ErrReferenceInvalidFormat represents an error while trying to parse a string as a reference.
+	ErrReferenceInvalidFormat = errors.New("invalid reference format")
+
+	// ErrTagInvalidFormat represents an error while trying to parse a string as a tag.
+	ErrTagInvalidFormat = errors.New("invalid tag format")
+
+	// ErrDigestInvalidFormat represents an error while trying to parse a string as a digest.
+	ErrDigestInvalidFormat = errors.New("invalid digest format")
+
+	// ErrNameContainsUppercase is returned for invalid repository names that contain uppercase characters.
+	ErrNameContainsUppercase = errors.New("repository name must be lowercase")
+
+	// ErrNameEmpty is returned for empty, invalid repository names.
+	ErrNameEmpty = errors.New("repository name must have at least one component")
+
+	// ErrNameTooLong is returned when a repository name is longer than NameTotalLengthMax.
+	ErrNameTooLong = fmt.Errorf("repository name must not be more than %v characters", NameTotalLengthMax)
+
+	// ErrNameNotCanonical is returned when a name is not canonical.
+	ErrNameNotCanonical = errors.New("repository name must be canonical")
+)
+
+// Reference is an opaque object reference identifier that may include
+// modifiers such as a hostname, name, tag, and digest.
+type Reference interface {
+	// String returns the full reference
+	String() string
+}
+
+// Field provides a wrapper type for resolving correct reference types when
+// working with encoding.
+type Field struct {
+	reference Reference
+}
+
+// AsField wraps a reference in a Field for encoding.
+func AsField(reference Reference) Field {
+	return Field{reference}
+}
+
+// Reference unwraps the reference type from the field to
+// return the Reference object. This object should be
+// of the appropriate type to further check for different
+// reference types.
+func (f Field) Reference() Reference {
+	return f.reference
+}
+
+// MarshalText serializes the field to byte text which
+// is the string of the reference.
+func (f Field) MarshalText() (p []byte, err error) {
+	return []byte(f.reference.String()), nil
+}
+
+// UnmarshalText parses text bytes by invoking the
+// reference parser to ensure the appropriately
+// typed reference object is wrapped by field.
+func (f *Field) UnmarshalText(p []byte) error {
+	r, err := Parse(string(p))
+	if err != nil {
+		return err
+	}
+
+	f.reference = r
+	return nil
+}
+
+// Named is an object with a full name
+type Named interface {
+	Reference
+	Name() string
+}
+
+// Tagged is an object which has a tag
+type Tagged interface {
+	Reference
+	Tag() string
+}
+
+// NamedTagged is an object including a name and tag.
+type NamedTagged interface {
+	Named
+	Tag() string
+}
+
+// Digested is an object which has a digest
+// by which it can be referenced
+type Digested interface {
+	Reference
+	Digest() digest.Digest
+}
+
+// Canonical reference is an object with a fully unique
+// name including a name with domain and digest
+type Canonical interface {
+	Named
+	Digest() digest.Digest
+}
+
+// namedRepository is a reference to a repository with a name.
+// A namedRepository has both domain and path components.
+type namedRepository interface {
+	Named
+	Domain() string
+	Path() string
+}
+
+// Domain returns the domain part of the Named reference
+func Domain(named Named) string {
+	if r, ok := named.(namedRepository); ok {
+		return r.Domain()
+	}
+	domain, _ := splitDomain(named.Name())
+	return domain
+}
+
+// Path returns the name without the domain part of the Named reference
+func Path(named Named) (name string) {
+	if r, ok := named.(namedRepository); ok {
+		return r.Path()
+	}
+	_, path := splitDomain(named.Name())
+	return path
+}
+
+func splitDomain(name string) (string, string) {
+	match := anchoredNameRegexp.FindStringSubmatch(name)
+	if len(match) != 3 {
+		return "", name
+	}
+	return match[1], match[2]
+}
+
+// SplitHostname splits a named reference into a
+// hostname and name string. If no valid hostname is
+// found, the hostname is empty and the full value
+// is returned as the name.
+//
+// Deprecated: Use Domain or Path instead.
+func SplitHostname(named Named) (string, string) {
+	if r, ok := named.(namedRepository); ok {
+		return r.Domain(), r.Path()
+	}
+	return splitDomain(named.Name())
+}
+
+// Parse parses s and returns a syntactically valid Reference.
+// If an error was encountered it is returned, along with a nil Reference.
+// NOTE: Parse will not handle short digests.
+func Parse(s string) (Reference, error) {
+	matches := ReferenceRegexp.FindStringSubmatch(s)
+	if matches == nil {
+		if s == "" {
+			return nil, ErrNameEmpty
+		}
+		if ReferenceRegexp.FindStringSubmatch(strings.ToLower(s)) != nil {
+			return nil, ErrNameContainsUppercase
+		}
+		return nil, ErrReferenceInvalidFormat
+	}
+
+	if len(matches[1]) > NameTotalLengthMax {
+		return nil, ErrNameTooLong
+	}
+
+	var repo repository
+
+	nameMatch := anchoredNameRegexp.FindStringSubmatch(matches[1])
+	if len(nameMatch) == 3 {
+		repo.domain = nameMatch[1]
+		repo.path = nameMatch[2]
+	} else {
+		repo.domain = ""
+		repo.path = matches[1]
+	}
+
+	ref := reference{
+		namedRepository: repo,
+		tag:             matches[2],
+	}
+	if matches[3] != "" {
+		var err error
+		ref.digest, err = digest.Parse(matches[3])
+		if err != nil {
+			return nil, err
+		}
+	}
+
+	r := getBestReferenceType(ref)
+	if r == nil {
+		return nil, ErrNameEmpty
+	}
+
+	return r, nil
+}
+
+// ParseNamed parses s and returns a syntactically valid reference implementing
+// the Named interface. The reference must have a name and be in the canonical
+// form, otherwise an error is returned.
+// If an error was encountered it is returned, along with a nil Reference.
+// NOTE: ParseNamed will not handle short digests.
+func ParseNamed(s string) (Named, error) {
+	named, err := ParseNormalizedNamed(s)
+	if err != nil {
+		return nil, err
+	}
+	if named.String() != s {
+		return nil, ErrNameNotCanonical
+	}
+	return named, nil
+}
+
+// WithName returns a named object representing the given string. If the input
+// is invalid ErrReferenceInvalidFormat will be returned.
+func WithName(name string) (Named, error) {
+	if len(name) > NameTotalLengthMax {
+		return nil, ErrNameTooLong
+	}
+
+	match := anchoredNameRegexp.FindStringSubmatch(name)
+	if match == nil || len(match) != 3 {
+		return nil, ErrReferenceInvalidFormat
+	}
+	return repository{
+		domain: match[1],
+		path:   match[2],
+	}, nil
+}
+
+// WithTag combines the name from "name" and the tag from "tag" to form a
+// reference incorporating both the name and the tag.
+func WithTag(name Named, tag string) (NamedTagged, error) {
+	if !anchoredTagRegexp.MatchString(tag) {
+		return nil, ErrTagInvalidFormat
+	}
+	var repo repository
+	if r, ok := name.(namedRepository); ok {
+		repo.domain = r.Domain()
+		repo.path = r.Path()
+	} else {
+		repo.path = name.Name()
+	}
+	if canonical, ok := name.(Canonical); ok {
+		return reference{
+			namedRepository: repo,
+			tag:             tag,
+			digest:          canonical.Digest(),
+		}, nil
+	}
+	return taggedReference{
+		namedRepository: repo,
+		tag:             tag,
+	}, nil
+}
+
+// WithDigest combines the name from "name" and the digest from "digest" to form
+// a reference incorporating both the name and the digest.
+func WithDigest(name Named, digest digest.Digest) (Canonical, error) {
+	if !anchoredDigestRegexp.MatchString(digest.String()) {
+		return nil, ErrDigestInvalidFormat
+	}
+	var repo repository
+	if r, ok := name.(namedRepository); ok {
+		repo.domain = r.Domain()
+		repo.path = r.Path()
+	} else {
+		repo.path = name.Name()
+	}
+	if tagged, ok := name.(Tagged); ok {
+		return reference{
+			namedRepository: repo,
+			tag:             tagged.Tag(),
+			digest:          digest,
+		}, nil
+	}
+	return canonicalReference{
+		namedRepository: repo,
+		digest:          digest,
+	}, nil
+}
+
+// TrimNamed removes any tag or digest from the named reference.
+func TrimNamed(ref Named) Named {
+	domain, path := SplitHostname(ref)
+	return repository{
+		domain: domain,
+		path:   path,
+	}
+}
+
+func getBestReferenceType(ref reference) Reference {
+	if ref.Name() == "" {
+		// Allow digest only references
+		if ref.digest != "" {
+			return digestReference(ref.digest)
+		}
+		return nil
+	}
+	if ref.tag == "" {
+		if ref.digest != "" {
+			return canonicalReference{
+				namedRepository: ref.namedRepository,
+				digest:          ref.digest,
+			}
+		}
+		return ref.namedRepository
+	}
+	if ref.digest == "" {
+		return taggedReference{
+			namedRepository: ref.namedRepository,
+			tag:             ref.tag,
+		}
+	}
+
+	return ref
+}
+
+type reference struct {
+	namedRepository
+	tag    string
+	digest digest.Digest
+}
+
+func (r reference) String() string {
+	return r.Name() + ":" + r.tag + "@" + r.digest.String()
+}
+
+func (r reference) Tag() string {
+	return r.tag
+}
+
+func (r reference) Digest() digest.Digest {
+	return r.digest
+}
+
+type repository struct {
+	domain string
+	path   string
+}
+
+func (r repository) String() string {
+	return r.Name()
+}
+
+func (r repository) Name() string {
+	if r.domain == "" {
+		return r.path
+	}
+	return r.domain + "/" + r.path
+}
+
+func (r repository) Domain() string {
+	return r.domain
+}
+
+func (r repository) Path() string {
+	return r.path
+}
+
+type digestReference digest.Digest
+
+func (d digestReference) String() string {
+	return digest.Digest(d).String()
+}
+
+func (d digestReference) Digest() digest.Digest {
+	return digest.Digest(d)
+}
+
+type taggedReference struct {
+	namedRepository
+	tag string
+}
+
+func (t taggedReference) String() string {
+	return t.Name() + ":" + t.tag
+}
+
+func (t taggedReference) Tag() string {
+	return t.tag
+}
+
+type canonicalReference struct {
+	namedRepository
+	digest digest.Digest
+}
+
+func (c canonicalReference) String() string {
+	return c.Name() + "@" + c.digest.String()
+}
+
+func (c canonicalReference) Digest() digest.Digest {
+	return c.digest
+}
diff --git a/vendor/github.com/docker/distribution/reference/regexp.go b/vendor/github.com/docker/distribution/reference/regexp.go
new file mode 100644
index 0000000000000..78603493203f4
--- /dev/null
+++ b/vendor/github.com/docker/distribution/reference/regexp.go
@@ -0,0 +1,143 @@
+package reference
+
+import "regexp"
+
+var (
+	// alphaNumericRegexp defines the alpha numeric atom, typically a
+	// component of names. This only allows lower case characters and digits.
+	alphaNumericRegexp = match(`[a-z0-9]+`)
+
+	// separatorRegexp defines the separators allowed to be embedded in name
+	// components. This allows one period, one or two underscores, or multiple
+	// dashes.
+	separatorRegexp = match(`(?:[._]|__|[-]*)`)
+
+	// nameComponentRegexp restricts registry path component names to start
+	// with at least one letter or number, with following parts able to be
+	// separated by one period, one or two underscores, or multiple dashes.
+	nameComponentRegexp = expression(
+		alphaNumericRegexp,
+		optional(repeated(separatorRegexp, alphaNumericRegexp)))
+
+	// domainComponentRegexp matches a single component of the registry
+	// domain of a repository name: alphanumeric characters, with interior
+	// hyphens allowed but no leading or trailing hyphen.
+	domainComponentRegexp = match(`(?:[a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9-]*[a-zA-Z0-9])`)
+
+	// DomainRegexp defines the structure of potential domain components
+	// that may be part of image names. This is purposely a subset of what is
+	// allowed by DNS to ensure backwards compatibility with Docker image
+	// names.
+	DomainRegexp = expression(
+		domainComponentRegexp,
+		optional(repeated(literal(`.`), domainComponentRegexp)),
+		optional(literal(`:`), match(`[0-9]+`)))
+
+	// TagRegexp matches valid tag names. From docker/docker:graph/tags.go.
+	TagRegexp = match(`[\w][\w.-]{0,127}`)
+
+	// anchoredTagRegexp matches valid tag names, anchored at the start and
+	// end of the matched string.
+	anchoredTagRegexp = anchored(TagRegexp)
+
+	// DigestRegexp matches valid digests.
+	DigestRegexp = match(`[A-Za-z][A-Za-z0-9]*(?:[-_+.][A-Za-z][A-Za-z0-9]*)*[:][[:xdigit:]]{32,}`)
+
+	// anchoredDigestRegexp matches valid digests, anchored at the start and
+	// end of the matched string.
+	anchoredDigestRegexp = anchored(DigestRegexp)
+
+	// NameRegexp is the format for the name component of references. The
+	// regexp has capturing groups for the domain and name part omitting
+	// the separating forward slash from either.
+	NameRegexp = expression(
+		optional(DomainRegexp, literal(`/`)),
+		nameComponentRegexp,
+		optional(repeated(literal(`/`), nameComponentRegexp)))
+
+	// anchoredNameRegexp is used to parse a name value, capturing the
+	// domain and trailing components.
+	anchoredNameRegexp = anchored(
+		optional(capture(DomainRegexp), literal(`/`)),
+		capture(nameComponentRegexp,
+			optional(repeated(literal(`/`), nameComponentRegexp))))
+
+	// ReferenceRegexp is the full supported format of a reference. The regexp
+	// is anchored and has capturing groups for name, tag, and digest
+	// components.
+	ReferenceRegexp = anchored(capture(NameRegexp),
+		optional(literal(":"), capture(TagRegexp)),
+		optional(literal("@"), capture(DigestRegexp)))
+
+	// IdentifierRegexp is the format for a string identifier used as a
+	// content addressable identifier using sha256. These identifiers
+	// are like digests without the algorithm, since sha256 is used.
+	IdentifierRegexp = match(`([a-f0-9]{64})`)
+
+	// ShortIdentifierRegexp is the format used to represent a prefix
+	// of an identifier. A prefix may be used to match a sha256 identifier
+	// within a list of trusted identifiers.
+	ShortIdentifierRegexp = match(`([a-f0-9]{6,64})`)
+
+	// anchoredIdentifierRegexp is used to check or match an
+	// identifier value, anchored at start and end of string.
+	anchoredIdentifierRegexp = anchored(IdentifierRegexp)
+
+	// anchoredShortIdentifierRegexp is used to check if a value
+	// is a possible identifier prefix, anchored at start and end
+	// of string.
+	anchoredShortIdentifierRegexp = anchored(ShortIdentifierRegexp)
+)
+
+// match compiles the string to a regular expression.
+var match = regexp.MustCompile
+
+// literal compiles s into a literal regular expression, escaping any regexp
+// reserved characters.
+func literal(s string) *regexp.Regexp {
+	re := match(regexp.QuoteMeta(s))
+
+	if _, complete := re.LiteralPrefix(); !complete {
+		panic("must be a literal")
+	}
+
+	return re
+}
+
+// expression defines a full expression, where each regular expression must
+// follow the previous.
+func expression(res ...*regexp.Regexp) *regexp.Regexp {
+	var s string
+	for _, re := range res {
+		s += re.String()
+	}
+
+	return match(s)
+}
+
+// optional wraps the expression in a non-capturing group and makes the
+// production optional.
+func optional(res ...*regexp.Regexp) *regexp.Regexp {
+	return match(group(expression(res...)).String() + `?`)
+}
+
+// repeated wraps the regexp in a non-capturing group to get one or more
+// matches.
+func repeated(res ...*regexp.Regexp) *regexp.Regexp {
+	return match(group(expression(res...)).String() + `+`)
+}
+
+// group wraps the regexp in a non-capturing group.
+func group(res ...*regexp.Regexp) *regexp.Regexp {
+	return match(`(?:` + expression(res...).String() + `)`)
+}
+
+// capture wraps the expression in a capturing group.
+func capture(res ...*regexp.Regexp) *regexp.Regexp {
+	return match(`(` + expression(res...).String() + `)`)
+}
+
+// anchored anchors the regular expression by adding start and end delimiters.
+func anchored(res ...*regexp.Regexp) *regexp.Regexp {
+	return match(`^` + expression(res...).String() + `$`)
+}
diff --git a/vendor/github.com/docker/docker/AUTHORS b/vendor/github.com/docker/docker/AUTHORS
new file mode 100644
index 0000000000000..dffacff112025
--- /dev/null
+++ b/vendor/github.com/docker/docker/AUTHORS
@@ -0,0 +1,2175 @@
+# This file lists all individuals having contributed content to the repository.
+# For how it is generated, see `hack/generate-authors.sh`.
+
+Aanand Prasad <aanand.prasad@gmail.com>
+Aaron Davidson <aaron@databricks.com>
+Aaron Feng <aaron.feng@gmail.com>
+Aaron Hnatiw <aaron@griddio.com>
+Aaron Huslage <huslage@gmail.com>
+Aaron L. Xu <liker.xu@foxmail.com>
+Aaron Lehmann <aaron.lehmann@docker.com>
+Aaron Welch <welch@packet.net>
+Aaron.L.Xu <likexu@harmonycloud.cn>
+Abel Muiño <amuino@gmail.com>
+Abhijeet Kasurde <akasurde@redhat.com>
+Abhinandan Prativadi <abhi@docker.com>
+Abhinav Ajgaonkar <abhinav316@gmail.com>
+Abhishek Chanda <abhishek.becs@gmail.com>
+Abhishek Sharma <abhishek@asharma.me>
+Abin Shahab <ashahab@altiscale.com>
+Adam Avilla <aavilla@yp.com>
+Adam Dobrawy <naczelnik@jawnosc.tk>
+Adam Eijdenberg <adam.eijdenberg@gmail.com>
+Adam Kunk <adam.kunk@tiaa-cref.org>
+Adam Miller <admiller@redhat.com>
+Adam Mills <adam@armills.info>
+Adam Pointer <adam.pointer@skybettingandgaming.com>
+Adam Singer <financeCoding@gmail.com>
+Adam Walz <adam@adamwalz.net>
+Addam Hardy <addam.hardy@gmail.com>
+Aditi Rajagopal <arajagopal@us.ibm.com>
+Aditya <aditya@netroy.in>
+Adnan Khan <adnkha@amazon.com>
+Adolfo Ochagavía <aochagavia92@gmail.com>
+Adria Casas <adriacasas88@gmail.com>
+Adrian Moisey <adrian@changeover.za.net>
+Adrian Mouat <adrian.mouat@gmail.com>
+Adrian Oprea <adrian@codesi.nz>
+Adrien Folie <folie.adrien@gmail.com>
+Adrien Gallouët <adrien@gallouet.fr>
+Ahmed Kamal <email.ahmedkamal@googlemail.com>
+Ahmet Alp Balkan <ahmetb@microsoft.com>
+Aidan Feldman <aidan.feldman@gmail.com>
+Aidan Hobson Sayers <aidanhs@cantab.net>
+AJ Bowen <aj@soulshake.net>
+Ajey Charantimath <ajey.charantimath@gmail.com>
+ajneu <ajneu@users.noreply.github.com>
+Akash Gupta <akagup@microsoft.com>
+Akhil Mohan <akhil.mohan@mayadata.io>
+Akihiro Matsushima <amatsusbit@gmail.com>
+Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
+Akim Demaille <akim.demaille@docker.com>
+Akira Koyasu <mail@akirakoyasu.net>
+Akshay Karle <akshay.a.karle@gmail.com>
+Al Tobey <al@ooyala.com>
+alambike <alambike@gmail.com>
+Alan Hoyle <alan@alanhoyle.com>
+Alan Scherger <flyinprogrammer@gmail.com>
+Alan Thompson <cloojure@gmail.com>
+Albert Callarisa <shark234@gmail.com>
+Albert Zhang <zhgwenming@gmail.com>
+Albin Kerouanton <albin@akerouanton.name>
+Alejandro González Hevia <alejandrgh11@gmail.com>
+Aleksa Sarai <asarai@suse.de>
+Aleksandrs Fadins <aleks@s-ko.net>
+Alena Prokharchyk <alena@rancher.com>
+Alessandro Boch <aboch@tetrationanalytics.com>
+Alessio Biancalana <dottorblaster@gmail.com>
+Alex Chan <alex@alexwlchan.net>
+Alex Chen <alexchenunix@gmail.com>
+Alex Coventry <alx@empirical.com>
+Alex Crawford <alex.crawford@coreos.com>
+Alex Ellis <alexellis2@gmail.com>
+Alex Gaynor <alex.gaynor@gmail.com>
+Alex Goodman <wagoodman@gmail.com>
+Alex Olshansky <i@creagenics.com>
+Alex Samorukov <samm@os2.kiev.ua>
+Alex Warhawk <ax.warhawk@gmail.com>
+Alexander Artemenko <svetlyak.40wt@gmail.com>
+Alexander Boyd <alex@opengroove.org>
+Alexander Larsson <alexl@redhat.com>
+Alexander Midlash <amidlash@docker.com>
+Alexander Morozov <lk4d4@docker.com>
+Alexander Shopov <ash@kambanaria.org>
+Alexandre Beslic <alexandre.beslic@gmail.com>
+Alexandre Garnier <zigarn@gmail.com>
+Alexandre González <agonzalezro@gmail.com>
+Alexandre Jomin <alexandrejomin@gmail.com>
+Alexandru Sfirlogea <alexandru.sfirlogea@gmail.com>
+Alexei Margasov <alexei38@yandex.ru>
+Alexey Guskov <lexag@mail.ru>
+Alexey Kotlyarov <alexey@infoxchange.net.au>
+Alexey Shamrin <shamrin@gmail.com>
+Alexis THOMAS <fr.alexisthomas@gmail.com>
+Alfred Landrum <alfred.landrum@docker.com>
+Ali Dehghani <ali.dehghani.g@gmail.com>
+Alicia Lauerman <alicia@eta.im>
+Alihan Demir <alihan_6153@hotmail.com>
+Allen Madsen <blatyo@gmail.com>
+Allen Sun <allensun.shl@alibaba-inc.com>
+almoehi <almoehi@users.noreply.github.com>
+Alvaro Saurin <alvaro.saurin@gmail.com>
+Alvin Deng <alvin.q.deng@utexas.edu>
+Alvin Richards <alvin.richards@docker.com>
+amangoel <amangoel@gmail.com>
+Amen Belayneh <amenbelayneh@gmail.com>
+Amir Goldstein <amir73il@aquasec.com>
+Amit Bakshi <ambakshi@gmail.com>
+Amit Krishnan <amit.krishnan@oracle.com>
+Amit Shukla <amit.shukla@docker.com>
+Amr Gawish <amr.gawish@gmail.com>
+Amy Lindburg <amy.lindburg@docker.com>
+Anand Patil <anand.prabhakar.patil@gmail.com>
+AnandkumarPatel <anandkumarpatel@gmail.com>
+Anatoly Borodin <anatoly.borodin@gmail.com>
+Anca Iordache <anca.iordache@docker.com>
+Anchal Agrawal <aagrawa4@illinois.edu>
+Anda Xu <anda.xu@docker.com>
+Anders Janmyr <anders@janmyr.com>
+Andre Dublin <81dublin@gmail.com>
+Andre Granovsky <robotciti@live.com>
+Andrea Denisse Gómez <crypto.andrea@protonmail.ch>
+Andrea Luzzardi <aluzzardi@gmail.com>
+Andrea Turli <andrea.turli@gmail.com>
+Andreas Elvers <andreas@work.de>
+Andreas Köhler <andi5.py@gmx.net>
+Andreas Savvides <andreas@editd.com>
+Andreas Tiefenthaler <at@an-ti.eu>
+Andrei Gherzan <andrei@resin.io>
+Andrei Vagin <avagin@gmail.com>
+Andrew C. Bodine <acbodine@us.ibm.com>
+Andrew Clay Shafer <andrewcshafer@gmail.com>
+Andrew Duckworth <grillopress@gmail.com>
+Andrew France <andrew@avito.co.uk>
+Andrew Gerrand <adg@golang.org>
+Andrew Guenther <guenther.andrew.j@gmail.com>
+Andrew He <he.andrew.mail@gmail.com>
+Andrew Hsu <andrewhsu@docker.com>
+Andrew Kuklewicz <kookster@gmail.com>
+Andrew Macgregor <andrew.macgregor@agworld.com.au>
+Andrew Macpherson <hopscotch23@gmail.com>
+Andrew Martin <sublimino@gmail.com>
+Andrew McDonnell <bugs@andrewmcdonnell.net>
+Andrew Munsell <andrew@wizardapps.net>
+Andrew Pennebaker <andrew.pennebaker@gmail.com>
+Andrew Po <absourd.noise@gmail.com>
+Andrew Weiss <andrew.weiss@docker.com>
+Andrew Williams <williams.andrew@gmail.com>
+Andrews Medina <andrewsmedina@gmail.com>
+Andrey Kolomentsev <andrey.kolomentsev@docker.com>
+Andrey Petrov <andrey.petrov@shazow.net>
+Andrey Stolbovsky <andrey.stolbovsky@gmail.com>
+André Martins <aanm90@gmail.com>
+andy <ztao@tibco-support.com>
+Andy Chambers <anchambers@paypal.com>
+andy diller <dillera@gmail.com>
+Andy Goldstein <agoldste@redhat.com>
+Andy Kipp <andy@rstudio.com>
+Andy Rothfusz <github@developersupport.net>
+Andy Smith <github@anarkystic.com>
+Andy Wilson <wilson.andrew.j+github@gmail.com>
+Anes Hasicic <anes.hasicic@gmail.com>
+Anil Belur <askb23@gmail.com>
+Anil Madhavapeddy <anil@recoil.org>
+Ankit Jain <ajatkj@yahoo.co.in>
+Ankush Agarwal <ankushagarwal11@gmail.com>
+Anonmily <michelle@michelleliu.io>
+Anran Qiao <anran.qiao@daocloud.io>
+Anshul Pundir <anshul.pundir@docker.com>
+Anthon van der Neut <anthon@mnt.org>
+Anthony Baire <Anthony.Baire@irisa.fr>
+Anthony Bishopric <git@anthonybishopric.com>
+Anthony Dahanne <anthony.dahanne@gmail.com>
+Anthony Sottile <asottile@umich.edu>
+Anton Löfgren <anton.lofgren@gmail.com>
+Anton Nikitin <anton.k.nikitin@gmail.com>
+Anton Polonskiy <anton.polonskiy@gmail.com>
+Anton Tiurin <noxiouz@yandex.ru>
+Antonio Murdaca <antonio.murdaca@gmail.com>
+Antonis Kalipetis <akalipetis@gmail.com>
+Antony Messerli <amesserl@rackspace.com>
+Anuj Bahuguna <anujbahuguna.dev@gmail.com>
+Anusha Ragunathan <anusha.ragunathan@docker.com>
+apocas <petermdias@gmail.com>
+Arash Deshmeh <adeshmeh@ca.ibm.com>
+ArikaChen <eaglesora@gmail.com>
+Arko Dasgupta <arko.dasgupta@docker.com>
+Arnaud Lefebvre <a.lefebvre@outlook.fr>
+Arnaud Porterie <arnaud.porterie@docker.com>
+Arnaud Rebillout <arnaud.rebillout@collabora.com>
+Arthur Barr <arthur.barr@uk.ibm.com>
+Arthur Gautier <baloo@gandi.net>
+Artur Meyster <arthurfbi@yahoo.com>
+Arun Gupta <arun.gupta@gmail.com>
+Asad Saeeduddin <masaeedu@gmail.com>
+Asbjørn Enge <asbjorn@hanafjedle.net>
+averagehuman <averagehuman@users.noreply.github.com>
+Avi Das <andas222@gmail.com>
+Avi Kivity <avi@scylladb.com>
+Avi Miller <avi.miller@oracle.com>
+Avi Vaid <avaid1996@gmail.com>
+ayoshitake <airandfingers@gmail.com>
+Azat Khuyiyakhmetov <shadow_uz@mail.ru>
+Bardia Keyoumarsi <bkeyouma@ucsc.edu>
+Barnaby Gray <barnaby@pickle.me.uk>
+Barry Allard <barry.allard@gmail.com>
+Bartłomiej Piotrowski <b@bpiotrowski.pl>
+Bastiaan Bakker <bbakker@xebia.com>
+bdevloed <boris.de.vloed@gmail.com>
+Ben Bonnefoy <frenchben@docker.com>
+Ben Firshman <ben@firshman.co.uk>
+Ben Golub <ben.golub@dotcloud.com>
+Ben Gould <ben@bengould.co.uk>
+Ben Hall <ben@benhall.me.uk>
+Ben Sargent <ben@brokendigits.com>
+Ben Severson <BenSeverson@users.noreply.github.com>
+Ben Toews <mastahyeti@gmail.com>
+Ben Wiklund <ben@daisyowl.com>
+Benjamin Atkin <ben@benatkin.com>
+Benjamin Baker <Benjamin.baker@utexas.edu>
+Benjamin Boudreau <boudreau.benjamin@gmail.com>
+Benjamin Yolken <yolken@stripe.com>
+Benny Ng <benny.tpng@gmail.com>
+Benoit Chesneau <bchesneau@gmail.com>
+Bernerd Schaefer <bj.schaefer@gmail.com>
+Bernhard M. Wiedemann <bwiedemann@suse.de>
+Bert Goethals <bert@bertg.be>
+Bertrand Roussel <broussel@sierrawireless.com>
+Bevisy Zhang <binbin36520@gmail.com>
+Bharath Thiruveedula <bharath_ves@hotmail.com>
+Bhiraj Butala <abhiraj.butala@gmail.com>
+Bhumika Bayani <bhumikabayani@gmail.com>
+Bilal Amarni <bilal.amarni@gmail.com>
+Bill Wang <ozbillwang@gmail.com>
+Bily Zhang <xcoder@tenxcloud.com>
+Bin Liu <liubin0329@gmail.com>
+Bingshen Wang <bingshen.wbs@alibaba-inc.com>
+Blake Geno <blakegeno@gmail.com>
+Boaz Shuster <ripcurld.github@gmail.com>
+bobby abbott <ttobbaybbob@gmail.com>
+Boqin Qin <bobbqqin@gmail.com>
+Boris Pruessmann <boris@pruessmann.org>
+Boshi Lian <farmer1992@gmail.com>
+Bouke Haarsma <bouke@webatoom.nl>
+Boyd Hemphill <boyd@feedmagnet.com>
+boynux <boynux@gmail.com>
+Bradley Cicenas <bradley.cicenas@gmail.com>
+Bradley Wright <brad@intranation.com>
+Brandon Liu <bdon@bdon.org>
+Brandon Philips <brandon.philips@coreos.com>
+Brandon Rhodes <brandon@rhodesmill.org>
+Brendan Dixon <brendand@microsoft.com>
+Brent Salisbury <brent.salisbury@docker.com>
+Brett Higgins <brhiggins@arbor.net>
+Brett Kochendorfer <brett.kochendorfer@gmail.com>
+Brett Randall <javabrett@gmail.com>
+Brian (bex) Exelbierd <bexelbie@redhat.com>
+Brian Bland <brian.bland@docker.com>
+Brian DeHamer <brian@dehamer.com>
+Brian Dorsey <brian@dorseys.org>
+Brian Flad <bflad417@gmail.com>
+Brian Goff <cpuguy83@gmail.com>
+Brian McCallister <brianm@skife.org>
+Brian Olsen <brian@maven-group.org>
+Brian Schwind <brianmschwind@gmail.com>
+Brian Shumate <brian@couchbase.com>
+Brian Torres-Gil <brian@dralth.com>
+Brian Trump <btrump@yelp.com>
+Brice Jaglin <bjaglin@teads.tv>
+Briehan Lombaard <briehan.lombaard@gmail.com>
+Brielle Broder <bbroder@google.com>
+Bruno Bigras <bigras.bruno@gmail.com>
+Bruno Binet <bruno.binet@gmail.com>
+Bruno Gazzera <bgazzera@paginar.com>
+Bruno Renié <brutasse@gmail.com>
+Bruno Tavares <btavare@thoughtworks.com>
+Bryan Bess <squarejaw@bsbess.com>
+Bryan Boreham <bjboreham@gmail.com>
+Bryan Matsuo <bryan.matsuo@gmail.com>
+Bryan Murphy <bmurphy1976@gmail.com>
+Burke Libbey <burke@libbey.me>
+Byung Kang <byung.kang.ctr@amrdec.army.mil>
+Caleb Spare <cespare@gmail.com>
+Calen Pennington <cale@edx.org>
+Cameron Boehmer <cameron.boehmer@gmail.com>
+Cameron Spear <cameronspear@gmail.com>
+Campbell Allen <campbell.allen@gmail.com>
+Candid Dauth <cdauth@cdauth.eu>
+Cao Weiwei <cao.weiwei30@zte.com.cn>
+Carl Henrik Lunde <chlunde@ping.uio.no>
+Carl Loa Odin <carlodin@gmail.com>
+Carl X. Su <bcbcarl@gmail.com>
+Carlo Mion <mion00@gmail.com>
+Carlos Alexandro Becker <caarlos0@gmail.com>
+Carlos de Paula <me@carlosedp.com>
+Carlos Sanchez <carlos@apache.org>
+Carol Fager-Higgins <carol.fager-higgins@docker.com>
+Cary <caryhartline@users.noreply.github.com>
+Casey Bisson <casey.bisson@joyent.com>
+Catalin Pirvu <pirvu.catalin94@gmail.com>
+Ce Gao <ce.gao@outlook.com>
+Cedric Davies <cedricda@microsoft.com>
+Cezar Sa Espinola <cezarsa@gmail.com>
+Chad Swenson <chadswen@gmail.com>
+Chance Zibolski <chance.zibolski@gmail.com>
+Chander Govindarajan <chandergovind@gmail.com>
+Chanhun Jeong <keyolk@gmail.com>
+Chao Wang <wangchao.fnst@cn.fujitsu.com>
+Charles Chan <charleswhchan@users.noreply.github.com>
+Charles Hooper <charles.hooper@dotcloud.com>
+Charles Law <claw@conduce.com>
+Charles Lindsay <chaz@chazomatic.us>
+Charles Merriam <charles.merriam@gmail.com>
+Charles Sarrazin <charles@sarraz.in>
+Charles Smith <charles.smith@docker.com>
+Charlie Drage <charlie@charliedrage.com>
+Charlie Lewis <charliel@lab41.org>
+Chase Bolt <chase.bolt@gmail.com>
+ChaYoung You <yousbe@gmail.com>
+Chen Chao <cc272309126@gmail.com>
+Chen Chuanliang <chen.chuanliang@zte.com.cn>
+Chen Hanxiao <chenhanxiao@cn.fujitsu.com>
+Chen Min <chenmin46@huawei.com>
+Chen Mingjie <chenmingjie0828@163.com>
+Chen Qiu <cheney-90@hotmail.com>
+Cheng-mean Liu <soccerl@microsoft.com>
+Chengfei Shang <cfshang@alauda.io>
+Chengguang Xu <cgxu519@gmx.com>
+chenyuzhu <chenyuzhi@oschina.cn>
+Chetan Birajdar <birajdar.chetan@gmail.com>
+Chewey <prosto-chewey@users.noreply.github.com>
+Chia-liang Kao <clkao@clkao.org>
+chli <chli@freewheel.tv>
+Cholerae Hu <choleraehyq@gmail.com>
+Chris Alfonso <calfonso@redhat.com>
+Chris Armstrong <chris@opdemand.com>
+Chris Dias <cdias@microsoft.com>
+Chris Dituri <csdituri@gmail.com>
+Chris Fordham <chris@fordham-nagy.id.au>
+Chris Gavin <chris@chrisgavin.me>
+Chris Gibson <chris@chrisg.io>
+Chris Khoo <chris.khoo@gmail.com>
+Chris McKinnel <chris.mckinnel@tangentlabs.co.uk>
+Chris McKinnel <chrismckinnel@gmail.com>
+Chris Price <cprice@mirantis.com>
+Chris Seto <chriskseto@gmail.com>
+Chris Snow <chsnow123@gmail.com>
+Chris St. Pierre <chris.a.st.pierre@gmail.com>
+Chris Stivers <chris@stivers.us>
+Chris Swan <chris.swan@iee.org>
+Chris Telfer <ctelfer@docker.com>
+Chris Wahl <github@wahlnetwork.com>
+Chris Weyl <cweyl@alumni.drew.edu>
+Chris White <me@cwprogram.com>
+Christian Berendt <berendt@b1-systems.de>
+Christian Brauner <christian.brauner@ubuntu.com>
+Christian Böhme <developement@boehme3d.de>
+Christian Muehlhaeuser <muesli@gmail.com>
+Christian Persson <saser@live.se>
+Christian Rotzoll <ch.rotzoll@gmail.com>
+Christian Simon <simon@swine.de>
+Christian Stefanescu <st.chris@gmail.com>
+Christophe Mehay <cmehay@online.net>
+Christophe Troestler <christophe.Troestler@umons.ac.be>
+Christophe Vidal <kriss@krizalys.com>
+Christopher Biscardi <biscarch@sketcht.com>
+Christopher Crone <christopher.crone@docker.com>
+Christopher Currie <codemonkey+github@gmail.com>
+Christopher Jones <tophj@linux.vnet.ibm.com>
+Christopher Latham <sudosurootdev@gmail.com>
+Christopher Rigor <crigor@gmail.com>
+Christy Norman <christy@linux.vnet.ibm.com>
+Chun Chen <ramichen@tencent.com>
+Ciro S. Costa <ciro.costa@usp.br>
+Clayton Coleman <ccoleman@redhat.com>
+Clinton Kitson <clintonskitson@gmail.com>
+Cody Roseborough <crrosebo@amazon.com>
+Coenraad Loubser <coenraad@wish.org.za>
+Colin Dunklau <colin.dunklau@gmail.com>
+Colin Hebert <hebert.colin@gmail.com>
+Colin Panisset <github@clabber.com>
+Colin Rice <colin@daedrum.net>
+Colin Walters <walters@verbum.org>
+Collin Guarino <collin.guarino@gmail.com>
+Colm Hally <colmhally@gmail.com>
+companycy <companycy@gmail.com>
+Corbin Coleman <corbin.coleman@docker.com>
+Corey Farrell <git@cfware.com>
+Cory Forsyth <cory.forsyth@gmail.com>
+cressie176 <github@stephen-cresswell.net>
+CrimsonGlory <CrimsonGlory@users.noreply.github.com>
+Cristian Ariza <dev@cristianrz.com>
+Cristian Staretu <cristian.staretu@gmail.com>
+cristiano balducci <cristiano.balducci@gmail.com>
+Cristina Yenyxe Gonzalez Garcia <cristina.yenyxe@gmail.com>
+Cruceru Calin-Cristian <crucerucalincristian@gmail.com>
+CUI Wei <ghostplant@qq.com>
+Cyprian Gracz <cyprian.gracz@micro-jumbo.eu>
+Cyril F <cyrilf7x@gmail.com>
+Daan van Berkel <daan.v.berkel.1980@gmail.com>
+Daehyeok Mun <daehyeok@gmail.com>
+Dafydd Crosby <dtcrsby@gmail.com>
+dalanlan <dalanlan925@gmail.com>
+Damian Smyth <damian@dsau.co>
+Damien Nadé <github@livna.org>
+Damien Nozay <damien.nozay@gmail.com>
+Damjan Georgievski <gdamjan@gmail.com>
+Dan Anolik <dan@anolik.net>
+Dan Buch <d.buch@modcloth.com>
+Dan Cotora <dan@bluevision.ro>
+Dan Feldman <danf@jfrog.com>
+Dan Griffin <dgriffin@peer1.com>
+Dan Hirsch <thequux@upstandinghackers.com>
+Dan Keder <dan.keder@gmail.com>
+Dan Levy <dan@danlevy.net>
+Dan McPherson <dmcphers@redhat.com>
+Dan Stine <sw@stinemail.com>
+Dan Williams <me@deedubs.com>
+Dani Hodovic <dani.hodovic@gmail.com>
+Dani Louca <dani.louca@docker.com>
+Daniel Antlinger <d.antlinger@gmx.at>
+Daniel Black <daniel@linux.ibm.com>
+Daniel Dao <dqminh@cloudflare.com>
+Daniel Exner <dex@dragonslave.de>
+Daniel Farrell <dfarrell@redhat.com>
+Daniel Garcia <daniel@danielgarcia.info>
+Daniel Gasienica <daniel@gasienica.ch>
+Daniel Grunwell <mwgrunny@gmail.com>
+Daniel Helfand <helfand.4@gmail.com>
+Daniel Hiltgen <daniel.hiltgen@docker.com>
+Daniel J Walsh <dwalsh@redhat.com>
+Daniel Menet <membership@sontags.ch>
+Daniel Mizyrycki <daniel.mizyrycki@dotcloud.com>
+Daniel Nephin <dnephin@docker.com>
+Daniel Norberg <dano@spotify.com>
+Daniel Nordberg <dnordberg@gmail.com>
+Daniel Robinson <gottagetmac@gmail.com>
+Daniel S <dan.streby@gmail.com>
+Daniel Sweet <danieljsweet@icloud.com>
+Daniel Von Fange <daniel@leancoder.com>
+Daniel Watkins <daniel@daniel-watkins.co.uk>
+Daniel X Moore <yahivin@gmail.com>
+Daniel YC Lin <dlin.tw@gmail.com>
+Daniel Zhang <jmzwcn@gmail.com>
+Danny Berger <dpb587@gmail.com>
+Danny Milosavljevic <dannym@scratchpost.org>
+Danny Yates <danny@codeaholics.org>
+Danyal Khaliq <danyal.khaliq@tenpearls.com>
+Darren Coxall <darren@darrencoxall.com>
+Darren Shepherd <darren.s.shepherd@gmail.com>
+Darren Stahl <darst@microsoft.com>
+Dattatraya Kumbhar <dattatraya.kumbhar@gslab.com>
+Davanum Srinivas <davanum@gmail.com>
+Dave Barboza <dbarboza@datto.com>
+Dave Goodchild <buddhamagnet@gmail.com>
+Dave Henderson <dhenderson@gmail.com>
+Dave MacDonald <mindlapse@gmail.com>
+Dave Tucker <dt@docker.com>
+David Anderson <dave@natulte.net>
+David Calavera <david.calavera@gmail.com>
+David Chung <david.chung@docker.com>
+David Corking <dmc-source@dcorking.com>
+David Cramer <davcrame@cisco.com>
+David Currie <david_currie@uk.ibm.com>
+David Davis <daviddavis@redhat.com>
+David Dooling <dooling@gmail.com>
+David Gageot <david@gageot.net>
+David Gebler <davidgebler@gmail.com>
+David Glasser <glasser@davidglasser.net>
+David Lawrence <david.lawrence@docker.com>
+David Lechner <david@lechnology.com>
+David M. Karr <davidmichaelkarr@gmail.com>
+David Mackey <tdmackey@booleanhaiku.com>
+David Mat <david@davidmat.com>
+David Mcanulty <github@hellspark.com>
+David McKay <david@rawkode.com>
+David P Hilton <david.hilton.p@gmail.com>
+David Pelaez <pelaez89@gmail.com>
+David R. Jenni <david.r.jenni@gmail.com>
+David Röthlisberger <david@rothlis.net>
+David Sheets <dsheets@docker.com>
+David Sissitka <me@dsissitka.com>
+David Trott <github@davidtrott.com>
+David Wang <00107082@163.com>
+David Williamson <david.williamson@docker.com>
+David Xia <dxia@spotify.com>
+David Young <yangboh@cn.ibm.com>
+Davide Ceretti <davide.ceretti@hogarthww.com>
+Dawn Chen <dawnchen@google.com>
+dbdd <wangtong2712@gmail.com>
+dcylabs <dcylabs@gmail.com>
+Debayan De <debayande@users.noreply.github.com>
+Deborah Gertrude Digges <deborah.gertrude.digges@gmail.com>
+deed02392 <georgehafiz@gmail.com>
+Deep Debroy <ddebroy@docker.com>
+Deng Guangxing <dengguangxing@huawei.com>
+Deni Bertovic <deni@kset.org>
+Denis Defreyne <denis@soundcloud.com>
+Denis Gladkikh <denis@gladkikh.email>
+Denis Ollier <larchunix@users.noreply.github.com>
+Dennis Chen <barracks510@gmail.com>
+Dennis Chen <dennis.chen@arm.com>
+Dennis Docter <dennis@d23.nl>
+Derek <crq@kernel.org>
+Derek <crquan@gmail.com>
+Derek Ch <denc716@gmail.com>
+Derek McGowan <derek@mcgstyle.net>
+Deric Crago <deric.crago@gmail.com>
+Deshi Xiao <dxiao@redhat.com>
+devmeyster <arthurfbi@yahoo.com>
+Devon Estes <devon.estes@klarna.com>
+Devvyn Murphy <devvyn@devvyn.com>
+Dharmit Shah <shahdharmit@gmail.com>
+Dhawal Yogesh Bhanushali <dbhanushali@vmware.com>
+Diego Romero <idiegoromero@gmail.com>
+Diego Siqueira <dieg0@live.com>
+Dieter Reuter <dieter.reuter@me.com>
+Dillon Dixon <dillondixon@gmail.com>
+Dima Stopel <dima@twistlock.com>
+Dimitri John Ledkov <dimitri.j.ledkov@intel.com>
+Dimitris Mandalidis <dimitris.mandalidis@gmail.com>
+Dimitris Rozakis <dimrozakis@gmail.com>
+Dimitry Andric <d.andric@activevideo.com>
+Dinesh Subhraveti <dineshs@altiscale.com>
+Ding Fei <dingfei@stars.org.cn>
+Diogo Monica <diogo@docker.com>
+DiuDiugirl <sophia.wang@pku.edu.cn>
+Djibril Koné <kone.djibril@gmail.com>
+dkumor <daniel@dkumor.com>
+Dmitri Logvinenko <dmitri.logvinenko@gmail.com>
+Dmitri Shuralyov <shurcooL@gmail.com>
+Dmitry Demeshchuk <demeshchuk@gmail.com>
+Dmitry Gusev <dmitry.gusev@gmail.com>
+Dmitry Kononenko <d@dm42.ru>
+Dmitry Sharshakov <d3dx12.xx@gmail.com>
+Dmitry Shyshkin <dmitry@shyshkin.org.ua>
+Dmitry Smirnov <onlyjob@member.fsf.org>
+Dmitry V. Krivenok <krivenok.dmitry@gmail.com>
+Dmitry Vorobev <dimahabr@gmail.com>
+Dolph Mathews <dolph.mathews@gmail.com>
+Dominic Tubach <dominic.tubach@to.com>
+Dominic Yin <yindongchao@inspur.com>
+Dominik Dingel <dingel@linux.vnet.ibm.com>
+Dominik Finkbeiner <finkes93@gmail.com>
+Dominik Honnef <dominik@honnef.co>
+Don Kirkby <donkirkby@users.noreply.github.com>
+Don Kjer <don.kjer@gmail.com>
+Don Spaulding <donspauldingii@gmail.com>
+Donald Huang <don.hcd@gmail.com>
+Dong Chen <dongluo.chen@docker.com>
+Donghwa Kim <shanytt@gmail.com>
+Donovan Jones <git@gamma.net.nz>
+Doron Podoleanu <doronp@il.ibm.com>
+Doug Davis <dug@us.ibm.com>
+Doug MacEachern <dougm@vmware.com>
+Doug Tangren <d.tangren@gmail.com>
+Douglas Curtis <dougcurtis1@gmail.com>
+Dr Nic Williams <drnicwilliams@gmail.com>
+dragon788 <dragon788@users.noreply.github.com>
+Dražen Lučanin <kermit666@gmail.com>
+Drew Erny <derny@mirantis.com>
+Drew Hubl <drew.hubl@gmail.com>
+Dustin Sallings <dustin@spy.net>
+Ed Costello <epc@epcostello.com>
+Edmund Wagner <edmund-wagner@web.de>
+Eiichi Tsukata <devel@etsukata.com>
+Eike Herzbach <eike@herzbach.net>
+Eivin Giske Skaaren <eivinsn@axis.com>
+Eivind Uggedal <eivind@uggedal.com>
+Elan Ruusamäe <glen@pld-linux.org>
+Elango Sivanandam <elango.siva@docker.com>
+Elena Morozova <lelenanam@gmail.com>
+Eli Uriegas <eli.uriegas@docker.com>
+Elias Faxö <elias.faxo@tre.se>
+Elias Probst <mail@eliasprobst.eu>
+Elijah Zupancic <elijah@zupancic.name>
+eluck <mail@eluck.me>
+Elvir Kuric <elvirkuric@gmail.com>
+Emil Davtyan <emil2k@gmail.com>
+Emil Hernvall <emil@quench.at>
+Emily Maier <emily@emilymaier.net>
+Emily Rose <emily@contactvibe.com>
+Emir Ozer <emirozer@yandex.com>
+Enguerran <engcolson@gmail.com>
+Eohyung Lee <liquidnuker@gmail.com>
+epeterso <epeterson@breakpoint-labs.com>
+Eric Barch <barch@tomesoftware.com>
+Eric Curtin <ericcurtin17@gmail.com>
+Eric G. Noriega <enoriega@vizuri.com>
+Eric Hanchrow <ehanchrow@ine.com>
+Eric Lee <thenorthsecedes@gmail.com>
+Eric Myhre <hash@exultant.us>
+Eric Paris <eparis@redhat.com>
+Eric Rafaloff <erafaloff@gmail.com>
+Eric Rosenberg <ehaydenr@gmail.com>
+Eric Sage <eric.david.sage@gmail.com>
+Eric Soderstrom <ericsoderstrom@gmail.com>
+Eric Yang <windfarer@gmail.com>
+Eric-Olivier Lamey <eo@lamey.me>
+Erica Windisch <erica@windisch.us>
+Erik Bray <erik.m.bray@gmail.com>
+Erik Dubbelboer <erik@dubbelboer.com>
+Erik Hollensbe <github@hollensbe.org>
+Erik Inge Bolsø <knan@redpill-linpro.com>
+Erik Kristensen <erik@erikkristensen.com>
+Erik St. Martin <alakriti@gmail.com>
+Erik Weathers <erikdw@gmail.com>
+Erno Hopearuoho <erno.hopearuoho@gmail.com>
+Erwin van der Koogh <info@erronis.nl>
+Ethan Bell <ebgamer29@gmail.com>
+Ethan Mosbaugh <ethan@replicated.com>
+Euan Kemp <euan.kemp@coreos.com>
+Eugen Krizo <eugen.krizo@gmail.com>
+Eugene Yakubovich <eugene.yakubovich@coreos.com>
+Evan Allrich <evan@unguku.com>
+Evan Carmi <carmi@users.noreply.github.com>
+Evan Hazlett <ejhazlett@gmail.com>
+Evan Krall <krall@yelp.com>
+Evan Phoenix <evan@fallingsnow.net>
+Evan Wies <evan@neomantra.net>
+Evelyn Xu <evelynhsu21@gmail.com>
+Everett Toews <everett.toews@rackspace.com>
+Evgeniy Makhrov <e.makhrov@corp.badoo.com>
+Evgeny Shmarnev <shmarnev@gmail.com>
+Evgeny Vereshchagin <evvers@ya.ru>
+Ewa Czechowska <ewa@ai-traders.com>
+Eystein Måløy Stenberg <eystein.maloy.stenberg@cfengine.com>
+ezbercih <cem.ezberci@gmail.com>
+Ezra Silvera <ezra@il.ibm.com>
+Fabian Kramm <kramm@covexo.com>
+Fabian Lauer <kontakt@softwareschmiede-saar.de>
+Fabian Raetz <fabian.raetz@gmail.com>
+Fabiano Rosas <farosas@br.ibm.com>
+Fabio Falci <fabiofalci@gmail.com>
+Fabio Kung <fabio.kung@gmail.com>
+Fabio Rapposelli <fabio@vmware.com>
+Fabio Rehm <fgrehm@gmail.com>
+Fabrizio Regini <freegenie@gmail.com>
+Fabrizio Soppelsa <fsoppelsa@mirantis.com>
+Faiz Khan <faizkhan00@gmail.com>
+falmp <chico.lopes@gmail.com>
+Fangming Fang <fangming.fang@arm.com>
+Fangyuan Gao <21551127@zju.edu.cn>
+fanjiyun <fan.jiyun@zte.com.cn>
+Fareed Dudhia <fareeddudhia@googlemail.com>
+Fathi Boudra <fathi.boudra@linaro.org>
+Federico Gimenez <fgimenez@coit.es>
+Felipe Oliveira <felipeweb.programador@gmail.com>
+Felipe Ruhland <felipe.ruhland@gmail.com>
+Felix Abecassis <fabecassis@nvidia.com>
+Felix Geisendörfer <felix@debuggable.com>
+Felix Hupfeld <felix@quobyte.com>
+Felix Rabe <felix@rabe.io>
+Felix Ruess <felix.ruess@gmail.com>
+Felix Schindler <fschindler@weluse.de>
+Feng Yan <fy2462@gmail.com>
+Fengtu Wang <wangfengtu@huawei.com>
+Ferenc Szabo <pragmaticfrank@gmail.com>
+Fernando <fermayo@gmail.com>
+Fero Volar <alian@alian.info>
+Ferran Rodenas <frodenas@gmail.com>
+Filipe Brandenburger <filbranden@google.com>
+Filipe Oliveira <contato@fmoliveira.com.br>
+Flavio Castelli <fcastelli@suse.com>
+Flavio Crisciani <flavio.crisciani@docker.com>
+Florian <FWirtz@users.noreply.github.com>
+Florian Klein <florian.klein@free.fr>
+Florian Maier <marsmensch@users.noreply.github.com>
+Florian Noeding <noeding@adobe.com>
+Florian Schmaus <flo@geekplace.eu>
+Florian Weingarten <flo@hackvalue.de>
+Florin Asavoaie <florin.asavoaie@gmail.com>
+Florin Patan <florinpatan@gmail.com>
+fonglh <fonglh@gmail.com>
+Foysal Iqbal <foysal.iqbal.fb@gmail.com>
+Francesc Campoy <campoy@google.com>
+Francesco Mari <mari.francesco@gmail.com>
+Francis Chuang <francis.chuang@boostport.com>
+Francisco Carriedo <fcarriedo@gmail.com>
+Francisco Souza <f@souza.cc>
+Frank Groeneveld <frank@ivaldi.nl>
+Frank Herrmann <fgh@4gh.tv>
+Frank Macreery <frank@macreery.com>
+Frank Rosquin <frank.rosquin+github@gmail.com>
+frankyang <yyb196@gmail.com>
+Fred Lifton <fred.lifton@docker.com>
+Frederick F. Kautz IV <fkautz@redhat.com>
+Frederik Loeffert <frederik@zitrusmedia.de>
+Frederik Nordahl Jul Sabroe <frederikns@gmail.com>
+Freek Kalter <freek@kalteronline.org>
+Frieder Bluemle <frieder.bluemle@gmail.com>
+Fu JinLin <withlin@yeah.net>
+Félix Baylac-Jacqué <baylac.felix@gmail.com>
+Félix Cantournet <felix.cantournet@cloudwatt.com>
+Gabe Rosenhouse <gabe@missionst.com>
+Gabor Nagy <mail@aigeruth.hu>
+Gabriel Linder <linder.gabriel@gmail.com>
+Gabriel Monroy <gabriel@opdemand.com>
+Gabriel Nicolas Avellaneda <avellaneda.gabriel@gmail.com>
+Gaetan de Villele <gdevillele@gmail.com>
+Galen Sampson <galen.sampson@gmail.com>
+Gang Qiao <qiaohai8866@gmail.com>
+Gareth Rushgrove <gareth@morethanseven.net>
+Garrett Barboza <garrett@garrettbarboza.com>
+Gary Schaetz <gary@schaetzkc.com>
+Gaurav <gaurav.gosec@gmail.com>
+Gaurav Singh <gaurav1086@gmail.com>
+Gaël PORTAY <gael.portay@savoirfairelinux.com>
+Genki Takiuchi <genki@s21g.com>
+GennadySpb <lipenkov@gmail.com>
+Geoffrey Bachelet <grosfrais@gmail.com>
+Geon Kim <geon0250@gmail.com>
+George Kontridze <george@bugsnag.com>
+George MacRorie <gmacr31@gmail.com>
+George Xie <georgexsh@gmail.com>
+Georgi Hristozov <georgi@forkbomb.nl>
+Gereon Frey <gereon.frey@dynport.de>
+German DZ <germ@ndz.com.ar>
+Gert van Valkenhoef <g.h.m.van.valkenhoef@rug.nl>
+Gerwim Feiken <g.feiken@tfe.nl>
+Ghislain Bourgeois <ghislain.bourgeois@gmail.com>
+Giampaolo Mancini <giampaolo@trampolineup.com>
+Gianluca Borello <g.borello@gmail.com>
+Gildas Cuisinier <gildas.cuisinier@gcuisinier.net>
+Giovan Isa Musthofa <giovanism@outlook.co.id>
+gissehel <public-devgit-dantus@gissehel.org>
+Giuseppe Mazzotta <gdm85@users.noreply.github.com>
+Gleb Fotengauer-Malinovskiy <glebfm@altlinux.org>
+Gleb M Borisov <borisov.gleb@gmail.com>
+Glyn Normington <gnormington@gopivotal.com>
+GoBella <caili_welcome@163.com>
+Goffert van Gool <goffert@phusion.nl>
+Goldwyn Rodrigues <rgoldwyn@suse.com>
+Gopikannan Venugopalsamy <gopikannan.venugopalsamy@gmail.com>
+Gosuke Miyashita <gosukenator@gmail.com>
+Gou Rao <gou@portworx.com>
+Govinda Fichtner <govinda.fichtner@googlemail.com>
+Grant Millar <rid@cylo.io>
+Grant Reaber <grant.reaber@gmail.com>
+Graydon Hoare <graydon@pobox.com>
+Greg Fausak <greg@tacodata.com>
+Greg Pflaum <gpflaum@users.noreply.github.com>
+Greg Stephens <greg@udon.org>
+Greg Thornton <xdissent@me.com>
+Grzegorz Jaśkiewicz <gj.jaskiewicz@gmail.com>
+Guilhem Lettron <guilhem+github@lettron.fr>
+Guilherme Salgado <gsalgado@gmail.com>
+Guillaume Dufour <gdufour.prestataire@voyages-sncf.com>
+Guillaume J. Charmes <guillaume.charmes@docker.com>
+guoxiuyan <guoxiuyan@huawei.com>
+Guri <odg0318@gmail.com>
+Gurjeet Singh <gurjeet@singh.im>
+Guruprasad <lgp171188@gmail.com>
+Gustav Sinder <gustav.sinder@gmail.com>
+gwx296173 <gaojing3@huawei.com>
+Günter Zöchbauer <guenter@gzoechbauer.com>
+Haichao Yang <yang.haichao@zte.com.cn>
+haikuoliu <haikuo@amazon.com>
+Hakan Özler <hakan.ozler@kodcu.com>
+Hamish Hutchings <moredhel@aoeu.me>
+Hannes Ljungberg <hannes@5monkeys.se>
+Hans Kristian Flaatten <hans@starefossen.com>
+Hans Rødtang <hansrodtang@gmail.com>
+Hao Shu Wei <haosw@cn.ibm.com>
+Hao Zhang <21521210@zju.edu.cn>
+Harald Albers <github@albersweb.de>
+Harald Niesche <harald@niesche.de>
+Harley Laue <losinggeneration@gmail.com>
+Harold Cooper <hrldcpr@gmail.com>
+Harrison Turton <harrisonturton@gmail.com>
+Harry Zhang <harryz@hyper.sh>
+Harshal Patil <harshal.patil@in.ibm.com>
+Harshal Patil <harshalp@linux.vnet.ibm.com>
+He Simei <hesimei@zju.edu.cn>
+He Xiaoxi <tossmilestone@gmail.com>
+He Xin <he_xinworld@126.com>
+heartlock <21521209@zju.edu.cn>
+Hector Castro <hectcastro@gmail.com>
+Helen Xie <chenjg@harmonycloud.cn>
+Henning Sprang <henning.sprang@gmail.com>
+Hiroshi Hatake <hatake@clear-code.com>
+Hiroyuki Sasagawa <hs19870702@gmail.com>
+Hobofan <goisser94@gmail.com>
+Hollie Teal <hollie@docker.com>
+Hong Xu <hong@topbug.net>
+Hongbin Lu <hongbin034@gmail.com>
+Hongxu Jia <hongxu.jia@windriver.com>
+Honza Pokorny <me@honza.ca>
+Hsing-Hui Hsu <hsinghui@amazon.com>
+hsinko <21551195@zju.edu.cn>
+Hu Keping <hukeping@huawei.com>
+Hu Tao <hutao@cn.fujitsu.com>
+HuanHuan Ye <logindaveye@gmail.com>
+Huanzhong Zhang <zhanghuanzhong90@gmail.com>
+Huayi Zhang <irachex@gmail.com>
+Hugo Duncan <hugo@hugoduncan.org>
+Hugo Marisco <0x6875676f@gmail.com>
+Hunter Blanks <hunter@twilio.com>
+huqun <huqun@zju.edu.cn>
+Huu Nguyen <huu@prismskylabs.com>
+hyeongkyu.lee <hyeongkyu.lee@navercorp.com>
+Hyzhou Zhy <hyzhou.zhy@alibaba-inc.com>
+Iago López Galeiras <iago@kinvolk.io>
+Ian Babrou <ibobrik@gmail.com>
+Ian Bishop <ianbishop@pace7.com>
+Ian Bull <irbull@gmail.com>
+Ian Calvert <ianjcalvert@gmail.com>
+Ian Campbell <ian.campbell@docker.com>
+Ian Chen <ianre657@gmail.com>
+Ian Lee <IanLee1521@gmail.com>
+Ian Main <imain@redhat.com>
+Ian Philpot <ian.philpot@microsoft.com>
+Ian Truslove <ian.truslove@gmail.com>
+Iavael <iavaelooeyt@gmail.com>
+Icaro Seara <icaro.seara@gmail.com>
+Ignacio Capurro <icapurrofagian@gmail.com>
+Igor Dolzhikov <bluesriverz@gmail.com>
+Igor Karpovich <i.karpovich@currencysolutions.com>
+Iliana Weller <iweller@amazon.com>
+Ilkka Laukkanen <ilkka@ilkka.io>
+Ilya Dmitrichenko <errordeveloper@gmail.com>
+Ilya Gusev <mail@igusev.ru>
+Ilya Khlopotov <ilya.khlopotov@gmail.com>
+imre Fitos <imre.fitos+github@gmail.com>
+inglesp <peter.inglesby@gmail.com>
+Ingo Gottwald <in.gottwald@gmail.com>
+Innovimax <innovimax@gmail.com>
+Isaac Dupree <antispam@idupree.com>
+Isabel Jimenez <contact.isabeljimenez@gmail.com>
+Isaiah Grace <irgkenya4@gmail.com>
+Isao Jonas <isao.jonas@gmail.com>
+Iskander Sharipov <quasilyte@gmail.com>
+Ivan Babrou <ibobrik@gmail.com>
+Ivan Fraixedes <ifcdev@gmail.com>
+Ivan Grcic <igrcic@gmail.com>
+Ivan Markin <sw@nogoegst.net>
+J Bruni <joaohbruni@yahoo.com.br>
+J. Nunn <jbnunn@gmail.com>
+Jack Danger Canty <jackdanger@squareup.com>
+Jack Laxson <jackjrabbit@gmail.com>
+Jacob Atzen <jacob@jacobatzen.dk>
+Jacob Edelman <edelman.jd@gmail.com>
+Jacob Tomlinson <jacob@tom.linson.uk>
+Jacob Vallejo <jakeev@amazon.com>
+Jacob Wen <jian.w.wen@oracle.com>
+Jaime Cepeda <jcepedavillamayor@gmail.com>
+Jaivish Kothari <janonymous.codevulture@gmail.com>
+Jake Champlin <jake.champlin.27@gmail.com>
+Jake Moshenko <jake@devtable.com>
+Jake Sanders <jsand@google.com>
+jakedt <jake@devtable.com>
+James Allen <jamesallen0108@gmail.com>
+James Carey <jecarey@us.ibm.com>
+James Carr <james.r.carr@gmail.com>
+James DeFelice <james.defelice@ishisystems.com>
+James Harrison Fisher <jameshfisher@gmail.com>
+James Kyburz <james.kyburz@gmail.com>
+James Kyle <james@jameskyle.org>
+James Lal <james@lightsofapollo.com>
+James Mills <prologic@shortcircuit.net.au>
+James Nesbitt <jnesbitt@mirantis.com>
+James Nugent <james@jen20.com>
+James Turnbull <james@lovedthanlost.net>
+James Watkins-Harvey <jwatkins@progi-media.com>
+Jamie Hannaford <jamie@limetree.org>
+Jamshid Afshar <jafshar@yahoo.com>
+Jan Chren <dev.rindeal@gmail.com>
+Jan Keromnes <janx@linux.com>
+Jan Koprowski <jan.koprowski@gmail.com>
+Jan Pazdziora <jpazdziora@redhat.com>
+Jan Toebes <jan@toebes.info>
+Jan-Gerd Tenberge <janten@gmail.com>
+Jan-Jaap Driessen <janjaapdriessen@gmail.com>
+Jana Radhakrishnan <mrjana@docker.com>
+Jannick Fahlbusch <git@jf-projects.de>
+Januar Wayong <januar@gmail.com>
+Jared Biel <jared.biel@bolderthinking.com>
+Jared Hocutt <jaredh@netapp.com>
+Jaroslaw Zabiello <hipertracker@gmail.com>
+jaseg <jaseg@jaseg.net>
+Jasmine Hegman <jasmine@jhegman.com>
+Jason A. Donenfeld <Jason@zx2c4.com>
+Jason Divock <jdivock@gmail.com>
+Jason Giedymin <jasong@apache.org>
+Jason Green <Jason.Green@AverInformatics.Com>
+Jason Hall <imjasonh@gmail.com>
+Jason Heiss <jheiss@aput.net>
+Jason Livesay <ithkuil@gmail.com>
+Jason McVetta <jason.mcvetta@gmail.com>
+Jason Plum <jplum@devonit.com>
+Jason Shepherd <jason@jasonshepherd.net>
+Jason Smith <jasonrichardsmith@gmail.com>
+Jason Sommer <jsdirv@gmail.com>
+Jason Stangroome <jason@codeassassin.com>
+jaxgeller <jacksongeller@gmail.com>
+Jay <imjching@hotmail.com>
+Jay <teguhwpurwanto@gmail.com>
+Jay Kamat <github@jgkamat.33mail.com>
+Jean Rouge <rougej+github@gmail.com>
+Jean-Baptiste Barth <jeanbaptiste.barth@gmail.com>
+Jean-Baptiste Dalido <jeanbaptiste@appgratis.com>
+Jean-Christophe Berthon <huygens@berthon.eu>
+Jean-Paul Calderone <exarkun@twistedmatrix.com>
+Jean-Pierre Huynh <jean-pierre.huynh@ounet.fr>
+Jean-Tiare Le Bigot <jt@yadutaf.fr>
+Jeeva S. Chelladhurai <sjeeva@gmail.com>
+Jeff Anderson <jeff@docker.com>
+Jeff Hajewski <jeff.hajewski@gmail.com>
+Jeff Johnston <jeff.johnston.mn@gmail.com>
+Jeff Lindsay <progrium@gmail.com>
+Jeff Mickey <j@codemac.net>
+Jeff Minard <jeff@creditkarma.com>
+Jeff Nickoloff <jeff.nickoloff@gmail.com>
+Jeff Silberman <jsilberm@gmail.com>
+Jeff Welch <whatthejeff@gmail.com>
+Jeffrey Bolle <jeffreybolle@gmail.com>
+Jeffrey Morgan <jmorganca@gmail.com>
+Jeffrey van Gogh <jvg@google.com>
+Jenny Gebske <jennifer@gebske.de>
+Jeremy Chambers <jeremy@thehipbot.com>
+Jeremy Grosser <jeremy@synack.me>
+Jeremy Price <jprice.rhit@gmail.com>
+Jeremy Qian <vanpire110@163.com>
+Jeremy Unruh <jeremybunruh@gmail.com>
+Jeremy Yallop <yallop@docker.com>
+Jeroen Franse <jeroenfranse@gmail.com>
+Jeroen Jacobs <github@jeroenj.be>
+Jesse Dearing <jesse.dearing@gmail.com>
+Jesse Dubay <jesse@thefortytwo.net>
+Jessica Frazelle <jess@oxide.computer>
+Jezeniel Zapanta <jpzapanta22@gmail.com>
+Jhon Honce <jhonce@redhat.com>
+Ji.Zhilong <zhilongji@gmail.com>
+Jian Liao <jliao@alauda.io>
+Jian Zhang <zhangjian.fnst@cn.fujitsu.com>
+Jiang Jinyang <jjyruby@gmail.com>
+Jie Luo <luo612@zju.edu.cn>
+Jie Ma <jienius@outlook.com>
+Jihyun Hwang <jhhwang@telcoware.com>
+Jilles Oldenbeuving <ojilles@gmail.com>
+Jim Alateras <jima@comware.com.au>
+Jim Ehrismann <jim.ehrismann@docker.com>
+Jim Galasyn <jim.galasyn@docker.com>
+Jim Minter <jminter@redhat.com>
+Jim Perrin <jperrin@centos.org>
+Jimmy Cuadra <jimmy@jimmycuadra.com>
+Jimmy Puckett <jimmy.puckett@spinen.com>
+Jimmy Song <rootsongjc@gmail.com>
+Jinsoo Park <cellpjs@gmail.com>
+Jintao Zhang <zhangjintao9020@gmail.com>
+Jiri Appl <jiria@microsoft.com>
+Jiri Popelka <jpopelka@redhat.com>
+Jiuyue Ma <majiuyue@huawei.com>
+Jiří Župka <jzupka@redhat.com>
+Joao Fernandes <joao.fernandes@docker.com>
+Joao Trindade <trindade.joao@gmail.com>
+Joe Beda <joe.github@bedafamily.com>
+Joe Doliner <jdoliner@pachyderm.io>
+Joe Ferguson <joe@infosiftr.com>
+Joe Gordon <joe.gordon0@gmail.com>
+Joe Shaw <joe@joeshaw.org>
+Joe Van Dyk <joe@tanga.com>
+Joel Friedly <joelfriedly@gmail.com>
+Joel Handwell <joelhandwell@gmail.com>
+Joel Hansson <joel.hansson@ecraft.com>
+Joel Wurtz <jwurtz@jolicode.com>
+Joey Geiger <jgeiger@gmail.com>
+Joey Geiger <jgeiger@users.noreply.github.com>
+Joey Gibson <joey@joeygibson.com>
+Joffrey F <joffrey@docker.com>
+Johan Euphrosine <proppy@google.com>
+Johan Rydberg <johan.rydberg@gmail.com>
+Johanan Lieberman <johanan.lieberman@gmail.com>
+Johannes 'fish' Ziemke <github@freigeist.org>
+John Costa <john.costa@gmail.com>
+John Feminella <jxf@jxf.me>
+John Gardiner Myers <jgmyers@proofpoint.com>
+John Gossman <johngos@microsoft.com>
+John Harris <john@johnharris.io>
+John Howard <github@lowenna.com>
+John Laswell <john.n.laswell@gmail.com>
+John Maguire <jmaguire@duosecurity.com>
+John Mulhausen <john@docker.com>
+John OBrien III <jobrieniii@yahoo.com>
+John Starks <jostarks@microsoft.com>
+John Stephens <johnstep@docker.com>
+John Tims <john.k.tims@gmail.com>
+John V. Martinez <jvmatl@gmail.com>
+John Warwick <jwarwick@gmail.com>
+John Willis <john.willis@docker.com>
+Jon Johnson <jonjohnson@google.com>
+Jon Surrell <jon.surrell@gmail.com>
+Jon Wedaman <jweede@gmail.com>
+Jonas Dohse <jonas@dohse.ch>
+Jonas Heinrich <Jonas@JonasHeinrich.com>
+Jonas Pfenniger <jonas@pfenniger.name>
+Jonathan A. Schweder <jonathanschweder@gmail.com>
+Jonathan A. Sternberg <jonathansternberg@gmail.com>
+Jonathan Boulle <jonathanboulle@gmail.com>
+Jonathan Camp <jonathan@irondojo.com>
+Jonathan Choy <jonathan.j.choy@gmail.com>
+Jonathan Dowland <jon+github@alcopop.org>
+Jonathan Lebon <jlebon@redhat.com>
+Jonathan Lomas <jonathan@floatinglomas.ca>
+Jonathan McCrohan <jmccrohan@gmail.com>
+Jonathan Mueller <j.mueller@apoveda.ch>
+Jonathan Pares <jonathanpa@users.noreply.github.com>
+Jonathan Rudenberg <jonathan@titanous.com>
+Jonathan Stoppani <jonathan.stoppani@divio.com>
+Jonh Wendell <jonh.wendell@redhat.com>
+Joni Sar <yoni@cocycles.com>
+Joost Cassee <joost@cassee.net>
+Jordan Arentsen <blissdev@gmail.com>
+Jordan Jennings <jjn2009@gmail.com>
+Jordan Sissel <jls@semicomplete.com>
+Jorge Marin <chipironcin@users.noreply.github.com>
+Jorit Kleine-Möllhoff <joppich@bricknet.de>
+Jose Diaz-Gonzalez <email@josediazgonzalez.com>
+Joseph Anthony Pasquale Holsten <joseph@josephholsten.com>
+Joseph Hager <ajhager@gmail.com>
+Joseph Kern <jkern@semafour.net>
+Joseph Rothrock <rothrock@rothrock.org>
+Josh <jokajak@gmail.com>
+Josh Bodah <jb3689@yahoo.com>
+Josh Bonczkowski <josh.bonczkowski@gmail.com>
+Josh Chorlton <jchorlton@gmail.com>
+Josh Eveleth <joshe@opendns.com>
+Josh Hawn <josh.hawn@docker.com>
+Josh Horwitz <horwitz@addthis.com>
+Josh Poimboeuf <jpoimboe@redhat.com>
+Josh Soref <jsoref@gmail.com>
+Josh Wilson <josh.wilson@fivestars.com>
+Josiah Kiehl <jkiehl@riotgames.com>
+José Tomás Albornoz <jojo@eljojo.net>
+Joyce Jang <mail@joycejang.com>
+JP <jpellerin@leapfrogonline.com>
+Julian Taylor <jtaylor.debian@googlemail.com>
+Julien Barbier <write0@gmail.com>
+Julien Bisconti <veggiemonk@users.noreply.github.com>
+Julien Bordellier <julienbordellier@gmail.com>
+Julien Dubois <julien.dubois@gmail.com>
+Julien Kassar <github@kassisol.com>
+Julien Maitrehenry <julien.maitrehenry@me.com>
+Julien Pervillé <julien.perville@perfect-memory.com>
+Julien Pivotto <roidelapluie@inuits.eu>
+Julio Guerra <julio@sqreen.com>
+Julio Montes <imc.coder@gmail.com>
+Jun-Ru Chang <jrjang@gmail.com>
+Jussi Nummelin <jussi.nummelin@gmail.com>
+Justas Brazauskas <brazauskasjustas@gmail.com>
+Justen Martin <jmart@the-coder.com>
+Justin Cormack <justin.cormack@docker.com>
+Justin Force <justin.force@gmail.com>
+Justin Menga <justin.menga@gmail.com>
+Justin Plock <jplock@users.noreply.github.com>
+Justin Simonelis <justin.p.simonelis@gmail.com>
+Justin Terry <juterry@microsoft.com>
+Justyn Temme <justyntemme@gmail.com>
+Jyrki Puttonen <jyrkiput@gmail.com>
+Jérémy Leherpeur <amenophis@leherpeur.net>
+Jérôme Petazzoni <jerome.petazzoni@docker.com>
+Jörg Thalheim <joerg@higgsboson.tk>
+K. Heller <pestophagous@gmail.com>
+Kai Blin <kai@samba.org>
+Kai Qiang Wu (Kennan) <wkq5325@gmail.com>
+Kamil Domański <kamil@domanski.co>
+Kamjar Gerami <kami.gerami@gmail.com>
+Kanstantsin Shautsou <kanstantsin.sha@gmail.com>
+Kara Alexandra <kalexandra@us.ibm.com>
+Karan Lyons <karan@karanlyons.com>
+Kareem Khazem <karkhaz@karkhaz.com>
+kargakis <kargakis@users.noreply.github.com>
+Karl Grzeszczak <karlgrz@gmail.com>
+Karol Duleba <mr.fuxi@gmail.com>
+Karthik Karanth <karanth.karthik@gmail.com>
+Karthik Nayak <karthik.188@gmail.com>
+Kasper Fabæch Brandt <poizan@poizan.dk>
+Kate Heddleston <kate.heddleston@gmail.com>
+Katie McLaughlin <katie@glasnt.com>
+Kato Kazuyoshi <kato.kazuyoshi@gmail.com>
+Katrina Owen <katrina.owen@gmail.com>
+Kawsar Saiyeed <kawsar.saiyeed@projiris.com>
+Kay Yan <kay.yan@daocloud.io>
+kayrus <kay.diam@gmail.com>
+Kazuhiro Sera <seratch@gmail.com>
+Ke Li <kel@splunk.com>
+Ke Xu <leonhartx.k@gmail.com>
+Kei Ohmura <ohmura.kei@gmail.com>
+Keith Hudgins <greenman@greenman.org>
+Keli Hu <dev@keli.hu>
+Ken Cochrane <kencochrane@gmail.com>
+Ken Herner <kherner@progress.com>
+Ken ICHIKAWA <ichikawa.ken@jp.fujitsu.com>
+Ken Reese <krrgithub@gmail.com>
+Kenfe-Mickaël Laventure <mickael.laventure@gmail.com>
+Kenjiro Nakayama <nakayamakenjiro@gmail.com>
+Kent Johnson <kentoj@gmail.com>
+Kenta Tada <Kenta.Tada@sony.com>
+Kevin "qwazerty" Houdebert <kevin.houdebert@gmail.com>
+Kevin Burke <kev@inburke.com>
+Kevin Clark <kevin.clark@gmail.com>
+Kevin Feyrer <kevin.feyrer@btinternet.com>
+Kevin J. Lynagh <kevin@keminglabs.com>
+Kevin Jing Qiu <kevin@idempotent.ca>
+Kevin Kern <kaiwentan@harmonycloud.cn>
+Kevin Menard <kevin@nirvdrum.com>
+Kevin Meredith <kevin.m.meredith@gmail.com>
+Kevin P. Kucharczyk <kevinkucharczyk@gmail.com>
+Kevin Parsons <kevpar@microsoft.com>
+Kevin Richardson <kevin@kevinrichardson.co>
+Kevin Shi <kshi@andrew.cmu.edu>
+Kevin Wallace <kevin@pentabarf.net>
+Kevin Yap <me@kevinyap.ca>
+Keyvan Fatehi <keyvanfatehi@gmail.com>
+kies <lleelm@gmail.com>
+Kim BKC Carlbacker <kim.carlbacker@gmail.com>
+Kim Eik <kim@heldig.org>
+Kimbro Staken <kstaken@kstaken.com>
+Kir Kolyshkin <kolyshkin@gmail.com>
+Kiran Gangadharan <kiran.daredevil@gmail.com>
+Kirill SIbirev <l0kix2@gmail.com>
+knappe <tyler.knappe@gmail.com>
+Kohei Tsuruta <coheyxyz@gmail.com>
+Koichi Shiraishi <k@zchee.io>
+Konrad Kleine <konrad.wilhelm.kleine@gmail.com>
+Konstantin Gribov <grossws@gmail.com>
+Konstantin L <sw.double@gmail.com>
+Konstantin Pelykh <kpelykh@zettaset.com>
+Krasi Georgiev <krasi@vip-consult.solutions>
+Krasimir Georgiev <support@vip-consult.co.uk>
+Kris-Mikael Krister <krismikael@protonmail.com>
+Kristian Haugene <kristian.haugene@capgemini.com>
+Kristina Zabunova <triara.xiii@gmail.com>
+Krystian Wojcicki <kwojcicki@sympatico.ca>
+Kun Zhang <zkazure@gmail.com>
+Kunal Kushwaha <kushwaha_kunal_v7@lab.ntt.co.jp>
+Kunal Tyagi <tyagi.kunal@live.com>
+Kyle Conroy <kyle.j.conroy@gmail.com>
+Kyle Linden <linden.kyle@gmail.com>
+Kyle Wuolle <kyle.wuolle@gmail.com>
+kyu <leehk1227@gmail.com>
+Lachlan Coote <lcoote@vmware.com>
+Lai Jiangshan <jiangshanlai@gmail.com>
+Lajos Papp <lajos.papp@sequenceiq.com>
+Lakshan Perera <lakshan@laktek.com>
+Lalatendu Mohanty <lmohanty@redhat.com>
+Lance Chen <cyen0312@gmail.com>
+Lance Kinley <lkinley@loyaltymethods.com>
+Lars Butler <Lars.Butler@gmail.com>
+Lars Kellogg-Stedman <lars@redhat.com>
+Lars R. Damerow <lars@pixar.com>
+Lars-Magnus Skog <ralphtheninja@riseup.net>
+Laszlo Meszaros <lacienator@gmail.com>
+Laura Frank <ljfrank@gmail.com>
+Laurent Erignoux <lerignoux@gmail.com>
+Laurie Voss <github@seldo.com>
+Leandro Siqueira <leandro.siqueira@gmail.com>
+Lee Chao <932819864@qq.com>
+Lee, Meng-Han <sunrisedm4@gmail.com>
+leeplay <hyeongkyu.lee@navercorp.com>
+Lei Gong <lgong@alauda.io>
+Lei Jitang <leijitang@huawei.com>
+Len Weincier <len@cloudafrica.net>
+Lennie <github@consolejunkie.net>
+Leo Gallucci <elgalu3@gmail.com>
+Leszek Kowalski <github@leszekkowalski.pl>
+Levi Blackstone <levi.blackstone@rackspace.com>
+Levi Gross <levi@levigross.com>
+Lewis Daly <lewisdaly@me.com>
+Lewis Marshall <lewis@lmars.net>
+Lewis Peckover <lew+github@lew.io>
+Li Yi <denverdino@gmail.com>
+Liam Macgillavry <liam@kumina.nl>
+Liana Lo <liana.lixia@gmail.com>
+Liang Mingqiang <mqliang.zju@gmail.com>
+Liang-Chi Hsieh <viirya@gmail.com>
+Liao Qingwei <liaoqingwei@huawei.com>
+Lifubang <lifubang@acmcoder.com>
+Lihua Tang <lhtang@alauda.io>
+Lily Guo <lily.guo@docker.com>
+limsy <seongyeol37@gmail.com>
+Lin Lu <doraalin@163.com>
+LingFaKe <lingfake@huawei.com>
+Linus Heckemann <lheckemann@twig-world.com>
+Liran Tal <liran.tal@gmail.com>
+Liron Levin <liron@twistlock.com>
+Liu Bo <bo.li.liu@oracle.com>
+Liu Hua <sdu.liu@huawei.com>
+liwenqi <vikilwq@zju.edu.cn>
+lixiaobing10051267 <li.xiaobing1@zte.com.cn>
+Liz Zhang <lizzha@microsoft.com>
+LIZAO LI <lzlarryli@gmail.com>
+Lizzie Dixon <_@lizzie.io>
+Lloyd Dewolf <foolswisdom@gmail.com>
+Lokesh Mandvekar <lsm5@fedoraproject.org>
+longliqiang88 <394564827@qq.com>
+Lorenz Leutgeb <lorenz.leutgeb@gmail.com>
+Lorenzo Fontana <fontanalorenz@gmail.com>
+Lotus Fenn <fenn.lotus@gmail.com>
+Louis Delossantos <ldelossa.ld@gmail.com>
+Louis Opter <kalessin@kalessin.fr>
+Luca Favatella <luca.favatella@erlang-solutions.com>
+Luca Marturana <lucamarturana@gmail.com>
+Luca Orlandi <luca.orlandi@gmail.com>
+Luca-Bogdan Grigorescu <Luca-Bogdan Grigorescu>
+Lucas Chan <lucas-github@lucaschan.com>
+Lucas Chi <lucas@teacherspayteachers.com>
+Lucas Molas <lmolas@fundacionsadosky.org.ar>
+Lucas Silvestre <lukas.silvestre@gmail.com>
+Luciano Mores <leslau@gmail.com>
+Luis Martínez de Bartolomé Izquierdo <lmartinez@biicode.com>
+Luiz Svoboda <luizek@gmail.com>
+Lukas Heeren <lukas-heeren@hotmail.com>
+Lukas Waslowski <cr7pt0gr4ph7@gmail.com>
+lukaspustina <lukas.pustina@centerdevice.com>
+Lukasz Zajaczkowski <Lukasz.Zajaczkowski@ts.fujitsu.com>
+Luke Marsden <me@lukemarsden.net>
+Lyn <energylyn@zju.edu.cn>
+Lynda O'Leary <lyndaoleary29@gmail.com>
+Lénaïc Huard <lhuard@amadeus.com>
+Ma Müller <mueller-ma@users.noreply.github.com>
+Ma Shimiao <mashimiao.fnst@cn.fujitsu.com>
+Mabin <bin.ma@huawei.com>
+Madhan Raj Mookkandy <MadhanRaj.Mookkandy@microsoft.com>
+Madhav Puri <madhav.puri@gmail.com>
+Madhu Venugopal <madhu@socketplane.io>
+Mageee <fangpuyi@foxmail.com>
+Mahesh Tiyyagura <tmahesh@gmail.com>
+malnick <malnick@gmail..com>
+Malte Janduda <mail@janduda.net>
+Manfred Touron <m@42.am>
+Manfred Zabarauskas <manfredas@zabarauskas.com>
+Manjunath A Kumatagi <mkumatag@in.ibm.com>
+Mansi Nahar <mmn4185@rit.edu>
+Manuel Meurer <manuel@krautcomputing.com>
+Manuel Rüger <manuel@rueg.eu>
+Manuel Woelker <github@manuel.woelker.org>
+mapk0y <mapk0y@gmail.com>
+Marc Abramowitz <marc@marc-abramowitz.com>
+Marc Kuo <kuomarc2@gmail.com>
+Marc Tamsky <mtamsky@gmail.com>
+Marcel Edmund Franke <marcel.edmund.franke@gmail.com>
+Marcelo Horacio Fortino <info@fortinux.com>
+Marcelo Salazar <chelosalazar@gmail.com>
+Marco Hennings <marco.hennings@freiheit.com>
+Marcus Cobden <mcobden@cisco.com>
+Marcus Farkas <toothlessgear@finitebox.com>
+Marcus Linke <marcus.linke@gmx.de>
+Marcus Martins <marcus@docker.com>
+Marcus Ramberg <marcus@nordaaker.com>
+Marek Goldmann <marek.goldmann@gmail.com>
+Marian Marinov <mm@yuhu.biz>
+Marianna Tessel <mtesselh@gmail.com>
+Mario Loriedo <mario.loriedo@gmail.com>
+Marius Gundersen <me@mariusgundersen.net>
+Marius Sturm <marius@graylog.com>
+Marius Voila <marius.voila@gmail.com>
+Mark Allen <mrallen1@yahoo.com>
+Mark Jeromin <mark.jeromin@sysfrog.net>
+Mark McGranaghan <mmcgrana@gmail.com>
+Mark McKinstry <mmckinst@umich.edu>
+Mark Milstein <mark@epiloque.com>
+Mark Oates <fl0yd@me.com>
+Mark Parker <godefroi@users.noreply.github.com>
+Mark West <markewest@gmail.com>
+Markan Patel <mpatel678@gmail.com>
+Marko Mikulicic <mmikulicic@gmail.com>
+Marko Tibold <marko@tibold.nl>
+Markus Fix <lispmeister@gmail.com>
+Markus Kortlang <hyp3rdino@googlemail.com>
+Martijn Dwars <ikben@martijndwars.nl>
+Martijn van Oosterhout <kleptog@svana.org>
+Martin Honermeyer <maze@strahlungsfrei.de>
+Martin Kelly <martin@surround.io>
+Martin Mosegaard Amdisen <martin.amdisen@praqma.com>
+Martin Muzatko <martin@happy-css.com>
+Martin Redmond <redmond.martin@gmail.com>
+Mary Anthony <mary.anthony@docker.com>
+Masahito Zembutsu <zembutsu@users.noreply.github.com>
+Masato Ohba <over.rye@gmail.com>
+Masayuki Morita <minamijoyo@gmail.com>
+Mason Malone <mason.malone@gmail.com>
+Mateusz Sulima <sulima.mateusz@gmail.com>
+Mathias Monnerville <mathias@monnerville.com>
+Mathieu Champlon <mathieu.champlon@docker.com>
+Mathieu Le Marec - Pasquet <kiorky@cryptelium.net>
+Mathieu Parent <math.parent@gmail.com>
+Matt Apperson <me@mattapperson.com>
+Matt Bachmann <bachmann.matt@gmail.com>
+Matt Bentley <matt.bentley@docker.com>
+Matt Haggard <haggardii@gmail.com>
+Matt Hoyle <matt@deployable.co>
+Matt McCormick <matt.mccormick@kitware.com>
+Matt Moore <mattmoor@google.com>
+Matt Richardson <matt@redgumtech.com.au>
+Matt Rickard <mrick@google.com>
+Matt Robenolt <matt@ydekproductions.com>
+Matt Schurenko <matt.schurenko@gmail.com>
+Matt Williams <mattyw@me.com>
+Matthew Heon <mheon@redhat.com>
+Matthew Lapworth <matthewl@bit-shift.net>
+Matthew Mayer <matthewkmayer@gmail.com>
+Matthew Mosesohn <raytrac3r@gmail.com>
+Matthew Mueller <mattmuelle@gmail.com>
+Matthew Riley <mattdr@google.com>
+Matthias Klumpp <matthias@tenstral.net>
+Matthias Kühnle <git.nivoc@neverbox.com>
+Matthias Rampke <mr@soundcloud.com>
+Matthieu Hauglustaine <matt.hauglustaine@gmail.com>
+Mattias Jernberg <nostrad@gmail.com>
+Mauricio Garavaglia <mauricio@medallia.com>
+mauriyouth <mauriyouth@gmail.com>
+Max Harmathy <max.harmathy@web.de>
+Max Shytikov <mshytikov@gmail.com>
+Maxim Fedchyshyn <sevmax@gmail.com>
+Maxim Ivanov <ivanov.maxim@gmail.com>
+Maxim Kulkin <mkulkin@mirantis.com>
+Maxim Treskin <zerthurd@gmail.com>
+Maxime Petazzoni <max@signalfuse.com>
+Maximiliano Maccanti <maccanti@amazon.com>
+Maxwell <csuhp007@gmail.com>
+Meaglith Ma <genedna@gmail.com>
+meejah <meejah@meejah.ca>
+Megan Kostick <mkostick@us.ibm.com>
+Mehul Kar <mehul.kar@gmail.com>
+Mei ChunTao <mei.chuntao@zte.com.cn>
+Mengdi Gao <usrgdd@gmail.com>
+Mert Yazıcıoğlu <merty@users.noreply.github.com>
+mgniu <mgniu@dataman-inc.com>
+Micah Zoltu <micah@newrelic.com>
+Michael A. Smith <michael@smith-li.com>
+Michael Bridgen <mikeb@squaremobius.net>
+Michael Brown <michael@netdirect.ca>
+Michael Chiang <mchiang@docker.com>
+Michael Crosby <michael@docker.com>
+Michael Currie <mcurrie@bruceforceresearch.com>
+Michael Friis <friism@gmail.com>
+Michael Gorsuch <gorsuch@github.com>
+Michael Grauer <michael.grauer@kitware.com>
+Michael Holzheu <holzheu@linux.vnet.ibm.com>
+Michael Hudson-Doyle <michael.hudson@canonical.com>
+Michael Huettermann <michael@huettermann.net>
+Michael Irwin <mikesir87@gmail.com>
+Michael Käufl <docker@c.michael-kaeufl.de>
+Michael Neale <michael.neale@gmail.com>
+Michael Nussbaum <michael.nussbaum@getbraintree.com>
+Michael Prokop <github@michael-prokop.at>
+Michael Scharf <github@scharf.gr>
+Michael Spetsiotis <michael_spets@hotmail.com>
+Michael Stapelberg <michael+gh@stapelberg.de>
+Michael Steinert <mike.steinert@gmail.com>
+Michael Thies <michaelthies78@gmail.com>
+Michael West <mwest@mdsol.com>
+Michael Zhao <michael.zhao@arm.com>
+Michal Fojtik <mfojtik@redhat.com>
+Michal Gebauer <mishak@mishak.net>
+Michal Jemala <michal.jemala@gmail.com>
+Michal Minář <miminar@redhat.com>
+Michal Wieczorek <wieczorek-michal@wp.pl>
+Michaël Pailloncy <mpapo.dev@gmail.com>
+Michał Czeraszkiewicz <czerasz@gmail.com>
+Michał Gryko <github@odkurzacz.org>
+Michiel de Jong <michiel@unhosted.org>
+Mickaël Fortunato <morsi.morsicus@gmail.com>
+Mickaël Remars <mickael@remars.com>
+Miguel Angel Fernández <elmendalerenda@gmail.com>
+Miguel Morales <mimoralea@gmail.com>
+Mihai Borobocea <MihaiBorob@gmail.com>
+Mihuleacc Sergiu <mihuleac.sergiu@gmail.com>
+Mike Brown <brownwm@us.ibm.com>
+Mike Bush <mpbush@gmail.com>
+Mike Casas <mkcsas0@gmail.com>
+Mike Chelen <michael.chelen@gmail.com>
+Mike Danese <mikedanese@google.com>
+Mike Dillon <mike@embody.org>
+Mike Dougherty <mike.dougherty@docker.com>
+Mike Estes <mike.estes@logos.com>
+Mike Gaffney <mike@uberu.com>
+Mike Goelzer <mike.goelzer@docker.com>
+Mike Leone <mleone896@gmail.com>
+Mike Lundy <mike@fluffypenguin.org>
+Mike MacCana <mike.maccana@gmail.com>
+Mike Naberezny <mike@naberezny.com>
+Mike Snitzer <snitzer@redhat.com>
+mikelinjie <294893458@qq.com>
+Mikhail Sobolev <mss@mawhrin.net>
+Miklos Szegedi <miklos.szegedi@cloudera.com>
+Milind Chawre <milindchawre@gmail.com>
+Miloslav Trmač <mitr@redhat.com>
+mingqing <limingqing@cyou-inc.com>
+Mingzhen Feng <fmzhen@zju.edu.cn>
+Misty Stanley-Jones <misty@docker.com>
+Mitch Capper <mitch.capper@gmail.com>
+Mizuki Urushida <z11111001011@gmail.com>
+mlarcher <github@ringabell.org>
+Mohammad Banikazemi <mb@us.ibm.com>
+Mohammad Nasirifar <farnasirim@gmail.com>
+Mohammed Aaqib Ansari <maaquib@gmail.com>
+Mohit Soni <mosoni@ebay.com>
+Moorthy RS <rsmoorthy@gmail.com>
+Morgan Bauer <mbauer@us.ibm.com>
+Morgante Pell <morgante.pell@morgante.net>
+Morgy93 <thomas@ulfertsprygoda.de>
+Morten Siebuhr <sbhr@sbhr.dk>
+Morton Fox <github@qslw.com>
+Moysés Borges <moysesb@gmail.com>
+mrfly <mr.wrfly@gmail.com>
+Mrunal Patel <mrunalp@gmail.com>
+Muayyad Alsadi <alsadi@gmail.com>
+Mustafa Akın <mustafa91@gmail.com>
+Muthukumar R <muthur@gmail.com>
+Máximo Cuadros <mcuadros@gmail.com>
+Médi-Rémi Hashim <medimatrix@users.noreply.github.com>
+Nace Oroz <orkica@gmail.com>
+Nahum Shalman <nshalman@omniti.com>
+Nakul Pathak <nakulpathak3@hotmail.com>
+Nalin Dahyabhai <nalin@redhat.com>
+Nan Monnand Deng <monnand@gmail.com>
+Naoki Orii <norii@cs.cmu.edu>
+Natalie Parker <nparker@omnifone.com>
+Natanael Copa <natanael.copa@docker.com>
+Natasha Jarus <linuxmercedes@gmail.com>
+Nate Brennand <nate.brennand@clever.com>
+Nate Eagleson <nate@nateeag.com>
+Nate Jones <nate@endot.org>
+Nathan Hsieh <hsieh.nathan@gmail.com>
+Nathan Kleyn <nathan@nathankleyn.com>
+Nathan LeClaire <nathan.leclaire@docker.com>
+Nathan McCauley <nathan.mccauley@docker.com>
+Nathan Williams <nathan@teamtreehouse.com>
+Naveed Jamil <naveed.jamil@tenpearls.com>
+Neal McBurnett <neal@mcburnett.org>
+Neil Horman <nhorman@tuxdriver.com>
+Neil Peterson <neilpeterson@outlook.com>
+Nelson Chen <crazysim@gmail.com>
+Neyazul Haque <nuhaque@gmail.com>
+Nghia Tran <nghia@google.com>
+Niall O'Higgins <niallo@unworkable.org>
+Nicholas E. Rabenau <nerab@gmx.at>
+Nick Adcock <nick.adcock@docker.com>
+Nick DeCoursin <n.decoursin@foodpanda.com>
+Nick Irvine <nfirvine@nfirvine.com>
+Nick Neisen <nwneisen@gmail.com>
+Nick Parker <nikaios@gmail.com>
+Nick Payne <nick@kurai.co.uk>
+Nick Russo <nicholasjamesrusso@gmail.com>
+Nick Stenning <nick.stenning@digital.cabinet-office.gov.uk>
+Nick Stinemates <nick@stinemates.org>
+NickrenREN <yuquan.ren@easystack.cn>
+Nicola Kabar <nicolaka@gmail.com>
+Nicolas Borboën <ponsfrilus@gmail.com>
+Nicolas De Loof <nicolas.deloof@gmail.com>
+Nicolas Dudebout <nicolas.dudebout@gatech.edu>
+Nicolas Goy <kuon@goyman.com>
+Nicolas Kaiser <nikai@nikai.net>
+Nicolas Sterchele <sterchele.nicolas@gmail.com>
+Nicolas V Castet <nvcastet@us.ibm.com>
+Nicolás Hock Isaza <nhocki@gmail.com>
+Nigel Poulton <nigelpoulton@hotmail.com>
+Nik Nyby <nikolas@gnu.org>
+Nikhil Chawla <chawlanikhil24@gmail.com>
+NikolaMandic <mn080202@gmail.com>
+Nikolas Garofil <nikolas.garofil@uantwerpen.be>
+Nikolay Edigaryev <edigaryev@gmail.com>
+Nikolay Milovanov <nmil@itransformers.net>
+Nirmal Mehta <nirmalkmehta@gmail.com>
+Nishant Totla <nishanttotla@gmail.com>
+NIWA Hideyuki <niwa.niwa@nifty.ne.jp>
+Noah Meyerhans <nmeyerha@amazon.com>
+Noah Treuhaft <noah.treuhaft@docker.com>
+NobodyOnSE <ich@sektor.selfip.com>
+noducks <onemannoducks@gmail.com>
+Nolan Darilek <nolan@thewordnerd.info>
+Noriki Nakamura <noriki.nakamura@miraclelinux.com>
+nponeccop <andy.melnikov@gmail.com>
+Nuutti Kotivuori <naked@iki.fi>
+nzwsch <hi@nzwsch.com>
+O.S. Tezer <ostezer@gmail.com>
+objectified <objectified@gmail.com>
+Odin Ugedal <odin@ugedal.com>
+Oguz Bilgic <fisyonet@gmail.com>
+Oh Jinkyun <tintypemolly@gmail.com>
+Ohad Schneider <ohadschn@users.noreply.github.com>
+ohmystack <jun.jiang02@ele.me>
+Ole Reifschneider <mail@ole-reifschneider.de>
+Oliver Neal <ItsVeryWindy@users.noreply.github.com>
+Oliver Reason <oli@overrateddev.co>
+Olivier Gambier <dmp42@users.noreply.github.com>
+Olle Jonsson <olle.jonsson@gmail.com>
+Olli Janatuinen <olli.janatuinen@gmail.com>
+Olly Pomeroy <oppomeroy@gmail.com>
+Omri Shiv <Omri.Shiv@teradata.com>
+Oriol Francès <oriolfa@gmail.com>
+Oskar Niburski <oskarniburski@gmail.com>
+Otto Kekäläinen <otto@seravo.fi>
+Ouyang Liduo <oyld0210@163.com>
+Ovidio Mallo <ovidio.mallo@gmail.com>
+Panagiotis Moustafellos <pmoust@elastic.co>
+Paolo G. Giarrusso <p.giarrusso@gmail.com>
+Pascal <pascalgn@users.noreply.github.com>
+Pascal Bach <pascal.bach@siemens.com>
+Pascal Borreli <pascal@borreli.com>
+Pascal Hartig <phartig@rdrei.net>
+Patrick Böänziger <patrick.baenziger@bsi-software.com>
+Patrick Devine <patrick.devine@docker.com>
+Patrick Hemmer <patrick.hemmer@gmail.com>
+Patrick Stapleton <github@gdi2290.com>
+Patrik Cyvoct <patrik@ptrk.io>
+pattichen <craftsbear@gmail.com>
+Paul <paul9869@gmail.com>
+paul <paul@inkling.com>
+Paul Annesley <paul@annesley.cc>
+Paul Bellamy <paul.a.bellamy@gmail.com>
+Paul Bowsher <pbowsher@globalpersonals.co.uk>
+Paul Furtado <pfurtado@hubspot.com>
+Paul Hammond <paul@paulhammond.org>
+Paul Jimenez <pj@place.org>
+Paul Kehrer <paul.l.kehrer@gmail.com>
+Paul Lietar <paul@lietar.net>
+Paul Liljenberg <liljenberg.paul@gmail.com>
+Paul Morie <pmorie@gmail.com>
+Paul Nasrat <pnasrat@gmail.com>
+Paul Weaver <pauweave@cisco.com>
+Paulo Ribeiro <paigr.io@gmail.com>
+Pavel Lobashov <ShockwaveNN@gmail.com>
+Pavel Matěja <pavel@verotel.cz>
+Pavel Pletenev <cpp.create@gmail.com>
+Pavel Pospisil <pospispa@gmail.com>
+Pavel Sutyrin <pavel.sutyrin@gmail.com>
+Pavel Tikhomirov <ptikhomirov@virtuozzo.com>
+Pavlos Ratis <dastergon@gentoo.org>
+Pavol Vargovcik <pallly.vargovcik@gmail.com>
+Pawel Konczalski <mail@konczalski.de>
+Peeyush Gupta <gpeeyush@linux.vnet.ibm.com>
+Peggy Li <peggyli.224@gmail.com>
+Pei Su <sillyousu@gmail.com>
+Peng Tao <bergwolf@gmail.com>
+Penghan Wang <ph.wang@daocloud.io>
+Per Weijnitz <per.weijnitz@gmail.com>
+perhapszzy@sina.com <perhapszzy@sina.com>
+Peter Bourgon <peter@bourgon.org>
+Peter Braden <peterbraden@peterbraden.co.uk>
+Peter Bücker <peter.buecker@pressrelations.de>
+Peter Choi <phkchoi89@gmail.com>
+Peter Dave Hello <hsu@peterdavehello.org>
+Peter Edge <peter.edge@gmail.com>
+Peter Ericson <pdericson@gmail.com>
+Peter Esbensen <pkesbensen@gmail.com>
+Peter Jaffe <pjaffe@nevo.com>
+Peter Kang <peter@spell.run>
+Peter Malmgren <ptmalmgren@gmail.com>
+Peter Salvatore <peter@psftw.com>
+Peter Volpe <petervo@redhat.com>
+Peter Waller <p@pwaller.net>
+Petr Švihlík <svihlik.petr@gmail.com>
+Phil <underscorephil@gmail.com>
+Phil Estes <estesp@linux.vnet.ibm.com>
+Phil Spitler <pspitler@gmail.com>
+Philip Alexander Etling <paetling@gmail.com>
+Philip Monroe <phil@philmonroe.com>
+Philipp Gillé <philipp.gille@gmail.com>
+Philipp Wahala <philipp.wahala@gmail.com>
+Philipp Weissensteiner <mail@philippweissensteiner.com>
+Phillip Alexander <git@phillipalexander.io>
+phineas <phin@phineas.io>
+pidster <pid@pidster.com>
+Piergiuliano Bossi <pgbossi@gmail.com>
+Pierre <py@poujade.org>
+Pierre Carrier <pierre@meteor.com>
+Pierre Dal-Pra <dalpra.pierre@gmail.com>
+Pierre Wacrenier <pierre.wacrenier@gmail.com>
+Pierre-Alain RIVIERE <pariviere@ippon.fr>
+Piotr Bogdan <ppbogdan@gmail.com>
+pixelistik <pixelistik@users.noreply.github.com>
+Porjo <porjo38@yahoo.com.au>
+Poul Kjeldager Sørensen <pks@s-innovations.net>
+Pradeep Chhetri <pradeep@indix.com>
+Pradip Dhara <pradipd@microsoft.com>
+Prasanna Gautam <prasannagautam@gmail.com>
+Pratik Karki <prertik@outlook.com>
+Prayag Verma <prayag.verma@gmail.com>
+Priya Wadhwa <priyawadhwa@google.com>
+Projjol Banerji <probaner23@gmail.com>
+Przemek Hejman <przemyslaw.hejman@gmail.com>
+Pure White <daniel48@126.com>
+pysqz <randomq@126.com>
+Qiang Huang <h.huangqiang@huawei.com>
+Qinglan Peng <qinglanpeng@zju.edu.cn>
+qudongfang <qudongfang@gmail.com>
+Quentin Brossard <qbrossard@gmail.com>
+Quentin Perez <qperez@ocs.online.net>
+Quentin Tayssier <qtayssier@gmail.com>
+r0n22 <cameron.regan@gmail.com>
+Radostin Stoyanov <rstoyanov1@gmail.com>
+Rafal Jeczalik <rjeczalik@gmail.com>
+Rafe Colton <rafael.colton@gmail.com>
+Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
+Raghuram Devarakonda <draghuram@gmail.com>
+Raja Sami <raja.sami@tenpearls.com>
+Rajat Pandit <rp@rajatpandit.com>
+Rajdeep Dua <dua_rajdeep@yahoo.com>
+Ralf Sippl <ralf.sippl@gmail.com>
+Ralle <spam@rasmusa.net>
+Ralph Bean <rbean@redhat.com>
+Ramkumar Ramachandra <artagnon@gmail.com>
+Ramon Brooker <rbrooker@aetherealmind.com>
+Ramon van Alteren <ramon@vanalteren.nl>
+RaviTeja Pothana <ravi-teja@live.com>
+Ray Tsang <rayt@google.com>
+ReadmeCritic <frankensteinbot@gmail.com>
+Recursive Madman <recursive.madman@gmx.de>
+Reficul <xuzhenglun@gmail.com>
+Regan McCooey <rmccooey27@aol.com>
+Remi Rampin <remirampin@gmail.com>
+Remy Suen <remy.suen@gmail.com>
+Renato Riccieri Santos Zannon <renato.riccieri@gmail.com>
+Renaud Gaubert <rgaubert@nvidia.com>
+Rhys Hiltner <rhys@twitch.tv>
+Ri Xu <xuri.me@gmail.com>
+Ricardo N Feliciano <FelicianoTech@gmail.com>
+Rich Moyse <rich@moyse.us>
+Rich Seymour <rseymour@gmail.com>
+Richard <richard.scothern@gmail.com>
+Richard Burnison <rburnison@ebay.com>
+Richard Harvey <richard@squarecows.com>
+Richard Mathie <richard.mathie@amey.co.uk>
+Richard Metzler <richard@paadee.com>
+Richard Scothern <richard.scothern@gmail.com>
+Richo Healey <richo@psych0tik.net>
+Rick Bradley <rick@users.noreply.github.com>
+Rick van de Loo <rickvandeloo@gmail.com>
+Rick Wieman <git@rickw.nl>
+Rik Nijessen <rik@keefo.nl>
+Riku Voipio <riku.voipio@linaro.org>
+Riley Guerin <rileytg.dev@gmail.com>
+Ritesh H Shukla <sritesh@vmware.com>
+Riyaz Faizullabhoy <riyaz.faizullabhoy@docker.com>
+Rob Gulewich <rgulewich@netflix.com>
+Rob Vesse <rvesse@dotnetrdf.org>
+Robert Bachmann <rb@robertbachmann.at>
+Robert Bittle <guywithnose@gmail.com>
+Robert Obryk <robryk@gmail.com>
+Robert Schneider <mail@shakeme.info>
+Robert Stern <lexandro2000@gmail.com>
+Robert Terhaar <rterhaar@atlanticdynamic.com>
+Robert Wallis <smilingrob@gmail.com>
+Robert Wang <robert@arctic.tw>
+Roberto G. Hashioka <roberto.hashioka@docker.com>
+Roberto Muñoz Fernández <robertomf@gmail.com>
+Robin Naundorf <r.naundorf@fh-muenster.de>
+Robin Schneider <ypid@riseup.net>
+Robin Speekenbrink <robin@kingsquare.nl>
+Robin Thoni <robin@rthoni.com>
+robpc <rpcann@gmail.com>
+Rodolfo Carvalho <rhcarvalho@gmail.com>
+Rodrigo Vaz <rodrigo.vaz@gmail.com>
+Roel Van Nyen <roel.vannyen@gmail.com>
+Roger Peppe <rogpeppe@gmail.com>
+Rohit Jnagal <jnagal@google.com>
+Rohit Kadam <rohit.d.kadam@gmail.com>
+Rohit Kapur <rkapur@flatiron.com>
+Rojin George <rojingeorge@huawei.com>
+Roland Huß <roland@jolokia.org>
+Roland Kammerer <roland.kammerer@linbit.com>
+Roland Moriz <rmoriz@users.noreply.github.com>
+Roma Sokolov <sokolov.r.v@gmail.com>
+Roman Dudin <katrmr@gmail.com>
+Roman Mazur <roman@balena.io>
+Roman Strashkin <roman.strashkin@gmail.com>
+Ron Smits <ron.smits@gmail.com>
+Ron Williams <ron.a.williams@gmail.com>
+Rong Gao <gaoronggood@163.com>
+Rong Zhang <rongzhang@alauda.io>
+Rongxiang Song <tinysong1226@gmail.com>
+root <docker-dummy@example.com>
+root <root@lxdebmas.marist.edu>
+root <root@ubuntu-14.04-amd64-vbox>
+root <root@webm215.cluster016.ha.ovh.net>
+Rory Hunter <roryhunter2@gmail.com>
+Rory McCune <raesene@gmail.com>
+Ross Boucher <rboucher@gmail.com>
+Rovanion Luckey <rovanion.luckey@gmail.com>
+Royce Remer <royceremer@gmail.com>
+Rozhnov Alexandr <nox73@ya.ru>
+Rudolph Gottesheim <r.gottesheim@loot.at>
+Rui Cao <ruicao@alauda.io>
+Rui Lopes <rgl@ruilopes.com>
+Ruilin Li <liruilin4@huawei.com>
+Runshen Zhu <runshen.zhu@gmail.com>
+Russ Magee <rmagee@gmail.com>
+Ryan Abrams <rdabrams@gmail.com>
+Ryan Anderson <anderson.ryanc@gmail.com>
+Ryan Aslett <github@mixologic.com>
+Ryan Belgrave <rmb1993@gmail.com>
+Ryan Detzel <ryan.detzel@gmail.com>
+Ryan Fowler <rwfowler@gmail.com>
+Ryan Liu <ryanlyy@me.com>
+Ryan McLaughlin <rmclaughlin@insidesales.com>
+Ryan O'Donnell <odonnellryanc@gmail.com>
+Ryan Seto <ryanseto@yak.net>
+Ryan Simmen <ryan.simmen@gmail.com>
+Ryan Stelly <ryan.stelly@live.com>
+Ryan Thomas <rthomas@atlassian.com>
+Ryan Trauntvein <rtrauntvein@novacoast.com>
+Ryan Wallner <ryan.wallner@clusterhq.com>
+Ryan Zhang <ryan.zhang@docker.com>
+ryancooper7 <ryan.cooper7@gmail.com>
+RyanDeng <sheldon.d1018@gmail.com>
+Ryo Nakao <nakabonne@gmail.com>
+Rémy Greinhofer <remy.greinhofer@livelovely.com>
+s. rannou <mxs@sbrk.org>
+s00318865 <sunyuan3@huawei.com>
+Sabin Basyal <sabin.basyal@gmail.com>
+Sachin Joshi <sachin_jayant_joshi@hotmail.com>
+Sagar Hani <sagarhani33@gmail.com>
+Sainath Grandhi <sainath.grandhi@intel.com>
+Sakeven Jiang <jc5930@sina.cn>
+Salahuddin Khan <salah@docker.com>
+Sally O'Malley <somalley@redhat.com>
+Sam Abed <sam.abed@gmail.com>
+Sam Alba <sam.alba@gmail.com>
+Sam Bailey <cyprix@cyprix.com.au>
+Sam J Sharpe <sam.sharpe@digital.cabinet-office.gov.uk>
+Sam Neirinck <sam@samneirinck.com>
+Sam Reis <sreis@atlassian.com>
+Sam Rijs <srijs@airpost.net>
+Sam Whited <sam@samwhited.com>
+Sambuddha Basu <sambuddhabasu1@gmail.com>
+Sami Wagiaalla <swagiaal@redhat.com>
+Samuel Andaya <samuel@andaya.net>
+Samuel Dion-Girardeau <samuel.diongirardeau@gmail.com>
+Samuel Karp <skarp@amazon.com>
+Samuel PHAN <samuel-phan@users.noreply.github.com>
+Sandeep Bansal <sabansal@microsoft.com>
+Sankar சங்கர் <sankar.curiosity@gmail.com>
+Sanket Saurav <sanketsaurav@gmail.com>
+Santhosh Manohar <santhosh@docker.com>
+sapphiredev <se.imas.kr@gmail.com>
+Sargun Dhillon <sargun@netflix.com>
+Sascha Andres <sascha.andres@outlook.com>
+Sascha Grunert <sgrunert@suse.com>
+SataQiu <qiushida@beyondcent.com>
+Satnam Singh <satnam@raintown.org>
+Satoshi Amemiya <satoshi_amemiya@voyagegroup.com>
+Satoshi Tagomori <tagomoris@gmail.com>
+Scott Bessler <scottbessler@gmail.com>
+Scott Collier <emailscottcollier@gmail.com>
+Scott Johnston <scott@docker.com>
+Scott Stamp <scottstamp851@gmail.com>
+Scott Walls <sawalls@umich.edu>
+sdreyesg <sdreyesg@gmail.com>
+Sean Christopherson <sean.j.christopherson@intel.com>
+Sean Cronin <seancron@gmail.com>
+Sean Lee <seanlee@tw.ibm.com>
+Sean McIntyre <s.mcintyre@xverba.ca>
+Sean OMeara <sean@chef.io>
+Sean P. Kane <skane@newrelic.com>
+Sean Rodman <srodman7689@gmail.com>
+Sebastiaan van Steenis <mail@superseb.nl>
+Sebastiaan van Stijn <github@gone.nl>
+Senthil Kumar Selvaraj <senthil.thecoder@gmail.com>
+Senthil Kumaran <senthil@uthcode.com>
+SeongJae Park <sj38.park@gmail.com>
+Seongyeol Lim <seongyeol37@gmail.com>
+Serge Hallyn <serge.hallyn@ubuntu.com>
+Sergey Alekseev <sergey.alekseev.minsk@gmail.com>
+Sergey Evstifeev <sergey.evstifeev@gmail.com>
+Sergii Kabashniuk <skabashnyuk@codenvy.com>
+Sergio Lopez <slp@redhat.com>
+Serhat Gülçiçek <serhat25@gmail.com>
+SeungUkLee <lsy931106@gmail.com>
+Sevki Hasirci <s@sevki.org>
+Shane Canon <scanon@lbl.gov>
+Shane da Silva <shane@dasilva.io>
+Shaun Kaasten <shaunk@gmail.com>
+shaunol <shaunol@gmail.com>
+Shawn Landden <shawn@churchofgit.com>
+Shawn Siefkas <shawn.siefkas@meredith.com>
+shawnhe <shawnhe@shawnhedeMacBook-Pro.local>
+Shayne Wang <shaynexwang@gmail.com>
+Shekhar Gulati <shekhargulati84@gmail.com>
+Sheng Yang <sheng@yasker.org>
+Shengbo Song <thomassong@tencent.com>
+Shev Yan <yandong_8212@163.com>
+Shih-Yuan Lee <fourdollars@gmail.com>
+Shijiang Wei <mountkin@gmail.com>
+Shijun Qin <qinshijun16@mails.ucas.ac.cn>
+Shishir Mahajan <shishir.mahajan@redhat.com>
+Shoubhik Bose <sbose78@gmail.com>
+Shourya Sarcar <shourya.sarcar@gmail.com>
+Shu-Wai Chow <shu-wai.chow@seattlechildrens.org>
+shuai-z <zs.broccoli@gmail.com>
+Shukui Yang <yangshukui@huawei.com>
+Shuwei Hao <haosw@cn.ibm.com>
+Sian Lerk Lau <kiawin@gmail.com>
+Sidhartha Mani <sidharthamn@gmail.com>
+sidharthamani <sid@rancher.com>
+Silas Sewell <silas@sewell.org>
+Silvan Jegen <s.jegen@gmail.com>
+Simão Reis <smnrsti@gmail.com>
+Simei He <hesimei@zju.edu.cn>
+Simon Barendse <simon.barendse@gmail.com>
+Simon Eskildsen <sirup@sirupsen.com>
+Simon Ferquel <simon.ferquel@docker.com>
+Simon Leinen <simon.leinen@gmail.com>
+Simon Menke <simon.menke@gmail.com>
+Simon Taranto <simon.taranto@gmail.com>
+Simon Vikstrom <pullreq@devsn.se>
+Sindhu S <sindhus@live.in>
+Sjoerd Langkemper <sjoerd-github@linuxonly.nl>
+skanehira <sho19921005@gmail.com>
+Solganik Alexander <solganik@gmail.com>
+Solomon Hykes <solomon@docker.com>
+Song Gao <song@gao.io>
+Soshi Katsuta <soshi.katsuta@gmail.com>
+Soulou <leo@unbekandt.eu>
+Spencer Brown <spencer@spencerbrown.org>
+Spencer Smith <robertspencersmith@gmail.com>
+Sridatta Thatipamala <sthatipamala@gmail.com>
+Sridhar Ratnakumar <sridharr@activestate.com>
+Srini Brahmaroutu <srbrahma@us.ibm.com>
+Srinivasan Srivatsan <srinivasan.srivatsan@hpe.com>
+Staf Wagemakers <staf@wagemakers.be>
+Stanislav Bondarenko <stanislav.bondarenko@gmail.com>
+Stanislav Levin <slev@altlinux.org>
+Steeve Morin <steeve.morin@gmail.com>
+Stefan Berger <stefanb@linux.vnet.ibm.com>
+Stefan J. Wernli <swernli@microsoft.com>
+Stefan Praszalowicz <stefan@greplin.com>
+Stefan S. <tronicum@user.github.com>
+Stefan Scherer <stefan.scherer@docker.com>
+Stefan Staudenmeyer <doerte@instana.com>
+Stefan Weil <sw@weilnetz.de>
+Stephan Spindler <shutefan@gmail.com>
+Stephen Benjamin <stephen@redhat.com>
+Stephen Crosby <stevecrozz@gmail.com>
+Stephen Day <stevvooe@gmail.com>
+Stephen Drake <stephen@xenolith.net>
+Stephen Rust <srust@blockbridge.com>
+Steve Desmond <steve@vtsv.ca>
+Steve Dougherty <steve@asksteved.com>
+Steve Durrheimer <s.durrheimer@gmail.com>
+Steve Francia <steve.francia@gmail.com>
+Steve Koch <stevekochscience@gmail.com>
+Steven Burgess <steven.a.burgess@hotmail.com>
+Steven Erenst <stevenerenst@gmail.com>
+Steven Hartland <steven.hartland@multiplay.co.uk>
+Steven Iveson <sjiveson@outlook.com>
+Steven Merrill <steven.merrill@gmail.com>
+Steven Richards <steven@axiomzen.co>
+Steven Taylor <steven.taylor@me.com>
+Stig Larsson <stig@larsson.dev>
+Subhajit Ghosh <isubuz.g@gmail.com>
+Sujith Haridasan <sujith.h@gmail.com>
+Sun Gengze <690388648@qq.com>
+Sun Jianbo <wonderflow.sun@gmail.com>
+Sune Keller <sune.keller@gmail.com>
+Sunny Gogoi <indiasuny000@gmail.com>
+Suryakumar Sudar <surya.trunks@gmail.com>
+Sven Dowideit <SvenDowideit@home.org.au>
+Swapnil Daingade <swapnil.daingade@gmail.com>
+Sylvain Baubeau <sbaubeau@redhat.com>
+Sylvain Bellemare <sylvain@ascribe.io>
+Sébastien <sebastien@yoozio.com>
+Sébastien HOUZÉ <cto@verylastroom.com>
+Sébastien Luttringer <seblu@seblu.net>
+Sébastien Stormacq <sebsto@users.noreply.github.com>
+Tabakhase <mail@tabakhase.com>
+Tadej Janež <tadej.j@nez.si>
+TAGOMORI Satoshi <tagomoris@gmail.com>
+tang0th <tang0th@gmx.com>
+Tangi Colin <tangicolin@gmail.com>
+Tatsuki Sugiura <sugi@nemui.org>
+Tatsushi Inagaki <e29253@jp.ibm.com>
+Taylan Isikdemir <taylani@google.com>
+Taylor Jones <monitorjbl@gmail.com>
+Ted M. Young <tedyoung@gmail.com>
+Tehmasp Chaudhri <tehmasp@gmail.com>
+Tejaswini Duggaraju <naduggar@microsoft.com>
+Tejesh Mehta <tejesh.mehta@gmail.com>
+terryding77 <550147740@qq.com>
+tgic <farmer1992@gmail.com>
+Thatcher Peskens <thatcher@docker.com>
+theadactyl <thea.lamkin@gmail.com>
+Thell 'Bo' Fowler <thell@tbfowler.name>
+Thermionix <bond711@gmail.com>
+Thijs Terlouw <thijsterlouw@gmail.com>
+Thomas Bikeev <thomas.bikeev@mac.com>
+Thomas Frössman <thomasf@jossystem.se>
+Thomas Gazagnaire <thomas@gazagnaire.org>
+Thomas Grainger <tagrain@gmail.com>
+Thomas Hansen <thomas.hansen@gmail.com>
+Thomas Leonard <thomas.leonard@docker.com>
+Thomas Léveil <thomasleveil@gmail.com>
+Thomas Orozco <thomas@orozco.fr>
+Thomas Riccardi <riccardi@systran.fr>
+Thomas Schroeter <thomas@cliqz.com>
+Thomas Sjögren <konstruktoid@users.noreply.github.com>
+Thomas Swift <tgs242@gmail.com>
+Thomas Tanaka <thomas.tanaka@oracle.com>
+Thomas Texier <sharkone@en-mousse.org>
+Ti Zhou <tizhou1986@gmail.com>
+Tianon Gravi <admwiggin@gmail.com>
+Tianyi Wang <capkurmagati@gmail.com>
+Tibor Vass <teabee89@gmail.com>
+Tiffany Jernigan <tiffany.f.j@gmail.com>
+Tiffany Low <tiffany@box.com>
+Till Wegmüller <toasterson@gmail.com>
+Tim <elatllat@gmail.com>
+Tim Bart <tim@fewagainstmany.com>
+Tim Bosse <taim@bosboot.org>
+Tim Dettrick <t.dettrick@uq.edu.au>
+Tim Düsterhus <tim@bastelstu.be>
+Tim Hockin <thockin@google.com>
+Tim Potter <tpot@hpe.com>
+Tim Ruffles <oi@truffles.me.uk>
+Tim Smith <timbot@google.com>
+Tim Terhorst <mynamewastaken+git@gmail.com>
+Tim Wang <timwangdev@gmail.com>
+Tim Waugh <twaugh@redhat.com>
+Tim Wraight <tim.wraight@tangentlabs.co.uk>
+Tim Zju <21651152@zju.edu.cn>
+timfeirg <kkcocogogo@gmail.com>
+Timothy Hobbs <timothyhobbs@seznam.cz>
+tjwebb123 <tjwebb123@users.noreply.github.com>
+tobe <tobegit3hub@gmail.com>
+Tobias Bieniek <Tobias.Bieniek@gmx.de>
+Tobias Bradtke <webwurst@gmail.com>
+Tobias Gesellchen <tobias@gesellix.de>
+Tobias Klauser <tklauser@distanz.ch>
+Tobias Munk <schmunk@usrbin.de>
+Tobias Schmidt <ts@soundcloud.com>
+Tobias Schwab <tobias.schwab@dynport.de>
+Todd Crane <todd@toddcrane.com>
+Todd Lunter <tlunter@gmail.com>
+Todd Whiteman <todd.whiteman@joyent.com>
+Toli Kuznets <toli@docker.com>
+Tom Barlow <tomwbarlow@gmail.com>
+Tom Booth <tombooth@gmail.com>
+Tom Denham <tom@tomdee.co.uk>
+Tom Fotherby <tom+github@peopleperhour.com>
+Tom Howe <tom.howe@enstratius.com>
+Tom Hulihan <hulihan.tom159@gmail.com>
+Tom Maaswinkel <tom.maaswinkel@12wiki.eu>
+Tom Sweeney <tsweeney@redhat.com>
+Tom Wilkie <tom.wilkie@gmail.com>
+Tom X. Tobin <tomxtobin@tomxtobin.com>
+Tomas Tomecek <ttomecek@redhat.com>
+Tomasz Kopczynski <tomek@kopczynski.net.pl>
+Tomasz Lipinski <tlipinski@users.noreply.github.com>
+Tomasz Nurkiewicz <nurkiewicz@gmail.com>
+Tommaso Visconti <tommaso.visconti@gmail.com>
+Tomáš Hrčka <thrcka@redhat.com>
+Tonny Xu <tonny.xu@gmail.com>
+Tony Abboud <tdabboud@hotmail.com>
+Tony Daws <tony@daws.ca>
+Tony Miller <mcfiredrill@gmail.com>
+toogley <toogley@mailbox.org>
+Torstein Husebø <torstein@huseboe.net>
+Tõnis Tiigi <tonistiigi@gmail.com>
+Trace Andreason <tandreason@gmail.com>
+tracylihui <793912329@qq.com>
+Trapier Marshall <trapier.marshall@docker.com>
+Travis Cline <travis.cline@gmail.com>
+Travis Thieman <travis.thieman@gmail.com>
+Trent Ogren <tedwardo2@gmail.com>
+Trevor <trevinwoodstock@gmail.com>
+Trevor Pounds <trevor.pounds@gmail.com>
+Trevor Sullivan <pcgeek86@gmail.com>
+Trishna Guha <trishnaguha17@gmail.com>
+Tristan Carel <tristan@cogniteev.com>
+Troy Denton <trdenton@gmail.com>
+Tycho Andersen <tycho@docker.com>
+Tyler Brock <tyler.brock@gmail.com>
+Tyler Brown <tylers.pile@gmail.com>
+Tzu-Jung Lee <roylee17@gmail.com>
+uhayate <uhayate.gong@daocloud.io>
+Ulysse Carion <ulyssecarion@gmail.com>
+Umesh Yadav <umesh4257@gmail.com>
+Utz Bacher <utz.bacher@de.ibm.com>
+vagrant <vagrant@ubuntu-14.04-amd64-vbox>
+Vaidas Jablonskis <jablonskis@gmail.com>
+vanderliang <lansheng@meili-inc.com>
+Velko Ivanov <vivanov@deeperplane.com>
+Veres Lajos <vlajos@gmail.com>
+Victor Algaze <valgaze@gmail.com>
+Victor Coisne <victor.coisne@dotcloud.com>
+Victor Costan <costan@gmail.com>
+Victor I. Wood <viw@t2am.com>
+Victor Lyuboslavsky <victor@victoreda.com>
+Victor Marmol <vmarmol@google.com>
+Victor Palma <palma.victor@gmail.com>
+Victor Vieux <victor.vieux@docker.com>
+Victoria Bialas <victoria.bialas@docker.com>
+Vijaya Kumar K <vijayak@caviumnetworks.com>
+Vikram bir Singh <vsingh@mirantis.com>
+Viktor Stanchev <me@viktorstanchev.com>
+Viktor Vojnovski <viktor.vojnovski@amadeus.com>
+VinayRaghavanKS <raghavan.vinay@gmail.com>
+Vincent Batts <vbatts@redhat.com>
+Vincent Bernat <Vincent.Bernat@exoscale.ch>
+Vincent Boulineau <vincent.boulineau@datadoghq.com>
+Vincent Demeester <vincent.demeester@docker.com>
+Vincent Giersch <vincent.giersch@ovh.net>
+Vincent Mayers <vincent.mayers@inbloom.org>
+Vincent Woo <me@vincentwoo.com>
+Vinod Kulkarni <vinod.kulkarni@gmail.com>
+Vishal Doshi <vishal.doshi@gmail.com>
+Vishnu Kannan <vishnuk@google.com>
+Vitaly Ostrosablin <vostrosablin@virtuozzo.com>
+Vitor Monteiro <vmrmonteiro@gmail.com>
+Vivek Agarwal <me@vivek.im>
+Vivek Dasgupta <vdasgupt@redhat.com>
+Vivek Goyal <vgoyal@redhat.com>
+Vladimir Bulyga <xx@ccxx.cc>
+Vladimir Kirillov <proger@wilab.org.ua>
+Vladimir Pouzanov <farcaller@google.com>
+Vladimir Rutsky <altsysrq@gmail.com>
+Vladimir Varankin <nek.narqo+git@gmail.com>
+VladimirAus <v_roudakov@yahoo.com>
+Vlastimil Zeman <vlastimil.zeman@diffblue.com>
+Vojtech Vitek (V-Teq) <vvitek@redhat.com>
+waitingkuo <waitingkuo0527@gmail.com>
+Walter Leibbrandt <github@wrl.co.za>
+Walter Stanish <walter@pratyeka.org>
+Wang Chao <chao.wang@ucloud.cn>
+Wang Guoliang <liangcszzu@163.com>
+Wang Jie <wangjie5@chinaskycloud.com>
+Wang Long <long.wanglong@huawei.com>
+Wang Ping <present.wp@icloud.com>
+Wang Xing <hzwangxing@corp.netease.com>
+Wang Yuexiao <wang.yuexiao@zte.com.cn>
+Wang Yumu <37442693@qq.com>
+wanghuaiqing <wanghuaiqing@loongson.cn>
+Ward Vandewege <ward@jhvc.com>
+WarheadsSE <max@warheads.net>
+Wassim Dhif <wassimdhif@gmail.com>
+Wayne Chang <wayne@neverfear.org>
+Wayne Song <wsong@docker.com>
+Weerasak Chongnguluam <singpor@gmail.com>
+Wei Fu <fuweid89@gmail.com>
+Wei Wu <wuwei4455@gmail.com>
+Wei-Ting Kuo <waitingkuo0527@gmail.com>
+weipeng <weipeng@tuscloud.io>
+weiyan <weiyan3@huawei.com>
+Weiyang Zhu <cnresonant@gmail.com>
+Wen Cheng Ma <wenchma@cn.ibm.com>
+Wendel Fleming <wfleming@usc.edu>
+Wenjun Tang <tangwj2@lenovo.com>
+Wenkai Yin <yinw@vmware.com>
+wenlxie <wenlxie@ebay.com>
+Wentao Zhang <zhangwentao234@huawei.com>
+Wenxuan Zhao <viz@linux.com>
+Wenyu You <21551128@zju.edu.cn>
+Wenzhi Liang <wenzhi.liang@gmail.com>
+Wes Morgan <cap10morgan@gmail.com>
+Wewang Xiaorenfine <wang.xiaoren@zte.com.cn>
+Wiktor Kwapisiewicz <wiktor@metacode.biz>
+Will Dietz <w@wdtz.org>
+Will Rouesnel <w.rouesnel@gmail.com>
+Will Weaver <monkey@buildingbananas.com>
+willhf <willhf@gmail.com>
+William Delanoue <william.delanoue@gmail.com>
+William Henry <whenry@redhat.com>
+William Hubbs <w.d.hubbs@gmail.com>
+William Martin <wmartin@pivotal.io>
+William Riancho <wr.wllm@gmail.com>
+William Thurston <thurstw@amazon.com>
+Wilson Júnior <wilsonpjunior@gmail.com>
+Wing-Kam Wong <wingkwong.code@gmail.com>
+WiseTrem <shepelyov.g@gmail.com>
+Wolfgang Powisch <powo@powo.priv.at>
+Wonjun Kim <wonjun.kim@navercorp.com>
+xamyzhao <x.amy.zhao@gmail.com>
+Xian Chaobo <xianchaobo@huawei.com>
+Xianglin Gao <xlgao@zju.edu.cn>
+Xianlu Bird <xianlubird@gmail.com>
+Xiao YongBiao <xyb4638@gmail.com>
+XiaoBing Jiang <s7v7nislands@gmail.com>
+Xiaodong Liu <liuxiaodong@loongson.cn>
+Xiaodong Zhang <a4012017@sina.com>
+Xiaoxi He <xxhe@alauda.io>
+Xiaoxu Chen <chenxiaoxu14@otcaix.iscas.ac.cn>
+Xiaoyu Zhang <zhang.xiaoyu33@zte.com.cn>
+xichengliudui <1693291525@qq.com>
+xiekeyang <xiekeyang@huawei.com>
+Ximo Guanter Gonzálbez <joaquin.guantergonzalbez@telefonica.com>
+Xinbo Weng <xihuanbo_0521@zju.edu.cn>
+Xinfeng Liu <xinfeng.liu@gmail.com>
+Xinzi Zhou <imdreamrunner@gmail.com>
+Xiuming Chen <cc@cxm.cc>
+Xuecong Liao <satorulogic@gmail.com>
+xuzhaokui <cynicholas@gmail.com>
+Yadnyawalkya Tale <ytale@redhat.com>
+Yahya <ya7yaz@gmail.com>
+YAMADA Tsuyoshi <tyamada@minimum2scp.org>
+Yamasaki Masahide <masahide.y@gmail.com>
+Yan Feng <yanfeng2@huawei.com>
+Yang Bai <hamo.by@gmail.com>
+Yang Pengfei <yangpengfei4@huawei.com>
+yangchenliang <yangchenliang@huawei.com>
+Yanqiang Miao <miao.yanqiang@zte.com.cn>
+Yao Zaiyong <yaozaiyong@hotmail.com>
+Yash Murty <yashmurty@gmail.com>
+Yassine Tijani <yasstij11@gmail.com>
+Yasunori Mahata <nori@mahata.net>
+Yazhong Liu <yorkiefixer@gmail.com>
+Yestin Sun <sunyi0804@gmail.com>
+Yi EungJun <eungjun.yi@navercorp.com>
+Yibai Zhang <xm1994@gmail.com>
+Yihang Ho <hoyihang5@gmail.com>
+Ying Li <ying.li@docker.com>
+Yohei Ueda <yohei@jp.ibm.com>
+Yong Tang <yong.tang.github@outlook.com>
+Yongxin Li <yxli@alauda.io>
+Yongzhi Pan <panyongzhi@gmail.com>
+Yosef Fertel <yfertel@gmail.com>
+You-Sheng Yang (楊有勝) <vicamo@gmail.com>
+youcai <omegacoleman@gmail.com>
+Youcef YEKHLEF <yyekhlef@gmail.com>
+Yu Changchun <yuchangchun1@huawei.com>
+Yu Chengxia <yuchengxia@huawei.com>
+Yu Peng <yu.peng36@zte.com.cn>
+Yu-Ju Hong <yjhong@google.com>
+Yuan Sun <sunyuan3@huawei.com>
+Yuanhong Peng <pengyuanhong@huawei.com>
+Yue Zhang <zy675793960@yeah.net>
+Yuhao Fang <fangyuhao@gmail.com>
+Yuichiro Kaneko <spiketeika@gmail.com>
+Yunxiang Huang <hyxqshk@vip.qq.com>
+Yurii Rashkovskii <yrashk@gmail.com>
+Yusuf Tarık Günaydın <yusuf_tarik@hotmail.com>
+Yves Junqueira <yves.junqueira@gmail.com>
+Zac Dover <zdover@redhat.com>
+Zach Borboa <zachborboa@gmail.com>
+Zachary Jaffee <zjaffee@us.ibm.com>
+Zain Memon <zain@inzain.net>
+Zaiste! <oh@zaiste.net>
+Zane DeGraffenried <zane.deg@gmail.com>
+Zefan Li <lizefan@huawei.com>
+Zen Lin(Zhinan Lin) <linzhinan@huawei.com>
+Zhang Kun <zkazure@gmail.com>
+Zhang Wei <zhangwei555@huawei.com>
+Zhang Wentao <zhangwentao234@huawei.com>
+ZhangHang <stevezhang2014@gmail.com>
+zhangxianwei <xianwei.zw@alibaba-inc.com>
+Zhenan Ye <21551168@zju.edu.cn>
+zhenghenghuo <zhenghenghuo@zju.edu.cn>
+Zhenhai Gao <gaozh1988@live.com>
+Zhenkun Bi <bi.zhenkun@zte.com.cn>
+zhipengzuo <zuozhipeng@baidu.com>
+Zhou Hao <zhouhao@cn.fujitsu.com>
+Zhoulin Xie <zhoulin.xie@daocloud.io>
+Zhu Guihua <zhugh.fnst@cn.fujitsu.com>
+Zhu Kunjia <zhu.kunjia@zte.com.cn>
+Zhuoyun Wei <wzyboy@wzyboy.org>
+Ziheng Liu <lzhfromustc@gmail.com>
+Zilin Du <zilin.du@gmail.com>
+zimbatm <zimbatm@zimbatm.com>
+Ziming Dong <bnudzm@foxmail.com>
+ZJUshuaizhou <21551191@zju.edu.cn>
+zmarouf <zeid.marouf@gmail.com>
+Zoltan Tombol <zoltan.tombol@gmail.com>
+Zou Yu <zouyu7@huawei.com>
+zqh <zqhxuyuan@gmail.com>
+Zuhayr Elahi <zuhayr.elahi@docker.com>
+Zunayed Ali <zunayed@gmail.com>
+Álex González <agonzalezro@gmail.com>
+Álvaro Lázaro <alvaro.lazaro.g@gmail.com>
+Átila Camurça Alves <camurca.home@gmail.com>
+尹吉峰 <jifeng.yin@gmail.com>
+屈骏 <qujun@tiduyun.com>
+徐俊杰 <paco.xu@daocloud.io>
+慕陶 <jihui.xjh@alibaba-inc.com>
+搏通 <yufeng.pyf@alibaba-inc.com>
+黄艳红00139573 <huang.yanhong@zte.com.cn>
diff --git a/vendor/github.com/docker/docker/LICENSE b/vendor/github.com/docker/docker/LICENSE
new file mode 100644
index 0000000000000..6d8d58fb676bb
--- /dev/null
+++ b/vendor/github.com/docker/docker/LICENSE
@@ -0,0 +1,191 @@
+
+                                 Apache License
+                           Version 2.0, January 2004
+                        https://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   Copyright 2013-2018 Docker, Inc.
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       https://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
diff --git a/vendor/github.com/docker/docker/NOTICE b/vendor/github.com/docker/docker/NOTICE
new file mode 100644
index 0000000000000..58b19b6d15b99
--- /dev/null
+++ b/vendor/github.com/docker/docker/NOTICE
@@ -0,0 +1,19 @@
+Docker
+Copyright 2012-2017 Docker, Inc.
+
+This product includes software developed at Docker, Inc. (https://www.docker.com).
+
+This product contains software (https://github.com/creack/pty) developed
+by Keith Rarick, licensed under the MIT License.
+
+The following is courtesy of our legal counsel:
+
+
+Use and transfer of Docker may be subject to certain restrictions by the
+United States and other governments.
+It is your responsibility to ensure that your use and/or transfer does not
+violate applicable laws.
+
+For more information, please see https://www.bis.doc.gov
+
+See also https://www.apache.org/dev/crypto.html and/or seek legal counsel.
diff --git a/vendor/github.com/docker/docker/api/README.md b/vendor/github.com/docker/docker/api/README.md
new file mode 100644
index 0000000000000..f136c3433af40
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/README.md
@@ -0,0 +1,42 @@
+# Working on the Engine API
+
+The Engine API is an HTTP API used by the command-line client to communicate with the daemon. It can also be used by third-party software to control the daemon.
+
+It consists of various components in this repository:
+
+- `api/swagger.yaml` A Swagger definition of the API.
+- `api/types/` Types shared by both the client and server, representing various objects, options, responses, etc. Most are written manually, but some are automatically generated from the Swagger definition. See [#27919](https://github.com/docker/docker/issues/27919) for progress on this.
+- `cli/` The command-line client.
+- `client/` The Go client used by the command-line client. It can also be used by third-party Go programs.
+- `daemon/` The daemon, which serves the API.
+
+## Swagger definition
+
+The API is defined by the [Swagger](http://swagger.io/specification/) definition in `api/swagger.yaml`. This definition can be used to:
+
+1. Automatically generate documentation.
+2. Automatically generate the Go server and client. (A work-in-progress.)
+3. Provide a machine readable version of the API for introspecting what it can do, automatically generating clients for other languages, etc.
+
+## Updating the API documentation
+
+The API documentation is generated entirely from `api/swagger.yaml`. If you make updates to the API, edit this file to represent the change in the documentation.
+
+The file is split into two main sections:
+
+- `definitions`, which defines re-usable objects used in requests and responses
+- `paths`, which defines the API endpoints (and some inline objects which don't need to be reusable)
+
+To make an edit, first look for the endpoint you want to edit under `paths`, then make the required edits. Endpoints may reference reusable objects with `$ref`, which can be found in the `definitions` section.
+
+There is hopefully enough example material in the file for you to copy a similar pattern from elsewhere in the file (e.g. adding new fields or endpoints), but for the full reference, see the [Swagger specification](http://swagger.io/specification/).
+
+`swagger.yaml` is validated by `hack/validate/swagger` to ensure it is a valid Swagger definition. This is useful when making edits to ensure you are doing the right thing.
+
+## Viewing the API documentation
+
+When you make edits to `swagger.yaml`, you may want to check the generated API documentation to ensure it renders correctly.
+
+Run `make swagger-docs` and a preview will be running at `http://localhost`. Some of the styling may be incorrect, but you'll be able to ensure that it is generating the correct documentation.
+
+The production documentation is generated by vendoring `swagger.yaml` into [docker/docker.github.io](https://github.com/docker/docker.github.io).
diff --git a/vendor/github.com/docker/docker/api/common.go b/vendor/github.com/docker/docker/api/common.go
new file mode 100644
index 0000000000000..1565e2af64745
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/common.go
@@ -0,0 +1,11 @@
+package api // import "github.com/docker/docker/api"
+
+// Common constants for daemon and client.
+const (
+	// DefaultVersion of Current REST API
+	DefaultVersion = "1.41"
+
+	// NoBaseImageSpecifier is the symbol used by the FROM
+	// command to specify that no base image is to be used.
+	NoBaseImageSpecifier = "scratch"
+)
diff --git a/vendor/github.com/docker/docker/api/common_unix.go b/vendor/github.com/docker/docker/api/common_unix.go
new file mode 100644
index 0000000000000..19fc63d6589aa
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/common_unix.go
@@ -0,0 +1,7 @@
+//go:build !windows
+// +build !windows
+
+package api // import "github.com/docker/docker/api"
+
+// MinVersion represents Minimum REST API version supported
+const MinVersion = "1.12"
diff --git a/vendor/github.com/docker/docker/api/common_windows.go b/vendor/github.com/docker/docker/api/common_windows.go
new file mode 100644
index 0000000000000..590ba5479be13
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/common_windows.go
@@ -0,0 +1,8 @@
+package api // import "github.com/docker/docker/api"
+
+// MinVersion represents Minimum REST API version supported
+// Technically the first daemon API version released on Windows is v1.25 in
+// engine version 1.13. However, some clients are explicitly using downlevel
+// APIs (e.g. docker-compose v2.1 file format) and that is just too restrictive.
+// Hence also allowing 1.24 on Windows.
+const MinVersion string = "1.24"
diff --git a/vendor/github.com/docker/docker/api/swagger-gen.yaml b/vendor/github.com/docker/docker/api/swagger-gen.yaml
new file mode 100644
index 0000000000000..f07a02737f737
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/swagger-gen.yaml
@@ -0,0 +1,12 @@
+
+layout:
+  models:
+    - name: definition
+      source: asset:model
+      target: "{{ joinFilePath .Target .ModelPackage }}"
+      file_name: "{{ (snakize (pascalize .Name)) }}.go"
+  operations:
+    - name: handler
+      source: asset:serverOperation
+      target: "{{ joinFilePath .Target .APIPackage .Package }}"
+      file_name: "{{ (snakize (pascalize .Name)) }}.go"
diff --git a/vendor/github.com/docker/docker/api/swagger.yaml b/vendor/github.com/docker/docker/api/swagger.yaml
new file mode 100644
index 0000000000000..c24f57bc9a7a1
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/swagger.yaml
@@ -0,0 +1,11484 @@
+# A Swagger 2.0 (a.k.a. OpenAPI) definition of the Engine API.
+#
+# This is used for generating API documentation and the types used by the
+# client/server. See api/README.md for more information.
+#
+# Some style notes:
+# - This file is used by ReDoc, which allows GitHub Flavored Markdown in
+#   descriptions.
+# - There is no maximum line length, for ease of editing and pretty diffs.
+# - operationIds are in the format "NounVerb", with a singular noun.
+
+swagger: "2.0"
+schemes:
+  - "http"
+  - "https"
+produces:
+  - "application/json"
+  - "text/plain"
+consumes:
+  - "application/json"
+  - "text/plain"
+basePath: "/v1.41"
+info:
+  title: "Docker Engine API"
+  version: "1.41"
+  x-logo:
+    url: "https://docs.docker.com/assets/images/logo-docker-main.png"
+  description: |
+    The Engine API is an HTTP API served by Docker Engine. It is the API the
+    Docker client uses to communicate with the Engine, so everything the Docker
+    client can do can be done with the API.
+
+    Most of the client's commands map directly to API endpoints (e.g. `docker ps`
+    is `GET /containers/json`). The notable exception is running containers,
+    which consists of several API calls.
+
+    # Errors
+
+    The API uses standard HTTP status codes to indicate the success or failure
+    of the API call. The body of the response will be JSON in the following
+    format:
+
+    ```
+    {
+      "message": "page not found"
+    }
+    ```
+
+    # Versioning
+
+    The API is usually changed in each release, so API calls are versioned to
+    ensure that clients don't break. To lock to a specific version of the API,
+    you prefix the URL with its version, for example, call `/v1.30/info` to use
+    the v1.30 version of the `/info` endpoint. If the API version specified in
+    the URL is not supported by the daemon, an HTTP `400 Bad Request` error message
+    is returned.
+
+    If you omit the version-prefix, the current version of the API (v1.41) is used.
+    For example, calling `/info` is the same as calling `/v1.41/info`. Using the
+    API without a version-prefix is deprecated and will be removed in a future release.
+
+    Engine releases in the near future should support this version of the API,
+    so your client will continue to work even if it is talking to a newer Engine.
+
+    The API uses an open schema model, which means the server may add extra properties
+    to responses. Likewise, the server will ignore any extra query parameters and
+    request body properties. When you write clients, you need to ignore additional
+    properties in responses to ensure they do not break when talking to newer
+    daemons.
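+
+    For example, a client written against Go's standard library (a minimal
+    sketch, not the official `client/` package) gets this tolerance for free,
+    because `encoding/json` drops unknown fields when unmarshalling into a
+    struct:
+
+    ```go
+    package main
+
+    import (
+        "encoding/json"
+        "fmt"
+    )
+
+    // VersionInfo models only the fields this client needs; extra
+    // properties sent by a newer daemon are ignored by encoding/json.
+    type VersionInfo struct {
+        ApiVersion    string
+        MinAPIVersion string
+    }
+
+    func main() {
+        // Hypothetical response from a newer daemon with an extra field.
+        body := []byte(`{"ApiVersion":"1.41","MinAPIVersion":"1.12","Experimental":true}`)
+        var v VersionInfo
+        if err := json.Unmarshal(body, &v); err != nil {
+            panic(err)
+        }
+        fmt.Println(v.ApiVersion, v.MinAPIVersion) // 1.41 1.12
+    }
+    ```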
+
+
+    # Authentication
+
+    Authentication for registries is handled client side. The client has to send
+    authentication details to various endpoints that need to communicate with
+    registries, such as `POST /images/(name)/push`. These are sent as
+    `X-Registry-Auth` header as a [base64url encoded](https://tools.ietf.org/html/rfc4648#section-5)
+    (JSON) string with the following structure:
+
+    ```
+    {
+      "username": "string",
+      "password": "string",
+      "email": "string",
+      "serveraddress": "string"
+    }
+    ```
+
+    The `serveraddress` is a domain/IP without a protocol. Throughout this
+    structure, double quotes are required.
+
+    If you already have an identity token from the [`/auth` endpoint](#operation/SystemAuth),
+    you can just pass this instead of credentials:
+
+    ```
+    {
+      "identitytoken": "9cbaf023786cd7..."
+    }
+    ```
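+
+    As a sketch of producing the header value with Go's standard library
+    (the credentials and registry address below are placeholders):
+
+    ```go
+    package main
+
+    import (
+        "encoding/base64"
+        "encoding/json"
+        "fmt"
+    )
+
+    func main() {
+        // Placeholder credentials; a real client would obtain these from
+        // the user or a credential store.
+        authConfig := map[string]string{
+            "username":      "janedoe",
+            "password":      "hunter2",
+            "serveraddress": "registry.example.com",
+        }
+        buf, err := json.Marshal(authConfig)
+        if err != nil {
+            panic(err)
+        }
+        // base64url encoding per RFC 4648 section 5, as the header requires.
+        fmt.Println("X-Registry-Auth:", base64.URLEncoding.EncodeToString(buf))
+    }
+    ```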
+
+# The tags on paths define the menu sections in the ReDoc documentation, so
+# the usage of tags must make sense for that:
+# - They should be singular, not plural.
+# - There should not be too many tags, or the menu becomes unwieldy. For
+#   example, it is preferable to add a path to the "System" tag instead of
+#   creating a tag with a single path in it.
+# - The order of tags in this list defines the order in the menu.
+tags:
+  # Primary objects
+  - name: "Container"
+    x-displayName: "Containers"
+    description: |
+      Create and manage containers.
+  - name: "Image"
+    x-displayName: "Images"
+  - name: "Network"
+    x-displayName: "Networks"
+    description: |
+      Networks are user-defined networks that containers can be attached to.
+      See the [networking documentation](https://docs.docker.com/network/)
+      for more information.
+  - name: "Volume"
+    x-displayName: "Volumes"
+    description: |
+      Create and manage persistent storage that can be attached to containers.
+  - name: "Exec"
+    x-displayName: "Exec"
+    description: |
+      Run new commands inside running containers. Refer to the
+      [command-line reference](https://docs.docker.com/engine/reference/commandline/exec/)
+      for more information.
+
+      To exec a command in a container, you first need to create an exec instance,
+      then start it. These two API endpoints are wrapped up in a single command-line
+      command, `docker exec`.
+
+  # Swarm things
+  - name: "Swarm"
+    x-displayName: "Swarm"
+    description: |
+      Engines can be clustered together in a swarm. Refer to the
+      [swarm mode documentation](https://docs.docker.com/engine/swarm/)
+      for more information.
+  - name: "Node"
+    x-displayName: "Nodes"
+    description: |
+      Nodes are instances of the Engine participating in a swarm. Swarm mode
+      must be enabled for these endpoints to work.
+  - name: "Service"
+    x-displayName: "Services"
+    description: |
+      Services are the definitions of tasks to run on a swarm. Swarm mode must
+      be enabled for these endpoints to work.
+  - name: "Task"
+    x-displayName: "Tasks"
+    description: |
+      A task is a container running on a swarm. It is the atomic scheduling unit
+      of swarm. Swarm mode must be enabled for these endpoints to work.
+  - name: "Secret"
+    x-displayName: "Secrets"
+    description: |
+      Secrets are sensitive data that can be used by services. Swarm mode must
+      be enabled for these endpoints to work.
+  - name: "Config"
+    x-displayName: "Configs"
+    description: |
+      Configs are application configurations that can be used by services. Swarm
+      mode must be enabled for these endpoints to work.
+  # System things
+  - name: "Plugin"
+    x-displayName: "Plugins"
+  - name: "System"
+    x-displayName: "System"
+
+definitions:
+  Port:
+    type: "object"
+    description: "An open port on a container"
+    required: [PrivatePort, Type]
+    properties:
+      IP:
+        type: "string"
+        format: "ip-address"
+        description: "Host IP address that the container's port is mapped to"
+      PrivatePort:
+        type: "integer"
+        format: "uint16"
+        x-nullable: false
+        description: "Port on the container"
+      PublicPort:
+        type: "integer"
+        format: "uint16"
+        description: "Port exposed on the host"
+      Type:
+        type: "string"
+        x-nullable: false
+        enum: ["tcp", "udp", "sctp"]
+    example:
+      PrivatePort: 8080
+      PublicPort: 80
+      Type: "tcp"
+
+  MountPoint:
+    type: "object"
+    description: "A mount point inside a container"
+    properties:
+      Type:
+        type: "string"
+      Name:
+        type: "string"
+      Source:
+        type: "string"
+      Destination:
+        type: "string"
+      Driver:
+        type: "string"
+      Mode:
+        type: "string"
+      RW:
+        type: "boolean"
+      Propagation:
+        type: "string"
+
+  DeviceMapping:
+    type: "object"
+    description: "A device mapping between the host and container"
+    properties:
+      PathOnHost:
+        type: "string"
+      PathInContainer:
+        type: "string"
+      CgroupPermissions:
+        type: "string"
+    example:
+      PathOnHost: "/dev/deviceName"
+      PathInContainer: "/dev/deviceName"
+      CgroupPermissions: "mrw"
+
+  DeviceRequest:
+    type: "object"
+    description: "A request for devices to be sent to device drivers"
+    properties:
+      Driver:
+        type: "string"
+        example: "nvidia"
+      Count:
+        type: "integer"
+        example: -1
+      DeviceIDs:
+        type: "array"
+        items:
+          type: "string"
+        example:
+          - "0"
+          - "1"
+          - "GPU-fef8089b-4820-abfc-e83e-94318197576e"
+      Capabilities:
+        description: |
+          A list of capabilities; an OR list of AND lists of capabilities.
+        type: "array"
+        items:
+          type: "array"
+          items:
+            type: "string"
+        example:
+          # gpu AND nvidia AND compute
+          - ["gpu", "nvidia", "compute"]
+      Options:
+        description: |
+          Driver-specific options, specified as key/value pairs. These options
+          are passed directly to the driver.
+        type: "object"
+        additionalProperties:
+          type: "string"
+
+  ThrottleDevice:
+    type: "object"
+    properties:
+      Path:
+        description: "Device path"
+        type: "string"
+      Rate:
+        description: "Rate"
+        type: "integer"
+        format: "int64"
+        minimum: 0
+
+  Mount:
+    type: "object"
+    properties:
+      Target:
+        description: "Container path."
+        type: "string"
+      Source:
+        description: "Mount source (e.g. a volume name, a host path)."
+        type: "string"
+      Type:
+        description: |
+          The mount type. Available types:
+
+          - `bind` Mounts a file or directory from the host into the container. Must exist prior to creating the container.
+          - `volume` Creates a volume with the given name and options (or uses a pre-existing volume with the same name and options). These are **not** removed when the container is removed.
+          - `tmpfs` Create a tmpfs with the given options. The mount source cannot be specified for tmpfs.
+          - `npipe` Mounts a named pipe from the host into the container. Must exist prior to creating the container.
+        type: "string"
+        enum:
+          - "bind"
+          - "volume"
+          - "tmpfs"
+          - "npipe"
+      ReadOnly:
+        description: "Whether the mount should be read-only."
+        type: "boolean"
+      Consistency:
+        description: "The consistency requirement for the mount: `default`, `consistent`, `cached`, or `delegated`."
+        type: "string"
+      BindOptions:
+        description: "Optional configuration for the `bind` type."
+        type: "object"
+        properties:
+          Propagation:
+            description: "A propagation mode with the value `[r]private`, `[r]shared`, or `[r]slave`."
+            type: "string"
+            enum:
+              - "private"
+              - "rprivate"
+              - "shared"
+              - "rshared"
+              - "slave"
+              - "rslave"
+          NonRecursive:
+            description: "Disable recursive bind mount."
+            type: "boolean"
+            default: false
+      VolumeOptions:
+        description: "Optional configuration for the `volume` type."
+        type: "object"
+        properties:
+          NoCopy:
+            description: "Populate volume with data from the target."
+            type: "boolean"
+            default: false
+          Labels:
+            description: "User-defined key/value metadata."
+            type: "object"
+            additionalProperties:
+              type: "string"
+          DriverConfig:
+            description: "Map of driver specific options"
+            type: "object"
+            properties:
+              Name:
+                description: "Name of the driver to use to create the volume."
+                type: "string"
+              Options:
+                description: "key/value map of driver specific options."
+                type: "object"
+                additionalProperties:
+                  type: "string"
+      TmpfsOptions:
+        description: "Optional configuration for the `tmpfs` type."
+        type: "object"
+        properties:
+          SizeBytes:
+            description: "The size for the tmpfs mount in bytes."
+            type: "integer"
+            format: "int64"
+          Mode:
+            description: "The permission mode for the tmpfs mount as an integer."
+            type: "integer"
+
+  RestartPolicy:
+    description: |
+      The behavior to apply when the container exits. The default is not to
+      restart.
+
+      An ever-increasing delay (double the previous delay, starting at 100ms) is
+      added before each restart to prevent flooding the server.
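+
+      As a sketch of that schedule (the delays illustrate only the doubling
+      rule, not any daemon-internal cap):
+
+      ```go
+      package main
+
+      import (
+          "fmt"
+          "time"
+      )
+
+      func main() {
+          delay := 100 * time.Millisecond
+          for attempt := 1; attempt <= 4; attempt++ {
+              fmt.Printf("restart %d delayed by %v\n", attempt, delay)
+              delay *= 2 // double before the next restart
+          }
+      }
+      ```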
+    type: "object"
+    properties:
+      Name:
+        type: "string"
+        description: |
+          - Empty string means not to restart
+          - `always` Always restart
+          - `unless-stopped` Restart always except when the user has manually stopped the container
+          - `on-failure` Restart only when the container exit code is non-zero
+        enum:
+          - ""
+          - "always"
+          - "unless-stopped"
+          - "on-failure"
+      MaximumRetryCount:
+        type: "integer"
+        description: |
+          If `on-failure` is used, the number of times to retry before giving up.
+
+  Resources:
+    description: "A container's resources (cgroups config, ulimits, etc)"
+    type: "object"
+    properties:
+      # Applicable to all platforms
+      CpuShares:
+        description: |
+          An integer value representing this container's relative CPU weight
+          versus other containers.
+        type: "integer"
+      Memory:
+        description: "Memory limit in bytes."
+        type: "integer"
+        format: "int64"
+        default: 0
+      # Applicable to UNIX platforms
+      CgroupParent:
+        description: |
+          Path to `cgroups` under which the container's `cgroup` is created. If
+          the path is not absolute, the path is considered to be relative to the
+          `cgroups` path of the init process. Cgroups are created if they do not
+          already exist.
+        type: "string"
+      BlkioWeight:
+        description: "Block IO weight (relative weight)."
+        type: "integer"
+        minimum: 0
+        maximum: 1000
+      BlkioWeightDevice:
+        description: |
+          Block IO weight (relative device weight) in the form:
+
+          ```
+          [{"Path": "device_path", "Weight": weight}]
+          ```
+        type: "array"
+        items:
+          type: "object"
+          properties:
+            Path:
+              type: "string"
+            Weight:
+              type: "integer"
+              minimum: 0
+      BlkioDeviceReadBps:
+        description: |
+          Limit read rate (bytes per second) from a device, in the form:
+
+          ```
+          [{"Path": "device_path", "Rate": rate}]
+          ```
+        type: "array"
+        items:
+          $ref: "#/definitions/ThrottleDevice"
+      BlkioDeviceWriteBps:
+        description: |
+          Limit write rate (bytes per second) to a device, in the form:
+
+          ```
+          [{"Path": "device_path", "Rate": rate}]
+          ```
+        type: "array"
+        items:
+          $ref: "#/definitions/ThrottleDevice"
+      BlkioDeviceReadIOps:
+        description: |
+          Limit read rate (IO per second) from a device, in the form:
+
+          ```
+          [{"Path": "device_path", "Rate": rate}]
+          ```
+        type: "array"
+        items:
+          $ref: "#/definitions/ThrottleDevice"
+      BlkioDeviceWriteIOps:
+        description: |
+          Limit write rate (IO per second) to a device, in the form:
+
+          ```
+          [{"Path": "device_path", "Rate": rate}]
+          ```
+        type: "array"
+        items:
+          $ref: "#/definitions/ThrottleDevice"
+      CpuPeriod:
+        description: "The length of a CPU period in microseconds."
+        type: "integer"
+        format: "int64"
+      CpuQuota:
+        description: |
+          Microseconds of CPU time that the container can get in a CPU period.
+        type: "integer"
+        format: "int64"
+      CpuRealtimePeriod:
+        description: |
+          The length of a CPU real-time period in microseconds. Set to 0 to
+          allocate no time to real-time tasks.
+        type: "integer"
+        format: "int64"
+      CpuRealtimeRuntime:
+        description: |
+          The length of a CPU real-time runtime in microseconds. Set to 0 to
+          allocate no time to real-time tasks.
+        type: "integer"
+        format: "int64"
+      CpusetCpus:
+        description: |
+          CPUs in which to allow execution (e.g., `0-3`, `0,1`).
+        type: "string"
+        example: "0-3"
+      CpusetMems:
+        description: |
+          Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only
+          effective on NUMA systems.
+        type: "string"
+      Devices:
+        description: "A list of devices to add to the container."
+        type: "array"
+        items:
+          $ref: "#/definitions/DeviceMapping"
+      DeviceCgroupRules:
+        description: "A list of cgroup rules to apply to the container"
+        type: "array"
+        items:
+          type: "string"
+          example: "c 13:* rwm"
+      DeviceRequests:
+        description: |
+          A list of requests for devices to be sent to device drivers.
+        type: "array"
+        items:
+          $ref: "#/definitions/DeviceRequest"
+      KernelMemory:
+        description: |
+          Kernel memory limit in bytes.
+
+          <p><br /></p>
+
+          > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated
+          > `kmem.limit_in_bytes`.
+        type: "integer"
+        format: "int64"
+        example: 209715200
+      KernelMemoryTCP:
+        description: "Hard limit for kernel TCP buffer memory (in bytes)."
+        type: "integer"
+        format: "int64"
+      MemoryReservation:
+        description: "Memory soft limit in bytes."
+        type: "integer"
+        format: "int64"
+      MemorySwap:
+        description: |
+          Total memory limit (memory + swap). Set as `-1` to enable unlimited
+          swap.
+        type: "integer"
+        format: "int64"
+      MemorySwappiness:
+        description: |
+          Tune a container's memory swappiness behavior. Accepts an integer
+          between 0 and 100.
+        type: "integer"
+        format: "int64"
+        minimum: 0
+        maximum: 100
+      NanoCpus:
+        description: "CPU quota in units of 10<sup>-9</sup> CPUs."
+        type: "integer"
+        format: "int64"
+      OomKillDisable:
+        description: "Disable OOM Killer for the container."
+        type: "boolean"
+      Init:
+        description: |
+          Run an init inside the container that forwards signals and reaps
+          processes. This field is omitted if empty, and the default (as
+          configured on the daemon) is used.
+        type: "boolean"
+        x-nullable: true
+      PidsLimit:
+        description: |
+          Tune a container's PIDs limit. Set `0` or `-1` for unlimited, or `null`
+          to not change.
+        type: "integer"
+        format: "int64"
+        x-nullable: true
+      Ulimits:
+        description: |
+          A list of resource limits to set in the container. For example:
+
+          ```
+          {"Name": "nofile", "Soft": 1024, "Hard": 2048}
+          ```
+        type: "array"
+        items:
+          type: "object"
+          properties:
+            Name:
+              description: "Name of ulimit"
+              type: "string"
+            Soft:
+              description: "Soft limit"
+              type: "integer"
+            Hard:
+              description: "Hard limit"
+              type: "integer"
+      # Applicable to Windows
+      CpuCount:
+        description: |
+          The number of usable CPUs (Windows only).
+
+          On Windows Server containers, the processor resource controls are
+          mutually exclusive. The order of precedence is `CPUCount` first, then
+          `CPUShares`, and `CPUPercent` last.
+        type: "integer"
+        format: "int64"
+      CpuPercent:
+        description: |
+          The usable percentage of the available CPUs (Windows only).
+
+          On Windows Server containers, the processor resource controls are
+          mutually exclusive. The order of precedence is `CPUCount` first, then
+          `CPUShares`, and `CPUPercent` last.
+        type: "integer"
+        format: "int64"
+      IOMaximumIOps:
+        description: "Maximum IOps for the container system drive (Windows only)"
+        type: "integer"
+        format: "int64"
+      IOMaximumBandwidth:
+        description: |
+          Maximum IO in bytes per second for the container system drive
+          (Windows only).
+        type: "integer"
+        format: "int64"
+
+  Limit:
+    description: |
+      An object describing a limit on resources which can be requested by a task.
+    type: "object"
+    properties:
+      NanoCPUs:
+        type: "integer"
+        format: "int64"
+        example: 4000000000
+      MemoryBytes:
+        type: "integer"
+        format: "int64"
+        example: 8272408576
+      Pids:
+        description: |
+          Limits the maximum number of PIDs in the container. Set `0` for unlimited.
+        type: "integer"
+        format: "int64"
+        default: 0
+        example: 100
+
+  ResourceObject:
+    description: |
+      An object describing the resources which can be advertised by a node and
+      requested by a task.
+    type: "object"
+    properties:
+      NanoCPUs:
+        type: "integer"
+        format: "int64"
+        example: 4000000000
+      MemoryBytes:
+        type: "integer"
+        format: "int64"
+        example: 8272408576
+      GenericResources:
+        $ref: "#/definitions/GenericResources"
+
+  GenericResources:
+    description: |
+      User-defined resources can be either Integer resources (e.g., `SSD=3`) or
+      String resources (e.g., `GPU=UUID1`).
+    type: "array"
+    items:
+      type: "object"
+      properties:
+        NamedResourceSpec:
+          type: "object"
+          properties:
+            Kind:
+              type: "string"
+            Value:
+              type: "string"
+        DiscreteResourceSpec:
+          type: "object"
+          properties:
+            Kind:
+              type: "string"
+            Value:
+              type: "integer"
+              format: "int64"
+    example:
+      - DiscreteResourceSpec:
+          Kind: "SSD"
+          Value: 3
+      - NamedResourceSpec:
+          Kind: "GPU"
+          Value: "UUID1"
+      - NamedResourceSpec:
+          Kind: "GPU"
+          Value: "UUID2"
+
+  HealthConfig:
+    description: "A test to perform to check that the container is healthy."
+    type: "object"
+    properties:
+      Test:
+        description: |
+          The test to perform. Possible values are:
+
+          - `[]` inherit healthcheck from image or parent image
+          - `["NONE"]` disable healthcheck
+          - `["CMD", args...]` exec arguments directly
+          - `["CMD-SHELL", command]` run command with system's default shell
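+
+          For example (the probe command and values are illustrative; note
+          that the durations elsewhere in this object are in nanoseconds):
+
+          ```
+          {
+            "Test": ["CMD-SHELL", "curl -f http://localhost/ || exit 1"],
+            "Interval": 30000000000,
+            "Timeout": 3000000000,
+            "Retries": 3
+          }
+          ```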
+        type: "array"
+        items:
+          type: "string"
+      Interval:
+        description: |
+          The time to wait between checks in nanoseconds. It should be 0 or at
+          least 1000000 (1 ms). 0 means inherit.
+        type: "integer"
+      Timeout:
+        description: |
+          The time to wait before considering the check to have hung. It should
+          be 0 or at least 1000000 (1 ms). 0 means inherit.
+        type: "integer"
+      Retries:
+        description: |
+          The number of consecutive failures needed to consider a container as
+          unhealthy. 0 means inherit.
+        type: "integer"
+      StartPeriod:
+        description: |
+          Start period for the container to initialize before starting
+          health-retries countdown in nanoseconds. It should be 0 or at least
+          1000000 (1 ms). 0 means inherit.
+        type: "integer"
+
+  Health:
+    description: |
+      Health stores information about the container's healthcheck results.
+    type: "object"
+    properties:
+      Status:
+        description: |
+          Status is one of `none`, `starting`, `healthy` or `unhealthy`
+
+          - "none"      Indicates there is no healthcheck
+          - "starting"  Starting indicates that the container is not yet ready
+          - "healthy"   Healthy indicates that the container is running correctly
+          - "unhealthy" Unhealthy indicates that the container has a problem
+        type: "string"
+        enum:
+          - "none"
+          - "starting"
+          - "healthy"
+          - "unhealthy"
+        example: "healthy"
+      FailingStreak:
+        description: "FailingStreak is the number of consecutive failures"
+        type: "integer"
+        example: 0
+      Log:
+        type: "array"
+        description: |
+          Log contains the last few results (oldest first)
+        items:
+          x-nullable: true
+          $ref: "#/definitions/HealthcheckResult"
+
+  HealthcheckResult:
+    description: |
+      HealthcheckResult stores information about a single run of a healthcheck probe
+    type: "object"
+    properties:
+      Start:
+        description: |
+          Date and time at which this check started in
+          [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds.
+        type: "string"
+        format: "date-time"
+        example: "2020-01-04T10:44:24.496525531Z"
+      End:
+        description: |
+          Date and time at which this check ended in
+          [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds.
+        type: "string"
+        format: "date-time"
+        example: "2020-01-04T10:45:21.364524523Z"
+      ExitCode:
+        description: |
+          ExitCode meanings:
+
+          - `0` healthy
+          - `1` unhealthy
+          - `2` reserved (considered unhealthy)
+          - other values: error running probe
+        type: "integer"
+        example: 0
+      Output:
+        description: "Output from last check"
+        type: "string"
+
+  HostConfig:
+    description: "Container configuration that depends on the host we are running on"
+    allOf:
+      - $ref: "#/definitions/Resources"
+      - type: "object"
+        properties:
+          # Applicable to all platforms
+          Binds:
+            type: "array"
+            description: |
+              A list of volume bindings for this container. Each volume binding
+              is a string in one of these forms:
+
+              - `host-src:container-dest[:options]` to bind-mount a host path
+                into the container. Both `host-src`, and `container-dest` must
+                be an _absolute_ path.
+              - `volume-name:container-dest[:options]` to bind-mount a volume
+                managed by a volume driver into the container. `container-dest`
+                must be an _absolute_ path.
+
+              `options` is an optional, comma-delimited list of:
+
+              - `nocopy` disables automatic copying of data from the container
+                path to the volume. The `nocopy` flag only applies to named volumes.
+              - `[ro|rw]` mounts a volume read-only or read-write, respectively.
+                If omitted or set to `rw`, volumes are mounted read-write.
+              - `[z|Z]` applies SELinux labels to allow or deny multiple containers
+                to read and write to the same volume.
+                  - `z`: a _shared_ content label is applied to the content. This
+                    label indicates that multiple containers can share the volume
+                    content, for both reading and writing.
+                  - `Z`: a _private unshared_ label is applied to the content.
+                    This label indicates that only the current container can use
+                    a private volume. Labeling systems such as SELinux require
+                    proper labels to be placed on volume content that is mounted
+                    into a container. Without a label, the security system can
+                    prevent a container's processes from using the content. By
+                    default, the labels set by the host operating system are not
+                    modified.
+              - `[[r]shared|[r]slave|[r]private]` specifies mount
+                [propagation behavior](https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt).
+                This only applies to bind-mounted volumes, not internal volumes
+                or named volumes. Mount propagation requires the source mount
+                point (the location where the source directory is mounted in the
+                host operating system) to have the correct propagation properties.
+                For shared volumes, the source mount point must be set to `shared`.
+                For slave volumes, the mount must be set to either `shared` or
+                `slave`.
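+
+              For example, a hypothetical specification that mounts the named
+              volume `my-volume` at `/data` read-only, with a shared SELinux
+              label:
+
+              ```
+              my-volume:/data:ro,z
+              ```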
+            items:
+              type: "string"
+          ContainerIDFile:
+            type: "string"
+            description: "Path to a file where the container ID is written"
+          LogConfig:
+            type: "object"
+            description: "The logging configuration for this container"
+            properties:
+              Type:
+                type: "string"
+                enum:
+                  - "json-file"
+                  - "syslog"
+                  - "journald"
+                  - "gelf"
+                  - "fluentd"
+                  - "awslogs"
+                  - "splunk"
+                  - "etwlogs"
+                  - "none"
+              Config:
+                type: "object"
+                additionalProperties:
+                  type: "string"
+          NetworkMode:
+            type: "string"
+            description: |
+              Network mode to use for this container. Supported standard values
+              are: `bridge`, `host`, `none`, and `container:<name|id>`. Any
+              other value is taken as a custom network's name to which this
+              container should connect.
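+
+              For example, assuming a container named `web01` exists, the
+              following (hypothetical) value joins that container's network
+              stack:
+
+              ```
+              container:web01
+              ```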
+          PortBindings:
+            $ref: "#/definitions/PortMap"
+          RestartPolicy:
+            $ref: "#/definitions/RestartPolicy"
+          AutoRemove:
+            type: "boolean"
+            description: |
+              Automatically remove the container when the container's process
+              exits. This has no effect if `RestartPolicy` is set.
+          VolumeDriver:
+            type: "string"
+            description: "Driver that this container uses to mount volumes."
+          VolumesFrom:
+            type: "array"
+            description: |
+              A list of volumes to inherit from another container, specified in
+              the form `<container name>[:<ro|rw>]`.
+            items:
+              type: "string"
+          Mounts:
+            description: |
+              Specification for mounts to be added to the container.
+            type: "array"
+            items:
+              $ref: "#/definitions/Mount"
+
+          # Applicable to UNIX platforms
+          CapAdd:
+            type: "array"
+            description: |
+              A list of kernel capabilities to add to the container. Conflicts
+              with option 'Capabilities'.
+            items:
+              type: "string"
+          CapDrop:
+            type: "array"
+            description: |
+              A list of kernel capabilities to drop from the container. Conflicts
+              with option 'Capabilities'.
+            items:
+              type: "string"
+          CgroupnsMode:
+            type: "string"
+            enum:
+              - "private"
+              - "host"
+            description: |
+              cgroup namespace mode for the container. Possible values are:
+
+              - `"private"`: the container runs in its own private cgroup namespace
+              - `"host"`: use the host system's cgroup namespace
+
+              If not specified, the daemon default is used, which can either be `"private"`
+              or `"host"`, depending on daemon version, kernel support and configuration.
+          Dns:
+            type: "array"
+            description: "A list of DNS servers for the container to use."
+            items:
+              type: "string"
+          DnsOptions:
+            type: "array"
+            description: "A list of DNS options."
+            items:
+              type: "string"
+          DnsSearch:
+            type: "array"
+            description: "A list of DNS search domains."
+            items:
+              type: "string"
+          ExtraHosts:
+            type: "array"
+            description: |
+              A list of hostnames/IP mappings to add to the container's `/etc/hosts`
+              file. Specified in the form `["hostname:IP"]`.
+            items:
+              type: "string"
+          GroupAdd:
+            type: "array"
+            description: |
+              A list of additional groups that the container process will run as.
+            items:
+              type: "string"
+          IpcMode:
+            type: "string"
+            description: |
+              IPC sharing mode for the container. Possible values are:
+
+              - `"none"`: own private IPC namespace, with `/dev/shm` not mounted
+              - `"private"`: own private IPC namespace
+              - `"shareable"`: own private IPC namespace, with a possibility to share it with other containers
+              - `"container:<name|id>"`: join another (shareable) container's IPC namespace
+              - `"host"`: use the host system's IPC namespace
+
+              If not specified, daemon default is used, which can either be `"private"`
+              or `"shareable"`, depending on daemon version and configuration.
+          Cgroup:
+            type: "string"
+            description: "Cgroup to use for the container."
+          Links:
+            type: "array"
+            description: |
+              A list of links for the container in the form `container_name:alias`.
+            items:
+              type: "string"
+          OomScoreAdj:
+            type: "integer"
+            description: |
+              An integer value containing the score given to the container in
+              order to tune OOM killer preferences.
+            example: 500
+          PidMode:
+            type: "string"
+            description: |
+              Set the PID (Process) Namespace mode for the container. It can be
+              either:
+
+              - `"container:<name|id>"`: joins another container's PID namespace
+              - `"host"`: use the host's PID namespace inside the container
+          Privileged:
+            type: "boolean"
+            description: "Gives the container full access to the host."
+          PublishAllPorts:
+            type: "boolean"
+            description: |
+              Allocates an ephemeral host port for all of a container's
+              exposed ports.
+
+              Ports are de-allocated when the container stops and allocated when
+              the container starts. The allocated port might be changed when
+              restarting the container.
+
+              The port is selected from the ephemeral port range that depends on
+              the kernel. For example, on Linux the range is defined by
+              `/proc/sys/net/ipv4/ip_local_port_range`.
+          ReadonlyRootfs:
+            type: "boolean"
+            description: "Mount the container's root filesystem as read only."
+          SecurityOpt:
+            type: "array"
+            description: |
+              A list of string values to customize labels for MLS systems,
+              such as SELinux.
+            items:
+              type: "string"
+          StorageOpt:
+            type: "object"
+            description: |
+              Storage driver options for this container, in the form `{"size": "120G"}`.
+            additionalProperties:
+              type: "string"
+          Tmpfs:
+            type: "object"
+            description: |
+              A map of container directories which should be replaced by tmpfs
+              mounts, and their corresponding mount options. For example:
+
+              ```
+              { "/run": "rw,noexec,nosuid,size=65536k" }
+              ```
+            additionalProperties:
+              type: "string"
+          UTSMode:
+            type: "string"
+            description: "UTS namespace to use for the container."
+          UsernsMode:
+            type: "string"
+            description: |
+              Sets the user namespace mode for the container when the user
+              namespace remapping option is enabled.
+          ShmSize:
+            type: "integer"
+            description: |
+              Size of `/dev/shm` in bytes. If omitted, the system uses 64MB.
+            minimum: 0
+          Sysctls:
+            type: "object"
+            description: |
+              A map of kernel parameters (sysctls) to set in the container.
+              For example:
+
+              ```
+              {"net.ipv4.ip_forward": "1"}
+              ```
+            additionalProperties:
+              type: "string"
+          Runtime:
+            type: "string"
+            description: "Runtime to use with this container."
+          # Applicable to Windows
+          ConsoleSize:
+            type: "array"
+            description: |
+              Initial console size, as a `[height, width]` array. (Windows only)
+            minItems: 2
+            maxItems: 2
+            items:
+              type: "integer"
+              minimum: 0
+          Isolation:
+            type: "string"
+            description: |
+              Isolation technology of the container. (Windows only)
+            enum:
+              - "default"
+              - "process"
+              - "hyperv"
+          MaskedPaths:
+            type: "array"
+            description: |
+              The list of paths to be masked inside the container (this overrides
+              the default set of paths).
+            items:
+              type: "string"
+          ReadonlyPaths:
+            type: "array"
+            description: |
+              The list of paths to be set as read-only inside the container
+              (this overrides the default set of paths).
+            items:
+              type: "string"
+
+  ContainerConfig:
+    description: "Configuration for a container that is portable between hosts"
+    type: "object"
+    properties:
+      Hostname:
+        description: "The hostname to use for the container, as a valid RFC 1123 hostname."
+        type: "string"
+      Domainname:
+        description: "The domain name to use for the container."
+        type: "string"
+      User:
+        description: "The user that commands are run as inside the container."
+        type: "string"
+      AttachStdin:
+        description: "Whether to attach to `stdin`."
+        type: "boolean"
+        default: false
+      AttachStdout:
+        description: "Whether to attach to `stdout`."
+        type: "boolean"
+        default: true
+      AttachStderr:
+        description: "Whether to attach to `stderr`."
+        type: "boolean"
+        default: true
+      ExposedPorts:
+        description: |
+          An object mapping ports to an empty object in the form:
+
+          `{"<port>/<tcp|udp|sctp>": {}}`
+        type: "object"
+        additionalProperties:
+          type: "object"
+          enum:
+            - {}
+          default: {}
+      Tty:
+        description: |
+          Attach standard streams to a TTY, including `stdin` if it is not closed.
+        type: "boolean"
+        default: false
+      OpenStdin:
+        description: "Open `stdin`"
+        type: "boolean"
+        default: false
+      StdinOnce:
+        description: "Close `stdin` after one attached client disconnects"
+        type: "boolean"
+        default: false
+      Env:
+        description: |
+          A list of environment variables to set inside the container in the
+          form `["VAR=value", ...]`. A variable without `=` is removed from the
+          environment, rather than having an empty value.
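+
+          For example (variable names are illustrative):
+
+          ```
+          ["MODE=production", "EMPTY=", "DEBUG"]
+          ```
+
+          Here `EMPTY=` is kept with an empty value, while `DEBUG` (no `=`)
+          is removed from the environment.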
+        type: "array"
+        items:
+          type: "string"
+      Cmd:
+        description: |
+          Command to run specified as a string or an array of strings.
+        type: "array"
+        items:
+          type: "string"
+      Healthcheck:
+        $ref: "#/definitions/HealthConfig"
+      ArgsEscaped:
+        description: "Command is already escaped (Windows only)"
+        type: "boolean"
+      Image:
+        description: |
+          The name of the image to use when creating the container.
+        type: "string"
+      Volumes:
+        description: |
+          An object mapping mount point paths inside the container to empty
+          objects.
+        type: "object"
+        additionalProperties:
+          type: "object"
+          enum:
+            - {}
+          default: {}
+      WorkingDir:
+        description: "The working directory for commands to run in."
+        type: "string"
+      Entrypoint:
+        description: |
+          The entry point for the container as a string or an array of strings.
+
+          If the array consists of exactly one empty string (`[""]`) then the
+          entry point is reset to system default (i.e., the entry point used by
+          docker when there is no `ENTRYPOINT` instruction in the `Dockerfile`).
+        type: "array"
+        items:
+          type: "string"
+      NetworkDisabled:
+        description: "Disable networking for the container."
+        type: "boolean"
+      MacAddress:
+        description: "MAC address of the container."
+        type: "string"
+      OnBuild:
+        description: |
+          `ONBUILD` metadata that was defined in the image's `Dockerfile`.
+        type: "array"
+        items:
+          type: "string"
+      Labels:
+        description: "User-defined key/value metadata."
+        type: "object"
+        additionalProperties:
+          type: "string"
+      StopSignal:
+        description: |
+          Signal to stop a container as a string or unsigned integer.
+        type: "string"
+        default: "SIGTERM"
+      StopTimeout:
+        description: "Timeout to stop a container in seconds."
+        type: "integer"
+        default: 10
+      Shell:
+        description: |
+          Shell to use when `RUN`, `CMD`, and `ENTRYPOINT` use a shell.
+        type: "array"
+        items:
+          type: "string"
+
+  NetworkingConfig:
+    description: |
+      NetworkingConfig represents the container's networking configuration for
+      each of its interfaces.
+      It is used for the networking configs specified in the `docker create`
+      and `docker network connect` commands.
+    type: "object"
+    properties:
+      EndpointsConfig:
+        description: |
+          A mapping of network name to endpoint configuration for that network.
+        type: "object"
+        additionalProperties:
+          $ref: "#/definitions/EndpointSettings"
+    example:
+      # putting an example here, instead of using the example values from
+      # /definitions/EndpointSettings, because containers/create currently
+      # does not support attaching to multiple networks, so the example request
+      # would be confusing if it showed that multiple networks can be contained
+      # in the EndpointsConfig.
+      # TODO remove once we support multiple networks on container create (see https://github.com/moby/moby/blob/07e6b843594e061f82baa5fa23c2ff7d536c2a05/daemon/create.go#L323)
+      EndpointsConfig:
+        isolated_nw:
+          IPAMConfig:
+            IPv4Address: "172.20.30.33"
+            IPv6Address: "2001:db8:abcd::3033"
+            LinkLocalIPs:
+              - "169.254.34.68"
+              - "fe80::3468"
+          Links:
+            - "container_1"
+            - "container_2"
+          Aliases:
+            - "server_x"
+            - "server_y"
+
+  NetworkSettings:
+    description: "NetworkSettings exposes the network settings in the API"
+    type: "object"
+    properties:
+      Bridge:
+        description: Name of the network's bridge (for example, `docker0`).
+        type: "string"
+        example: "docker0"
+      SandboxID:
+        description: SandboxID uniquely represents a container's network stack.
+        type: "string"
+        example: "9d12daf2c33f5959c8bf90aa513e4f65b561738661003029ec84830cd503a0c3"
+      HairpinMode:
+        description: |
+          Indicates if hairpin NAT should be enabled on the virtual interface.
+        type: "boolean"
+        example: false
+      LinkLocalIPv6Address:
+        description: IPv6 unicast address using the link-local prefix.
+        type: "string"
+        example: "fe80::42:acff:fe11:1"
+      LinkLocalIPv6PrefixLen:
+        description: Prefix length of the IPv6 unicast address.
+        type: "integer"
+        example: 64
+      Ports:
+        $ref: "#/definitions/PortMap"
+      SandboxKey:
+        description: SandboxKey identifies the sandbox.
+        type: "string"
+        example: "/var/run/docker/netns/8ab54b426c38"
+
+      # TODO is SecondaryIPAddresses actually used?
+      SecondaryIPAddresses:
+        description: ""
+        type: "array"
+        items:
+          $ref: "#/definitions/Address"
+        x-nullable: true
+
+      # TODO is SecondaryIPv6Addresses actually used?
+      SecondaryIPv6Addresses:
+        description: ""
+        type: "array"
+        items:
+          $ref: "#/definitions/Address"
+        x-nullable: true
+
+      # TODO properties below are part of DefaultNetworkSettings, which is
+      # marked as deprecated since Docker 1.9 and to be removed in Docker v17.12
+      EndpointID:
+        description: |
+          EndpointID uniquely represents a service endpoint in a Sandbox.
+
+          <p><br /></p>
+
+          > **Deprecated**: This field is only propagated when attached to the
+          > default "bridge" network. Use the information from the "bridge"
+          > network inside the `Networks` map instead, which contains the same
+          > information. This field was deprecated in Docker 1.9 and is scheduled
+          > to be removed in Docker 17.12.0
+        type: "string"
+        example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b"
+      Gateway:
+        description: |
+          Gateway address for the default "bridge" network.
+
+          <p><br /></p>
+
+          > **Deprecated**: This field is only propagated when attached to the
+          > default "bridge" network. Use the information from the "bridge"
+          > network inside the `Networks` map instead, which contains the same
+          > information. This field was deprecated in Docker 1.9 and is scheduled
+          > to be removed in Docker 17.12.0
+        type: "string"
+        example: "172.17.0.1"
+      GlobalIPv6Address:
+        description: |
+          Global IPv6 address for the default "bridge" network.
+
+          <p><br /></p>
+
+          > **Deprecated**: This field is only propagated when attached to the
+          > default "bridge" network. Use the information from the "bridge"
+          > network inside the `Networks` map instead, which contains the same
+          > information. This field was deprecated in Docker 1.9 and is scheduled
+          > to be removed in Docker 17.12.0
+        type: "string"
+        example: "2001:db8::5689"
+      GlobalIPv6PrefixLen:
+        description: |
+          Mask length of the global IPv6 address.
+
+          <p><br /></p>
+
+          > **Deprecated**: This field is only propagated when attached to the
+          > default "bridge" network. Use the information from the "bridge"
+          > network inside the `Networks` map instead, which contains the same
+          > information. This field was deprecated in Docker 1.9 and is scheduled
+          > to be removed in Docker 17.12.0
+        type: "integer"
+        example: 64
+      IPAddress:
+        description: |
+          IPv4 address for the default "bridge" network.
+
+          <p><br /></p>
+
+          > **Deprecated**: This field is only propagated when attached to the
+          > default "bridge" network. Use the information from the "bridge"
+          > network inside the `Networks` map instead, which contains the same
+          > information. This field was deprecated in Docker 1.9 and is scheduled
+          > to be removed in Docker 17.12.0
+        type: "string"
+        example: "172.17.0.4"
+      IPPrefixLen:
+        description: |
+          Mask length of the IPv4 address.
+
+          <p><br /></p>
+
+          > **Deprecated**: This field is only propagated when attached to the
+          > default "bridge" network. Use the information from the "bridge"
+          > network inside the `Networks` map instead, which contains the same
+          > information. This field was deprecated in Docker 1.9 and is scheduled
+          > to be removed in Docker 17.12.0
+        type: "integer"
+        example: 16
+      IPv6Gateway:
+        description: |
+          IPv6 gateway address for this network.
+
+          <p><br /></p>
+
+          > **Deprecated**: This field is only propagated when attached to the
+          > default "bridge" network. Use the information from the "bridge"
+          > network inside the `Networks` map instead, which contains the same
+          > information. This field was deprecated in Docker 1.9 and is scheduled
+          > to be removed in Docker 17.12.0
+        type: "string"
+        example: "2001:db8:2::100"
+      MacAddress:
+        description: |
+          MAC address for the container on the default "bridge" network.
+
+          <p><br /></p>
+
+          > **Deprecated**: This field is only propagated when attached to the
+          > default "bridge" network. Use the information from the "bridge"
+          > network inside the `Networks` map instead, which contains the same
+          > information. This field was deprecated in Docker 1.9 and is scheduled
+          > to be removed in Docker 17.12.0
+        type: "string"
+        example: "02:42:ac:11:00:04"
+      Networks:
+        description: |
+          Information about all networks that the container is connected to.
+        type: "object"
+        additionalProperties:
+          $ref: "#/definitions/EndpointSettings"
+
+  Address:
+    description: Address represents an IPv4 or IPv6 address.
+    type: "object"
+    properties:
+      Addr:
+        description: IP address.
+        type: "string"
+      PrefixLen:
+        description: Mask length of the IP address.
+        type: "integer"
+
+  PortMap:
+    description: |
+      PortMap describes the mapping of container ports to host ports, using the
+      container's port number and protocol as the key, in the format `<port>/<protocol>`,
+      for example, `80/udp`.
+
+      If a container's port is mapped for multiple protocols, separate entries
+      are added to the mapping table.
+    type: "object"
+    additionalProperties:
+      type: "array"
+      x-nullable: true
+      items:
+        $ref: "#/definitions/PortBinding"
+    example:
+      "443/tcp":
+        - HostIp: "127.0.0.1"
+          HostPort: "4443"
+      "80/tcp":
+        - HostIp: "0.0.0.0"
+          HostPort: "80"
+        - HostIp: "0.0.0.0"
+          HostPort: "8080"
+      "80/udp":
+        - HostIp: "0.0.0.0"
+          HostPort: "80"
+      "53/udp":
+        - HostIp: "0.0.0.0"
+          HostPort: "53"
+      "2377/tcp": null
+
+  PortBinding:
+    description: |
+      PortBinding represents a binding between a host IP address and a host
+      port.
+    type: "object"
+    properties:
+      HostIp:
+        description: "Host IP address that the container's port is mapped to."
+        type: "string"
+        example: "127.0.0.1"
+      HostPort:
+        description: "Host port number that the container's port is mapped to."
+        type: "string"
+        example: "4443"
+
+  GraphDriverData:
+    description: "Information about a container's graph driver."
+    type: "object"
+    required: [Name, Data]
+    properties:
+      Name:
+        type: "string"
+        x-nullable: false
+      Data:
+        type: "object"
+        x-nullable: false
+        additionalProperties:
+          type: "string"
+
+  Image:
+    type: "object"
+    required:
+      - Id
+      - Parent
+      - Comment
+      - Created
+      - Container
+      - DockerVersion
+      - Author
+      - Architecture
+      - Os
+      - Size
+      - VirtualSize
+      - GraphDriver
+      - RootFS
+    properties:
+      Id:
+        type: "string"
+        x-nullable: false
+      RepoTags:
+        type: "array"
+        items:
+          type: "string"
+      RepoDigests:
+        type: "array"
+        items:
+          type: "string"
+      Parent:
+        type: "string"
+        x-nullable: false
+      Comment:
+        type: "string"
+        x-nullable: false
+      Created:
+        type: "string"
+        x-nullable: false
+      Container:
+        type: "string"
+        x-nullable: false
+      ContainerConfig:
+        $ref: "#/definitions/ContainerConfig"
+      DockerVersion:
+        type: "string"
+        x-nullable: false
+      Author:
+        type: "string"
+        x-nullable: false
+      Config:
+        $ref: "#/definitions/ContainerConfig"
+      Architecture:
+        type: "string"
+        x-nullable: false
+      Os:
+        type: "string"
+        x-nullable: false
+      OsVersion:
+        type: "string"
+      Size:
+        type: "integer"
+        format: "int64"
+        x-nullable: false
+      VirtualSize:
+        type: "integer"
+        format: "int64"
+        x-nullable: false
+      GraphDriver:
+        $ref: "#/definitions/GraphDriverData"
+      RootFS:
+        type: "object"
+        required: [Type]
+        properties:
+          Type:
+            type: "string"
+            x-nullable: false
+          Layers:
+            type: "array"
+            items:
+              type: "string"
+          BaseLayer:
+            type: "string"
+      Metadata:
+        type: "object"
+        properties:
+          LastTagTime:
+            type: "string"
+            format: "dateTime"
+
+  ImageSummary:
+    type: "object"
+    required:
+      - Id
+      - ParentId
+      - RepoTags
+      - RepoDigests
+      - Created
+      - Size
+      - SharedSize
+      - VirtualSize
+      - Labels
+      - Containers
+    properties:
+      Id:
+        type: "string"
+        x-nullable: false
+      ParentId:
+        type: "string"
+        x-nullable: false
+      RepoTags:
+        type: "array"
+        x-nullable: false
+        items:
+          type: "string"
+      RepoDigests:
+        type: "array"
+        x-nullable: false
+        items:
+          type: "string"
+      Created:
+        type: "integer"
+        x-nullable: false
+      Size:
+        type: "integer"
+        x-nullable: false
+      SharedSize:
+        type: "integer"
+        x-nullable: false
+      VirtualSize:
+        type: "integer"
+        x-nullable: false
+      Labels:
+        type: "object"
+        x-nullable: false
+        additionalProperties:
+          type: "string"
+      Containers:
+        x-nullable: false
+        type: "integer"
+
+  AuthConfig:
+    type: "object"
+    properties:
+      username:
+        type: "string"
+      password:
+        type: "string"
+      email:
+        type: "string"
+      serveraddress:
+        type: "string"
+    example:
+      username: "hannibal"
+      password: "xxxx"
+      serveraddress: "https://index.docker.io/v1/"
+
+  ProcessConfig:
+    type: "object"
+    properties:
+      privileged:
+        type: "boolean"
+      user:
+        type: "string"
+      tty:
+        type: "boolean"
+      entrypoint:
+        type: "string"
+      arguments:
+        type: "array"
+        items:
+          type: "string"
+
+  Volume:
+    type: "object"
+    required: [Name, Driver, Mountpoint, Labels, Scope, Options]
+    properties:
+      Name:
+        type: "string"
+        description: "Name of the volume."
+        x-nullable: false
+      Driver:
+        type: "string"
+        description: "Name of the volume driver used by the volume."
+        x-nullable: false
+      Mountpoint:
+        type: "string"
+        description: "Mount path of the volume on the host."
+        x-nullable: false
+      CreatedAt:
+        type: "string"
+        format: "dateTime"
+        description: "Date/Time the volume was created."
+      Status:
+        type: "object"
+        description: |
+          Low-level details about the volume, provided by the volume driver.
+          Details are returned as a map with key/value pairs:
+          `{"key":"value","key2":"value2"}`.
+
+          The `Status` field is optional, and is omitted if the volume driver
+          does not support this feature.
+        additionalProperties:
+          type: "object"
+      Labels:
+        type: "object"
+        description: "User-defined key/value metadata."
+        x-nullable: false
+        additionalProperties:
+          type: "string"
+      Scope:
+        type: "string"
+        description: |
+          The level at which the volume exists. Either `global` for cluster-wide,
+          or `local` for machine level.
+        default: "local"
+        x-nullable: false
+        enum: ["local", "global"]
+      Options:
+        type: "object"
+        description: |
+          Driver-specific options used when creating the volume.
+        additionalProperties:
+          type: "string"
+      UsageData:
+        type: "object"
+        x-nullable: true
+        required: [Size, RefCount]
+        description: |
+          Usage details about the volume. This information is used by the
+          `GET /system/df` endpoint, and omitted in other endpoints.
+        properties:
+          Size:
+            type: "integer"
+            default: -1
+            description: |
+              Amount of disk space used by the volume (in bytes). This information
+              is only available for volumes created with the `"local"` volume
+              driver. For volumes created with other volume drivers, this field
+              is set to `-1` ("not available").
+            x-nullable: false
+          RefCount:
+            type: "integer"
+            default: -1
+            description: |
+              The number of containers referencing this volume. This field
+              is set to `-1` if the reference-count is not available.
+            x-nullable: false
+
+    example:
+      Name: "tardis"
+      Driver: "custom"
+      Mountpoint: "/var/lib/docker/volumes/tardis"
+      Status:
+        hello: "world"
+      Labels:
+        com.example.some-label: "some-value"
+        com.example.some-other-label: "some-other-value"
+      Scope: "local"
+      CreatedAt: "2016-06-07T20:31:11.853781916Z"
+
+  Network:
+    type: "object"
+    properties:
+      Name:
+        type: "string"
+      Id:
+        type: "string"
+      Created:
+        type: "string"
+        format: "dateTime"
+      Scope:
+        type: "string"
+      Driver:
+        type: "string"
+      EnableIPv6:
+        type: "boolean"
+      IPAM:
+        $ref: "#/definitions/IPAM"
+      Internal:
+        type: "boolean"
+      Attachable:
+        type: "boolean"
+      Ingress:
+        type: "boolean"
+      Containers:
+        type: "object"
+        additionalProperties:
+          $ref: "#/definitions/NetworkContainer"
+      Options:
+        type: "object"
+        additionalProperties:
+          type: "string"
+      Labels:
+        type: "object"
+        additionalProperties:
+          type: "string"
+    example:
+      Name: "net01"
+      Id: "7d86d31b1478e7cca9ebed7e73aa0fdeec46c5ca29497431d3007d2d9e15ed99"
+      Created: "2016-10-19T04:33:30.360899459Z"
+      Scope: "local"
+      Driver: "bridge"
+      EnableIPv6: false
+      IPAM:
+        Driver: "default"
+        Config:
+          - Subnet: "172.19.0.0/16"
+            Gateway: "172.19.0.1"
+        Options:
+          foo: "bar"
+      Internal: false
+      Attachable: false
+      Ingress: false
+      Containers:
+        19a4d5d687db25203351ed79d478946f861258f018fe384f229f2efa4b23513c:
+          Name: "test"
+          EndpointID: "628cadb8bcb92de107b2a1e516cbffe463e321f548feb37697cce00ad694f21a"
+          MacAddress: "02:42:ac:13:00:02"
+          IPv4Address: "172.19.0.2/16"
+          IPv6Address: ""
+      Options:
+        com.docker.network.bridge.default_bridge: "true"
+        com.docker.network.bridge.enable_icc: "true"
+        com.docker.network.bridge.enable_ip_masquerade: "true"
+        com.docker.network.bridge.host_binding_ipv4: "0.0.0.0"
+        com.docker.network.bridge.name: "docker0"
+        com.docker.network.driver.mtu: "1500"
+      Labels:
+        com.example.some-label: "some-value"
+        com.example.some-other-label: "some-other-value"
+  IPAM:
+    type: "object"
+    properties:
+      Driver:
+        description: "Name of the IPAM driver to use."
+        type: "string"
+        default: "default"
+      Config:
+        description: |
+          List of IPAM configuration options, each specified as a map:
+
+          ```
+          {"Subnet": <CIDR>, "IPRange": <CIDR>, "Gateway": <IP address>, "AuxAddress": <device_name:IP address>}
+          ```
+        type: "array"
+        items:
+          type: "object"
+          additionalProperties:
+            type: "string"
+      Options:
+        description: "Driver-specific options, specified as a map."
+        type: "object"
+        additionalProperties:
+          type: "string"
+
+  NetworkContainer:
+    type: "object"
+    properties:
+      Name:
+        type: "string"
+      EndpointID:
+        type: "string"
+      MacAddress:
+        type: "string"
+      IPv4Address:
+        type: "string"
+      IPv6Address:
+        type: "string"
+
+  BuildInfo:
+    type: "object"
+    properties:
+      id:
+        type: "string"
+      stream:
+        type: "string"
+      error:
+        type: "string"
+      errorDetail:
+        $ref: "#/definitions/ErrorDetail"
+      status:
+        type: "string"
+      progress:
+        type: "string"
+      progressDetail:
+        $ref: "#/definitions/ProgressDetail"
+      aux:
+        $ref: "#/definitions/ImageID"
+
+  BuildCache:
+    type: "object"
+    description: |
+      BuildCache contains information about a build cache record.
+    properties:
+      ID:
+        type: "string"
+        description: |
+          Unique ID of the build cache record.
+        example: "ndlpt0hhvkqcdfkputsk4cq9c"
+      Parent:
+        description: |
+          ID of the parent build cache record.
+        type: "string"
+        example: "hw53o5aio51xtltp5xjp8v7fx"
+      Type:
+        type: "string"
+        description: |
+          Cache record type.
+        example: "regular"
+        # see https://github.com/moby/buildkit/blob/fce4a32258dc9d9664f71a4831d5de10f0670677/client/diskusage.go#L75-L84
+        enum:
+          - "internal"
+          - "frontend"
+          - "source.local"
+          - "source.git.checkout"
+          - "exec.cachemount"
+          - "regular"
+      Description:
+        type: "string"
+        description: |
+          Description of the build-step that produced the build cache.
+        example: "mount / from exec /bin/sh -c echo 'Binary::apt::APT::Keep-Downloaded-Packages \"true\";' > /etc/apt/apt.conf.d/keep-cache"
+      InUse:
+        type: "boolean"
+        description: |
+          Indicates if the build cache is in use.
+        example: false
+      Shared:
+        type: "boolean"
+        description: |
+          Indicates if the build cache is shared.
+        example: true
+      Size:
+        description: |
+          Amount of disk space used by the build cache (in bytes).
+        type: "integer"
+        example: 51
+      CreatedAt:
+        description: |
+          Date and time at which the build cache was created in
+          [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds.
+        type: "string"
+        format: "dateTime"
+        example: "2016-08-18T10:44:24.496525531Z"
+      LastUsedAt:
+        description: |
+          Date and time at which the build cache was last used in
+          [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds.
+        type: "string"
+        format: "dateTime"
+        x-nullable: true
+        example: "2017-08-09T07:09:37.632105588Z"
+      UsageCount:
+        type: "integer"
+        example: 26
+
+  ImageID:
+    type: "object"
+    description: "Image ID or Digest"
+    properties:
+      ID:
+        type: "string"
+    example:
+      ID: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c"
+
+  CreateImageInfo:
+    type: "object"
+    properties:
+      id:
+        type: "string"
+      error:
+        type: "string"
+      status:
+        type: "string"
+      progress:
+        type: "string"
+      progressDetail:
+        $ref: "#/definitions/ProgressDetail"
+
+  PushImageInfo:
+    type: "object"
+    properties:
+      error:
+        type: "string"
+      status:
+        type: "string"
+      progress:
+        type: "string"
+      progressDetail:
+        $ref: "#/definitions/ProgressDetail"
+
+  ErrorDetail:
+    type: "object"
+    properties:
+      code:
+        type: "integer"
+      message:
+        type: "string"
+
+  ProgressDetail:
+    type: "object"
+    properties:
+      current:
+        type: "integer"
+      total:
+        type: "integer"
+
+  ErrorResponse:
+    description: "Represents an error."
+    type: "object"
+    required: ["message"]
+    properties:
+      message:
+        description: "The error message."
+        type: "string"
+        x-nullable: false
+    example:
+      message: "Something went wrong."
+
+  IdResponse:
+    description: "Response to an API call that returns just an Id"
+    type: "object"
+    required: ["Id"]
+    properties:
+      Id:
+        description: "The id of the newly created object."
+        type: "string"
+        x-nullable: false
+
+  EndpointSettings:
+    description: "Configuration for a network endpoint."
+    type: "object"
+    properties:
+      # Configurations
+      IPAMConfig:
+        $ref: "#/definitions/EndpointIPAMConfig"
+      Links:
+        type: "array"
+        items:
+          type: "string"
+        example:
+          - "container_1"
+          - "container_2"
+      Aliases:
+        type: "array"
+        items:
+          type: "string"
+        example:
+          - "server_x"
+          - "server_y"
+
+      # Operational data
+      NetworkID:
+        description: |
+          Unique ID of the network.
+        type: "string"
+        example: "08754567f1f40222263eab4102e1c733ae697e8e354aa9cd6e18d7402835292a"
+      EndpointID:
+        description: |
+          Unique ID for the service endpoint in a Sandbox.
+        type: "string"
+        example: "b88f5b905aabf2893f3cbc4ee42d1ea7980bbc0a92e2c8922b1e1795298afb0b"
+      Gateway:
+        description: |
+          Gateway address for this network.
+        type: "string"
+        example: "172.17.0.1"
+      IPAddress:
+        description: |
+          IPv4 address.
+        type: "string"
+        example: "172.17.0.4"
+      IPPrefixLen:
+        description: |
+          Mask length of the IPv4 address.
+        type: "integer"
+        example: 16
+      IPv6Gateway:
+        description: |
+          IPv6 gateway address.
+        type: "string"
+        example: "2001:db8:2::100"
+      GlobalIPv6Address:
+        description: |
+          Global IPv6 address.
+        type: "string"
+        example: "2001:db8::5689"
+      GlobalIPv6PrefixLen:
+        description: |
+          Mask length of the global IPv6 address.
+        type: "integer"
+        format: "int64"
+        example: 64
+      MacAddress:
+        description: |
+          MAC address for the endpoint on this network.
+        type: "string"
+        example: "02:42:ac:11:00:04"
+      DriverOpts:
+        description: |
+          DriverOpts is a mapping of driver options and values. These options
+          are passed directly to the driver and are driver specific.
+        type: "object"
+        x-nullable: true
+        additionalProperties:
+          type: "string"
+        example:
+          com.example.some-label: "some-value"
+          com.example.some-other-label: "some-other-value"
+
+  EndpointIPAMConfig:
+    description: |
+      EndpointIPAMConfig represents an endpoint's IPAM configuration.
+    type: "object"
+    x-nullable: true
+    properties:
+      IPv4Address:
+        type: "string"
+        example: "172.20.30.33"
+      IPv6Address:
+        type: "string"
+        example: "2001:db8:abcd::3033"
+      LinkLocalIPs:
+        type: "array"
+        items:
+          type: "string"
+        example:
+          - "169.254.34.68"
+          - "fe80::3468"
+
+  PluginMount:
+    type: "object"
+    x-nullable: false
+    required: [Name, Description, Settable, Source, Destination, Type, Options]
+    properties:
+      Name:
+        type: "string"
+        x-nullable: false
+        example: "some-mount"
+      Description:
+        type: "string"
+        x-nullable: false
+        example: "This is a mount that's used by the plugin."
+      Settable:
+        type: "array"
+        items:
+          type: "string"
+      Source:
+        type: "string"
+        example: "/var/lib/docker/plugins/"
+      Destination:
+        type: "string"
+        x-nullable: false
+        example: "/mnt/state"
+      Type:
+        type: "string"
+        x-nullable: false
+        example: "bind"
+      Options:
+        type: "array"
+        items:
+          type: "string"
+        example:
+          - "rbind"
+          - "rw"
+
+  PluginDevice:
+    type: "object"
+    required: [Name, Description, Settable, Path]
+    x-nullable: false
+    properties:
+      Name:
+        type: "string"
+        x-nullable: false
+      Description:
+        type: "string"
+        x-nullable: false
+      Settable:
+        type: "array"
+        items:
+          type: "string"
+      Path:
+        type: "string"
+        example: "/dev/fuse"
+
+  PluginEnv:
+    type: "object"
+    x-nullable: false
+    required: [Name, Description, Settable, Value]
+    properties:
+      Name:
+        x-nullable: false
+        type: "string"
+      Description:
+        x-nullable: false
+        type: "string"
+      Settable:
+        type: "array"
+        items:
+          type: "string"
+      Value:
+        type: "string"
+
+  PluginInterfaceType:
+    type: "object"
+    x-nullable: false
+    required: [Prefix, Capability, Version]
+    properties:
+      Prefix:
+        type: "string"
+        x-nullable: false
+      Capability:
+        type: "string"
+        x-nullable: false
+      Version:
+        type: "string"
+        x-nullable: false
+
+  Plugin:
+    description: "A plugin for the Engine API"
+    type: "object"
+    required: [Settings, Enabled, Config, Name]
+    properties:
+      Id:
+        type: "string"
+        example: "5724e2c8652da337ab2eedd19fc6fc0ec908e4bd907c7421bf6a8dfc70c4c078"
+      Name:
+        type: "string"
+        x-nullable: false
+        example: "tiborvass/sample-volume-plugin"
+      Enabled:
+        description:
+          True if the plugin is running. False if the plugin is not running,
+          only installed.
+        type: "boolean"
+        x-nullable: false
+        example: true
+      Settings:
+        description: "Settings that can be modified by users."
+        type: "object"
+        x-nullable: false
+        required: [Args, Devices, Env, Mounts]
+        properties:
+          Mounts:
+            type: "array"
+            items:
+              $ref: "#/definitions/PluginMount"
+          Env:
+            type: "array"
+            items:
+              type: "string"
+            example:
+              - "DEBUG=0"
+          Args:
+            type: "array"
+            items:
+              type: "string"
+          Devices:
+            type: "array"
+            items:
+              $ref: "#/definitions/PluginDevice"
+      PluginReference:
+        description: "plugin remote reference used to push/pull the plugin"
+        type: "string"
+        x-nullable: false
+        example: "localhost:5000/tiborvass/sample-volume-plugin:latest"
+      Config:
+        description: "The config of a plugin."
+        type: "object"
+        x-nullable: false
+        required:
+          - Description
+          - Documentation
+          - Interface
+          - Entrypoint
+          - WorkDir
+          - Network
+          - Linux
+          - PidHost
+          - PropagatedMount
+          - IpcHost
+          - Mounts
+          - Env
+          - Args
+        properties:
+          DockerVersion:
+            description: "Docker Version used to create the plugin"
+            type: "string"
+            x-nullable: false
+            example: "17.06.0-ce"
+          Description:
+            type: "string"
+            x-nullable: false
+            example: "A sample volume plugin for Docker"
+          Documentation:
+            type: "string"
+            x-nullable: false
+            example: "https://docs.docker.com/engine/extend/plugins/"
+          Interface:
+            description: "The interface between Docker and the plugin"
+            x-nullable: false
+            type: "object"
+            required: [Types, Socket]
+            properties:
+              Types:
+                type: "array"
+                items:
+                  $ref: "#/definitions/PluginInterfaceType"
+                example:
+                  - "docker.volumedriver/1.0"
+              Socket:
+                type: "string"
+                x-nullable: false
+                example: "plugins.sock"
+              ProtocolScheme:
+                type: "string"
+                example: "some.protocol/v1.0"
+                description: "Protocol to use for clients connecting to the plugin."
+                enum:
+                  - ""
+                  - "moby.plugins.http/v1"
+          Entrypoint:
+            type: "array"
+            items:
+              type: "string"
+            example:
+              - "/usr/bin/sample-volume-plugin"
+              - "/data"
+          WorkDir:
+            type: "string"
+            x-nullable: false
+            example: "/bin/"
+          User:
+            type: "object"
+            x-nullable: false
+            properties:
+              UID:
+                type: "integer"
+                format: "uint32"
+                example: 1000
+              GID:
+                type: "integer"
+                format: "uint32"
+                example: 1000
+          Network:
+            type: "object"
+            x-nullable: false
+            required: [Type]
+            properties:
+              Type:
+                x-nullable: false
+                type: "string"
+                example: "host"
+          Linux:
+            type: "object"
+            x-nullable: false
+            required: [Capabilities, AllowAllDevices, Devices]
+            properties:
+              Capabilities:
+                type: "array"
+                items:
+                  type: "string"
+                example:
+                  - "CAP_SYS_ADMIN"
+                  - "CAP_SYSLOG"
+              AllowAllDevices:
+                type: "boolean"
+                x-nullable: false
+                example: false
+              Devices:
+                type: "array"
+                items:
+                  $ref: "#/definitions/PluginDevice"
+          PropagatedMount:
+            type: "string"
+            x-nullable: false
+            example: "/mnt/volumes"
+          IpcHost:
+            type: "boolean"
+            x-nullable: false
+            example: false
+          PidHost:
+            type: "boolean"
+            x-nullable: false
+            example: false
+          Mounts:
+            type: "array"
+            items:
+              $ref: "#/definitions/PluginMount"
+          Env:
+            type: "array"
+            items:
+              $ref: "#/definitions/PluginEnv"
+            example:
+              - Name: "DEBUG"
+                Description: "If set, prints debug messages"
+                Settable: null
+                Value: "0"
+          Args:
+            type: "object"
+            x-nullable: false
+            required: [Name, Description, Settable, Value]
+            properties:
+              Name:
+                x-nullable: false
+                type: "string"
+                example: "args"
+              Description:
+                x-nullable: false
+                type: "string"
+                example: "command line arguments"
+              Settable:
+                type: "array"
+                items:
+                  type: "string"
+              Value:
+                type: "array"
+                items:
+                  type: "string"
+          rootfs:
+            type: "object"
+            properties:
+              type:
+                type: "string"
+                example: "layers"
+              diff_ids:
+                type: "array"
+                items:
+                  type: "string"
+                example:
+                  - "sha256:675532206fbf3030b8458f88d6e26d4eb1577688a25efec97154c94e8b6b4887"
+                  - "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8"
+
+  ObjectVersion:
+    description: |
+      The version number of the object such as node, service, etc. This is needed
+      to avoid conflicting writes. The client must send the version number along
+      with the modified specification when updating these objects.
+
+      This approach ensures safe concurrency and determinism in that the change
+      on the object may not be applied if the version number has changed from the
+      last read. In other words, if two update requests specify the same base
+      version, only one of the requests can succeed. As a result, two separate
+      update requests that happen at the same time will not unintentionally
+      overwrite each other.
+    type: "object"
+    properties:
+      Index:
+        type: "integer"
+        format: "uint64"
+        example: 373531
+
+  NodeSpec:
+    type: "object"
+    properties:
+      Name:
+        description: "Name for the node."
+        type: "string"
+        example: "my-node"
+      Labels:
+        description: "User-defined key/value metadata."
+        type: "object"
+        additionalProperties:
+          type: "string"
+      Role:
+        description: "Role of the node."
+        type: "string"
+        enum:
+          - "worker"
+          - "manager"
+        example: "manager"
+      Availability:
+        description: "Availability of the node."
+        type: "string"
+        enum:
+          - "active"
+          - "pause"
+          - "drain"
+        example: "active"
+    example:
+      Availability: "active"
+      Name: "node-name"
+      Role: "manager"
+      Labels:
+        foo: "bar"
+
+  Node:
+    type: "object"
+    properties:
+      ID:
+        type: "string"
+        example: "24ifsmvkjbyhk"
+      Version:
+        $ref: "#/definitions/ObjectVersion"
+      CreatedAt:
+        description: |
+          Date and time at which the node was added to the swarm in
+          [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds.
+        type: "string"
+        format: "dateTime"
+        example: "2016-08-18T10:44:24.496525531Z"
+      UpdatedAt:
+        description: |
+          Date and time at which the node was last updated in
+          [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds.
+        type: "string"
+        format: "dateTime"
+        example: "2017-08-09T07:09:37.632105588Z"
+      Spec:
+        $ref: "#/definitions/NodeSpec"
+      Description:
+        $ref: "#/definitions/NodeDescription"
+      Status:
+        $ref: "#/definitions/NodeStatus"
+      ManagerStatus:
+        $ref: "#/definitions/ManagerStatus"
+
+  NodeDescription:
+    description: |
+      NodeDescription encapsulates the properties of the Node as reported by the
+      agent.
+    type: "object"
+    properties:
+      Hostname:
+        type: "string"
+        example: "bf3067039e47"
+      Platform:
+        $ref: "#/definitions/Platform"
+      Resources:
+        $ref: "#/definitions/ResourceObject"
+      Engine:
+        $ref: "#/definitions/EngineDescription"
+      TLSInfo:
+        $ref: "#/definitions/TLSInfo"
+
+  Platform:
+    description: |
+      Platform represents the platform (Arch/OS).
+    type: "object"
+    properties:
+      Architecture:
+        description: |
+          Architecture represents the hardware architecture (for example,
+          `x86_64`).
+        type: "string"
+        example: "x86_64"
+      OS:
+        description: |
+          OS represents the Operating System (for example, `linux` or `windows`).
+        type: "string"
+        example: "linux"
+
+  EngineDescription:
+    description: "EngineDescription provides information about an engine."
+    type: "object"
+    properties:
+      EngineVersion:
+        type: "string"
+        example: "17.06.0"
+      Labels:
+        type: "object"
+        additionalProperties:
+          type: "string"
+        example:
+          foo: "bar"
+      Plugins:
+        type: "array"
+        items:
+          type: "object"
+          properties:
+            Type:
+              type: "string"
+            Name:
+              type: "string"
+        example:
+          - Type: "Log"
+            Name: "awslogs"
+          - Type: "Log"
+            Name: "fluentd"
+          - Type: "Log"
+            Name: "gcplogs"
+          - Type: "Log"
+            Name: "gelf"
+          - Type: "Log"
+            Name: "journald"
+          - Type: "Log"
+            Name: "json-file"
+          - Type: "Log"
+            Name: "logentries"
+          - Type: "Log"
+            Name: "splunk"
+          - Type: "Log"
+            Name: "syslog"
+          - Type: "Network"
+            Name: "bridge"
+          - Type: "Network"
+            Name: "host"
+          - Type: "Network"
+            Name: "ipvlan"
+          - Type: "Network"
+            Name: "macvlan"
+          - Type: "Network"
+            Name: "null"
+          - Type: "Network"
+            Name: "overlay"
+          - Type: "Volume"
+            Name: "local"
+          - Type: "Volume"
+            Name: "localhost:5000/vieux/sshfs:latest"
+          - Type: "Volume"
+            Name: "vieux/sshfs:latest"
+
+  TLSInfo:
+    description: |
+      Information about the issuer of leaf TLS certificates and the trusted root
+      CA certificate.
+    type: "object"
+    properties:
+      TrustRoot:
+        description: |
+          The root CA certificate(s) that are used to validate leaf TLS
+          certificates.
+        type: "string"
+      CertIssuerSubject:
+        description:
+          The base64-url-safe-encoded raw subject bytes of the issuer.
+        type: "string"
+      CertIssuerPublicKey:
+        description: |
+          The base64-url-safe-encoded raw public key bytes of the issuer.
+        type: "string"
+    example:
+      TrustRoot: |
+        -----BEGIN CERTIFICATE-----
+        MIIBajCCARCgAwIBAgIUbYqrLSOSQHoxD8CwG6Bi2PJi9c8wCgYIKoZIzj0EAwIw
+        EzERMA8GA1UEAxMIc3dhcm0tY2EwHhcNMTcwNDI0MjE0MzAwWhcNMzcwNDE5MjE0
+        MzAwWjATMREwDwYDVQQDEwhzd2FybS1jYTBZMBMGByqGSM49AgEGCCqGSM49AwEH
+        A0IABJk/VyMPYdaqDXJb/VXh5n/1Yuv7iNrxV3Qb3l06XD46seovcDWs3IZNV1lf
+        3Skyr0ofcchipoiHkXBODojJydSjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMB
+        Af8EBTADAQH/MB0GA1UdDgQWBBRUXxuRcnFjDfR/RIAUQab8ZV/n4jAKBggqhkjO
+        PQQDAgNIADBFAiAy+JTe6Uc3KyLCMiqGl2GyWGQqQDEcO3/YG36x7om65AIhAJvz
+        pxv6zFeVEkAEEkqIYi0omA9+CjanB/6Bz4n1uw8H
+        -----END CERTIFICATE-----
+      CertIssuerSubject: "MBMxETAPBgNVBAMTCHN3YXJtLWNh"
+      CertIssuerPublicKey: "MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEmT9XIw9h1qoNclv9VeHmf/Vi6/uI2vFXdBveXTpcPjqx6i9wNazchk1XWV/dKTKvSh9xyGKmiIeRcE4OiMnJ1A=="
+
+  NodeStatus:
+    description: |
+      NodeStatus represents the status of a node.
+
+      It provides the current status of the node, as seen by the manager.
+    type: "object"
+    properties:
+      State:
+        $ref: "#/definitions/NodeState"
+      Message:
+        type: "string"
+        example: ""
+      Addr:
+        description: "IP address of the node."
+        type: "string"
+        example: "172.17.0.2"
+
+  NodeState:
+    description: "NodeState represents the state of a node."
+    type: "string"
+    enum:
+      - "unknown"
+      - "down"
+      - "ready"
+      - "disconnected"
+    example: "ready"
+
+  ManagerStatus:
+    description: |
+      ManagerStatus represents the status of a manager.
+
+      It provides the current status of a node's manager component, if the node
+      is a manager.
+    x-nullable: true
+    type: "object"
+    properties:
+      Leader:
+        type: "boolean"
+        default: false
+        example: true
+      Reachability:
+        $ref: "#/definitions/Reachability"
+      Addr:
+        description: |
+          The IP address and port at which the manager is reachable.
+        type: "string"
+        example: "10.0.0.46:2377"
+
+  Reachability:
+    description: "Reachability represents the reachability of a node."
+    type: "string"
+    enum:
+      - "unknown"
+      - "unreachable"
+      - "reachable"
+    example: "reachable"
+
+  SwarmSpec:
+    description: "User modifiable swarm configuration."
+    type: "object"
+    properties:
+      Name:
+        description: "Name of the swarm."
+        type: "string"
+        example: "default"
+      Labels:
+        description: "User-defined key/value metadata."
+        type: "object"
+        additionalProperties:
+          type: "string"
+        example:
+          com.example.corp.type: "production"
+          com.example.corp.department: "engineering"
+      Orchestration:
+        description: "Orchestration configuration."
+        type: "object"
+        x-nullable: true
+        properties:
+          TaskHistoryRetentionLimit:
+            description: |
+              The number of historic tasks to keep per instance or node. If
+              negative, never remove completed or failed tasks.
+            type: "integer"
+            format: "int64"
+            example: 10
+      Raft:
+        description: "Raft configuration."
+        type: "object"
+        properties:
+          SnapshotInterval:
+            description: "The number of log entries between snapshots."
+            type: "integer"
+            format: "uint64"
+            example: 10000
+          KeepOldSnapshots:
+            description: |
+              The number of snapshots to keep beyond the current snapshot.
+            type: "integer"
+            format: "uint64"
+          LogEntriesForSlowFollowers:
+            description: |
+              The number of log entries to keep around to sync up slow followers
+              after a snapshot is created.
+            type: "integer"
+            format: "uint64"
+            example: 500
+          ElectionTick:
+            description: |
+              The number of ticks that a follower will wait for a message from
+              the leader before becoming a candidate and starting an election.
+              `ElectionTick` must be greater than `HeartbeatTick`.
+
+              A tick currently defaults to one second, so these translate
+              directly to seconds currently, but this is NOT guaranteed.
+            type: "integer"
+            example: 3
+          HeartbeatTick:
+            description: |
+              The number of ticks between heartbeats. Every HeartbeatTick ticks,
+              the leader will send a heartbeat to the followers.
+
+              A tick currently defaults to one second, so these translate
+              directly to seconds currently, but this is NOT guaranteed.
+            type: "integer"
+            example: 1
+      Dispatcher:
+        description: "Dispatcher configuration."
+        type: "object"
+        x-nullable: true
+        properties:
+          HeartbeatPeriod:
+            description: |
+              The delay for an agent to send a heartbeat to the dispatcher.
+            type: "integer"
+            format: "int64"
+            example: 5000000000
+      CAConfig:
+        description: "CA configuration."
+        type: "object"
+        x-nullable: true
+        properties:
+          NodeCertExpiry:
+            description: "The duration node certificates are issued for."
+            type: "integer"
+            format: "int64"
+            example: 7776000000000000
+          ExternalCAs:
+            description: |
+              Configuration for forwarding signing requests to an external
+              certificate authority.
+            type: "array"
+            items:
+              type: "object"
+              properties:
+                Protocol:
+                  description: |
+                    Protocol for communication with the external CA (currently
+                    only `cfssl` is supported).
+                  type: "string"
+                  enum:
+                    - "cfssl"
+                  default: "cfssl"
+                URL:
+                  description: |
+                    URL where certificate signing requests should be sent.
+                  type: "string"
+                Options:
+                  description: |
+                    An object with key/value pairs that are interpreted as
+                    protocol-specific options for the external CA driver.
+                  type: "object"
+                  additionalProperties:
+                    type: "string"
+                CACert:
+                  description: |
+                    The root CA certificate (in PEM format) this external CA uses
+                    to issue TLS certificates (assumed to be to the current swarm
+                    root CA certificate if not provided).
+                  type: "string"
+          SigningCACert:
+            description: |
+              The desired signing CA certificate for all swarm node TLS leaf
+              certificates, in PEM format.
+            type: "string"
+          SigningCAKey:
+            description: |
+              The desired signing CA key for all swarm node TLS leaf certificates,
+              in PEM format.
+            type: "string"
+          ForceRotate:
+            description: |
+              An integer whose purpose is to force swarm to generate a new
+              signing CA certificate and key, if none have been specified in
+              `SigningCACert` and `SigningCAKey`.
+            format: "uint64"
+            type: "integer"
+      EncryptionConfig:
+        description: "Parameters related to encryption-at-rest."
+        type: "object"
+        properties:
+          AutoLockManagers:
+            description: |
+              If set, generate a key and use it to lock data stored on the
+              managers.
+            type: "boolean"
+            example: false
+      TaskDefaults:
+        description: "Defaults for creating tasks in this cluster."
+        type: "object"
+        properties:
+          LogDriver:
+            description: |
+              The log driver to use for tasks created in the orchestrator if
+              unspecified by a service.
+
+              Updating this value only affects new tasks. Existing tasks continue
+              to use their previously configured log driver until recreated.
+            type: "object"
+            properties:
+              Name:
+                description: |
+                  The log driver to use as a default for new tasks.
+                type: "string"
+                example: "json-file"
+              Options:
+                description: |
+                  Driver-specific options for the selected log driver, specified
+                  as key/value pairs.
+                type: "object"
+                additionalProperties:
+                  type: "string"
+                example:
+                  "max-file": "10"
+                  "max-size": "100m"
+
+  # The Swarm information for `GET /info`. It is the same as `GET /swarm`, but
+  # without `JoinTokens`.
+  ClusterInfo:
+    description: |
+      ClusterInfo represents information about the swarm as is returned by the
+      "/info" endpoint. Join-tokens are not included.
+    x-nullable: true
+    type: "object"
+    properties:
+      ID:
+        description: "The ID of the swarm."
+        type: "string"
+        example: "abajmipo7b4xz5ip2nrla6b11"
+      Version:
+        $ref: "#/definitions/ObjectVersion"
+      CreatedAt:
+        description: |
+          Date and time at which the swarm was initialised in
+          [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds.
+        type: "string"
+        format: "dateTime"
+        example: "2016-08-18T10:44:24.496525531Z"
+      UpdatedAt:
+        description: |
+          Date and time at which the swarm was last updated in
+          [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format with nano-seconds.
+        type: "string"
+        format: "dateTime"
+        example: "2017-08-09T07:09:37.632105588Z"
+      Spec:
+        $ref: "#/definitions/SwarmSpec"
+      TLSInfo:
+        $ref: "#/definitions/TLSInfo"
+      RootRotationInProgress:
+        description: |
+          Whether there is currently a root CA rotation in progress for the swarm
+        type: "boolean"
+        example: false
+      DataPathPort:
+        description: |
+          DataPathPort specifies the data path port number for data traffic.
+          Acceptable port range is 1024 to 49151.
+          If no port is set or is set to 0, the default port (4789) is used.
+        type: "integer"
+        format: "uint32"
+        default: 4789
+        example: 4789
+      DefaultAddrPool:
+        description: |
+          Default Address Pool specifies default subnet pools for global scope
+          networks.
+        type: "array"
+        items:
+          type: "string"
+          format: "CIDR"
+          example: ["10.10.0.0/16", "20.20.0.0/16"]
+      SubnetSize:
+        description: |
+          SubnetSize specifies the subnet size of the networks created from the
+          default subnet pool.
+        type: "integer"
+        format: "uint32"
+        maximum: 29
+        default: 24
+        example: 24
+
+  JoinTokens:
+    description: |
+      JoinTokens contains the tokens workers and managers need to join the swarm.
+    type: "object"
+    properties:
+      Worker:
+        description: |
+          The token workers can use to join the swarm.
+        type: "string"
+        example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-1awxwuwd3z9j1z3puu7rcgdbx"
+      Manager:
+        description: |
+          The token managers can use to join the swarm.
+        type: "string"
+        example: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2"
+
+  Swarm:
+    type: "object"
+    allOf:
+      - $ref: "#/definitions/ClusterInfo"
+      - type: "object"
+        properties:
+          JoinTokens:
+            $ref: "#/definitions/JoinTokens"
+
+  TaskSpec:
+    description: "User modifiable task configuration."
+    type: "object"
+    properties:
+      PluginSpec:
+        type: "object"
+        description: |
+          Plugin spec for the service.  *(Experimental release only.)*
+
+          <p><br /></p>
+
+          > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are
+          > mutually exclusive. PluginSpec is only used when the Runtime field
+          > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime
+          > field is set to `attachment`.
+        properties:
+          Name:
+            description: "The name or 'alias' to use for the plugin."
+            type: "string"
+          Remote:
+            description: "The plugin image reference to use."
+            type: "string"
+          Disabled:
+            description: "Disable the plugin once scheduled."
+            type: "boolean"
+          PluginPrivilege:
+            type: "array"
+            items:
+              description: |
+                Describes a permission accepted by the user upon installing the
+                plugin.
+              type: "object"
+              properties:
+                Name:
+                  type: "string"
+                Description:
+                  type: "string"
+                Value:
+                  type: "array"
+                  items:
+                    type: "string"
+      ContainerSpec:
+        type: "object"
+        description: |
+          Container spec for the service.
+
+          <p><br /></p>
+
+          > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are
+          > mutually exclusive. PluginSpec is only used when the Runtime field
+          > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime
+          > field is set to `attachment`.
+        properties:
+          Image:
+            description: "The image name to use for the container"
+            type: "string"
+          Labels:
+            description: "User-defined key/value data."
+            type: "object"
+            additionalProperties:
+              type: "string"
+          Command:
+            description: "The command to be run in the image."
+            type: "array"
+            items:
+              type: "string"
+          Args:
+            description: "Arguments to the command."
+            type: "array"
+            items:
+              type: "string"
+          Hostname:
+            description: |
+              The hostname to use for the container, as a valid
+              [RFC 1123](https://tools.ietf.org/html/rfc1123) hostname.
+            type: "string"
+          Env:
+            description: |
+              A list of environment variables in the form `VAR=value`.
+            type: "array"
+            items:
+              type: "string"
+          Dir:
+            description: "The working directory for commands to run in."
+            type: "string"
+          User:
+            description: "The user inside the container."
+            type: "string"
+          Groups:
+            type: "array"
+            description: |
+              A list of additional groups that the container process will run as.
+            items:
+              type: "string"
+          Privileges:
+            type: "object"
+            description: "Security options for the container"
+            properties:
+              CredentialSpec:
+                type: "object"
+                description: "CredentialSpec for managed service account (Windows only)"
+                properties:
+                  Config:
+                    type: "string"
+                    example: "0bt9dmxjvjiqermk6xrop3ekq"
+                    description: |
+                      Load credential spec from a Swarm Config with the given ID.
+                      The specified config must also be present in the Configs
+                      field with the Runtime property set.
+
+                      <p><br /></p>
+
+                      > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`,
+                      > and `CredentialSpec.Config` are mutually exclusive.
+                  File:
+                    type: "string"
+                    example: "spec.json"
+                    description: |
+                      Load credential spec from this file. The file is read by
+                      the daemon, and must be present in the `CredentialSpecs`
+                      subdirectory in the docker data directory, which defaults
+                      to `C:\ProgramData\Docker\` on Windows.
+
+                      For example, specifying `spec.json` loads
+                      `C:\ProgramData\Docker\CredentialSpecs\spec.json`.
+
+                      <p><br /></p>
+
+                      > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`,
+                      > and `CredentialSpec.Config` are mutually exclusive.
+                  Registry:
+                    type: "string"
+                    description: |
+                      Load credential spec from this value in the Windows
+                      registry. The specified registry value must be located in:
+
+                      `HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\Containers\CredentialSpecs`
+
+                      <p><br /></p>
+
+                      > **Note**: `CredentialSpec.File`, `CredentialSpec.Registry`,
+                      > and `CredentialSpec.Config` are mutually exclusive.
+              SELinuxContext:
+                type: "object"
+                description: "SELinux labels of the container"
+                properties:
+                  Disable:
+                    type: "boolean"
+                    description: "Disable SELinux"
+                  User:
+                    type: "string"
+                    description: "SELinux user label"
+                  Role:
+                    type: "string"
+                    description: "SELinux role label"
+                  Type:
+                    type: "string"
+                    description: "SELinux type label"
+                  Level:
+                    type: "string"
+                    description: "SELinux level label"
+          TTY:
+            description: "Whether a pseudo-TTY should be allocated."
+            type: "boolean"
+          OpenStdin:
+            description: "Open `stdin`"
+            type: "boolean"
+          ReadOnly:
+            description: "Mount the container's root filesystem as read only."
+            type: "boolean"
+          Mounts:
+            description: |
+              Specification for mounts to be added to containers created as part
+              of the service.
+            type: "array"
+            items:
+              $ref: "#/definitions/Mount"
+          StopSignal:
+            description: "Signal to stop the container."
+            type: "string"
+          StopGracePeriod:
+            description: |
+              Amount of time to wait for the container to terminate before
+              forcefully killing it.
+            type: "integer"
+            format: "int64"
+          HealthCheck:
+            $ref: "#/definitions/HealthConfig"
+          Hosts:
+            type: "array"
+            description: |
+              A list of hostname/IP mappings to add to the container's `hosts`
+              file. The format of extra hosts is specified in the
+              [hosts(5)](http://man7.org/linux/man-pages/man5/hosts.5.html)
+              man page:
+
+                  IP_address canonical_hostname [aliases...]
+            items:
+              type: "string"
+          DNSConfig:
+            description: |
+              Specification for DNS related configurations in resolver configuration
+              file (`resolv.conf`).
+            type: "object"
+            properties:
+              Nameservers:
+                description: "The IP addresses of the name servers."
+                type: "array"
+                items:
+                  type: "string"
+              Search:
+                description: "A search list for host-name lookup."
+                type: "array"
+                items:
+                  type: "string"
+              Options:
+                description: |
+                  A list of internal resolver variables to be modified (e.g.,
+                  `debug`, `ndots:3`, etc.).
+                type: "array"
+                items:
+                  type: "string"
+          Secrets:
+            description: |
+              Secrets contains references to zero or more secrets that will be
+              exposed to the service.
+            type: "array"
+            items:
+              type: "object"
+              properties:
+                File:
+                  description: |
+                    File represents a specific target that is backed by a file.
+                  type: "object"
+                  properties:
+                    Name:
+                      description: |
+                        Name represents the final filename in the filesystem.
+                      type: "string"
+                    UID:
+                      description: "UID represents the file UID."
+                      type: "string"
+                    GID:
+                      description: "GID represents the file GID."
+                      type: "string"
+                    Mode:
+                      description: "Mode represents the FileMode of the file."
+                      type: "integer"
+                      format: "uint32"
+                SecretID:
+                  description: |
+                    SecretID represents the ID of the specific secret that we're
+                    referencing.
+                  type: "string"
+                SecretName:
+                  description: |
+                    SecretName is the name of the secret that this references,
+                    but this is just provided for lookup/display purposes. The
+                    secret in the reference will be identified by its ID.
+                  type: "string"
+          Configs:
+            description: |
+              Configs contains references to zero or more configs that will be
+              exposed to the service.
+            type: "array"
+            items:
+              type: "object"
+              properties:
+                File:
+                  description: |
+                    File represents a specific target that is backed by a file.
+
+                    <p><br /></p>
+
+                    > **Note**: `Configs.File` and `Configs.Runtime` are mutually exclusive
+                  type: "object"
+                  properties:
+                    Name:
+                      description: |
+                        Name represents the final filename in the filesystem.
+                      type: "string"
+                    UID:
+                      description: "UID represents the file UID."
+                      type: "string"
+                    GID:
+                      description: "GID represents the file GID."
+                      type: "string"
+                    Mode:
+                      description: "Mode represents the FileMode of the file."
+                      type: "integer"
+                      format: "uint32"
+                Runtime:
+                  description: |
+                    Runtime represents a target that is not mounted into the
+                    container but is used by the task
+
+                    <p><br /></p>
+
+                    > **Note**: `Configs.File` and `Configs.Runtime` are mutually
+                    > exclusive
+                  type: "object"
+                ConfigID:
+                  description: |
+                    ConfigID represents the ID of the specific config that we're
+                    referencing.
+                  type: "string"
+                ConfigName:
+                  description: |
+                    ConfigName is the name of the config that this references,
+                    but this is just provided for lookup/display purposes. The
+                    config in the reference will be identified by its ID.
+                  type: "string"
+          Isolation:
+            type: "string"
+            description: |
+              Isolation technology of the containers running the service.
+              (Windows only)
+            enum:
+              - "default"
+              - "process"
+              - "hyperv"
+          Init:
+            description: |
+              Run an init inside the container that forwards signals and reaps
+              processes. This field is omitted if empty, and the default (as
+              configured on the daemon) is used.
+            type: "boolean"
+            x-nullable: true
+          Sysctls:
+            description: |
+              Set kernel namespaced parameters (sysctls) in the container.
+              The Sysctls option on services accepts the same sysctls as are
+              supported on containers. Note that while the same sysctls are
+              supported, no guarantees or checks are made about their
+              suitability for a clustered environment, and it's up to the user
+              to determine whether a given sysctl will work properly in a
+              Service.
+            type: "object"
+            additionalProperties:
+              type: "string"
+          # This option is not used by Windows containers
+          CapabilityAdd:
+            type: "array"
+            description: |
+              A list of kernel capabilities to add to the default set
+              for the container.
+            items:
+              type: "string"
+            example:
+              - "CAP_NET_RAW"
+              - "CAP_SYS_ADMIN"
+              - "CAP_SYS_CHROOT"
+              - "CAP_SYSLOG"
+          CapabilityDrop:
+            type: "array"
+            description: |
+              A list of kernel capabilities to drop from the default set
+              for the container.
+            items:
+              type: "string"
+            example:
+              - "CAP_NET_RAW"
+          Ulimits:
+            description: |
+              A list of resource limits to set in the container. For example:
+              `{"Name": "nofile", "Soft": 1024, "Hard": 2048}`
+            type: "array"
+            items:
+              type: "object"
+              properties:
+                Name:
+                  description: "Name of ulimit"
+                  type: "string"
+                Soft:
+                  description: "Soft limit"
+                  type: "integer"
+                Hard:
+                  description: "Hard limit"
+                  type: "integer"
+      NetworkAttachmentSpec:
+        description: |
+          Read-only spec type for non-swarm containers attached to swarm overlay
+          networks.
+
+          <p><br /></p>
+
+          > **Note**: ContainerSpec, NetworkAttachmentSpec, and PluginSpec are
+          > mutually exclusive. PluginSpec is only used when the Runtime field
+          > is set to `plugin`. NetworkAttachmentSpec is used when the Runtime
+          > field is set to `attachment`.
+        type: "object"
+        properties:
+          ContainerID:
+            description: "ID of the container represented by this task"
+            type: "string"
+      Resources:
+        description: |
+          Resource requirements which apply to each individual container created
+          as part of the service.
+        type: "object"
+        properties:
+          Limits:
+            description: "Define resources limits."
+            $ref: "#/definitions/Limit"
+          Reservations:
+            description: "Define resources reservation."
+            $ref: "#/definitions/ResourceObject"
+      RestartPolicy:
+        description: |
+          Specification for the restart policy which applies to containers
+          created as part of this service.
+        type: "object"
+        properties:
+          Condition:
+            description: "Condition for restart."
+            type: "string"
+            enum:
+              - "none"
+              - "on-failure"
+              - "any"
+          Delay:
+            description: "Delay between restart attempts."
+            type: "integer"
+            format: "int64"
+          MaxAttempts:
+            description: |
+              Maximum attempts to restart a given container before giving up
+              (default value is 0, which is ignored).
+            type: "integer"
+            format: "int64"
+            default: 0
+          Window:
+            description: |
+              Window is the time window used to evaluate the restart policy
+              (default value is 0, which is unbounded).
+            type: "integer"
+            format: "int64"
+            default: 0
+      Placement:
+        type: "object"
+        properties:
+          Constraints:
+            description: |
+              An array of constraint expressions to limit the set of nodes where
+              a task can be scheduled. Constraint expressions can either use a
+              _match_ (`==`) or _exclude_ (`!=`) rule. Multiple constraints find
+              nodes that satisfy every expression (AND match). Constraints can
+              match node or Docker Engine labels as follows:
+
+              node attribute       | matches                        | example
+              ---------------------|--------------------------------|-----------------------------------------------
+              `node.id`            | Node ID                        | `node.id==2ivku8v2gvtg4`
+              `node.hostname`      | Node hostname                  | `node.hostname!=node-2`
+              `node.role`          | Node role (`manager`/`worker`) | `node.role==manager`
+              `node.platform.os`   | Node operating system          | `node.platform.os==windows`
+              `node.platform.arch` | Node architecture              | `node.platform.arch==x86_64`
+              `node.labels`        | User-defined node labels       | `node.labels.security==high`
+              `engine.labels`      | Docker Engine's labels         | `engine.labels.operatingsystem==ubuntu-14.04`
+
+              `engine.labels` apply to Docker Engine labels like operating system,
+              drivers, etc. Swarm administrators add `node.labels` for operational
+              purposes by using the [`node update endpoint`](#operation/NodeUpdate).
+
+            type: "array"
+            items:
+              type: "string"
+            example:
+              - "node.hostname!=node3.corp.example.com"
+              - "node.role!=manager"
+              - "node.labels.type==production"
+              - "node.platform.os==linux"
+              - "node.platform.arch==x86_64"
+          Preferences:
+            description: |
+              Preferences provide a way to make the scheduler aware of factors
+              such as topology. They are provided in order from highest to
+              lowest precedence.
+            type: "array"
+            items:
+              type: "object"
+              properties:
+                Spread:
+                  type: "object"
+                  properties:
+                    SpreadDescriptor:
+                      description: |
+                        label descriptor, such as `engine.labels.az`.
+                      type: "string"
+            example:
+              - Spread:
+                  SpreadDescriptor: "node.labels.datacenter"
+              - Spread:
+                  SpreadDescriptor: "node.labels.rack"
+          MaxReplicas:
+            description: |
+              Maximum number of replicas per node (default value is 0, which
+              is unlimited).
+            type: "integer"
+            format: "int64"
+            default: 0
+          Platforms:
+            description: |
+              Platforms stores all the platforms that the service's image can
+              run on. This field is used in the platform filter for scheduling.
+              If empty, then the platform filter is off, meaning there are no
+              scheduling restrictions.
+            type: "array"
+            items:
+              $ref: "#/definitions/Platform"
+      ForceUpdate:
+        description: |
+          A counter that triggers an update even if no relevant parameters have
+          been changed.
+        type: "integer"
+      Runtime:
+        description: |
+          Runtime is the type of runtime specified for the task executor.
+        type: "string"
+      Networks:
+        description: "Specifies which networks the service should attach to."
+        type: "array"
+        items:
+          $ref: "#/definitions/NetworkAttachmentConfig"
+      LogDriver:
+        description: |
+          Specifies the log driver to use for tasks created from this spec. If
+          not present, the swarm's default log driver is used, falling back to
+          the engine default if the swarm does not specify one.
+        type: "object"
+        properties:
+          Name:
+            type: "string"
+          Options:
+            type: "object"
+            additionalProperties:
+              type: "string"
+
+  TaskState:
+    type: "string"
+    enum:
+      - "new"
+      - "allocated"
+      - "pending"
+      - "assigned"
+      - "accepted"
+      - "preparing"
+      - "ready"
+      - "starting"
+      - "running"
+      - "complete"
+      - "shutdown"
+      - "failed"
+      - "rejected"
+      - "remove"
+      - "orphaned"
+
+  Task:
+    type: "object"
+    properties:
+      ID:
+        description: "The ID of the task."
+        type: "string"
+      Version:
+        $ref: "#/definitions/ObjectVersion"
+      CreatedAt:
+        type: "string"
+        format: "dateTime"
+      UpdatedAt:
+        type: "string"
+        format: "dateTime"
+      Name:
+        description: "Name of the task."
+        type: "string"
+      Labels:
+        description: "User-defined key/value metadata."
+        type: "object"
+        additionalProperties:
+          type: "string"
+      Spec:
+        $ref: "#/definitions/TaskSpec"
+      ServiceID:
+        description: "The ID of the service this task is part of."
+        type: "string"
+      Slot:
+        type: "integer"
+      NodeID:
+        description: "The ID of the node that this task is on."
+        type: "string"
+      AssignedGenericResources:
+        $ref: "#/definitions/GenericResources"
+      Status:
+        type: "object"
+        properties:
+          Timestamp:
+            type: "string"
+            format: "dateTime"
+          State:
+            $ref: "#/definitions/TaskState"
+          Message:
+            type: "string"
+          Err:
+            type: "string"
+          ContainerStatus:
+            type: "object"
+            properties:
+              ContainerID:
+                type: "string"
+              PID:
+                type: "integer"
+              ExitCode:
+                type: "integer"
+      DesiredState:
+        $ref: "#/definitions/TaskState"
+      JobIteration:
+        description: |
+          If the Service this Task belongs to is a job-mode service, contains
+          the JobIteration of the Service this Task was created for. Absent if
+          the Task was created for a Replicated or Global Service.
+        $ref: "#/definitions/ObjectVersion"
+    example:
+      ID: "0kzzo1i0y4jz6027t0k7aezc7"
+      Version:
+        Index: 71
+      CreatedAt: "2016-06-07T21:07:31.171892745Z"
+      UpdatedAt: "2016-06-07T21:07:31.376370513Z"
+      Spec:
+        ContainerSpec:
+          Image: "redis"
+        Resources:
+          Limits: {}
+          Reservations: {}
+        RestartPolicy:
+          Condition: "any"
+          MaxAttempts: 0
+        Placement: {}
+      ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz"
+      Slot: 1
+      NodeID: "60gvrl6tm78dmak4yl7srz94v"
+      Status:
+        Timestamp: "2016-06-07T21:07:31.290032978Z"
+        State: "running"
+        Message: "started"
+        ContainerStatus:
+          ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035"
+          PID: 677
+      DesiredState: "running"
+      NetworksAttachments:
+        - Network:
+            ID: "4qvuz4ko70xaltuqbt8956gd1"
+            Version:
+              Index: 18
+            CreatedAt: "2016-06-07T20:31:11.912919752Z"
+            UpdatedAt: "2016-06-07T21:07:29.955277358Z"
+            Spec:
+              Name: "ingress"
+              Labels:
+                com.docker.swarm.internal: "true"
+              DriverConfiguration: {}
+              IPAMOptions:
+                Driver: {}
+                Configs:
+                  - Subnet: "10.255.0.0/16"
+                    Gateway: "10.255.0.1"
+            DriverState:
+              Name: "overlay"
+              Options:
+                com.docker.network.driver.overlay.vxlanid_list: "256"
+            IPAMOptions:
+              Driver:
+                Name: "default"
+              Configs:
+                - Subnet: "10.255.0.0/16"
+                  Gateway: "10.255.0.1"
+          Addresses:
+            - "10.255.0.10/16"
+      AssignedGenericResources:
+        - DiscreteResourceSpec:
+            Kind: "SSD"
+            Value: 3
+        - NamedResourceSpec:
+            Kind: "GPU"
+            Value: "UUID1"
+        - NamedResourceSpec:
+            Kind: "GPU"
+            Value: "UUID2"
+
+  ServiceSpec:
+    description: "User modifiable configuration for a service."
+    properties:
+      Name:
+        description: "Name of the service."
+        type: "string"
+      Labels:
+        description: "User-defined key/value metadata."
+        type: "object"
+        additionalProperties:
+          type: "string"
+      TaskTemplate:
+        $ref: "#/definitions/TaskSpec"
+      Mode:
+        description: "Scheduling mode for the service."
+        type: "object"
+        properties:
+          Replicated:
+            type: "object"
+            properties:
+              Replicas:
+                type: "integer"
+                format: "int64"
+          Global:
+            type: "object"
+          ReplicatedJob:
+            description: |
+              The mode used for services with a finite number of tasks that run
+              to a completed state.
+            type: "object"
+            properties:
+              MaxConcurrent:
+                description: |
+                  The maximum number of replicas to run simultaneously.
+                type: "integer"
+                format: "int64"
+                default: 1
+              TotalCompletions:
+                description: |
+                  The total number of replicas desired to reach the Completed
+                  state. If unset, will default to the value of `MaxConcurrent`.
+                type: "integer"
+                format: "int64"
+          GlobalJob:
+            description: |
+              The mode used for services which run a task to the completed state
+              on each valid node.
+            type: "object"
+      UpdateConfig:
+        description: "Specification for the update strategy of the service."
+        type: "object"
+        properties:
+          Parallelism:
+            description: |
+              Maximum number of tasks to be updated in one iteration (0 means
+              unlimited parallelism).
+            type: "integer"
+            format: "int64"
+          Delay:
+            description: "Amount of time between updates, in nanoseconds."
+            type: "integer"
+            format: "int64"
+          FailureAction:
+            description: |
+              Action to take if an updated task fails to run, or stops running
+              during the update.
+            type: "string"
+            enum:
+              - "continue"
+              - "pause"
+              - "rollback"
+          Monitor:
+            description: |
+              Amount of time to monitor each updated task for failures, in
+              nanoseconds.
+            type: "integer"
+            format: "int64"
+          MaxFailureRatio:
+            description: |
+              The fraction of tasks that may fail during an update before the
+              failure action is invoked, specified as a floating point number
+              between 0 and 1.
+            type: "number"
+            default: 0
+          Order:
+            description: |
+              The order of operations when rolling out an updated task. Either
+              the old task is shut down before the new task is started, or the
+              new task is started before the old task is shut down.
+            type: "string"
+            enum:
+              - "stop-first"
+              - "start-first"
+      RollbackConfig:
+        description: "Specification for the rollback strategy of the service."
+        type: "object"
+        properties:
+          Parallelism:
+            description: |
+              Maximum number of tasks to be rolled back in one iteration (0 means
+              unlimited parallelism).
+            type: "integer"
+            format: "int64"
+          Delay:
+            description: |
+              Amount of time between rollback iterations, in nanoseconds.
+            type: "integer"
+            format: "int64"
+          FailureAction:
+            description: |
+              Action to take if a rolled-back task fails to run, or stops
+              running during the rollback.
+            type: "string"
+            enum:
+              - "continue"
+              - "pause"
+          Monitor:
+            description: |
+              Amount of time to monitor each rolled back task for failures, in
+              nanoseconds.
+            type: "integer"
+            format: "int64"
+          MaxFailureRatio:
+            description: |
+              The fraction of tasks that may fail during a rollback before the
+              failure action is invoked, specified as a floating point number
+              between 0 and 1.
+            type: "number"
+            default: 0
+          Order:
+            description: |
+              The order of operations when rolling back a task. Either the old
+              task is shut down before the new task is started, or the new task
+              is started before the old task is shut down.
+            type: "string"
+            enum:
+              - "stop-first"
+              - "start-first"
+      Networks:
+        description: "Specifies which networks the service should attach to."
+        type: "array"
+        items:
+          $ref: "#/definitions/NetworkAttachmentConfig"
+
+      EndpointSpec:
+        $ref: "#/definitions/EndpointSpec"
+
+  EndpointPortConfig:
+    type: "object"
+    properties:
+      Name:
+        type: "string"
+      Protocol:
+        type: "string"
+        enum:
+          - "tcp"
+          - "udp"
+          - "sctp"
+      TargetPort:
+        description: "The port inside the container."
+        type: "integer"
+      PublishedPort:
+        description: "The port on the swarm hosts."
+        type: "integer"
+      PublishMode:
+        description: |
+          The mode in which the port is published.
+
+          <p><br /></p>
+
+          - "ingress" makes the target port accessible on every node,
+            regardless of whether there is a task for the service running on
+            that node or not.
+          - "host" bypasses the routing mesh and publish the port directly on
+            the swarm node where that service is running.
+
+        type: "string"
+        enum:
+          - "ingress"
+          - "host"
+        default: "ingress"
+        example: "ingress"
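+        # Illustrative (not part of the schema): a port published in "host"
+        # mode, bypassing the routing mesh, could look like:
+        #
+        #   { "Protocol": "tcp", "TargetPort": 80, "PublishedPort": 8080,
+        #     "PublishMode": "host" }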
+
+  EndpointSpec:
+    description: "Properties that can be configured to access and load balance a service."
+    type: "object"
+    properties:
+      Mode:
+        description: |
+          The mode of resolution to use for internal load balancing between tasks.
+        type: "string"
+        enum:
+          - "vip"
+          - "dnsrr"
+        default: "vip"
+      Ports:
+        description: |
+          List of exposed ports that this service is accessible on from the
+          outside. Ports can only be provided if `vip` resolution mode is used.
+        type: "array"
+        items:
+          $ref: "#/definitions/EndpointPortConfig"
+
+  Service:
+    type: "object"
+    properties:
+      ID:
+        type: "string"
+      Version:
+        $ref: "#/definitions/ObjectVersion"
+      CreatedAt:
+        type: "string"
+        format: "dateTime"
+      UpdatedAt:
+        type: "string"
+        format: "dateTime"
+      Spec:
+        $ref: "#/definitions/ServiceSpec"
+      Endpoint:
+        type: "object"
+        properties:
+          Spec:
+            $ref: "#/definitions/EndpointSpec"
+          Ports:
+            type: "array"
+            items:
+              $ref: "#/definitions/EndpointPortConfig"
+          VirtualIPs:
+            type: "array"
+            items:
+              type: "object"
+              properties:
+                NetworkID:
+                  type: "string"
+                Addr:
+                  type: "string"
+      UpdateStatus:
+        description: "The status of a service update."
+        type: "object"
+        properties:
+          State:
+            type: "string"
+            enum:
+              - "updating"
+              - "paused"
+              - "completed"
+          StartedAt:
+            type: "string"
+            format: "dateTime"
+          CompletedAt:
+            type: "string"
+            format: "dateTime"
+          Message:
+            type: "string"
+      ServiceStatus:
+        description: |
+          The status of the service's tasks. Provided only when requested as
+          part of a ServiceList operation.
+        type: "object"
+        properties:
+          RunningTasks:
+            description: |
+              The number of tasks for the service currently in the Running state.
+            type: "integer"
+            format: "uint64"
+            example: 7
+          DesiredTasks:
+            description: |
+              The number of tasks for the service desired to be running.
+              For replicated services, this is the replica count from the
+              service spec. For global services, this is computed by taking the
+              count of all tasks for the service with a Desired State other
+              than Shutdown.
+            type: "integer"
+            format: "uint64"
+            example: 10
+          CompletedTasks:
+            description: |
+              The number of tasks for a job that are in the Completed state.
+              This field must be cross-referenced with the service type, as a
+              value of 0 may mean either that the service is not in a job mode,
+              or that the job-mode service has not yet completed any tasks.
+            type: "integer"
+            format: "uint64"
+      JobStatus:
+        description: |
+          The status of the service when it is in one of ReplicatedJob or
+          GlobalJob modes. Absent on Replicated and Global mode services. The
+          JobIteration is an ObjectVersion, but unlike the Service's version,
+          does not need to be sent with an update request.
+        type: "object"
+        properties:
+          JobIteration:
+            description: |
+              JobIteration is a value increased each time a Job is executed,
+              successfully or otherwise. "Executed", in this case, means the
+              job as a whole has been started, not that an individual Task has
+              been launched. A job is "Executed" when its ServiceSpec is
+              updated. JobIteration can be used to disambiguate Tasks belonging
+              to different executions of a job. Though JobIteration will
+              increase with each subsequent execution, it may not necessarily
+              increase by 1, and so JobIteration should not be used to count
+              the number of times a job has executed.
+            $ref: "#/definitions/ObjectVersion"
+          LastExecution:
+            description: |
+              The last time, as observed by the server, that this job was
+              started.
+            type: "string"
+            format: "dateTime"
+    example:
+      ID: "9mnpnzenvg8p8tdbtq4wvbkcz"
+      Version:
+        Index: 19
+      CreatedAt: "2016-06-07T21:05:51.880065305Z"
+      UpdatedAt: "2016-06-07T21:07:29.962229872Z"
+      Spec:
+        Name: "hopeful_cori"
+        TaskTemplate:
+          ContainerSpec:
+            Image: "redis"
+          Resources:
+            Limits: {}
+            Reservations: {}
+          RestartPolicy:
+            Condition: "any"
+            MaxAttempts: 0
+          Placement: {}
+          ForceUpdate: 0
+        Mode:
+          Replicated:
+            Replicas: 1
+        UpdateConfig:
+          Parallelism: 1
+          Delay: 1000000000
+          FailureAction: "pause"
+          Monitor: 15000000000
+          MaxFailureRatio: 0.15
+        RollbackConfig:
+          Parallelism: 1
+          Delay: 1000000000
+          FailureAction: "pause"
+          Monitor: 15000000000
+          MaxFailureRatio: 0.15
+        EndpointSpec:
+          Mode: "vip"
+          Ports:
+            -
+              Protocol: "tcp"
+              TargetPort: 6379
+              PublishedPort: 30001
+      Endpoint:
+        Spec:
+          Mode: "vip"
+          Ports:
+            -
+              Protocol: "tcp"
+              TargetPort: 6379
+              PublishedPort: 30001
+        Ports:
+          -
+            Protocol: "tcp"
+            TargetPort: 6379
+            PublishedPort: 30001
+        VirtualIPs:
+          -
+            NetworkID: "4qvuz4ko70xaltuqbt8956gd1"
+            Addr: "10.255.0.2/16"
+          -
+            NetworkID: "4qvuz4ko70xaltuqbt8956gd1"
+            Addr: "10.255.0.3/16"
+
+  ImageDeleteResponseItem:
+    type: "object"
+    properties:
+      Untagged:
+        description: "The image ID of an image that was untagged"
+        type: "string"
+      Deleted:
+        description: "The image ID of an image that was deleted"
+        type: "string"
+
+  ServiceUpdateResponse:
+    type: "object"
+    properties:
+      Warnings:
+        description: "Optional warning messages"
+        type: "array"
+        items:
+          type: "string"
+    example:
+      Warnings:
+        - "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found"
+
+  ContainerSummary:
+    type: "array"
+    items:
+      type: "object"
+      properties:
+        Id:
+          description: "The ID of this container"
+          type: "string"
+          x-go-name: "ID"
+        Names:
+          description: "The names that this container has been given"
+          type: "array"
+          items:
+            type: "string"
+        Image:
+          description: "The name of the image used when creating this container"
+          type: "string"
+        ImageID:
+          description: "The ID of the image that this container was created from"
+          type: "string"
+        Command:
+          description: "Command to run when starting the container"
+          type: "string"
+        Created:
+          description: "When the container was created"
+          type: "integer"
+          format: "int64"
+        Ports:
+          description: "The ports exposed by this container"
+          type: "array"
+          items:
+            $ref: "#/definitions/Port"
+        SizeRw:
+          description: "The size of files that have been created or changed by this container"
+          type: "integer"
+          format: "int64"
+        SizeRootFs:
+          description: "The total size of all the files in this container"
+          type: "integer"
+          format: "int64"
+        Labels:
+          description: "User-defined key/value metadata."
+          type: "object"
+          additionalProperties:
+            type: "string"
+        State:
+          description: "The state of this container (e.g. `Exited`)"
+          type: "string"
+        Status:
+          description: "Additional human-readable status of this container (e.g. `Exit 0`)"
+          type: "string"
+        HostConfig:
+          type: "object"
+          properties:
+            NetworkMode:
+              type: "string"
+        NetworkSettings:
+          description: "A summary of the container's network settings"
+          type: "object"
+          properties:
+            Networks:
+              type: "object"
+              additionalProperties:
+                $ref: "#/definitions/EndpointSettings"
+        Mounts:
+          type: "array"
+          items:
+            $ref: "#/definitions/Mount"
+
+  Driver:
+    description: "Driver represents a driver (network, logging, secrets)."
+    type: "object"
+    required: [Name]
+    properties:
+      Name:
+        description: "Name of the driver."
+        type: "string"
+        x-nullable: false
+        example: "some-driver"
+      Options:
+        description: "Key/value map of driver-specific options."
+        type: "object"
+        x-nullable: false
+        additionalProperties:
+          type: "string"
+        example:
+          OptionA: "value for driver-specific option A"
+          OptionB: "value for driver-specific option B"
+
+  SecretSpec:
+    type: "object"
+    properties:
+      Name:
+        description: "User-defined name of the secret."
+        type: "string"
+      Labels:
+        description: "User-defined key/value metadata."
+        type: "object"
+        additionalProperties:
+          type: "string"
+        example:
+          com.example.some-label: "some-value"
+          com.example.some-other-label: "some-other-value"
+      Data:
+        description: |
+          Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5))
+          data to store as secret.
+
+          This field is only used to _create_ a secret, and is not returned by
+          other endpoints.
+        type: "string"
+        example: ""
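+        # For example, a secret with the literal payload "hello" would be
+        # created with Data: "aGVsbG8=" (base64url-encoded).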
+      Driver:
+        description: |
+          Name of the secrets driver used to fetch the secret's value from an
+          external secret store.
+        $ref: "#/definitions/Driver"
+      Templating:
+        description: |
+          Templating driver, if applicable
+
+          Templating controls whether and how to evaluate the secret payload as
+          a template. If no driver is set, no templating is used.
+        $ref: "#/definitions/Driver"
+
+  Secret:
+    type: "object"
+    properties:
+      ID:
+        type: "string"
+        example: "blt1owaxmitz71s9v5zh81zun"
+      Version:
+        $ref: "#/definitions/ObjectVersion"
+      CreatedAt:
+        type: "string"
+        format: "dateTime"
+        example: "2017-07-20T13:55:28.678958722Z"
+      UpdatedAt:
+        type: "string"
+        format: "dateTime"
+        example: "2017-07-20T13:55:28.678958722Z"
+      Spec:
+        $ref: "#/definitions/SecretSpec"
+
+  ConfigSpec:
+    type: "object"
+    properties:
+      Name:
+        description: "User-defined name of the config."
+        type: "string"
+      Labels:
+        description: "User-defined key/value metadata."
+        type: "object"
+        additionalProperties:
+          type: "string"
+      Data:
+        description: |
+          Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5))
+          config data.
+        type: "string"
+      Templating:
+        description: |
+          Templating driver, if applicable
+
+          Templating controls whether and how to evaluate the config payload as
+          a template. If no driver is set, no templating is used.
+        $ref: "#/definitions/Driver"
+
+  Config:
+    type: "object"
+    properties:
+      ID:
+        type: "string"
+      Version:
+        $ref: "#/definitions/ObjectVersion"
+      CreatedAt:
+        type: "string"
+        format: "dateTime"
+      UpdatedAt:
+        type: "string"
+        format: "dateTime"
+      Spec:
+        $ref: "#/definitions/ConfigSpec"
+
+  ContainerState:
+    description: |
+      ContainerState stores the container's running state. It is part of
+      ContainerJSONBase and is returned by the "inspect" command.
+    type: "object"
+    properties:
+      Status:
+        description: |
+          String representation of the container state. Can be one of "created",
+          "running", "paused", "restarting", "removing", "exited", or "dead".
+        type: "string"
+        enum: ["created", "running", "paused", "restarting", "removing", "exited", "dead"]
+        example: "running"
+      Running:
+        description: |
+          Whether this container is running.
+
+          Note that a running container can be _paused_. The `Running` and `Paused`
+          booleans are not mutually exclusive:
+
+          When pausing a container (on Linux), the freezer cgroup is used to suspend
+          all processes in the container. Freezing the process requires the process to
+          be running. As a result, paused containers are both `Running` _and_ `Paused`.
+
+          Use the `Status` field instead to determine if a container's state is "running".
+        type: "boolean"
+        example: true
+      Paused:
+        description: "Whether this container is paused."
+        type: "boolean"
+        example: false
+      Restarting:
+        description: "Whether this container is restarting."
+        type: "boolean"
+        example: false
+      OOMKilled:
+        description: |
+          Whether this container has been killed because it ran out of memory.
+        type: "boolean"
+        example: false
+      Dead:
+        type: "boolean"
+        example: false
+      Pid:
+        description: "The process ID of this container"
+        type: "integer"
+        example: 1234
+      ExitCode:
+        description: "The last exit code of this container"
+        type: "integer"
+        example: 0
+      Error:
+        type: "string"
+      StartedAt:
+        description: "The time when this container was last started."
+        type: "string"
+        example: "2020-01-06T09:06:59.461876391Z"
+      FinishedAt:
+        description: "The time when this container last exited."
+        type: "string"
+        example: "2020-01-06T09:07:59.461876391Z"
+      Health:
+        x-nullable: true
+        $ref: "#/definitions/Health"
+
+  SystemVersion:
+    type: "object"
+    description: |
+      Response of Engine API: GET "/version"
+    properties:
+      Platform:
+        type: "object"
+        required: [Name]
+        properties:
+          Name:
+            type: "string"
+      Components:
+        type: "array"
+        description: |
+          Information about system components
+        items:
+          type: "object"
+          x-go-name: ComponentVersion
+          required: [Name, Version]
+          properties:
+            Name:
+              description: |
+                Name of the component
+              type: "string"
+              example: "Engine"
+            Version:
+              description: |
+                Version of the component
+              type: "string"
+              x-nullable: false
+              example: "19.03.12"
+            Details:
+              description: |
+                Key/value pairs of strings with additional information about the
+                component. These values are intended for informational purposes
+                only; their content is not defined, and is not part of the API
+                specification.
+
+                These messages can be printed by the client as information to the user.
+              type: "object"
+              x-nullable: true
+      Version:
+        description: "The version of the daemon"
+        type: "string"
+        example: "19.03.12"
+      ApiVersion:
+        description: |
+          The default (and highest) API version that is supported by the daemon
+        type: "string"
+        example: "1.40"
+      MinAPIVersion:
+        description: |
+          The minimum API version that is supported by the daemon
+        type: "string"
+        example: "1.12"
+      GitCommit:
+        description: |
+          The Git commit of the source code that was used to build the daemon
+        type: "string"
+        example: "48a66213fe"
+      GoVersion:
+        description: |
+          The version of Go used to compile the daemon, and the version of
+          the Go runtime in use.
+        type: "string"
+        example: "go1.13.14"
+      Os:
+        description: |
+          The operating system that the daemon is running on ("linux" or "windows")
+        type: "string"
+        example: "linux"
+      Arch:
+        description: |
+          The architecture that the daemon is running on
+        type: "string"
+        example: "amd64"
+      KernelVersion:
+        description: |
+          The kernel version (`uname -r`) that the daemon is running on.
+
+          This field is omitted when empty.
+        type: "string"
+        example: "4.19.76-linuxkit"
+      Experimental:
+        description: |
+          Indicates if the daemon is started with experimental features enabled.
+
+          This field is omitted when empty / false.
+        type: "boolean"
+        example: true
+      BuildTime:
+        description: |
+          The date and time that the daemon was compiled.
+        type: "string"
+        example: "2020-06-22T15:49:27.000000000+00:00"
+
+
+  SystemInfo:
+    type: "object"
+    properties:
+      ID:
+        description: |
+          Unique identifier of the daemon.
+
+          <p><br /></p>
+
+          > **Note**: The format of the ID itself is not part of the API, and
+          > should not be considered stable.
+        type: "string"
+        example: "7TRN:IPZB:QYBB:VPBQ:UMPP:KARE:6ZNR:XE6T:7EWV:PKF4:ZOJD:TPYS"
+      Containers:
+        description: "Total number of containers on the host."
+        type: "integer"
+        example: 14
+      ContainersRunning:
+        description: |
+          Number of containers with status `"running"`.
+        type: "integer"
+        example: 3
+      ContainersPaused:
+        description: |
+          Number of containers with status `"paused"`.
+        type: "integer"
+        example: 1
+      ContainersStopped:
+        description: |
+          Number of containers with status `"stopped"`.
+        type: "integer"
+        example: 10
+      Images:
+        description: |
+          Total number of images on the host.
+
+          Both _tagged_ and _untagged_ (dangling) images are counted.
+        type: "integer"
+        example: 508
+      Driver:
+        description: "Name of the storage driver in use."
+        type: "string"
+        example: "overlay2"
+      DriverStatus:
+        description: |
+          Information specific to the storage driver, provided as
+          "label" / "value" pairs.
+
+          This information is provided by the storage driver, and formatted
+          in a way consistent with the output of `docker info` on the command
+          line.
+
+          <p><br /></p>
+
+          > **Note**: The information returned in this field, including the
+          > formatting of values and labels, should not be considered stable,
+          > and may change without notice.
+        type: "array"
+        items:
+          type: "array"
+          items:
+            type: "string"
+        example:
+          - ["Backing Filesystem", "extfs"]
+          - ["Supports d_type", "true"]
+          - ["Native Overlay Diff", "true"]
+      DockerRootDir:
+        description: |
+          Root directory of persistent Docker state.
+
+          Defaults to `/var/lib/docker` on Linux, and `C:\ProgramData\docker`
+          on Windows.
+        type: "string"
+        example: "/var/lib/docker"
+      Plugins:
+        $ref: "#/definitions/PluginsInfo"
+      MemoryLimit:
+        description: "Indicates if the host has memory limit support enabled."
+        type: "boolean"
+        example: true
+      SwapLimit:
+        description: "Indicates if the host has memory swap limit support enabled."
+        type: "boolean"
+        example: true
+      KernelMemory:
+        description: |
+          Indicates if the host has kernel memory limit support enabled.
+
+          <p><br /></p>
+
+          > **Deprecated**: This field is deprecated as the kernel 5.4 deprecated
+          > `kmem.limit_in_bytes`.
+        type: "boolean"
+        example: true
+      CpuCfsPeriod:
+        description: |
+          Indicates if CPU CFS (Completely Fair Scheduler) period is supported
+          by the host.
+        type: "boolean"
+        example: true
+      CpuCfsQuota:
+        description: |
+          Indicates if CPU CFS (Completely Fair Scheduler) quota is supported
+          by the host.
+        type: "boolean"
+        example: true
+      CPUShares:
+        description: |
+          Indicates if CPU Shares limiting is supported by the host.
+        type: "boolean"
+        example: true
+      CPUSet:
+        description: |
+          Indicates if CPUsets (cpuset.cpus, cpuset.mems) are supported by the host.
+
+          See [cpuset(7)](https://www.kernel.org/doc/Documentation/cgroup-v1/cpusets.txt)
+        type: "boolean"
+        example: true
+      PidsLimit:
+        description: "Indicates if the host kernel has PID limit support enabled."
+        type: "boolean"
+        example: true
+      OomKillDisable:
+        description: "Indicates if OOM killer disable is supported on the host."
+        type: "boolean"
+      IPv4Forwarding:
+        description: "Indicates IPv4 forwarding is enabled."
+        type: "boolean"
+        example: true
+      BridgeNfIptables:
+        description: "Indicates if `bridge-nf-call-iptables` is available on the host."
+        type: "boolean"
+        example: true
+      BridgeNfIp6tables:
+        description: "Indicates if `bridge-nf-call-ip6tables` is available on the host."
+        type: "boolean"
+        example: true
+      Debug:
+        description: |
+          Indicates if the daemon is running in debug-mode / with debug-level
+          logging enabled.
+        type: "boolean"
+        example: true
+      NFd:
+        description: |
+          The total number of file descriptors in use by the daemon process.
+
+          This information is only returned if debug-mode is enabled.
+        type: "integer"
+        example: 64
+      NGoroutines:
+        description: |
+          The number of goroutines that currently exist.
+
+          This information is only returned if debug-mode is enabled.
+        type: "integer"
+        example: 174
+      SystemTime:
+        description: |
+          Current system-time in [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt)
+          format with nano-seconds.
+        type: "string"
+        example: "2017-08-08T20:28:29.06202363Z"
+      LoggingDriver:
+        description: |
+          The logging driver to use as a default for new containers.
+        type: "string"
+      CgroupDriver:
+        description: |
+          The driver to use for managing cgroups.
+        type: "string"
+        enum: ["cgroupfs", "systemd", "none"]
+        default: "cgroupfs"
+        example: "cgroupfs"
+      CgroupVersion:
+        description: |
+          The version of the cgroup.
+        type: "string"
+        enum: ["1", "2"]
+        default: "1"
+        example: "1"
+      NEventsListener:
+        description: "Number of event listeners subscribed."
+        type: "integer"
+        example: 30
+      KernelVersion:
+        description: |
+          Kernel version of the host.
+
+          On Linux, this information is obtained from `uname`. On Windows, this
+          information is queried from the <kbd>HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\</kbd>
+          registry value, for example _"10.0 14393 (14393.1198.amd64fre.rs1_release_sec.170427-1353)"_.
+        type: "string"
+        example: "4.9.38-moby"
+      OperatingSystem:
+        description: |
+          Name of the host's operating system, for example: "Ubuntu 16.04.2 LTS"
+          or "Windows Server 2016 Datacenter"
+        type: "string"
+        example: "Alpine Linux v3.5"
+      OSVersion:
+        description: |
+          Version of the host's operating system
+
+          <p><br /></p>
+
+          > **Note**: The information returned in this field, including its
+          > very existence, and the formatting of values, should not be considered
+          > stable, and may change without notice.
+        type: "string"
+        example: "16.04"
+      OSType:
+        description: |
+          Generic type of the operating system of the host, as returned by the
+          Go runtime (`GOOS`).
+
+          Currently returned values are "linux" and "windows". A full list of
+          possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment).
+        type: "string"
+        example: "linux"
+      Architecture:
+        description: |
+          Hardware architecture of the host, as returned by the Go runtime
+          (`GOARCH`).
+
+          A full list of possible values can be found in the [Go documentation](https://golang.org/doc/install/source#environment).
+        type: "string"
+        example: "x86_64"
+      NCPU:
+        description: |
+          The number of logical CPUs usable by the daemon.
+
+          The number of available CPUs is checked by querying the operating
+          system when the daemon starts. Changes to operating system CPU
+          allocation after the daemon is started are not reflected.
+        type: "integer"
+        example: 4
+      MemTotal:
+        description: |
+          Total amount of physical memory available on the host, in bytes.
+        type: "integer"
+        format: "int64"
+        example: 2095882240
+
+      IndexServerAddress:
+        description: |
+          Address / URL of the index server that is used for image search,
+          and as a default for user authentication for Docker Hub and Docker Cloud.
+        default: "https://index.docker.io/v1/"
+        type: "string"
+        example: "https://index.docker.io/v1/"
+      RegistryConfig:
+        $ref: "#/definitions/RegistryServiceConfig"
+      GenericResources:
+        $ref: "#/definitions/GenericResources"
+      HttpProxy:
+        description: |
+          HTTP-proxy configured for the daemon. This value is obtained from the
+          [`HTTP_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable.
+          Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL
+          are masked in the API response.
+
+          Containers do not automatically inherit this configuration.
+        type: "string"
+        example: "http://xxxxx:xxxxx@proxy.corp.example.com:8080"
+      HttpsProxy:
+        description: |
+          HTTPS-proxy configured for the daemon. This value is obtained from the
+          [`HTTPS_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html) environment variable.
+          Credentials ([user info component](https://tools.ietf.org/html/rfc3986#section-3.2.1)) in the proxy URL
+          are masked in the API response.
+
+          Containers do not automatically inherit this configuration.
+        type: "string"
+        example: "https://xxxxx:xxxxx@proxy.corp.example.com:4443"
+      NoProxy:
+        description: |
+          Comma-separated list of domain extensions for which no proxy should be
+          used. This value is obtained from the [`NO_PROXY`](https://www.gnu.org/software/wget/manual/html_node/Proxies.html)
+          environment variable.
+
+          Containers do not automatically inherit this configuration.
+        type: "string"
+        example: "*.local, 169.254/16"
+      Name:
+        description: "Hostname of the host."
+        type: "string"
+        example: "node5.corp.example.com"
+      Labels:
+        description: |
+          User-defined labels (key/value metadata) as set on the daemon.
+
+          <p><br /></p>
+
+          > **Note**: When part of a Swarm, nodes can both have _daemon_ labels,
+          > set through the daemon configuration, and _node_ labels, set from a
+          > manager node in the Swarm. Node labels are not included in this
+          > field. Node labels can be retrieved using the `/nodes/(id)` endpoint
+          > on a manager node in the Swarm.
+        type: "array"
+        items:
+          type: "string"
+        example: ["storage=ssd", "production"]
+      ExperimentalBuild:
+        description: |
+          Indicates if experimental features are enabled on the daemon.
+        type: "boolean"
+        example: true
+      ServerVersion:
+        description: |
+          Version string of the daemon.
+
+          > **Note**: the [standalone Swarm API](https://docs.docker.com/swarm/swarm-api/)
+          > returns the Swarm version instead of the daemon version, for example
+          > `swarm/1.2.8`.
+        type: "string"
+        example: "17.06.0-ce"
+      ClusterStore:
+        description: |
+          URL of the distributed storage backend.
+
+          The storage backend is used for multihost networking (to store
+          network and endpoint information) and by the node discovery mechanism.
+
+          <p><br /></p>
+
+          > **Deprecated**: This field is only propagated when using standalone Swarm
+          > mode, and overlay networking using an external k/v store. Overlay
+          > networks with Swarm mode enabled use the built-in raft store, and
+          > this field will be empty.
+        type: "string"
+        example: "consul://consul.corp.example.com:8600/some/path"
+      ClusterAdvertise:
+        description: |
+          The network endpoint that the Engine advertises for the purpose of
+          node discovery. ClusterAdvertise is a `host:port` combination on which
+          the daemon is reachable by other hosts.
+
+          <p><br /></p>
+
+          > **Deprecated**: This field is only propagated when using standalone Swarm
+          > mode, and overlay networking using an external k/v store. Overlay
+          > networks with Swarm mode enabled use the built-in raft store, and
+          > this field will be empty.
+        type: "string"
+        example: "node5.corp.example.com:8000"
+      Runtimes:
+        description: |
+          List of [OCI compliant](https://github.com/opencontainers/runtime-spec)
+          runtimes configured on the daemon. Keys hold the "name" used to
+          reference the runtime.
+
+          The Docker daemon relies on an OCI compliant runtime (invoked via the
+          `containerd` daemon) as its interface to the Linux kernel namespaces,
+          cgroups, and SELinux.
+
+          The default runtime is `runc`, which is automatically configured.
+          Additional runtimes can be configured by the user and will be listed
+          here.
+        type: "object"
+        additionalProperties:
+          $ref: "#/definitions/Runtime"
+        default:
+          runc:
+            path: "runc"
+        example:
+          runc:
+            path: "runc"
+          runc-master:
+            path: "/go/bin/runc"
+          custom:
+            path: "/usr/local/bin/my-oci-runtime"
+            runtimeArgs: ["--debug", "--systemd-cgroup=false"]
+      DefaultRuntime:
+        description: |
+          Name of the default OCI runtime that is used when starting containers.
+
+          The default can be overridden per-container at create time.
+        type: "string"
+        default: "runc"
+        example: "runc"
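+      # A per-container override (a sketch): set `HostConfig.Runtime` in the
+      # body of `POST /containers/create`, e.g. `"Runtime": "runc-master"`.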
+      Swarm:
+        $ref: "#/definitions/SwarmInfo"
+      LiveRestoreEnabled:
+        description: |
+          Indicates if live restore is enabled.
+
+          If enabled, containers are kept running when the daemon is shut down
+          or upon daemon start if running containers are detected.
+        type: "boolean"
+        default: false
+        example: false
+      Isolation:
+        description: |
+          Represents the isolation technology to use as a default for containers.
+          The supported values are platform-specific.
+
+          If no isolation value is specified on daemon start, on Windows client
+          the default is `hyperv`, and on Windows server the default is
+          `process`.
+
+          This option is currently not used on other platforms.
+        default: "default"
+        type: "string"
+        enum:
+          - "default"
+          - "hyperv"
+          - "process"
+      InitBinary:
+        description: |
+          Name and, optionally, path of the `docker-init` binary.
+
+          If the path is omitted, the daemon searches the host's `$PATH` for the
+          binary and uses the first result.
+        type: "string"
+        example: "docker-init"
+      ContainerdCommit:
+        $ref: "#/definitions/Commit"
+      RuncCommit:
+        $ref: "#/definitions/Commit"
+      InitCommit:
+        $ref: "#/definitions/Commit"
+      SecurityOptions:
+        description: |
+          List of security features that are enabled on the daemon, such as
+          apparmor, seccomp, SELinux, user-namespaces (userns), and rootless.
+
+          Additional configuration options for each security feature may
+          be present, and are included as a comma-separated list of key/value
+          pairs.
+        type: "array"
+        items:
+          type: "string"
+        example:
+          - "name=apparmor"
+          - "name=seccomp,profile=default"
+          - "name=selinux"
+          - "name=userns"
+          - "name=rootless"
+      ProductLicense:
+        description: |
+          Reports a summary of the product license on the daemon.
+
+          If a commercial license has been applied to the daemon, information
+          such as number of nodes, and expiration are included.
+        type: "string"
+        example: "Community Engine"
+      DefaultAddressPools:
+        description: |
+          List of custom default address pools for local networks, which can be
+          specified in the daemon.json file or dockerd option.
+
+          Example: a Base "10.10.0.0/16" with Size 24 will define the set of 256
+          10.10.[0-255].0/24 address pools.
+        type: "array"
+        items:
+          type: "object"
+          properties:
+            Base:
+              description: "The network address in CIDR format"
+              type: "string"
+              example: "10.10.0.0/16"
+            Size:
+              description: "The network pool size"
+              type: "integer"
+              example: 24
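+      # The example above corresponds to a daemon.json entry along these lines
+      # (a sketch; equivalent to dockerd's `--default-address-pool` flag):
+      #   { "default-address-pools": [ { "base": "10.10.0.0/16", "size": 24 } ] }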
+      Warnings:
+        description: |
+          List of warnings / informational messages about missing features, or
+          issues related to the daemon configuration.
+
+          These messages can be printed by the client as information to the user.
+        type: "array"
+        items:
+          type: "string"
+        example:
+          - "WARNING: No memory limit support"
+          - "WARNING: bridge-nf-call-iptables is disabled"
+          - "WARNING: bridge-nf-call-ip6tables is disabled"
+
+
+  # PluginsInfo is a temp struct holding Plugins name
+  # registered with docker daemon. It is used by Info struct
+  PluginsInfo:
+    description: |
+      Available plugins per type.
+
+      <p><br /></p>
+
+      > **Note**: Only unmanaged (V1) plugins are included in this list.
+      > V1 plugins are "lazily" loaded, and are not returned in this list
+      > if there is no resource using the plugin.
+    type: "object"
+    properties:
+      Volume:
+        description: "Names of available volume-drivers, and volume-driver plugins."
+        type: "array"
+        items:
+          type: "string"
+        example: ["local"]
+      Network:
+        description: "Names of available network-drivers, and network-driver plugins."
+        type: "array"
+        items:
+          type: "string"
+        example: ["bridge", "host", "ipvlan", "macvlan", "null", "overlay"]
+      Authorization:
+        description: "Names of available authorization plugins."
+        type: "array"
+        items:
+          type: "string"
+        example: ["img-authz-plugin", "hbm"]
+      Log:
+        description: "Names of available logging-drivers, and logging-driver plugins."
+        type: "array"
+        items:
+          type: "string"
+        example: ["awslogs", "fluentd", "gcplogs", "gelf", "journald", "json-file", "logentries", "splunk", "syslog"]
+
+
+  RegistryServiceConfig:
+    description: |
+      RegistryServiceConfig stores daemon registry services configuration.
+    type: "object"
+    x-nullable: true
+    properties:
+      AllowNondistributableArtifactsCIDRs:
+        description: |
+          List of IP ranges to which nondistributable artifacts can be pushed,
+          using the CIDR syntax ([RFC 4632](https://tools.ietf.org/html/rfc4632)).
+
+          Some images (for example, Windows base images) contain artifacts
+          whose distribution is restricted by license. When these images are
+          pushed to a registry, restricted artifacts are not included.
+
+          This configuration overrides this behavior, and enables the daemon to
+          push nondistributable artifacts to all registries whose resolved IP
+          address is within the subnet described by the CIDR syntax.
+
+          This option is useful when pushing images containing
+          nondistributable artifacts to a registry on an air-gapped network so
+          hosts on that network can pull the images without connecting to
+          another server.
+
+          > **Warning**: Nondistributable artifacts typically have restrictions
+          > on how and where they can be distributed and shared. Only use this
+          > feature to push artifacts to private registries and ensure that you
+          > are in compliance with any terms that cover redistributing
+          > nondistributable artifacts.
+
+        type: "array"
+        items:
+          type: "string"
+        example: ["::1/128", "127.0.0.0/8"]
+      AllowNondistributableArtifactsHostnames:
+        description: |
+          List of registry hostnames to which nondistributable artifacts can be
+          pushed, using the format `<hostname>[:<port>]` or `<IP address>[:<port>]`.
+
+          Some images (for example, Windows base images) contain artifacts
+          whose distribution is restricted by license. When these images are
+          pushed to a registry, restricted artifacts are not included.
+
+          This configuration overrides this behavior for the specified
+          registries.
+
+          This option is useful when pushing images containing
+          nondistributable artifacts to a registry on an air-gapped network so
+          hosts on that network can pull the images without connecting to
+          another server.
+
+          > **Warning**: Nondistributable artifacts typically have restrictions
+          > on how and where they can be distributed and shared. Only use this
+          > feature to push artifacts to private registries and ensure that you
+          > are in compliance with any terms that cover redistributing
+          > nondistributable artifacts.
+        type: "array"
+        items:
+          type: "string"
+        example: ["registry.internal.corp.example.com:3000", "[2001:db8:a0b:12f0::1]:443"]
+      InsecureRegistryCIDRs:
+        description: |
+          List of IP ranges of insecure registries, using the CIDR syntax
+          ([RFC 4632](https://tools.ietf.org/html/rfc4632)). Insecure registries
+          accept un-encrypted (HTTP) and/or untrusted (HTTPS with certificates
+          from unknown CAs) communication.
+
+          By default, local registries (`127.0.0.0/8`) are configured as
+          insecure. All other registries are secure. Communicating with an
+          insecure registry is not possible if the daemon assumes that registry
+          is secure.
+
+          This configuration overrides this behavior, and enables insecure
+          communication with registries whose resolved IP address is within
+          the subnet described by the CIDR syntax.
+
+          Registries can also be marked insecure by hostname. Those registries
+          are listed under `IndexConfigs` and have their `Secure` field set to
+          `false`.
+
+          > **Warning**: Using this option can be useful when running a local
+          > registry, but introduces security vulnerabilities. This option
+          > should therefore ONLY be used for testing purposes. For increased
+          > security, users should add their CA to their system's list of trusted
+          > CAs instead of enabling this option.
+        type: "array"
+        items:
+          type: "string"
+        example: ["::1/128", "127.0.0.0/8"]
+      IndexConfigs:
+        type: "object"
+        additionalProperties:
+          $ref: "#/definitions/IndexInfo"
+        example:
+          "127.0.0.1:5000":
+            "Name": "127.0.0.1:5000"
+            "Mirrors": []
+            "Secure": false
+            "Official": false
+          "[2001:db8:a0b:12f0::1]:80":
+            "Name": "[2001:db8:a0b:12f0::1]:80"
+            "Mirrors": []
+            "Secure": false
+            "Official": false
+          "docker.io":
+            Name: "docker.io"
+            Mirrors: ["https://hub-mirror.corp.example.com:5000/"]
+            Secure: true
+            Official: true
+          "registry.internal.corp.example.com:3000":
+            Name: "registry.internal.corp.example.com:3000"
+            Mirrors: []
+            Secure: false
+            Official: false
+      Mirrors:
+        description: |
+          List of registry URLs that act as a mirror for the official
+          (`docker.io`) registry.
+
+        type: "array"
+        items:
+          type: "string"
+        example:
+          - "https://hub-mirror.corp.example.com:5000/"
+          - "https://[2001:db8:a0b:12f0::1]/"
+
+  IndexInfo:
+    description:
+      IndexInfo contains information about a registry.
+    type: "object"
+    x-nullable: true
+    properties:
+      Name:
+        description: |
+          Name of the registry, such as "docker.io".
+        type: "string"
+        example: "docker.io"
+      Mirrors:
+        description: |
+          List of mirrors, expressed as URIs.
+        type: "array"
+        items:
+          type: "string"
+        example:
+          - "https://hub-mirror.corp.example.com:5000/"
+          - "https://registry-2.docker.io/"
+          - "https://registry-3.docker.io/"
+      Secure:
+        description: |
+          Indicates if the registry is part of the list of insecure
+          registries.
+
+          If `false`, the registry is insecure. Insecure registries accept
+          un-encrypted (HTTP) and/or untrusted (HTTPS with certificates from
+          unknown CAs) communication.
+
+          > **Warning**: Insecure registries can be useful when running a local
+          > registry. However, because its use creates security vulnerabilities
+          > it should ONLY be enabled for testing purposes. For increased
+          > security, users should add their CA to their system's list of
+          > trusted CAs instead of enabling this option.
+        type: "boolean"
+        example: true
+      Official:
+        description: |
+          Indicates whether this is an official registry (i.e., Docker Hub / docker.io).
+        type: "boolean"
+        example: true
+
+  Runtime:
+    description: |
+      Runtime describes an [OCI compliant](https://github.com/opencontainers/runtime-spec)
+      runtime.
+
+      The runtime is invoked by the daemon via the `containerd` daemon. OCI
+      runtimes act as an interface to the Linux kernel namespaces, cgroups,
+      and SELinux.
+    type: "object"
+    properties:
+      path:
+        description: |
+          Name and, optionally, path of the OCI executable binary.
+
+          If the path is omitted, the daemon searches the host's `$PATH` for the
+          binary and uses the first result.
+        type: "string"
+        example: "/usr/local/bin/my-oci-runtime"
+      runtimeArgs:
+        description: |
+          List of command-line arguments to pass to the runtime when invoked.
+        type: "array"
+        x-nullable: true
+        items:
+          type: "string"
+        example: ["--debug", "--systemd-cgroup=false"]
+
+  Commit:
+    description: |
+      Commit holds the Git-commit (SHA1) that a binary was built from, as
+      reported in the version-string of external tools, such as `containerd`,
+      or `runC`.
+    type: "object"
+    properties:
+      ID:
+        description: "Actual commit ID of external tool."
+        type: "string"
+        example: "cfb82a876ecc11b5ca0977d1733adbe58599088a"
+      Expected:
+        description: |
+          Commit ID of external tool expected by dockerd as set at build time.
+        type: "string"
+        example: "2d41c047c83e09a6d61d464906feb2a2f3c52aa4"
+
+  SwarmInfo:
+    description: |
+      Represents generic information about the swarm.
+    type: "object"
+    properties:
+      NodeID:
+        description: "Unique identifier for this node in the swarm."
+        type: "string"
+        default: ""
+        example: "k67qz4598weg5unwwffg6z1m1"
+      NodeAddr:
+        description: |
+          IP address at which this node can be reached by other nodes in the
+          swarm.
+        type: "string"
+        default: ""
+        example: "10.0.0.46"
+      LocalNodeState:
+        $ref: "#/definitions/LocalNodeState"
+      ControlAvailable:
+        type: "boolean"
+        default: false
+        example: true
+      Error:
+        type: "string"
+        default: ""
+      RemoteManagers:
+        description: |
+          List of IDs and addresses of other managers in the swarm.
+        type: "array"
+        default: null
+        x-nullable: true
+        items:
+          $ref: "#/definitions/PeerNode"
+        example:
+          - NodeID: "71izy0goik036k48jg985xnds"
+            Addr: "10.0.0.158:2377"
+          - NodeID: "79y6h1o4gv8n120drcprv5nmc"
+            Addr: "10.0.0.159:2377"
+          - NodeID: "k67qz4598weg5unwwffg6z1m1"
+            Addr: "10.0.0.46:2377"
+      Nodes:
+        description: "Total number of nodes in the swarm."
+        type: "integer"
+        x-nullable: true
+        example: 4
+      Managers:
+        description: "Total number of managers in the swarm."
+        type: "integer"
+        x-nullable: true
+        example: 3
+      Cluster:
+        $ref: "#/definitions/ClusterInfo"
+
+  LocalNodeState:
+    description: "Current local status of this node."
+    type: "string"
+    default: ""
+    enum:
+      - ""
+      - "inactive"
+      - "pending"
+      - "active"
+      - "error"
+      - "locked"
+    example: "active"
+
+  PeerNode:
+    description: "Represents a peer-node in the swarm."
+    properties:
+      NodeID:
+        description: "Unique identifier for this node in the swarm."
+        type: "string"
+      Addr:
+        description: |
+          IP address and ports at which this node can be reached.
+        type: "string"
+
+  NetworkAttachmentConfig:
+    description: |
+      Specifies how a service should be attached to a particular network.
+    type: "object"
+    properties:
+      Target:
+        description: |
+          The target network for attachment. Must be a network name or ID.
+        type: "string"
+      Aliases:
+        description: |
+          Discoverable alternate names for the service on this network.
+        type: "array"
+        items:
+          type: "string"
+      DriverOpts:
+        description: |
+          Driver attachment options for the network target.
+        type: "object"
+        additionalProperties:
+          type: "string"
+
+paths:
+  /containers/json:
+    get:
+      summary: "List containers"
+      description: |
+        Returns a list of containers. For details on the format, see the
+        [inspect endpoint](#operation/ContainerInspect).
+
+        Note that it uses a different, smaller representation of a container
+        than inspecting a single container. For example, the list of linked
+        containers is not propagated.
+      operationId: "ContainerList"
+      produces:
+        - "application/json"
+      parameters:
+        - name: "all"
+          in: "query"
+          description: |
+            Return all containers. By default, only running containers are shown.
+          type: "boolean"
+          default: false
+        - name: "limit"
+          in: "query"
+          description: |
+            Return this number of most recently created containers, including
+            non-running ones.
+          type: "integer"
+        - name: "size"
+          in: "query"
+          description: |
+            Return the size of container as fields `SizeRw` and `SizeRootFs`.
+          type: "boolean"
+          default: false
+        - name: "filters"
+          in: "query"
+          description: |
+            Filters to process on the container list, encoded as JSON (a
+            `map[string][]string`). For example, `{"status": ["paused"]}` will
+            only return paused containers.
+
+            Available filters:
+
+            - `ancestor`=(`<image-name>[:<tag>]`, `<image id>`, or `<image@digest>`)
+            - `before`=(`<container id>` or `<container name>`)
+            - `expose`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`)
+            - `exited=<int>` containers with exit code of `<int>`
+            - `health`=(`starting`|`healthy`|`unhealthy`|`none`)
+            - `id=<ID>` a container's ID
+            - `isolation=`(`default`|`process`|`hyperv`) (Windows daemon only)
+            - `is-task=`(`true`|`false`)
+            - `label=key` or `label="key=value"` of a container label
+            - `name=<name>` a container's name
+            - `network`=(`<network id>` or `<network name>`)
+            - `publish`=(`<port>[/<proto>]`|`<startport-endport>/[<proto>]`)
+            - `since`=(`<container id>` or `<container name>`)
+            - `status=`(`created`|`restarting`|`running`|`removing`|`paused`|`exited`|`dead`)
+            - `volume`=(`<volume name>` or `<mount point destination>`)
+          type: "string"
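+          # Example (a sketch; assumes the default unix socket and API v1.40):
+          #   curl --unix-socket /var/run/docker.sock \
+          #     "http://localhost/v1.40/containers/json?filters=%7B%22status%22%3A%5B%22paused%22%5D%7D"
+          # where the filters value is the URL-encoded JSON {"status":["paused"]}.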
+      responses:
+        200:
+          description: "no error"
+          schema:
+            $ref: "#/definitions/ContainerSummary"
+          examples:
+            application/json:
+              - Id: "8dfafdbc3a40"
+                Names:
+                  - "/boring_feynman"
+                Image: "ubuntu:latest"
+                ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82"
+                Command: "echo 1"
+                Created: 1367854155
+                State: "Exited"
+                Status: "Exit 0"
+                Ports:
+                  - PrivatePort: 2222
+                    PublicPort: 3333
+                    Type: "tcp"
+                Labels:
+                  com.example.vendor: "Acme"
+                  com.example.license: "GPL"
+                  com.example.version: "1.0"
+                SizeRw: 12288
+                SizeRootFs: 0
+                HostConfig:
+                  NetworkMode: "default"
+                NetworkSettings:
+                  Networks:
+                    bridge:
+                      NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812"
+                      EndpointID: "2cdc4edb1ded3631c81f57966563e5c8525b81121bb3706a9a9a3ae102711f3f"
+                      Gateway: "172.17.0.1"
+                      IPAddress: "172.17.0.2"
+                      IPPrefixLen: 16
+                      IPv6Gateway: ""
+                      GlobalIPv6Address: ""
+                      GlobalIPv6PrefixLen: 0
+                      MacAddress: "02:42:ac:11:00:02"
+                Mounts:
+                  - Name: "fac362...80535"
+                    Source: "/data"
+                    Destination: "/data"
+                    Driver: "local"
+                    Mode: "ro,Z"
+                    RW: false
+                    Propagation: ""
+              - Id: "9cd87474be90"
+                Names:
+                  - "/coolName"
+                Image: "ubuntu:latest"
+                ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82"
+                Command: "echo 222222"
+                Created: 1367854155
+                State: "Exited"
+                Status: "Exit 0"
+                Ports: []
+                Labels: {}
+                SizeRw: 12288
+                SizeRootFs: 0
+                HostConfig:
+                  NetworkMode: "default"
+                NetworkSettings:
+                  Networks:
+                    bridge:
+                      NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812"
+                      EndpointID: "88eaed7b37b38c2a3f0c4bc796494fdf51b270c2d22656412a2ca5d559a64d7a"
+                      Gateway: "172.17.0.1"
+                      IPAddress: "172.17.0.8"
+                      IPPrefixLen: 16
+                      IPv6Gateway: ""
+                      GlobalIPv6Address: ""
+                      GlobalIPv6PrefixLen: 0
+                      MacAddress: "02:42:ac:11:00:08"
+                Mounts: []
+              - Id: "3176a2479c92"
+                Names:
+                  - "/sleepy_dog"
+                Image: "ubuntu:latest"
+                ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82"
+                Command: "echo 3333333333333333"
+                Created: 1367854154
+                State: "Exited"
+                Status: "Exit 0"
+                Ports: []
+                Labels: {}
+                SizeRw: 12288
+                SizeRootFs: 0
+                HostConfig:
+                  NetworkMode: "default"
+                NetworkSettings:
+                  Networks:
+                    bridge:
+                      NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812"
+                      EndpointID: "8b27c041c30326d59cd6e6f510d4f8d1d570a228466f956edf7815508f78e30d"
+                      Gateway: "172.17.0.1"
+                      IPAddress: "172.17.0.6"
+                      IPPrefixLen: 16
+                      IPv6Gateway: ""
+                      GlobalIPv6Address: ""
+                      GlobalIPv6PrefixLen: 0
+                      MacAddress: "02:42:ac:11:00:06"
+                Mounts: []
+              - Id: "4cb07b47f9fb"
+                Names:
+                  - "/running_cat"
+                Image: "ubuntu:latest"
+                ImageID: "d74508fb6632491cea586a1fd7d748dfc5274cd6fdfedee309ecdcbc2bf5cb82"
+                Command: "echo 444444444444444444444444444444444"
+                Created: 1367854152
+                State: "Exited"
+                Status: "Exit 0"
+                Ports: []
+                Labels: {}
+                SizeRw: 12288
+                SizeRootFs: 0
+                HostConfig:
+                  NetworkMode: "default"
+                NetworkSettings:
+                  Networks:
+                    bridge:
+                      NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812"
+                      EndpointID: "d91c7b2f0644403d7ef3095985ea0e2370325cd2332ff3a3225c4247328e66e9"
+                      Gateway: "172.17.0.1"
+                      IPAddress: "172.17.0.5"
+                      IPPrefixLen: 16
+                      IPv6Gateway: ""
+                      GlobalIPv6Address: ""
+                      GlobalIPv6PrefixLen: 0
+                      MacAddress: "02:42:ac:11:00:05"
+                Mounts: []
+        400:
+          description: "bad parameter"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      tags: ["Container"]
+  /containers/create:
+    post:
+      summary: "Create a container"
+      operationId: "ContainerCreate"
+      consumes:
+        - "application/json"
+        - "application/octet-stream"
+      produces:
+        - "application/json"
+      parameters:
+        - name: "name"
+          in: "query"
+          description: |
+            Assign the specified name to the container. Must match
+            `/?[a-zA-Z0-9][a-zA-Z0-9_.-]+`.
+          type: "string"
+          pattern: "^/?[a-zA-Z0-9][a-zA-Z0-9_.-]+$"
+        - name: "platform"
+          in: "query"
+          description: |
+            Platform in the format `os[/arch[/variant]]` used for image lookup.
+
+            When specified, the daemon checks if the requested image is present
+            in the local image cache with the given OS and Architecture, and
+            otherwise returns a `404` status.
+
+            If the option is not set, the host's native OS and Architecture are
+            used to look up the image in the image cache. However, if no platform
+            is passed and the given image does exist in the local image cache,
+            but its OS or architecture does not match, the container is created
+            with the available image, and a warning is added to the `Warnings`
+            field in the response, for example:
+
+                WARNING: The requested image's platform (linux/arm64/v8) does not
+                         match the detected host platform (linux/amd64) and no
+                         specific platform was requested
+
+          type: "string"
+          default: ""
+        - name: "body"
+          in: "body"
+          description: "Container to create"
+          schema:
+            allOf:
+              - $ref: "#/definitions/ContainerConfig"
+              - type: "object"
+                properties:
+                  HostConfig:
+                    $ref: "#/definitions/HostConfig"
+                  NetworkingConfig:
+                    $ref: "#/definitions/NetworkingConfig"
+            example:
+              Hostname: ""
+              Domainname: ""
+              User: ""
+              AttachStdin: false
+              AttachStdout: true
+              AttachStderr: true
+              Tty: false
+              OpenStdin: false
+              StdinOnce: false
+              Env:
+                - "FOO=bar"
+                - "BAZ=quux"
+              Cmd:
+                - "date"
+              Entrypoint: ""
+              Image: "ubuntu"
+              Labels:
+                com.example.vendor: "Acme"
+                com.example.license: "GPL"
+                com.example.version: "1.0"
+              Volumes:
+                /volumes/data: {}
+              WorkingDir: ""
+              NetworkDisabled: false
+              MacAddress: "12:34:56:78:9a:bc"
+              ExposedPorts:
+                22/tcp: {}
+              StopSignal: "SIGTERM"
+              StopTimeout: 10
+              HostConfig:
+                Binds:
+                  - "/tmp:/tmp"
+                Links:
+                  - "redis3:redis"
+                Memory: 0
+                MemorySwap: 0
+                MemoryReservation: 0
+                KernelMemory: 0
+                NanoCpus: 500000
+                CpuPercent: 80
+                CpuShares: 512
+                CpuPeriod: 100000
+                CpuRealtimePeriod: 1000000
+                CpuRealtimeRuntime: 10000
+                CpuQuota: 50000
+                CpusetCpus: "0,1"
+                CpusetMems: "0,1"
+                MaximumIOps: 0
+                MaximumIOBps: 0
+                BlkioWeight: 300
+                BlkioWeightDevice:
+                  - {}
+                BlkioDeviceReadBps:
+                  - {}
+                BlkioDeviceReadIOps:
+                  - {}
+                BlkioDeviceWriteBps:
+                  - {}
+                BlkioDeviceWriteIOps:
+                  - {}
+                DeviceRequests:
+                  - Driver: "nvidia"
+                    Count: -1
+                    DeviceIDs: ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"]
+                    Capabilities: [["gpu", "nvidia", "compute"]]
+                    Options:
+                      property1: "string"
+                      property2: "string"
+                MemorySwappiness: 60
+                OomKillDisable: false
+                OomScoreAdj: 500
+                PidMode: ""
+                PidsLimit: 0
+                PortBindings:
+                  22/tcp:
+                    - HostPort: "11022"
+                PublishAllPorts: false
+                Privileged: false
+                ReadonlyRootfs: false
+                Dns:
+                  - "8.8.8.8"
+                DnsOptions:
+                  - ""
+                DnsSearch:
+                  - ""
+                VolumesFrom:
+                  - "parent"
+                  - "other:ro"
+                CapAdd:
+                  - "NET_ADMIN"
+                CapDrop:
+                  - "MKNOD"
+                GroupAdd:
+                  - "newgroup"
+                RestartPolicy:
+                  Name: ""
+                  MaximumRetryCount: 0
+                AutoRemove: true
+                NetworkMode: "bridge"
+                Devices: []
+                Ulimits:
+                  - {}
+                LogConfig:
+                  Type: "json-file"
+                  Config: {}
+                SecurityOpt: []
+                StorageOpt: {}
+                CgroupParent: ""
+                VolumeDriver: ""
+                ShmSize: 67108864
+              NetworkingConfig:
+                EndpointsConfig:
+                  isolated_nw:
+                    IPAMConfig:
+                      IPv4Address: "172.20.30.33"
+                      IPv6Address: "2001:db8:abcd::3033"
+                      LinkLocalIPs:
+                        - "169.254.34.68"
+                        - "fe80::3468"
+                    Links:
+                      - "container_1"
+                      - "container_2"
+                    Aliases:
+                      - "server_x"
+                      - "server_y"
+
+          required: true
+      responses:
+        201:
+          description: "Container created successfully"
+          schema:
+            type: "object"
+            title: "ContainerCreateResponse"
+            description: "OK response to ContainerCreate operation"
+            required: [Id, Warnings]
+            properties:
+              Id:
+                description: "The ID of the created container"
+                type: "string"
+                x-nullable: false
+              Warnings:
+                description: "Warnings encountered when creating the container"
+                type: "array"
+                x-nullable: false
+                items:
+                  type: "string"
+          examples:
+            application/json:
+              Id: "e90e34656806"
+              Warnings: []
+        400:
+          description: "bad parameter"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        404:
+          description: "no such image"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+          examples:
+            application/json:
+              message: "No such image: c2ada9df5af8"
+        409:
+          description: "conflict"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      tags: ["Container"]
+  /containers/{id}/json:
+    get:
+      summary: "Inspect a container"
+      description: "Return low-level information about a container."
+      operationId: "ContainerInspect"
+      produces:
+        - "application/json"
+      responses:
+        200:
+          description: "no error"
+          schema:
+            type: "object"
+            title: "ContainerInspectResponse"
+            properties:
+              Id:
+                description: "The ID of the container"
+                type: "string"
+              Created:
+                description: "The time the container was created"
+                type: "string"
+              Path:
+                description: "The path to the command being run"
+                type: "string"
+              Args:
+                description: "The arguments to the command being run"
+                type: "array"
+                items:
+                  type: "string"
+              State:
+                x-nullable: true
+                $ref: "#/definitions/ContainerState"
+              Image:
+                description: "The container's image ID"
+                type: "string"
+              ResolvConfPath:
+                type: "string"
+              HostnamePath:
+                type: "string"
+              HostsPath:
+                type: "string"
+              LogPath:
+                type: "string"
+              Name:
+                type: "string"
+              RestartCount:
+                type: "integer"
+              Driver:
+                type: "string"
+              Platform:
+                type: "string"
+              MountLabel:
+                type: "string"
+              ProcessLabel:
+                type: "string"
+              AppArmorProfile:
+                type: "string"
+              ExecIDs:
+                description: "IDs of exec instances that are running in the container."
+                type: "array"
+                items:
+                  type: "string"
+                x-nullable: true
+              HostConfig:
+                $ref: "#/definitions/HostConfig"
+              GraphDriver:
+                $ref: "#/definitions/GraphDriverData"
+              SizeRw:
+                description: |
+                  The size of files that have been created or changed by this
+                  container.
+                type: "integer"
+                format: "int64"
+              SizeRootFs:
+                description: "The total size of all the files in this container."
+                type: "integer"
+                format: "int64"
+              Mounts:
+                type: "array"
+                items:
+                  $ref: "#/definitions/MountPoint"
+              Config:
+                $ref: "#/definitions/ContainerConfig"
+              NetworkSettings:
+                $ref: "#/definitions/NetworkSettings"
+          examples:
+            application/json:
+              AppArmorProfile: ""
+              Args:
+                - "-c"
+                - "exit 9"
+              Config:
+                AttachStderr: true
+                AttachStdin: false
+                AttachStdout: true
+                Cmd:
+                  - "/bin/sh"
+                  - "-c"
+                  - "exit 9"
+                Domainname: ""
+                Env:
+                  - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
+                Healthcheck:
+                  Test: ["CMD-SHELL", "exit 0"]
+                Hostname: "ba033ac44011"
+                Image: "ubuntu"
+                Labels:
+                  com.example.vendor: "Acme"
+                  com.example.license: "GPL"
+                  com.example.version: "1.0"
+                MacAddress: ""
+                NetworkDisabled: false
+                OpenStdin: false
+                StdinOnce: false
+                Tty: false
+                User: ""
+                Volumes:
+                  /volumes/data: {}
+                WorkingDir: ""
+                StopSignal: "SIGTERM"
+                StopTimeout: 10
+              Created: "2015-01-06T15:47:31.485331387Z"
+              Driver: "devicemapper"
+              ExecIDs:
+                - "b35395de42bc8abd327f9dd65d913b9ba28c74d2f0734eeeae84fa1c616a0fca"
+                - "3fc1232e5cd20c8de182ed81178503dc6437f4e7ef12b52cc5e8de020652f1c4"
+              HostConfig:
+                MaximumIOps: 0
+                MaximumIOBps: 0
+                BlkioWeight: 0
+                BlkioWeightDevice:
+                  - {}
+                BlkioDeviceReadBps:
+                  - {}
+                BlkioDeviceWriteBps:
+                  - {}
+                BlkioDeviceReadIOps:
+                  - {}
+                BlkioDeviceWriteIOps:
+                  - {}
+                ContainerIDFile: ""
+                CpusetCpus: ""
+                CpusetMems: ""
+                CpuPercent: 80
+                CpuShares: 0
+                CpuPeriod: 100000
+                CpuRealtimePeriod: 1000000
+                CpuRealtimeRuntime: 10000
+                Devices: []
+                DeviceRequests:
+                  - Driver: "nvidia"
+                    Count: -1
+                    DeviceIDs: ["0", "1", "GPU-fef8089b-4820-abfc-e83e-94318197576e"]
+                    Capabilities: [["gpu", "nvidia", "compute"]]
+                    Options:
+                      property1: "string"
+                      property2: "string"
+                IpcMode: ""
+                Memory: 0
+                MemorySwap: 0
+                MemoryReservation: 0
+                KernelMemory: 0
+                OomKillDisable: false
+                OomScoreAdj: 500
+                NetworkMode: "bridge"
+                PidMode: ""
+                PortBindings: {}
+                Privileged: false
+                ReadonlyRootfs: false
+                PublishAllPorts: false
+                RestartPolicy:
+                  MaximumRetryCount: 2
+                  Name: "on-failure"
+                LogConfig:
+                  Type: "json-file"
+                Sysctls:
+                  net.ipv4.ip_forward: "1"
+                Ulimits:
+                  - {}
+                VolumeDriver: ""
+                ShmSize: 67108864
+              HostnamePath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hostname"
+              HostsPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/hosts"
+              LogPath: "/var/lib/docker/containers/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b/1eb5fabf5a03807136561b3c00adcd2992b535d624d5e18b6cdc6a6844d9767b-json.log"
+              Id: "ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39"
+              Image: "04c5d3b7b0656168630d3ba35d8889bd0e9caafcaeb3004d2bfbc47e7c5d35d2"
+              MountLabel: ""
+              Name: "/boring_euclid"
+              NetworkSettings:
+                Bridge: ""
+                SandboxID: ""
+                HairpinMode: false
+                LinkLocalIPv6Address: ""
+                LinkLocalIPv6PrefixLen: 0
+                SandboxKey: ""
+                EndpointID: ""
+                Gateway: ""
+                GlobalIPv6Address: ""
+                GlobalIPv6PrefixLen: 0
+                IPAddress: ""
+                IPPrefixLen: 0
+                IPv6Gateway: ""
+                MacAddress: ""
+                Networks:
+                  bridge:
+                    NetworkID: "7ea29fc1412292a2d7bba362f9253545fecdfa8ce9a6e37dd10ba8bee7129812"
+                    EndpointID: "7587b82f0dada3656fda26588aee72630c6fab1536d36e394b2bfbcf898c971d"
+                    Gateway: "172.17.0.1"
+                    IPAddress: "172.17.0.2"
+                    IPPrefixLen: 16
+                    IPv6Gateway: ""
+                    GlobalIPv6Address: ""
+                    GlobalIPv6PrefixLen: 0
+                    MacAddress: "02:42:ac:12:00:02"
+              Path: "/bin/sh"
+              ProcessLabel: ""
+              ResolvConfPath: "/var/lib/docker/containers/ba033ac4401106a3b513bc9d639eee123ad78ca3616b921167cd74b20e25ed39/resolv.conf"
+              RestartCount: 1
+              State:
+                Error: ""
+                ExitCode: 9
+                FinishedAt: "2015-01-06T15:47:32.080254511Z"
+                Health:
+                  Status: "healthy"
+                  FailingStreak: 0
+                  Log:
+                    - Start: "2019-12-22T10:59:05.6385933Z"
+                      End: "2019-12-22T10:59:05.8078452Z"
+                      ExitCode: 0
+                      Output: ""
+                OOMKilled: false
+                Dead: false
+                Paused: false
+                Pid: 0
+                Restarting: false
+                Running: true
+                StartedAt: "2015-01-06T15:47:32.072697474Z"
+                Status: "running"
+              Mounts:
+                - Name: "fac362...80535"
+                  Source: "/data"
+                  Destination: "/data"
+                  Driver: "local"
+                  Mode: "ro,Z"
+                  RW: false
+                  Propagation: ""
+        404:
+          description: "no such container"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+          examples:
+            application/json:
+              message: "No such container: c2ada9df5af8"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "id"
+          in: "path"
+          required: true
+          description: "ID or name of the container"
+          type: "string"
+        - name: "size"
+          in: "query"
+          type: "boolean"
+          default: false
+          description: "Return the size of the container as fields `SizeRw` and `SizeRootFs`"
+      tags: ["Container"]
+  /containers/{id}/top:
+    get:
+      summary: "List processes running inside a container"
+      description: |
+        On Unix systems, this is done by running the `ps` command. This endpoint
+        is not supported on Windows.
+      operationId: "ContainerTop"
+      responses:
+        200:
+          description: "no error"
+          schema:
+            type: "object"
+            title: "ContainerTopResponse"
+            description: "OK response to ContainerTop operation"
+            properties:
+              Titles:
+                description: "The ps column titles"
+                type: "array"
+                items:
+                  type: "string"
+              Processes:
+                description: |
+                  Each process running in the container, where each process
+                  is an array of values corresponding to the titles.
+                type: "array"
+                items:
+                  type: "array"
+                  items:
+                    type: "string"
+          examples:
+            application/json:
+              Titles:
+                - "UID"
+                - "PID"
+                - "PPID"
+                - "C"
+                - "STIME"
+                - "TTY"
+                - "TIME"
+                - "CMD"
+              Processes:
+                -
+                  - "root"
+                  - "13642"
+                  - "882"
+                  - "0"
+                  - "17:03"
+                  - "pts/0"
+                  - "00:00:00"
+                  - "/bin/bash"
+                -
+                  - "root"
+                  - "13735"
+                  - "13642"
+                  - "0"
+                  - "17:06"
+                  - "pts/0"
+                  - "00:00:00"
+                  - "sleep 10"
+        404:
+          description: "no such container"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+          examples:
+            application/json:
+              message: "No such container: c2ada9df5af8"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "id"
+          in: "path"
+          required: true
+          description: "ID or name of the container"
+          type: "string"
+        - name: "ps_args"
+          in: "query"
+          description: "The arguments to pass to `ps`. For example, `aux`"
+          type: "string"
+          default: "-ef"
+      tags: ["Container"]
+  /containers/{id}/logs:
+    get:
+      summary: "Get container logs"
+      description: |
+        Get `stdout` and `stderr` logs from a container.
+
+        Note: This endpoint works only for containers with the `json-file` or
+        `journald` logging driver.
+      operationId: "ContainerLogs"
+      responses:
+        200:
+          description: |
+            logs returned as a stream in response body.
+            For the stream format, [see the documentation for the attach endpoint](#operation/ContainerAttach).
+            Note that unlike the attach endpoint, the logs endpoint does not
+            upgrade the connection and does not set Content-Type.
+          schema:
+            type: "string"
+            format: "binary"
+        404:
+          description: "no such container"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+          examples:
+            application/json:
+              message: "No such container: c2ada9df5af8"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "id"
+          in: "path"
+          required: true
+          description: "ID or name of the container"
+          type: "string"
+        - name: "follow"
+          in: "query"
+          description: "Keep the connection open after returning logs."
+          type: "boolean"
+          default: false
+        - name: "stdout"
+          in: "query"
+          description: "Return logs from `stdout`"
+          type: "boolean"
+          default: false
+        - name: "stderr"
+          in: "query"
+          description: "Return logs from `stderr`"
+          type: "boolean"
+          default: false
+        - name: "since"
+          in: "query"
+          description: "Only return logs since this time, as a UNIX timestamp"
+          type: "integer"
+          default: 0
+        - name: "until"
+          in: "query"
+          description: "Only return logs before this time, as a UNIX timestamp"
+          type: "integer"
+          default: 0
+        - name: "timestamps"
+          in: "query"
+          description: "Add timestamps to every log line"
+          type: "boolean"
+          default: false
+        - name: "tail"
+          in: "query"
+          description: |
+            Only return this number of log lines from the end of the logs.
+            Specify as an integer or `all` to output all log lines.
+          type: "string"
+          default: "all"
+      tags: ["Container"]
+  /containers/{id}/changes:
+    get:
+      summary: "Get changes on a container’s filesystem"
+      description: |
+        Returns which files in a container's filesystem have been added, deleted,
+        or modified. The `Kind` of modification can be one of:
+
+        - `0`: Modified
+        - `1`: Added
+        - `2`: Deleted
+      operationId: "ContainerChanges"
+      produces: ["application/json"]
+      responses:
+        200:
+          description: "The list of changes"
+          schema:
+            type: "array"
+            items:
+              type: "object"
+              x-go-name: "ContainerChangeResponseItem"
+              title: "ContainerChangeResponseItem"
+              description: "change item in response to ContainerChanges operation"
+              required: [Path, Kind]
+              properties:
+                Path:
+                  description: "Path to file that has changed"
+                  type: "string"
+                  x-nullable: false
+                Kind:
+                  description: "Kind of change"
+                  type: "integer"
+                  format: "uint8"
+                  enum: [0, 1, 2]
+                  x-nullable: false
+          examples:
+            application/json:
+              - Path: "/dev"
+                Kind: 0
+              - Path: "/dev/kmsg"
+                Kind: 1
+              - Path: "/test"
+                Kind: 1
+        404:
+          description: "no such container"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+          examples:
+            application/json:
+              message: "No such container: c2ada9df5af8"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "id"
+          in: "path"
+          required: true
+          description: "ID or name of the container"
+          type: "string"
+      tags: ["Container"]
+  /containers/{id}/export:
+    get:
+      summary: "Export a container"
+      description: "Export the contents of a container as a tarball."
+      operationId: "ContainerExport"
+      produces:
+        - "application/octet-stream"
+      responses:
+        200:
+          description: "no error"
+        404:
+          description: "no such container"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+          examples:
+            application/json:
+              message: "No such container: c2ada9df5af8"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "id"
+          in: "path"
+          required: true
+          description: "ID or name of the container"
+          type: "string"
+      tags: ["Container"]
+  /containers/{id}/stats:
+    get:
+      summary: "Get container stats based on resource usage"
+      description: |
+        This endpoint returns a live stream of a container’s resource usage
+        statistics.
+
+        The `precpu_stats` field contains the CPU statistics from the
+        *previous* read, and is used to calculate the CPU usage percentage.
+        It is not an exact copy of the `cpu_stats` field.
+
+        If either `precpu_stats.online_cpus` or `cpu_stats.online_cpus` is
+        nil then for compatibility with older daemons the length of the
+        corresponding `cpu_usage.percpu_usage` array should be used.
+
+        On a cgroup v2 host, the following fields are not set:
+
+        * `blkio_stats`: all fields other than `io_service_bytes_recursive`
+        * `cpu_stats`: `cpu_usage.percpu_usage`
+        * `memory_stats`: `max_usage` and `failcnt`
+
+        Also, `memory_stats.stats` fields are incompatible with cgroup v1.
+
+        To calculate the values shown by the `stats` command of the docker
+        CLI tool, the following formulas can be used:
+
+        * used_memory = `memory_stats.usage - memory_stats.stats.cache`
+        * available_memory = `memory_stats.limit`
+        * Memory usage % = `(used_memory / available_memory) * 100.0`
+        * cpu_delta = `cpu_stats.cpu_usage.total_usage - precpu_stats.cpu_usage.total_usage`
+        * system_cpu_delta = `cpu_stats.system_cpu_usage - precpu_stats.system_cpu_usage`
+        * number_cpus = `length(cpu_stats.cpu_usage.percpu_usage)` or `cpu_stats.online_cpus`
+        * CPU usage % = `(cpu_delta / system_cpu_delta) * number_cpus * 100.0`
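+
+        For example, plugging the `cpu_stats` and `precpu_stats` values from
+        the sample response below into these formulas (a purely illustrative
+        calculation, not part of the API contract):
+
+            cpu_delta        = 100215355 - 100093996 = 121359
+            system_cpu_delta = 739306590000000 - 9492140000000
+                             = 729814450000000
+            number_cpus      = 4 (from `online_cpus`)
+            CPU usage %      = (121359 / 729814450000000) * 4 * 100.0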
+      operationId: "ContainerStats"
+      produces: ["application/json"]
+      responses:
+        200:
+          description: "no error"
+          schema:
+            type: "object"
+          examples:
+            application/json:
+              read: "2015-01-08T22:57:31.547920715Z"
+              pids_stats:
+                current: 3
+              networks:
+                eth0:
+                  rx_bytes: 5338
+                  rx_dropped: 0
+                  rx_errors: 0
+                  rx_packets: 36
+                  tx_bytes: 648
+                  tx_dropped: 0
+                  tx_errors: 0
+                  tx_packets: 8
+                eth5:
+                  rx_bytes: 4641
+                  rx_dropped: 0
+                  rx_errors: 0
+                  rx_packets: 26
+                  tx_bytes: 690
+                  tx_dropped: 0
+                  tx_errors: 0
+                  tx_packets: 9
+              memory_stats:
+                stats:
+                  total_pgmajfault: 0
+                  cache: 0
+                  mapped_file: 0
+                  total_inactive_file: 0
+                  pgpgout: 414
+                  rss: 6537216
+                  total_mapped_file: 0
+                  writeback: 0
+                  unevictable: 0
+                  pgpgin: 477
+                  total_unevictable: 0
+                  pgmajfault: 0
+                  total_rss: 6537216
+                  total_rss_huge: 6291456
+                  total_writeback: 0
+                  total_inactive_anon: 0
+                  rss_huge: 6291456
+                  hierarchical_memory_limit: 67108864
+                  total_pgfault: 964
+                  total_active_file: 0
+                  active_anon: 6537216
+                  total_active_anon: 6537216
+                  total_pgpgout: 414
+                  total_cache: 0
+                  inactive_anon: 0
+                  active_file: 0
+                  pgfault: 964
+                  inactive_file: 0
+                  total_pgpgin: 477
+                max_usage: 6651904
+                usage: 6537216
+                failcnt: 0
+                limit: 67108864
+              blkio_stats: {}
+              cpu_stats:
+                cpu_usage:
+                  percpu_usage:
+                    - 8646879
+                    - 24472255
+                    - 36438778
+                    - 30657443
+                  usage_in_usermode: 50000000
+                  total_usage: 100215355
+                  usage_in_kernelmode: 30000000
+                system_cpu_usage: 739306590000000
+                online_cpus: 4
+                throttling_data:
+                  periods: 0
+                  throttled_periods: 0
+                  throttled_time: 0
+              precpu_stats:
+                cpu_usage:
+                  percpu_usage:
+                    - 8646879
+                    - 24350896
+                    - 36438778
+                    - 30657443
+                  usage_in_usermode: 50000000
+                  total_usage: 100093996
+                  usage_in_kernelmode: 30000000
+                system_cpu_usage: 9492140000000
+                online_cpus: 4
+                throttling_data:
+                  periods: 0
+                  throttled_periods: 0
+                  throttled_time: 0
+        404:
+          description: "no such container"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+          examples:
+            application/json:
+              message: "No such container: c2ada9df5af8"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "id"
+          in: "path"
+          required: true
+          description: "ID or name of the container"
+          type: "string"
+        - name: "stream"
+          in: "query"
+          description: |
+            Stream the output. If false, the stats are output once and the
+            connection is closed.
+          type: "boolean"
+          default: true
+        - name: "one-shot"
+          in: "query"
+          description: |
+            Only get a single stat instead of waiting for 2 cycles. Must be used
+            with `stream=false`.
+          type: "boolean"
+          default: false
+      tags: ["Container"]
+  /containers/{id}/resize:
+    post:
+      summary: "Resize a container TTY"
+      description: "Resize the TTY for a container."
+      operationId: "ContainerResize"
+      consumes:
+        - "application/octet-stream"
+      produces:
+        - "text/plain"
+      responses:
+        200:
+          description: "no error"
+        404:
+          description: "no such container"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+          examples:
+            application/json:
+              message: "No such container: c2ada9df5af8"
+        500:
+          description: "cannot resize container"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "id"
+          in: "path"
+          required: true
+          description: "ID or name of the container"
+          type: "string"
+        - name: "h"
+          in: "query"
+          description: "Height of the TTY session in characters"
+          type: "integer"
+        - name: "w"
+          in: "query"
+          description: "Width of the TTY session in characters"
+          type: "integer"
+      tags: ["Container"]
+  /containers/{id}/start:
+    post:
+      summary: "Start a container"
+      operationId: "ContainerStart"
+      responses:
+        204:
+          description: "no error"
+        304:
+          description: "container already started"
+        404:
+          description: "no such container"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+          examples:
+            application/json:
+              message: "No such container: c2ada9df5af8"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "id"
+          in: "path"
+          required: true
+          description: "ID or name of the container"
+          type: "string"
+        - name: "detachKeys"
+          in: "query"
+          description: |
+            Override the key sequence for detaching a container. Format is a
+            single character `[a-Z]` or `ctrl-<value>` where `<value>` is one
+            of: `a-z`, `@`, `^`, `[`, `,` or `_`.
+          type: "string"
+      tags: ["Container"]
+  /containers/{id}/stop:
+    post:
+      summary: "Stop a container"
+      operationId: "ContainerStop"
+      responses:
+        204:
+          description: "no error"
+        304:
+          description: "container already stopped"
+        404:
+          description: "no such container"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+          examples:
+            application/json:
+              message: "No such container: c2ada9df5af8"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "id"
+          in: "path"
+          required: true
+          description: "ID or name of the container"
+          type: "string"
+        - name: "t"
+          in: "query"
+          description: "Number of seconds to wait before killing the container"
+          type: "integer"
+      tags: ["Container"]
+  /containers/{id}/restart:
+    post:
+      summary: "Restart a container"
+      operationId: "ContainerRestart"
+      responses:
+        204:
+          description: "no error"
+        404:
+          description: "no such container"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+          examples:
+            application/json:
+              message: "No such container: c2ada9df5af8"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "id"
+          in: "path"
+          required: true
+          description: "ID or name of the container"
+          type: "string"
+        - name: "t"
+          in: "query"
+          description: "Number of seconds to wait before killing the container"
+          type: "integer"
+      tags: ["Container"]
+  /containers/{id}/kill:
+    post:
+      summary: "Kill a container"
+      description: |
+        Send a POSIX signal to a container, defaulting to killing the
+        container.
+      operationId: "ContainerKill"
+      responses:
+        204:
+          description: "no error"
+        404:
+          description: "no such container"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+          examples:
+            application/json:
+              message: "No such container: c2ada9df5af8"
+        409:
+          description: "container is not running"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+          examples:
+            application/json:
+              message: "Container d37cde0fe4ad63c3a7252023b2f9800282894247d145cb5933ddf6e52cc03a28 is not running"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "id"
+          in: "path"
+          required: true
+          description: "ID or name of the container"
+          type: "string"
+        - name: "signal"
+          in: "query"
+          description: "Signal to send to the container as an integer or string (e.g. `SIGINT`)"
+          type: "string"
+          default: "SIGKILL"
+      tags: ["Container"]
+  /containers/{id}/update:
+    post:
+      summary: "Update a container"
+      description: |
+        Change various configuration options of a container without having to
+        recreate it.
+      operationId: "ContainerUpdate"
+      consumes: ["application/json"]
+      produces: ["application/json"]
+      responses:
+        200:
+          description: "The container has been updated."
+          schema:
+            type: "object"
+            title: "ContainerUpdateResponse"
+            description: "OK response to ContainerUpdate operation"
+            properties:
+              Warnings:
+                type: "array"
+                items:
+                  type: "string"
+        404:
+          description: "no such container"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+          examples:
+            application/json:
+              message: "No such container: c2ada9df5af8"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "id"
+          in: "path"
+          required: true
+          description: "ID or name of the container"
+          type: "string"
+        - name: "update"
+          in: "body"
+          required: true
+          schema:
+            allOf:
+              - $ref: "#/definitions/Resources"
+              - type: "object"
+                properties:
+                  RestartPolicy:
+                    $ref: "#/definitions/RestartPolicy"
+            example:
+              BlkioWeight: 300
+              CpuShares: 512
+              CpuPeriod: 100000
+              CpuQuota: 50000
+              CpuRealtimePeriod: 1000000
+              CpuRealtimeRuntime: 10000
+              CpusetCpus: "0,1"
+              CpusetMems: "0"
+              Memory: 314572800
+              MemorySwap: 514288000
+              MemoryReservation: 209715200
+              KernelMemory: 52428800
+              RestartPolicy:
+                MaximumRetryCount: 4
+                Name: "on-failure"
+      tags: ["Container"]
+  /containers/{id}/rename:
+    post:
+      summary: "Rename a container"
+      operationId: "ContainerRename"
+      responses:
+        204:
+          description: "no error"
+        404:
+          description: "no such container"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+          examples:
+            application/json:
+              message: "No such container: c2ada9df5af8"
+        409:
+          description: "name already in use"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "id"
+          in: "path"
+          required: true
+          description: "ID or name of the container"
+          type: "string"
+        - name: "name"
+          in: "query"
+          required: true
+          description: "New name for the container"
+          type: "string"
+      tags: ["Container"]
+  /containers/{id}/pause:
+    post:
+      summary: "Pause a container"
+      description: |
+        Use the freezer cgroup to suspend all processes in a container.
+
+        Traditionally, when suspending a process the `SIGSTOP` signal is used,
+        which is observable by the process being suspended. With the freezer
+        cgroup the process is unaware of, and unable to detect, that it is
+        being suspended and subsequently resumed.
+      operationId: "ContainerPause"
+      responses:
+        204:
+          description: "no error"
+        404:
+          description: "no such container"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+          examples:
+            application/json:
+              message: "No such container: c2ada9df5af8"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "id"
+          in: "path"
+          required: true
+          description: "ID or name of the container"
+          type: "string"
+      tags: ["Container"]
+  /containers/{id}/unpause:
+    post:
+      summary: "Unpause a container"
+      description: "Resume a container which has been paused."
+      operationId: "ContainerUnpause"
+      responses:
+        204:
+          description: "no error"
+        404:
+          description: "no such container"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+          examples:
+            application/json:
+              message: "No such container: c2ada9df5af8"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "id"
+          in: "path"
+          required: true
+          description: "ID or name of the container"
+          type: "string"
+      tags: ["Container"]
+  /containers/{id}/attach:
+    post:
+      summary: "Attach to a container"
+      description: |
+        Attach to a container to read its output or send it input. You can attach
+        to the same container multiple times and you can reattach to containers
+        that have been detached.
+
+        Either the `stream` or `logs` parameter must be `true` for this endpoint
+        to do anything.
+
+        See the [documentation for the `docker attach` command](https://docs.docker.com/engine/reference/commandline/attach/)
+        for more details.
+
+        ### Hijacking
+
+        This endpoint hijacks the HTTP connection to transport `stdin`, `stdout`,
+        and `stderr` on the same socket.
+
+        This is the response from the daemon for an attach request:
+
+        ```
+        HTTP/1.1 200 OK
+        Content-Type: application/vnd.docker.raw-stream
+
+        [STREAM]
+        ```
+
+        After the headers and two newlines, the TCP connection can now be used
+        for raw, bidirectional communication between the client and server.
+
+        To hint potential proxies about connection hijacking, the Docker client
+        can also optionally send connection upgrade headers.
+
+        For example, the client sends this request to upgrade the connection:
+
+        ```
+        POST /containers/16253994b7c4/attach?stream=1&stdout=1 HTTP/1.1
+        Upgrade: tcp
+        Connection: Upgrade
+        ```
+
+        The Docker daemon will respond with a `101 UPGRADED` response, and will
+        similarly follow with the raw stream:
+
+        ```
+        HTTP/1.1 101 UPGRADED
+        Content-Type: application/vnd.docker.raw-stream
+        Connection: Upgrade
+        Upgrade: tcp
+
+        [STREAM]
+        ```
+
+        ### Stream format
+
+        When the TTY setting is disabled in [`POST /containers/create`](#operation/ContainerCreate),
+        the stream over the hijacked connection is multiplexed to separate out
+        `stdout` and `stderr`. The stream consists of a series of frames, each
+        containing a header and a payload.
+
+        The header contains the information which the stream writes (`stdout` or
+        `stderr`). It also contains the size of the associated frame encoded in
+        the last four bytes (`uint32`).
+
+        It is encoded on the first eight bytes like this:
+
+        ```go
+        header := [8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4}
+        ```
+
+        `STREAM_TYPE` can be:
+
+        - 0: `stdin` (is written on `stdout`)
+        - 1: `stdout`
+        - 2: `stderr`
+
+        `SIZE1, SIZE2, SIZE3, SIZE4` are the four bytes of the `uint32` size
+        encoded as big endian.
+
+        Following the header is the payload, which consists of the specified
+        number of bytes for the given `STREAM_TYPE`.
+
+        The simplest way to implement this protocol is the following:
+
+        1. Read 8 bytes.
+        2. Choose `stdout` or `stderr` depending on the first byte.
+        3. Extract the frame size from the last four bytes.
+        4. Read the extracted size and output it on the correct output.
+        5. Goto 1.
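+
+        As a non-normative sketch (the `demux` function name is illustrative,
+        not part of any API), this loop could be implemented in Go:
+
+        ```go
+        package main
+
+        import (
+            "bytes"
+            "encoding/binary"
+            "fmt"
+            "io"
+        )
+
+        // demux reads the multiplexed stream from r and copies each frame's
+        // payload to stdout or stderr depending on its STREAM_TYPE byte.
+        func demux(r io.Reader, stdout, stderr io.Writer) error {
+            header := make([]byte, 8)
+            for {
+                // 1. Read the 8-byte frame header.
+                if _, err := io.ReadFull(r, header); err != nil {
+                    if err == io.EOF {
+                        return nil // clean end of stream
+                    }
+                    return err
+                }
+                // 2. Choose the output from the first byte (0 and 1 both go
+                //    to stdout, 2 goes to stderr).
+                w := stdout
+                if header[0] == 2 {
+                    w = stderr
+                }
+                // 3. Extract the big-endian frame size from the last four bytes.
+                size := binary.BigEndian.Uint32(header[4:8])
+                // 4. Copy exactly `size` payload bytes to the chosen output.
+                if _, err := io.CopyN(w, r, int64(size)); err != nil {
+                    return err
+                }
+                // 5. Loop back for the next frame.
+            }
+        }
+
+        func main() {
+            // Build a sample stream: one stdout frame and one stderr frame.
+            var stream bytes.Buffer
+            for _, f := range []struct {
+                streamType byte
+                payload    string
+            }{{1, "hello\n"}, {2, "oops\n"}} {
+                hdr := [8]byte{f.streamType}
+                binary.BigEndian.PutUint32(hdr[4:], uint32(len(f.payload)))
+                stream.Write(hdr[:])
+                stream.WriteString(f.payload)
+            }
+            var out, errOut bytes.Buffer
+            if err := demux(&stream, &out, &errOut); err != nil {
+                panic(err)
+            }
+            fmt.Printf("stdout=%q stderr=%q\n", out.String(), errOut.String())
+        }
+        ```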
+
+        ### Stream format when using a TTY
+
+        When the TTY setting is enabled in [`POST /containers/create`](#operation/ContainerCreate),
+        the stream is not multiplexed. The data exchanged over the hijacked
+        connection is simply the raw data from the process PTY and client's
+        `stdin`.
+
+      operationId: "ContainerAttach"
+      produces:
+        - "application/vnd.docker.raw-stream"
+      responses:
+        101:
+          description: "no error, hints proxy about hijacking"
+        200:
+          description: "no error, no upgrade header found"
+        400:
+          description: "bad parameter"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        404:
+          description: "no such container"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+          examples:
+            application/json:
+              message: "No such container: c2ada9df5af8"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "id"
+          in: "path"
+          required: true
+          description: "ID or name of the container"
+          type: "string"
+        - name: "detachKeys"
+          in: "query"
+          description: |
+            Override the key sequence for detaching a container. Format is a
+            single character `[a-Z]` or `ctrl-<value>` where `<value>` is one
+            of: `a-z`, `@`, `^`, `[`, `,` or `_`.
+          type: "string"
+        - name: "logs"
+          in: "query"
+          description: |
+            Replay previous logs from the container.
+
+            This is useful for attaching to a container that has already
+            started, when you want to see everything it has output since it
+            started.
+
+            If `stream` is also enabled, once all the previous output has been
+            returned, it will seamlessly transition into streaming current
+            output.
+          type: "boolean"
+          default: false
+        - name: "stream"
+          in: "query"
+          description: |
+            Stream attached streams from the time the request was made onwards.
+          type: "boolean"
+          default: false
+        - name: "stdin"
+          in: "query"
+          description: "Attach to `stdin`"
+          type: "boolean"
+          default: false
+        - name: "stdout"
+          in: "query"
+          description: "Attach to `stdout`"
+          type: "boolean"
+          default: false
+        - name: "stderr"
+          in: "query"
+          description: "Attach to `stderr`"
+          type: "boolean"
+          default: false
+      tags: ["Container"]
+  /containers/{id}/attach/ws:
+    get:
+      summary: "Attach to a container via a websocket"
+      operationId: "ContainerAttachWebsocket"
+      responses:
+        101:
+          description: "no error, hints proxy about hijacking"
+        200:
+          description: "no error, no upgrade header found"
+        400:
+          description: "bad parameter"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        404:
+          description: "no such container"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+          examples:
+            application/json:
+              message: "No such container: c2ada9df5af8"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "id"
+          in: "path"
+          required: true
+          description: "ID or name of the container"
+          type: "string"
+        - name: "detachKeys"
+          in: "query"
+          description: |
+            Override the key sequence for detaching a container. Format is a
+            single character `[a-Z]` or `ctrl-<value>` where `<value>` is one
+            of: `a-z`, `@`, `^`, `[`, `,`, or `_`.
+          type: "string"
+        - name: "logs"
+          in: "query"
+          description: "Return logs"
+          type: "boolean"
+          default: false
+        - name: "stream"
+          in: "query"
+          description: "Return stream"
+          type: "boolean"
+          default: false
+        - name: "stdin"
+          in: "query"
+          description: "Attach to `stdin`"
+          type: "boolean"
+          default: false
+        - name: "stdout"
+          in: "query"
+          description: "Attach to `stdout`"
+          type: "boolean"
+          default: false
+        - name: "stderr"
+          in: "query"
+          description: "Attach to `stderr`"
+          type: "boolean"
+          default: false
+      tags: ["Container"]
+  /containers/{id}/wait:
+    post:
+      summary: "Wait for a container"
+      description: "Block until a container stops, then returns the exit code."
+      operationId: "ContainerWait"
+      produces: ["application/json"]
+      responses:
+        200:
+          description: "The container has exit."
+          schema:
+            type: "object"
+            title: "ContainerWaitResponse"
+            description: "OK response to ContainerWait operation"
+            required: [StatusCode]
+            properties:
+              StatusCode:
+                description: "Exit code of the container"
+                type: "integer"
+                x-nullable: false
+              Error:
+                description: "container waiting error, if any"
+                type: "object"
+                properties:
+                  Message:
+                    description: "Details of an error"
+                    type: "string"
+        404:
+          description: "no such container"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+          examples:
+            application/json:
+              message: "No such container: c2ada9df5af8"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "id"
+          in: "path"
+          required: true
+          description: "ID or name of the container"
+          type: "string"
+        - name: "condition"
+          in: "query"
+          description: |
+            Wait until a container state reaches the given condition, either
+            'not-running' (default), 'next-exit', or 'removed'.
+          type: "string"
+          default: "not-running"
+      tags: ["Container"]
+  /containers/{id}:
+    delete:
+      summary: "Remove a container"
+      operationId: "ContainerDelete"
+      responses:
+        204:
+          description: "no error"
+        400:
+          description: "bad parameter"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        404:
+          description: "no such container"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+          examples:
+            application/json:
+              message: "No such container: c2ada9df5af8"
+        409:
+          description: "conflict"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+          examples:
+            application/json:
+              message: |
+                You cannot remove a running container: c2ada9df5af8. Stop the
+                container before attempting removal or force remove
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "id"
+          in: "path"
+          required: true
+          description: "ID or name of the container"
+          type: "string"
+        - name: "v"
+          in: "query"
+          description: "Remove anonymous volumes associated with the container."
+          type: "boolean"
+          default: false
+        - name: "force"
+          in: "query"
+          description: "If the container is running, kill it before removing it."
+          type: "boolean"
+          default: false
+        - name: "link"
+          in: "query"
+          description: "Remove the specified link associated with the container."
+          type: "boolean"
+          default: false
+      tags: ["Container"]
+  /containers/{id}/archive:
+    head:
+      summary: "Get information about files in a container"
+      description: |
+        A response header `X-Docker-Container-Path-Stat` is returned, containing
+        a base64-encoded JSON object with some filesystem header information
+        about the path.
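+
+        As a hedged sketch, such a header value could be decoded like this in
+        Go (the JSON field names in the sample are illustrative of the shape
+        only, not an authoritative list):
+
+        ```go
+        package main
+
+        import (
+            "encoding/base64"
+            "encoding/json"
+            "fmt"
+        )
+
+        func main() {
+            // A made-up sample header value: base64 of a small JSON object.
+            sample := base64.StdEncoding.EncodeToString(
+                []byte(`{"name":"etc","size":4096}`))
+
+            // Decode the base64 header, then unmarshal the JSON inside it.
+            data, err := base64.StdEncoding.DecodeString(sample)
+            if err != nil {
+                panic(err)
+            }
+            var stat map[string]interface{}
+            if err := json.Unmarshal(data, &stat); err != nil {
+                panic(err)
+            }
+            fmt.Println(stat["name"], stat["size"])
+        }
+        ```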
+      operationId: "ContainerArchiveInfo"
+      responses:
+        200:
+          description: "no error"
+          headers:
+            X-Docker-Container-Path-Stat:
+              type: "string"
+              description: |
+                A base64-encoded JSON object with some filesystem header
+                information about the path
+        400:
+          description: "Bad parameter"
+          schema:
+            allOf:
+              - $ref: "#/definitions/ErrorResponse"
+              - type: "object"
+                properties:
+                  message:
+                    description: |
+                      The error message. Either "must specify path parameter"
+                      (path cannot be empty) or "not a directory" (path was
+                      asserted to be a directory but exists as a file).
+                    type: "string"
+                    x-nullable: false
+        404:
+          description: "Container or path does not exist"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+          examples:
+            application/json:
+              message: "No such container: c2ada9df5af8"
+        500:
+          description: "Server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "id"
+          in: "path"
+          required: true
+          description: "ID or name of the container"
+          type: "string"
+        - name: "path"
+          in: "query"
+          required: true
+          description: "Resource in the container’s filesystem to archive."
+          type: "string"
+      tags: ["Container"]
+    get:
+      summary: "Get an archive of a filesystem resource in a container"
+      description: "Get a tar archive of a resource in the filesystem of container id."
+      operationId: "ContainerArchive"
+      produces: ["application/x-tar"]
+      responses:
+        200:
+          description: "no error"
+        400:
+          description: "Bad parameter"
+          schema:
+            allOf:
+              - $ref: "#/definitions/ErrorResponse"
+              - type: "object"
+                properties:
+                  message:
+                    description: |
+                      The error message. Either "must specify path parameter"
+                      (path cannot be empty) or "not a directory" (path was
+                      asserted to be a directory but exists as a file).
+                    type: "string"
+                    x-nullable: false
+        404:
+          description: "Container or path does not exist"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+          examples:
+            application/json:
+              message: "No such container: c2ada9df5af8"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "id"
+          in: "path"
+          required: true
+          description: "ID or name of the container"
+          type: "string"
+        - name: "path"
+          in: "query"
+          required: true
+          description: "Resource in the container’s filesystem to archive."
+          type: "string"
+      tags: ["Container"]
+    put:
+      summary: "Extract an archive of files or folders to a directory in a container"
+      description: "Upload a tar archive to be extracted to a path in the filesystem of container id."
+      operationId: "PutContainerArchive"
+      consumes: ["application/x-tar", "application/octet-stream"]
+      responses:
+        200:
+          description: "The content was extracted successfully"
+        400:
+          description: "Bad parameter"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        403:
+          description: "Permission denied, the volume or container rootfs is marked as read-only."
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        404:
+          description: "No such container or path does not exist inside the container"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+          examples:
+            application/json:
+              message: "No such container: c2ada9df5af8"
+        500:
+          description: "Server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "id"
+          in: "path"
+          required: true
+          description: "ID or name of the container"
+          type: "string"
+        - name: "path"
+          in: "query"
+          required: true
+          description: "Path to a directory in the container to extract the archive’s contents into. "
+          type: "string"
+        - name: "noOverwriteDirNonDir"
+          in: "query"
+          description: |
+            If `1`, `true`, or `True` then it will be an error if unpacking the
+            given content would cause an existing directory to be replaced with
+            a non-directory and vice versa.
+          type: "string"
+        - name: "copyUIDGID"
+          in: "query"
+          description: |
+            If `1` or `true`, copy UID/GID maps to the destination file or
+            directory.
+          type: "string"
+        - name: "inputStream"
+          in: "body"
+          required: true
+          description: |
+            The input stream must be a tar archive compressed with one of the
+            following algorithms: `identity` (no compression), `gzip`, `bzip2`,
+            or `xz`.
+          schema:
+            type: "string"
+            format: "binary"
+      tags: ["Container"]
+  /containers/prune:
+    post:
+      summary: "Delete stopped containers"
+      produces:
+        - "application/json"
+      operationId: "ContainerPrune"
+      parameters:
+        - name: "filters"
+          in: "query"
+          description: |
+            Filters to process on the prune list, encoded as JSON (a `map[string][]string`).
+
+            Available filters:
+            - `until=<timestamp>` Prune containers created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time.
+            - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune containers with (or without, in case `label!=...` is used) the specified labels.
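+
+            For example, a filter map could be built and encoded like this in
+            Go (the filter values shown are illustrative):
+
+            ```go
+            package main
+
+            import (
+                "encoding/json"
+                "fmt"
+            )
+
+            func main() {
+                // Encode the filters as a JSON map[string][]string; the result
+                // is passed URL-encoded as the `filters` query parameter.
+                filters := map[string][]string{
+                    "until": {"24h"},
+                    "label": {"env=test"},
+                }
+                encoded, err := json.Marshal(filters)
+                if err != nil {
+                    panic(err)
+                }
+                fmt.Println(string(encoded))
+            }
+            ```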
+          type: "string"
+      responses:
+        200:
+          description: "No error"
+          schema:
+            type: "object"
+            title: "ContainerPruneResponse"
+            properties:
+              ContainersDeleted:
+                description: "Container IDs that were deleted"
+                type: "array"
+                items:
+                  type: "string"
+              SpaceReclaimed:
+                description: "Disk space reclaimed in bytes"
+                type: "integer"
+                format: "int64"
+        500:
+          description: "Server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      tags: ["Container"]
+  /images/json:
+    get:
+      summary: "List Images"
+      description: "Returns a list of images on the server. Note that it uses a different, smaller representation of an image than inspecting a single image."
+      operationId: "ImageList"
+      produces:
+        - "application/json"
+      responses:
+        200:
+          description: "Summary image data for the images matching the query"
+          schema:
+            type: "array"
+            items:
+              $ref: "#/definitions/ImageSummary"
+          examples:
+            application/json:
+              - Id: "sha256:e216a057b1cb1efc11f8a268f37ef62083e70b1b38323ba252e25ac88904a7e8"
+                ParentId: ""
+                RepoTags:
+                  - "ubuntu:12.04"
+                  - "ubuntu:precise"
+                RepoDigests:
+                  - "ubuntu@sha256:992069aee4016783df6345315302fa59681aae51a8eeb2f889dea59290f21787"
+                Created: 1474925151
+                Size: 103579269
+                VirtualSize: 103579269
+                SharedSize: 0
+                Labels: {}
+                Containers: 2
+              - Id: "sha256:3e314f95dcace0f5e4fd37b10862fe8398e3c60ed36600bc0ca5fda78b087175"
+                ParentId: ""
+                RepoTags:
+                  - "ubuntu:12.10"
+                  - "ubuntu:quantal"
+                RepoDigests:
+                  - "ubuntu@sha256:002fba3e3255af10be97ea26e476692a7ebed0bb074a9ab960b2e7a1526b15d7"
+                  - "ubuntu@sha256:68ea0200f0b90df725d99d823905b04cf844f6039ef60c60bf3e019915017bd3"
+                Created: 1403128455
+                Size: 172064416
+                VirtualSize: 172064416
+                SharedSize: 0
+                Labels: {}
+                Containers: 5
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "all"
+          in: "query"
+          description: "Show all images. Only images from a final layer (no children) are shown by default."
+          type: "boolean"
+          default: false
+        - name: "filters"
+          in: "query"
+          description: |
+            A JSON encoded value of the filters (a `map[string][]string`) to
+            process on the images list.
+
+            Available filters:
+
+            - `before`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`)
+            - `dangling=true`
+            - `label=key` or `label="key=value"` of an image label
+            - `reference`=(`<image-name>[:<tag>]`)
+            - `since`=(`<image-name>[:<tag>]`, `<image id>` or `<image@digest>`)
+          type: "string"
+        - name: "digests"
+          in: "query"
+          description: "Show digest information as a `RepoDigests` field on each image."
+          type: "boolean"
+          default: false
+      tags: ["Image"]
+  /build:
+    post:
+      summary: "Build an image"
+      description: |
+        Build an image from a tar archive with a `Dockerfile` in it.
+
+        The `Dockerfile` specifies how the image is built from the tar archive. It is typically in the archive's root, but can be at a different path or have a different name by specifying the `dockerfile` parameter. [See the `Dockerfile` reference for more information](https://docs.docker.com/engine/reference/builder/).
+
+        The Docker daemon performs a preliminary validation of the `Dockerfile` before starting the build, and returns an error if the syntax is incorrect. After that, each instruction is run one-by-one until the ID of the new image is output.
+
+        The build is canceled if the client drops the connection by quitting or being killed.
+      operationId: "ImageBuild"
+      consumes:
+        - "application/octet-stream"
+      produces:
+        - "application/json"
+      parameters:
+        - name: "inputStream"
+          in: "body"
+          description: "A tar archive compressed with one of the following algorithms: identity (no compression), gzip, bzip2, xz."
+          schema:
+            type: "string"
+            format: "binary"
+        - name: "dockerfile"
+          in: "query"
+          description: "Path within the build context to the `Dockerfile`. This is ignored if `remote` is specified and points to an external `Dockerfile`."
+          type: "string"
+          default: "Dockerfile"
+        - name: "t"
+          in: "query"
+          description: "A name and optional tag to apply to the image in the `name:tag` format. If you omit the tag the default `latest` value is assumed. You can provide several `t` parameters."
+          type: "string"
+        - name: "extrahosts"
+          in: "query"
+          description: "Extra hosts to add to /etc/hosts"
+          type: "string"
+        - name: "remote"
+          in: "query"
+          description: "A Git repository URI or HTTP/HTTPS context URI. If the URI points to a single text file, the file’s contents are placed into a file called `Dockerfile` and the image is built from that file. If the URI points to a tarball, the file is downloaded by the daemon and the contents therein used as the context for the build. If the URI points to a tarball and the `dockerfile` parameter is also specified, there must be a file with the corresponding path inside the tarball."
+          type: "string"
+        - name: "q"
+          in: "query"
+          description: "Suppress verbose build output."
+          type: "boolean"
+          default: false
+        - name: "nocache"
+          in: "query"
+          description: "Do not use the cache when building the image."
+          type: "boolean"
+          default: false
+        - name: "cachefrom"
+          in: "query"
+          description: "JSON array of images used for build cache resolution."
+          type: "string"
+        - name: "pull"
+          in: "query"
+          description: "Attempt to pull the image even if an older image exists locally."
+          type: "string"
+        - name: "rm"
+          in: "query"
+          description: "Remove intermediate containers after a successful build."
+          type: "boolean"
+          default: true
+        - name: "forcerm"
+          in: "query"
+          description: "Always remove intermediate containers, even upon failure."
+          type: "boolean"
+          default: false
+        - name: "memory"
+          in: "query"
+          description: "Set memory limit for build."
+          type: "integer"
+        - name: "memswap"
+          in: "query"
+          description: "Total memory (memory + swap). Set as `-1` to disable swap."
+          type: "integer"
+        - name: "cpushares"
+          in: "query"
+          description: "CPU shares (relative weight)."
+          type: "integer"
+        - name: "cpusetcpus"
+          in: "query"
+          description: "CPUs in which to allow execution (e.g., `0-3`, `0,1`)."
+          type: "string"
+        - name: "cpuperiod"
+          in: "query"
+          description: "The length of a CPU period in microseconds."
+          type: "integer"
+        - name: "cpuquota"
+          in: "query"
+          description: "Microseconds of CPU time that the container can get in a CPU period."
+          type: "integer"
+        - name: "buildargs"
+          in: "query"
+          description: >
+            JSON map of string pairs for build-time variables. Users pass these values at build-time. Docker
+            uses the buildargs as the environment context for commands run via the `Dockerfile` RUN
+            instruction, or for variable expansion in other `Dockerfile` instructions. This is not meant for
+            passing secret values.
+
+
+            For example, the build arg `FOO=bar` would become `{"FOO":"bar"}` in JSON. This would result in the
+            query parameter `buildargs={"FOO":"bar"}`. Note that `{"FOO":"bar"}` should be URI component encoded.
+
+
+            [Read more about the buildargs instruction.](https://docs.docker.com/engine/reference/builder/#arg)
+          type: "string"
+        - name: "shmsize"
+          in: "query"
+          description: "Size of `/dev/shm` in bytes. The size must be greater than 0. If omitted, the system uses 64MB."
+          type: "integer"
+        - name: "squash"
+          in: "query"
+          description: "Squash the resulting image's layers into a single layer. *(Experimental release only.)*"
+          type: "boolean"
+        - name: "labels"
+          in: "query"
+          description: "Arbitrary key/value labels to set on the image, as a JSON map of string pairs."
+          type: "string"
+        - name: "networkmode"
+          in: "query"
+          description: |
+            Sets the networking mode for the run commands during build. Supported
+            standard values are: `bridge`, `host`, `none`, and `container:<name|id>`.
+            Any other value is taken as a custom network's name or ID to which this
+            container should connect.
+          type: "string"
+        - name: "Content-type"
+          in: "header"
+          type: "string"
+          enum:
+            - "application/x-tar"
+          default: "application/x-tar"
+        - name: "X-Registry-Config"
+          in: "header"
+          description: |
+            This is a base64-encoded JSON object with auth configurations for multiple registries that a build may refer to.
+
+            The key is a registry URL, and the value is an auth configuration object, [as described in the authentication section](#section/Authentication). For example:
+
+            ```
+            {
+              "docker.example.com": {
+                "username": "janedoe",
+                "password": "hunter2"
+              },
+              "https://index.docker.io/v1/": {
+                "username": "mobydock",
+                "password": "conta1n3rize14"
+              }
+            }
+            ```
+
+            Only the registry domain name (and port if not the default 443) is required. However, for legacy reasons, the Docker Hub registry must be specified with both a `https://` prefix and a `/v1/` suffix even though Docker will prefer to use the v2 registry API.
+          type: "string"
+        - name: "platform"
+          in: "query"
+          description: "Platform in the format os[/arch[/variant]]"
+          type: "string"
+          default: ""
+        - name: "target"
+          in: "query"
+          description: "Target build stage"
+          type: "string"
+          default: ""
+        - name: "outputs"
+          in: "query"
+          description: "BuildKit output configuration"
+          type: "string"
+          default: ""
+      responses:
+        200:
+          description: "no error"
+        400:
+          description: "Bad parameter"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      tags: ["Image"]
+  /build/prune:
+    post:
+      summary: "Delete builder cache"
+      produces:
+        - "application/json"
+      operationId: "BuildPrune"
+      parameters:
+        - name: "keep-storage"
+          in: "query"
+          description: "Amount of disk space in bytes to keep for cache"
+          type: "integer"
+          format: "int64"
+        - name: "all"
+          in: "query"
+          type: "boolean"
+          description: "Remove all types of build cache"
+        - name: "filters"
+          in: "query"
+          type: "string"
+          description: |
+            A JSON encoded value of the filters (a `map[string][]string`) to
+            process on the list of build cache objects.
+
+            Available filters:
+
+            - `until=<duration>`: duration relative to daemon's time, during which build cache was not used, in Go's duration format (e.g., '24h')
+            - `id=<id>`
+            - `parent=<id>`
+            - `type=<string>`
+            - `description=<string>`
+            - `inuse`
+            - `shared`
+            - `private`
+      responses:
+        200:
+          description: "No error"
+          schema:
+            type: "object"
+            title: "BuildPruneResponse"
+            properties:
+              CachesDeleted:
+                type: "array"
+                items:
+                  description: "ID of build cache object"
+                  type: "string"
+              SpaceReclaimed:
+                description: "Disk space reclaimed in bytes"
+                type: "integer"
+                format: "int64"
+        500:
+          description: "Server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      tags: ["Image"]
+  /images/create:
+    post:
+      summary: "Create an image"
+      description: "Create an image by either pulling it from a registry or importing it."
+      operationId: "ImageCreate"
+      consumes:
+        - "text/plain"
+        - "application/octet-stream"
+      produces:
+        - "application/json"
+      responses:
+        200:
+          description: "no error"
+        404:
+          description: "repository does not exist or no read access"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "fromImage"
+          in: "query"
+          description: "Name of the image to pull. The name may include a tag or digest. This parameter may only be used when pulling an image. The pull is cancelled if the HTTP connection is closed."
+          type: "string"
+        - name: "fromSrc"
+          in: "query"
+          description: "Source to import. The value may be a URL from which the image can be retrieved or `-` to read the image from the request body. This parameter may only be used when importing an image."
+          type: "string"
+        - name: "repo"
+          in: "query"
+          description: "Repository name given to an image when it is imported. The repo may include a tag. This parameter may only be used when importing an image."
+          type: "string"
+        - name: "tag"
+          in: "query"
+          description: "Tag or digest. If empty when pulling an image, this causes all tags for the given image to be pulled."
+          type: "string"
+        - name: "message"
+          in: "query"
+          description: "Set commit message for imported image."
+          type: "string"
+        - name: "inputImage"
+          in: "body"
+          description: "Image content if the value `-` has been specified in the `fromSrc` query parameter"
+          schema:
+            type: "string"
+          required: false
+        - name: "X-Registry-Auth"
+          in: "header"
+          description: |
+            A base64url-encoded auth configuration.
+
+            Refer to the [authentication section](#section/Authentication) for
+            details.
+          type: "string"
+        - name: "platform"
+          in: "query"
+          description: "Platform in the format os[/arch[/variant]]"
+          type: "string"
+          default: ""
+      tags: ["Image"]
+  /images/{name}/json:
+    get:
+      summary: "Inspect an image"
+      description: "Return low-level information about an image."
+      operationId: "ImageInspect"
+      produces:
+        - "application/json"
+      responses:
+        200:
+          description: "No error"
+          schema:
+            $ref: "#/definitions/Image"
+          examples:
+            application/json:
+              Id: "sha256:85f05633ddc1c50679be2b16a0479ab6f7637f8884e0cfe0f4d20e1ebb3d6e7c"
+              Container: "cb91e48a60d01f1e27028b4fc6819f4f290b3cf12496c8176ec714d0d390984a"
+              Comment: ""
+              Os: "linux"
+              Architecture: "amd64"
+              Parent: "sha256:91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c"
+              ContainerConfig:
+                Tty: false
+                Hostname: "e611e15f9c9d"
+                Domainname: ""
+                AttachStdout: false
+                PublishService: ""
+                AttachStdin: false
+                OpenStdin: false
+                StdinOnce: false
+                NetworkDisabled: false
+                OnBuild: []
+                Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c"
+                User: ""
+                WorkingDir: ""
+                MacAddress: ""
+                AttachStderr: false
+                Labels:
+                  com.example.license: "GPL"
+                  com.example.version: "1.0"
+                  com.example.vendor: "Acme"
+                Env:
+                  - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
+                Cmd:
+                  - "/bin/sh"
+                  - "-c"
+                  - "#(nop) LABEL com.example.vendor=Acme com.example.license=GPL com.example.version=1.0"
+              DockerVersion: "1.9.0-dev"
+              VirtualSize: 188359297
+              Size: 0
+              Author: ""
+              Created: "2015-09-10T08:30:53.26995814Z"
+              GraphDriver:
+                Name: "aufs"
+                Data: {}
+              RepoDigests:
+                - "localhost:5000/test/busybox/example@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf"
+              RepoTags:
+                - "example:1.0"
+                - "example:latest"
+                - "example:stable"
+              Config:
+                Image: "91e54dfb11794fad694460162bf0cb0a4fa710cfa3f60979c177d920813e267c"
+                NetworkDisabled: false
+                OnBuild: []
+                StdinOnce: false
+                PublishService: ""
+                AttachStdin: false
+                OpenStdin: false
+                Domainname: ""
+                AttachStdout: false
+                Tty: false
+                Hostname: "e611e15f9c9d"
+                Cmd:
+                  - "/bin/bash"
+                Env:
+                  - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
+                Labels:
+                  com.example.vendor: "Acme"
+                  com.example.version: "1.0"
+                  com.example.license: "GPL"
+                MacAddress: ""
+                AttachStderr: false
+                WorkingDir: ""
+                User: ""
+              RootFS:
+                Type: "layers"
+                Layers:
+                  - "sha256:1834950e52ce4d5a88a1bbd131c537f4d0e56d10ff0dd69e66be3b7dfa9df7e6"
+                  - "sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef"
+        404:
+          description: "No such image"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+          examples:
+            application/json:
+              message: "No such image: someimage (tag: latest)"
+        500:
+          description: "Server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "name"
+          in: "path"
+          description: "Image name or ID"
+          type: "string"
+          required: true
+      tags: ["Image"]
+  /images/{name}/history:
+    get:
+      summary: "Get the history of an image"
+      description: "Return parent layers of an image."
+      operationId: "ImageHistory"
+      produces: ["application/json"]
+      responses:
+        200:
+          description: "List of image layers"
+          schema:
+            type: "array"
+            items:
+              type: "object"
+              x-go-name: HistoryResponseItem
+              title: "HistoryResponseItem"
+              description: "individual image layer information in response to ImageHistory operation"
+              required: [Id, Created, CreatedBy, Tags, Size, Comment]
+              properties:
+                Id:
+                  type: "string"
+                  x-nullable: false
+                Created:
+                  type: "integer"
+                  format: "int64"
+                  x-nullable: false
+                CreatedBy:
+                  type: "string"
+                  x-nullable: false
+                Tags:
+                  type: "array"
+                  items:
+                    type: "string"
+                Size:
+                  type: "integer"
+                  format: "int64"
+                  x-nullable: false
+                Comment:
+                  type: "string"
+                  x-nullable: false
+          examples:
+            application/json:
+              - Id: "3db9c44f45209632d6050b35958829c3a2aa256d81b9a7be45b362ff85c54710"
+                Created: 1398108230
+                CreatedBy: "/bin/sh -c #(nop) ADD file:eb15dbd63394e063b805a3c32ca7bf0266ef64676d5a6fab4801f2e81e2a5148 in /"
+                Tags:
+                  - "ubuntu:lucid"
+                  - "ubuntu:10.04"
+                Size: 182964289
+                Comment: ""
+              - Id: "6cfa4d1f33fb861d4d114f43b25abd0ac737509268065cdfd69d544a59c85ab8"
+                Created: 1398108222
+                CreatedBy: "/bin/sh -c #(nop) MAINTAINER Tianon Gravi <admwiggin@gmail.com> - mkimage-debootstrap.sh -i iproute,iputils-ping,ubuntu-minimal -t lucid.tar.xz lucid http://archive.ubuntu.com/ubuntu/"
+                Tags: []
+                Size: 0
+                Comment: ""
+              - Id: "511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158"
+                Created: 1371157430
+                CreatedBy: ""
+                Tags:
+                  - "scratch12:latest"
+                  - "scratch:latest"
+                Size: 0
+                Comment: "Imported from -"
+        404:
+          description: "No such image"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "Server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "name"
+          in: "path"
+          description: "Image name or ID"
+          type: "string"
+          required: true
+      tags: ["Image"]
+  /images/{name}/push:
+    post:
+      summary: "Push an image"
+      description: |
+        Push an image to a registry.
+
+        If you wish to push an image to a private registry, that image must
+        already have a tag which references the registry. For example,
+        `registry.example.com/myimage:latest`.
+
+        The push is cancelled if the HTTP connection is closed.
+      operationId: "ImagePush"
+      consumes:
+        - "application/octet-stream"
+      responses:
+        200:
+          description: "No error"
+        404:
+          description: "No such image"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "Server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "name"
+          in: "path"
+          description: "Image name or ID."
+          type: "string"
+          required: true
+        - name: "tag"
+          in: "query"
+          description: "The tag to associate with the image on the registry."
+          type: "string"
+        - name: "X-Registry-Auth"
+          in: "header"
+          description: |
+            A base64url-encoded auth configuration.
+
+            Refer to the [authentication section](#section/Authentication) for
+            details.
+          type: "string"
+          required: true
+      tags: ["Image"]
+  /images/{name}/tag:
+    post:
+      summary: "Tag an image"
+      description: "Tag an image so that it becomes part of a repository."
+      operationId: "ImageTag"
+      responses:
+        201:
+          description: "No error"
+        400:
+          description: "Bad parameter"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        404:
+          description: "No such image"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        409:
+          description: "Conflict"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "Server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "name"
+          in: "path"
+          description: "Image name or ID to tag."
+          type: "string"
+          required: true
+        - name: "repo"
+          in: "query"
+          description: "The repository to tag in. For example, `someuser/someimage`."
+          type: "string"
+        - name: "tag"
+          in: "query"
+          description: "The name of the new tag."
+          type: "string"
+      tags: ["Image"]
+  /images/{name}:
+    delete:
+      summary: "Remove an image"
+      description: |
+        Remove an image, along with any untagged parent images that were
+        referenced by that image.
+
+        Images can't be removed if they have descendant images, are being
+        used by a running container or are being used by a build.
+      operationId: "ImageDelete"
+      produces: ["application/json"]
+      responses:
+        200:
+          description: "The image was deleted successfully"
+          schema:
+            type: "array"
+            items:
+              $ref: "#/definitions/ImageDeleteResponseItem"
+          examples:
+            application/json:
+              - Untagged: "3e2f21a89f"
+              - Deleted: "3e2f21a89f"
+              - Deleted: "53b4f83ac9"
+        404:
+          description: "No such image"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        409:
+          description: "Conflict"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "Server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "name"
+          in: "path"
+          description: "Image name or ID"
+          type: "string"
+          required: true
+        - name: "force"
+          in: "query"
+          description: "Remove the image even if it is being used by stopped containers or has other tags"
+          type: "boolean"
+          default: false
+        - name: "noprune"
+          in: "query"
+          description: "Do not delete untagged parent images"
+          type: "boolean"
+          default: false
+      tags: ["Image"]
+  /images/search:
+    get:
+      summary: "Search images"
+      description: "Search for an image on Docker Hub."
+      operationId: "ImageSearch"
+      produces:
+        - "application/json"
+      responses:
+        200:
+          description: "No error"
+          schema:
+            type: "array"
+            items:
+              type: "object"
+              title: "ImageSearchResponseItem"
+              properties:
+                description:
+                  type: "string"
+                is_official:
+                  type: "boolean"
+                is_automated:
+                  type: "boolean"
+                name:
+                  type: "string"
+                star_count:
+                  type: "integer"
+          examples:
+            application/json:
+              - description: ""
+                is_official: false
+                is_automated: false
+                name: "wma55/u1210sshd"
+                star_count: 0
+              - description: ""
+                is_official: false
+                is_automated: false
+                name: "jdswinbank/sshd"
+                star_count: 0
+              - description: ""
+                is_official: false
+                is_automated: false
+                name: "vgauthier/sshd"
+                star_count: 0
+        500:
+          description: "Server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "term"
+          in: "query"
+          description: "Term to search"
+          type: "string"
+          required: true
+        - name: "limit"
+          in: "query"
+          description: "Maximum number of results to return"
+          type: "integer"
+        - name: "filters"
+          in: "query"
+          description: |
+            A JSON encoded value of the filters (a `map[string][]string`) to process on the images list. Available filters:
+
+            - `is-automated=(true|false)`
+            - `is-official=(true|false)`
+            - `stars=<number>` Matches images that have at least 'number' stars.
+          type: "string"
+      tags: ["Image"]
+  /images/prune:
+    post:
+      summary: "Delete unused images"
+      produces:
+        - "application/json"
+      operationId: "ImagePrune"
+      parameters:
+        - name: "filters"
+          in: "query"
+          description: |
+            Filters to process on the prune list, encoded as JSON (a `map[string][]string`). Available filters:
+
+            - `dangling=<boolean>` When set to `true` (or `1`), prune only
+               unused *and* untagged images. When set to `false`
+               (or `0`), all unused images are pruned.
+            - `until=<timestamp>` Prune images created before this timestamp. The `<timestamp>` can be Unix timestamps, date formatted timestamps, or Go duration strings (e.g. `10m`, `1h30m`) computed relative to the daemon machine’s time.
+            - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune images with (or without, in case `label!=...` is used) the specified labels.
+          type: "string"
+      responses:
+        200:
+          description: "No error"
+          schema:
+            type: "object"
+            title: "ImagePruneResponse"
+            properties:
+              ImagesDeleted:
+                description: "Images that were deleted"
+                type: "array"
+                items:
+                  $ref: "#/definitions/ImageDeleteResponseItem"
+              SpaceReclaimed:
+                description: "Disk space reclaimed in bytes"
+                type: "integer"
+                format: "int64"
+        500:
+          description: "Server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      tags: ["Image"]
+  /auth:
+    post:
+      summary: "Check auth configuration"
+      description: |
+        Validate credentials for a registry and, if available, get an identity
+        token for accessing the registry without a password.
+      operationId: "SystemAuth"
+      consumes: ["application/json"]
+      produces: ["application/json"]
+      responses:
+        200:
+          description: "An identity token was generated successfully."
+          schema:
+            type: "object"
+            title: "SystemAuthResponse"
+            required: [Status]
+            properties:
+              Status:
+                description: "The status of the authentication"
+                type: "string"
+                x-nullable: false
+              IdentityToken:
+                description: "An opaque token used to authenticate a user after a successful login"
+                type: "string"
+                x-nullable: false
+          examples:
+            application/json:
+              Status: "Login Succeeded"
+              IdentityToken: "9cbaf023786cd7..."
+        204:
+          description: "No error"
+        500:
+          description: "Server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "authConfig"
+          in: "body"
+          description: "Authentication to check"
+          schema:
+            $ref: "#/definitions/AuthConfig"
+      tags: ["System"]
+  /info:
+    get:
+      summary: "Get system information"
+      operationId: "SystemInfo"
+      produces:
+        - "application/json"
+      responses:
+        200:
+          description: "No error"
+          schema:
+            $ref: "#/definitions/SystemInfo"
+        500:
+          description: "Server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      tags: ["System"]
+  /version:
+    get:
+      summary: "Get version"
+      description: "Returns the version of Docker that is running and various information about the system that Docker is running on."
+      operationId: "SystemVersion"
+      produces: ["application/json"]
+      responses:
+        200:
+          description: "no error"
+          schema:
+            $ref: "#/definitions/SystemVersion"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      tags: ["System"]
+  /_ping:
+    get:
+      summary: "Ping"
+      description: "This is a dummy endpoint you can use to test if the server is accessible."
+      operationId: "SystemPing"
+      produces: ["text/plain"]
+      responses:
+        200:
+          description: "no error"
+          schema:
+            type: "string"
+            example: "OK"
+          headers:
+            API-Version:
+              type: "string"
+              description: "Max API Version the server supports"
+            Builder-Version:
+              type: "string"
+              description: "Default version of docker image builder"
+            Docker-Experimental:
+              type: "boolean"
+              description: "If the server is running with experimental mode enabled"
+            Cache-Control:
+              type: "string"
+              default: "no-cache, no-store, must-revalidate"
+            Pragma:
+              type: "string"
+              default: "no-cache"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+          headers:
+            Cache-Control:
+              type: "string"
+              default: "no-cache, no-store, must-revalidate"
+            Pragma:
+              type: "string"
+              default: "no-cache"
+      tags: ["System"]
+    head:
+      summary: "Ping"
+      description: "This is a dummy endpoint you can use to test if the server is accessible."
+      operationId: "SystemPingHead"
+      produces: ["text/plain"]
+      responses:
+        200:
+          description: "no error"
+          schema:
+            type: "string"
+            example: "(empty)"
+          headers:
+            API-Version:
+              type: "string"
+              description: "Max API Version the server supports"
+            Builder-Version:
+              type: "string"
+              description: "Default version of docker image builder"
+            Docker-Experimental:
+              type: "boolean"
+              description: "If the server is running with experimental mode enabled"
+            Cache-Control:
+              type: "string"
+              default: "no-cache, no-store, must-revalidate"
+            Pragma:
+              type: "string"
+              default: "no-cache"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      tags: ["System"]
+  /commit:
+    post:
+      summary: "Create a new image from a container"
+      operationId: "ImageCommit"
+      consumes:
+        - "application/json"
+      produces:
+        - "application/json"
+      responses:
+        201:
+          description: "no error"
+          schema:
+            $ref: "#/definitions/IdResponse"
+        404:
+          description: "no such container"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+          examples:
+            application/json:
+              message: "No such container: c2ada9df5af8"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "containerConfig"
+          in: "body"
+          description: "The container configuration"
+          schema:
+            $ref: "#/definitions/ContainerConfig"
+        - name: "container"
+          in: "query"
+          description: "The ID or name of the container to commit"
+          type: "string"
+        - name: "repo"
+          in: "query"
+          description: "Repository name for the created image"
+          type: "string"
+        - name: "tag"
+          in: "query"
+          description: "Tag name for the created image"
+          type: "string"
+        - name: "comment"
+          in: "query"
+          description: "Commit message"
+          type: "string"
+        - name: "author"
+          in: "query"
+          description: "Author of the image (e.g., `John Hannibal Smith <hannibal@a-team.com>`)"
+          type: "string"
+        - name: "pause"
+          in: "query"
+          description: "Whether to pause the container before committing"
+          type: "boolean"
+          default: true
+        - name: "changes"
+          in: "query"
+          description: "`Dockerfile` instructions to apply while committing"
+          type: "string"
+      tags: ["Image"]
+  /events:
+    get:
+      summary: "Monitor events"
+      description: |
+        Stream real-time events from the server.
+
+        Various objects within Docker report events when something happens to them.
+
+        Containers report these events: `attach`, `commit`, `copy`, `create`, `destroy`, `detach`, `die`, `exec_create`, `exec_detach`, `exec_start`, `exec_die`, `export`, `health_status`, `kill`, `oom`, `pause`, `rename`, `resize`, `restart`, `start`, `stop`, `top`, `unpause`, `update`, and `prune`
+
+        Images report these events: `delete`, `import`, `load`, `pull`, `push`, `save`, `tag`, `untag`, and `prune`
+
+        Volumes report these events: `create`, `mount`, `unmount`, `destroy`, and `prune`
+
+        Networks report these events: `create`, `connect`, `disconnect`, `destroy`, `update`, `remove`, and `prune`
+
+        The Docker daemon reports these events: `reload`
+
+        Services report these events: `create`, `update`, and `remove`
+
+        Nodes report these events: `create`, `update`, and `remove`
+
+        Secrets report these events: `create`, `update`, and `remove`
+
+        Configs report these events: `create`, `update`, and `remove`
+
+        The Builder reports `prune` events
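+
+        Each event appears on the stream as a single JSON object. As an
+        illustrative sketch (not part of the API itself), the following
+        Python snippet decodes one such object into the fields described
+        in the response schema; the sample payload is hypothetical:
+
+        ```python
+        import json
+
+        # One object from the /events stream (hypothetical sample payload).
+        line = '{"Type": "container", "Action": "start", "Actor": {"ID": "abc123", "Attributes": {"name": "my-container"}}, "time": 1461943101}'
+
+        event = json.loads(line)
+        print(event["Type"], event["Action"], event["Actor"]["Attributes"]["name"])
+        # → container start my-container
+        ```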
+
+      operationId: "SystemEvents"
+      produces:
+        - "application/json"
+      responses:
+        200:
+          description: "no error"
+          schema:
+            type: "object"
+            title: "SystemEventsResponse"
+            properties:
+              Type:
+                description: "The type of object emitting the event"
+                type: "string"
+              Action:
+                description: "The type of event"
+                type: "string"
+              Actor:
+                type: "object"
+                properties:
+                  ID:
+                    description: "The ID of the object emitting the event"
+                    type: "string"
+                  Attributes:
+                    description: "Various key/value attributes of the object, depending on its type"
+                    type: "object"
+                    additionalProperties:
+                      type: "string"
+              time:
+                description: "Timestamp of event"
+                type: "integer"
+              timeNano:
+                description: "Timestamp of event, with nanosecond accuracy"
+                type: "integer"
+                format: "int64"
+          examples:
+            application/json:
+              Type: "container"
+              Action: "create"
+              Actor:
+                ID: "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743"
+                Attributes:
+                  com.example.some-label: "some-label-value"
+                  image: "alpine"
+                  name: "my-container"
+              time: 1461943101
+        400:
+          description: "bad parameter"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "since"
+          in: "query"
+          description: "Show events created since this timestamp then stream new events."
+          type: "string"
+        - name: "until"
+          in: "query"
+          description: "Show events created until this timestamp then stop streaming."
+          type: "string"
+        - name: "filters"
+          in: "query"
+          description: |
+            A JSON encoded value of filters (a `map[string][]string`) to process on the event list. Available filters:
+
+            - `config=<string>` config name or ID
+            - `container=<string>` container name or ID
+            - `daemon=<string>` daemon name or ID
+            - `event=<string>` event type
+            - `image=<string>` image name or ID
+            - `label=<string>` image or container label
+            - `network=<string>` network name or ID
+            - `node=<string>` node ID
+            - `plugin=<string>` plugin name or ID
+            - `scope=<string>` object's scope (`local` or `swarm`)
+            - `secret=<string>` secret name or ID
+            - `service=<string>` service name or ID
+            - `type=<string>` object to filter by, one of `container`, `image`, `volume`, `network`, `daemon`, `plugin`, `node`, `service`, `secret` or `config`
+            - `volume=<string>` volume name
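+
+            The filter map is sent as its JSON encoding, URL-encoded into the
+            query string. A minimal sketch in Python (the filter values shown
+            are hypothetical):
+
+            ```python
+            import json
+            from urllib.parse import quote
+
+            # Only `start` and `stop` events emitted by containers.
+            filters = {"type": ["container"], "event": ["start", "stop"]}
+            query = "filters=" + quote(json.dumps(filters))
+            ```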
+          type: "string"
+      tags: ["System"]
+  /system/df:
+    get:
+      summary: "Get data usage information"
+      operationId: "SystemDataUsage"
+      responses:
+        200:
+          description: "no error"
+          schema:
+            type: "object"
+            title: "SystemDataUsageResponse"
+            properties:
+              LayersSize:
+                type: "integer"
+                format: "int64"
+              Images:
+                type: "array"
+                items:
+                  $ref: "#/definitions/ImageSummary"
+              Containers:
+                type: "array"
+                items:
+                  $ref: "#/definitions/ContainerSummary"
+              Volumes:
+                type: "array"
+                items:
+                  $ref: "#/definitions/Volume"
+              BuildCache:
+                type: "array"
+                items:
+                  $ref: "#/definitions/BuildCache"
+            example:
+              LayersSize: 1092588
+              Images:
+                -
+                  Id: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749"
+                  ParentId: ""
+                  RepoTags:
+                    - "busybox:latest"
+                  RepoDigests:
+                    - "busybox@sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6"
+                  Created: 1466724217
+                  Size: 1092588
+                  SharedSize: 0
+                  VirtualSize: 1092588
+                  Labels: {}
+                  Containers: 1
+              Containers:
+                -
+                  Id: "e575172ed11dc01bfce087fb27bee502db149e1a0fad7c296ad300bbff178148"
+                  Names:
+                    - "/top"
+                  Image: "busybox"
+                  ImageID: "sha256:2b8fd9751c4c0f5dd266fcae00707e67a2545ef34f9a29354585f93dac906749"
+                  Command: "top"
+                  Created: 1472592424
+                  Ports: []
+                  SizeRootFs: 1092588
+                  Labels: {}
+                  State: "exited"
+                  Status: "Exited (0) 56 minutes ago"
+                  HostConfig:
+                    NetworkMode: "default"
+                  NetworkSettings:
+                    Networks:
+                      bridge:
+                        IPAMConfig: null
+                        Links: null
+                        Aliases: null
+                        NetworkID: "d687bc59335f0e5c9ee8193e5612e8aee000c8c62ea170cfb99c098f95899d92"
+                        EndpointID: "8ed5115aeaad9abb174f68dcf135b49f11daf597678315231a32ca28441dec6a"
+                        Gateway: "172.18.0.1"
+                        IPAddress: "172.18.0.2"
+                        IPPrefixLen: 16
+                        IPv6Gateway: ""
+                        GlobalIPv6Address: ""
+                        GlobalIPv6PrefixLen: 0
+                        MacAddress: "02:42:ac:12:00:02"
+                  Mounts: []
+              Volumes:
+                -
+                  Name: "my-volume"
+                  Driver: "local"
+                  Mountpoint: "/var/lib/docker/volumes/my-volume/_data"
+                  Labels: null
+                  Scope: "local"
+                  Options: null
+                  UsageData:
+                    Size: 10920104
+                    RefCount: 2
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      tags: ["System"]
+  /images/{name}/get:
+    get:
+      summary: "Export an image"
+      description: |
+        Get a tarball containing all images and metadata for a repository.
+
+        If `name` is a specific name and tag (e.g. `ubuntu:latest`), then only that image (and its parents) are returned. If `name` is an image ID, similarly only that image (and its parents) are returned, but the `repositories` file is omitted from the tarball, as no image names are referenced.
+
+        ### Image tarball format
+
+        An image tarball contains one directory per image layer (named using its long ID), each containing these files:
+
+        - `VERSION`: currently `1.0` - the file format version
+        - `json`: detailed layer information, similar to `docker inspect layer_id`
+        - `layer.tar`: A tarfile containing the filesystem changes in this layer
+
+        The `layer.tar` file contains `aufs` style `.wh..wh.aufs` files and directories for storing attribute changes and deletions.
+
+        If the tarball defines a repository, the tarball should also include a `repositories` file at the root that contains a list of repository and tag names mapped to layer IDs.
+
+        ```json
+        {
+          "hello-world": {
+            "latest": "565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1"
+          }
+        }
+        ```
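+
+        As a sketch of how a client might read the `repositories` file from
+        such a tarball (the tar here is constructed in memory purely for
+        illustration):
+
+        ```python
+        import io
+        import json
+        import tarfile
+
+        # Build a minimal in-memory tarball containing only a `repositories`
+        # file, mirroring the mapping shown above.
+        repositories = {"hello-world": {"latest": "565a9d68a73f6706862bfe8409a7f659776d4d60a8d096eb4a3cbce6999cc2a1"}}
+        buf = io.BytesIO()
+        with tarfile.open(fileobj=buf, mode="w") as tar:
+            data = json.dumps(repositories).encode()
+            info = tarfile.TarInfo(name="repositories")
+            info.size = len(data)
+            tar.addfile(info, io.BytesIO(data))
+
+        # Read the repository/tag -> layer ID mapping back out.
+        buf.seek(0)
+        with tarfile.open(fileobj=buf, mode="r") as tar:
+            loaded = json.loads(tar.extractfile("repositories").read())
+        ```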
+      operationId: "ImageGet"
+      produces:
+        - "application/x-tar"
+      responses:
+        200:
+          description: "no error"
+          schema:
+            type: "string"
+            format: "binary"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "name"
+          in: "path"
+          description: "Image name or ID"
+          type: "string"
+          required: true
+      tags: ["Image"]
+  /images/get:
+    get:
+      summary: "Export several images"
+      description: |
+        Get a tarball containing all images and metadata for several image
+        repositories.
+
+        For each value of the `names` parameter: if it is a specific name and
+        tag (e.g. `ubuntu:latest`), then only that image (and its parents) are
+        returned; if it is an image ID, similarly only that image (and its parents)
+        are returned, and no names are referenced in the `repositories`
+        file for this image ID.
+
+        For details on the format, see the [export image endpoint](#operation/ImageGet).
+      operationId: "ImageGetAll"
+      produces:
+        - "application/x-tar"
+      responses:
+        200:
+          description: "no error"
+          schema:
+            type: "string"
+            format: "binary"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "names"
+          in: "query"
+          description: "Image names to filter by"
+          type: "array"
+          items:
+            type: "string"
+      tags: ["Image"]
+  /images/load:
+    post:
+      summary: "Import images"
+      description: |
+        Load a set of images and tags into a repository.
+
+        For details on the format, see the [export image endpoint](#operation/ImageGet).
+      operationId: "ImageLoad"
+      consumes:
+        - "application/x-tar"
+      produces:
+        - "application/json"
+      responses:
+        200:
+          description: "no error"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "imagesTarball"
+          in: "body"
+          description: "Tar archive containing images"
+          schema:
+            type: "string"
+            format: "binary"
+        - name: "quiet"
+          in: "query"
+          description: "Suppress progress details during load."
+          type: "boolean"
+          default: false
+      tags: ["Image"]
+  /containers/{id}/exec:
+    post:
+      summary: "Create an exec instance"
+      description: "Run a command inside a running container."
+      operationId: "ContainerExec"
+      consumes:
+        - "application/json"
+      produces:
+        - "application/json"
+      responses:
+        201:
+          description: "no error"
+          schema:
+            $ref: "#/definitions/IdResponse"
+        404:
+          description: "no such container"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+          examples:
+            application/json:
+              message: "No such container: c2ada9df5af8"
+        409:
+          description: "container is paused"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "Server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "execConfig"
+          in: "body"
+          description: "Exec configuration"
+          schema:
+            type: "object"
+            properties:
+              AttachStdin:
+                type: "boolean"
+                description: "Attach to `stdin` of the exec command."
+              AttachStdout:
+                type: "boolean"
+                description: "Attach to `stdout` of the exec command."
+              AttachStderr:
+                type: "boolean"
+                description: "Attach to `stderr` of the exec command."
+              DetachKeys:
+                type: "string"
+                description: |
+                  Override the key sequence for detaching a container. Format is
+                  a single character `[a-Z]` or `ctrl-<value>` where `<value>`
+                  is one of: `a-z`, `@`, `^`, `[`, `,` or `_`.
+              Tty:
+                type: "boolean"
+                description: "Allocate a pseudo-TTY."
+              Env:
+                description: |
+                  A list of environment variables in the form `["VAR=value", ...]`.
+                type: "array"
+                items:
+                  type: "string"
+              Cmd:
+                type: "array"
+                description: "Command to run, as a string or array of strings."
+                items:
+                  type: "string"
+              Privileged:
+                type: "boolean"
+                description: "Runs the exec process with extended privileges."
+                default: false
+              User:
+                type: "string"
+                description: |
+                  The user, and optionally, group to run the exec process inside
+                  the container. Format is one of: `user`, `user:group`, `uid`,
+                  or `uid:gid`.
+              WorkingDir:
+                type: "string"
+                description: |
+                  The working directory for the exec process inside the container.
+            example:
+              AttachStdin: false
+              AttachStdout: true
+              AttachStderr: true
+              DetachKeys: "ctrl-p,ctrl-q"
+              Tty: false
+              Cmd:
+                - "date"
+              Env:
+                - "FOO=bar"
+                - "BAZ=quux"
+          required: true
+        - name: "id"
+          in: "path"
+          description: "ID or name of container"
+          type: "string"
+          required: true
+      tags: ["Exec"]
+  /exec/{id}/start:
+    post:
+      summary: "Start an exec instance"
+      description: |
+        Starts a previously set up exec instance. If detach is true, this endpoint
+        returns immediately after starting the command. Otherwise, it sets up an
+        interactive session with the command.
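+
+        Taken together with `ContainerExec`, the typical flow is two steps:
+        create the exec instance, then start it. A sketch of the two JSON
+        request bodies (the values shown are hypothetical):
+
+        ```python
+        import json
+
+        # Body for POST /containers/{id}/exec (create the instance).
+        create_body = {"AttachStdout": True, "AttachStderr": True, "Cmd": ["date"]}
+
+        # Body for POST /exec/{id}/start (run it, staying attached).
+        start_body = {"Detach": False, "Tty": False}
+
+        payloads = [json.dumps(create_body), json.dumps(start_body)]
+        ```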
+      operationId: "ExecStart"
+      consumes:
+        - "application/json"
+      produces:
+        - "application/vnd.docker.raw-stream"
+      responses:
+        200:
+          description: "No error"
+        404:
+          description: "No such exec instance"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        409:
+          description: "Container is stopped or paused"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "execStartConfig"
+          in: "body"
+          schema:
+            type: "object"
+            properties:
+              Detach:
+                type: "boolean"
+                description: "Detach from the command."
+              Tty:
+                type: "boolean"
+                description: "Allocate a pseudo-TTY."
+            example:
+              Detach: false
+              Tty: false
+        - name: "id"
+          in: "path"
+          description: "Exec instance ID"
+          required: true
+          type: "string"
+      tags: ["Exec"]
+  /exec/{id}/resize:
+    post:
+      summary: "Resize an exec instance"
+      description: |
+        Resize the TTY session used by an exec instance. This endpoint only works
+        if `tty` was specified as part of creating and starting the exec instance.
+      operationId: "ExecResize"
+      responses:
+        200:
+          description: "No error"
+        400:
+          description: "bad parameter"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        404:
+          description: "No such exec instance"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "Server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "id"
+          in: "path"
+          description: "Exec instance ID"
+          required: true
+          type: "string"
+        - name: "h"
+          in: "query"
+          description: "Height of the TTY session in characters"
+          type: "integer"
+        - name: "w"
+          in: "query"
+          description: "Width of the TTY session in characters"
+          type: "integer"
+      tags: ["Exec"]
+  /exec/{id}/json:
+    get:
+      summary: "Inspect an exec instance"
+      description: "Return low-level information about an exec instance."
+      operationId: "ExecInspect"
+      produces:
+        - "application/json"
+      responses:
+        200:
+          description: "No error"
+          schema:
+            type: "object"
+            title: "ExecInspectResponse"
+            properties:
+              CanRemove:
+                type: "boolean"
+              DetachKeys:
+                type: "string"
+              ID:
+                type: "string"
+              Running:
+                type: "boolean"
+              ExitCode:
+                type: "integer"
+              ProcessConfig:
+                $ref: "#/definitions/ProcessConfig"
+              OpenStdin:
+                type: "boolean"
+              OpenStderr:
+                type: "boolean"
+              OpenStdout:
+                type: "boolean"
+              ContainerID:
+                type: "string"
+              Pid:
+                type: "integer"
+                description: "The system process ID for the exec process."
+          examples:
+            application/json:
+              CanRemove: false
+              ContainerID: "b53ee82b53a40c7dca428523e34f741f3abc51d9f297a14ff874bf761b995126"
+              DetachKeys: ""
+              ExitCode: 2
+              ID: "f33bbfb39f5b142420f4759b2348913bd4a8d1a6d7fd56499cb41a1bb91d7b3b"
+              OpenStderr: true
+              OpenStdin: true
+              OpenStdout: true
+              ProcessConfig:
+                arguments:
+                  - "-c"
+                  - "exit 2"
+                entrypoint: "sh"
+                privileged: false
+                tty: true
+                user: "1000"
+              Running: false
+              Pid: 42000
+        404:
+          description: "No such exec instance"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "Server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "id"
+          in: "path"
+          description: "Exec instance ID"
+          required: true
+          type: "string"
+      tags: ["Exec"]
+
+  /volumes:
+    get:
+      summary: "List volumes"
+      operationId: "VolumeList"
+      produces: ["application/json"]
+      responses:
+        200:
+          description: "Summary volume data that matches the query"
+          schema:
+            type: "object"
+            title: "VolumeListResponse"
+            description: "Volume list response"
+            required: [Volumes, Warnings]
+            properties:
+              Volumes:
+                type: "array"
+                x-nullable: false
+                description: "List of volumes"
+                items:
+                  $ref: "#/definitions/Volume"
+              Warnings:
+                type: "array"
+                x-nullable: false
+                description: |
+                  Warnings that occurred when fetching the list of volumes.
+                items:
+                  type: "string"
+
+          examples:
+            application/json:
+              Volumes:
+                - CreatedAt: "2017-07-19T12:00:26Z"
+                  Name: "tardis"
+                  Driver: "local"
+                  Mountpoint: "/var/lib/docker/volumes/tardis"
+                  Labels:
+                    com.example.some-label: "some-value"
+                    com.example.some-other-label: "some-other-value"
+                  Scope: "local"
+                  Options:
+                    device: "tmpfs"
+                    o: "size=100m,uid=1000"
+                    type: "tmpfs"
+              Warnings: []
+        500:
+          description: "Server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "filters"
+          in: "query"
+          description: |
+            JSON encoded value of the filters (a `map[string][]string`) to
+            process on the volumes list. Available filters:
+
+            - `dangling=<boolean>` When set to `true` (or `1`), returns all
+               volumes that are not in use by a container. When set to `false`
+               (or `0`), only volumes that are in use by one or more
+               containers are returned.
+            - `driver=<volume-driver-name>` Matches volumes based on their driver.
+            - `label=<key>` or `label=<key>=<value>` Matches volumes based on
+               the presence of a `label` alone or a `label` and a value.
+            - `name=<volume-name>` Matches all or part of a volume name.
+          type: "string"
+          format: "json"
+      tags: ["Volume"]
+
+  /volumes/create:
+    post:
+      summary: "Create a volume"
+      operationId: "VolumeCreate"
+      consumes: ["application/json"]
+      produces: ["application/json"]
+      responses:
+        201:
+          description: "The volume was created successfully"
+          schema:
+            $ref: "#/definitions/Volume"
+        500:
+          description: "Server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "volumeConfig"
+          in: "body"
+          required: true
+          description: "Volume configuration"
+          schema:
+            type: "object"
+            description: "Volume configuration"
+            title: "VolumeConfig"
+            properties:
+              Name:
+                description: |
+                  The new volume's name. If not specified, Docker generates a name.
+                type: "string"
+                x-nullable: false
+              Driver:
+                description: "Name of the volume driver to use."
+                type: "string"
+                default: "local"
+                x-nullable: false
+              DriverOpts:
+                description: |
+                  A mapping of driver options and values. These options are
+                  passed directly to the driver and are driver specific.
+                type: "object"
+                additionalProperties:
+                  type: "string"
+              Labels:
+                description: "User-defined key/value metadata."
+                type: "object"
+                additionalProperties:
+                  type: "string"
+            example:
+              Name: "tardis"
+              Labels:
+                com.example.some-label: "some-value"
+                com.example.some-other-label: "some-other-value"
+              Driver: "custom"
+      tags: ["Volume"]
+
+  /volumes/{name}:
+    get:
+      summary: "Inspect a volume"
+      operationId: "VolumeInspect"
+      produces: ["application/json"]
+      responses:
+        200:
+          description: "No error"
+          schema:
+            $ref: "#/definitions/Volume"
+        404:
+          description: "No such volume"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "Server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "name"
+          in: "path"
+          required: true
+          description: "Volume name or ID"
+          type: "string"
+      tags: ["Volume"]
+
+    delete:
+      summary: "Remove a volume"
+      description: "Instruct the driver to remove the volume."
+      operationId: "VolumeDelete"
+      responses:
+        204:
+          description: "The volume was removed"
+        404:
+          description: "No such volume or volume driver"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        409:
+          description: "Volume is in use and cannot be removed"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "Server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "name"
+          in: "path"
+          required: true
+          description: "Volume name or ID"
+          type: "string"
+        - name: "force"
+          in: "query"
+          description: "Force the removal of the volume"
+          type: "boolean"
+          default: false
+      tags: ["Volume"]
+  /volumes/prune:
+    post:
+      summary: "Delete unused volumes"
+      produces:
+        - "application/json"
+      operationId: "VolumePrune"
+      parameters:
+        - name: "filters"
+          in: "query"
+          description: |
+            Filters to process on the prune list, encoded as JSON (a `map[string][]string`).
+
+            Available filters:
+            - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`, or `label!=<key>=<value>`) Prune volumes with (or without, in case `label!=...` is used) the specified labels.
+          type: "string"
+      responses:
+        200:
+          description: "No error"
+          schema:
+            type: "object"
+            title: "VolumePruneResponse"
+            properties:
+              VolumesDeleted:
+                description: "Volumes that were deleted"
+                type: "array"
+                items:
+                  type: "string"
+              SpaceReclaimed:
+                description: "Disk space reclaimed in bytes"
+                type: "integer"
+                format: "int64"
+        500:
+          description: "Server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      tags: ["Volume"]
+  /networks:
+    get:
+      summary: "List networks"
+      description: |
+        Returns a list of networks. For details on the format, see the
+        [network inspect endpoint](#operation/NetworkInspect).
+
+        Note that it uses a different, smaller representation of a network than
+        inspecting a single network. For example, the list of containers attached
+        to the network is not propagated in API versions 1.28 and up.
+      operationId: "NetworkList"
+      produces:
+        - "application/json"
+      responses:
+        200:
+          description: "No error"
+          schema:
+            type: "array"
+            items:
+              $ref: "#/definitions/Network"
+          examples:
+            application/json:
+              - Name: "bridge"
+                Id: "f2de39df4171b0dc801e8002d1d999b77256983dfc63041c0f34030aa3977566"
+                Created: "2016-10-19T06:21:00.416543526Z"
+                Scope: "local"
+                Driver: "bridge"
+                EnableIPv6: false
+                Internal: false
+                Attachable: false
+                Ingress: false
+                IPAM:
+                  Driver: "default"
+                  Config:
+                    -
+                      Subnet: "172.17.0.0/16"
+                Options:
+                  com.docker.network.bridge.default_bridge: "true"
+                  com.docker.network.bridge.enable_icc: "true"
+                  com.docker.network.bridge.enable_ip_masquerade: "true"
+                  com.docker.network.bridge.host_binding_ipv4: "0.0.0.0"
+                  com.docker.network.bridge.name: "docker0"
+                  com.docker.network.driver.mtu: "1500"
+              - Name: "none"
+                Id: "e086a3893b05ab69242d3c44e49483a3bbbd3a26b46baa8f61ab797c1088d794"
+                Created: "0001-01-01T00:00:00Z"
+                Scope: "local"
+                Driver: "null"
+                EnableIPv6: false
+                Internal: false
+                Attachable: false
+                Ingress: false
+                IPAM:
+                  Driver: "default"
+                  Config: []
+                Containers: {}
+                Options: {}
+              - Name: "host"
+                Id: "13e871235c677f196c4e1ecebb9dc733b9b2d2ab589e30c539efeda84a24215e"
+                Created: "0001-01-01T00:00:00Z"
+                Scope: "local"
+                Driver: "host"
+                EnableIPv6: false
+                Internal: false
+                Attachable: false
+                Ingress: false
+                IPAM:
+                  Driver: "default"
+                  Config: []
+                Containers: {}
+                Options: {}
+        500:
+          description: "Server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "filters"
+          in: "query"
+          description: |
+            JSON encoded value of the filters (a `map[string][]string`) to process
+            on the networks list.
+
+            Available filters:
+
+            - `dangling=<boolean>` When set to `true` (or `1`), returns all
+               networks that are not in use by a container. When set to `false`
+               (or `0`), only networks that are in use by one or more
+               containers are returned.
+            - `driver=<driver-name>` Matches a network's driver.
+            - `id=<network-id>` Matches all or part of a network ID.
+            - `label=<key>` or `label=<key>=<value>` Matches networks based on the presence of a `label` alone or a `label` and a value.
+            - `name=<network-name>` Matches all or part of a network name.
+            - `scope=["swarm"|"global"|"local"]` Filters networks by scope (`swarm`, `global`, or `local`).
+            - `type=["custom"|"builtin"]` Filters networks by type. The `custom` keyword returns all user-defined networks.
+          type: "string"
+      tags: ["Network"]
+
+  /networks/{id}:
+    get:
+      summary: "Inspect a network"
+      operationId: "NetworkInspect"
+      produces:
+        - "application/json"
+      responses:
+        200:
+          description: "No error"
+          schema:
+            $ref: "#/definitions/Network"
+        404:
+          description: "Network not found"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "Server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "id"
+          in: "path"
+          description: "Network ID or name"
+          required: true
+          type: "string"
+        - name: "verbose"
+          in: "query"
+          description: "Detailed inspect output for troubleshooting"
+          type: "boolean"
+          default: false
+        - name: "scope"
+          in: "query"
+          description: "Filter the network by scope (swarm, global, or local)"
+          type: "string"
+      tags: ["Network"]
+
+    delete:
+      summary: "Remove a network"
+      operationId: "NetworkDelete"
+      responses:
+        204:
+          description: "No error"
+        403:
+          description: "operation not supported for pre-defined networks"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        404:
+          description: "no such network"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "Server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "id"
+          in: "path"
+          description: "Network ID or name"
+          required: true
+          type: "string"
+      tags: ["Network"]
+
+  /networks/create:
+    post:
+      summary: "Create a network"
+      operationId: "NetworkCreate"
+      consumes:
+        - "application/json"
+      produces:
+        - "application/json"
+      responses:
+        201:
+          description: "No error"
+          schema:
+            type: "object"
+            title: "NetworkCreateResponse"
+            properties:
+              Id:
+                description: "The ID of the created network."
+                type: "string"
+              Warning:
+                type: "string"
+            example:
+              Id: "22be93d5babb089c5aab8dbc369042fad48ff791584ca2da2100db837a1c7c30"
+              Warning: ""
+        403:
+          description: "operation not supported for pre-defined networks"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        404:
+          description: "plugin not found"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "Server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "networkConfig"
+          in: "body"
+          description: "Network configuration"
+          required: true
+          schema:
+            type: "object"
+            required: ["Name"]
+            properties:
+              Name:
+                description: "The network's name."
+                type: "string"
+              CheckDuplicate:
+                description: |
+                  Check for networks with duplicate names. Since Network is
+                  primarily keyed based on a random ID and not on the name, and
+                  network name is strictly a user-friendly alias to the network
+                  which is uniquely identified using ID, there is no guaranteed
+                  way to check for duplicates. CheckDuplicate is there to provide
+                  a best effort checking of any networks which has the same name
+                  but it is not guaranteed to catch all name collisions.
+                type: "boolean"
+              Driver:
+                description: "Name of the network driver plugin to use."
+                type: "string"
+                default: "bridge"
+              Internal:
+                description: "Restrict external access to the network."
+                type: "boolean"
+              Attachable:
+                description: |
+                  Whether a globally scoped network is manually attachable by
+                  regular containers from workers in swarm mode.
+                type: "boolean"
+              Ingress:
+                description: |
+                  Whether the network is an ingress network; the ingress
+                  network provides the routing mesh in swarm mode.
+                type: "boolean"
+              IPAM:
+                description: "Optional custom IP scheme for the network."
+                $ref: "#/definitions/IPAM"
+              EnableIPv6:
+                description: "Enable IPv6 on the network."
+                type: "boolean"
+              Options:
+                description: "Network specific options to be used by the drivers."
+                type: "object"
+                additionalProperties:
+                  type: "string"
+              Labels:
+                description: "User-defined key/value metadata."
+                type: "object"
+                additionalProperties:
+                  type: "string"
+            example:
+              Name: "isolated_nw"
+              CheckDuplicate: false
+              Driver: "bridge"
+              EnableIPv6: true
+              IPAM:
+                Driver: "default"
+                Config:
+                  - Subnet: "172.20.0.0/16"
+                    IPRange: "172.20.10.0/24"
+                    Gateway: "172.20.10.11"
+                  - Subnet: "2001:db8:abcd::/64"
+                    Gateway: "2001:db8:abcd::1011"
+                Options:
+                  foo: "bar"
+              Internal: true
+              Attachable: false
+              Ingress: false
+              Options:
+                com.docker.network.bridge.default_bridge: "true"
+                com.docker.network.bridge.enable_icc: "true"
+                com.docker.network.bridge.enable_ip_masquerade: "true"
+                com.docker.network.bridge.host_binding_ipv4: "0.0.0.0"
+                com.docker.network.bridge.name: "docker0"
+                com.docker.network.driver.mtu: "1500"
+              Labels:
+                com.example.some-label: "some-value"
+                com.example.some-other-label: "some-other-value"
+      tags: ["Network"]
+
+  /networks/{id}/connect:
+    post:
+      summary: "Connect a container to a network"
+      operationId: "NetworkConnect"
+      consumes:
+        - "application/json"
+      responses:
+        200:
+          description: "No error"
+        403:
+          description: "Operation not supported for swarm scoped networks"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        404:
+          description: "Network or container not found"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "Server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "id"
+          in: "path"
+          description: "Network ID or name"
+          required: true
+          type: "string"
+        - name: "container"
+          in: "body"
+          required: true
+          schema:
+            type: "object"
+            properties:
+              Container:
+                type: "string"
+                description: "The ID or name of the container to connect to the network."
+              EndpointConfig:
+                $ref: "#/definitions/EndpointSettings"
+            example:
+              Container: "3613f73ba0e4"
+              EndpointConfig:
+                IPAMConfig:
+                  IPv4Address: "172.24.56.89"
+                  IPv6Address: "2001:db8::5689"
+      tags: ["Network"]
+
+  /networks/{id}/disconnect:
+    post:
+      summary: "Disconnect a container from a network"
+      operationId: "NetworkDisconnect"
+      consumes:
+        - "application/json"
+      responses:
+        200:
+          description: "No error"
+        403:
+          description: "Operation not supported for swarm scoped networks"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        404:
+          description: "Network or container not found"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "Server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "id"
+          in: "path"
+          description: "Network ID or name"
+          required: true
+          type: "string"
+        - name: "container"
+          in: "body"
+          required: true
+          schema:
+            type: "object"
+            properties:
+              Container:
+                type: "string"
+                description: |
+                  The ID or name of the container to disconnect from the network.
+              Force:
+                type: "boolean"
+                description: |
+                  Force the container to disconnect from the network.
+      tags: ["Network"]
+  /networks/prune:
+    post:
+      summary: "Delete unused networks"
+      produces:
+        - "application/json"
+      operationId: "NetworkPrune"
+      parameters:
+        - name: "filters"
+          in: "query"
+          description: |
+            Filters to process on the prune list, encoded as JSON (a `map[string][]string`).
+
+            Available filters:
+            - `until=<timestamp>` Prune networks created before this timestamp.
+               The `<timestamp>` can be a Unix timestamp, a date formatted
+               timestamp, or a Go duration string (e.g. `10m`, `1h30m`)
+               computed relative to the daemon machine's time.
+            - `label` (`label=<key>`, `label=<key>=<value>`, `label!=<key>`,
+               or `label!=<key>=<value>`) Prune networks with (or without, in
+               the case of `label!=...`) the specified labels.
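+
+            A minimal sketch (Python standard library; the filter values are
+            illustrative) of JSON-encoding the filters and percent-encoding
+            them for the query string:
+
+            ```python
+            import json
+            import urllib.parse
+
+            # Prune networks labeled env=test that are older than 24 hours.
+            filters = {"until": ["24h"], "label": ["env=test"]}
+            query = urllib.parse.urlencode({"filters": json.dumps(filters)})
+
+            # Round trip: decoding the query string recovers the JSON value.
+            assert urllib.parse.parse_qs(query)["filters"][0] == json.dumps(filters)
+            ```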
+          type: "string"
+      responses:
+        200:
+          description: "No error"
+          schema:
+            type: "object"
+            title: "NetworkPruneResponse"
+            properties:
+              NetworksDeleted:
+                description: "Networks that were deleted"
+                type: "array"
+                items:
+                  type: "string"
+        500:
+          description: "Server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      tags: ["Network"]
+  /plugins:
+    get:
+      summary: "List plugins"
+      operationId: "PluginList"
+      description: "Returns information about installed plugins."
+      produces: ["application/json"]
+      responses:
+        200:
+          description: "No error"
+          schema:
+            type: "array"
+            items:
+              $ref: "#/definitions/Plugin"
+        500:
+          description: "Server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "filters"
+          in: "query"
+          type: "string"
+          description: |
+            A JSON encoded value of the filters (a `map[string][]string`) to
+            process on the plugin list.
+
+            Available filters:
+
+            - `capability=<capability name>`
+            - `enable=<true>|<false>`
+      tags: ["Plugin"]
+
+  /plugins/privileges:
+    get:
+      summary: "Get plugin privileges"
+      operationId: "GetPluginPrivileges"
+      responses:
+        200:
+          description: "no error"
+          schema:
+            type: "array"
+            items:
+              description: |
+                Describes a permission the user has to accept upon installing
+                the plugin.
+              type: "object"
+              title: "PluginPrivilegeItem"
+              properties:
+                Name:
+                  type: "string"
+                Description:
+                  type: "string"
+                Value:
+                  type: "array"
+                  items:
+                    type: "string"
+            example:
+              - Name: "network"
+                Description: ""
+                Value:
+                  - "host"
+              - Name: "mount"
+                Description: ""
+                Value:
+                  - "/data"
+              - Name: "device"
+                Description: ""
+                Value:
+                  - "/dev/cpu_dma_latency"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "remote"
+          in: "query"
+          description: |
+            The name of the plugin. The `:latest` tag is optional, and is the
+            default if omitted.
+          required: true
+          type: "string"
+      tags:
+        - "Plugin"
+
+  /plugins/pull:
+    post:
+      summary: "Install a plugin"
+      operationId: "PluginPull"
+      description: |
+        Pulls and installs a plugin. After the plugin is installed, it can be
+        enabled using the [`POST /plugins/{name}/enable` endpoint](#operation/PostPluginsEnable).
+      produces:
+        - "application/json"
+      responses:
+        204:
+          description: "no error"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "remote"
+          in: "query"
+          description: |
+            Remote reference for plugin to install.
+
+            The `:latest` tag is optional, and is used as the default if omitted.
+          required: true
+          type: "string"
+        - name: "name"
+          in: "query"
+          description: |
+            Local name for the pulled plugin.
+
+            The `:latest` tag is optional, and is used as the default if omitted.
+          required: false
+          type: "string"
+        - name: "X-Registry-Auth"
+          in: "header"
+          description: |
+            A base64url-encoded auth configuration to use when pulling a plugin
+            from a registry.
+
+            Refer to the [authentication section](#section/Authentication) for
+            details.
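+
+            A minimal sketch of producing this header value with Python's
+            standard library. The credential fields shown are assumptions
+            based on the authentication section, not part of this endpoint's
+            schema:
+
+            ```python
+            import base64
+            import json
+
+            # Hypothetical credentials; serveraddress is the registry host.
+            auth = {
+                "username": "jdoe",
+                "password": "secret",
+                "serveraddress": "registry.example.com",
+            }
+            payload = json.dumps(auth).encode("utf-8")
+            header = base64.urlsafe_b64encode(payload).decode("ascii")
+
+            # Decoding the header recovers the original JSON payload.
+            assert base64.urlsafe_b64decode(header) == payload
+            ```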
+          type: "string"
+        - name: "body"
+          in: "body"
+          schema:
+            type: "array"
+            items:
+              description: |
+                Describes a permission accepted by the user upon installing the
+                plugin.
+              type: "object"
+              properties:
+                Name:
+                  type: "string"
+                Description:
+                  type: "string"
+                Value:
+                  type: "array"
+                  items:
+                    type: "string"
+            example:
+              - Name: "network"
+                Description: ""
+                Value:
+                  - "host"
+              - Name: "mount"
+                Description: ""
+                Value:
+                  - "/data"
+              - Name: "device"
+                Description: ""
+                Value:
+                  - "/dev/cpu_dma_latency"
+      tags: ["Plugin"]
+  /plugins/{name}/json:
+    get:
+      summary: "Inspect a plugin"
+      operationId: "PluginInspect"
+      responses:
+        200:
+          description: "no error"
+          schema:
+            $ref: "#/definitions/Plugin"
+        404:
+          description: "plugin is not installed"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "name"
+          in: "path"
+          description: |
+            The name of the plugin. The `:latest` tag is optional, and is the
+            default if omitted.
+          required: true
+          type: "string"
+      tags: ["Plugin"]
+  /plugins/{name}:
+    delete:
+      summary: "Remove a plugin"
+      operationId: "PluginDelete"
+      responses:
+        200:
+          description: "no error"
+          schema:
+            $ref: "#/definitions/Plugin"
+        404:
+          description: "plugin is not installed"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "name"
+          in: "path"
+          description: |
+            The name of the plugin. The `:latest` tag is optional, and is the
+            default if omitted.
+          required: true
+          type: "string"
+        - name: "force"
+          in: "query"
+          description: |
+            Disable the plugin before removing. This may result in issues if the
+            plugin is in use by a container.
+          type: "boolean"
+          default: false
+      tags: ["Plugin"]
+  /plugins/{name}/enable:
+    post:
+      summary: "Enable a plugin"
+      operationId: "PluginEnable"
+      responses:
+        200:
+          description: "no error"
+        404:
+          description: "plugin is not installed"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "name"
+          in: "path"
+          description: |
+            The name of the plugin. The `:latest` tag is optional, and is the
+            default if omitted.
+          required: true
+          type: "string"
+        - name: "timeout"
+          in: "query"
+          description: "Set the HTTP client timeout (in seconds)"
+          type: "integer"
+          default: 0
+      tags: ["Plugin"]
+  /plugins/{name}/disable:
+    post:
+      summary: "Disable a plugin"
+      operationId: "PluginDisable"
+      responses:
+        200:
+          description: "no error"
+        404:
+          description: "plugin is not installed"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "name"
+          in: "path"
+          description: |
+            The name of the plugin. The `:latest` tag is optional, and is the
+            default if omitted.
+          required: true
+          type: "string"
+      tags: ["Plugin"]
+  /plugins/{name}/upgrade:
+    post:
+      summary: "Upgrade a plugin"
+      operationId: "PluginUpgrade"
+      responses:
+        204:
+          description: "no error"
+        404:
+          description: "plugin not installed"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "name"
+          in: "path"
+          description: |
+            The name of the plugin. The `:latest` tag is optional, and is the
+            default if omitted.
+          required: true
+          type: "string"
+        - name: "remote"
+          in: "query"
+          description: |
+            Remote reference to upgrade to.
+
+            The `:latest` tag is optional, and is used as the default if omitted.
+          required: true
+          type: "string"
+        - name: "X-Registry-Auth"
+          in: "header"
+          description: |
+            A base64url-encoded auth configuration to use when pulling a plugin
+            from a registry.
+
+            Refer to the [authentication section](#section/Authentication) for
+            details.
+          type: "string"
+        - name: "body"
+          in: "body"
+          schema:
+            type: "array"
+            items:
+              description: |
+                Describes a permission accepted by the user upon installing the
+                plugin.
+              type: "object"
+              properties:
+                Name:
+                  type: "string"
+                Description:
+                  type: "string"
+                Value:
+                  type: "array"
+                  items:
+                    type: "string"
+            example:
+              - Name: "network"
+                Description: ""
+                Value:
+                  - "host"
+              - Name: "mount"
+                Description: ""
+                Value:
+                  - "/data"
+              - Name: "device"
+                Description: ""
+                Value:
+                  - "/dev/cpu_dma_latency"
+      tags: ["Plugin"]
+  /plugins/create:
+    post:
+      summary: "Create a plugin"
+      operationId: "PluginCreate"
+      consumes:
+        - "application/x-tar"
+      responses:
+        204:
+          description: "no error"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "name"
+          in: "query"
+          description: |
+            The name of the plugin. The `:latest` tag is optional, and is the
+            default if omitted.
+          required: true
+          type: "string"
+        - name: "tarContext"
+          in: "body"
+          description: "Path to tar containing plugin rootfs and manifest"
+          schema:
+            type: "string"
+            format: "binary"
+      tags: ["Plugin"]
+  /plugins/{name}/push:
+    post:
+      summary: "Push a plugin"
+      operationId: "PluginPush"
+      description: |
+        Push a plugin to the registry.
+      parameters:
+        - name: "name"
+          in: "path"
+          description: |
+            The name of the plugin. The `:latest` tag is optional, and is the
+            default if omitted.
+          required: true
+          type: "string"
+      responses:
+        200:
+          description: "no error"
+        404:
+          description: "plugin not installed"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      tags: ["Plugin"]
+  /plugins/{name}/set:
+    post:
+      summary: "Configure a plugin"
+      operationId: "PluginSet"
+      consumes:
+        - "application/json"
+      parameters:
+        - name: "name"
+          in: "path"
+          description: |
+            The name of the plugin. The `:latest` tag is optional, and is the
+            default if omitted.
+          required: true
+          type: "string"
+        - name: "body"
+          in: "body"
+          schema:
+            type: "array"
+            items:
+              type: "string"
+            example: ["DEBUG=1"]
+      responses:
+        204:
+          description: "No error"
+        404:
+          description: "Plugin not installed"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "Server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      tags: ["Plugin"]
+  /nodes:
+    get:
+      summary: "List nodes"
+      operationId: "NodeList"
+      responses:
+        200:
+          description: "no error"
+          schema:
+            type: "array"
+            items:
+              $ref: "#/definitions/Node"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        503:
+          description: "node is not part of a swarm"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "filters"
+          in: "query"
+          description: |
+            Filters to process on the nodes list, encoded as JSON (a `map[string][]string`).
+
+            Available filters:
+            - `id=<node id>`
+            - `label=<engine label>`
+            - `membership=`(`accepted`|`pending`)
+            - `name=<node name>`
+            - `node.label=<node label>`
+            - `role=`(`manager`|`worker`)
+          type: "string"
+      tags: ["Node"]
+  /nodes/{id}:
+    get:
+      summary: "Inspect a node"
+      operationId: "NodeInspect"
+      responses:
+        200:
+          description: "no error"
+          schema:
+            $ref: "#/definitions/Node"
+        404:
+          description: "no such node"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        503:
+          description: "node is not part of a swarm"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "id"
+          in: "path"
+          description: "The ID or name of the node"
+          type: "string"
+          required: true
+      tags: ["Node"]
+    delete:
+      summary: "Delete a node"
+      operationId: "NodeDelete"
+      responses:
+        200:
+          description: "no error"
+        404:
+          description: "no such node"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        503:
+          description: "node is not part of a swarm"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "id"
+          in: "path"
+          description: "The ID or name of the node"
+          type: "string"
+          required: true
+        - name: "force"
+          in: "query"
+          description: "Force remove a node from the swarm"
+          default: false
+          type: "boolean"
+      tags: ["Node"]
+  /nodes/{id}/update:
+    post:
+      summary: "Update a node"
+      operationId: "NodeUpdate"
+      responses:
+        200:
+          description: "no error"
+        400:
+          description: "bad parameter"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        404:
+          description: "no such node"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        503:
+          description: "node is not part of a swarm"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "id"
+          in: "path"
+          description: "The ID of the node"
+          type: "string"
+          required: true
+        - name: "body"
+          in: "body"
+          schema:
+            $ref: "#/definitions/NodeSpec"
+        - name: "version"
+          in: "query"
+          description: |
+            The version number of the node object being updated. This is required
+            to avoid conflicting writes.
+          type: "integer"
+          format: "int64"
+          required: true
+      tags: ["Node"]
+  /swarm:
+    get:
+      summary: "Inspect swarm"
+      operationId: "SwarmInspect"
+      responses:
+        200:
+          description: "no error"
+          schema:
+            $ref: "#/definitions/Swarm"
+        404:
+          description: "no such swarm"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        503:
+          description: "node is not part of a swarm"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      tags: ["Swarm"]
+  /swarm/init:
+    post:
+      summary: "Initialize a new swarm"
+      operationId: "SwarmInit"
+      produces:
+        - "application/json"
+        - "text/plain"
+      responses:
+        200:
+          description: "no error"
+          schema:
+            description: "The node ID"
+            type: "string"
+            example: "7v2t30z9blmxuhnyo6s4cpenp"
+        400:
+          description: "bad parameter"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        503:
+          description: "node is already part of a swarm"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "body"
+          in: "body"
+          required: true
+          schema:
+            type: "object"
+            properties:
+              ListenAddr:
+                description: |
+                  Listen address used for inter-manager communication, as well
+                  as determining the networking interface used for the VXLAN
+                  Tunnel Endpoint (VTEP). This can either be an address/port
+                  combination in the form `192.168.1.1:4567`, or an interface
+                  followed by a port number, like `eth0:4567`. If the port number
+                  is omitted, the default swarm listening port is used.
+                type: "string"
+              AdvertiseAddr:
+                description: |
+                  Externally reachable address advertised to other nodes. This
+                  can either be an address/port combination in the form
+                  `192.168.1.1:4567`, or an interface followed by a port number,
+                  like `eth0:4567`. If the port number is omitted, the port
+                  number from the listen address is used. If `AdvertiseAddr` is
+                  not specified, it will be automatically detected when possible.
+                type: "string"
+              DataPathAddr:
+                description: |
+                  Address or interface to use for data path traffic (format:
+                  `<ip|interface>`), for example, `192.168.1.1`, or an interface,
+                  like `eth0`. If `DataPathAddr` is unspecified, the same address
+                  as `AdvertiseAddr` is used.
+
+                  The `DataPathAddr` specifies the address that global scope
+                  network drivers will publish towards other nodes in order to
+                  reach the containers running on this node. Using this parameter
+                  it is possible to separate the container data traffic from the
+                  management traffic of the cluster.
+                type: "string"
+              DataPathPort:
+                description: |
+                  DataPathPort specifies the data path port number for data
+                  traffic. The acceptable port range is 1024 to 49151. If no
+                  port is set, or it is set to 0, the default port 4789 is
+                  used.
+                type: "integer"
+                format: "uint32"
+              DefaultAddrPool:
+                description: |
+                  Default Address Pool specifies default subnet pools for global
+                  scope networks.
+                type: "array"
+                items:
+                  type: "string"
+                  example: ["10.10.0.0/16", "20.20.0.0/16"]
+              ForceNewCluster:
+                description: "Force creation of a new swarm."
+                type: "boolean"
+              SubnetSize:
+                description: |
+                  SubnetSize specifies the subnet size of the networks created
+                  from the default subnet pool.
+                type: "integer"
+                format: "uint32"
+              Spec:
+                $ref: "#/definitions/SwarmSpec"
+            example:
+              ListenAddr: "0.0.0.0:2377"
+              AdvertiseAddr: "192.168.1.1:2377"
+              DataPathPort: 4789
+              DefaultAddrPool: ["10.10.0.0/8", "20.20.0.0/8"]
+              SubnetSize: 24
+              ForceNewCluster: false
+              Spec:
+                Orchestration: {}
+                Raft: {}
+                Dispatcher: {}
+                CAConfig: {}
+                EncryptionConfig:
+                  AutoLockManagers: false
+      tags: ["Swarm"]
+  /swarm/join:
+    post:
+      summary: "Join an existing swarm"
+      operationId: "SwarmJoin"
+      responses:
+        200:
+          description: "no error"
+        400:
+          description: "bad parameter"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        503:
+          description: "node is already part of a swarm"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "body"
+          in: "body"
+          required: true
+          schema:
+            type: "object"
+            properties:
+              ListenAddr:
+                description: |
+                  Listen address used for inter-manager communication if the node
+                  gets promoted to manager, as well as determining the networking
+                  interface used for the VXLAN Tunnel Endpoint (VTEP).
+                type: "string"
+              AdvertiseAddr:
+                description: |
+                  Externally reachable address advertised to other nodes. This
+                  can either be an address/port combination in the form
+                  `192.168.1.1:4567`, or an interface followed by a port number,
+                  like `eth0:4567`. If the port number is omitted, the port
+                  number from the listen address is used. If `AdvertiseAddr` is
+                  not specified, it will be automatically detected when possible.
+                type: "string"
+              DataPathAddr:
+                description: |
+                  Address or interface to use for data path traffic (format:
+                  `<ip|interface>`), for example, `192.168.1.1`, or an interface,
+                  like `eth0`. If `DataPathAddr` is unspecified, the same address
+                  as `AdvertiseAddr` is used.
+
+                  The `DataPathAddr` specifies the address that global scope
+                  network drivers will publish towards other nodes in order to
+                  reach the containers running on this node. Using this parameter
+                  it is possible to separate the container data traffic from the
+                  management traffic of the cluster.
+
+                type: "string"
+              RemoteAddrs:
+                description: |
+                  Addresses of manager nodes already participating in the swarm.
+                type: "array"
+                items:
+                  type: "string"
+              JoinToken:
+                description: "Secret token for joining this swarm."
+                type: "string"
+            example:
+              ListenAddr: "0.0.0.0:2377"
+              AdvertiseAddr: "192.168.1.1:2377"
+              RemoteAddrs:
+                - "node1:2377"
+              JoinToken: "SWMTKN-1-3pu6hszjas19xyp7ghgosyx9k8atbfcr8p2is99znpy26u2lkl-7p73s1dx5in4tatdymyhg9hu2"
+      tags: ["Swarm"]
+  /swarm/leave:
+    post:
+      summary: "Leave a swarm"
+      operationId: "SwarmLeave"
+      responses:
+        200:
+          description: "no error"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        503:
+          description: "node is not part of a swarm"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "force"
+          description: |
+            Force leave swarm, even if this is the last manager or if leaving
+            will break the cluster.
+          in: "query"
+          type: "boolean"
+          default: false
+      tags: ["Swarm"]
+  /swarm/update:
+    post:
+      summary: "Update a swarm"
+      operationId: "SwarmUpdate"
+      responses:
+        200:
+          description: "no error"
+        400:
+          description: "bad parameter"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        503:
+          description: "node is not part of a swarm"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "body"
+          in: "body"
+          required: true
+          schema:
+            $ref: "#/definitions/SwarmSpec"
+        - name: "version"
+          in: "query"
+          description: |
+            The version number of the swarm object being updated. This is
+            required to avoid conflicting writes.
+          type: "integer"
+          format: "int64"
+          required: true
+        - name: "rotateWorkerToken"
+          in: "query"
+          description: "Rotate the worker join token."
+          type: "boolean"
+          default: false
+        - name: "rotateManagerToken"
+          in: "query"
+          description: "Rotate the manager join token."
+          type: "boolean"
+          default: false
+        - name: "rotateManagerUnlockKey"
+          in: "query"
+          description: "Rotate the manager unlock key."
+          type: "boolean"
+          default: false
+      tags: ["Swarm"]
+  /swarm/unlockkey:
+    get:
+      summary: "Get the unlock key"
+      operationId: "SwarmUnlockkey"
+      consumes:
+        - "application/json"
+      responses:
+        200:
+          description: "no error"
+          schema:
+            type: "object"
+            title: "UnlockKeyResponse"
+            properties:
+              UnlockKey:
+                description: "The swarm's unlock key."
+                type: "string"
+            example:
+              UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        503:
+          description: "node is not part of a swarm"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      tags: ["Swarm"]
+  /swarm/unlock:
+    post:
+      summary: "Unlock a locked manager"
+      operationId: "SwarmUnlock"
+      consumes:
+        - "application/json"
+      produces:
+        - "application/json"
+      parameters:
+        - name: "body"
+          in: "body"
+          required: true
+          schema:
+            type: "object"
+            properties:
+              UnlockKey:
+                description: "The swarm's unlock key."
+                type: "string"
+            example:
+              UnlockKey: "SWMKEY-1-7c37Cc8654o6p38HnroywCi19pllOnGtbdZEgtKxZu8"
+      responses:
+        200:
+          description: "no error"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        503:
+          description: "node is not part of a swarm"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      tags: ["Swarm"]
+  /services:
+    get:
+      summary: "List services"
+      operationId: "ServiceList"
+      responses:
+        200:
+          description: "no error"
+          schema:
+            type: "array"
+            items:
+              $ref: "#/definitions/Service"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        503:
+          description: "node is not part of a swarm"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "filters"
+          in: "query"
+          type: "string"
+          description: |
+            A JSON encoded value of the filters (a `map[string][]string`) to
+            process on the services list.
+
+            Available filters:
+
+            - `id=<service id>`
+            - `label=<service label>`
+            - `mode=["replicated"|"global"]`
+            - `name=<service name>`
+        - name: "status"
+          in: "query"
+          type: "boolean"
+          description: |
+            Include service status, with count of running and desired tasks.
+      tags: ["Service"]
+  /services/create:
+    post:
+      summary: "Create a service"
+      operationId: "ServiceCreate"
+      consumes:
+        - "application/json"
+      produces:
+        - "application/json"
+      responses:
+        201:
+          description: "no error"
+          schema:
+            type: "object"
+            title: "ServiceCreateResponse"
+            properties:
+              ID:
+                description: "The ID of the created service."
+                type: "string"
+              Warning:
+                description: "Optional warning message."
+                type: "string"
+            example:
+              ID: "ak7w3gjqoa3kuz8xcpnyy0pvl"
+              Warning: "unable to pin image doesnotexist:latest to digest: image library/doesnotexist:latest not found"
+        400:
+          description: "bad parameter"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        403:
+          description: "network is not eligible for services"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        409:
+          description: "name conflicts with an existing service"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        503:
+          description: "node is not part of a swarm"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "body"
+          in: "body"
+          required: true
+          schema:
+            allOf:
+              - $ref: "#/definitions/ServiceSpec"
+              - type: "object"
+                example:
+                  Name: "web"
+                  TaskTemplate:
+                    ContainerSpec:
+                      Image: "nginx:alpine"
+                      Mounts:
+                        -
+                          ReadOnly: true
+                          Source: "web-data"
+                          Target: "/usr/share/nginx/html"
+                          Type: "volume"
+                          VolumeOptions:
+                            DriverConfig: {}
+                            Labels:
+                              com.example.something: "something-value"
+                      Hosts: ["10.10.10.10 host1", "ABCD:EF01:2345:6789:ABCD:EF01:2345:6789 host2"]
+                      User: "33"
+                      DNSConfig:
+                        Nameservers: ["8.8.8.8"]
+                        Search: ["example.org"]
+                        Options: ["timeout:3"]
+                      Secrets:
+                        -
+                          File:
+                            Name: "www.example.org.key"
+                            UID: "33"
+                            GID: "33"
+                            Mode: 384
+                          SecretID: "fpjqlhnwb19zds35k8wn80lq9"
+                          SecretName: "example_org_domain_key"
+                    LogDriver:
+                      Name: "json-file"
+                      Options:
+                        max-file: "3"
+                        max-size: "10M"
+                    Placement: {}
+                    Resources:
+                      Limits:
+                        MemoryBytes: 104857600
+                      Reservations: {}
+                    RestartPolicy:
+                      Condition: "on-failure"
+                      Delay: 10000000000
+                      MaxAttempts: 10
+                  Mode:
+                    Replicated:
+                      Replicas: 4
+                  UpdateConfig:
+                    Parallelism: 2
+                    Delay: 1000000000
+                    FailureAction: "pause"
+                    Monitor: 15000000000
+                    MaxFailureRatio: 0.15
+                  RollbackConfig:
+                    Parallelism: 1
+                    Delay: 1000000000
+                    FailureAction: "pause"
+                    Monitor: 15000000000
+                    MaxFailureRatio: 0.15
+                  EndpointSpec:
+                    Ports:
+                      -
+                        Protocol: "tcp"
+                        PublishedPort: 8080
+                        TargetPort: 80
+                  Labels:
+                    foo: "bar"
+        - name: "X-Registry-Auth"
+          in: "header"
+          description: |
+            A base64url-encoded auth configuration for pulling from private
+            registries.
+
+            Refer to the [authentication section](#section/Authentication) for
+            details.
+          type: "string"
+      tags: ["Service"]
+  /services/{id}:
+    get:
+      summary: "Inspect a service"
+      operationId: "ServiceInspect"
+      responses:
+        200:
+          description: "no error"
+          schema:
+            $ref: "#/definitions/Service"
+        404:
+          description: "no such service"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        503:
+          description: "node is not part of a swarm"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "id"
+          in: "path"
+          description: "ID or name of service."
+          required: true
+          type: "string"
+        - name: "insertDefaults"
+          in: "query"
+          description: "Fill empty fields with default values."
+          type: "boolean"
+          default: false
+      tags: ["Service"]
+    delete:
+      summary: "Delete a service"
+      operationId: "ServiceDelete"
+      responses:
+        200:
+          description: "no error"
+        404:
+          description: "no such service"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        503:
+          description: "node is not part of a swarm"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "id"
+          in: "path"
+          description: "ID or name of service."
+          required: true
+          type: "string"
+      tags: ["Service"]
+  /services/{id}/update:
+    post:
+      summary: "Update a service"
+      operationId: "ServiceUpdate"
+      consumes: ["application/json"]
+      produces: ["application/json"]
+      responses:
+        200:
+          description: "no error"
+          schema:
+            $ref: "#/definitions/ServiceUpdateResponse"
+        400:
+          description: "bad parameter"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        404:
+          description: "no such service"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        503:
+          description: "node is not part of a swarm"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "id"
+          in: "path"
+          description: "ID or name of service."
+          required: true
+          type: "string"
+        - name: "body"
+          in: "body"
+          required: true
+          schema:
+            allOf:
+              - $ref: "#/definitions/ServiceSpec"
+              - type: "object"
+                example:
+                  Name: "top"
+                  TaskTemplate:
+                    ContainerSpec:
+                      Image: "busybox"
+                      Args:
+                        - "top"
+                    Resources:
+                      Limits: {}
+                      Reservations: {}
+                    RestartPolicy:
+                      Condition: "any"
+                      MaxAttempts: 0
+                    Placement: {}
+                    ForceUpdate: 0
+                  Mode:
+                    Replicated:
+                      Replicas: 1
+                  UpdateConfig:
+                    Parallelism: 2
+                    Delay: 1000000000
+                    FailureAction: "pause"
+                    Monitor: 15000000000
+                    MaxFailureRatio: 0.15
+                  RollbackConfig:
+                    Parallelism: 1
+                    Delay: 1000000000
+                    FailureAction: "pause"
+                    Monitor: 15000000000
+                    MaxFailureRatio: 0.15
+                  EndpointSpec:
+                    Mode: "vip"
+
+        - name: "version"
+          in: "query"
+          description: |
+            The version number of the service object being updated. This is
+            required to avoid conflicting writes.
+            This version number should be the value as currently set on the
+            service *before* the update. You can find the current version by
+            calling `GET /services/{id}`.
+          required: true
+          type: "integer"
+        - name: "registryAuthFrom"
+          in: "query"
+          description: |
+            If the `X-Registry-Auth` header is not specified, this parameter
+            indicates where to find registry authorization credentials.
+          type: "string"
+          enum: ["spec", "previous-spec"]
+          default: "spec"
+        - name: "rollback"
+          in: "query"
+          description: |
+            Set this parameter to `previous` to cause a server-side rollback
+            to the previous service spec. The supplied spec will be ignored in
+            this case.
+          type: "string"
+        - name: "X-Registry-Auth"
+          in: "header"
+          description: |
+            A base64url-encoded auth configuration for pulling from private
+            registries.
+
+            Refer to the [authentication section](#section/Authentication) for
+            details.
+          type: "string"
+
+      tags: ["Service"]
+  /services/{id}/logs:
+    get:
+      summary: "Get service logs"
+      description: |
+        Get `stdout` and `stderr` logs from a service. See also
+        [`/containers/{id}/logs`](#operation/ContainerLogs).
+
+        **Note**: This endpoint works only for services with the `local`,
+        `json-file` or `journald` logging drivers.
+      operationId: "ServiceLogs"
+      responses:
+        200:
+          description: "logs returned as a stream in response body"
+          schema:
+            type: "string"
+            format: "binary"
+        404:
+          description: "no such service"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+          examples:
+            application/json:
+              message: "No such service: c2ada9df5af8"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        503:
+          description: "node is not part of a swarm"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "id"
+          in: "path"
+          required: true
+          description: "ID or name of the service"
+          type: "string"
+        - name: "details"
+          in: "query"
+          description: "Show service context and extra details provided to logs."
+          type: "boolean"
+          default: false
+        - name: "follow"
+          in: "query"
+          description: "Keep connection after returning logs."
+          type: "boolean"
+          default: false
+        - name: "stdout"
+          in: "query"
+          description: "Return logs from `stdout`"
+          type: "boolean"
+          default: false
+        - name: "stderr"
+          in: "query"
+          description: "Return logs from `stderr`"
+          type: "boolean"
+          default: false
+        - name: "since"
+          in: "query"
+          description: "Only return logs since this time, as a UNIX timestamp"
+          type: "integer"
+          default: 0
+        - name: "timestamps"
+          in: "query"
+          description: "Add timestamps to every log line"
+          type: "boolean"
+          default: false
+        - name: "tail"
+          in: "query"
+          description: |
+            Only return this number of log lines from the end of the logs.
+            Specify as an integer or `all` to output all log lines.
+          type: "string"
+          default: "all"
+      tags: ["Service"]
+  /tasks:
+    get:
+      summary: "List tasks"
+      operationId: "TaskList"
+      produces:
+        - "application/json"
+      responses:
+        200:
+          description: "no error"
+          schema:
+            type: "array"
+            items:
+              $ref: "#/definitions/Task"
+            example:
+              - ID: "0kzzo1i0y4jz6027t0k7aezc7"
+                Version:
+                  Index: 71
+                CreatedAt: "2016-06-07T21:07:31.171892745Z"
+                UpdatedAt: "2016-06-07T21:07:31.376370513Z"
+                Spec:
+                  ContainerSpec:
+                    Image: "redis"
+                  Resources:
+                    Limits: {}
+                    Reservations: {}
+                  RestartPolicy:
+                    Condition: "any"
+                    MaxAttempts: 0
+                  Placement: {}
+                ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz"
+                Slot: 1
+                NodeID: "60gvrl6tm78dmak4yl7srz94v"
+                Status:
+                  Timestamp: "2016-06-07T21:07:31.290032978Z"
+                  State: "running"
+                  Message: "started"
+                  ContainerStatus:
+                    ContainerID: "e5d62702a1b48d01c3e02ca1e0212a250801fa8d67caca0b6f35919ebc12f035"
+                    PID: 677
+                DesiredState: "running"
+                NetworksAttachments:
+                  - Network:
+                      ID: "4qvuz4ko70xaltuqbt8956gd1"
+                      Version:
+                        Index: 18
+                      CreatedAt: "2016-06-07T20:31:11.912919752Z"
+                      UpdatedAt: "2016-06-07T21:07:29.955277358Z"
+                      Spec:
+                        Name: "ingress"
+                        Labels:
+                          com.docker.swarm.internal: "true"
+                        DriverConfiguration: {}
+                        IPAMOptions:
+                          Driver: {}
+                          Configs:
+                            - Subnet: "10.255.0.0/16"
+                              Gateway: "10.255.0.1"
+                      DriverState:
+                        Name: "overlay"
+                        Options:
+                          com.docker.network.driver.overlay.vxlanid_list: "256"
+                      IPAMOptions:
+                        Driver:
+                          Name: "default"
+                        Configs:
+                          - Subnet: "10.255.0.0/16"
+                            Gateway: "10.255.0.1"
+                    Addresses:
+                      - "10.255.0.10/16"
+              - ID: "1yljwbmlr8er2waf8orvqpwms"
+                Version:
+                  Index: 30
+                CreatedAt: "2016-06-07T21:07:30.019104782Z"
+                UpdatedAt: "2016-06-07T21:07:30.231958098Z"
+                Name: "hopeful_cori"
+                Spec:
+                  ContainerSpec:
+                    Image: "redis"
+                  Resources:
+                    Limits: {}
+                    Reservations: {}
+                  RestartPolicy:
+                    Condition: "any"
+                    MaxAttempts: 0
+                  Placement: {}
+                ServiceID: "9mnpnzenvg8p8tdbtq4wvbkcz"
+                Slot: 1
+                NodeID: "60gvrl6tm78dmak4yl7srz94v"
+                Status:
+                  Timestamp: "2016-06-07T21:07:30.202183143Z"
+                  State: "shutdown"
+                  Message: "shutdown"
+                  ContainerStatus:
+                    ContainerID: "1cf8d63d18e79668b0004a4be4c6ee58cddfad2dae29506d8781581d0688a213"
+                DesiredState: "shutdown"
+                NetworksAttachments:
+                  - Network:
+                      ID: "4qvuz4ko70xaltuqbt8956gd1"
+                      Version:
+                        Index: 18
+                      CreatedAt: "2016-06-07T20:31:11.912919752Z"
+                      UpdatedAt: "2016-06-07T21:07:29.955277358Z"
+                      Spec:
+                        Name: "ingress"
+                        Labels:
+                          com.docker.swarm.internal: "true"
+                        DriverConfiguration: {}
+                        IPAMOptions:
+                          Driver: {}
+                          Configs:
+                            - Subnet: "10.255.0.0/16"
+                              Gateway: "10.255.0.1"
+                      DriverState:
+                        Name: "overlay"
+                        Options:
+                          com.docker.network.driver.overlay.vxlanid_list: "256"
+                      IPAMOptions:
+                        Driver:
+                          Name: "default"
+                        Configs:
+                          - Subnet: "10.255.0.0/16"
+                            Gateway: "10.255.0.1"
+                    Addresses:
+                      - "10.255.0.5/16"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        503:
+          description: "node is not part of a swarm"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "filters"
+          in: "query"
+          type: "string"
+          description: |
+            A JSON encoded value of the filters (a `map[string][]string`) to
+            process on the tasks list.
+
+            Available filters:
+
+            - `desired-state=(running | shutdown | accepted)`
+            - `id=<task id>`
+            - `label=key` or `label="key=value"`
+            - `name=<task name>`
+            - `node=<node id or name>`
+            - `service=<service name>`
+      tags: ["Task"]
+  /tasks/{id}:
+    get:
+      summary: "Inspect a task"
+      operationId: "TaskInspect"
+      produces:
+        - "application/json"
+      responses:
+        200:
+          description: "no error"
+          schema:
+            $ref: "#/definitions/Task"
+        404:
+          description: "no such task"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        503:
+          description: "node is not part of a swarm"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "id"
+          in: "path"
+          description: "ID of the task"
+          required: true
+          type: "string"
+      tags: ["Task"]
+  /tasks/{id}/logs:
+    get:
+      summary: "Get task logs"
+      description: |
+        Get `stdout` and `stderr` logs from a task.
+        See also [`/containers/{id}/logs`](#operation/ContainerLogs).
+
+        **Note**: This endpoint works only for services with the `local`,
+        `json-file` or `journald` logging drivers.
+      operationId: "TaskLogs"
+      responses:
+        200:
+          description: "logs returned as a stream in response body"
+          schema:
+            type: "string"
+            format: "binary"
+        404:
+          description: "no such task"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+          examples:
+            application/json:
+              message: "No such task: c2ada9df5af8"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        503:
+          description: "node is not part of a swarm"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "id"
+          in: "path"
+          required: true
+          description: "ID of the task"
+          type: "string"
+        - name: "details"
+          in: "query"
+          description: "Show task context and extra details provided to logs."
+          type: "boolean"
+          default: false
+        - name: "follow"
+          in: "query"
+          description: "Keep connection after returning logs."
+          type: "boolean"
+          default: false
+        - name: "stdout"
+          in: "query"
+          description: "Return logs from `stdout`"
+          type: "boolean"
+          default: false
+        - name: "stderr"
+          in: "query"
+          description: "Return logs from `stderr`"
+          type: "boolean"
+          default: false
+        - name: "since"
+          in: "query"
+          description: "Only return logs since this time, as a UNIX timestamp"
+          type: "integer"
+          default: 0
+        - name: "timestamps"
+          in: "query"
+          description: "Add timestamps to every log line"
+          type: "boolean"
+          default: false
+        - name: "tail"
+          in: "query"
+          description: |
+            Only return this number of log lines from the end of the logs.
+            Specify as an integer or `all` to output all log lines.
+          type: "string"
+          default: "all"
+      tags: ["Task"]
+  /secrets:
+    get:
+      summary: "List secrets"
+      operationId: "SecretList"
+      produces:
+        - "application/json"
+      responses:
+        200:
+          description: "no error"
+          schema:
+            type: "array"
+            items:
+              $ref: "#/definitions/Secret"
+            example:
+              - ID: "blt1owaxmitz71s9v5zh81zun"
+                Version:
+                  Index: 85
+                CreatedAt: "2017-07-20T13:55:28.678958722Z"
+                UpdatedAt: "2017-07-20T13:55:28.678958722Z"
+                Spec:
+                  Name: "mysql-passwd"
+                  Labels:
+                    some.label: "some.value"
+                  Driver:
+                    Name: "secret-bucket"
+                    Options:
+                      OptionA: "value for driver option A"
+                      OptionB: "value for driver option B"
+              - ID: "ktnbjxoalbkvbvedmg1urrz8h"
+                Version:
+                  Index: 11
+                CreatedAt: "2016-11-05T01:20:17.327670065Z"
+                UpdatedAt: "2016-11-05T01:20:17.327670065Z"
+                Spec:
+                  Name: "app-dev.crt"
+                  Labels:
+                    foo: "bar"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        503:
+          description: "node is not part of a swarm"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "filters"
+          in: "query"
+          type: "string"
+          description: |
+            A JSON encoded value of the filters (a `map[string][]string`) to
+            process on the secrets list.
+
+            Available filters:
+
+            - `id=<secret id>`
+            - `label=<key>` or `label=<key>=value`
+            - `name=<secret name>`
+            - `names=<secret name>`
+      tags: ["Secret"]
+  /secrets/create:
+    post:
+      summary: "Create a secret"
+      operationId: "SecretCreate"
+      consumes:
+        - "application/json"
+      produces:
+        - "application/json"
+      responses:
+        201:
+          description: "no error"
+          schema:
+            $ref: "#/definitions/IdResponse"
+        409:
+          description: "name conflicts with an existing object"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        503:
+          description: "node is not part of a swarm"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "body"
+          in: "body"
+          schema:
+            allOf:
+              - $ref: "#/definitions/SecretSpec"
+              - type: "object"
+                example:
+                  Name: "app-key.crt"
+                  Labels:
+                    foo: "bar"
+                  Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg=="
+                  Driver:
+                    Name: "secret-bucket"
+                    Options:
+                      OptionA: "value for driver option A"
+                      OptionB: "value for driver option B"
+      tags: ["Secret"]
+  /secrets/{id}:
+    get:
+      summary: "Inspect a secret"
+      operationId: "SecretInspect"
+      produces:
+        - "application/json"
+      responses:
+        200:
+          description: "no error"
+          schema:
+            $ref: "#/definitions/Secret"
+          examples:
+            application/json:
+              ID: "ktnbjxoalbkvbvedmg1urrz8h"
+              Version:
+                Index: 11
+              CreatedAt: "2016-11-05T01:20:17.327670065Z"
+              UpdatedAt: "2016-11-05T01:20:17.327670065Z"
+              Spec:
+                Name: "app-dev.crt"
+                Labels:
+                  foo: "bar"
+                Driver:
+                  Name: "secret-bucket"
+                  Options:
+                    OptionA: "value for driver option A"
+                    OptionB: "value for driver option B"
+
+        404:
+          description: "secret not found"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        503:
+          description: "node is not part of a swarm"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "id"
+          in: "path"
+          required: true
+          type: "string"
+          description: "ID of the secret"
+      tags: ["Secret"]
+    delete:
+      summary: "Delete a secret"
+      operationId: "SecretDelete"
+      produces:
+        - "application/json"
+      responses:
+        204:
+          description: "no error"
+        404:
+          description: "secret not found"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        503:
+          description: "node is not part of a swarm"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "id"
+          in: "path"
+          required: true
+          type: "string"
+          description: "ID of the secret"
+      tags: ["Secret"]
+  /secrets/{id}/update:
+    post:
+      summary: "Update a Secret"
+      operationId: "SecretUpdate"
+      responses:
+        200:
+          description: "no error"
+        400:
+          description: "bad parameter"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        404:
+          description: "no such secret"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        503:
+          description: "node is not part of a swarm"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "id"
+          in: "path"
+          description: "The ID or name of the secret"
+          type: "string"
+          required: true
+        - name: "body"
+          in: "body"
+          schema:
+            $ref: "#/definitions/SecretSpec"
+          description: |
+            The spec of the secret to update. Currently, only the Labels field
+            can be updated. All other fields must remain unchanged from the
+            [SecretInspect endpoint](#operation/SecretInspect) response values.
+        - name: "version"
+          in: "query"
+          description: |
+            The version number of the secret object being updated. This is
+            required to avoid conflicting writes.
+          type: "integer"
+          format: "int64"
+          required: true
+      tags: ["Secret"]
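The required `version` query parameter above implements optimistic concurrency: a client echoes back the version it observed at inspect time, and the daemon rejects the update if the object changed in between. A minimal sketch of that compare-and-swap discipline (plain Go, no Docker client; the type and method names are illustrative):

```go
package main

import (
	"errors"
	"fmt"
)

// secretObject mimics the versioned objects returned by the inspect
// endpoints: every successful update bumps Version.
type secretObject struct {
	Version uint64
	Labels  map[string]string
}

// errConflict stands in for the error the daemon reports on a
// version mismatch.
var errConflict = errors.New("update out of sequence")

// update applies labels only if the caller echoes the version it read,
// mirroring the `version` query parameter of the update endpoints.
func (s *secretObject) update(version uint64, labels map[string]string) error {
	if version != s.Version {
		return errConflict
	}
	s.Labels = labels
	s.Version++ // subsequent writers must use the new version
	return nil
}

func main() {
	s := &secretObject{Version: 11}
	// Read-modify-write using the version observed at read time.
	if err := s.update(11, map[string]string{"foo": "bar"}); err != nil {
		panic(err)
	}
	// A second writer still holding version 11 is rejected.
	fmt.Println(s.update(11, map[string]string{"foo": "baz"}))
}
```

The same version-echo pattern applies to the `/configs/{id}/update` endpoint below.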
+  /configs:
+    get:
+      summary: "List configs"
+      operationId: "ConfigList"
+      produces:
+        - "application/json"
+      responses:
+        200:
+          description: "no error"
+          schema:
+            type: "array"
+            items:
+              $ref: "#/definitions/Config"
+            example:
+              - ID: "ktnbjxoalbkvbvedmg1urrz8h"
+                Version:
+                  Index: 11
+                CreatedAt: "2016-11-05T01:20:17.327670065Z"
+                UpdatedAt: "2016-11-05T01:20:17.327670065Z"
+                Spec:
+                  Name: "server.conf"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        503:
+          description: "node is not part of a swarm"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "filters"
+          in: "query"
+          type: "string"
+          description: |
+            A JSON encoded value of the filters (a `map[string][]string`) to
+            process on the configs list.
+
+            Available filters:
+
+            - `id=<config id>`
+            - `label=<key>` or `label=<key>=value`
+            - `name=<config name>`
+            - `names=<config name>`
+      tags: ["Config"]
+  /configs/create:
+    post:
+      summary: "Create a config"
+      operationId: "ConfigCreate"
+      consumes:
+        - "application/json"
+      produces:
+        - "application/json"
+      responses:
+        201:
+          description: "no error"
+          schema:
+            $ref: "#/definitions/IdResponse"
+        409:
+          description: "name conflicts with an existing object"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        503:
+          description: "node is not part of a swarm"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "body"
+          in: "body"
+          schema:
+            allOf:
+              - $ref: "#/definitions/ConfigSpec"
+              - type: "object"
+                example:
+                  Name: "server.conf"
+                  Labels:
+                    foo: "bar"
+                  Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg=="
+      tags: ["Config"]
+  /configs/{id}:
+    get:
+      summary: "Inspect a config"
+      operationId: "ConfigInspect"
+      produces:
+        - "application/json"
+      responses:
+        200:
+          description: "no error"
+          schema:
+            $ref: "#/definitions/Config"
+          examples:
+            application/json:
+              ID: "ktnbjxoalbkvbvedmg1urrz8h"
+              Version:
+                Index: 11
+              CreatedAt: "2016-11-05T01:20:17.327670065Z"
+              UpdatedAt: "2016-11-05T01:20:17.327670065Z"
+              Spec:
+                Name: "server.conf"
+        404:
+          description: "config not found"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        503:
+          description: "node is not part of a swarm"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "id"
+          in: "path"
+          required: true
+          type: "string"
+          description: "ID of the config"
+      tags: ["Config"]
+    delete:
+      summary: "Delete a config"
+      operationId: "ConfigDelete"
+      produces:
+        - "application/json"
+      responses:
+        204:
+          description: "no error"
+        404:
+          description: "config not found"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        503:
+          description: "node is not part of a swarm"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "id"
+          in: "path"
+          required: true
+          type: "string"
+          description: "ID of the config"
+      tags: ["Config"]
+  /configs/{id}/update:
+    post:
+      summary: "Update a Config"
+      operationId: "ConfigUpdate"
+      responses:
+        200:
+          description: "no error"
+        400:
+          description: "bad parameter"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        404:
+          description: "no such config"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        503:
+          description: "node is not part of a swarm"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "id"
+          in: "path"
+          description: "The ID or name of the config"
+          type: "string"
+          required: true
+        - name: "body"
+          in: "body"
+          schema:
+            $ref: "#/definitions/ConfigSpec"
+          description: |
+            The spec of the config to update. Currently, only the Labels field
+            can be updated. All other fields must remain unchanged from the
+            [ConfigInspect endpoint](#operation/ConfigInspect) response values.
+        - name: "version"
+          in: "query"
+          description: |
+            The version number of the config object being updated. This is
+            required to avoid conflicting writes.
+          type: "integer"
+          format: "int64"
+          required: true
+      tags: ["Config"]
+  /distribution/{name}/json:
+    get:
+      summary: "Get image information from the registry"
+      description: |
+        Return image digest and platform information by contacting the registry.
+      operationId: "DistributionInspect"
+      produces:
+        - "application/json"
+      responses:
+        200:
+          description: "descriptor and platform information"
+          schema:
+            type: "object"
+            x-go-name: DistributionInspect
+            title: "DistributionInspectResponse"
+            required: [Descriptor, Platforms]
+            properties:
+              Descriptor:
+                type: "object"
+                description: |
+                  A descriptor struct containing digest, media type, and size.
+                properties:
+                  MediaType:
+                    type: "string"
+                  Size:
+                    type: "integer"
+                    format: "int64"
+                  Digest:
+                    type: "string"
+                  URLs:
+                    type: "array"
+                    items:
+                      type: "string"
+              Platforms:
+                type: "array"
+                description: |
+                  An array containing all platforms supported by the image.
+                items:
+                  type: "object"
+                  properties:
+                    Architecture:
+                      type: "string"
+                    OS:
+                      type: "string"
+                    OSVersion:
+                      type: "string"
+                    OSFeatures:
+                      type: "array"
+                      items:
+                        type: "string"
+                    Variant:
+                      type: "string"
+                    Features:
+                      type: "array"
+                      items:
+                        type: "string"
+          examples:
+            application/json:
+              Descriptor:
+                MediaType: "application/vnd.docker.distribution.manifest.v2+json"
+                Digest: "sha256:c0537ff6a5218ef531ece93d4984efc99bbf3f7497c0a7726c88e2bb7584dc96"
+                Size: 3987495
+                URLs:
+                  - ""
+              Platforms:
+                - Architecture: "amd64"
+                  OS: "linux"
+                  OSVersion: ""
+                  OSFeatures:
+                    - ""
+                  Variant: ""
+                  Features:
+                    - ""
+        401:
+          description: "Failed authentication or no image found"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+          examples:
+            application/json:
+              message: "No such image: someimage (tag: latest)"
+        500:
+          description: "Server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      parameters:
+        - name: "name"
+          in: "path"
+          description: "Image name or ID"
+          type: "string"
+          required: true
+      tags: ["Distribution"]
+  /session:
+    post:
+      summary: "Initialize interactive session"
+      description: |
+        Start a new interactive session with a server. A session allows the
+        server to call back to the client for advanced capabilities.
+
+        ### Hijacking
+
+        This endpoint hijacks the HTTP connection, upgrading it to an HTTP/2
+        transport that allows the client to expose gRPC services on that
+        connection.
+
+        For example, the client sends this request to upgrade the connection:
+
+        ```
+        POST /session HTTP/1.1
+        Upgrade: h2c
+        Connection: Upgrade
+        ```
+
+        The Docker daemon responds with a `101 UPGRADED` response, followed by
+        the raw stream:
+
+        ```
+        HTTP/1.1 101 UPGRADED
+        Connection: Upgrade
+        Upgrade: h2c
+        ```
+      operationId: "Session"
+      produces:
+        - "application/vnd.docker.raw-stream"
+      responses:
+        101:
+          description: "no error, hijacking successful"
+        400:
+          description: "bad parameter"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+        500:
+          description: "server error"
+          schema:
+            $ref: "#/definitions/ErrorResponse"
+      tags: ["Session"]
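The `filters` parameters on the list endpoints above are a JSON-encoded `map[string][]string` sent as a query-string value. A sketch of building one with the standard library only (the `/secrets` path comes from the spec above; the helper name is illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/url"
)

// encodeFilters JSON-encodes a filter map the way the list endpoints
// expect it in the `filters` query parameter.
func encodeFilters(f map[string][]string) (string, error) {
	b, err := json.Marshal(f)
	if err != nil {
		return "", err
	}
	return string(b), nil
}

func main() {
	raw, err := encodeFilters(map[string][]string{"name": {"mysql-passwd"}})
	if err != nil {
		panic(err)
	}
	// The JSON value is then percent-encoded into the query string.
	q := url.Values{}
	q.Set("filters", raw)
	fmt.Println("/secrets?" + q.Encode())
	// → /secrets?filters=%7B%22name%22%3A%5B%22mysql-passwd%22%5D%7D
}
```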
diff --git a/vendor/github.com/docker/docker/api/types/auth.go b/vendor/github.com/docker/docker/api/types/auth.go
new file mode 100644
index 0000000000000..ddf15bb182dd7
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/auth.go
@@ -0,0 +1,22 @@
+package types // import "github.com/docker/docker/api/types"
+
+// AuthConfig contains authorization information for connecting to a Registry
+type AuthConfig struct {
+	Username string `json:"username,omitempty"`
+	Password string `json:"password,omitempty"`
+	Auth     string `json:"auth,omitempty"`
+
+	// Email is an optional value associated with the username.
+	// This field is deprecated and will be removed in a later
+	// version of docker.
+	Email string `json:"email,omitempty"`
+
+	ServerAddress string `json:"serveraddress,omitempty"`
+
+	// IdentityToken is used to authenticate the user and get
+	// an access token for the registry.
+	IdentityToken string `json:"identitytoken,omitempty"`
+
+	// RegistryToken is a bearer token to be sent to a registry
+	RegistryToken string `json:"registrytoken,omitempty"`
+}
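Clients typically transmit an `AuthConfig` to the daemon as the `X-Registry-Auth` header: the struct is JSON-marshalled, then base64url-encoded. A sketch using a local mirror of the relevant fields (standard library only; the JSON tags match the vendored struct, the helper name is an assumption):

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
)

// authConfig mirrors the fields of types.AuthConfig used for
// password-based logins.
type authConfig struct {
	Username      string `json:"username,omitempty"`
	Password      string `json:"password,omitempty"`
	ServerAddress string `json:"serveraddress,omitempty"`
}

// encodeAuthHeader produces the value for the X-Registry-Auth header:
// base64url-encoded JSON of the auth configuration.
func encodeAuthHeader(a authConfig) (string, error) {
	buf, err := json.Marshal(a)
	if err != nil {
		return "", err
	}
	return base64.URLEncoding.EncodeToString(buf), nil
}

func main() {
	h, err := encodeAuthHeader(authConfig{Username: "jane", Password: "s3cret"})
	if err != nil {
		panic(err)
	}
	fmt.Println("X-Registry-Auth:", h)
}
```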
diff --git a/vendor/github.com/docker/docker/api/types/blkiodev/blkio.go b/vendor/github.com/docker/docker/api/types/blkiodev/blkio.go
new file mode 100644
index 0000000000000..bf3463b90e711
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/blkiodev/blkio.go
@@ -0,0 +1,23 @@
+package blkiodev // import "github.com/docker/docker/api/types/blkiodev"
+
+import "fmt"
+
+// WeightDevice is a structure that holds a device:weight pair.
+type WeightDevice struct {
+	Path   string
+	Weight uint16
+}
+
+func (w *WeightDevice) String() string {
+	return fmt.Sprintf("%s:%d", w.Path, w.Weight)
+}
+
+// ThrottleDevice is a structure that holds a device:rate_per_second pair.
+type ThrottleDevice struct {
+	Path string
+	Rate uint64
+}
+
+func (t *ThrottleDevice) String() string {
+	return fmt.Sprintf("%s:%d", t.Path, t.Rate)
+}
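The two `String` methods above render a pair in the `device:value` form the CLI prints and accepts. A sketch of the inverse direction (a local reimplementation for illustration; the real flag parser lives in the Docker CLI, not this package):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// weightDevice mirrors blkiodev.WeightDevice.
type weightDevice struct {
	Path   string
	Weight uint16
}

func (w weightDevice) String() string { return fmt.Sprintf("%s:%d", w.Path, w.Weight) }

// parseWeightDevice reverses String: it splits a "path:weight" pair
// such as a --blkio-weight-device argument.
func parseWeightDevice(s string) (weightDevice, error) {
	path, val, ok := strings.Cut(s, ":")
	if !ok {
		return weightDevice{}, fmt.Errorf("bad format: %q", s)
	}
	w, err := strconv.ParseUint(val, 10, 16)
	if err != nil {
		return weightDevice{}, err
	}
	return weightDevice{Path: path, Weight: uint16(w)}, nil
}

func main() {
	d, err := parseWeightDevice("/dev/sda:500")
	if err != nil {
		panic(err)
	}
	fmt.Println(d) // round-trips to /dev/sda:500
}
```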
diff --git a/vendor/github.com/docker/docker/api/types/client.go b/vendor/github.com/docker/docker/api/types/client.go
new file mode 100644
index 0000000000000..9c464b73e25d6
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/client.go
@@ -0,0 +1,419 @@
+package types // import "github.com/docker/docker/api/types"
+
+import (
+	"bufio"
+	"io"
+	"net"
+
+	"github.com/docker/docker/api/types/container"
+	"github.com/docker/docker/api/types/filters"
+	units "github.com/docker/go-units"
+)
+
+// CheckpointCreateOptions holds parameters to create a checkpoint from a container
+type CheckpointCreateOptions struct {
+	CheckpointID  string
+	CheckpointDir string
+	Exit          bool
+}
+
+// CheckpointListOptions holds parameters to list checkpoints for a container
+type CheckpointListOptions struct {
+	CheckpointDir string
+}
+
+// CheckpointDeleteOptions holds parameters to delete a checkpoint from a container
+type CheckpointDeleteOptions struct {
+	CheckpointID  string
+	CheckpointDir string
+}
+
+// ContainerAttachOptions holds parameters to attach to a container.
+type ContainerAttachOptions struct {
+	Stream     bool
+	Stdin      bool
+	Stdout     bool
+	Stderr     bool
+	DetachKeys string
+	Logs       bool
+}
+
+// ContainerCommitOptions holds parameters to commit changes into a container.
+type ContainerCommitOptions struct {
+	Reference string
+	Comment   string
+	Author    string
+	Changes   []string
+	Pause     bool
+	Config    *container.Config
+}
+
+// ContainerExecInspect holds information returned by exec inspect.
+type ContainerExecInspect struct {
+	ExecID      string `json:"ID"`
+	ContainerID string
+	Running     bool
+	ExitCode    int
+	Pid         int
+}
+
+// ContainerListOptions holds parameters to list containers with.
+type ContainerListOptions struct {
+	Quiet   bool
+	Size    bool
+	All     bool
+	Latest  bool
+	Since   string
+	Before  string
+	Limit   int
+	Filters filters.Args
+}
+
+// ContainerLogsOptions holds parameters to filter logs with.
+type ContainerLogsOptions struct {
+	ShowStdout bool
+	ShowStderr bool
+	Since      string
+	Until      string
+	Timestamps bool
+	Follow     bool
+	Tail       string
+	Details    bool
+}
+
+// ContainerRemoveOptions holds parameters to remove containers.
+type ContainerRemoveOptions struct {
+	RemoveVolumes bool
+	RemoveLinks   bool
+	Force         bool
+}
+
+// ContainerStartOptions holds parameters to start containers.
+type ContainerStartOptions struct {
+	CheckpointID  string
+	CheckpointDir string
+}
+
+// CopyToContainerOptions holds information
+// about files to copy into a container
+type CopyToContainerOptions struct {
+	AllowOverwriteDirWithFile bool
+	CopyUIDGID                bool
+}
+
+// EventsOptions holds parameters to filter events with.
+type EventsOptions struct {
+	Since   string
+	Until   string
+	Filters filters.Args
+}
+
+// NetworkListOptions holds parameters to filter the list of networks with.
+type NetworkListOptions struct {
+	Filters filters.Args
+}
+
+// HijackedResponse holds connection information for a hijacked request.
+type HijackedResponse struct {
+	Conn   net.Conn
+	Reader *bufio.Reader
+}
+
+// Close closes the hijacked connection and reader.
+func (h *HijackedResponse) Close() {
+	h.Conn.Close()
+}
+
+// CloseWriter is an interface for connections that can close their
+// write side, signalling that no more data will be written.
+type CloseWriter interface {
+	CloseWrite() error
+}
+
+// CloseWrite closes the write side of the hijacked connection, if supported.
+func (h *HijackedResponse) CloseWrite() error {
+	if conn, ok := h.Conn.(CloseWriter); ok {
+		return conn.CloseWrite()
+	}
+	return nil
+}
+
+// ImageBuildOptions holds the information
+// necessary to build images.
+type ImageBuildOptions struct {
+	Tags           []string
+	SuppressOutput bool
+	RemoteContext  string
+	NoCache        bool
+	Remove         bool
+	ForceRemove    bool
+	PullParent     bool
+	Isolation      container.Isolation
+	CPUSetCPUs     string
+	CPUSetMems     string
+	CPUShares      int64
+	CPUQuota       int64
+	CPUPeriod      int64
+	Memory         int64
+	MemorySwap     int64
+	CgroupParent   string
+	NetworkMode    string
+	ShmSize        int64
+	Dockerfile     string
+	Ulimits        []*units.Ulimit
+	// BuildArgs needs to be a *string instead of just a string so that
+	// we can tell the difference between "" (empty string) and no value
+	// at all (nil). See the parsing of buildArgs in
+	// api/server/router/build/build_routes.go for even more info.
+	BuildArgs   map[string]*string
+	AuthConfigs map[string]AuthConfig
+	Context     io.Reader
+	Labels      map[string]string
+	// squash the resulting image's layers to the parent
+	// preserves the original image and creates a new one from the parent with all
+	// the changes applied to a single layer
+	Squash bool
+	// CacheFrom specifies images that are used for matching cache. Images
+	// specified here do not need to have a valid parent chain to match cache.
+	CacheFrom   []string
+	SecurityOpt []string
+	ExtraHosts  []string // List of extra hosts
+	Target      string
+	SessionID   string
+	Platform    string
+	// Version specifies the version of the underlying builder to use
+	Version BuilderVersion
+	// BuildID is an optional identifier that can be passed together with the
+	// build request. The same identifier can be used to gracefully cancel the
+	// build with the cancel request.
+	BuildID string
+	// Outputs defines configurations for exporting build results. Only supported
+	// in BuildKit mode
+	Outputs []ImageBuildOutput
+}
+
+// ImageBuildOutput defines configuration for exporting a build result
+type ImageBuildOutput struct {
+	Type  string
+	Attrs map[string]string
+}
+
+// BuilderVersion sets the version of underlying builder to use
+type BuilderVersion string
+
+const (
+	// BuilderV1 is the first generation builder in docker daemon
+	BuilderV1 BuilderVersion = "1"
+	// BuilderBuildKit is builder based on moby/buildkit project
+	BuilderBuildKit BuilderVersion = "2"
+)
+
+// ImageBuildResponse holds information
+// returned by a server after building
+// an image.
+type ImageBuildResponse struct {
+	Body   io.ReadCloser
+	OSType string
+}
+
+// ImageCreateOptions holds information to create images.
+type ImageCreateOptions struct {
+	RegistryAuth string // RegistryAuth is the base64 encoded credentials for the registry.
+	Platform     string // Platform is the target platform of the image if it needs to be pulled from the registry.
+}
+
+// ImageImportSource holds source information for ImageImport
+type ImageImportSource struct {
+	Source     io.Reader // Source is the data to send to the server to create this image from. You must set SourceName to "-" to leverage this.
+	SourceName string    // SourceName is the name of the image to pull. Set to "-" to leverage the Source attribute.
+}
+
+// ImageImportOptions holds information to import images from the client host.
+type ImageImportOptions struct {
+	Tag      string   // Tag is the name to tag this image with. This attribute is deprecated.
+	Message  string   // Message is the message to tag the image with
+	Changes  []string // Changes are the raw changes to apply to this image
+	Platform string   // Platform is the target platform of the image
+}
+
+// ImageListOptions holds parameters to filter the list of images with.
+type ImageListOptions struct {
+	All     bool
+	Filters filters.Args
+}
+
+// ImageLoadResponse returns information to the client about a load process.
+type ImageLoadResponse struct {
+	// Body must be closed to avoid a resource leak
+	Body io.ReadCloser
+	JSON bool
+}
+
+// ImagePullOptions holds information to pull images.
+type ImagePullOptions struct {
+	All           bool
+	RegistryAuth  string // RegistryAuth is the base64 encoded credentials for the registry
+	PrivilegeFunc RequestPrivilegeFunc
+	Platform      string
+}
+
+// RequestPrivilegeFunc is a function interface that
+// clients can supply to retry operations after
+// getting an authorization error.
+// This function returns the registry authentication
+// header value in base 64 format, or an error
+// if the privilege request fails.
+type RequestPrivilegeFunc func() (string, error)
+
+// ImagePushOptions holds information to push images.
+type ImagePushOptions ImagePullOptions
+
+// ImageRemoveOptions holds parameters to remove images.
+type ImageRemoveOptions struct {
+	Force         bool
+	PruneChildren bool
+}
+
+// ImageSearchOptions holds parameters to search images with.
+type ImageSearchOptions struct {
+	RegistryAuth  string
+	PrivilegeFunc RequestPrivilegeFunc
+	Filters       filters.Args
+	Limit         int
+}
+
+// ResizeOptions holds parameters to resize a tty.
+// It can be used to resize container ttys and
+// exec process ttys too.
+type ResizeOptions struct {
+	Height uint
+	Width  uint
+}
+
+// NodeListOptions holds parameters to list nodes with.
+type NodeListOptions struct {
+	Filters filters.Args
+}
+
+// NodeRemoveOptions holds parameters to remove nodes with.
+type NodeRemoveOptions struct {
+	Force bool
+}
+
+// ServiceCreateOptions contains the options to use when creating a service.
+type ServiceCreateOptions struct {
+	// EncodedRegistryAuth is the encoded registry authorization credentials to
+	// use when creating the service.
+	//
+	// This field follows the format of the X-Registry-Auth header.
+	EncodedRegistryAuth string
+
+	// QueryRegistry indicates whether the service create requires
+	// contacting a registry. A registry may be contacted to retrieve
+	// the image digest and manifest, which in turn can be used to update
+	// platform or other information about the service.
+	QueryRegistry bool
+}
+
+// ServiceCreateResponse contains the information returned to a client
+// on the creation of a new service.
+type ServiceCreateResponse struct {
+	// ID is the ID of the created service.
+	ID string
+	// Warnings is a set of non-fatal warning messages to pass on to the user.
+	Warnings []string `json:",omitempty"`
+}
+
+// Values for RegistryAuthFrom in ServiceUpdateOptions
+const (
+	RegistryAuthFromSpec         = "spec"
+	RegistryAuthFromPreviousSpec = "previous-spec"
+)
+
+// ServiceUpdateOptions contains the options to be used for updating services.
+type ServiceUpdateOptions struct {
+	// EncodedRegistryAuth is the encoded registry authorization credentials to
+	// use when updating the service.
+	//
+	// This field follows the format of the X-Registry-Auth header.
+	EncodedRegistryAuth string
+
+	// TODO(stevvooe): Consider moving the version parameter of ServiceUpdate
+	// into this field. While it does open API users up to racy writes, most
+	// users may not need that level of consistency in practice.
+
+	// RegistryAuthFrom specifies where to find the registry authorization
+	// credentials if they are not given in EncodedRegistryAuth. Valid
+	// values are "spec" and "previous-spec".
+	RegistryAuthFrom string
+
+	// Rollback indicates whether a server-side rollback should be
+	// performed. When this is set, the provided spec will be ignored.
+	// The valid values are "previous" and "none". An empty value is the
+	// same as "none".
+	Rollback string
+
+	// QueryRegistry indicates whether the service update requires
+	// contacting a registry. A registry may be contacted to retrieve
+	// the image digest and manifest, which in turn can be used to update
+	// platform or other information about the service.
+	QueryRegistry bool
+}
+
+// ServiceListOptions holds parameters to list services with.
+type ServiceListOptions struct {
+	Filters filters.Args
+
+	// Status indicates whether the server should include the service task
+	// count of running and desired tasks.
+	Status bool
+}
+
+// ServiceInspectOptions holds parameters related to the "service inspect"
+// operation.
+type ServiceInspectOptions struct {
+	InsertDefaults bool
+}
+
+// TaskListOptions holds parameters to list tasks with.
+type TaskListOptions struct {
+	Filters filters.Args
+}
+
+// PluginRemoveOptions holds parameters to remove plugins.
+type PluginRemoveOptions struct {
+	Force bool
+}
+
+// PluginEnableOptions holds parameters to enable plugins.
+type PluginEnableOptions struct {
+	Timeout int
+}
+
+// PluginDisableOptions holds parameters to disable plugins.
+type PluginDisableOptions struct {
+	Force bool
+}
+
+// PluginInstallOptions holds parameters to install a plugin.
+type PluginInstallOptions struct {
+	Disabled              bool
+	AcceptAllPermissions  bool
+	RegistryAuth          string // RegistryAuth is the base64 encoded credentials for the registry
+	RemoteRef             string // RemoteRef is the plugin name on the registry
+	PrivilegeFunc         RequestPrivilegeFunc
+	AcceptPermissionsFunc func(PluginPrivileges) (bool, error)
+	Args                  []string
+}
+
+// SwarmUnlockKeyResponse contains the response for Engine API:
+// GET /swarm/unlockkey
+type SwarmUnlockKeyResponse struct {
+	// UnlockKey is the unlock key in ASCII-armored format.
+	UnlockKey string
+}
+
+// PluginCreateOptions holds all options for plugin create.
+type PluginCreateOptions struct {
+	RepoName string
+}
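The `BuildArgs` field above hinges on the difference between a nil pointer (no value supplied) and a pointer to the empty string. JSON keeps the distinction visible on the wire, which is what the comment about `build_routes.go` relies on. A small demonstration with the standard library (the arg names are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// encodeArgs marshals a build-arg map; a nil *string and a pointer to
// "" serialize differently (null vs ""), preserving the distinction.
func encodeArgs(m map[string]*string) string {
	b, err := json.Marshal(m)
	if err != nil {
		panic(err)
	}
	return string(b)
}

func main() {
	empty := ""
	buildArgs := map[string]*string{
		"HTTP_PROXY": nil,    // arg named with no value: resolved daemon-side
		"APP_ENV":    &empty, // explicitly set to the empty string
	}
	fmt.Println(encodeArgs(buildArgs)) // {"APP_ENV":"","HTTP_PROXY":null}
}
```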
diff --git a/vendor/github.com/docker/docker/api/types/configs.go b/vendor/github.com/docker/docker/api/types/configs.go
new file mode 100644
index 0000000000000..3dd133a3a58a4
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/configs.go
@@ -0,0 +1,66 @@
+package types // import "github.com/docker/docker/api/types"
+
+import (
+	"github.com/docker/docker/api/types/container"
+	"github.com/docker/docker/api/types/network"
+	specs "github.com/opencontainers/image-spec/specs-go/v1"
+)
+
+// configs holds structs used for internal communication between the
+// frontend (such as an http server) and the backend (such as the
+// docker daemon).
+
+// ContainerCreateConfig is the parameter set to ContainerCreate()
+type ContainerCreateConfig struct {
+	Name             string
+	Config           *container.Config
+	HostConfig       *container.HostConfig
+	NetworkingConfig *network.NetworkingConfig
+	Platform         *specs.Platform
+	AdjustCPUShares  bool
+}
+
+// ContainerRmConfig holds arguments for the container remove
+// operation. This struct is used to tell the backend what operations
+// to perform.
+type ContainerRmConfig struct {
+	ForceRemove, RemoveVolume, RemoveLink bool
+}
+
+// ExecConfig is a small subset of the Config struct that holds the configuration
+// for the exec feature of docker.
+type ExecConfig struct {
+	User         string   // User that will run the command
+	Privileged   bool     // Is the container in privileged mode
+	Tty          bool     // Attach standard streams to a tty.
+	AttachStdin  bool     // Attach the standard input, making user interaction possible
+	AttachStderr bool     // Attach the standard error
+	AttachStdout bool     // Attach the standard output
+	Detach       bool     // Execute in detach mode
+	DetachKeys   string   // Escape keys for detach
+	Env          []string // Environment variables
+	WorkingDir   string   // Working directory
+	Cmd          []string // Execution commands and args
+}
+
+// PluginRmConfig holds arguments for plugin remove.
+type PluginRmConfig struct {
+	ForceRemove bool
+}
+
+// PluginEnableConfig holds arguments for plugin enable
+type PluginEnableConfig struct {
+	Timeout int
+}
+
+// PluginDisableConfig holds arguments for plugin disable.
+type PluginDisableConfig struct {
+	ForceDisable bool
+}
+
+// NetworkListConfig stores the options available for listing networks
+type NetworkListConfig struct {
+	// TODO(@cpuguy83): naming is hard, this is pulled from what was being used in the router before moving here
+	Detailed bool
+	Verbose  bool
+}
diff --git a/vendor/github.com/docker/docker/api/types/container/config.go b/vendor/github.com/docker/docker/api/types/container/config.go
new file mode 100644
index 0000000000000..f767195b94b41
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/container/config.go
@@ -0,0 +1,69 @@
+package container // import "github.com/docker/docker/api/types/container"
+
+import (
+	"time"
+
+	"github.com/docker/docker/api/types/strslice"
+	"github.com/docker/go-connections/nat"
+)
+
+// MinimumDuration puts a minimum on user configured duration.
+// This prevents API errors caused by time-unit confusion: for example,
+// a caller may set 3 as the healthcheck interval, intending 3 seconds,
+// but Docker would interpret it as 3 nanoseconds.
+const MinimumDuration = 1 * time.Millisecond
+
+// HealthConfig holds configuration settings for the HEALTHCHECK feature.
+type HealthConfig struct {
+	// Test is the test to perform to check that the container is healthy.
+	// An empty slice means to inherit the default.
+	// The options are:
+	// {} : inherit healthcheck
+	// {"NONE"} : disable healthcheck
+	// {"CMD", args...} : exec arguments directly
+	// {"CMD-SHELL", command} : run command with system's default shell
+	Test []string `json:",omitempty"`
+
+	// Zero means to inherit. Durations are expressed as integer nanoseconds.
+	Interval    time.Duration `json:",omitempty"` // Interval is the time to wait between checks.
+	Timeout     time.Duration `json:",omitempty"` // Timeout is the time to wait before considering the check to have hung.
+	StartPeriod time.Duration `json:",omitempty"` // The start period for the container to initialize before retries start to count down.
+
+	// Retries is the number of consecutive failures needed to consider a container as unhealthy.
+	// Zero means inherit.
+	Retries int `json:",omitempty"`
+}
+
+// Config contains the configuration data about a container.
+// It should hold only portable information about the container.
+// Here, "portable" means "independent from the host we are running on".
+// Non-portable information *should* appear in HostConfig.
+// All fields added to this struct must be marked `omitempty` to keep getting
+// predictable hashes from the old `v1Compatibility` configuration.
+type Config struct {
+	Hostname        string              // Hostname
+	Domainname      string              // Domainname
+	User            string              // User that will run the command(s) inside the container, also supports user:group
+	AttachStdin     bool                // Attach the standard input, making user interaction possible
+	AttachStdout    bool                // Attach the standard output
+	AttachStderr    bool                // Attach the standard error
+	ExposedPorts    nat.PortSet         `json:",omitempty"` // List of exposed ports
+	Tty             bool                // Attach standard streams to a tty, including stdin if it is not closed.
+	OpenStdin       bool                // Open stdin
+	StdinOnce       bool                // If true, close stdin after the first attached client disconnects.
+	Env             []string            // List of environment variables to set in the container
+	Cmd             strslice.StrSlice   // Command to run when starting the container
+	Healthcheck     *HealthConfig       `json:",omitempty"` // Healthcheck describes how to check the container is healthy
+	ArgsEscaped     bool                `json:",omitempty"` // True if command is already escaped (meaning treat as a command line) (Windows specific).
+	Image           string              // Name of the image as it was passed by the operator (e.g. could be symbolic)
+	Volumes         map[string]struct{} // List of volumes (mounts) used for the container
+	WorkingDir      string              // Current directory (PWD) in which the command will be launched
+	Entrypoint      strslice.StrSlice   // Entrypoint to run when starting the container
+	NetworkDisabled bool                `json:",omitempty"` // Is network disabled
+	MacAddress      string              `json:",omitempty"` // Mac Address of the container
+	OnBuild         []string            // ONBUILD metadata that was defined in the image Dockerfile
+	Labels          map[string]string   // List of labels set on this container
+	StopSignal      string              `json:",omitempty"` // Signal to stop a container
+	StopTimeout     *int                `json:",omitempty"` // Timeout (in seconds) to stop a container
+	Shell           strslice.StrSlice   `json:",omitempty"` // Shell for shell-form of RUN, CMD, ENTRYPOINT
+}
diff --git a/vendor/github.com/docker/docker/api/types/container/container_changes.go b/vendor/github.com/docker/docker/api/types/container/container_changes.go
new file mode 100644
index 0000000000000..16dd5019eef88
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/container/container_changes.go
@@ -0,0 +1,20 @@
+package container // import "github.com/docker/docker/api/types/container"
+
+// ----------------------------------------------------------------------------
+// Code generated by `swagger generate operation`. DO NOT EDIT.
+//
+// See hack/generate-swagger-api.sh
+// ----------------------------------------------------------------------------
+
+// ContainerChangeResponseItem change item in response to ContainerChanges operation
+// swagger:model ContainerChangeResponseItem
+type ContainerChangeResponseItem struct {
+
+	// Kind of change
+	// Required: true
+	Kind uint8 `json:"Kind"`
+
+	// Path to file that has changed
+	// Required: true
+	Path string `json:"Path"`
+}
diff --git a/vendor/github.com/docker/docker/api/types/container/container_create.go b/vendor/github.com/docker/docker/api/types/container/container_create.go
new file mode 100644
index 0000000000000..d0c852f84d5c2
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/container/container_create.go
@@ -0,0 +1,20 @@
+package container // import "github.com/docker/docker/api/types/container"
+
+// ----------------------------------------------------------------------------
+// Code generated by `swagger generate operation`. DO NOT EDIT.
+//
+// See hack/generate-swagger-api.sh
+// ----------------------------------------------------------------------------
+
+// ContainerCreateCreatedBody OK response to ContainerCreate operation
+// swagger:model ContainerCreateCreatedBody
+type ContainerCreateCreatedBody struct {
+
+	// The ID of the created container
+	// Required: true
+	ID string `json:"Id"`
+
+	// Warnings encountered when creating the container
+	// Required: true
+	Warnings []string `json:"Warnings"`
+}
diff --git a/vendor/github.com/docker/docker/api/types/container/container_top.go b/vendor/github.com/docker/docker/api/types/container/container_top.go
new file mode 100644
index 0000000000000..63381da36749a
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/container/container_top.go
@@ -0,0 +1,22 @@
+package container // import "github.com/docker/docker/api/types/container"
+
+// ----------------------------------------------------------------------------
+// Code generated by `swagger generate operation`. DO NOT EDIT.
+//
+// See hack/generate-swagger-api.sh
+// ----------------------------------------------------------------------------
+
+// ContainerTopOKBody OK response to ContainerTop operation
+// swagger:model ContainerTopOKBody
+type ContainerTopOKBody struct {
+
+	// Each process running in the container, where each process
+	// is an array of values corresponding to the titles.
+	//
+	// Required: true
+	Processes [][]string `json:"Processes"`
+
+	// The ps column titles
+	// Required: true
+	Titles []string `json:"Titles"`
+}
diff --git a/vendor/github.com/docker/docker/api/types/container/container_update.go b/vendor/github.com/docker/docker/api/types/container/container_update.go
new file mode 100644
index 0000000000000..c10f175ea82f7
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/container/container_update.go
@@ -0,0 +1,16 @@
+package container // import "github.com/docker/docker/api/types/container"
+
+// ----------------------------------------------------------------------------
+// Code generated by `swagger generate operation`. DO NOT EDIT.
+//
+// See hack/generate-swagger-api.sh
+// ----------------------------------------------------------------------------
+
+// ContainerUpdateOKBody OK response to ContainerUpdate operation
+// swagger:model ContainerUpdateOKBody
+type ContainerUpdateOKBody struct {
+
+	// warnings
+	// Required: true
+	Warnings []string `json:"Warnings"`
+}
diff --git a/vendor/github.com/docker/docker/api/types/container/container_wait.go b/vendor/github.com/docker/docker/api/types/container/container_wait.go
new file mode 100644
index 0000000000000..49e05ae669449
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/container/container_wait.go
@@ -0,0 +1,28 @@
+package container // import "github.com/docker/docker/api/types/container"
+
+// ----------------------------------------------------------------------------
+// Code generated by `swagger generate operation`. DO NOT EDIT.
+//
+// See hack/generate-swagger-api.sh
+// ----------------------------------------------------------------------------
+
+// ContainerWaitOKBodyError container waiting error, if any
+// swagger:model ContainerWaitOKBodyError
+type ContainerWaitOKBodyError struct {
+
+	// Details of an error
+	Message string `json:"Message,omitempty"`
+}
+
+// ContainerWaitOKBody OK response to ContainerWait operation
+// swagger:model ContainerWaitOKBody
+type ContainerWaitOKBody struct {
+
+	// error
+	// Required: true
+	Error *ContainerWaitOKBodyError `json:"Error"`
+
+	// Exit code of the container
+	// Required: true
+	StatusCode int64 `json:"StatusCode"`
+}
diff --git a/vendor/github.com/docker/docker/api/types/container/host_config.go b/vendor/github.com/docker/docker/api/types/container/host_config.go
new file mode 100644
index 0000000000000..2d1cbaa9abd9e
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/container/host_config.go
@@ -0,0 +1,447 @@
+package container // import "github.com/docker/docker/api/types/container"
+
+import (
+	"strings"
+
+	"github.com/docker/docker/api/types/blkiodev"
+	"github.com/docker/docker/api/types/mount"
+	"github.com/docker/docker/api/types/strslice"
+	"github.com/docker/go-connections/nat"
+	units "github.com/docker/go-units"
+)
+
+// CgroupnsMode represents the cgroup namespace mode of the container
+type CgroupnsMode string
+
+// IsPrivate indicates whether the container uses its own private cgroup namespace
+func (c CgroupnsMode) IsPrivate() bool {
+	return c == "private"
+}
+
+// IsHost indicates whether the container shares the host's cgroup namespace
+func (c CgroupnsMode) IsHost() bool {
+	return c == "host"
+}
+
+// IsEmpty indicates whether the container cgroup namespace mode is unset
+func (c CgroupnsMode) IsEmpty() bool {
+	return c == ""
+}
+
+// Valid indicates whether the cgroup namespace mode is valid
+func (c CgroupnsMode) Valid() bool {
+	return c.IsEmpty() || c.IsPrivate() || c.IsHost()
+}
+
+// Isolation represents the isolation technology of a container. The supported
+// values are platform specific
+type Isolation string
+
+// IsDefault indicates the default isolation technology of a container. On Linux this
+// is the native driver. On Windows, this is a Windows Server Container.
+func (i Isolation) IsDefault() bool {
+	return strings.ToLower(string(i)) == "default" || string(i) == ""
+}
+
+// IsHyperV indicates the use of a Hyper-V partition for isolation
+func (i Isolation) IsHyperV() bool {
+	return strings.ToLower(string(i)) == "hyperv"
+}
+
+// IsProcess indicates the use of process isolation
+func (i Isolation) IsProcess() bool {
+	return strings.ToLower(string(i)) == "process"
+}
+
+const (
+	// IsolationEmpty is unspecified (same behavior as default)
+	IsolationEmpty = Isolation("")
+	// IsolationDefault is the default isolation mode on current daemon
+	IsolationDefault = Isolation("default")
+	// IsolationProcess is process isolation mode
+	IsolationProcess = Isolation("process")
+	// IsolationHyperV is HyperV isolation mode
+	IsolationHyperV = Isolation("hyperv")
+)
+
+// IpcMode represents the container ipc stack.
+type IpcMode string
+
+// IsPrivate indicates whether the container uses its own private ipc namespace which cannot be shared.
+func (n IpcMode) IsPrivate() bool {
+	return n == "private"
+}
+
+// IsHost indicates whether the container shares the host's ipc namespace.
+func (n IpcMode) IsHost() bool {
+	return n == "host"
+}
+
+// IsShareable indicates whether the container's ipc namespace can be shared with another container.
+func (n IpcMode) IsShareable() bool {
+	return n == "shareable"
+}
+
+// IsContainer indicates whether the container uses another container's ipc namespace.
+func (n IpcMode) IsContainer() bool {
+	parts := strings.SplitN(string(n), ":", 2)
+	return len(parts) > 1 && parts[0] == "container"
+}
+
+// IsNone indicates whether container IpcMode is set to "none".
+func (n IpcMode) IsNone() bool {
+	return n == "none"
+}
+
+// IsEmpty indicates whether container IpcMode is empty
+func (n IpcMode) IsEmpty() bool {
+	return n == ""
+}
+
+// Valid indicates whether the ipc mode is valid.
+func (n IpcMode) Valid() bool {
+	return n.IsEmpty() || n.IsNone() || n.IsPrivate() || n.IsHost() || n.IsShareable() || n.IsContainer()
+}
+
+// Container returns the name of the container whose ipc namespace is going to be used.
+func (n IpcMode) Container() string {
+	parts := strings.SplitN(string(n), ":", 2)
+	if len(parts) > 1 && parts[0] == "container" {
+		return parts[1]
+	}
+	return ""
+}
+
+// NetworkMode represents the container network stack.
+type NetworkMode string
+
+// IsNone indicates whether container isn't using a network stack.
+func (n NetworkMode) IsNone() bool {
+	return n == "none"
+}
+
+// IsDefault indicates whether container uses the default network stack.
+func (n NetworkMode) IsDefault() bool {
+	return n == "default"
+}
+
+// IsPrivate indicates whether container uses its private network stack.
+func (n NetworkMode) IsPrivate() bool {
+	return !(n.IsHost() || n.IsContainer())
+}
+
+// IsContainer indicates whether container uses a container network stack.
+func (n NetworkMode) IsContainer() bool {
+	parts := strings.SplitN(string(n), ":", 2)
+	return len(parts) > 1 && parts[0] == "container"
+}
+
+// ConnectedContainer is the id of the container whose network this container is connected to.
+func (n NetworkMode) ConnectedContainer() string {
+	parts := strings.SplitN(string(n), ":", 2)
+	if len(parts) > 1 {
+		return parts[1]
+	}
+	return ""
+}
+
+// UserDefined returns the name of the user-created network, if any.
+func (n NetworkMode) UserDefined() string {
+	if n.IsUserDefined() {
+		return string(n)
+	}
+	return ""
+}
+
+// UsernsMode represents userns mode in the container.
+type UsernsMode string
+
+// IsHost indicates whether the container uses the host's userns.
+func (n UsernsMode) IsHost() bool {
+	return n == "host"
+}
+
+// IsPrivate indicates whether the container uses a private userns.
+func (n UsernsMode) IsPrivate() bool {
+	return !(n.IsHost())
+}
+
+// Valid indicates whether the userns is valid.
+func (n UsernsMode) Valid() bool {
+	parts := strings.Split(string(n), ":")
+	switch mode := parts[0]; mode {
+	case "", "host":
+	default:
+		return false
+	}
+	return true
+}
+
+// CgroupSpec represents the cgroup to use for the container.
+type CgroupSpec string
+
+// IsContainer indicates whether the container is using another container's cgroup.
+func (c CgroupSpec) IsContainer() bool {
+	parts := strings.SplitN(string(c), ":", 2)
+	return len(parts) > 1 && parts[0] == "container"
+}
+
+// Valid indicates whether the cgroup spec is valid.
+func (c CgroupSpec) Valid() bool {
+	return c.IsContainer() || c == ""
+}
+
+// Container returns the name of the container whose cgroup will be used.
+func (c CgroupSpec) Container() string {
+	parts := strings.SplitN(string(c), ":", 2)
+	if len(parts) > 1 {
+		return parts[1]
+	}
+	return ""
+}
+
+// UTSMode represents the UTS namespace of the container.
+type UTSMode string
+
+// IsPrivate indicates whether the container uses its private UTS namespace.
+func (n UTSMode) IsPrivate() bool {
+	return !(n.IsHost())
+}
+
+// IsHost indicates whether the container uses the host's UTS namespace.
+func (n UTSMode) IsHost() bool {
+	return n == "host"
+}
+
+// Valid indicates whether the UTS namespace is valid.
+func (n UTSMode) Valid() bool {
+	parts := strings.Split(string(n), ":")
+	switch mode := parts[0]; mode {
+	case "", "host":
+	default:
+		return false
+	}
+	return true
+}
+
+// PidMode represents the pid namespace of the container.
+type PidMode string
+
+// IsPrivate indicates whether the container uses its own new pid namespace.
+func (n PidMode) IsPrivate() bool {
+	return !(n.IsHost() || n.IsContainer())
+}
+
+// IsHost indicates whether the container uses the host's pid namespace.
+func (n PidMode) IsHost() bool {
+	return n == "host"
+}
+
+// IsContainer indicates whether the container uses a container's pid namespace.
+func (n PidMode) IsContainer() bool {
+	parts := strings.SplitN(string(n), ":", 2)
+	return len(parts) > 1 && parts[0] == "container"
+}
+
+// Valid indicates whether the pid namespace is valid.
+func (n PidMode) Valid() bool {
+	parts := strings.Split(string(n), ":")
+	switch mode := parts[0]; mode {
+	case "", "host":
+	case "container":
+		if len(parts) != 2 || parts[1] == "" {
+			return false
+		}
+	default:
+		return false
+	}
+	return true
+}
+
+// Container returns the name of the container whose pid namespace is going to be used.
+func (n PidMode) Container() string {
+	parts := strings.SplitN(string(n), ":", 2)
+	if len(parts) > 1 {
+		return parts[1]
+	}
+	return ""
+}
+
+// DeviceRequest represents a request for devices from a device driver.
+// Used by GPU device drivers.
+type DeviceRequest struct {
+	Driver       string            // Name of device driver
+	Count        int               // Number of devices to request (-1 = All)
+	DeviceIDs    []string          // List of device IDs as recognizable by the device driver
+	Capabilities [][]string        // An OR list of AND lists of device capabilities (e.g. "gpu")
+	Options      map[string]string // Options to pass onto the device driver
+}
+
+// DeviceMapping represents the device mapping between the host and the container.
+type DeviceMapping struct {
+	PathOnHost        string
+	PathInContainer   string
+	CgroupPermissions string
+}
+
+// RestartPolicy represents the restart policies of the container.
+type RestartPolicy struct {
+	Name              string
+	MaximumRetryCount int
+}
+
+// IsNone indicates whether the container has the "no" restart policy.
+// This means the container will not automatically restart when exiting.
+func (rp *RestartPolicy) IsNone() bool {
+	return rp.Name == "no" || rp.Name == ""
+}
+
+// IsAlways indicates whether the container has the "always" restart policy.
+// This means the container will automatically restart regardless of the exit status.
+func (rp *RestartPolicy) IsAlways() bool {
+	return rp.Name == "always"
+}
+
+// IsOnFailure indicates whether the container has the "on-failure" restart policy.
+// This means the container will automatically restart if it exits with a non-zero exit status.
+func (rp *RestartPolicy) IsOnFailure() bool {
+	return rp.Name == "on-failure"
+}
+
+// IsUnlessStopped indicates whether the container has the
+// "unless-stopped" restart policy. This means the container will
+// automatically restart unless the user has manually stopped it.
+func (rp *RestartPolicy) IsUnlessStopped() bool {
+	return rp.Name == "unless-stopped"
+}
+
+// IsSame compares two RestartPolicy to see if they are the same
+func (rp *RestartPolicy) IsSame(tp *RestartPolicy) bool {
+	return rp.Name == tp.Name && rp.MaximumRetryCount == tp.MaximumRetryCount
+}
+
+// LogMode is a type to define the available modes for logging
+// These modes affect how logs are handled when log messages start piling up.
+type LogMode string
+
+// Available logging modes
+const (
+	LogModeUnset            = ""
+	LogModeBlocking LogMode = "blocking"
+	LogModeNonBlock LogMode = "non-blocking"
+)
+
+// LogConfig represents the logging configuration of the container.
+type LogConfig struct {
+	Type   string
+	Config map[string]string
+}
+
+// Resources contains container's resources (cgroups config, ulimits...)
+type Resources struct {
+	// Applicable to all platforms
+	CPUShares int64 `json:"CpuShares"` // CPU shares (relative weight vs. other containers)
+	Memory    int64 // Memory limit (in bytes)
+	NanoCPUs  int64 `json:"NanoCpus"` // CPU quota in units of 10<sup>-9</sup> CPUs.
+
+	// Applicable to UNIX platforms
+	CgroupParent         string // Parent cgroup.
+	BlkioWeight          uint16 // Block IO weight (relative weight vs. other containers)
+	BlkioWeightDevice    []*blkiodev.WeightDevice
+	BlkioDeviceReadBps   []*blkiodev.ThrottleDevice
+	BlkioDeviceWriteBps  []*blkiodev.ThrottleDevice
+	BlkioDeviceReadIOps  []*blkiodev.ThrottleDevice
+	BlkioDeviceWriteIOps []*blkiodev.ThrottleDevice
+	CPUPeriod            int64           `json:"CpuPeriod"`          // CPU CFS (Completely Fair Scheduler) period
+	CPUQuota             int64           `json:"CpuQuota"`           // CPU CFS (Completely Fair Scheduler) quota
+	CPURealtimePeriod    int64           `json:"CpuRealtimePeriod"`  // CPU real-time period
+	CPURealtimeRuntime   int64           `json:"CpuRealtimeRuntime"` // CPU real-time runtime
+	CpusetCpus           string          // CpusetCpus 0-2, 0,1
+	CpusetMems           string          // CpusetMems 0-2, 0,1
+	Devices              []DeviceMapping // List of devices to map inside the container
+	DeviceCgroupRules    []string        // List of rules to be added to the device cgroup
+	DeviceRequests       []DeviceRequest // List of device requests for device drivers
+	KernelMemory         int64           // Kernel memory limit (in bytes), Deprecated: kernel 5.4 deprecated kmem.limit_in_bytes
+	KernelMemoryTCP      int64           // Hard limit for kernel TCP buffer memory (in bytes)
+	MemoryReservation    int64           // Memory soft limit (in bytes)
+	MemorySwap           int64           // Total memory usage (memory + swap); set `-1` to enable unlimited swap
+	MemorySwappiness     *int64          // Tuning container memory swappiness behaviour
+	OomKillDisable       *bool           // Whether to disable OOM Killer or not
+	PidsLimit            *int64          // Setting PIDs limit for a container; Set `0` or `-1` for unlimited, or `null` to not change.
+	Ulimits              []*units.Ulimit // List of ulimits to be set in the container
+
+	// Applicable to Windows
+	CPUCount           int64  `json:"CpuCount"`   // CPU count
+	CPUPercent         int64  `json:"CpuPercent"` // CPU percent
+	IOMaximumIOps      uint64 // Maximum IOps for the container system drive
+	IOMaximumBandwidth uint64 // Maximum IO in bytes per second for the container system drive
+}
+
+// UpdateConfig holds the mutable attributes of a Container.
+// Those attributes can be updated at runtime.
+type UpdateConfig struct {
+	// Contains container's resources (cgroups, ulimits)
+	Resources
+	RestartPolicy RestartPolicy
+}
+
+// HostConfig the non-portable Config structure of a container.
+// Here, "non-portable" means "dependent of the host we are running on".
+// Portable information *should* appear in Config.
+type HostConfig struct {
+	// Applicable to all platforms
+	Binds           []string      // List of volume bindings for this container
+	ContainerIDFile string        // File (path) where the containerId is written
+	LogConfig       LogConfig     // Configuration of the logs for this container
+	NetworkMode     NetworkMode   // Network mode to use for the container
+	PortBindings    nat.PortMap   // Port mapping between the exposed port (container) and the host
+	RestartPolicy   RestartPolicy // Restart policy to be used for the container
+	AutoRemove      bool          // Automatically remove container when it exits
+	VolumeDriver    string        // Name of the volume driver used to mount volumes
+	VolumesFrom     []string      // List of volumes to take from other containers
+
+	// Applicable to UNIX platforms
+	CapAdd          strslice.StrSlice // List of kernel capabilities to add to the container
+	CapDrop         strslice.StrSlice // List of kernel capabilities to remove from the container
+	CgroupnsMode    CgroupnsMode      // Cgroup namespace mode to use for the container
+	DNS             []string          `json:"Dns"`        // List of DNS servers to look up
+	DNSOptions      []string          `json:"DnsOptions"` // List of DNS options
+	DNSSearch       []string          `json:"DnsSearch"`  // List of DNS search domains
+	ExtraHosts      []string          // List of extra hosts
+	GroupAdd        []string          // List of additional groups that the container process will run as
+	IpcMode         IpcMode           // IPC namespace to use for the container
+	Cgroup          CgroupSpec        // Cgroup to use for the container
+	Links           []string          // List of links (in the name:alias form)
+	OomScoreAdj     int               // Container preference for OOM-killing
+	PidMode         PidMode           // PID namespace to use for the container
+	Privileged      bool              // Is the container in privileged mode
+	PublishAllPorts bool              // Should docker publish all exposed port for the container
+	ReadonlyRootfs  bool              // Is the container root filesystem read-only
+	SecurityOpt     []string          // List of string values to customize labels for MLS systems, such as SELinux.
+	StorageOpt      map[string]string `json:",omitempty"` // Storage driver options per container.
+	Tmpfs           map[string]string `json:",omitempty"` // List of tmpfs (mounts) used for the container
+	UTSMode         UTSMode           // UTS namespace to use for the container
+	UsernsMode      UsernsMode        // The user namespace to use for the container
+	ShmSize         int64             // Total shm memory usage
+	Sysctls         map[string]string `json:",omitempty"` // List of Namespaced sysctls used for the container
+	Runtime         string            `json:",omitempty"` // Runtime to use with this container
+
+	// Applicable to Windows
+	ConsoleSize [2]uint   // Initial console size (height,width)
+	Isolation   Isolation // Isolation technology of the container (e.g. default, hyperv)
+
+	// Contains container's resources (cgroups, ulimits)
+	Resources
+
+	// Mounts specs used by the container
+	Mounts []mount.Mount `json:",omitempty"`
+
+	// MaskedPaths is the list of paths to be masked inside the container (this overrides the default set of paths)
+	MaskedPaths []string
+
+	// ReadonlyPaths is the list of paths to be set as read-only inside the container (this overrides the default set of paths)
+	ReadonlyPaths []string
+
+	// Run a custom init inside the container, if null, use the daemon's configured settings
+	Init *bool `json:",omitempty"`
+}
diff --git a/vendor/github.com/docker/docker/api/types/container/hostconfig_unix.go b/vendor/github.com/docker/docker/api/types/container/hostconfig_unix.go
new file mode 100644
index 0000000000000..24c4fa8d9002e
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/container/hostconfig_unix.go
@@ -0,0 +1,42 @@
+//go:build !windows
+// +build !windows
+
+package container // import "github.com/docker/docker/api/types/container"
+
+// IsValid indicates if an isolation technology is valid
+func (i Isolation) IsValid() bool {
+	return i.IsDefault()
+}
+
+// NetworkName returns the name of the network stack.
+func (n NetworkMode) NetworkName() string {
+	if n.IsBridge() {
+		return "bridge"
+	} else if n.IsHost() {
+		return "host"
+	} else if n.IsContainer() {
+		return "container"
+	} else if n.IsNone() {
+		return "none"
+	} else if n.IsDefault() {
+		return "default"
+	} else if n.IsUserDefined() {
+		return n.UserDefined()
+	}
+	return ""
+}
+
+// IsBridge indicates whether container uses the bridge network stack
+func (n NetworkMode) IsBridge() bool {
+	return n == "bridge"
+}
+
+// IsHost indicates whether container uses the host network stack.
+func (n NetworkMode) IsHost() bool {
+	return n == "host"
+}
+
+// IsUserDefined indicates user-created network
+func (n NetworkMode) IsUserDefined() bool {
+	return !n.IsDefault() && !n.IsBridge() && !n.IsHost() && !n.IsNone() && !n.IsContainer()
+}
diff --git a/vendor/github.com/docker/docker/api/types/container/hostconfig_windows.go b/vendor/github.com/docker/docker/api/types/container/hostconfig_windows.go
new file mode 100644
index 0000000000000..99f803a5bb170
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/container/hostconfig_windows.go
@@ -0,0 +1,40 @@
+package container // import "github.com/docker/docker/api/types/container"
+
+// IsBridge indicates whether container uses the bridge network stack.
+// On Windows it is given the name NAT.
+func (n NetworkMode) IsBridge() bool {
+	return n == "nat"
+}
+
+// IsHost indicates whether container uses the host network stack.
+// It returns false as this is not supported on Windows.
+func (n NetworkMode) IsHost() bool {
+	return false
+}
+
+// IsUserDefined indicates user-created network
+func (n NetworkMode) IsUserDefined() bool {
+	return !n.IsDefault() && !n.IsNone() && !n.IsBridge() && !n.IsContainer()
+}
+
+// IsValid indicates if an isolation technology is valid
+func (i Isolation) IsValid() bool {
+	return i.IsDefault() || i.IsHyperV() || i.IsProcess()
+}
+
+// NetworkName returns the name of the network stack.
+func (n NetworkMode) NetworkName() string {
+	if n.IsDefault() {
+		return "default"
+	} else if n.IsBridge() {
+		return "nat"
+	} else if n.IsNone() {
+		return "none"
+	} else if n.IsContainer() {
+		return "container"
+	} else if n.IsUserDefined() {
+		return n.UserDefined()
+	}
+
+	return ""
+}
diff --git a/vendor/github.com/docker/docker/api/types/container/waitcondition.go b/vendor/github.com/docker/docker/api/types/container/waitcondition.go
new file mode 100644
index 0000000000000..cd8311f99cfb1
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/container/waitcondition.go
@@ -0,0 +1,22 @@
+package container // import "github.com/docker/docker/api/types/container"
+
+// WaitCondition is a type used to specify a container state for which
+// to wait.
+type WaitCondition string
+
+// Possible WaitCondition Values.
+//
+// WaitConditionNotRunning (default) is used to wait for any of the non-running
+// states: "created", "exited", "dead", "removing", or "removed".
+//
+// WaitConditionNextExit is used to wait for the next time the state changes
+// to a non-running state. If the state is currently "created" or "exited",
+// this would cause Wait() to block until either the container runs and exits
+// or is removed.
+//
+// WaitConditionRemoved is used to wait for the container to be removed.
+const (
+	WaitConditionNotRunning WaitCondition = "not-running"
+	WaitConditionNextExit   WaitCondition = "next-exit"
+	WaitConditionRemoved    WaitCondition = "removed"
+)
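The three `WaitCondition` values above select distinct blocking behaviors when passed to the client's `ContainerWait`. A minimal, stdlib-only sketch of how a caller might branch on them (the type and constants are copied locally for illustration; `describe` is a hypothetical helper, not part of the vendored API):

```go
package main

import "fmt"

// WaitCondition mirrors the vendored string type (a local copy for
// illustration, not an import of the docker package).
type WaitCondition string

const (
	WaitConditionNotRunning WaitCondition = "not-running"
	WaitConditionNextExit   WaitCondition = "next-exit"
	WaitConditionRemoved    WaitCondition = "removed"
)

// describe restates the doc comment above as a switch, the way a caller
// might branch on the condition it passed to ContainerWait.
func describe(c WaitCondition) string {
	switch c {
	case WaitConditionNotRunning:
		return "waits for any non-running state"
	case WaitConditionNextExit:
		return "waits for the next transition to a non-running state"
	case WaitConditionRemoved:
		return "waits for the container to be removed"
	}
	return "unknown condition"
}

func main() {
	fmt.Println(describe(WaitConditionNextExit))
}
```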
diff --git a/vendor/github.com/docker/docker/api/types/error_response.go b/vendor/github.com/docker/docker/api/types/error_response.go
new file mode 100644
index 0000000000000..dc942d9d9efa3
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/error_response.go
@@ -0,0 +1,13 @@
+package types
+
+// This file was generated by the swagger tool.
+// Editing this file might prove futile when you re-run the swagger generate command
+
+// ErrorResponse Represents an error.
+// swagger:model ErrorResponse
+type ErrorResponse struct {
+
+	// The error message.
+	// Required: true
+	Message string `json:"message"`
+}
diff --git a/vendor/github.com/docker/docker/api/types/error_response_ext.go b/vendor/github.com/docker/docker/api/types/error_response_ext.go
new file mode 100644
index 0000000000000..f84f034cd545c
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/error_response_ext.go
@@ -0,0 +1,6 @@
+package types
+
+// Error returns the error message
+func (e ErrorResponse) Error() string {
+	return e.Message
+}
diff --git a/vendor/github.com/docker/docker/api/types/events/events.go b/vendor/github.com/docker/docker/api/types/events/events.go
new file mode 100644
index 0000000000000..aa8fba815484c
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/events/events.go
@@ -0,0 +1,54 @@
+package events // import "github.com/docker/docker/api/types/events"
+
+const (
+	// BuilderEventType is the event type that the builder generates
+	BuilderEventType = "builder"
+	// ContainerEventType is the event type that containers generate
+	ContainerEventType = "container"
+	// DaemonEventType is the event type that the daemon generates
+	DaemonEventType = "daemon"
+	// ImageEventType is the event type that images generate
+	ImageEventType = "image"
+	// NetworkEventType is the event type that networks generate
+	NetworkEventType = "network"
+	// PluginEventType is the event type that plugins generate
+	PluginEventType = "plugin"
+	// VolumeEventType is the event type that volumes generate
+	VolumeEventType = "volume"
+	// ServiceEventType is the event type that services generate
+	ServiceEventType = "service"
+	// NodeEventType is the event type that nodes generate
+	NodeEventType = "node"
+	// SecretEventType is the event type that secrets generate
+	SecretEventType = "secret"
+	// ConfigEventType is the event type that configs generate
+	ConfigEventType = "config"
+)
+
+// Actor describes something that generates events, like a container,
+// a network, or a volume. It has a defined name and a set of attributes.
+// The container attributes are its labels; other actors can generate
+// these attributes from other properties.
+type Actor struct {
+	ID         string
+	Attributes map[string]string
+}
+
+// Message represents the information an event contains
+type Message struct {
+	// Deprecated information from JSONMessage.
+	// With data only in container events.
+	Status string `json:"status,omitempty"`
+	ID     string `json:"id,omitempty"`
+	From   string `json:"from,omitempty"`
+
+	Type   string
+	Action string
+	Actor  Actor
+	// Engine events are local scope. Cluster events are swarm scope.
+	Scope string `json:"scope,omitempty"`
+
+	Time     int64 `json:"time,omitempty"`
+	TimeNano int64 `json:"timeNano,omitempty"`
+}
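The `Message` struct above mixes deprecated lowercase JSON fields (`status`, `id`, `from`) with the newer typed `Type`/`Action`/`Actor` fields. A stdlib-only sketch of decoding such an event (the structs are local mirrors of a subset of the vendored types, and the payload values are hypothetical):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// actor and message mirror a subset of the vendored event types; the JSON
// tags match the struct definitions above.
type actor struct {
	ID         string
	Attributes map[string]string
}

type message struct {
	Status string `json:"status,omitempty"`
	Type   string
	Action string
	Actor  actor
	Scope  string `json:"scope,omitempty"`
	Time   int64  `json:"time,omitempty"`
}

// decode unmarshals one event payload.
func decode(raw string) (message, error) {
	var m message
	err := json.Unmarshal([]byte(raw), &m)
	return m, err
}

func main() {
	// A container "die" event, roughly as the daemon emits it (hypothetical values).
	raw := `{"status":"die","Type":"container","Action":"die","Actor":{"ID":"abc123","Attributes":{"image":"busybox","name":"test"}},"scope":"local","time":1600000000}`
	m, err := decode(raw)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s %s from image %s\n", m.Type, m.Action, m.Actor.Attributes["image"])
}
```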
diff --git a/vendor/github.com/docker/docker/api/types/filters/parse.go b/vendor/github.com/docker/docker/api/types/filters/parse.go
new file mode 100644
index 0000000000000..b4976a347175b
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/filters/parse.go
@@ -0,0 +1,322 @@
+/*
+Package filters provides tools for encoding a mapping of keys to a set of
+multiple values.
+*/
+package filters // import "github.com/docker/docker/api/types/filters"
+
+import (
+	"encoding/json"
+	"regexp"
+	"strings"
+
+	"github.com/docker/docker/api/types/versions"
+)
+
+// Args stores a mapping of keys to a set of multiple values.
+type Args struct {
+	fields map[string]map[string]bool
+}
+
+// KeyValuePair is used to initialize a new Args
+type KeyValuePair struct {
+	Key   string
+	Value string
+}
+
+// Arg creates a new KeyValuePair for initializing Args
+func Arg(key, value string) KeyValuePair {
+	return KeyValuePair{Key: key, Value: value}
+}
+
+// NewArgs returns a new Args populated with the initial args
+func NewArgs(initialArgs ...KeyValuePair) Args {
+	args := Args{fields: map[string]map[string]bool{}}
+	for _, arg := range initialArgs {
+		args.Add(arg.Key, arg.Value)
+	}
+	return args
+}
+
+// Keys returns all the keys in list of Args
+func (args Args) Keys() []string {
+	keys := make([]string, 0, len(args.fields))
+	for k := range args.fields {
+		keys = append(keys, k)
+	}
+	return keys
+}
+
+// MarshalJSON returns a JSON byte representation of the Args
+func (args Args) MarshalJSON() ([]byte, error) {
+	if len(args.fields) == 0 {
+		return []byte("{}"), nil
+	}
+	return json.Marshal(args.fields)
+}
+
+// ToJSON returns the Args as a JSON encoded string
+func ToJSON(a Args) (string, error) {
+	if a.Len() == 0 {
+		return "", nil
+	}
+	buf, err := json.Marshal(a)
+	return string(buf), err
+}
+
+// ToParamWithVersion encodes Args as a JSON string. If version is less than 1.22
+// then the encoded format will use an older legacy format where the values are a
+// list of strings, instead of a set.
+//
+// Deprecated: do not use in any new code; use ToJSON instead
+func ToParamWithVersion(version string, a Args) (string, error) {
+	if a.Len() == 0 {
+		return "", nil
+	}
+
+	if version != "" && versions.LessThan(version, "1.22") {
+		buf, err := json.Marshal(convertArgsToSlice(a.fields))
+		return string(buf), err
+	}
+
+	return ToJSON(a)
+}
+
+// FromJSON decodes a JSON encoded string into Args
+func FromJSON(p string) (Args, error) {
+	args := NewArgs()
+
+	if p == "" {
+		return args, nil
+	}
+
+	raw := []byte(p)
+	err := json.Unmarshal(raw, &args)
+	if err == nil {
+		return args, nil
+	}
+
+	// Fallback to parsing arguments in the legacy slice format
+	deprecated := map[string][]string{}
+	if legacyErr := json.Unmarshal(raw, &deprecated); legacyErr != nil {
+		return args, err
+	}
+
+	args.fields = deprecatedArgs(deprecated)
+	return args, nil
+}
+
+// UnmarshalJSON populates the Args from JSON-encoded bytes
+func (args Args) UnmarshalJSON(raw []byte) error {
+	return json.Unmarshal(raw, &args.fields)
+}
+
+// Get returns the list of values associated with the key
+func (args Args) Get(key string) []string {
+	values := args.fields[key]
+	if values == nil {
+		return make([]string, 0)
+	}
+	slice := make([]string, 0, len(values))
+	for key := range values {
+		slice = append(slice, key)
+	}
+	return slice
+}
+
+// Add a new value to the set of values
+func (args Args) Add(key, value string) {
+	if _, ok := args.fields[key]; ok {
+		args.fields[key][value] = true
+	} else {
+		args.fields[key] = map[string]bool{value: true}
+	}
+}
+
+// Del removes a value from the set
+func (args Args) Del(key, value string) {
+	if _, ok := args.fields[key]; ok {
+		delete(args.fields[key], value)
+		if len(args.fields[key]) == 0 {
+			delete(args.fields, key)
+		}
+	}
+}
+
+// Len returns the number of keys in the mapping
+func (args Args) Len() int {
+	return len(args.fields)
+}
+
+// MatchKVList returns true if all the pairs in sources exist as key=value
+// pairs in the mapping at key, or if there are no values at key.
+func (args Args) MatchKVList(key string, sources map[string]string) bool {
+	fieldValues := args.fields[key]
+
+	// do not filter if there is no filter set or cannot determine filter
+	if len(fieldValues) == 0 {
+		return true
+	}
+
+	if len(sources) == 0 {
+		return false
+	}
+
+	for value := range fieldValues {
+		testKV := strings.SplitN(value, "=", 2)
+
+		v, ok := sources[testKV[0]]
+		if !ok {
+			return false
+		}
+		if len(testKV) == 2 && testKV[1] != v {
+			return false
+		}
+	}
+
+	return true
+}
+
+// Match returns true if any of the values at key match the source string
+func (args Args) Match(field, source string) bool {
+	if args.ExactMatch(field, source) {
+		return true
+	}
+
+	fieldValues := args.fields[field]
+	for name2match := range fieldValues {
+		match, err := regexp.MatchString(name2match, source)
+		if err != nil {
+			continue
+		}
+		if match {
+			return true
+		}
+	}
+	return false
+}
+
+// ExactMatch returns true if the source matches exactly one of the values.
+func (args Args) ExactMatch(key, source string) bool {
+	fieldValues, ok := args.fields[key]
+	// do not filter if there is no filter set or cannot determine filter
+	if !ok || len(fieldValues) == 0 {
+		return true
+	}
+
+	// try to match full name value to avoid O(N) regular expression matching
+	return fieldValues[source]
+}
+
+// UniqueExactMatch returns true if there is only one value and the source
+// matches exactly the value.
+func (args Args) UniqueExactMatch(key, source string) bool {
+	fieldValues := args.fields[key]
+	// do not filter if there is no filter set or cannot determine filter
+	if len(fieldValues) == 0 {
+		return true
+	}
+	if len(args.fields[key]) != 1 {
+		return false
+	}
+
+	// try to match full name value to avoid O(N) regular expression matching
+	return fieldValues[source]
+}
+
+// FuzzyMatch returns true if the source matches exactly one value, or the
+// source has one of the values as a prefix.
+func (args Args) FuzzyMatch(key, source string) bool {
+	if args.ExactMatch(key, source) {
+		return true
+	}
+
+	fieldValues := args.fields[key]
+	for prefix := range fieldValues {
+		if strings.HasPrefix(source, prefix) {
+			return true
+		}
+	}
+	return false
+}
+
+// Contains returns true if the key exists in the mapping
+func (args Args) Contains(field string) bool {
+	_, ok := args.fields[field]
+	return ok
+}
+
+type invalidFilter string
+
+func (e invalidFilter) Error() string {
+	return "Invalid filter '" + string(e) + "'"
+}
+
+func (invalidFilter) InvalidParameter() {}
+
+// Validate compares the set of accepted keys against the keys in the mapping.
+// An error is returned if any mapping keys are not in the accepted set.
+func (args Args) Validate(accepted map[string]bool) error {
+	for name := range args.fields {
+		if !accepted[name] {
+			return invalidFilter(name)
+		}
+	}
+	return nil
+}
+
+// WalkValues iterates over the list of values for a key in the mapping and calls
+// op() for each value. If op returns an error the iteration stops and the
+// error is returned.
+func (args Args) WalkValues(field string, op func(value string) error) error {
+	if _, ok := args.fields[field]; !ok {
+		return nil
+	}
+	for v := range args.fields[field] {
+		if err := op(v); err != nil {
+			return err
+		}
+	}
+	return nil
+}
+
+// Clone returns a copy of args.
+func (args Args) Clone() (newArgs Args) {
+	newArgs.fields = make(map[string]map[string]bool, len(args.fields))
+	for k, m := range args.fields {
+		var mm map[string]bool
+		if m != nil {
+			mm = make(map[string]bool, len(m))
+			for kk, v := range m {
+				mm[kk] = v
+			}
+		}
+		newArgs.fields[k] = mm
+	}
+	return newArgs
+}
+
+func deprecatedArgs(d map[string][]string) map[string]map[string]bool {
+	m := map[string]map[string]bool{}
+	for k, v := range d {
+		values := map[string]bool{}
+		for _, vv := range v {
+			values[vv] = true
+		}
+		m[k] = values
+	}
+	return m
+}
+
+func convertArgsToSlice(f map[string]map[string]bool) map[string][]string {
+	m := map[string][]string{}
+	for k, v := range f {
+		values := []string{}
+		for kk := range v {
+			if v[kk] {
+				values = append(values, kk)
+			}
+		}
+		m[k] = values
+	}
+	return m
+}
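`FromJSON` above accepts two wire formats: the current set-based encoding (`map[string]map[string]bool`) and the pre-1.22 slice encoding (`map[string][]string`), falling back to the legacy form only when the first decode fails. A stdlib-only sketch of that fallback (`parseFilters` is a hypothetical local re-implementation, not the vendored function, and its error handling is simplified):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// parseFilters mirrors the fallback in FromJSON: try the current set-based
// format first, then the pre-1.22 slice format.
func parseFilters(p string) (map[string]map[string]bool, error) {
	fields := map[string]map[string]bool{}
	if err := json.Unmarshal([]byte(p), &fields); err == nil {
		return fields, nil
	}
	// Fall back to the legacy slice format and convert it to sets.
	legacy := map[string][]string{}
	if err := json.Unmarshal([]byte(p), &legacy); err != nil {
		return nil, err
	}
	fields = map[string]map[string]bool{}
	for k, vs := range legacy {
		set := map[string]bool{}
		for _, v := range vs {
			set[v] = true
		}
		fields[k] = set
	}
	return fields, nil
}

func main() {
	current := `{"label":{"env=prod":true}}`
	legacy := `{"label":["env=prod"]}`
	a, _ := parseFilters(current)
	b, _ := parseFilters(legacy)
	// Both encodings yield the same in-memory set.
	fmt.Println(a["label"]["env=prod"], b["label"]["env=prod"])
}
```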
diff --git a/vendor/github.com/docker/docker/api/types/graph_driver_data.go b/vendor/github.com/docker/docker/api/types/graph_driver_data.go
new file mode 100644
index 0000000000000..4d9bf1c62c892
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/graph_driver_data.go
@@ -0,0 +1,17 @@
+package types
+
+// This file was generated by the swagger tool.
+// Editing this file might prove futile when you re-run the swagger generate command
+
+// GraphDriverData Information about a container's graph driver.
+// swagger:model GraphDriverData
+type GraphDriverData struct {
+
+	// data
+	// Required: true
+	Data map[string]string `json:"Data"`
+
+	// name
+	// Required: true
+	Name string `json:"Name"`
+}
diff --git a/vendor/github.com/docker/docker/api/types/id_response.go b/vendor/github.com/docker/docker/api/types/id_response.go
new file mode 100644
index 0000000000000..7592d2f8b152c
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/id_response.go
@@ -0,0 +1,13 @@
+package types
+
+// This file was generated by the swagger tool.
+// Editing this file might prove futile when you re-run the swagger generate command
+
+// IDResponse Response to an API call that returns just an Id
+// swagger:model IdResponse
+type IDResponse struct {
+
+	// The id of the newly created object.
+	// Required: true
+	ID string `json:"Id"`
+}
diff --git a/vendor/github.com/docker/docker/api/types/image/image_history.go b/vendor/github.com/docker/docker/api/types/image/image_history.go
new file mode 100644
index 0000000000000..e302bb0aebbe7
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/image/image_history.go
@@ -0,0 +1,36 @@
+package image // import "github.com/docker/docker/api/types/image"
+
+// ----------------------------------------------------------------------------
+// Code generated by `swagger generate operation`. DO NOT EDIT.
+//
+// See hack/generate-swagger-api.sh
+// ----------------------------------------------------------------------------
+
+// HistoryResponseItem individual image layer information in response to ImageHistory operation
+// swagger:model HistoryResponseItem
+type HistoryResponseItem struct {
+
+	// comment
+	// Required: true
+	Comment string `json:"Comment"`
+
+	// created
+	// Required: true
+	Created int64 `json:"Created"`
+
+	// created by
+	// Required: true
+	CreatedBy string `json:"CreatedBy"`
+
+	// Id
+	// Required: true
+	ID string `json:"Id"`
+
+	// size
+	// Required: true
+	Size int64 `json:"Size"`
+
+	// tags
+	// Required: true
+	Tags []string `json:"Tags"`
+}
diff --git a/vendor/github.com/docker/docker/api/types/image_delete_response_item.go b/vendor/github.com/docker/docker/api/types/image_delete_response_item.go
new file mode 100644
index 0000000000000..b9a65a0d8e862
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/image_delete_response_item.go
@@ -0,0 +1,15 @@
+package types
+
+// This file was generated by the swagger tool.
+// Editing this file might prove futile when you re-run the swagger generate command
+
+// ImageDeleteResponseItem image delete response item
+// swagger:model ImageDeleteResponseItem
+type ImageDeleteResponseItem struct {
+
+	// The image ID of an image that was deleted
+	Deleted string `json:"Deleted,omitempty"`
+
+	// The image ID of an image that was untagged
+	Untagged string `json:"Untagged,omitempty"`
+}
diff --git a/vendor/github.com/docker/docker/api/types/image_summary.go b/vendor/github.com/docker/docker/api/types/image_summary.go
new file mode 100644
index 0000000000000..e145b3dcfcd1a
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/image_summary.go
@@ -0,0 +1,49 @@
+package types
+
+// This file was generated by the swagger tool.
+// Editing this file might prove futile when you re-run the swagger generate command
+
+// ImageSummary image summary
+// swagger:model ImageSummary
+type ImageSummary struct {
+
+	// containers
+	// Required: true
+	Containers int64 `json:"Containers"`
+
+	// created
+	// Required: true
+	Created int64 `json:"Created"`
+
+	// Id
+	// Required: true
+	ID string `json:"Id"`
+
+	// labels
+	// Required: true
+	Labels map[string]string `json:"Labels"`
+
+	// parent Id
+	// Required: true
+	ParentID string `json:"ParentId"`
+
+	// repo digests
+	// Required: true
+	RepoDigests []string `json:"RepoDigests"`
+
+	// repo tags
+	// Required: true
+	RepoTags []string `json:"RepoTags"`
+
+	// shared size
+	// Required: true
+	SharedSize int64 `json:"SharedSize"`
+
+	// size
+	// Required: true
+	Size int64 `json:"Size"`
+
+	// virtual size
+	// Required: true
+	VirtualSize int64 `json:"VirtualSize"`
+}
diff --git a/vendor/github.com/docker/docker/api/types/mount/mount.go b/vendor/github.com/docker/docker/api/types/mount/mount.go
new file mode 100644
index 0000000000000..443b8d07a9f3c
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/mount/mount.go
@@ -0,0 +1,131 @@
+package mount // import "github.com/docker/docker/api/types/mount"
+
+import (
+	"os"
+)
+
+// Type represents the type of a mount.
+type Type string
+
+// Type constants
+const (
+	// TypeBind is the type for mounting host dir
+	TypeBind Type = "bind"
+	// TypeVolume is the type for remote storage volumes
+	TypeVolume Type = "volume"
+	// TypeTmpfs is the type for mounting tmpfs
+	TypeTmpfs Type = "tmpfs"
+	// TypeNamedPipe is the type for mounting Windows named pipes
+	TypeNamedPipe Type = "npipe"
+)
+
+// Mount represents a mount (volume).
+type Mount struct {
+	Type Type `json:",omitempty"`
+	// Source specifies the name of the mount. Depending on mount type, this
+	// may be a volume name or a host path, or even ignored.
+	// Source is not supported for tmpfs (must be an empty value)
+	Source      string      `json:",omitempty"`
+	Target      string      `json:",omitempty"`
+	ReadOnly    bool        `json:",omitempty"`
+	Consistency Consistency `json:",omitempty"`
+
+	BindOptions   *BindOptions   `json:",omitempty"`
+	VolumeOptions *VolumeOptions `json:",omitempty"`
+	TmpfsOptions  *TmpfsOptions  `json:",omitempty"`
+}
+
+// Propagation represents the propagation of a mount.
+type Propagation string
+
+const (
+	// PropagationRPrivate RPRIVATE
+	PropagationRPrivate Propagation = "rprivate"
+	// PropagationPrivate PRIVATE
+	PropagationPrivate Propagation = "private"
+	// PropagationRShared RSHARED
+	PropagationRShared Propagation = "rshared"
+	// PropagationShared SHARED
+	PropagationShared Propagation = "shared"
+	// PropagationRSlave RSLAVE
+	PropagationRSlave Propagation = "rslave"
+	// PropagationSlave SLAVE
+	PropagationSlave Propagation = "slave"
+)
+
+// Propagations is the list of all valid mount propagations
+var Propagations = []Propagation{
+	PropagationRPrivate,
+	PropagationPrivate,
+	PropagationRShared,
+	PropagationShared,
+	PropagationRSlave,
+	PropagationSlave,
+}
+
+// Consistency represents the consistency requirements of a mount.
+type Consistency string
+
+const (
+	// ConsistencyFull guarantees bind mount-like consistency
+	ConsistencyFull Consistency = "consistent"
+	// ConsistencyCached mounts can cache read data and FS structure
+	ConsistencyCached Consistency = "cached"
+	// ConsistencyDelegated mounts can cache read and written data and structure
+	ConsistencyDelegated Consistency = "delegated"
+	// ConsistencyDefault provides "consistent" behavior unless overridden
+	ConsistencyDefault Consistency = "default"
+)
+
+// BindOptions defines options specific to mounts of type "bind".
+type BindOptions struct {
+	Propagation  Propagation `json:",omitempty"`
+	NonRecursive bool        `json:",omitempty"`
+}
+
+// VolumeOptions represents the options for a mount of type volume.
+type VolumeOptions struct {
+	NoCopy       bool              `json:",omitempty"`
+	Labels       map[string]string `json:",omitempty"`
+	DriverConfig *Driver           `json:",omitempty"`
+}
+
+// Driver represents a volume driver.
+type Driver struct {
+	Name    string            `json:",omitempty"`
+	Options map[string]string `json:",omitempty"`
+}
+
+// TmpfsOptions defines options specific to mounts of type "tmpfs".
+type TmpfsOptions struct {
+	// Size sets the size of the tmpfs, in bytes.
+	//
+	// This will be converted to an operating system specific value
+	// depending on the host. For example, on linux, it will be converted to
+	// use a 'k', 'm' or 'g' syntax. BSD, though not widely supported with
+	// docker, uses a straight byte value.
+	//
+	// Percentages are not supported.
+	SizeBytes int64 `json:",omitempty"`
+	// Mode of the tmpfs upon creation
+	Mode os.FileMode `json:",omitempty"`
+
+	// TODO(stevvooe): There are several more tmpfs flags, specified in the
+	// daemon, that are accepted. Only the most basic are added for now.
+	//
+	// From https://github.com/moby/sys/blob/mount/v0.1.1/mount/flags.go#L47-L56
+	//
+	// var validFlags = map[string]bool{
+	// 	"":          true,
+	// 	"size":      true, X
+	// 	"mode":      true, X
+	// 	"uid":       true,
+	// 	"gid":       true,
+	// 	"nr_inodes": true,
+	// 	"nr_blocks": true,
+	// 	"mpol":      true,
+	// }
+	//
+	// Some of these may be straightforward to add, but others, such as
+	// uid/gid have implications in a clustered system.
+}
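The `json:",omitempty"` tags throughout `Mount` and its option structs keep zero-valued fields and unused option blocks out of the serialized form, so a tmpfs mount carries no `BindOptions` or `VolumeOptions` noise. A stdlib-only sketch with local mirrors of a subset of the types (`marshalMount` is a hypothetical helper):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// tmpfsOptions and mount are local mirrors of a subset of the vendored
// types, enough to show the effect of the `,omitempty` tags.
type tmpfsOptions struct {
	SizeBytes int64       `json:",omitempty"`
	Mode      os.FileMode `json:",omitempty"`
}

type mount struct {
	Type         string        `json:",omitempty"`
	Target       string        `json:",omitempty"`
	ReadOnly     bool          `json:",omitempty"`
	TmpfsOptions *tmpfsOptions `json:",omitempty"`
}

// marshalMount returns the wire form, or "" on error.
func marshalMount(m mount) string {
	out, err := json.Marshal(m)
	if err != nil {
		return ""
	}
	return string(out)
}

func main() {
	m := mount{Type: "tmpfs", Target: "/run",
		TmpfsOptions: &tmpfsOptions{SizeBytes: 64 << 20}}
	// ReadOnly and Mode are zero values, so they are omitted entirely.
	fmt.Println(marshalMount(m))
}
```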
diff --git a/vendor/github.com/docker/docker/api/types/network/network.go b/vendor/github.com/docker/docker/api/types/network/network.go
new file mode 100644
index 0000000000000..437b184c67b5b
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/network/network.go
@@ -0,0 +1,126 @@
+package network // import "github.com/docker/docker/api/types/network"
+import (
+	"github.com/docker/docker/api/types/filters"
+)
+
+// Address represents an IP address
+type Address struct {
+	Addr      string
+	PrefixLen int
+}
+
+// IPAM represents IP Address Management
+type IPAM struct {
+	Driver  string
+	Options map[string]string // Per network IPAM driver options
+	Config  []IPAMConfig
+}
+
+// IPAMConfig represents IPAM configurations
+type IPAMConfig struct {
+	Subnet     string            `json:",omitempty"`
+	IPRange    string            `json:",omitempty"`
+	Gateway    string            `json:",omitempty"`
+	AuxAddress map[string]string `json:"AuxiliaryAddresses,omitempty"`
+}
+
+// EndpointIPAMConfig represents IPAM configurations for the endpoint
+type EndpointIPAMConfig struct {
+	IPv4Address  string   `json:",omitempty"`
+	IPv6Address  string   `json:",omitempty"`
+	LinkLocalIPs []string `json:",omitempty"`
+}
+
+// Copy makes a copy of the endpoint ipam config
+func (cfg *EndpointIPAMConfig) Copy() *EndpointIPAMConfig {
+	cfgCopy := *cfg
+	cfgCopy.LinkLocalIPs = make([]string, 0, len(cfg.LinkLocalIPs))
+	cfgCopy.LinkLocalIPs = append(cfgCopy.LinkLocalIPs, cfg.LinkLocalIPs...)
+	return &cfgCopy
+}
+
+// PeerInfo represents one peer of an overlay network
+type PeerInfo struct {
+	Name string
+	IP   string
+}
+
+// EndpointSettings stores the network endpoint details
+type EndpointSettings struct {
+	// Configurations
+	IPAMConfig *EndpointIPAMConfig
+	Links      []string
+	Aliases    []string
+	// Operational data
+	NetworkID           string
+	EndpointID          string
+	Gateway             string
+	IPAddress           string
+	IPPrefixLen         int
+	IPv6Gateway         string
+	GlobalIPv6Address   string
+	GlobalIPv6PrefixLen int
+	MacAddress          string
+	DriverOpts          map[string]string
+}
+
+// Task carries the information about one backend task
+type Task struct {
+	Name       string
+	EndpointID string
+	EndpointIP string
+	Info       map[string]string
+}
+
+// ServiceInfo represents service parameters with the list of service's tasks
+type ServiceInfo struct {
+	VIP          string
+	Ports        []string
+	LocalLBIndex int
+	Tasks        []Task
+}
+
+// Copy makes a deep copy of `EndpointSettings`
+func (es *EndpointSettings) Copy() *EndpointSettings {
+	epCopy := *es
+	if es.IPAMConfig != nil {
+		epCopy.IPAMConfig = es.IPAMConfig.Copy()
+	}
+
+	if es.Links != nil {
+		links := make([]string, 0, len(es.Links))
+		epCopy.Links = append(links, es.Links...)
+	}
+
+	if es.Aliases != nil {
+		aliases := make([]string, 0, len(es.Aliases))
+		epCopy.Aliases = append(aliases, es.Aliases...)
+	}
+	return &epCopy
+}
+
+// NetworkingConfig represents the container's networking configuration for each of its interfaces
+// Carries the networking configs specified in the `docker run` and `docker network connect` commands
+type NetworkingConfig struct {
+	EndpointsConfig map[string]*EndpointSettings // Endpoint configs for each connecting network
+}
+
+// ConfigReference specifies the source which provides a network's configuration
+type ConfigReference struct {
+	Network string
+}
+
+var acceptedFilters = map[string]bool{
+	"dangling": true,
+	"driver":   true,
+	"id":       true,
+	"label":    true,
+	"name":     true,
+	"scope":    true,
+	"type":     true,
+}
+
+// ValidateFilters validates the list of filter args with the available filters.
+func ValidateFilters(filter filters.Args) error {
+	return filter.Validate(acceptedFilters)
+}
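`ValidateFilters` above delegates to `Args.Validate`, which rejects any filter key not present in the accepted set. A stdlib-only sketch of the same check (`validateFilters` is a hypothetical local re-implementation; it returns a plain error rather than the vendored `invalidFilter` type):

```go
package main

import "fmt"

// validateFilters mirrors Args.Validate: every filter key supplied by the
// caller must appear in the accepted set.
func validateFilters(keys []string, accepted map[string]bool) error {
	for _, k := range keys {
		if !accepted[k] {
			return fmt.Errorf("Invalid filter '%s'", k)
		}
	}
	return nil
}

func main() {
	// The accepted keys for network filters, copied from the table above.
	accepted := map[string]bool{
		"dangling": true, "driver": true, "id": true, "label": true,
		"name": true, "scope": true, "type": true,
	}
	fmt.Println(validateFilters([]string{"driver", "label"}, accepted))
	fmt.Println(validateFilters([]string{"foo"}, accepted))
}
```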
diff --git a/vendor/github.com/docker/docker/api/types/plugin.go b/vendor/github.com/docker/docker/api/types/plugin.go
new file mode 100644
index 0000000000000..abae48b9ab010
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/plugin.go
@@ -0,0 +1,203 @@
+package types
+
+// This file was generated by the swagger tool.
+// Editing this file might prove futile when you re-run the swagger generate command
+
+// Plugin A plugin for the Engine API
+// swagger:model Plugin
+type Plugin struct {
+
+	// config
+	// Required: true
+	Config PluginConfig `json:"Config"`
+
+	// True if the plugin is running. False if the plugin is not running, only installed.
+	// Required: true
+	Enabled bool `json:"Enabled"`
+
+	// Id
+	ID string `json:"Id,omitempty"`
+
+	// name
+	// Required: true
+	Name string `json:"Name"`
+
+	// plugin remote reference used to push/pull the plugin
+	PluginReference string `json:"PluginReference,omitempty"`
+
+	// settings
+	// Required: true
+	Settings PluginSettings `json:"Settings"`
+}
+
+// PluginConfig The config of a plugin.
+// swagger:model PluginConfig
+type PluginConfig struct {
+
+	// args
+	// Required: true
+	Args PluginConfigArgs `json:"Args"`
+
+	// description
+	// Required: true
+	Description string `json:"Description"`
+
+	// Docker Version used to create the plugin
+	DockerVersion string `json:"DockerVersion,omitempty"`
+
+	// documentation
+	// Required: true
+	Documentation string `json:"Documentation"`
+
+	// entrypoint
+	// Required: true
+	Entrypoint []string `json:"Entrypoint"`
+
+	// env
+	// Required: true
+	Env []PluginEnv `json:"Env"`
+
+	// interface
+	// Required: true
+	Interface PluginConfigInterface `json:"Interface"`
+
+	// ipc host
+	// Required: true
+	IpcHost bool `json:"IpcHost"`
+
+	// linux
+	// Required: true
+	Linux PluginConfigLinux `json:"Linux"`
+
+	// mounts
+	// Required: true
+	Mounts []PluginMount `json:"Mounts"`
+
+	// network
+	// Required: true
+	Network PluginConfigNetwork `json:"Network"`
+
+	// pid host
+	// Required: true
+	PidHost bool `json:"PidHost"`
+
+	// propagated mount
+	// Required: true
+	PropagatedMount string `json:"PropagatedMount"`
+
+	// user
+	User PluginConfigUser `json:"User,omitempty"`
+
+	// work dir
+	// Required: true
+	WorkDir string `json:"WorkDir"`
+
+	// rootfs
+	Rootfs *PluginConfigRootfs `json:"rootfs,omitempty"`
+}
+
+// PluginConfigArgs plugin config args
+// swagger:model PluginConfigArgs
+type PluginConfigArgs struct {
+
+	// description
+	// Required: true
+	Description string `json:"Description"`
+
+	// name
+	// Required: true
+	Name string `json:"Name"`
+
+	// settable
+	// Required: true
+	Settable []string `json:"Settable"`
+
+	// value
+	// Required: true
+	Value []string `json:"Value"`
+}
+
+// PluginConfigInterface The interface between Docker and the plugin
+// swagger:model PluginConfigInterface
+type PluginConfigInterface struct {
+
+	// Protocol to use for clients connecting to the plugin.
+	ProtocolScheme string `json:"ProtocolScheme,omitempty"`
+
+	// socket
+	// Required: true
+	Socket string `json:"Socket"`
+
+	// types
+	// Required: true
+	Types []PluginInterfaceType `json:"Types"`
+}
+
+// PluginConfigLinux plugin config linux
+// swagger:model PluginConfigLinux
+type PluginConfigLinux struct {
+
+	// allow all devices
+	// Required: true
+	AllowAllDevices bool `json:"AllowAllDevices"`
+
+	// capabilities
+	// Required: true
+	Capabilities []string `json:"Capabilities"`
+
+	// devices
+	// Required: true
+	Devices []PluginDevice `json:"Devices"`
+}
+
+// PluginConfigNetwork plugin config network
+// swagger:model PluginConfigNetwork
+type PluginConfigNetwork struct {
+
+	// type
+	// Required: true
+	Type string `json:"Type"`
+}
+
+// PluginConfigRootfs plugin config rootfs
+// swagger:model PluginConfigRootfs
+type PluginConfigRootfs struct {
+
+	// diff ids
+	DiffIds []string `json:"diff_ids"`
+
+	// type
+	Type string `json:"type,omitempty"`
+}
+
+// PluginConfigUser plugin config user
+// swagger:model PluginConfigUser
+type PluginConfigUser struct {
+
+	// g ID
+	GID uint32 `json:"GID,omitempty"`
+
+	// UID
+	UID uint32 `json:"UID,omitempty"`
+}
+
+// PluginSettings Settings that can be modified by users.
+// swagger:model PluginSettings
+type PluginSettings struct {
+
+	// args
+	// Required: true
+	Args []string `json:"Args"`
+
+	// devices
+	// Required: true
+	Devices []PluginDevice `json:"Devices"`
+
+	// env
+	// Required: true
+	Env []string `json:"Env"`
+
+	// mounts
+	// Required: true
+	Mounts []PluginMount `json:"Mounts"`
+}
diff --git a/vendor/github.com/docker/docker/api/types/plugin_device.go b/vendor/github.com/docker/docker/api/types/plugin_device.go
new file mode 100644
index 0000000000000..569901067559b
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/plugin_device.go
@@ -0,0 +1,25 @@
+package types
+
+// This file was generated by the swagger tool.
+// Editing this file might prove futile when you re-run the swagger generate command
+
+// PluginDevice plugin device
+// swagger:model PluginDevice
+type PluginDevice struct {
+
+	// description
+	// Required: true
+	Description string `json:"Description"`
+
+	// name
+	// Required: true
+	Name string `json:"Name"`
+
+	// path
+	// Required: true
+	Path *string `json:"Path"`
+
+	// settable
+	// Required: true
+	Settable []string `json:"Settable"`
+}
diff --git a/vendor/github.com/docker/docker/api/types/plugin_env.go b/vendor/github.com/docker/docker/api/types/plugin_env.go
new file mode 100644
index 0000000000000..32962dc2ebeab
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/plugin_env.go
@@ -0,0 +1,25 @@
+package types
+
+// This file was generated by the swagger tool.
+// Editing this file might prove futile when you re-run the swagger generate command
+
+// PluginEnv plugin env
+// swagger:model PluginEnv
+type PluginEnv struct {
+
+	// description
+	// Required: true
+	Description string `json:"Description"`
+
+	// name
+	// Required: true
+	Name string `json:"Name"`
+
+	// settable
+	// Required: true
+	Settable []string `json:"Settable"`
+
+	// value
+	// Required: true
+	Value *string `json:"Value"`
+}
diff --git a/vendor/github.com/docker/docker/api/types/plugin_interface_type.go b/vendor/github.com/docker/docker/api/types/plugin_interface_type.go
new file mode 100644
index 0000000000000..c82f204e87080
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/plugin_interface_type.go
@@ -0,0 +1,21 @@
+package types
+
+// This file was generated by the swagger tool.
+// Editing this file might prove futile when you re-run the swagger generate command
+
+// PluginInterfaceType plugin interface type
+// swagger:model PluginInterfaceType
+type PluginInterfaceType struct {
+
+	// capability
+	// Required: true
+	Capability string `json:"Capability"`
+
+	// prefix
+	// Required: true
+	Prefix string `json:"Prefix"`
+
+	// version
+	// Required: true
+	Version string `json:"Version"`
+}
diff --git a/vendor/github.com/docker/docker/api/types/plugin_mount.go b/vendor/github.com/docker/docker/api/types/plugin_mount.go
new file mode 100644
index 0000000000000..5c031cf8b5cc0
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/plugin_mount.go
@@ -0,0 +1,37 @@
+package types
+
+// This file was generated by the swagger tool.
+// Editing this file might prove futile when you re-run the swagger generate command
+
+// PluginMount plugin mount
+// swagger:model PluginMount
+type PluginMount struct {
+
+	// description
+	// Required: true
+	Description string `json:"Description"`
+
+	// destination
+	// Required: true
+	Destination string `json:"Destination"`
+
+	// name
+	// Required: true
+	Name string `json:"Name"`
+
+	// options
+	// Required: true
+	Options []string `json:"Options"`
+
+	// settable
+	// Required: true
+	Settable []string `json:"Settable"`
+
+	// source
+	// Required: true
+	Source *string `json:"Source"`
+
+	// type
+	// Required: true
+	Type string `json:"Type"`
+}
diff --git a/vendor/github.com/docker/docker/api/types/plugin_responses.go b/vendor/github.com/docker/docker/api/types/plugin_responses.go
new file mode 100644
index 0000000000000..60d1fb5ad8550
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/plugin_responses.go
@@ -0,0 +1,71 @@
+package types // import "github.com/docker/docker/api/types"
+
+import (
+	"encoding/json"
+	"fmt"
+	"sort"
+)
+
+// PluginsListResponse contains the response for the Engine API
+type PluginsListResponse []*Plugin
+
+// UnmarshalJSON implements json.Unmarshaler for PluginInterfaceType
+func (t *PluginInterfaceType) UnmarshalJSON(p []byte) error {
+	versionIndex := len(p)
+	prefixIndex := 0
+	if len(p) < 2 || p[0] != '"' || p[len(p)-1] != '"' {
+		return fmt.Errorf("%q is not a plugin interface type", p)
+	}
+	p = p[1 : len(p)-1]
+loop:
+	for i, b := range p {
+		switch b {
+		case '.':
+			prefixIndex = i
+		case '/':
+			versionIndex = i
+			break loop
+		}
+	}
+	t.Prefix = string(p[:prefixIndex])
+	t.Capability = string(p[prefixIndex+1 : versionIndex])
+	if versionIndex < len(p) {
+		t.Version = string(p[versionIndex+1:])
+	}
+	return nil
+}
+
+// MarshalJSON implements json.Marshaler for PluginInterfaceType
+func (t *PluginInterfaceType) MarshalJSON() ([]byte, error) {
+	return json.Marshal(t.String())
+}
+
+// String implements fmt.Stringer for PluginInterfaceType
+func (t PluginInterfaceType) String() string {
+	return fmt.Sprintf("%s.%s/%s", t.Prefix, t.Capability, t.Version)
+}
+
+// PluginPrivilege describes a permission the user has to accept
+// upon installing a plugin.
+type PluginPrivilege struct {
+	Name        string
+	Description string
+	Value       []string
+}
+
+// PluginPrivileges is a list of PluginPrivilege
+type PluginPrivileges []PluginPrivilege
+
+func (s PluginPrivileges) Len() int {
+	return len(s)
+}
+
+func (s PluginPrivileges) Less(i, j int) bool {
+	return s[i].Name < s[j].Name
+}
+
+func (s PluginPrivileges) Swap(i, j int) {
+	sort.Strings(s[i].Value)
+	sort.Strings(s[j].Value)
+	s[i], s[j] = s[j], s[i]
+}
diff --git a/vendor/github.com/docker/docker/api/types/port.go b/vendor/github.com/docker/docker/api/types/port.go
new file mode 100644
index 0000000000000..d91234744c6bc
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/port.go
@@ -0,0 +1,23 @@
+package types
+
+// This file was generated by the swagger tool.
+// Editing this file might prove futile when you re-run the swagger generate command
+
+// Port An open port on a container
+// swagger:model Port
+type Port struct {
+
+	// Host IP address that the container's port is mapped to
+	IP string `json:"IP,omitempty"`
+
+	// Port on the container
+	// Required: true
+	PrivatePort uint16 `json:"PrivatePort"`
+
+	// Port exposed on the host
+	PublicPort uint16 `json:"PublicPort,omitempty"`
+
+	// type
+	// Required: true
+	Type string `json:"Type"`
+}
diff --git a/vendor/github.com/docker/docker/api/types/registry/authenticate.go b/vendor/github.com/docker/docker/api/types/registry/authenticate.go
new file mode 100644
index 0000000000000..f0a2113e405a1
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/registry/authenticate.go
@@ -0,0 +1,21 @@
+package registry // import "github.com/docker/docker/api/types/registry"
+
+// ----------------------------------------------------------------------------
+// DO NOT EDIT THIS FILE
+// This file was generated by `swagger generate operation`
+//
+// See hack/generate-swagger-api.sh
+// ----------------------------------------------------------------------------
+
+// AuthenticateOKBody authenticate o k body
+// swagger:model AuthenticateOKBody
+type AuthenticateOKBody struct {
+
+	// An opaque token used to authenticate a user after a successful login
+	// Required: true
+	IdentityToken string `json:"IdentityToken"`
+
+	// The status of the authentication
+	// Required: true
+	Status string `json:"Status"`
+}
diff --git a/vendor/github.com/docker/docker/api/types/registry/registry.go b/vendor/github.com/docker/docker/api/types/registry/registry.go
new file mode 100644
index 0000000000000..62a88f5be89d5
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/registry/registry.go
@@ -0,0 +1,120 @@
+package registry // import "github.com/docker/docker/api/types/registry"
+
+import (
+	"encoding/json"
+	"net"
+
+	v1 "github.com/opencontainers/image-spec/specs-go/v1"
+)
+
+// ServiceConfig stores daemon registry services configuration.
+type ServiceConfig struct {
+	AllowNondistributableArtifactsCIDRs     []*NetIPNet
+	AllowNondistributableArtifactsHostnames []string
+	InsecureRegistryCIDRs                   []*NetIPNet           `json:"InsecureRegistryCIDRs"`
+	IndexConfigs                            map[string]*IndexInfo `json:"IndexConfigs"`
+	Mirrors                                 []string
+}
+
+// NetIPNet is the net.IPNet type, which can be marshalled and
+// unmarshalled to JSON
+type NetIPNet net.IPNet
+
+// String returns the CIDR notation of ipnet
+func (ipnet *NetIPNet) String() string {
+	return (*net.IPNet)(ipnet).String()
+}
+
+// MarshalJSON returns the JSON representation of the IPNet
+func (ipnet *NetIPNet) MarshalJSON() ([]byte, error) {
+	return json.Marshal((*net.IPNet)(ipnet).String())
+}
+
+// UnmarshalJSON sets the IPNet from a byte array of JSON
+func (ipnet *NetIPNet) UnmarshalJSON(b []byte) (err error) {
+	var ipnetStr string
+	if err = json.Unmarshal(b, &ipnetStr); err == nil {
+		var cidr *net.IPNet
+		if _, cidr, err = net.ParseCIDR(ipnetStr); err == nil {
+			*ipnet = NetIPNet(*cidr)
+		}
+	}
+	return
+}
+
+// IndexInfo contains information about a registry
+//
+// RepositoryInfo Examples:
+//
+//	{
+//	  "Index" : {
+//	    "Name" : "docker.io",
+//	    "Mirrors" : ["https://registry-2.docker.io/v1/", "https://registry-3.docker.io/v1/"],
+//	    "Secure" : true,
+//	    "Official" : true
+//	  },
+//	  "RemoteName" : "library/debian",
+//	  "LocalName" : "debian",
+//	  "CanonicalName" : "docker.io/debian",
+//	  "Official" : true
+//	}
+//
+//	{
+//	  "Index" : {
+//	    "Name" : "127.0.0.1:5000",
+//	    "Mirrors" : [],
+//	    "Secure" : false,
+//	    "Official" : false
+//	  },
+//	  "RemoteName" : "user/repo",
+//	  "LocalName" : "127.0.0.1:5000/user/repo",
+//	  "CanonicalName" : "127.0.0.1:5000/user/repo",
+//	  "Official" : false
+//	}
+type IndexInfo struct {
+	// Name is the name of the registry, such as "docker.io"
+	Name string
+	// Mirrors is a list of mirrors, expressed as URIs
+	Mirrors []string
+	// Secure is set to false if the registry is part of the list of
+	// insecure registries. Insecure registries accept HTTP and/or accept
+	// HTTPS with certificates from unknown CAs.
+	Secure bool
+	// Official indicates whether this is an official registry
+	Official bool
+}
+
+// SearchResult describes a search result returned from a registry
+type SearchResult struct {
+	// StarCount indicates the number of stars this repository has
+	StarCount int `json:"star_count"`
+	// IsOfficial is true if the result is from an official repository.
+	IsOfficial bool `json:"is_official"`
+	// Name is the name of the repository
+	Name string `json:"name"`
+	// IsAutomated indicates whether the result is automated
+	IsAutomated bool `json:"is_automated"`
+	// Description is a textual description of the repository
+	Description string `json:"description"`
+}
+
+// SearchResults lists a collection of search results returned from a registry
+type SearchResults struct {
+	// Query contains the query string that generated the search results
+	Query string `json:"query"`
+	// NumResults indicates the number of results the query returned
+	NumResults int `json:"num_results"`
+	// Results is a slice containing the actual results for the search
+	Results []SearchResult `json:"results"`
+}
+
+// DistributionInspect describes the result obtained from contacting the
+// registry to retrieve image metadata
+type DistributionInspect struct {
+	// Descriptor contains information about the manifest, including
+	// the content addressable digest
+	Descriptor v1.Descriptor
+	// Platforms contains the list of platforms supported by the image,
+	// obtained by parsing the manifest
+	Platforms []v1.Platform
+}
diff --git a/vendor/github.com/docker/docker/api/types/service_update_response.go b/vendor/github.com/docker/docker/api/types/service_update_response.go
new file mode 100644
index 0000000000000..74ea64b1bb671
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/service_update_response.go
@@ -0,0 +1,12 @@
+package types
+
+// This file was generated by the swagger tool.
+// Editing this file might prove futile when you re-run the swagger generate command
+
+// ServiceUpdateResponse service update response
+// swagger:model ServiceUpdateResponse
+type ServiceUpdateResponse struct {
+
+	// Optional warning messages
+	Warnings []string `json:"Warnings"`
+}
diff --git a/vendor/github.com/docker/docker/api/types/stats.go b/vendor/github.com/docker/docker/api/types/stats.go
new file mode 100644
index 0000000000000..20daebed14bd2
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/stats.go
@@ -0,0 +1,181 @@
+// Package types provides stable API types for the responses returned to
+// consumers of the API stats endpoint.
+package types // import "github.com/docker/docker/api/types"
+
+import "time"
+
+// ThrottlingData stores CPU throttling stats of one running container.
+// Not used on Windows.
+type ThrottlingData struct {
+	// Number of periods with throttling active
+	Periods uint64 `json:"periods"`
+	// Number of periods when the container hits its throttling limit.
+	ThrottledPeriods uint64 `json:"throttled_periods"`
+	// Aggregate time the container was throttled for in nanoseconds.
+	ThrottledTime uint64 `json:"throttled_time"`
+}
+
+// CPUUsage stores all CPU stats aggregated since container inception.
+type CPUUsage struct {
+	// Total CPU time consumed.
+	// Units: nanoseconds (Linux)
+	// Units: 100's of nanoseconds (Windows)
+	TotalUsage uint64 `json:"total_usage"`
+
+	// Total CPU time consumed per core (Linux). Not used on Windows.
+	// Units: nanoseconds.
+	PercpuUsage []uint64 `json:"percpu_usage,omitempty"`
+
+	// Time spent by tasks of the cgroup in kernel mode (Linux).
+	// Time spent by all container processes in kernel mode (Windows).
+	// Units: nanoseconds (Linux).
+	// Units: 100's of nanoseconds (Windows). Not populated for Hyper-V Containers.
+	UsageInKernelmode uint64 `json:"usage_in_kernelmode"`
+
+	// Time spent by tasks of the cgroup in user mode (Linux).
+	// Time spent by all container processes in user mode (Windows).
+	// Units: nanoseconds (Linux).
+	// Units: 100's of nanoseconds (Windows). Not populated for Hyper-V Containers
+	UsageInUsermode uint64 `json:"usage_in_usermode"`
+}
+
+// CPUStats aggregates and wraps all CPU related info of container
+type CPUStats struct {
+	// CPU Usage. Linux and Windows.
+	CPUUsage CPUUsage `json:"cpu_usage"`
+
+	// System Usage. Linux only.
+	SystemUsage uint64 `json:"system_cpu_usage,omitempty"`
+
+	// Online CPUs. Linux only.
+	OnlineCPUs uint32 `json:"online_cpus,omitempty"`
+
+	// Throttling Data. Linux only.
+	ThrottlingData ThrottlingData `json:"throttling_data,omitempty"`
+}
+
+// MemoryStats aggregates all memory stats since container inception on Linux.
+// Windows returns stats for commit and private working set only.
+type MemoryStats struct {
+	// Linux Memory Stats
+
+	// current res_counter usage for memory
+	Usage uint64 `json:"usage,omitempty"`
+	// maximum usage ever recorded.
+	MaxUsage uint64 `json:"max_usage,omitempty"`
+	// TODO(vishh): Export these as stronger types.
+	// all the stats exported via memory.stat.
+	Stats map[string]uint64 `json:"stats,omitempty"`
+	// number of times memory usage hits limits.
+	Failcnt uint64 `json:"failcnt,omitempty"`
+	Limit   uint64 `json:"limit,omitempty"`
+
+	// Windows Memory Stats
+	// See https://technet.microsoft.com/en-us/magazine/ff382715.aspx
+
+	// committed bytes
+	Commit uint64 `json:"commitbytes,omitempty"`
+	// peak committed bytes
+	CommitPeak uint64 `json:"commitpeakbytes,omitempty"`
+	// private working set
+	PrivateWorkingSet uint64 `json:"privateworkingset,omitempty"`
+}
+
+// BlkioStatEntry is one small entity to store a piece of Blkio stats
+// Not used on Windows.
+type BlkioStatEntry struct {
+	Major uint64 `json:"major"`
+	Minor uint64 `json:"minor"`
+	Op    string `json:"op"`
+	Value uint64 `json:"value"`
+}
+
+// BlkioStats stores all IO service stats for data read and write.
+// This is a Linux-specific structure, as the differences between expressing
+// block I/O on Windows and Linux are significant enough that attempting to
+// morph them into a combined structure would make little sense.
+type BlkioStats struct {
+	// number of bytes transferred to and from the block device
+	IoServiceBytesRecursive []BlkioStatEntry `json:"io_service_bytes_recursive"`
+	IoServicedRecursive     []BlkioStatEntry `json:"io_serviced_recursive"`
+	IoQueuedRecursive       []BlkioStatEntry `json:"io_queue_recursive"`
+	IoServiceTimeRecursive  []BlkioStatEntry `json:"io_service_time_recursive"`
+	IoWaitTimeRecursive     []BlkioStatEntry `json:"io_wait_time_recursive"`
+	IoMergedRecursive       []BlkioStatEntry `json:"io_merged_recursive"`
+	IoTimeRecursive         []BlkioStatEntry `json:"io_time_recursive"`
+	SectorsRecursive        []BlkioStatEntry `json:"sectors_recursive"`
+}
+
+// StorageStats is the disk I/O stats for read/write on Windows.
+type StorageStats struct {
+	ReadCountNormalized  uint64 `json:"read_count_normalized,omitempty"`
+	ReadSizeBytes        uint64 `json:"read_size_bytes,omitempty"`
+	WriteCountNormalized uint64 `json:"write_count_normalized,omitempty"`
+	WriteSizeBytes       uint64 `json:"write_size_bytes,omitempty"`
+}
+
+// NetworkStats aggregates the network stats of one container
+type NetworkStats struct {
+	// Bytes received. Windows and Linux.
+	RxBytes uint64 `json:"rx_bytes"`
+	// Packets received. Windows and Linux.
+	RxPackets uint64 `json:"rx_packets"`
+	// Received errors. Not used on Windows. Note that we don't `omitempty` this
+	// field as it is expected in the >=v1.21 API stats structure.
+	RxErrors uint64 `json:"rx_errors"`
+	// Incoming packets dropped. Windows and Linux.
+	RxDropped uint64 `json:"rx_dropped"`
+	// Bytes sent. Windows and Linux.
+	TxBytes uint64 `json:"tx_bytes"`
+	// Packets sent. Windows and Linux.
+	TxPackets uint64 `json:"tx_packets"`
+	// Sent errors. Not used on Windows. Note that we don't `omitempty` this
+	// field as it is expected in the >=v1.21 API stats structure.
+	TxErrors uint64 `json:"tx_errors"`
+	// Outgoing packets dropped. Windows and Linux.
+	TxDropped uint64 `json:"tx_dropped"`
+	// Endpoint ID. Not used on Linux.
+	EndpointID string `json:"endpoint_id,omitempty"`
+	// Instance ID. Not used on Linux.
+	InstanceID string `json:"instance_id,omitempty"`
+}
+
+// PidsStats contains the stats of a container's pids
+type PidsStats struct {
+	// Current is the number of pids in the cgroup
+	Current uint64 `json:"current,omitempty"`
+	// Limit is the hard limit on the number of pids in the cgroup.
+	// A "Limit" of 0 means that there is no limit.
+	Limit uint64 `json:"limit,omitempty"`
+}
+
+// Stats is the top-level struct aggregating all types of stats of one container
+type Stats struct {
+	// Common stats
+	Read    time.Time `json:"read"`
+	PreRead time.Time `json:"preread"`
+
+	// Linux specific stats, not populated on Windows.
+	PidsStats  PidsStats  `json:"pids_stats,omitempty"`
+	BlkioStats BlkioStats `json:"blkio_stats,omitempty"`
+
+	// Windows specific stats, not populated on Linux.
+	NumProcs     uint32       `json:"num_procs"`
+	StorageStats StorageStats `json:"storage_stats,omitempty"`
+
+	// Shared stats
+	CPUStats    CPUStats    `json:"cpu_stats,omitempty"`
+	PreCPUStats CPUStats    `json:"precpu_stats,omitempty"` // "Pre"="Previous"
+	MemoryStats MemoryStats `json:"memory_stats,omitempty"`
+}
+
+// StatsJSON extends Stats with the container name, ID, and per-network stats
+type StatsJSON struct {
+	Stats
+
+	Name string `json:"name,omitempty"`
+	ID   string `json:"id,omitempty"`
+
+	// Networks request version >=1.21
+	Networks map[string]NetworkStats `json:"networks,omitempty"`
+}
diff --git a/vendor/github.com/docker/docker/api/types/strslice/strslice.go b/vendor/github.com/docker/docker/api/types/strslice/strslice.go
new file mode 100644
index 0000000000000..82921cebc1502
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/strslice/strslice.go
@@ -0,0 +1,30 @@
+package strslice // import "github.com/docker/docker/api/types/strslice"
+
+import "encoding/json"
+
+// StrSlice represents a string or an array of strings.
+// We need to override the json decoder to accept both options.
+type StrSlice []string
+
+// UnmarshalJSON decodes the byte slice whether it's a string or an array of
+// strings. This method is needed to implement json.Unmarshaler.
+func (e *StrSlice) UnmarshalJSON(b []byte) error {
+	if len(b) == 0 {
+		// With no input, we preserve the existing value by returning nil and
+		// leaving the target alone. This allows defining default values for
+		// the type.
+		return nil
+	}
+
+	p := make([]string, 0, 1)
+	if err := json.Unmarshal(b, &p); err != nil {
+		var s string
+		if err := json.Unmarshal(b, &s); err != nil {
+			return err
+		}
+		p = append(p, s)
+	}
+
+	*e = p
+	return nil
+}
diff --git a/vendor/github.com/docker/docker/api/types/swarm/common.go b/vendor/github.com/docker/docker/api/types/swarm/common.go
new file mode 100644
index 0000000000000..ef020f458bd4c
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/swarm/common.go
@@ -0,0 +1,40 @@
+package swarm // import "github.com/docker/docker/api/types/swarm"
+
+import "time"
+
+// Version represents the internal object version.
+type Version struct {
+	Index uint64 `json:",omitempty"`
+}
+
+// Meta is a base object inherited by most of the other ones.
+type Meta struct {
+	Version   Version   `json:",omitempty"`
+	CreatedAt time.Time `json:",omitempty"`
+	UpdatedAt time.Time `json:",omitempty"`
+}
+
+// Annotations represents how to describe an object.
+type Annotations struct {
+	Name   string            `json:",omitempty"`
+	Labels map[string]string `json:"Labels"`
+}
+
+// Driver represents a driver (network, logging, secrets backend).
+type Driver struct {
+	Name    string            `json:",omitempty"`
+	Options map[string]string `json:",omitempty"`
+}
+
+// TLSInfo represents the TLS information about what CA certificate is trusted,
+// and who the issuer for a TLS certificate is
+type TLSInfo struct {
+	// TrustRoot is the trusted CA root certificate in PEM format
+	TrustRoot string `json:",omitempty"`
+
+	// CertIssuer is the raw subject bytes of the issuer
+	CertIssuerSubject []byte `json:",omitempty"`
+
+	// CertIssuerPublicKey is the raw public key bytes of the issuer
+	CertIssuerPublicKey []byte `json:",omitempty"`
+}
diff --git a/vendor/github.com/docker/docker/api/types/swarm/config.go b/vendor/github.com/docker/docker/api/types/swarm/config.go
new file mode 100644
index 0000000000000..16202ccce6151
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/swarm/config.go
@@ -0,0 +1,40 @@
+package swarm // import "github.com/docker/docker/api/types/swarm"
+
+import "os"
+
+// Config represents a config.
+type Config struct {
+	ID string
+	Meta
+	Spec ConfigSpec
+}
+
+// ConfigSpec represents a config specification from a config in swarm
+type ConfigSpec struct {
+	Annotations
+	Data []byte `json:",omitempty"`
+
+	// Templating controls whether and how to evaluate the config payload as
+	// a template. If it is not set, no templating is used.
+	Templating *Driver `json:",omitempty"`
+}
+
+// ConfigReferenceFileTarget is a file target in a config reference
+type ConfigReferenceFileTarget struct {
+	Name string
+	UID  string
+	GID  string
+	Mode os.FileMode
+}
+
+// ConfigReferenceRuntimeTarget is a target for a config specifying that it
+// isn't mounted into the container but instead has some other purpose.
+type ConfigReferenceRuntimeTarget struct{}
+
+// ConfigReference is a reference to a config in swarm
+type ConfigReference struct {
+	File       *ConfigReferenceFileTarget    `json:",omitempty"`
+	Runtime    *ConfigReferenceRuntimeTarget `json:",omitempty"`
+	ConfigID   string
+	ConfigName string
+}
diff --git a/vendor/github.com/docker/docker/api/types/swarm/container.go b/vendor/github.com/docker/docker/api/types/swarm/container.go
new file mode 100644
index 0000000000000..af5e1c0bc2799
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/swarm/container.go
@@ -0,0 +1,80 @@
+package swarm // import "github.com/docker/docker/api/types/swarm"
+
+import (
+	"time"
+
+	"github.com/docker/docker/api/types/container"
+	"github.com/docker/docker/api/types/mount"
+	"github.com/docker/go-units"
+)
+
+// DNSConfig specifies DNS-related configuration in the resolver configuration file (resolv.conf).
+// Detailed documentation is available at:
+// http://man7.org/linux/man-pages/man5/resolv.conf.5.html
+// The `nameserver`, `search`, and `options` directives are supported.
+// TODO: `domain` is not supported yet.
+type DNSConfig struct {
+	// Nameservers specifies the IP addresses of the name servers
+	Nameservers []string `json:",omitempty"`
+	// Search specifies the search list for host-name lookup
+	Search []string `json:",omitempty"`
+	// Options allows certain internal resolver variables to be modified
+	Options []string `json:",omitempty"`
+}
+
+// SELinuxContext contains the SELinux labels of the container.
+type SELinuxContext struct {
+	Disable bool
+
+	User  string
+	Role  string
+	Type  string
+	Level string
+}
+
+// CredentialSpec for managed service account (Windows only)
+type CredentialSpec struct {
+	Config   string
+	File     string
+	Registry string
+}
+
+// Privileges defines the security options for the container.
+type Privileges struct {
+	CredentialSpec *CredentialSpec
+	SELinuxContext *SELinuxContext
+}
+
+// ContainerSpec represents the spec of a container.
+type ContainerSpec struct {
+	Image           string                  `json:",omitempty"`
+	Labels          map[string]string       `json:",omitempty"`
+	Command         []string                `json:",omitempty"`
+	Args            []string                `json:",omitempty"`
+	Hostname        string                  `json:",omitempty"`
+	Env             []string                `json:",omitempty"`
+	Dir             string                  `json:",omitempty"`
+	User            string                  `json:",omitempty"`
+	Groups          []string                `json:",omitempty"`
+	Privileges      *Privileges             `json:",omitempty"`
+	Init            *bool                   `json:",omitempty"`
+	StopSignal      string                  `json:",omitempty"`
+	TTY             bool                    `json:",omitempty"`
+	OpenStdin       bool                    `json:",omitempty"`
+	ReadOnly        bool                    `json:",omitempty"`
+	Mounts          []mount.Mount           `json:",omitempty"`
+	StopGracePeriod *time.Duration          `json:",omitempty"`
+	Healthcheck     *container.HealthConfig `json:",omitempty"`
+	// The format of extra hosts on swarmkit is specified in:
+	// http://man7.org/linux/man-pages/man5/hosts.5.html
+	//    IP_address canonical_hostname [aliases...]
+	Hosts          []string            `json:",omitempty"`
+	DNSConfig      *DNSConfig          `json:",omitempty"`
+	Secrets        []*SecretReference  `json:",omitempty"`
+	Configs        []*ConfigReference  `json:",omitempty"`
+	Isolation      container.Isolation `json:",omitempty"`
+	Sysctls        map[string]string   `json:",omitempty"`
+	CapabilityAdd  []string            `json:",omitempty"`
+	CapabilityDrop []string            `json:",omitempty"`
+	Ulimits        []*units.Ulimit     `json:",omitempty"`
+}
diff --git a/vendor/github.com/docker/docker/api/types/swarm/network.go b/vendor/github.com/docker/docker/api/types/swarm/network.go
new file mode 100644
index 0000000000000..98ef3284d1da0
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/swarm/network.go
@@ -0,0 +1,121 @@
+package swarm // import "github.com/docker/docker/api/types/swarm"
+
+import (
+	"github.com/docker/docker/api/types/network"
+)
+
+// Endpoint represents an endpoint.
+type Endpoint struct {
+	Spec       EndpointSpec        `json:",omitempty"`
+	Ports      []PortConfig        `json:",omitempty"`
+	VirtualIPs []EndpointVirtualIP `json:",omitempty"`
+}
+
+// EndpointSpec represents the spec of an endpoint.
+type EndpointSpec struct {
+	Mode  ResolutionMode `json:",omitempty"`
+	Ports []PortConfig   `json:",omitempty"`
+}
+
+// ResolutionMode represents a resolution mode.
+type ResolutionMode string
+
+const (
+	// ResolutionModeVIP VIP
+	ResolutionModeVIP ResolutionMode = "vip"
+	// ResolutionModeDNSRR DNSRR
+	ResolutionModeDNSRR ResolutionMode = "dnsrr"
+)
+
+// PortConfig represents the config of a port.
+type PortConfig struct {
+	Name     string             `json:",omitempty"`
+	Protocol PortConfigProtocol `json:",omitempty"`
+	// TargetPort is the port inside the container
+	TargetPort uint32 `json:",omitempty"`
+	// PublishedPort is the port on the swarm hosts
+	PublishedPort uint32 `json:",omitempty"`
+	// PublishMode is the mode in which port is published
+	PublishMode PortConfigPublishMode `json:",omitempty"`
+}
+
+// PortConfigPublishMode represents the mode in which the port is to
+// be published.
+type PortConfigPublishMode string
+
+const (
+	// PortConfigPublishModeIngress is used for ports published
+	// for ingress load balancing using routing mesh.
+	PortConfigPublishModeIngress PortConfigPublishMode = "ingress"
+	// PortConfigPublishModeHost is used for ports published
+	// for direct host level access on the host where the task is running.
+	PortConfigPublishModeHost PortConfigPublishMode = "host"
+)
+
+// PortConfigProtocol represents the protocol of a port.
+type PortConfigProtocol string
+
+const (
+	// TODO(stevvooe): These should be used generally, not just for PortConfig.
+
+	// PortConfigProtocolTCP TCP
+	PortConfigProtocolTCP PortConfigProtocol = "tcp"
+	// PortConfigProtocolUDP UDP
+	PortConfigProtocolUDP PortConfigProtocol = "udp"
+	// PortConfigProtocolSCTP SCTP
+	PortConfigProtocolSCTP PortConfigProtocol = "sctp"
+)
+
+// EndpointVirtualIP represents the virtual IP assigned to an endpoint on a network.
+type EndpointVirtualIP struct {
+	NetworkID string `json:",omitempty"`
+	Addr      string `json:",omitempty"`
+}
+
+// Network represents a network.
+type Network struct {
+	ID string
+	Meta
+	Spec        NetworkSpec  `json:",omitempty"`
+	DriverState Driver       `json:",omitempty"`
+	IPAMOptions *IPAMOptions `json:",omitempty"`
+}
+
+// NetworkSpec represents the spec of a network.
+type NetworkSpec struct {
+	Annotations
+	DriverConfiguration *Driver                  `json:",omitempty"`
+	IPv6Enabled         bool                     `json:",omitempty"`
+	Internal            bool                     `json:",omitempty"`
+	Attachable          bool                     `json:",omitempty"`
+	Ingress             bool                     `json:",omitempty"`
+	IPAMOptions         *IPAMOptions             `json:",omitempty"`
+	ConfigFrom          *network.ConfigReference `json:",omitempty"`
+	Scope               string                   `json:",omitempty"`
+}
+
+// NetworkAttachmentConfig represents the configuration of a network attachment.
+type NetworkAttachmentConfig struct {
+	Target     string            `json:",omitempty"`
+	Aliases    []string          `json:",omitempty"`
+	DriverOpts map[string]string `json:",omitempty"`
+}
+
+// NetworkAttachment represents a network attachment.
+type NetworkAttachment struct {
+	Network   Network  `json:",omitempty"`
+	Addresses []string `json:",omitempty"`
+}
+
+// IPAMOptions represents ipam options.
+type IPAMOptions struct {
+	Driver  Driver       `json:",omitempty"`
+	Configs []IPAMConfig `json:",omitempty"`
+}
+
+// IPAMConfig represents ipam configuration.
+type IPAMConfig struct {
+	Subnet  string `json:",omitempty"`
+	Range   string `json:",omitempty"`
+	Gateway string `json:",omitempty"`
+}
diff --git a/vendor/github.com/docker/docker/api/types/swarm/node.go b/vendor/github.com/docker/docker/api/types/swarm/node.go
new file mode 100644
index 0000000000000..1e30f5fa10ddb
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/swarm/node.go
@@ -0,0 +1,115 @@
+package swarm // import "github.com/docker/docker/api/types/swarm"
+
+// Node represents a node.
+type Node struct {
+	ID string
+	Meta
+	// Spec defines the desired state of the node as specified by the user.
+	// The system will honor this and will *never* modify it.
+	Spec NodeSpec `json:",omitempty"`
+	// Description encapsulates the properties of the Node as reported by the
+	// agent.
+	Description NodeDescription `json:",omitempty"`
+	// Status provides the current status of the node, as seen by the manager.
+	Status NodeStatus `json:",omitempty"`
+	// ManagerStatus provides the current status of the node's manager
+	// component, if the node is a manager.
+	ManagerStatus *ManagerStatus `json:",omitempty"`
+}
+
+// NodeSpec represents the spec of a node.
+type NodeSpec struct {
+	Annotations
+	Role         NodeRole         `json:",omitempty"`
+	Availability NodeAvailability `json:",omitempty"`
+}
+
+// NodeRole represents the role of a node.
+type NodeRole string
+
+const (
+	// NodeRoleWorker WORKER
+	NodeRoleWorker NodeRole = "worker"
+	// NodeRoleManager MANAGER
+	NodeRoleManager NodeRole = "manager"
+)
+
+// NodeAvailability represents the availability of a node.
+type NodeAvailability string
+
+const (
+	// NodeAvailabilityActive ACTIVE
+	NodeAvailabilityActive NodeAvailability = "active"
+	// NodeAvailabilityPause PAUSE
+	NodeAvailabilityPause NodeAvailability = "pause"
+	// NodeAvailabilityDrain DRAIN
+	NodeAvailabilityDrain NodeAvailability = "drain"
+)
+
+// NodeDescription represents the description of a node.
+type NodeDescription struct {
+	Hostname  string            `json:",omitempty"`
+	Platform  Platform          `json:",omitempty"`
+	Resources Resources         `json:",omitempty"`
+	Engine    EngineDescription `json:",omitempty"`
+	TLSInfo   TLSInfo           `json:",omitempty"`
+}
+
+// Platform represents the platform (Arch/OS).
+type Platform struct {
+	Architecture string `json:",omitempty"`
+	OS           string `json:",omitempty"`
+}
+
+// EngineDescription represents the description of an engine.
+type EngineDescription struct {
+	EngineVersion string              `json:",omitempty"`
+	Labels        map[string]string   `json:",omitempty"`
+	Plugins       []PluginDescription `json:",omitempty"`
+}
+
+// PluginDescription represents the description of an engine plugin.
+type PluginDescription struct {
+	Type string `json:",omitempty"`
+	Name string `json:",omitempty"`
+}
+
+// NodeStatus represents the status of a node.
+type NodeStatus struct {
+	State   NodeState `json:",omitempty"`
+	Message string    `json:",omitempty"`
+	Addr    string    `json:",omitempty"`
+}
+
+// Reachability represents the reachability of a node.
+type Reachability string
+
+const (
+	// ReachabilityUnknown UNKNOWN
+	ReachabilityUnknown Reachability = "unknown"
+	// ReachabilityUnreachable UNREACHABLE
+	ReachabilityUnreachable Reachability = "unreachable"
+	// ReachabilityReachable REACHABLE
+	ReachabilityReachable Reachability = "reachable"
+)
+
+// ManagerStatus represents the status of a manager.
+type ManagerStatus struct {
+	Leader       bool         `json:",omitempty"`
+	Reachability Reachability `json:",omitempty"`
+	Addr         string       `json:",omitempty"`
+}
+
+// NodeState represents the state of a node.
+type NodeState string
+
+const (
+	// NodeStateUnknown UNKNOWN
+	NodeStateUnknown NodeState = "unknown"
+	// NodeStateDown DOWN
+	NodeStateDown NodeState = "down"
+	// NodeStateReady READY
+	NodeStateReady NodeState = "ready"
+	// NodeStateDisconnected DISCONNECTED
+	NodeStateDisconnected NodeState = "disconnected"
+)
diff --git a/vendor/github.com/docker/docker/api/types/swarm/runtime.go b/vendor/github.com/docker/docker/api/types/swarm/runtime.go
new file mode 100644
index 0000000000000..0c77403ccff93
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/swarm/runtime.go
@@ -0,0 +1,27 @@
+package swarm // import "github.com/docker/docker/api/types/swarm"
+
+// RuntimeType is the type of runtime used for the TaskSpec
+type RuntimeType string
+
+// RuntimeURL is the proto type url
+type RuntimeURL string
+
+const (
+	// RuntimeContainer is the container based runtime
+	RuntimeContainer RuntimeType = "container"
+	// RuntimePlugin is the plugin based runtime
+	RuntimePlugin RuntimeType = "plugin"
+	// RuntimeNetworkAttachment is the network attachment runtime
+	RuntimeNetworkAttachment RuntimeType = "attachment"
+
+	// RuntimeURLContainer is the proto url for the container type
+	RuntimeURLContainer RuntimeURL = "types.docker.com/RuntimeContainer"
+	// RuntimeURLPlugin is the proto url for the plugin type
+	RuntimeURLPlugin RuntimeURL = "types.docker.com/RuntimePlugin"
+)
+
+// NetworkAttachmentSpec represents the runtime spec type for network
+// attachment tasks
+type NetworkAttachmentSpec struct {
+	ContainerID string
+}
diff --git a/vendor/github.com/docker/docker/api/types/swarm/runtime/gen.go b/vendor/github.com/docker/docker/api/types/swarm/runtime/gen.go
new file mode 100644
index 0000000000000..98c2806c31dc4
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/swarm/runtime/gen.go
@@ -0,0 +1,3 @@
+//go:generate protoc -I . --gogofast_out=import_path=github.com/docker/docker/api/types/swarm/runtime:. plugin.proto
+
+package runtime // import "github.com/docker/docker/api/types/swarm/runtime"
diff --git a/vendor/github.com/docker/docker/api/types/swarm/runtime/plugin.pb.go b/vendor/github.com/docker/docker/api/types/swarm/runtime/plugin.pb.go
new file mode 100644
index 0000000000000..e45045866a6ea
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/swarm/runtime/plugin.pb.go
@@ -0,0 +1,754 @@
+// Code generated by protoc-gen-gogo. DO NOT EDIT.
+// source: plugin.proto
+
+/*
+	Package runtime is a generated protocol buffer package.
+
+	It is generated from these files:
+		plugin.proto
+
+	It has these top-level messages:
+		PluginSpec
+		PluginPrivilege
+*/
+package runtime
+
+import proto "github.com/gogo/protobuf/proto"
+import fmt "fmt"
+import math "math"
+
+import io "io"
+
+// Reference imports to suppress errors if they are not otherwise used.
+var _ = proto.Marshal
+var _ = fmt.Errorf
+var _ = math.Inf
+
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the proto package it is being compiled against.
+// A compilation error at this line likely means your copy of the
+// proto package needs to be updated.
+const _ = proto.GoGoProtoPackageIsVersion2 // please upgrade the proto package
+
+// PluginSpec defines the base payload which clients can specify for creating
+// a service with the plugin runtime.
+type PluginSpec struct {
+	Name       string             `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
+	Remote     string             `protobuf:"bytes,2,opt,name=remote,proto3" json:"remote,omitempty"`
+	Privileges []*PluginPrivilege `protobuf:"bytes,3,rep,name=privileges" json:"privileges,omitempty"`
+	Disabled   bool               `protobuf:"varint,4,opt,name=disabled,proto3" json:"disabled,omitempty"`
+	Env        []string           `protobuf:"bytes,5,rep,name=env" json:"env,omitempty"`
+}
+
+func (m *PluginSpec) Reset()                    { *m = PluginSpec{} }
+func (m *PluginSpec) String() string            { return proto.CompactTextString(m) }
+func (*PluginSpec) ProtoMessage()               {}
+func (*PluginSpec) Descriptor() ([]byte, []int) { return fileDescriptorPlugin, []int{0} }
+
+func (m *PluginSpec) GetName() string {
+	if m != nil {
+		return m.Name
+	}
+	return ""
+}
+
+func (m *PluginSpec) GetRemote() string {
+	if m != nil {
+		return m.Remote
+	}
+	return ""
+}
+
+func (m *PluginSpec) GetPrivileges() []*PluginPrivilege {
+	if m != nil {
+		return m.Privileges
+	}
+	return nil
+}
+
+func (m *PluginSpec) GetDisabled() bool {
+	if m != nil {
+		return m.Disabled
+	}
+	return false
+}
+
+func (m *PluginSpec) GetEnv() []string {
+	if m != nil {
+		return m.Env
+	}
+	return nil
+}
+
+// PluginPrivilege describes a permission the user has to accept
+// upon installing a plugin.
+type PluginPrivilege struct {
+	Name        string   `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
+	Description string   `protobuf:"bytes,2,opt,name=description,proto3" json:"description,omitempty"`
+	Value       []string `protobuf:"bytes,3,rep,name=value" json:"value,omitempty"`
+}
+
+func (m *PluginPrivilege) Reset()                    { *m = PluginPrivilege{} }
+func (m *PluginPrivilege) String() string            { return proto.CompactTextString(m) }
+func (*PluginPrivilege) ProtoMessage()               {}
+func (*PluginPrivilege) Descriptor() ([]byte, []int) { return fileDescriptorPlugin, []int{1} }
+
+func (m *PluginPrivilege) GetName() string {
+	if m != nil {
+		return m.Name
+	}
+	return ""
+}
+
+func (m *PluginPrivilege) GetDescription() string {
+	if m != nil {
+		return m.Description
+	}
+	return ""
+}
+
+func (m *PluginPrivilege) GetValue() []string {
+	if m != nil {
+		return m.Value
+	}
+	return nil
+}
+
+func init() {
+	proto.RegisterType((*PluginSpec)(nil), "PluginSpec")
+	proto.RegisterType((*PluginPrivilege)(nil), "PluginPrivilege")
+}
+func (m *PluginSpec) Marshal() (dAtA []byte, err error) {
+	size := m.Size()
+	dAtA = make([]byte, size)
+	n, err := m.MarshalTo(dAtA)
+	if err != nil {
+		return nil, err
+	}
+	return dAtA[:n], nil
+}
+
+func (m *PluginSpec) MarshalTo(dAtA []byte) (int, error) {
+	var i int
+	_ = i
+	var l int
+	_ = l
+	if len(m.Name) > 0 {
+		dAtA[i] = 0xa
+		i++
+		i = encodeVarintPlugin(dAtA, i, uint64(len(m.Name)))
+		i += copy(dAtA[i:], m.Name)
+	}
+	if len(m.Remote) > 0 {
+		dAtA[i] = 0x12
+		i++
+		i = encodeVarintPlugin(dAtA, i, uint64(len(m.Remote)))
+		i += copy(dAtA[i:], m.Remote)
+	}
+	if len(m.Privileges) > 0 {
+		for _, msg := range m.Privileges {
+			dAtA[i] = 0x1a
+			i++
+			i = encodeVarintPlugin(dAtA, i, uint64(msg.Size()))
+			n, err := msg.MarshalTo(dAtA[i:])
+			if err != nil {
+				return 0, err
+			}
+			i += n
+		}
+	}
+	if m.Disabled {
+		dAtA[i] = 0x20
+		i++
+		if m.Disabled {
+			dAtA[i] = 1
+		} else {
+			dAtA[i] = 0
+		}
+		i++
+	}
+	if len(m.Env) > 0 {
+		for _, s := range m.Env {
+			dAtA[i] = 0x2a
+			i++
+			l = len(s)
+			for l >= 1<<7 {
+				dAtA[i] = uint8(uint64(l)&0x7f | 0x80)
+				l >>= 7
+				i++
+			}
+			dAtA[i] = uint8(l)
+			i++
+			i += copy(dAtA[i:], s)
+		}
+	}
+	return i, nil
+}
+
+func (m *PluginPrivilege) Marshal() (dAtA []byte, err error) {
+	size := m.Size()
+	dAtA = make([]byte, size)
+	n, err := m.MarshalTo(dAtA)
+	if err != nil {
+		return nil, err
+	}
+	return dAtA[:n], nil
+}
+
+func (m *PluginPrivilege) MarshalTo(dAtA []byte) (int, error) {
+	var i int
+	_ = i
+	var l int
+	_ = l
+	if len(m.Name) > 0 {
+		dAtA[i] = 0xa
+		i++
+		i = encodeVarintPlugin(dAtA, i, uint64(len(m.Name)))
+		i += copy(dAtA[i:], m.Name)
+	}
+	if len(m.Description) > 0 {
+		dAtA[i] = 0x12
+		i++
+		i = encodeVarintPlugin(dAtA, i, uint64(len(m.Description)))
+		i += copy(dAtA[i:], m.Description)
+	}
+	if len(m.Value) > 0 {
+		for _, s := range m.Value {
+			dAtA[i] = 0x1a
+			i++
+			l = len(s)
+			for l >= 1<<7 {
+				dAtA[i] = uint8(uint64(l)&0x7f | 0x80)
+				l >>= 7
+				i++
+			}
+			dAtA[i] = uint8(l)
+			i++
+			i += copy(dAtA[i:], s)
+		}
+	}
+	return i, nil
+}
+
+func encodeVarintPlugin(dAtA []byte, offset int, v uint64) int {
+	for v >= 1<<7 {
+		dAtA[offset] = uint8(v&0x7f | 0x80)
+		v >>= 7
+		offset++
+	}
+	dAtA[offset] = uint8(v)
+	return offset + 1
+}
+func (m *PluginSpec) Size() (n int) {
+	var l int
+	_ = l
+	l = len(m.Name)
+	if l > 0 {
+		n += 1 + l + sovPlugin(uint64(l))
+	}
+	l = len(m.Remote)
+	if l > 0 {
+		n += 1 + l + sovPlugin(uint64(l))
+	}
+	if len(m.Privileges) > 0 {
+		for _, e := range m.Privileges {
+			l = e.Size()
+			n += 1 + l + sovPlugin(uint64(l))
+		}
+	}
+	if m.Disabled {
+		n += 2
+	}
+	if len(m.Env) > 0 {
+		for _, s := range m.Env {
+			l = len(s)
+			n += 1 + l + sovPlugin(uint64(l))
+		}
+	}
+	return n
+}
+
+func (m *PluginPrivilege) Size() (n int) {
+	var l int
+	_ = l
+	l = len(m.Name)
+	if l > 0 {
+		n += 1 + l + sovPlugin(uint64(l))
+	}
+	l = len(m.Description)
+	if l > 0 {
+		n += 1 + l + sovPlugin(uint64(l))
+	}
+	if len(m.Value) > 0 {
+		for _, s := range m.Value {
+			l = len(s)
+			n += 1 + l + sovPlugin(uint64(l))
+		}
+	}
+	return n
+}
+
+func sovPlugin(x uint64) (n int) {
+	for {
+		n++
+		x >>= 7
+		if x == 0 {
+			break
+		}
+	}
+	return n
+}
+func sozPlugin(x uint64) (n int) {
+	return sovPlugin(uint64((x << 1) ^ uint64((int64(x) >> 63))))
+}
+func (m *PluginSpec) Unmarshal(dAtA []byte) error {
+	l := len(dAtA)
+	iNdEx := 0
+	for iNdEx < l {
+		preIndex := iNdEx
+		var wire uint64
+		for shift := uint(0); ; shift += 7 {
+			if shift >= 64 {
+				return ErrIntOverflowPlugin
+			}
+			if iNdEx >= l {
+				return io.ErrUnexpectedEOF
+			}
+			b := dAtA[iNdEx]
+			iNdEx++
+			wire |= (uint64(b) & 0x7F) << shift
+			if b < 0x80 {
+				break
+			}
+		}
+		fieldNum := int32(wire >> 3)
+		wireType := int(wire & 0x7)
+		if wireType == 4 {
+			return fmt.Errorf("proto: PluginSpec: wiretype end group for non-group")
+		}
+		if fieldNum <= 0 {
+			return fmt.Errorf("proto: PluginSpec: illegal tag %d (wire type %d)", fieldNum, wire)
+		}
+		switch fieldNum {
+		case 1:
+			if wireType != 2 {
+				return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType)
+			}
+			var stringLen uint64
+			for shift := uint(0); ; shift += 7 {
+				if shift >= 64 {
+					return ErrIntOverflowPlugin
+				}
+				if iNdEx >= l {
+					return io.ErrUnexpectedEOF
+				}
+				b := dAtA[iNdEx]
+				iNdEx++
+				stringLen |= (uint64(b) & 0x7F) << shift
+				if b < 0x80 {
+					break
+				}
+			}
+			intStringLen := int(stringLen)
+			if intStringLen < 0 {
+				return ErrInvalidLengthPlugin
+			}
+			postIndex := iNdEx + intStringLen
+			if postIndex > l {
+				return io.ErrUnexpectedEOF
+			}
+			m.Name = string(dAtA[iNdEx:postIndex])
+			iNdEx = postIndex
+		case 2:
+			if wireType != 2 {
+				return fmt.Errorf("proto: wrong wireType = %d for field Remote", wireType)
+			}
+			var stringLen uint64
+			for shift := uint(0); ; shift += 7 {
+				if shift >= 64 {
+					return ErrIntOverflowPlugin
+				}
+				if iNdEx >= l {
+					return io.ErrUnexpectedEOF
+				}
+				b := dAtA[iNdEx]
+				iNdEx++
+				stringLen |= (uint64(b) & 0x7F) << shift
+				if b < 0x80 {
+					break
+				}
+			}
+			intStringLen := int(stringLen)
+			if intStringLen < 0 {
+				return ErrInvalidLengthPlugin
+			}
+			postIndex := iNdEx + intStringLen
+			if postIndex > l {
+				return io.ErrUnexpectedEOF
+			}
+			m.Remote = string(dAtA[iNdEx:postIndex])
+			iNdEx = postIndex
+		case 3:
+			if wireType != 2 {
+				return fmt.Errorf("proto: wrong wireType = %d for field Privileges", wireType)
+			}
+			var msglen int
+			for shift := uint(0); ; shift += 7 {
+				if shift >= 64 {
+					return ErrIntOverflowPlugin
+				}
+				if iNdEx >= l {
+					return io.ErrUnexpectedEOF
+				}
+				b := dAtA[iNdEx]
+				iNdEx++
+				msglen |= (int(b) & 0x7F) << shift
+				if b < 0x80 {
+					break
+				}
+			}
+			if msglen < 0 {
+				return ErrInvalidLengthPlugin
+			}
+			postIndex := iNdEx + msglen
+			if postIndex > l {
+				return io.ErrUnexpectedEOF
+			}
+			m.Privileges = append(m.Privileges, &PluginPrivilege{})
+			if err := m.Privileges[len(m.Privileges)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+				return err
+			}
+			iNdEx = postIndex
+		case 4:
+			if wireType != 0 {
+				return fmt.Errorf("proto: wrong wireType = %d for field Disabled", wireType)
+			}
+			var v int
+			for shift := uint(0); ; shift += 7 {
+				if shift >= 64 {
+					return ErrIntOverflowPlugin
+				}
+				if iNdEx >= l {
+					return io.ErrUnexpectedEOF
+				}
+				b := dAtA[iNdEx]
+				iNdEx++
+				v |= (int(b) & 0x7F) << shift
+				if b < 0x80 {
+					break
+				}
+			}
+			m.Disabled = bool(v != 0)
+		case 5:
+			if wireType != 2 {
+				return fmt.Errorf("proto: wrong wireType = %d for field Env", wireType)
+			}
+			var stringLen uint64
+			for shift := uint(0); ; shift += 7 {
+				if shift >= 64 {
+					return ErrIntOverflowPlugin
+				}
+				if iNdEx >= l {
+					return io.ErrUnexpectedEOF
+				}
+				b := dAtA[iNdEx]
+				iNdEx++
+				stringLen |= (uint64(b) & 0x7F) << shift
+				if b < 0x80 {
+					break
+				}
+			}
+			intStringLen := int(stringLen)
+			if intStringLen < 0 {
+				return ErrInvalidLengthPlugin
+			}
+			postIndex := iNdEx + intStringLen
+			if postIndex > l {
+				return io.ErrUnexpectedEOF
+			}
+			m.Env = append(m.Env, string(dAtA[iNdEx:postIndex]))
+			iNdEx = postIndex
+		default:
+			iNdEx = preIndex
+			skippy, err := skipPlugin(dAtA[iNdEx:])
+			if err != nil {
+				return err
+			}
+			if skippy < 0 {
+				return ErrInvalidLengthPlugin
+			}
+			if (iNdEx + skippy) > l {
+				return io.ErrUnexpectedEOF
+			}
+			iNdEx += skippy
+		}
+	}
+
+	if iNdEx > l {
+		return io.ErrUnexpectedEOF
+	}
+	return nil
+}
+func (m *PluginPrivilege) Unmarshal(dAtA []byte) error {
+	l := len(dAtA)
+	iNdEx := 0
+	for iNdEx < l {
+		preIndex := iNdEx
+		var wire uint64
+		for shift := uint(0); ; shift += 7 {
+			if shift >= 64 {
+				return ErrIntOverflowPlugin
+			}
+			if iNdEx >= l {
+				return io.ErrUnexpectedEOF
+			}
+			b := dAtA[iNdEx]
+			iNdEx++
+			wire |= (uint64(b) & 0x7F) << shift
+			if b < 0x80 {
+				break
+			}
+		}
+		fieldNum := int32(wire >> 3)
+		wireType := int(wire & 0x7)
+		if wireType == 4 {
+			return fmt.Errorf("proto: PluginPrivilege: wiretype end group for non-group")
+		}
+		if fieldNum <= 0 {
+			return fmt.Errorf("proto: PluginPrivilege: illegal tag %d (wire type %d)", fieldNum, wire)
+		}
+		switch fieldNum {
+		case 1:
+			if wireType != 2 {
+				return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType)
+			}
+			var stringLen uint64
+			for shift := uint(0); ; shift += 7 {
+				if shift >= 64 {
+					return ErrIntOverflowPlugin
+				}
+				if iNdEx >= l {
+					return io.ErrUnexpectedEOF
+				}
+				b := dAtA[iNdEx]
+				iNdEx++
+				stringLen |= (uint64(b) & 0x7F) << shift
+				if b < 0x80 {
+					break
+				}
+			}
+			intStringLen := int(stringLen)
+			if intStringLen < 0 {
+				return ErrInvalidLengthPlugin
+			}
+			postIndex := iNdEx + intStringLen
+			if postIndex > l {
+				return io.ErrUnexpectedEOF
+			}
+			m.Name = string(dAtA[iNdEx:postIndex])
+			iNdEx = postIndex
+		case 2:
+			if wireType != 2 {
+				return fmt.Errorf("proto: wrong wireType = %d for field Description", wireType)
+			}
+			var stringLen uint64
+			for shift := uint(0); ; shift += 7 {
+				if shift >= 64 {
+					return ErrIntOverflowPlugin
+				}
+				if iNdEx >= l {
+					return io.ErrUnexpectedEOF
+				}
+				b := dAtA[iNdEx]
+				iNdEx++
+				stringLen |= (uint64(b) & 0x7F) << shift
+				if b < 0x80 {
+					break
+				}
+			}
+			intStringLen := int(stringLen)
+			if intStringLen < 0 {
+				return ErrInvalidLengthPlugin
+			}
+			postIndex := iNdEx + intStringLen
+			if postIndex > l {
+				return io.ErrUnexpectedEOF
+			}
+			m.Description = string(dAtA[iNdEx:postIndex])
+			iNdEx = postIndex
+		case 3:
+			if wireType != 2 {
+				return fmt.Errorf("proto: wrong wireType = %d for field Value", wireType)
+			}
+			var stringLen uint64
+			for shift := uint(0); ; shift += 7 {
+				if shift >= 64 {
+					return ErrIntOverflowPlugin
+				}
+				if iNdEx >= l {
+					return io.ErrUnexpectedEOF
+				}
+				b := dAtA[iNdEx]
+				iNdEx++
+				stringLen |= (uint64(b) & 0x7F) << shift
+				if b < 0x80 {
+					break
+				}
+			}
+			intStringLen := int(stringLen)
+			if intStringLen < 0 {
+				return ErrInvalidLengthPlugin
+			}
+			postIndex := iNdEx + intStringLen
+			if postIndex > l {
+				return io.ErrUnexpectedEOF
+			}
+			m.Value = append(m.Value, string(dAtA[iNdEx:postIndex]))
+			iNdEx = postIndex
+		default:
+			iNdEx = preIndex
+			skippy, err := skipPlugin(dAtA[iNdEx:])
+			if err != nil {
+				return err
+			}
+			if skippy < 0 {
+				return ErrInvalidLengthPlugin
+			}
+			if (iNdEx + skippy) > l {
+				return io.ErrUnexpectedEOF
+			}
+			iNdEx += skippy
+		}
+	}
+
+	if iNdEx > l {
+		return io.ErrUnexpectedEOF
+	}
+	return nil
+}
+func skipPlugin(dAtA []byte) (n int, err error) {
+	l := len(dAtA)
+	iNdEx := 0
+	for iNdEx < l {
+		var wire uint64
+		for shift := uint(0); ; shift += 7 {
+			if shift >= 64 {
+				return 0, ErrIntOverflowPlugin
+			}
+			if iNdEx >= l {
+				return 0, io.ErrUnexpectedEOF
+			}
+			b := dAtA[iNdEx]
+			iNdEx++
+			wire |= (uint64(b) & 0x7F) << shift
+			if b < 0x80 {
+				break
+			}
+		}
+		wireType := int(wire & 0x7)
+		switch wireType {
+		case 0:
+			for shift := uint(0); ; shift += 7 {
+				if shift >= 64 {
+					return 0, ErrIntOverflowPlugin
+				}
+				if iNdEx >= l {
+					return 0, io.ErrUnexpectedEOF
+				}
+				iNdEx++
+				if dAtA[iNdEx-1] < 0x80 {
+					break
+				}
+			}
+			return iNdEx, nil
+		case 1:
+			iNdEx += 8
+			return iNdEx, nil
+		case 2:
+			var length int
+			for shift := uint(0); ; shift += 7 {
+				if shift >= 64 {
+					return 0, ErrIntOverflowPlugin
+				}
+				if iNdEx >= l {
+					return 0, io.ErrUnexpectedEOF
+				}
+				b := dAtA[iNdEx]
+				iNdEx++
+				length |= (int(b) & 0x7F) << shift
+				if b < 0x80 {
+					break
+				}
+			}
+			iNdEx += length
+			if length < 0 {
+				return 0, ErrInvalidLengthPlugin
+			}
+			return iNdEx, nil
+		case 3:
+			for {
+				var innerWire uint64
+				var start int = iNdEx
+				for shift := uint(0); ; shift += 7 {
+					if shift >= 64 {
+						return 0, ErrIntOverflowPlugin
+					}
+					if iNdEx >= l {
+						return 0, io.ErrUnexpectedEOF
+					}
+					b := dAtA[iNdEx]
+					iNdEx++
+					innerWire |= (uint64(b) & 0x7F) << shift
+					if b < 0x80 {
+						break
+					}
+				}
+				innerWireType := int(innerWire & 0x7)
+				if innerWireType == 4 {
+					break
+				}
+				next, err := skipPlugin(dAtA[start:])
+				if err != nil {
+					return 0, err
+				}
+				iNdEx = start + next
+			}
+			return iNdEx, nil
+		case 4:
+			return iNdEx, nil
+		case 5:
+			iNdEx += 4
+			return iNdEx, nil
+		default:
+			return 0, fmt.Errorf("proto: illegal wireType %d", wireType)
+		}
+	}
+	panic("unreachable")
+}
+
+var (
+	ErrInvalidLengthPlugin = fmt.Errorf("proto: negative length found during unmarshaling")
+	ErrIntOverflowPlugin   = fmt.Errorf("proto: integer overflow")
+)
+
+func init() { proto.RegisterFile("plugin.proto", fileDescriptorPlugin) }
+
+var fileDescriptorPlugin = []byte{
+	// 256 bytes of a gzipped FileDescriptorProto
+	0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x6c, 0x90, 0x4d, 0x4b, 0xc3, 0x30,
+	0x18, 0xc7, 0x89, 0xdd, 0xc6, 0xfa, 0x4c, 0x70, 0x04, 0x91, 0xe2, 0xa1, 0x94, 0x9d, 0x7a, 0x6a,
+	0x45, 0x2f, 0x82, 0x37, 0x0f, 0x9e, 0x47, 0xbc, 0x09, 0x1e, 0xd2, 0xf6, 0xa1, 0x06, 0x9b, 0x17,
+	0x92, 0xb4, 0xe2, 0x37, 0xf1, 0x23, 0x79, 0xf4, 0x23, 0x48, 0x3f, 0x89, 0x98, 0x75, 0x32, 0x64,
+	0xa7, 0xff, 0x4b, 0xc2, 0x9f, 0x1f, 0x0f, 0x9c, 0x9a, 0xae, 0x6f, 0x85, 0x2a, 0x8c, 0xd5, 0x5e,
+	0x6f, 0x3e, 0x08, 0xc0, 0x36, 0x14, 0x8f, 0x06, 0x6b, 0x4a, 0x61, 0xa6, 0xb8, 0xc4, 0x84, 0x64,
+	0x24, 0x8f, 0x59, 0xf0, 0xf4, 0x02, 0x16, 0x16, 0xa5, 0xf6, 0x98, 0x9c, 0x84, 0x76, 0x4a, 0xf4,
+	0x0a, 0xc0, 0x58, 0x31, 0x88, 0x0e, 0x5b, 0x74, 0x49, 0x94, 0x45, 0xf9, 0xea, 0x7a, 0x5d, 0xec,
+	0xc6, 0xb6, 0xfb, 0x07, 0x76, 0xf0, 0x87, 0x5e, 0xc2, 0xb2, 0x11, 0x8e, 0x57, 0x1d, 0x36, 0xc9,
+	0x2c, 0x23, 0xf9, 0x92, 0xfd, 0x65, 0xba, 0x86, 0x08, 0xd5, 0x90, 0xcc, 0xb3, 0x28, 0x8f, 0xd9,
+	0xaf, 0xdd, 0x3c, 0xc3, 0xd9, 0xbf, 0xb1, 0xa3, 0x78, 0x19, 0xac, 0x1a, 0x74, 0xb5, 0x15, 0xc6,
+	0x0b, 0xad, 0x26, 0xc6, 0xc3, 0x8a, 0x9e, 0xc3, 0x7c, 0xe0, 0x5d, 0x8f, 0x81, 0x31, 0x66, 0xbb,
+	0x70, 0xff, 0xf0, 0x39, 0xa6, 0xe4, 0x6b, 0x4c, 0xc9, 0xf7, 0x98, 0x92, 0xa7, 0xdb, 0x56, 0xf8,
+	0x97, 0xbe, 0x2a, 0x6a, 0x2d, 0xcb, 0x46, 0xd7, 0xaf, 0x68, 0xf7, 0xc2, 0x8d, 0x28, 0xfd, 0xbb,
+	0x41, 0x57, 0xba, 0x37, 0x6e, 0x65, 0x69, 0x7b, 0xe5, 0x85, 0xc4, 0xbb, 0x49, 0xab, 0x45, 0x38,
+	0xe4, 0xcd, 0x4f, 0x00, 0x00, 0x00, 0xff, 0xff, 0x99, 0xa8, 0xd9, 0x9b, 0x58, 0x01, 0x00, 0x00,
+}
diff --git a/vendor/github.com/docker/docker/api/types/swarm/runtime/plugin.proto b/vendor/github.com/docker/docker/api/types/swarm/runtime/plugin.proto
new file mode 100644
index 0000000000000..9ef169046b4fa
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/swarm/runtime/plugin.proto
@@ -0,0 +1,21 @@
+syntax = "proto3";
+
+option go_package = "github.com/docker/docker/api/types/swarm/runtime;runtime";
+
+// PluginSpec defines the base payload which clients can specify for creating
+// a service with the plugin runtime.
+message PluginSpec {
+	string name = 1;
+	string remote = 2;
+	repeated PluginPrivilege privileges = 3;
+	bool disabled = 4;
+	repeated string env = 5;
+}
+
+// PluginPrivilege describes a permission the user has to accept
+// upon installing a plugin.
+message PluginPrivilege {
+	string name = 1;
+	string description = 2;
+	repeated string value = 3;
+}
diff --git a/vendor/github.com/docker/docker/api/types/swarm/secret.go b/vendor/github.com/docker/docker/api/types/swarm/secret.go
new file mode 100644
index 0000000000000..d5213ec981c3d
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/swarm/secret.go
@@ -0,0 +1,36 @@
+package swarm // import "github.com/docker/docker/api/types/swarm"
+
+import "os"
+
+// Secret represents a secret.
+type Secret struct {
+	ID string
+	Meta
+	Spec SecretSpec
+}
+
+// SecretSpec represents a secret specification from a secret in swarm
+type SecretSpec struct {
+	Annotations
+	Data   []byte  `json:",omitempty"`
+	Driver *Driver `json:",omitempty"` // name of the secrets driver used to fetch the secret's value from an external secret store
+
+	// Templating controls whether and how to evaluate the secret payload as
+	// a template. If it is not set, no templating is used.
+	Templating *Driver `json:",omitempty"`
+}
+
+// SecretReferenceFileTarget is a file target in a secret reference
+type SecretReferenceFileTarget struct {
+	Name string
+	UID  string
+	GID  string
+	Mode os.FileMode
+}
+
+// SecretReference is a reference to a secret in swarm
+type SecretReference struct {
+	File       *SecretReferenceFileTarget
+	SecretID   string
+	SecretName string
+}
diff --git a/vendor/github.com/docker/docker/api/types/swarm/service.go b/vendor/github.com/docker/docker/api/types/swarm/service.go
new file mode 100644
index 0000000000000..6eb452d24d122
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/swarm/service.go
@@ -0,0 +1,202 @@
+package swarm // import "github.com/docker/docker/api/types/swarm"
+
+import "time"
+
+// Service represents a service.
+type Service struct {
+	ID string
+	Meta
+	Spec         ServiceSpec   `json:",omitempty"`
+	PreviousSpec *ServiceSpec  `json:",omitempty"`
+	Endpoint     Endpoint      `json:",omitempty"`
+	UpdateStatus *UpdateStatus `json:",omitempty"`
+
+	// ServiceStatus is an optional, extra field indicating the number of
+	// desired and running tasks. It is provided primarily as a shortcut to
+	// calculating these values client-side, which otherwise would require
+	// listing all tasks for a service, an operation that could be
+	// computationally and network expensive.
+	ServiceStatus *ServiceStatus `json:",omitempty"`
+
+	// JobStatus is the status of a Service which is in one of ReplicatedJob or
+	// GlobalJob modes. It is absent on Replicated and Global services.
+	JobStatus *JobStatus `json:",omitempty"`
+}
+
+// ServiceSpec represents the spec of a service.
+type ServiceSpec struct {
+	Annotations
+
+	// TaskTemplate defines how the service should construct new tasks when
+	// orchestrating this service.
+	TaskTemplate   TaskSpec      `json:",omitempty"`
+	Mode           ServiceMode   `json:",omitempty"`
+	UpdateConfig   *UpdateConfig `json:",omitempty"`
+	RollbackConfig *UpdateConfig `json:",omitempty"`
+
+	// Networks field in ServiceSpec is deprecated. The
+	// same field in TaskSpec should be used instead.
+	// This field will be removed in a future release.
+	Networks     []NetworkAttachmentConfig `json:",omitempty"`
+	EndpointSpec *EndpointSpec             `json:",omitempty"`
+}
+
+// ServiceMode represents the mode of a service.
+type ServiceMode struct {
+	Replicated    *ReplicatedService `json:",omitempty"`
+	Global        *GlobalService     `json:",omitempty"`
+	ReplicatedJob *ReplicatedJob     `json:",omitempty"`
+	GlobalJob     *GlobalJob         `json:",omitempty"`
+}
+
+// UpdateState is the state of a service update.
+type UpdateState string
+
+const (
+	// UpdateStateUpdating is the updating state.
+	UpdateStateUpdating UpdateState = "updating"
+	// UpdateStatePaused is the paused state.
+	UpdateStatePaused UpdateState = "paused"
+	// UpdateStateCompleted is the completed state.
+	UpdateStateCompleted UpdateState = "completed"
+	// UpdateStateRollbackStarted is the state with a rollback in progress.
+	UpdateStateRollbackStarted UpdateState = "rollback_started"
+	// UpdateStateRollbackPaused is the state with a rollback in progress.
+	UpdateStateRollbackPaused UpdateState = "rollback_paused"
+	// UpdateStateRollbackCompleted is the state with a rollback in progress.
+	UpdateStateRollbackCompleted UpdateState = "rollback_completed"
+)
+
+// UpdateStatus reports the status of a service update.
+type UpdateStatus struct {
+	State       UpdateState `json:",omitempty"`
+	StartedAt   *time.Time  `json:",omitempty"`
+	CompletedAt *time.Time  `json:",omitempty"`
+	Message     string      `json:",omitempty"`
+}
+
+// ReplicatedService is a kind of ServiceMode.
+type ReplicatedService struct {
+	Replicas *uint64 `json:",omitempty"`
+}
+
+// GlobalService is a kind of ServiceMode.
+type GlobalService struct{}
+
+// ReplicatedJob is a type of Service which executes defined Tasks
+// in parallel until the specified number of Tasks have succeeded.
+type ReplicatedJob struct {
+	// MaxConcurrent indicates the maximum number of Tasks that should be
+	// executing simultaneously for this job at any given time. There may be
+	// fewer Tasks than MaxConcurrent executing simultaneously; for example, if
+	// there are fewer than MaxConcurrent tasks needed to reach
+	// TotalCompletions.
+	//
+	// If this field is empty, it will default to a max concurrency of 1.
+	MaxConcurrent *uint64 `json:",omitempty"`
+
+	// TotalCompletions is the total number of Tasks desired to run to
+	// completion.
+	//
+	// If this field is empty, the value of MaxConcurrent will be used.
+	TotalCompletions *uint64 `json:",omitempty"`
+}
+
+// GlobalJob is the type of a Service which executes a Task on every Node
+// matching the Service's placement constraints. These tasks run to completion
+// and then exit.
+//
+// This type is deliberately empty.
+type GlobalJob struct{}
+
+const (
+	// UpdateFailureActionPause PAUSE
+	UpdateFailureActionPause = "pause"
+	// UpdateFailureActionContinue CONTINUE
+	UpdateFailureActionContinue = "continue"
+	// UpdateFailureActionRollback ROLLBACK
+	UpdateFailureActionRollback = "rollback"
+
+	// UpdateOrderStopFirst STOP_FIRST
+	UpdateOrderStopFirst = "stop-first"
+	// UpdateOrderStartFirst START_FIRST
+	UpdateOrderStartFirst = "start-first"
+)
+
+// UpdateConfig represents the update configuration.
+type UpdateConfig struct {
+	// Maximum number of tasks to be updated in one iteration.
+	// 0 means unlimited parallelism.
+	Parallelism uint64
+
+	// Amount of time between updates.
+	Delay time.Duration `json:",omitempty"`
+
+	// FailureAction is the action to take when an update fails.
+	FailureAction string `json:",omitempty"`
+
+	// Monitor indicates how long to monitor a task for failure after it is
+	// created. If the task fails by ending up in one of the states
+	// REJECTED, COMPLETED, or FAILED, within Monitor from its creation,
+	// this counts as a failure. If it fails after Monitor, it does not
+	// count as a failure. If Monitor is unspecified, a default value will
+	// be used.
+	Monitor time.Duration `json:",omitempty"`
+
+	// MaxFailureRatio is the fraction of tasks that may fail during
+	// an update before the failure action is invoked. Any task created by
+	// the current update which ends up in one of the states REJECTED,
+	// COMPLETED or FAILED within Monitor from its creation counts as a
+	// failure. The number of failures is divided by the number of tasks
+	// being updated, and if this fraction is greater than
+	// MaxFailureRatio, the failure action is invoked.
+	//
+	// If the failure action is CONTINUE, there is no effect.
+	// If the failure action is PAUSE, no more tasks will be updated until
+	// another update is started.
+	MaxFailureRatio float32
+
+	// Order indicates the order of operations when rolling out an updated
+	// task. Either the old task is shut down before the new task is
+	// started, or the new task is started before the old task is shut down.
+	Order string
+}
+
+// ServiceStatus represents the number of running tasks in a service and the
+// number of tasks desired to be running.
+type ServiceStatus struct {
+	// RunningTasks is the number of tasks for the service actually in the
+	// Running state
+	RunningTasks uint64
+
+	// DesiredTasks is the number of tasks desired to be running by the
+	// service. For replicated services, this is the replica count. For global
+	// services, this is computed by taking the number of tasks with desired
+	// state of not-Shutdown.
+	DesiredTasks uint64
+
+	// CompletedTasks is the number of tasks in the state Completed, if this
+	// service is in ReplicatedJob or GlobalJob mode. This field must be
+	// cross-referenced with the service type, because the default value of 0
+	// may mean that a service is not in a job mode, or it may mean that the
+	// job has yet to complete any tasks.
+	CompletedTasks uint64
+}
+
+// JobStatus is the status of a job-type service.
+type JobStatus struct {
+	// JobIteration is a value increased each time a Job is executed,
+	// successfully or otherwise. "Executed", in this case, means the job as a
+	// whole has been started, not that an individual Task has been launched. A
+	// job is "Executed" when its ServiceSpec is updated. JobIteration can be
+	// used to disambiguate Tasks belonging to different executions of a job.
+	//
+	// Though JobIteration will increase with each subsequent execution, it may
+	// not necessarily increase by 1, and so JobIteration should not be used to
+	// keep track of the number of times a job has been executed.
+	JobIteration Version
+
+	// LastExecution is the time that the job was last executed, as observed by
+	// Swarm manager.
+	LastExecution time.Time `json:",omitempty"`
+}
diff --git a/vendor/github.com/docker/docker/api/types/swarm/swarm.go b/vendor/github.com/docker/docker/api/types/swarm/swarm.go
new file mode 100644
index 0000000000000..b25f9996462e0
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/swarm/swarm.go
@@ -0,0 +1,227 @@
+package swarm // import "github.com/docker/docker/api/types/swarm"
+
+import (
+	"time"
+)
+
+// ClusterInfo represents info about the cluster for outputting in "info".
+// It contains the same information as Swarm, but without the JoinTokens.
+type ClusterInfo struct {
+	ID string
+	Meta
+	Spec                   Spec
+	TLSInfo                TLSInfo
+	RootRotationInProgress bool
+	DefaultAddrPool        []string
+	SubnetSize             uint32
+	DataPathPort           uint32
+}
+
+// Swarm represents a swarm.
+type Swarm struct {
+	ClusterInfo
+	JoinTokens JoinTokens
+}
+
+// JoinTokens contains the tokens workers and managers need to join the swarm.
+type JoinTokens struct {
+	// Worker is the join token workers may use to join the swarm.
+	Worker string
+	// Manager is the join token managers may use to join the swarm.
+	Manager string
+}
+
+// Spec represents the spec of a swarm.
+type Spec struct {
+	Annotations
+
+	Orchestration    OrchestrationConfig `json:",omitempty"`
+	Raft             RaftConfig          `json:",omitempty"`
+	Dispatcher       DispatcherConfig    `json:",omitempty"`
+	CAConfig         CAConfig            `json:",omitempty"`
+	TaskDefaults     TaskDefaults        `json:",omitempty"`
+	EncryptionConfig EncryptionConfig    `json:",omitempty"`
+}
+
+// OrchestrationConfig represents orchestration configuration.
+type OrchestrationConfig struct {
+	// TaskHistoryRetentionLimit is the number of historic tasks to keep per instance or
+	// node. If negative, never remove completed or failed tasks.
+	TaskHistoryRetentionLimit *int64 `json:",omitempty"`
+}
+
+// TaskDefaults parameterizes cluster-level task creation with default values.
+type TaskDefaults struct {
+	// LogDriver selects the log driver to use for tasks created in the
+	// orchestrator if unspecified by a service.
+	//
+	// Updating this value will only have an effect on new tasks. Old tasks
+	// will continue to use their previously configured log driver until
+	// recreated.
+	LogDriver *Driver `json:",omitempty"`
+}
+
+// EncryptionConfig controls at-rest encryption of data and keys.
+type EncryptionConfig struct {
+	// AutoLockManagers specifies whether or not managers' TLS keys and raft data
+	// should be encrypted at rest in such a way that they must be unlocked
+	// before the manager node starts up again.
+	AutoLockManagers bool
+}
+
+// RaftConfig represents raft configuration.
+type RaftConfig struct {
+	// SnapshotInterval is the number of log entries between snapshots.
+	SnapshotInterval uint64 `json:",omitempty"`
+
+	// KeepOldSnapshots is the number of snapshots to keep beyond the
+	// current snapshot.
+	KeepOldSnapshots *uint64 `json:",omitempty"`
+
+	// LogEntriesForSlowFollowers is the number of log entries to keep
+	// around to sync up slow followers after a snapshot is created.
+	LogEntriesForSlowFollowers uint64 `json:",omitempty"`
+
+	// ElectionTick is the number of ticks that a follower will wait for a message
+	// from the leader before becoming a candidate and starting an election.
+	// ElectionTick must be greater than HeartbeatTick.
+	//
+	// A tick currently defaults to one second, so these translate directly to
+	// seconds currently, but this is NOT guaranteed.
+	ElectionTick int
+
+	// HeartbeatTick is the number of ticks between heartbeats. Every
+	// HeartbeatTick ticks, the leader will send a heartbeat to the
+	// followers.
+	//
+	// A tick currently defaults to one second, so these translate directly to
+	// seconds currently, but this is NOT guaranteed.
+	HeartbeatTick int
+}
+
+// DispatcherConfig represents dispatcher configuration.
+type DispatcherConfig struct {
+	// HeartbeatPeriod defines how often agent should send heartbeats to
+	// dispatcher.
+	HeartbeatPeriod time.Duration `json:",omitempty"`
+}
+
+// CAConfig represents CA configuration.
+type CAConfig struct {
+	// NodeCertExpiry is the duration certificates should be issued for
+	NodeCertExpiry time.Duration `json:",omitempty"`
+
+	// ExternalCAs is a list of CAs to which a manager node will make
+	// certificate signing requests for node certificates.
+	ExternalCAs []*ExternalCA `json:",omitempty"`
+
+	// SigningCACert and SigningCAKey specify the desired signing root CA and
+	// root CA key for the swarm.  When inspecting the cluster, the key will
+	// be redacted.
+	SigningCACert string `json:",omitempty"`
+	SigningCAKey  string `json:",omitempty"`
+
+	// If this value changes, and there is no specified signing cert and key,
+	// then the swarm is forced to generate a new root certificate and key.
+	ForceRotate uint64 `json:",omitempty"`
+}
+
+// ExternalCAProtocol represents type of external CA.
+type ExternalCAProtocol string
+
+// ExternalCAProtocolCFSSL CFSSL
+const ExternalCAProtocolCFSSL ExternalCAProtocol = "cfssl"
+
+// ExternalCA defines external CA to be used by the cluster.
+type ExternalCA struct {
+	// Protocol is the protocol used by this external CA.
+	Protocol ExternalCAProtocol
+
+	// URL is the URL where the external CA can be reached.
+	URL string
+
+	// Options is a set of additional key/value pairs whose interpretation
+	// depends on the specified CA type.
+	Options map[string]string `json:",omitempty"`
+
+	// CACert specifies which root CA is used by this external CA.  This certificate must
+	// be in PEM format.
+	CACert string
+}
+
+// InitRequest is the request used to init a swarm.
+type InitRequest struct {
+	ListenAddr       string
+	AdvertiseAddr    string
+	DataPathAddr     string
+	DataPathPort     uint32
+	ForceNewCluster  bool
+	Spec             Spec
+	AutoLockManagers bool
+	Availability     NodeAvailability
+	DefaultAddrPool  []string
+	SubnetSize       uint32
+}
+
+// JoinRequest is the request used to join a swarm.
+type JoinRequest struct {
+	ListenAddr    string
+	AdvertiseAddr string
+	DataPathAddr  string
+	RemoteAddrs   []string
+	JoinToken     string // accept by secret
+	Availability  NodeAvailability
+}
+
+// UnlockRequest is the request used to unlock a swarm.
+type UnlockRequest struct {
+	// UnlockKey is the unlock key in ASCII-armored format.
+	UnlockKey string
+}
+
+// LocalNodeState represents the state of the local node.
+type LocalNodeState string
+
+const (
+	// LocalNodeStateInactive INACTIVE
+	LocalNodeStateInactive LocalNodeState = "inactive"
+	// LocalNodeStatePending PENDING
+	LocalNodeStatePending LocalNodeState = "pending"
+	// LocalNodeStateActive ACTIVE
+	LocalNodeStateActive LocalNodeState = "active"
+	// LocalNodeStateError ERROR
+	LocalNodeStateError LocalNodeState = "error"
+	// LocalNodeStateLocked LOCKED
+	LocalNodeStateLocked LocalNodeState = "locked"
+)
+
+// Info represents generic information about swarm.
+type Info struct {
+	NodeID   string
+	NodeAddr string
+
+	LocalNodeState   LocalNodeState
+	ControlAvailable bool
+	Error            string
+
+	RemoteManagers []Peer
+	Nodes          int `json:",omitempty"`
+	Managers       int `json:",omitempty"`
+
+	Cluster *ClusterInfo `json:",omitempty"`
+
+	Warnings []string `json:",omitempty"`
+}
+
+// Peer represents a peer.
+type Peer struct {
+	NodeID string
+	Addr   string
+}
+
+// UpdateFlags contains flags for SwarmUpdate.
+type UpdateFlags struct {
+	RotateWorkerToken      bool
+	RotateManagerToken     bool
+	RotateManagerUnlockKey bool
+}
diff --git a/vendor/github.com/docker/docker/api/types/swarm/task.go b/vendor/github.com/docker/docker/api/types/swarm/task.go
new file mode 100644
index 0000000000000..a6f7ab7b5c790
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/swarm/task.go
@@ -0,0 +1,206 @@
+package swarm // import "github.com/docker/docker/api/types/swarm"
+
+import (
+	"time"
+
+	"github.com/docker/docker/api/types/swarm/runtime"
+)
+
+// TaskState represents the state of a task.
+type TaskState string
+
+const (
+	// TaskStateNew NEW
+	TaskStateNew TaskState = "new"
+	// TaskStateAllocated ALLOCATED
+	TaskStateAllocated TaskState = "allocated"
+	// TaskStatePending PENDING
+	TaskStatePending TaskState = "pending"
+	// TaskStateAssigned ASSIGNED
+	TaskStateAssigned TaskState = "assigned"
+	// TaskStateAccepted ACCEPTED
+	TaskStateAccepted TaskState = "accepted"
+	// TaskStatePreparing PREPARING
+	TaskStatePreparing TaskState = "preparing"
+	// TaskStateReady READY
+	TaskStateReady TaskState = "ready"
+	// TaskStateStarting STARTING
+	TaskStateStarting TaskState = "starting"
+	// TaskStateRunning RUNNING
+	TaskStateRunning TaskState = "running"
+	// TaskStateComplete COMPLETE
+	TaskStateComplete TaskState = "complete"
+	// TaskStateShutdown SHUTDOWN
+	TaskStateShutdown TaskState = "shutdown"
+	// TaskStateFailed FAILED
+	TaskStateFailed TaskState = "failed"
+	// TaskStateRejected REJECTED
+	TaskStateRejected TaskState = "rejected"
+	// TaskStateRemove REMOVE
+	TaskStateRemove TaskState = "remove"
+	// TaskStateOrphaned ORPHANED
+	TaskStateOrphaned TaskState = "orphaned"
+)
+
+// Task represents a task.
+type Task struct {
+	ID string
+	Meta
+	Annotations
+
+	Spec                TaskSpec            `json:",omitempty"`
+	ServiceID           string              `json:",omitempty"`
+	Slot                int                 `json:",omitempty"`
+	NodeID              string              `json:",omitempty"`
+	Status              TaskStatus          `json:",omitempty"`
+	DesiredState        TaskState           `json:",omitempty"`
+	NetworksAttachments []NetworkAttachment `json:",omitempty"`
+	GenericResources    []GenericResource   `json:",omitempty"`
+
+	// JobIteration is the JobIteration of the Service that this Task was
+	// spawned from, if the Service is a ReplicatedJob or GlobalJob. This is
+	// used to determine which Tasks belong to which run of the job. This field
+	// is absent if the Service mode is Replicated or Global.
+	JobIteration *Version `json:",omitempty"`
+}
+
+// TaskSpec represents the spec of a task.
+type TaskSpec struct {
+	// ContainerSpec, NetworkAttachmentSpec, and PluginSpec are mutually exclusive.
+	// PluginSpec is only used when the `Runtime` field is set to `plugin`.
+	// NetworkAttachmentSpec is used if the `Runtime` field is set to
+	// `attachment`.
+	ContainerSpec         *ContainerSpec         `json:",omitempty"`
+	PluginSpec            *runtime.PluginSpec    `json:",omitempty"`
+	NetworkAttachmentSpec *NetworkAttachmentSpec `json:",omitempty"`
+
+	Resources     *ResourceRequirements     `json:",omitempty"`
+	RestartPolicy *RestartPolicy            `json:",omitempty"`
+	Placement     *Placement                `json:",omitempty"`
+	Networks      []NetworkAttachmentConfig `json:",omitempty"`
+
+	// LogDriver specifies the LogDriver to use for tasks created from this
+	// spec. If not present, the cluster default on swarm.Spec will be used,
+	// finally falling back to the engine default if not specified.
+	LogDriver *Driver `json:",omitempty"`
+
+	// ForceUpdate is a counter that triggers an update even if no relevant
+	// parameters have been changed.
+	ForceUpdate uint64
+
+	Runtime RuntimeType `json:",omitempty"`
+}
+
+// Resources represents resources (CPU/Memory) which can be advertised by a
+// node and requested to be reserved for a task.
+type Resources struct {
+	NanoCPUs         int64             `json:",omitempty"`
+	MemoryBytes      int64             `json:",omitempty"`
+	GenericResources []GenericResource `json:",omitempty"`
+}
+
+// Limit describes limits on resources which can be requested by a task.
+type Limit struct {
+	NanoCPUs    int64 `json:",omitempty"`
+	MemoryBytes int64 `json:",omitempty"`
+	Pids        int64 `json:",omitempty"`
+}
+
+// GenericResource represents a "user defined" resource which can
+// be either an integer (e.g: SSD=3) or a string (e.g: SSD=sda1)
+type GenericResource struct {
+	NamedResourceSpec    *NamedGenericResource    `json:",omitempty"`
+	DiscreteResourceSpec *DiscreteGenericResource `json:",omitempty"`
+}
+
+// NamedGenericResource represents a "user defined" resource which is defined
+// as a string.
+// "Kind" is used to describe the Kind of a resource (e.g: "GPU", "FPGA", "SSD", ...)
+// Value is used to identify the resource (GPU="UUID-1", FPGA="/dev/sdb5", ...)
+type NamedGenericResource struct {
+	Kind  string `json:",omitempty"`
+	Value string `json:",omitempty"`
+}
+
+// DiscreteGenericResource represents a "user defined" resource which is defined
+// as an integer
+// "Kind" is used to describe the Kind of a resource (e.g: "GPU", "FPGA", "SSD", ...)
+// Value is used to count the resource (SSD=5, HDD=3, ...)
+type DiscreteGenericResource struct {
+	Kind  string `json:",omitempty"`
+	Value int64  `json:",omitempty"`
+}
+
+// ResourceRequirements represents resources requirements.
+type ResourceRequirements struct {
+	Limits       *Limit     `json:",omitempty"`
+	Reservations *Resources `json:",omitempty"`
+}
+
+// Placement represents orchestration parameters.
+type Placement struct {
+	Constraints []string              `json:",omitempty"`
+	Preferences []PlacementPreference `json:",omitempty"`
+	MaxReplicas uint64                `json:",omitempty"`
+
+	// Platforms stores all the platforms that the image can run on.
+	// This field is used in the platform filter for scheduling. If empty,
+	// then the platform filter is off, meaning there are no scheduling restrictions.
+	Platforms []Platform `json:",omitempty"`
+}
+
+// PlacementPreference provides a way to make the scheduler aware of factors
+// such as topology.
+type PlacementPreference struct {
+	Spread *SpreadOver
+}
+
+// SpreadOver is a scheduling preference that instructs the scheduler to spread
+// tasks evenly over groups of nodes identified by labels.
+type SpreadOver struct {
+	// label descriptor, such as engine.labels.az
+	SpreadDescriptor string
+}
+
+// RestartPolicy represents the restart policy.
+type RestartPolicy struct {
+	Condition   RestartPolicyCondition `json:",omitempty"`
+	Delay       *time.Duration         `json:",omitempty"`
+	MaxAttempts *uint64                `json:",omitempty"`
+	Window      *time.Duration         `json:",omitempty"`
+}
+
+// RestartPolicyCondition represents when to restart.
+type RestartPolicyCondition string
+
+const (
+	// RestartPolicyConditionNone NONE
+	RestartPolicyConditionNone RestartPolicyCondition = "none"
+	// RestartPolicyConditionOnFailure ON_FAILURE
+	RestartPolicyConditionOnFailure RestartPolicyCondition = "on-failure"
+	// RestartPolicyConditionAny ANY
+	RestartPolicyConditionAny RestartPolicyCondition = "any"
+)
+
+// TaskStatus represents the status of a task.
+type TaskStatus struct {
+	Timestamp       time.Time        `json:",omitempty"`
+	State           TaskState        `json:",omitempty"`
+	Message         string           `json:",omitempty"`
+	Err             string           `json:",omitempty"`
+	ContainerStatus *ContainerStatus `json:",omitempty"`
+	PortStatus      PortStatus       `json:",omitempty"`
+}
+
+// ContainerStatus represents the status of a container.
+type ContainerStatus struct {
+	ContainerID string
+	PID         int
+	ExitCode    int
+}
+
+// PortStatus represents the port status of a task's host ports whose service
+// has published host ports.
+type PortStatus struct {
+	Ports []PortConfig `json:",omitempty"`
+}
diff --git a/vendor/github.com/docker/docker/api/types/time/duration_convert.go b/vendor/github.com/docker/docker/api/types/time/duration_convert.go
new file mode 100644
index 0000000000000..84b6f073224c2
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/time/duration_convert.go
@@ -0,0 +1,12 @@
+package time // import "github.com/docker/docker/api/types/time"
+
+import (
+	"strconv"
+	"time"
+)
+
+// DurationToSecondsString converts the specified duration to the number of
+// seconds it represents, formatted as a string.
+func DurationToSecondsString(duration time.Duration) string {
+	return strconv.FormatFloat(duration.Seconds(), 'f', 0, 64)
+}
diff --git a/vendor/github.com/docker/docker/api/types/time/timestamp.go b/vendor/github.com/docker/docker/api/types/time/timestamp.go
new file mode 100644
index 0000000000000..2a74b7a59795e
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/time/timestamp.go
@@ -0,0 +1,131 @@
+package time // import "github.com/docker/docker/api/types/time"
+
+import (
+	"fmt"
+	"math"
+	"strconv"
+	"strings"
+	"time"
+)
+
+// These are additional predefined layouts for use in Time.Format and Time.Parse
+// with --since and --until parameters for `docker logs` and `docker events`
+const (
+	rFC3339Local     = "2006-01-02T15:04:05"           // RFC3339 with local timezone
+	rFC3339NanoLocal = "2006-01-02T15:04:05.999999999" // RFC3339Nano with local timezone
+	dateWithZone     = "2006-01-02Z07:00"              // RFC3339 with time at 00:00:00
+	dateLocal        = "2006-01-02"                    // RFC3339 with local timezone and time at 00:00:00
+)
+
+// GetTimestamp tries to parse the given string as a Go duration, then as an
+// RFC3339 time, and finally as a Unix timestamp. If any of these succeed, it
+// returns a Unix timestamp as a string; otherwise it returns the given value
+// back. For duration input, the returned timestamp is computed as the given
+// reference time minus the amount of the duration.
+func GetTimestamp(value string, reference time.Time) (string, error) {
+	if d, err := time.ParseDuration(value); value != "0" && err == nil {
+		return strconv.FormatInt(reference.Add(-d).Unix(), 10), nil
+	}
+
+	var format string
+	// If the string has a Z, a +, or three dashes, use Parse; otherwise use ParseInLocation.
+	parseInLocation := !(strings.ContainsAny(value, "zZ+") || strings.Count(value, "-") == 3)
+
+	if strings.Contains(value, ".") {
+		if parseInLocation {
+			format = rFC3339NanoLocal
+		} else {
+			format = time.RFC3339Nano
+		}
+	} else if strings.Contains(value, "T") {
+		// we want the number of colons in the T portion of the timestamp
+		tcolons := strings.Count(value, ":")
+		// If parseInLocation is off and we have a +/- zone offset (not Z),
+		// there will be an extra colon in the input for the tz offset;
+		// subtract that colon from the tcolons count.
+		if !parseInLocation && !strings.ContainsAny(value, "zZ") && tcolons > 0 {
+			tcolons--
+		}
+		if parseInLocation {
+			switch tcolons {
+			case 0:
+				format = "2006-01-02T15"
+			case 1:
+				format = "2006-01-02T15:04"
+			default:
+				format = rFC3339Local
+			}
+		} else {
+			switch tcolons {
+			case 0:
+				format = "2006-01-02T15Z07:00"
+			case 1:
+				format = "2006-01-02T15:04Z07:00"
+			default:
+				format = time.RFC3339
+			}
+		}
+	} else if parseInLocation {
+		format = dateLocal
+	} else {
+		format = dateWithZone
+	}
+
+	var t time.Time
+	var err error
+
+	if parseInLocation {
+		t, err = time.ParseInLocation(format, value, time.FixedZone(reference.Zone()))
+	} else {
+		t, err = time.Parse(format, value)
+	}
+
+	if err != nil {
+		// if there is a `-` then it's an RFC3339 like timestamp
+		if strings.Contains(value, "-") {
+			return "", err // was probably an RFC3339 like timestamp but the parser failed with an error
+		}
+		if _, _, err := parseTimestamp(value); err != nil {
+			return "", fmt.Errorf("failed to parse value as time or duration: %q", value)
+		}
+		return value, nil // unix timestamp in and out case (meaning: the value passed at the command line is already in the right format for passing to the server)
+	}
+
+	return fmt.Sprintf("%d.%09d", t.Unix(), int64(t.Nanosecond())), nil
+}
+
+// ParseTimestamps returns seconds and nanoseconds from a timestamp that has
+// the format "%d.%09d" (seconds, then a nanosecond fraction). If the incoming
+// nanosecond portion is longer or shorter than 9 digits, it is converted to
+// nanoseconds. The expectation is that the seconds and nanoseconds will be
+// used to create a time variable. For example:
+//
+//	seconds, nanoseconds, err := ParseTimestamps("1136073600.000000001", 0)
+//	if err == nil {
+//		since := time.Unix(seconds, nanoseconds)
+//	}
+//
+// If value is empty, it returns def (default seconds) and zero nanoseconds.
+func ParseTimestamps(value string, def int64) (int64, int64, error) {
+	if value == "" {
+		return def, 0, nil
+	}
+	return parseTimestamp(value)
+}
+
+func parseTimestamp(value string) (int64, int64, error) {
+	sa := strings.SplitN(value, ".", 2)
+	s, err := strconv.ParseInt(sa[0], 10, 64)
+	if err != nil {
+		return s, 0, err
+	}
+	if len(sa) != 2 {
+		return s, 0, nil
+	}
+	n, err := strconv.ParseInt(sa[1], 10, 64)
+	if err != nil {
+		return s, n, err
+	}
+	// should already be in nanoseconds but just in case convert n to nanoseconds
+	n = int64(float64(n) * math.Pow(float64(10), float64(9-len(sa[1]))))
+	return s, n, nil
+}
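The fraction-scaling step in parseTimestamp can be sketched standalone; `normalize` below is a hypothetical helper mirroring that math, not part of the package, with error handling elided:

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
)

// normalize mimics parseTimestamp's handling of the fractional part:
// a fraction shorter or longer than 9 digits is scaled to nanoseconds.
func normalize(value string) (int64, int64) {
	sa := strings.SplitN(value, ".", 2)
	s, _ := strconv.ParseInt(sa[0], 10, 64)
	if len(sa) != 2 {
		return s, 0
	}
	n, _ := strconv.ParseInt(sa[1], 10, 64)
	// Scale by 10^(9-len(fraction)), exactly as in parseTimestamp.
	n = int64(float64(n) * math.Pow(float64(10), float64(9-len(sa[1]))))
	return s, n
}

func main() {
	s, n := normalize("1136073600.000000001")
	fmt.Println(s, n) // 1136073600 1
	s, n = normalize("1136073600.5")
	fmt.Println(s, n) // 1136073600 500000000
}
```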
diff --git a/vendor/github.com/docker/docker/api/types/types.go b/vendor/github.com/docker/docker/api/types/types.go
new file mode 100644
index 0000000000000..e3a159912e22c
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/types.go
@@ -0,0 +1,635 @@
+package types // import "github.com/docker/docker/api/types"
+
+import (
+	"errors"
+	"fmt"
+	"io"
+	"os"
+	"strings"
+	"time"
+
+	"github.com/docker/docker/api/types/container"
+	"github.com/docker/docker/api/types/filters"
+	"github.com/docker/docker/api/types/mount"
+	"github.com/docker/docker/api/types/network"
+	"github.com/docker/docker/api/types/registry"
+	"github.com/docker/docker/api/types/swarm"
+	"github.com/docker/go-connections/nat"
+)
+
+// RootFS returns Image's RootFS description including the layer IDs.
+type RootFS struct {
+	Type      string
+	Layers    []string `json:",omitempty"`
+	BaseLayer string   `json:",omitempty"`
+}
+
+// ImageInspect contains response of Engine API:
+// GET "/images/{name:.*}/json"
+type ImageInspect struct {
+	ID              string `json:"Id"`
+	RepoTags        []string
+	RepoDigests     []string
+	Parent          string
+	Comment         string
+	Created         string
+	Container       string
+	ContainerConfig *container.Config
+	DockerVersion   string
+	Author          string
+	Config          *container.Config
+	Architecture    string
+	Variant         string `json:",omitempty"`
+	Os              string
+	OsVersion       string `json:",omitempty"`
+	Size            int64
+	VirtualSize     int64
+	GraphDriver     GraphDriverData
+	RootFS          RootFS
+	Metadata        ImageMetadata
+}
+
+// ImageMetadata contains engine-local data about the image
+type ImageMetadata struct {
+	LastTagTime time.Time `json:",omitempty"`
+}
+
+// Container contains response of Engine API:
+// GET "/containers/json"
+type Container struct {
+	ID         string `json:"Id"`
+	Names      []string
+	Image      string
+	ImageID    string
+	Command    string
+	Created    int64
+	Ports      []Port
+	SizeRw     int64 `json:",omitempty"`
+	SizeRootFs int64 `json:",omitempty"`
+	Labels     map[string]string
+	State      string
+	Status     string
+	HostConfig struct {
+		NetworkMode string `json:",omitempty"`
+	}
+	NetworkSettings *SummaryNetworkSettings
+	Mounts          []MountPoint
+}
+
+// CopyConfig contains request body of Engine API:
+// POST "/containers/"+containerID+"/copy"
+type CopyConfig struct {
+	Resource string
+}
+
+// ContainerPathStat is used to encode the header from
+// GET "/containers/{name:.*}/archive"
+// "Name" is the file or directory name.
+type ContainerPathStat struct {
+	Name       string      `json:"name"`
+	Size       int64       `json:"size"`
+	Mode       os.FileMode `json:"mode"`
+	Mtime      time.Time   `json:"mtime"`
+	LinkTarget string      `json:"linkTarget"`
+}
+
+// ContainerStats contains response of Engine API:
+// GET "/stats"
+type ContainerStats struct {
+	Body   io.ReadCloser `json:"body"`
+	OSType string        `json:"ostype"`
+}
+
+// Ping contains response of Engine API:
+// GET "/_ping"
+type Ping struct {
+	APIVersion     string
+	OSType         string
+	Experimental   bool
+	BuilderVersion BuilderVersion
+}
+
+// ComponentVersion describes the version information for a specific component.
+type ComponentVersion struct {
+	Name    string
+	Version string
+	Details map[string]string `json:",omitempty"`
+}
+
+// Version contains response of Engine API:
+// GET "/version"
+type Version struct {
+	Platform   struct{ Name string } `json:",omitempty"`
+	Components []ComponentVersion    `json:",omitempty"`
+
+	// The following fields are deprecated, they relate to the Engine component and are kept for backwards compatibility
+
+	Version       string
+	APIVersion    string `json:"ApiVersion"`
+	MinAPIVersion string `json:"MinAPIVersion,omitempty"`
+	GitCommit     string
+	GoVersion     string
+	Os            string
+	Arch          string
+	KernelVersion string `json:",omitempty"`
+	Experimental  bool   `json:",omitempty"`
+	BuildTime     string `json:",omitempty"`
+}
+
+// Commit holds the Git-commit (SHA1) that a binary was built from, as reported
+// in the version-string of external tools, such as containerd, or runC.
+type Commit struct {
+	ID       string // ID is the actual commit ID of external tool.
+	Expected string // Expected is the commit ID of external tool expected by dockerd as set at build time.
+}
+
+// Info contains response of Engine API:
+// GET "/info"
+type Info struct {
+	ID                 string
+	Containers         int
+	ContainersRunning  int
+	ContainersPaused   int
+	ContainersStopped  int
+	Images             int
+	Driver             string
+	DriverStatus       [][2]string
+	SystemStatus       [][2]string `json:",omitempty"` // SystemStatus is only propagated by the Swarm standalone API
+	Plugins            PluginsInfo
+	MemoryLimit        bool
+	SwapLimit          bool
+	KernelMemory       bool // Deprecated: kernel 5.4 deprecated kmem.limit_in_bytes
+	KernelMemoryTCP    bool
+	CPUCfsPeriod       bool `json:"CpuCfsPeriod"`
+	CPUCfsQuota        bool `json:"CpuCfsQuota"`
+	CPUShares          bool
+	CPUSet             bool
+	PidsLimit          bool
+	IPv4Forwarding     bool
+	BridgeNfIptables   bool
+	BridgeNfIP6tables  bool `json:"BridgeNfIp6tables"`
+	Debug              bool
+	NFd                int
+	OomKillDisable     bool
+	NGoroutines        int
+	SystemTime         string
+	LoggingDriver      string
+	CgroupDriver       string
+	CgroupVersion      string `json:",omitempty"`
+	NEventsListener    int
+	KernelVersion      string
+	OperatingSystem    string
+	OSVersion          string
+	OSType             string
+	Architecture       string
+	IndexServerAddress string
+	RegistryConfig     *registry.ServiceConfig
+	NCPU               int
+	MemTotal           int64
+	GenericResources   []swarm.GenericResource
+	DockerRootDir      string
+	HTTPProxy          string `json:"HttpProxy"`
+	HTTPSProxy         string `json:"HttpsProxy"`
+	NoProxy            string
+	Name               string
+	Labels             []string
+	ExperimentalBuild  bool
+	ServerVersion      string
+	ClusterStore       string `json:",omitempty"` // Deprecated: host-discovery and overlay networks with external k/v stores are deprecated
+	ClusterAdvertise   string `json:",omitempty"` // Deprecated: host-discovery and overlay networks with external k/v stores are deprecated
+	Runtimes           map[string]Runtime
+	DefaultRuntime     string
+	Swarm              swarm.Info
+	// LiveRestoreEnabled determines whether containers should be kept
+	// running when the daemon is shutdown or upon daemon start if
+	// running containers are detected
+	LiveRestoreEnabled  bool
+	Isolation           container.Isolation
+	InitBinary          string
+	ContainerdCommit    Commit
+	RuncCommit          Commit
+	InitCommit          Commit
+	SecurityOptions     []string
+	ProductLicense      string               `json:",omitempty"`
+	DefaultAddressPools []NetworkAddressPool `json:",omitempty"`
+	Warnings            []string
+}
+
+// KeyValue holds a key/value pair
+type KeyValue struct {
+	Key, Value string
+}
+
+// NetworkAddressPool is a temp struct used by Info struct
+type NetworkAddressPool struct {
+	Base string
+	Size int
+}
+
+// SecurityOpt contains the name and options of a security option
+type SecurityOpt struct {
+	Name    string
+	Options []KeyValue
+}
+
+// DecodeSecurityOptions decodes a security options string slice to a type safe
+// SecurityOpt
+func DecodeSecurityOptions(opts []string) ([]SecurityOpt, error) {
+	so := []SecurityOpt{}
+	for _, opt := range opts {
+		// support output from a < 1.13 docker daemon
+		if !strings.Contains(opt, "=") {
+			so = append(so, SecurityOpt{Name: opt})
+			continue
+		}
+		secopt := SecurityOpt{}
+		split := strings.Split(opt, ",")
+		for _, s := range split {
+			kv := strings.SplitN(s, "=", 2)
+			if len(kv) != 2 {
+				return nil, fmt.Errorf("invalid security option %q", s)
+			}
+			if kv[0] == "" || kv[1] == "" {
+				return nil, errors.New("invalid empty security option")
+			}
+			if kv[0] == "name" {
+				secopt.Name = kv[1]
+				continue
+			}
+			secopt.Options = append(secopt.Options, KeyValue{Key: kv[0], Value: kv[1]})
+		}
+		so = append(so, secopt)
+	}
+	return so, nil
+}
+
+// PluginsInfo is a temp struct holding the names of plugins registered
+// with the docker daemon. It is used by the Info struct.
+type PluginsInfo struct {
+	// List of Volume plugins registered
+	Volume []string
+	// List of Network plugins registered
+	Network []string
+	// List of Authorization plugins registered
+	Authorization []string
+	// List of Log plugins registered
+	Log []string
+}
+
+// ExecStartCheck is a temp struct used by execStart.
+// Config fields are part of ExecConfig in the runconfig package.
+type ExecStartCheck struct {
+	// ExecStart will first check if it's detached
+	Detach bool
+	// Check if there's a tty
+	Tty bool
+}
+
+// HealthcheckResult stores information about a single run of a healthcheck probe
+type HealthcheckResult struct {
+	Start    time.Time // Start is the time this check started
+	End      time.Time // End is the time this check ended
+	ExitCode int       // ExitCode meanings: 0=healthy, 1=unhealthy, 2=reserved (considered unhealthy), else=error running probe
+	Output   string    // Output from last check
+}
+
+// Health states
+const (
+	NoHealthcheck = "none"      // Indicates there is no healthcheck
+	Starting      = "starting"  // Starting indicates that the container is not yet ready
+	Healthy       = "healthy"   // Healthy indicates that the container is running correctly
+	Unhealthy     = "unhealthy" // Unhealthy indicates that the container has a problem
+)
+
+// Health stores information about the container's healthcheck results
+type Health struct {
+	Status        string               // Status is one of Starting, Healthy or Unhealthy
+	FailingStreak int                  // FailingStreak is the number of consecutive failures
+	Log           []*HealthcheckResult // Log contains the last few results (oldest first)
+}
+
+// ContainerState stores the container's running state.
+// It's part of ContainerJSONBase and will be returned by the "inspect" command.
+type ContainerState struct {
+	Status     string // String representation of the container state. Can be one of "created", "running", "paused", "restarting", "removing", "exited", or "dead"
+	Running    bool
+	Paused     bool
+	Restarting bool
+	OOMKilled  bool
+	Dead       bool
+	Pid        int
+	ExitCode   int
+	Error      string
+	StartedAt  string
+	FinishedAt string
+	Health     *Health `json:",omitempty"`
+}
+
+// ContainerNode stores information about the node that a container
+// is running on.  It's only used by the Docker Swarm standalone API
+type ContainerNode struct {
+	ID        string
+	IPAddress string `json:"IP"`
+	Addr      string
+	Name      string
+	Cpus      int
+	Memory    int64
+	Labels    map[string]string
+}
+
+// ContainerJSONBase contains response of Engine API:
+// GET "/containers/{name:.*}/json"
+type ContainerJSONBase struct {
+	ID              string `json:"Id"`
+	Created         string
+	Path            string
+	Args            []string
+	State           *ContainerState
+	Image           string
+	ResolvConfPath  string
+	HostnamePath    string
+	HostsPath       string
+	LogPath         string
+	Node            *ContainerNode `json:",omitempty"` // Node is only propagated by Docker Swarm standalone API
+	Name            string
+	RestartCount    int
+	Driver          string
+	Platform        string
+	MountLabel      string
+	ProcessLabel    string
+	AppArmorProfile string
+	ExecIDs         []string
+	HostConfig      *container.HostConfig
+	GraphDriver     GraphDriverData
+	SizeRw          *int64 `json:",omitempty"`
+	SizeRootFs      *int64 `json:",omitempty"`
+}
+
+// ContainerJSON extends ContainerJSONBase with the container's Mounts, Config, and NetworkSettings
+type ContainerJSON struct {
+	*ContainerJSONBase
+	Mounts          []MountPoint
+	Config          *container.Config
+	NetworkSettings *NetworkSettings
+}
+
+// NetworkSettings exposes the network settings in the api
+type NetworkSettings struct {
+	NetworkSettingsBase
+	DefaultNetworkSettings
+	Networks map[string]*network.EndpointSettings
+}
+
+// SummaryNetworkSettings provides a summary of the container's networks
+// in /containers/json
+type SummaryNetworkSettings struct {
+	Networks map[string]*network.EndpointSettings
+}
+
+// NetworkSettingsBase holds basic information about networks
+type NetworkSettingsBase struct {
+	Bridge                 string      // Bridge is the Bridge name the network uses (e.g. `docker0`)
+	SandboxID              string      // SandboxID uniquely represents a container's network stack
+	HairpinMode            bool        // HairpinMode specifies if hairpin NAT should be enabled on the virtual interface
+	LinkLocalIPv6Address   string      // LinkLocalIPv6Address is an IPv6 unicast address using the link-local prefix
+	LinkLocalIPv6PrefixLen int         // LinkLocalIPv6PrefixLen is the prefix length of an IPv6 unicast address
+	Ports                  nat.PortMap // Ports is a collection of PortBinding indexed by Port
+	SandboxKey             string      // SandboxKey identifies the sandbox
+	SecondaryIPAddresses   []network.Address
+	SecondaryIPv6Addresses []network.Address
+}
+
+// DefaultNetworkSettings holds network information
+// during the 2 release deprecation period.
+// It will be removed in Docker 1.11.
+type DefaultNetworkSettings struct {
+	EndpointID          string // EndpointID uniquely represents a service endpoint in a Sandbox
+	Gateway             string // Gateway holds the gateway address for the network
+	GlobalIPv6Address   string // GlobalIPv6Address holds network's global IPv6 address
+	GlobalIPv6PrefixLen int    // GlobalIPv6PrefixLen represents mask length of network's global IPv6 address
+	IPAddress           string // IPAddress holds the IPv4 address for the network
+	IPPrefixLen         int    // IPPrefixLen represents mask length of network's IPv4 address
+	IPv6Gateway         string // IPv6Gateway holds gateway address specific for IPv6
+	MacAddress          string // MacAddress holds the MAC address for the network
+}
+
+// MountPoint represents a mount point configuration inside the container.
+// This is used for reporting the mountpoints in use by a container.
+type MountPoint struct {
+	Type        mount.Type `json:",omitempty"`
+	Name        string     `json:",omitempty"`
+	Source      string
+	Destination string
+	Driver      string `json:",omitempty"`
+	Mode        string
+	RW          bool
+	Propagation mount.Propagation
+}
+
+// NetworkResource is the body of the "get network" http response message
+type NetworkResource struct {
+	Name       string                         // Name is the requested name of the network
+	ID         string                         `json:"Id"` // ID uniquely identifies a network on a single machine
+	Created    time.Time                      // Created is the time the network was created
+	Scope      string                         // Scope describes the level at which the network exists (e.g. `swarm` for cluster-wide or `local` for machine level)
+	Driver     string                         // Driver is the Driver name used to create the network (e.g. `bridge`, `overlay`)
+	EnableIPv6 bool                           // EnableIPv6 represents whether to enable IPv6
+	IPAM       network.IPAM                   // IPAM is the network's IP Address Management
+	Internal   bool                           // Internal represents if the network is for internal use only
+	Attachable bool                           // Attachable represents whether regular containers from workers in swarm mode can manually attach to this global-scope network.
+	Ingress    bool                           // Ingress indicates the network is providing the routing-mesh for the swarm cluster.
+	ConfigFrom network.ConfigReference        // ConfigFrom specifies the source which will provide the configuration for this network.
+	ConfigOnly bool                           // ConfigOnly networks are place-holder networks for network configurations to be used by other networks. ConfigOnly networks cannot be used directly to run containers or services.
+	Containers map[string]EndpointResource    // Containers contains endpoints belonging to the network
+	Options    map[string]string              // Options holds the network specific options to use when creating the network
+	Labels     map[string]string              // Labels holds metadata specific to the network being created
+	Peers      []network.PeerInfo             `json:",omitempty"` // List of peer nodes for an overlay network
+	Services   map[string]network.ServiceInfo `json:",omitempty"`
+}
+
+// EndpointResource contains network resources allocated and used for a container in a network
+type EndpointResource struct {
+	Name        string
+	EndpointID  string
+	MacAddress  string
+	IPv4Address string
+	IPv6Address string
+}
+
+// NetworkCreate is the expected body of the "create network" http request message
+type NetworkCreate struct {
+	// Check for networks with duplicate names.
+	// Networks are primarily keyed on a random ID rather than on the name;
+	// the name is strictly a user-friendly alias for the network, which is
+	// uniquely identified by its ID, so there is no guaranteed way to check
+	// for duplicates. CheckDuplicate provides a best-effort check for
+	// networks with the same name, but it is not guaranteed to catch all
+	// name collisions.
+	CheckDuplicate bool
+	Driver         string
+	Scope          string
+	EnableIPv6     bool
+	IPAM           *network.IPAM
+	Internal       bool
+	Attachable     bool
+	Ingress        bool
+	ConfigOnly     bool
+	ConfigFrom     *network.ConfigReference
+	Options        map[string]string
+	Labels         map[string]string
+}
+
+// NetworkCreateRequest is the request message sent to the server for network create call.
+type NetworkCreateRequest struct {
+	NetworkCreate
+	Name string
+}
+
+// NetworkCreateResponse is the response message sent by the server for network create call
+type NetworkCreateResponse struct {
+	ID      string `json:"Id"`
+	Warning string
+}
+
+// NetworkConnect represents the data to be used to connect a container to the network
+type NetworkConnect struct {
+	Container      string
+	EndpointConfig *network.EndpointSettings `json:",omitempty"`
+}
+
+// NetworkDisconnect represents the data to be used to disconnect a container from the network
+type NetworkDisconnect struct {
+	Container string
+	Force     bool
+}
+
+// NetworkInspectOptions holds parameters to inspect network
+type NetworkInspectOptions struct {
+	Scope   string
+	Verbose bool
+}
+
+// Checkpoint represents the details of a checkpoint
+type Checkpoint struct {
+	Name string // Name is the name of the checkpoint
+}
+
+// Runtime describes an OCI runtime
+type Runtime struct {
+	Path string   `json:"path"`
+	Args []string `json:"runtimeArgs,omitempty"`
+
+	// This is exposed here only for internal use
+	// It is not currently supported to specify custom shim configs
+	Shim *ShimConfig `json:"-"`
+}
+
+// ShimConfig is used by runtime to configure containerd shims
+type ShimConfig struct {
+	Binary string
+	Opts   interface{}
+}
+
+// DiskUsage contains response of Engine API:
+// GET "/system/df"
+type DiskUsage struct {
+	LayersSize  int64
+	Images      []*ImageSummary
+	Containers  []*Container
+	Volumes     []*Volume
+	BuildCache  []*BuildCache
+	BuilderSize int64 // Deprecated: build cache sizes are reported per-record in BuildCache
+}
+
+// ContainersPruneReport contains the response for Engine API:
+// POST "/containers/prune"
+type ContainersPruneReport struct {
+	ContainersDeleted []string
+	SpaceReclaimed    uint64
+}
+
+// VolumesPruneReport contains the response for Engine API:
+// POST "/volumes/prune"
+type VolumesPruneReport struct {
+	VolumesDeleted []string
+	SpaceReclaimed uint64
+}
+
+// ImagesPruneReport contains the response for Engine API:
+// POST "/images/prune"
+type ImagesPruneReport struct {
+	ImagesDeleted  []ImageDeleteResponseItem
+	SpaceReclaimed uint64
+}
+
+// BuildCachePruneReport contains the response for Engine API:
+// POST "/build/prune"
+type BuildCachePruneReport struct {
+	CachesDeleted  []string
+	SpaceReclaimed uint64
+}
+
+// NetworksPruneReport contains the response for Engine API:
+// POST "/networks/prune"
+type NetworksPruneReport struct {
+	NetworksDeleted []string
+}
+
+// SecretCreateResponse contains the information returned to a client
+// on the creation of a new secret.
+type SecretCreateResponse struct {
+	// ID is the id of the created secret.
+	ID string
+}
+
+// SecretListOptions holds parameters to list secrets
+type SecretListOptions struct {
+	Filters filters.Args
+}
+
+// ConfigCreateResponse contains the information returned to a client
+// on the creation of a new config.
+type ConfigCreateResponse struct {
+	// ID is the id of the created config.
+	ID string
+}
+
+// ConfigListOptions holds parameters to list configs
+type ConfigListOptions struct {
+	Filters filters.Args
+}
+
+// PushResult contains the tag, manifest digest, and manifest size from the
+// push. It's used to signal this information to the trust code in the client
+// so it can sign the manifest if necessary.
+type PushResult struct {
+	Tag    string
+	Digest string
+	Size   int
+}
+
+// BuildResult contains the image id of a successful build
+type BuildResult struct {
+	ID string
+}
+
+// BuildCache contains information about a build cache record
+type BuildCache struct {
+	ID          string
+	Parent      string
+	Type        string
+	Description string
+	InUse       bool
+	Shared      bool
+	Size        int64
+	CreatedAt   time.Time
+	LastUsedAt  *time.Time
+	UsageCount  int
+}
+
+// BuildCachePruneOptions hold parameters to prune the build cache
+type BuildCachePruneOptions struct {
+	All         bool
+	KeepStorage int64
+	Filters     filters.Args
+}
diff --git a/vendor/github.com/docker/docker/api/types/versions/README.md b/vendor/github.com/docker/docker/api/types/versions/README.md
new file mode 100644
index 0000000000000..1ef911edb0f9a
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/versions/README.md
@@ -0,0 +1,14 @@
+# Legacy API type versions
+
+This package includes types for legacy API versions. The stable version of the API types live in `api/types/*.go`.
+
+Consider moving a type here when you need to keep backwards compatibility in the API. These legacy types are organized by the latest API version they appear in. For instance, types in the `v1p19` package are valid for API versions below or equal to `1.19`. Types in the `v1p20` package are valid for API version `1.20`, since versions below that use the legacy types in `v1p19`.
+
+## Package name conventions
+
+The package name convention is to use `v` as a prefix for the version number and `p`(patch) as a separator. We use this nomenclature due to a few restrictions in the Go package name convention:
+
+1. We cannot use `.` because it's interpreted by the language, think of `v1.20.CallFunction`.
+2. We cannot use `_` because golint complains about it. The code is actually valid, but it arguably looks more awkward: `v1_20.CallFunction`.
+
+For instance, if you want to modify a type that was available in version `1.21` of the API but will have different fields in version `1.22`, create a new package under `api/types/versions/v1p21`.
diff --git a/vendor/github.com/docker/docker/api/types/versions/compare.go b/vendor/github.com/docker/docker/api/types/versions/compare.go
new file mode 100644
index 0000000000000..8ccb0aa92ebe4
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/versions/compare.go
@@ -0,0 +1,62 @@
+package versions // import "github.com/docker/docker/api/types/versions"
+
+import (
+	"strconv"
+	"strings"
+)
+
+// compare compares two version strings
+// returns -1 if v1 < v2, 1 if v1 > v2, 0 otherwise.
+func compare(v1, v2 string) int {
+	var (
+		currTab  = strings.Split(v1, ".")
+		otherTab = strings.Split(v2, ".")
+	)
+
+	max := len(currTab)
+	if len(otherTab) > max {
+		max = len(otherTab)
+	}
+	for i := 0; i < max; i++ {
+		var currInt, otherInt int
+
+		if len(currTab) > i {
+			currInt, _ = strconv.Atoi(currTab[i])
+		}
+		if len(otherTab) > i {
+			otherInt, _ = strconv.Atoi(otherTab[i])
+		}
+		if currInt > otherInt {
+			return 1
+		}
+		if otherInt > currInt {
+			return -1
+		}
+	}
+	return 0
+}
+
+// LessThan checks if a version is less than another
+func LessThan(v, other string) bool {
+	return compare(v, other) == -1
+}
+
+// LessThanOrEqualTo checks if a version is less than or equal to another
+func LessThanOrEqualTo(v, other string) bool {
+	return compare(v, other) <= 0
+}
+
+// GreaterThan checks if a version is greater than another
+func GreaterThan(v, other string) bool {
+	return compare(v, other) == 1
+}
+
+// GreaterThanOrEqualTo checks if a version is greater than or equal to another
+func GreaterThanOrEqualTo(v, other string) bool {
+	return compare(v, other) >= 0
+}
+
+// Equal checks if a version is equal to another
+func Equal(v, other string) bool {
+	return compare(v, other) == 0
+}
diff --git a/vendor/github.com/docker/docker/api/types/volume.go b/vendor/github.com/docker/docker/api/types/volume.go
new file mode 100644
index 0000000000000..c69b08448df4c
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/volume.go
@@ -0,0 +1,72 @@
+package types
+
+// This file was generated by the swagger tool.
+// Editing this file might prove futile when you re-run the swagger generate command
+
+// Volume volume
+// swagger:model Volume
+type Volume struct {
+
+	// Date/Time the volume was created.
+	CreatedAt string `json:"CreatedAt,omitempty"`
+
+	// Name of the volume driver used by the volume.
+	// Required: true
+	Driver string `json:"Driver"`
+
+	// User-defined key/value metadata.
+	// Required: true
+	Labels map[string]string `json:"Labels"`
+
+	// Mount path of the volume on the host.
+	// Required: true
+	Mountpoint string `json:"Mountpoint"`
+
+	// Name of the volume.
+	// Required: true
+	Name string `json:"Name"`
+
+	// The driver specific options used when creating the volume.
+	//
+	// Required: true
+	Options map[string]string `json:"Options"`
+
+	// The level at which the volume exists. Either `global` for cluster-wide,
+	// or `local` for machine level.
+	//
+	// Required: true
+	Scope string `json:"Scope"`
+
+	// Low-level details about the volume, provided by the volume driver.
+	// Details are returned as a map with key/value pairs:
+	// `{"key":"value","key2":"value2"}`.
+	//
+	// The `Status` field is optional, and is omitted if the volume driver
+	// does not support this feature.
+	//
+	Status map[string]interface{} `json:"Status,omitempty"`
+
+	// usage data
+	UsageData *VolumeUsageData `json:"UsageData,omitempty"`
+}
+
+// VolumeUsageData Usage details about the volume. This information is used by the
+// `GET /system/df` endpoint, and omitted in other endpoints.
+//
+// swagger:model VolumeUsageData
+type VolumeUsageData struct {
+
+	// The number of containers referencing this volume. This field
+	// is set to `-1` if the reference-count is not available.
+	//
+	// Required: true
+	RefCount int64 `json:"RefCount"`
+
+	// Amount of disk space used by the volume (in bytes). This information
+	// is only available for volumes created with the `"local"` volume
+	// driver. For volumes created with other volume drivers, this field
+	// is set to `-1` ("not available")
+	//
+	// Required: true
+	Size int64 `json:"Size"`
+}
diff --git a/vendor/github.com/docker/docker/api/types/volume/volume_create.go b/vendor/github.com/docker/docker/api/types/volume/volume_create.go
new file mode 100644
index 0000000000000..8538078dd663c
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/volume/volume_create.go
@@ -0,0 +1,31 @@
+package volume // import "github.com/docker/docker/api/types/volume"
+
+// ----------------------------------------------------------------------------
+// Code generated by `swagger generate operation`. DO NOT EDIT.
+//
+// See hack/generate-swagger-api.sh
+// ----------------------------------------------------------------------------
+
+// VolumeCreateBody Volume configuration
+// swagger:model VolumeCreateBody
+type VolumeCreateBody struct {
+
+	// Name of the volume driver to use.
+	// Required: true
+	Driver string `json:"Driver"`
+
+	// A mapping of driver options and values. These options are
+	// passed directly to the driver and are driver specific.
+	//
+	// Required: true
+	DriverOpts map[string]string `json:"DriverOpts"`
+
+	// User-defined key/value metadata.
+	// Required: true
+	Labels map[string]string `json:"Labels"`
+
+	// The new volume's name. If not specified, Docker generates a name.
+	//
+	// Required: true
+	Name string `json:"Name"`
+}
diff --git a/vendor/github.com/docker/docker/api/types/volume/volume_list.go b/vendor/github.com/docker/docker/api/types/volume/volume_list.go
new file mode 100644
index 0000000000000..be06179bf488d
--- /dev/null
+++ b/vendor/github.com/docker/docker/api/types/volume/volume_list.go
@@ -0,0 +1,23 @@
+package volume // import "github.com/docker/docker/api/types/volume"
+
+// ----------------------------------------------------------------------------
+// Code generated by `swagger generate operation`. DO NOT EDIT.
+//
+// See hack/generate-swagger-api.sh
+// ----------------------------------------------------------------------------
+
+import "github.com/docker/docker/api/types"
+
+// VolumeListOKBody Volume list response
+// swagger:model VolumeListOKBody
+type VolumeListOKBody struct {
+
+	// List of volumes
+	// Required: true
+	Volumes []*types.Volume `json:"Volumes"`
+
+	// Warnings that occurred when fetching the list of volumes.
+	//
+	// Required: true
+	Warnings []string `json:"Warnings"`
+}
diff --git a/vendor/github.com/docker/docker/client/README.md b/vendor/github.com/docker/docker/client/README.md
new file mode 100644
index 0000000000000..992f18117df57
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/README.md
@@ -0,0 +1,35 @@
+# Go client for the Docker Engine API
+
+The `docker` command uses this package to communicate with the daemon. It can also be used by your own Go applications to do anything the command-line interface does – running containers, pulling images, managing swarms, etc.
+
+For example, to list running containers (the equivalent of `docker ps`):
+
+```go
+package main
+
+import (
+	"context"
+	"fmt"
+
+	"github.com/docker/docker/api/types"
+	"github.com/docker/docker/client"
+)
+
+func main() {
+	cli, err := client.NewClientWithOpts(client.FromEnv)
+	if err != nil {
+		panic(err)
+	}
+
+	containers, err := cli.ContainerList(context.Background(), types.ContainerListOptions{})
+	if err != nil {
+		panic(err)
+	}
+
+	for _, container := range containers {
+		fmt.Printf("%s %s\n", container.ID[:10], container.Image)
+	}
+}
+```
+
+[Full documentation is available on GoDoc.](https://godoc.org/github.com/docker/docker/client)
diff --git a/vendor/github.com/docker/docker/client/build_cancel.go b/vendor/github.com/docker/docker/client/build_cancel.go
new file mode 100644
index 0000000000000..3aae43e3d17e7
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/build_cancel.go
@@ -0,0 +1,16 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"net/url"
+)
+
+// BuildCancel requests the daemon to cancel the ongoing build request with the given id
+func (cli *Client) BuildCancel(ctx context.Context, id string) error {
+	query := url.Values{}
+	query.Set("id", id)
+
+	serverResp, err := cli.post(ctx, "/build/cancel", query, nil, nil)
+	ensureReaderClosed(serverResp)
+	return err
+}
diff --git a/vendor/github.com/docker/docker/client/build_prune.go b/vendor/github.com/docker/docker/client/build_prune.go
new file mode 100644
index 0000000000000..397d67cdcf1ac
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/build_prune.go
@@ -0,0 +1,45 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"encoding/json"
+	"fmt"
+	"net/url"
+
+	"github.com/docker/docker/api/types"
+	"github.com/docker/docker/api/types/filters"
+	"github.com/pkg/errors"
+)
+
+// BuildCachePrune requests the daemon to delete unused cache data
+func (cli *Client) BuildCachePrune(ctx context.Context, opts types.BuildCachePruneOptions) (*types.BuildCachePruneReport, error) {
+	if err := cli.NewVersionError("1.31", "build prune"); err != nil {
+		return nil, err
+	}
+
+	report := types.BuildCachePruneReport{}
+
+	query := url.Values{}
+	if opts.All {
+		query.Set("all", "1")
+	}
+	query.Set("keep-storage", fmt.Sprintf("%d", opts.KeepStorage))
+	filters, err := filters.ToJSON(opts.Filters)
+	if err != nil {
+		return nil, errors.Wrap(err, "prune could not marshal filters option")
+	}
+	query.Set("filters", filters)
+
+	serverResp, err := cli.post(ctx, "/build/prune", query, nil, nil)
+	defer ensureReaderClosed(serverResp)
+
+	if err != nil {
+		return nil, err
+	}
+
+	if err := json.NewDecoder(serverResp.body).Decode(&report); err != nil {
+		return nil, fmt.Errorf("error retrieving build cache prune report: %v", err)
+	}
+
+	return &report, nil
+}
diff --git a/vendor/github.com/docker/docker/client/checkpoint_create.go b/vendor/github.com/docker/docker/client/checkpoint_create.go
new file mode 100644
index 0000000000000..921024fe4fb06
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/checkpoint_create.go
@@ -0,0 +1,14 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+
+	"github.com/docker/docker/api/types"
+)
+
+// CheckpointCreate creates a checkpoint from the given container with the given name
+func (cli *Client) CheckpointCreate(ctx context.Context, container string, options types.CheckpointCreateOptions) error {
+	resp, err := cli.post(ctx, "/containers/"+container+"/checkpoints", nil, options, nil)
+	ensureReaderClosed(resp)
+	return err
+}
diff --git a/vendor/github.com/docker/docker/client/checkpoint_delete.go b/vendor/github.com/docker/docker/client/checkpoint_delete.go
new file mode 100644
index 0000000000000..54f55fa76e6ac
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/checkpoint_delete.go
@@ -0,0 +1,20 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"net/url"
+
+	"github.com/docker/docker/api/types"
+)
+
+// CheckpointDelete deletes the checkpoint with the given name from the given container
+func (cli *Client) CheckpointDelete(ctx context.Context, containerID string, options types.CheckpointDeleteOptions) error {
+	query := url.Values{}
+	if options.CheckpointDir != "" {
+		query.Set("dir", options.CheckpointDir)
+	}
+
+	resp, err := cli.delete(ctx, "/containers/"+containerID+"/checkpoints/"+options.CheckpointID, query, nil)
+	ensureReaderClosed(resp)
+	return err
+}
diff --git a/vendor/github.com/docker/docker/client/checkpoint_list.go b/vendor/github.com/docker/docker/client/checkpoint_list.go
new file mode 100644
index 0000000000000..66d46dd161ba3
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/checkpoint_list.go
@@ -0,0 +1,28 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"encoding/json"
+	"net/url"
+
+	"github.com/docker/docker/api/types"
+)
+
+// CheckpointList returns the checkpoints of the given container in the docker host
+func (cli *Client) CheckpointList(ctx context.Context, container string, options types.CheckpointListOptions) ([]types.Checkpoint, error) {
+	var checkpoints []types.Checkpoint
+
+	query := url.Values{}
+	if options.CheckpointDir != "" {
+		query.Set("dir", options.CheckpointDir)
+	}
+
+	resp, err := cli.get(ctx, "/containers/"+container+"/checkpoints", query, nil)
+	defer ensureReaderClosed(resp)
+	if err != nil {
+		return checkpoints, wrapResponseError(err, resp, "container", container)
+	}
+
+	err = json.NewDecoder(resp.body).Decode(&checkpoints)
+	return checkpoints, err
+}
diff --git a/vendor/github.com/docker/docker/client/client.go b/vendor/github.com/docker/docker/client/client.go
new file mode 100644
index 0000000000000..0d3614d5dbdac
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/client.go
@@ -0,0 +1,306 @@
+/*
+Package client is a Go client for the Docker Engine API.
+
+For more information about the Engine API, see the documentation:
+https://docs.docker.com/engine/api/
+
+# Usage
+
+You use the library by creating a client object and calling methods on it. The
+client can be created either from environment variables with NewClientWithOpts(client.FromEnv),
+or configured manually with NewClient().
+
+For example, to list running containers (the equivalent of "docker ps"):
+
+	package main
+
+	import (
+		"context"
+		"fmt"
+
+		"github.com/docker/docker/api/types"
+		"github.com/docker/docker/client"
+	)
+
+	func main() {
+		cli, err := client.NewClientWithOpts(client.FromEnv)
+		if err != nil {
+			panic(err)
+		}
+
+		containers, err := cli.ContainerList(context.Background(), types.ContainerListOptions{})
+		if err != nil {
+			panic(err)
+		}
+
+		for _, container := range containers {
+			fmt.Printf("%s %s\n", container.ID[:10], container.Image)
+		}
+	}
+*/
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"fmt"
+	"net"
+	"net/http"
+	"net/url"
+	"path"
+	"strings"
+
+	"github.com/docker/docker/api"
+	"github.com/docker/docker/api/types"
+	"github.com/docker/docker/api/types/versions"
+	"github.com/docker/go-connections/sockets"
+	"github.com/pkg/errors"
+)
+
+// ErrRedirect is the error returned by checkRedirect when the request is non-GET.
+var ErrRedirect = errors.New("unexpected redirect in response")
+
+// Client is the API client that performs all operations
+// against a docker server.
+type Client struct {
+	// scheme sets the scheme for the client
+	scheme string
+	// host holds the server address to connect to
+	host string
+	// proto holds the client protocol i.e. unix.
+	proto string
+	// addr holds the client address.
+	addr string
+	// basePath holds the path to prepend to the requests.
+	basePath string
+	// client used to send and receive http requests.
+	client *http.Client
+	// version of the server to talk to.
+	version string
+	// custom http headers configured by users.
+	customHTTPHeaders map[string]string
+	// manualOverride is set to true when the version was set by users.
+	manualOverride bool
+
+	// negotiateVersion indicates if the client should automatically negotiate
+	// the API version to use when making requests. API version negotiation is
+	// performed on the first request, after which negotiated is set to "true"
+	// so that subsequent requests do not re-negotiate.
+	negotiateVersion bool
+
+	// negotiated indicates that API version negotiation took place
+	negotiated bool
+}
+
+// CheckRedirect specifies the policy for dealing with redirect responses:
+// If the request is non-GET return `ErrRedirect`. Otherwise use the last response.
+//
+// Go 1.8 changed behavior for HTTP redirects (specifically 301, 307, and 308) in the client.
+// The Docker client (and by extension the docker API client) can be made to send a request
+// like POST /containers//start where what would normally be in the name section of the URL is empty.
+// This triggers an HTTP 301 from the daemon.
+// In Go 1.8 this 301 is converted to a GET request, which ends up getting a 404 from the daemon.
+// This behavior change manifests in the client in that before, the 301 was not followed and
+// the client did not generate an error, but now it results in a message like "Error response from daemon: page not found".
+func CheckRedirect(req *http.Request, via []*http.Request) error {
+	if via[0].Method == http.MethodGet {
+		return http.ErrUseLastResponse
+	}
+	return ErrRedirect
+}
+
+// NewClientWithOpts initializes a new API client with default values. It takes functors
+// to modify values when creating it, like `NewClientWithOpts(WithVersion(…))`
+// It also initializes the custom http headers to add to each request.
+//
+// It won't send any version information if the version number is empty. It is
+// highly recommended that you set a version or your client may break if the
+// server is upgraded.
+func NewClientWithOpts(ops ...Opt) (*Client, error) {
+	client, err := defaultHTTPClient(DefaultDockerHost)
+	if err != nil {
+		return nil, err
+	}
+	c := &Client{
+		host:    DefaultDockerHost,
+		version: api.DefaultVersion,
+		client:  client,
+		proto:   defaultProto,
+		addr:    defaultAddr,
+	}
+
+	for _, op := range ops {
+		if err := op(c); err != nil {
+			return nil, err
+		}
+	}
+
+	if c.scheme == "" {
+		c.scheme = "http"
+
+		tlsConfig := resolveTLSConfig(c.client.Transport)
+		if tlsConfig != nil {
+			// TODO(stevvooe): This isn't really the right way to write clients in Go.
+			// `NewClient` should probably only take an `*http.Client` and work from there.
+			// Unfortunately, the model of having a host-ish/url-thingy as the connection
+			// string has us confusing protocol and transport layers. We continue doing
+			// this to avoid breaking existing clients but this should be addressed.
+			c.scheme = "https"
+		}
+	}
+
+	return c, nil
+}
+
+func defaultHTTPClient(host string) (*http.Client, error) {
+	url, err := ParseHostURL(host)
+	if err != nil {
+		return nil, err
+	}
+	transport := new(http.Transport)
+	sockets.ConfigureTransport(transport, url.Scheme, url.Host)
+	return &http.Client{
+		Transport:     transport,
+		CheckRedirect: CheckRedirect,
+	}, nil
+}
+
+// Close the transport used by the client
+func (cli *Client) Close() error {
+	if t, ok := cli.client.Transport.(*http.Transport); ok {
+		t.CloseIdleConnections()
+	}
+	return nil
+}
+
+// getAPIPath returns the versioned request path to call the API.
+// It appends the query parameters to the path if they are not empty.
+func (cli *Client) getAPIPath(ctx context.Context, p string, query url.Values) string {
+	var apiPath string
+	if cli.negotiateVersion && !cli.negotiated {
+		cli.NegotiateAPIVersion(ctx)
+	}
+	if cli.version != "" {
+		v := strings.TrimPrefix(cli.version, "v")
+		apiPath = path.Join(cli.basePath, "/v"+v, p)
+	} else {
+		apiPath = path.Join(cli.basePath, p)
+	}
+	return (&url.URL{Path: apiPath, RawQuery: query.Encode()}).String()
+}
+
+// ClientVersion returns the API version used by this client.
+func (cli *Client) ClientVersion() string {
+	return cli.version
+}
+
+// NegotiateAPIVersion queries the API and updates the version to match the
+// API version. Any errors are silently ignored. If a manual override is in place,
+// either through the `DOCKER_API_VERSION` environment variable, or if the client
+// was initialized with a fixed version (`opts.WithVersion(xx)`), no negotiation
+// will be performed.
+func (cli *Client) NegotiateAPIVersion(ctx context.Context) {
+	if !cli.manualOverride {
+		ping, _ := cli.Ping(ctx)
+		cli.negotiateAPIVersionPing(ping)
+	}
+}
+
+// NegotiateAPIVersionPing updates the client version to match the Ping.APIVersion
+// if the ping version is less than the default version.  If a manual override is
+// in place, either through the `DOCKER_API_VERSION` environment variable, or if
+// the client was initialized with a fixed version (`opts.WithVersion(xx)`), no
+// negotiation is performed.
+func (cli *Client) NegotiateAPIVersionPing(p types.Ping) {
+	if !cli.manualOverride {
+		cli.negotiateAPIVersionPing(p)
+	}
+}
+
+// negotiateAPIVersionPing updates the client version to match the API version
+// in the ping response. Any errors are silently ignored.
+func (cli *Client) negotiateAPIVersionPing(p types.Ping) {
+	// try the latest version before versioning headers existed
+	if p.APIVersion == "" {
+		p.APIVersion = "1.24"
+	}
+
+	// if the client is not initialized with a version, start with the latest supported version
+	if cli.version == "" {
+		cli.version = api.DefaultVersion
+	}
+
+	// if server version is lower than the client version, downgrade
+	if versions.LessThan(p.APIVersion, cli.version) {
+		cli.version = p.APIVersion
+	}
+
+	// Store the results, so that automatic API version negotiation (if enabled)
+	// won't be performed on the next request.
+	if cli.negotiateVersion {
+		cli.negotiated = true
+	}
+}
+
+// DaemonHost returns the host address used by the client
+func (cli *Client) DaemonHost() string {
+	return cli.host
+}
+
+// HTTPClient returns a copy of the HTTP client bound to the server
+func (cli *Client) HTTPClient() *http.Client {
+	c := *cli.client
+	return &c
+}
+
+// ParseHostURL parses a URL string, validates that it is a host URL, and
+// returns the parsed URL.
+func ParseHostURL(host string) (*url.URL, error) {
+	protoAddrParts := strings.SplitN(host, "://", 2)
+	if len(protoAddrParts) == 1 {
+		return nil, fmt.Errorf("unable to parse docker host `%s`", host)
+	}
+
+	var basePath string
+	proto, addr := protoAddrParts[0], protoAddrParts[1]
+	if proto == "tcp" {
+		parsed, err := url.Parse("tcp://" + addr)
+		if err != nil {
+			return nil, err
+		}
+		addr = parsed.Host
+		basePath = parsed.Path
+	}
+	return &url.URL{
+		Scheme: proto,
+		Host:   addr,
+		Path:   basePath,
+	}, nil
+}
+
+// CustomHTTPHeaders returns the custom http headers stored by the client.
+func (cli *Client) CustomHTTPHeaders() map[string]string {
+	m := make(map[string]string)
+	for k, v := range cli.customHTTPHeaders {
+		m[k] = v
+	}
+	return m
+}
+
+// SetCustomHTTPHeaders sets the custom HTTP headers that will be set on every HTTP request made by the client.
+// Deprecated: use WithHTTPHeaders when creating the client.
+func (cli *Client) SetCustomHTTPHeaders(headers map[string]string) {
+	cli.customHTTPHeaders = headers
+}
+
+// Dialer returns a dialer for a raw stream connection, with HTTP/1.1 header, that can be used for proxying the daemon connection.
+// Used by `docker dial-stdio` (docker/cli#889).
+func (cli *Client) Dialer() func(context.Context) (net.Conn, error) {
+	return func(ctx context.Context) (net.Conn, error) {
+		if transport, ok := cli.client.Transport.(*http.Transport); ok {
+			if transport.DialContext != nil && transport.TLSClientConfig == nil {
+				return transport.DialContext(ctx, cli.proto, cli.addr)
+			}
+		}
+		return fallbackDial(cli.proto, cli.addr, resolveTLSConfig(cli.client.Transport))
+	}
+}
diff --git a/vendor/github.com/docker/docker/client/client_deprecated.go b/vendor/github.com/docker/docker/client/client_deprecated.go
new file mode 100644
index 0000000000000..54cdfc29a84b6
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/client_deprecated.go
@@ -0,0 +1,23 @@
+package client
+
+import "net/http"
+
+// NewClient initializes a new API client for the given host and API version.
+// It uses the given http client as transport.
+// It also initializes the custom http headers to add to each request.
+//
+// It won't send any version information if the version number is empty. It is
+// highly recommended that you set a version or your client may break if the
+// server is upgraded.
+// Deprecated: use NewClientWithOpts
+func NewClient(host string, version string, client *http.Client, httpHeaders map[string]string) (*Client, error) {
+	return NewClientWithOpts(WithHost(host), WithVersion(version), WithHTTPClient(client), WithHTTPHeaders(httpHeaders))
+}
+
+// NewEnvClient initializes a new API client based on environment variables.
+// See FromEnv for a list of supported environment variables.
+//
+// Deprecated: use NewClientWithOpts(FromEnv)
+func NewEnvClient() (*Client, error) {
+	return NewClientWithOpts(FromEnv)
+}
diff --git a/vendor/github.com/docker/docker/client/client_unix.go b/vendor/github.com/docker/docker/client/client_unix.go
new file mode 100644
index 0000000000000..5846f888fea21
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/client_unix.go
@@ -0,0 +1,10 @@
+//go:build linux || freebsd || openbsd || netbsd || darwin || solaris || illumos || dragonfly
+// +build linux freebsd openbsd netbsd darwin solaris illumos dragonfly
+
+package client // import "github.com/docker/docker/client"
+
+// DefaultDockerHost defines the OS-specific default if DOCKER_HOST is unset
+const DefaultDockerHost = "unix:///var/run/docker.sock"
+
+const defaultProto = "unix"
+const defaultAddr = "/var/run/docker.sock"
diff --git a/vendor/github.com/docker/docker/client/client_windows.go b/vendor/github.com/docker/docker/client/client_windows.go
new file mode 100644
index 0000000000000..c649e54412ced
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/client_windows.go
@@ -0,0 +1,7 @@
+package client // import "github.com/docker/docker/client"
+
+// DefaultDockerHost defines the OS-specific default if DOCKER_HOST is unset
+const DefaultDockerHost = "npipe:////./pipe/docker_engine"
+
+const defaultProto = "npipe"
+const defaultAddr = "//./pipe/docker_engine"
diff --git a/vendor/github.com/docker/docker/client/config_create.go b/vendor/github.com/docker/docker/client/config_create.go
new file mode 100644
index 0000000000000..ee7d411df06af
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/config_create.go
@@ -0,0 +1,25 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"encoding/json"
+
+	"github.com/docker/docker/api/types"
+	"github.com/docker/docker/api/types/swarm"
+)
+
+// ConfigCreate creates a new Config.
+func (cli *Client) ConfigCreate(ctx context.Context, config swarm.ConfigSpec) (types.ConfigCreateResponse, error) {
+	var response types.ConfigCreateResponse
+	if err := cli.NewVersionError("1.30", "config create"); err != nil {
+		return response, err
+	}
+	resp, err := cli.post(ctx, "/configs/create", nil, config, nil)
+	defer ensureReaderClosed(resp)
+	if err != nil {
+		return response, err
+	}
+
+	err = json.NewDecoder(resp.body).Decode(&response)
+	return response, err
+}
diff --git a/vendor/github.com/docker/docker/client/config_inspect.go b/vendor/github.com/docker/docker/client/config_inspect.go
new file mode 100644
index 0000000000000..f1b0d7f7536ce
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/config_inspect.go
@@ -0,0 +1,36 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"bytes"
+	"context"
+	"encoding/json"
+	"io"
+
+	"github.com/docker/docker/api/types/swarm"
+)
+
+// ConfigInspectWithRaw returns the config information with raw data
+func (cli *Client) ConfigInspectWithRaw(ctx context.Context, id string) (swarm.Config, []byte, error) {
+	if id == "" {
+		return swarm.Config{}, nil, objectNotFoundError{object: "config", id: id}
+	}
+	if err := cli.NewVersionError("1.30", "config inspect"); err != nil {
+		return swarm.Config{}, nil, err
+	}
+	resp, err := cli.get(ctx, "/configs/"+id, nil, nil)
+	defer ensureReaderClosed(resp)
+	if err != nil {
+		return swarm.Config{}, nil, wrapResponseError(err, resp, "config", id)
+	}
+
+	body, err := io.ReadAll(resp.body)
+	if err != nil {
+		return swarm.Config{}, nil, err
+	}
+
+	var config swarm.Config
+	rdr := bytes.NewReader(body)
+	err = json.NewDecoder(rdr).Decode(&config)
+
+	return config, body, err
+}
diff --git a/vendor/github.com/docker/docker/client/config_list.go b/vendor/github.com/docker/docker/client/config_list.go
new file mode 100644
index 0000000000000..565acc6e27332
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/config_list.go
@@ -0,0 +1,38 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"encoding/json"
+	"net/url"
+
+	"github.com/docker/docker/api/types"
+	"github.com/docker/docker/api/types/filters"
+	"github.com/docker/docker/api/types/swarm"
+)
+
+// ConfigList returns the list of configs.
+func (cli *Client) ConfigList(ctx context.Context, options types.ConfigListOptions) ([]swarm.Config, error) {
+	if err := cli.NewVersionError("1.30", "config list"); err != nil {
+		return nil, err
+	}
+	query := url.Values{}
+
+	if options.Filters.Len() > 0 {
+		filterJSON, err := filters.ToJSON(options.Filters)
+		if err != nil {
+			return nil, err
+		}
+
+		query.Set("filters", filterJSON)
+	}
+
+	resp, err := cli.get(ctx, "/configs", query, nil)
+	defer ensureReaderClosed(resp)
+	if err != nil {
+		return nil, err
+	}
+
+	var configs []swarm.Config
+	err = json.NewDecoder(resp.body).Decode(&configs)
+	return configs, err
+}
diff --git a/vendor/github.com/docker/docker/client/config_remove.go b/vendor/github.com/docker/docker/client/config_remove.go
new file mode 100644
index 0000000000000..a708fcaecfdc3
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/config_remove.go
@@ -0,0 +1,13 @@
+package client // import "github.com/docker/docker/client"
+
+import "context"
+
+// ConfigRemove removes a Config.
+func (cli *Client) ConfigRemove(ctx context.Context, id string) error {
+	if err := cli.NewVersionError("1.30", "config remove"); err != nil {
+		return err
+	}
+	resp, err := cli.delete(ctx, "/configs/"+id, nil, nil)
+	defer ensureReaderClosed(resp)
+	return wrapResponseError(err, resp, "config", id)
+}
diff --git a/vendor/github.com/docker/docker/client/config_update.go b/vendor/github.com/docker/docker/client/config_update.go
new file mode 100644
index 0000000000000..39e59cf858904
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/config_update.go
@@ -0,0 +1,21 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"net/url"
+	"strconv"
+
+	"github.com/docker/docker/api/types/swarm"
+)
+
+// ConfigUpdate attempts to update a Config
+func (cli *Client) ConfigUpdate(ctx context.Context, id string, version swarm.Version, config swarm.ConfigSpec) error {
+	if err := cli.NewVersionError("1.30", "config update"); err != nil {
+		return err
+	}
+	query := url.Values{}
+	query.Set("version", strconv.FormatUint(version.Index, 10))
+	resp, err := cli.post(ctx, "/configs/"+id+"/update", query, config, nil)
+	ensureReaderClosed(resp)
+	return err
+}
diff --git a/vendor/github.com/docker/docker/client/container_attach.go b/vendor/github.com/docker/docker/client/container_attach.go
new file mode 100644
index 0000000000000..3becefba0836c
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/container_attach.go
@@ -0,0 +1,57 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"net/url"
+
+	"github.com/docker/docker/api/types"
+)
+
+// ContainerAttach attaches a connection to a container in the server.
+// It returns a types.HijackedResponse with the hijacked connection
+// and a reader to get output. It's up to the caller to close
+// the hijacked connection by calling types.HijackedResponse.Close.
+//
+// The stream format on the response will be in one of two formats:
+//
+// If the container is using a TTY, there is only a single stream (stdout), and
+// data is copied directly from the container output stream, no extra
+// multiplexing or headers.
+//
+// If the container is *not* using a TTY, streams for stdout and stderr are
+// multiplexed.
+// The format of the multiplexed stream is as follows:
+//
+//	[8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4}[]byte{OUTPUT}
+//
+// STREAM_TYPE can be 1 for stdout and 2 for stderr
+//
+// SIZE1, SIZE2, SIZE3, and SIZE4 are four bytes of uint32 encoded as big endian.
+// This is the size of OUTPUT.
+//
+// You can use github.com/docker/docker/pkg/stdcopy.StdCopy to demultiplex this
+// stream.
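+//
+// For example, a stdout frame carrying the 6-byte payload "hello\n" would be
+// transmitted as:
+//
+//	[]byte{1, 0, 0, 0, 0, 0, 0, 6, 'h', 'e', 'l', 'l', 'o', '\n'}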
+func (cli *Client) ContainerAttach(ctx context.Context, container string, options types.ContainerAttachOptions) (types.HijackedResponse, error) {
+	query := url.Values{}
+	if options.Stream {
+		query.Set("stream", "1")
+	}
+	if options.Stdin {
+		query.Set("stdin", "1")
+	}
+	if options.Stdout {
+		query.Set("stdout", "1")
+	}
+	if options.Stderr {
+		query.Set("stderr", "1")
+	}
+	if options.DetachKeys != "" {
+		query.Set("detachKeys", options.DetachKeys)
+	}
+	if options.Logs {
+		query.Set("logs", "1")
+	}
+
+	headers := map[string][]string{"Content-Type": {"text/plain"}}
+	return cli.postHijacked(ctx, "/containers/"+container+"/attach", query, nil, headers)
+}
diff --git a/vendor/github.com/docker/docker/client/container_commit.go b/vendor/github.com/docker/docker/client/container_commit.go
new file mode 100644
index 0000000000000..2966e88c8eca4
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/container_commit.go
@@ -0,0 +1,55 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"encoding/json"
+	"errors"
+	"net/url"
+
+	"github.com/docker/distribution/reference"
+	"github.com/docker/docker/api/types"
+)
+
+// ContainerCommit applies changes into a container and creates a new tagged image.
+func (cli *Client) ContainerCommit(ctx context.Context, container string, options types.ContainerCommitOptions) (types.IDResponse, error) {
+	var repository, tag string
+	if options.Reference != "" {
+		ref, err := reference.ParseNormalizedNamed(options.Reference)
+		if err != nil {
+			return types.IDResponse{}, err
+		}
+
+		if _, isCanonical := ref.(reference.Canonical); isCanonical {
+			return types.IDResponse{}, errors.New("refusing to create a tag with a digest reference")
+		}
+		ref = reference.TagNameOnly(ref)
+
+		if tagged, ok := ref.(reference.Tagged); ok {
+			tag = tagged.Tag()
+		}
+		repository = reference.FamiliarName(ref)
+	}
+
+	query := url.Values{}
+	query.Set("container", container)
+	query.Set("repo", repository)
+	query.Set("tag", tag)
+	query.Set("comment", options.Comment)
+	query.Set("author", options.Author)
+	for _, change := range options.Changes {
+		query.Add("changes", change)
+	}
+	if !options.Pause {
+		query.Set("pause", "0")
+	}
+
+	var response types.IDResponse
+	resp, err := cli.post(ctx, "/commit", query, options.Config, nil)
+	defer ensureReaderClosed(resp)
+	if err != nil {
+		return response, err
+	}
+
+	err = json.NewDecoder(resp.body).Decode(&response)
+	return response, err
+}
diff --git a/vendor/github.com/docker/docker/client/container_copy.go b/vendor/github.com/docker/docker/client/container_copy.go
new file mode 100644
index 0000000000000..bb278bf7f324d
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/container_copy.go
@@ -0,0 +1,103 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"encoding/base64"
+	"encoding/json"
+	"fmt"
+	"io"
+	"net/http"
+	"net/url"
+	"path/filepath"
+	"strings"
+
+	"github.com/docker/docker/api/types"
+)
+
+// ContainerStatPath returns Stat information about a path inside the container filesystem.
+func (cli *Client) ContainerStatPath(ctx context.Context, containerID, path string) (types.ContainerPathStat, error) {
+	query := url.Values{}
+	query.Set("path", filepath.ToSlash(path)) // Normalize the paths used in the API.
+
+	urlStr := "/containers/" + containerID + "/archive"
+	response, err := cli.head(ctx, urlStr, query, nil)
+	defer ensureReaderClosed(response)
+	if err != nil {
+		return types.ContainerPathStat{}, wrapResponseError(err, response, "container:path", containerID+":"+path)
+	}
+	return getContainerPathStatFromHeader(response.header)
+}
+
+// CopyToContainer copies content into the container filesystem.
+// Note that `content` must be a Reader for a TAR archive
+func (cli *Client) CopyToContainer(ctx context.Context, containerID, dstPath string, content io.Reader, options types.CopyToContainerOptions) error {
+	query := url.Values{}
+	query.Set("path", filepath.ToSlash(dstPath)) // Normalize the paths used in the API.
+	// Do not allow for an existing directory to be overwritten by a non-directory and vice versa.
+	if !options.AllowOverwriteDirWithFile {
+		query.Set("noOverwriteDirNonDir", "true")
+	}
+
+	if options.CopyUIDGID {
+		query.Set("copyUIDGID", "true")
+	}
+
+	apiPath := "/containers/" + containerID + "/archive"
+
+	response, err := cli.putRaw(ctx, apiPath, query, content, nil)
+	defer ensureReaderClosed(response)
+	if err != nil {
+		return wrapResponseError(err, response, "container:path", containerID+":"+dstPath)
+	}
+
+	// TODO this code converts non-error status-codes (e.g., "204 No Content") into an error; verify if this is the desired behavior
+	if response.statusCode != http.StatusOK {
+		return fmt.Errorf("unexpected status code from daemon: %d", response.statusCode)
+	}
+
+	return nil
+}
+
+// CopyFromContainer gets the content from the container and returns it as a Reader
+// for a TAR archive so it can be manipulated on the host. It's up to the caller to close the reader.
+func (cli *Client) CopyFromContainer(ctx context.Context, containerID, srcPath string) (io.ReadCloser, types.ContainerPathStat, error) {
+	query := make(url.Values, 1)
+	query.Set("path", filepath.ToSlash(srcPath)) // Normalize the paths used in the API.
+
+	apiPath := "/containers/" + containerID + "/archive"
+	response, err := cli.get(ctx, apiPath, query, nil)
+	if err != nil {
+		return nil, types.ContainerPathStat{}, wrapResponseError(err, response, "container:path", containerID+":"+srcPath)
+	}
+
+	// TODO this code converts non-error status-codes (e.g., "204 No Content") into an error; verify if this is the desired behavior
+	if response.statusCode != http.StatusOK {
+		return nil, types.ContainerPathStat{}, fmt.Errorf("unexpected status code from daemon: %d", response.statusCode)
+	}
+
+	// In order to get the copy behavior right, we need to know information
+	// about both the source and the destination. The response headers include
+	// stat info about the source that we can use in deciding exactly how to
+	// copy it locally. Along with the stat info about the local destination,
+	// we have everything we need to handle the multiple possibilities there
+	// can be when copying a file/dir from one location to another file/dir.
+	stat, err := getContainerPathStatFromHeader(response.header)
+	if err != nil {
+		return nil, stat, fmt.Errorf("unable to get resource stat from response: %s", err)
+	}
+	return response.body, stat, err
+}
+
+func getContainerPathStatFromHeader(header http.Header) (types.ContainerPathStat, error) {
+	var stat types.ContainerPathStat
+
+	encodedStat := header.Get("X-Docker-Container-Path-Stat")
+	statDecoder := base64.NewDecoder(base64.StdEncoding, strings.NewReader(encodedStat))
+
+	err := json.NewDecoder(statDecoder).Decode(&stat)
+	if err != nil {
+		err = fmt.Errorf("unable to decode container path stat header: %s", err)
+	}
+
+	return stat, err
+}
diff --git a/vendor/github.com/docker/docker/client/container_create.go b/vendor/github.com/docker/docker/client/container_create.go
new file mode 100644
index 0000000000000..c5079ee539e90
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/container_create.go
@@ -0,0 +1,74 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"encoding/json"
+	"net/url"
+	"path"
+
+	"github.com/docker/docker/api/types/container"
+	"github.com/docker/docker/api/types/network"
+	"github.com/docker/docker/api/types/versions"
+	specs "github.com/opencontainers/image-spec/specs-go/v1"
+)
+
+type configWrapper struct {
+	*container.Config
+	HostConfig       *container.HostConfig
+	NetworkingConfig *network.NetworkingConfig
+}
+
+// ContainerCreate creates a new container based on the given configuration.
+// It can be associated with a name, but it's not mandatory.
+func (cli *Client) ContainerCreate(ctx context.Context, config *container.Config, hostConfig *container.HostConfig, networkingConfig *network.NetworkingConfig, platform *specs.Platform, containerName string) (container.ContainerCreateCreatedBody, error) {
+	var response container.ContainerCreateCreatedBody
+
+	if err := cli.NewVersionError("1.25", "stop timeout"); config != nil && config.StopTimeout != nil && err != nil {
+		return response, err
+	}
+
+	// When using API 1.24 and under, the client is responsible for removing the container
+	if hostConfig != nil && versions.LessThan(cli.ClientVersion(), "1.25") {
+		hostConfig.AutoRemove = false
+	}
+
+	if err := cli.NewVersionError("1.41", "specify container image platform"); platform != nil && err != nil {
+		return response, err
+	}
+
+	query := url.Values{}
+	if p := formatPlatform(platform); p != "" {
+		query.Set("platform", p)
+	}
+
+	if containerName != "" {
+		query.Set("name", containerName)
+	}
+
+	body := configWrapper{
+		Config:           config,
+		HostConfig:       hostConfig,
+		NetworkingConfig: networkingConfig,
+	}
+
+	serverResp, err := cli.post(ctx, "/containers/create", query, body, nil)
+	defer ensureReaderClosed(serverResp)
+	if err != nil {
+		return response, err
+	}
+
+	err = json.NewDecoder(serverResp.body).Decode(&response)
+	return response, err
+}
+
+// formatPlatform returns a formatted string representing platform (e.g. linux/arm/v7).
+//
+// Similar to containerd's platforms.Format(), but does allow components to be
+// omitted (e.g. pass "architecture" only, without "os":
+// https://github.com/containerd/containerd/blob/v1.5.2/platforms/platforms.go#L243-L263
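+//
+// For example, {OS: "linux", Architecture: "arm", Variant: "v7"} produces
+// "linux/arm/v7", while {Architecture: "arm64"} alone produces "arm64".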
+func formatPlatform(platform *specs.Platform) string {
+	if platform == nil {
+		return ""
+	}
+	return path.Join(platform.OS, platform.Architecture, platform.Variant)
+}
diff --git a/vendor/github.com/docker/docker/client/container_diff.go b/vendor/github.com/docker/docker/client/container_diff.go
new file mode 100644
index 0000000000000..29dac8491df51
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/container_diff.go
@@ -0,0 +1,23 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"encoding/json"
+	"net/url"
+
+	"github.com/docker/docker/api/types/container"
+)
+
+// ContainerDiff shows differences in a container filesystem since it was started.
+func (cli *Client) ContainerDiff(ctx context.Context, containerID string) ([]container.ContainerChangeResponseItem, error) {
+	var changes []container.ContainerChangeResponseItem
+
+	serverResp, err := cli.get(ctx, "/containers/"+containerID+"/changes", url.Values{}, nil)
+	defer ensureReaderClosed(serverResp)
+	if err != nil {
+		return changes, err
+	}
+
+	err = json.NewDecoder(serverResp.body).Decode(&changes)
+	return changes, err
+}
diff --git a/vendor/github.com/docker/docker/client/container_exec.go b/vendor/github.com/docker/docker/client/container_exec.go
new file mode 100644
index 0000000000000..e3ee755b71dc2
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/container_exec.go
@@ -0,0 +1,54 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"encoding/json"
+
+	"github.com/docker/docker/api/types"
+)
+
+// ContainerExecCreate creates a new exec configuration to run an exec process.
+func (cli *Client) ContainerExecCreate(ctx context.Context, container string, config types.ExecConfig) (types.IDResponse, error) {
+	var response types.IDResponse
+
+	if err := cli.NewVersionError("1.25", "env"); len(config.Env) != 0 && err != nil {
+		return response, err
+	}
+
+	resp, err := cli.post(ctx, "/containers/"+container+"/exec", nil, config, nil)
+	defer ensureReaderClosed(resp)
+	if err != nil {
+		return response, err
+	}
+	err = json.NewDecoder(resp.body).Decode(&response)
+	return response, err
+}
+
+// ContainerExecStart starts an exec process already created in the docker host.
+func (cli *Client) ContainerExecStart(ctx context.Context, execID string, config types.ExecStartCheck) error {
+	resp, err := cli.post(ctx, "/exec/"+execID+"/start", nil, config, nil)
+	ensureReaderClosed(resp)
+	return err
+}
+
+// ContainerExecAttach attaches a connection to an exec process in the server.
+// It returns a types.HijackedResponse with the hijacked connection
+// and a reader to get output. It's up to the caller to close
+// the hijacked connection by calling types.HijackedResponse.Close.
+func (cli *Client) ContainerExecAttach(ctx context.Context, execID string, config types.ExecStartCheck) (types.HijackedResponse, error) {
+	headers := map[string][]string{"Content-Type": {"application/json"}}
+	return cli.postHijacked(ctx, "/exec/"+execID+"/start", nil, config, headers)
+}
+
+// ContainerExecInspect returns information about a specific exec process on the docker host.
+func (cli *Client) ContainerExecInspect(ctx context.Context, execID string) (types.ContainerExecInspect, error) {
+	var response types.ContainerExecInspect
+	resp, err := cli.get(ctx, "/exec/"+execID+"/json", nil, nil)
+	if err != nil {
+		return response, err
+	}
+
+	err = json.NewDecoder(resp.body).Decode(&response)
+	ensureReaderClosed(resp)
+	return response, err
+}
diff --git a/vendor/github.com/docker/docker/client/container_export.go b/vendor/github.com/docker/docker/client/container_export.go
new file mode 100644
index 0000000000000..d0c0a5cbadfa2
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/container_export.go
@@ -0,0 +1,19 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"io"
+	"net/url"
+)
+
+// ContainerExport retrieves the raw contents of a container
+// and returns them as an io.ReadCloser. It's up to the caller
+// to close the stream.
+func (cli *Client) ContainerExport(ctx context.Context, containerID string) (io.ReadCloser, error) {
+	serverResp, err := cli.get(ctx, "/containers/"+containerID+"/export", url.Values{}, nil)
+	if err != nil {
+		return nil, err
+	}
+
+	return serverResp.body, nil
+}
diff --git a/vendor/github.com/docker/docker/client/container_inspect.go b/vendor/github.com/docker/docker/client/container_inspect.go
new file mode 100644
index 0000000000000..43db32bd973a1
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/container_inspect.go
@@ -0,0 +1,53 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"bytes"
+	"context"
+	"encoding/json"
+	"io"
+	"net/url"
+
+	"github.com/docker/docker/api/types"
+)
+
+// ContainerInspect returns the container information.
+func (cli *Client) ContainerInspect(ctx context.Context, containerID string) (types.ContainerJSON, error) {
+	if containerID == "" {
+		return types.ContainerJSON{}, objectNotFoundError{object: "container", id: containerID}
+	}
+	serverResp, err := cli.get(ctx, "/containers/"+containerID+"/json", nil, nil)
+	defer ensureReaderClosed(serverResp)
+	if err != nil {
+		return types.ContainerJSON{}, wrapResponseError(err, serverResp, "container", containerID)
+	}
+
+	var response types.ContainerJSON
+	err = json.NewDecoder(serverResp.body).Decode(&response)
+	return response, err
+}
+
+// ContainerInspectWithRaw returns the container information and its raw representation.
+func (cli *Client) ContainerInspectWithRaw(ctx context.Context, containerID string, getSize bool) (types.ContainerJSON, []byte, error) {
+	if containerID == "" {
+		return types.ContainerJSON{}, nil, objectNotFoundError{object: "container", id: containerID}
+	}
+	query := url.Values{}
+	if getSize {
+		query.Set("size", "1")
+	}
+	serverResp, err := cli.get(ctx, "/containers/"+containerID+"/json", query, nil)
+	defer ensureReaderClosed(serverResp)
+	if err != nil {
+		return types.ContainerJSON{}, nil, wrapResponseError(err, serverResp, "container", containerID)
+	}
+
+	body, err := io.ReadAll(serverResp.body)
+	if err != nil {
+		return types.ContainerJSON{}, nil, err
+	}
+
+	var response types.ContainerJSON
+	rdr := bytes.NewReader(body)
+	err = json.NewDecoder(rdr).Decode(&response)
+	return response, body, err
+}
diff --git a/vendor/github.com/docker/docker/client/container_kill.go b/vendor/github.com/docker/docker/client/container_kill.go
new file mode 100644
index 0000000000000..4d6f1d23da9de
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/container_kill.go
@@ -0,0 +1,16 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"net/url"
+)
+
+// ContainerKill terminates the container process but does not remove the container from the docker host.
+func (cli *Client) ContainerKill(ctx context.Context, containerID, signal string) error {
+	query := url.Values{}
+	query.Set("signal", signal)
+
+	resp, err := cli.post(ctx, "/containers/"+containerID+"/kill", query, nil, nil)
+	ensureReaderClosed(resp)
+	return err
+}
diff --git a/vendor/github.com/docker/docker/client/container_list.go b/vendor/github.com/docker/docker/client/container_list.go
new file mode 100644
index 0000000000000..a973de597fdf3
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/container_list.go
@@ -0,0 +1,57 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"encoding/json"
+	"net/url"
+	"strconv"
+
+	"github.com/docker/docker/api/types"
+	"github.com/docker/docker/api/types/filters"
+)
+
+// ContainerList returns the list of containers in the docker host.
+func (cli *Client) ContainerList(ctx context.Context, options types.ContainerListOptions) ([]types.Container, error) {
+	query := url.Values{}
+
+	if options.All {
+		query.Set("all", "1")
+	}
+
+	if options.Limit != -1 {
+		query.Set("limit", strconv.Itoa(options.Limit))
+	}
+
+	if options.Since != "" {
+		query.Set("since", options.Since)
+	}
+
+	if options.Before != "" {
+		query.Set("before", options.Before)
+	}
+
+	if options.Size {
+		query.Set("size", "1")
+	}
+
+	if options.Filters.Len() > 0 {
+		//nolint:staticcheck // ignore SA1019 for old code
+		filterJSON, err := filters.ToParamWithVersion(cli.version, options.Filters)
+
+		if err != nil {
+			return nil, err
+		}
+
+		query.Set("filters", filterJSON)
+	}
+
+	resp, err := cli.get(ctx, "/containers/json", query, nil)
+	defer ensureReaderClosed(resp)
+	if err != nil {
+		return nil, err
+	}
+
+	var containers []types.Container
+	err = json.NewDecoder(resp.body).Decode(&containers)
+	return containers, err
+}
diff --git a/vendor/github.com/docker/docker/client/container_logs.go b/vendor/github.com/docker/docker/client/container_logs.go
new file mode 100644
index 0000000000000..add852a833a8d
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/container_logs.go
@@ -0,0 +1,80 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"io"
+	"net/url"
+	"time"
+
+	"github.com/docker/docker/api/types"
+	timetypes "github.com/docker/docker/api/types/time"
+	"github.com/pkg/errors"
+)
+
+// ContainerLogs returns the logs generated by a container in an io.ReadCloser.
+// It's up to the caller to close the stream.
+//
+// The stream format on the response will be in one of two formats:
+//
+// If the container is using a TTY, there is only a single stream (stdout), and
+// data is copied directly from the container output stream, no extra
+// multiplexing or headers.
+//
+// If the container is *not* using a TTY, streams for stdout and stderr are
+// multiplexed.
+// The format of the multiplexed stream is as follows:
+//
+//	[8]byte{STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4}[]byte{OUTPUT}
+//
+// STREAM_TYPE can be 1 for stdout and 2 for stderr
+//
+// SIZE1, SIZE2, SIZE3, and SIZE4 are four bytes of uint32 encoded as big endian.
+// This is the size of OUTPUT.
+//
+// You can use github.com/docker/docker/pkg/stdcopy.StdCopy to demultiplex this
+// stream.
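+//
+// For example, to stream demultiplexed logs to the terminal (error handling
+// elided for brevity):
+//
+//	reader, err := cli.ContainerLogs(ctx, containerID, types.ContainerLogsOptions{ShowStdout: true, ShowStderr: true})
+//	if err == nil {
+//		defer reader.Close()
+//		stdcopy.StdCopy(os.Stdout, os.Stderr, reader)
+//	}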
+func (cli *Client) ContainerLogs(ctx context.Context, container string, options types.ContainerLogsOptions) (io.ReadCloser, error) {
+	query := url.Values{}
+	if options.ShowStdout {
+		query.Set("stdout", "1")
+	}
+
+	if options.ShowStderr {
+		query.Set("stderr", "1")
+	}
+
+	if options.Since != "" {
+		ts, err := timetypes.GetTimestamp(options.Since, time.Now())
+		if err != nil {
+			return nil, errors.Wrap(err, `invalid value for "since"`)
+		}
+		query.Set("since", ts)
+	}
+
+	if options.Until != "" {
+		ts, err := timetypes.GetTimestamp(options.Until, time.Now())
+		if err != nil {
+			return nil, errors.Wrap(err, `invalid value for "until"`)
+		}
+		query.Set("until", ts)
+	}
+
+	if options.Timestamps {
+		query.Set("timestamps", "1")
+	}
+
+	if options.Details {
+		query.Set("details", "1")
+	}
+
+	if options.Follow {
+		query.Set("follow", "1")
+	}
+	query.Set("tail", options.Tail)
+
+	resp, err := cli.get(ctx, "/containers/"+container+"/logs", query, nil)
+	if err != nil {
+		return nil, wrapResponseError(err, resp, "container", container)
+	}
+	return resp.body, nil
+}
diff --git a/vendor/github.com/docker/docker/client/container_pause.go b/vendor/github.com/docker/docker/client/container_pause.go
new file mode 100644
index 0000000000000..5e7271a371ce7
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/container_pause.go
@@ -0,0 +1,10 @@
+package client // import "github.com/docker/docker/client"
+
+import "context"
+
+// ContainerPause pauses the main process of a given container without terminating it.
+func (cli *Client) ContainerPause(ctx context.Context, containerID string) error {
+	resp, err := cli.post(ctx, "/containers/"+containerID+"/pause", nil, nil, nil)
+	ensureReaderClosed(resp)
+	return err
+}
diff --git a/vendor/github.com/docker/docker/client/container_prune.go b/vendor/github.com/docker/docker/client/container_prune.go
new file mode 100644
index 0000000000000..04383deaaffc3
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/container_prune.go
@@ -0,0 +1,36 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"encoding/json"
+	"fmt"
+
+	"github.com/docker/docker/api/types"
+	"github.com/docker/docker/api/types/filters"
+)
+
+// ContainersPrune requests the daemon to delete unused data
+func (cli *Client) ContainersPrune(ctx context.Context, pruneFilters filters.Args) (types.ContainersPruneReport, error) {
+	var report types.ContainersPruneReport
+
+	if err := cli.NewVersionError("1.25", "container prune"); err != nil {
+		return report, err
+	}
+
+	query, err := getFiltersQuery(pruneFilters)
+	if err != nil {
+		return report, err
+	}
+
+	serverResp, err := cli.post(ctx, "/containers/prune", query, nil, nil)
+	defer ensureReaderClosed(serverResp)
+	if err != nil {
+		return report, err
+	}
+
+	if err := json.NewDecoder(serverResp.body).Decode(&report); err != nil {
+		return report, fmt.Errorf("Error retrieving containers prune report: %v", err)
+	}
+
+	return report, nil
+}
diff --git a/vendor/github.com/docker/docker/client/container_remove.go b/vendor/github.com/docker/docker/client/container_remove.go
new file mode 100644
index 0000000000000..df81461b889c5
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/container_remove.go
@@ -0,0 +1,27 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"net/url"
+
+	"github.com/docker/docker/api/types"
+)
+
+// ContainerRemove kills and removes a container from the docker host.
+func (cli *Client) ContainerRemove(ctx context.Context, containerID string, options types.ContainerRemoveOptions) error {
+	query := url.Values{}
+	if options.RemoveVolumes {
+		query.Set("v", "1")
+	}
+	if options.RemoveLinks {
+		query.Set("link", "1")
+	}
+
+	if options.Force {
+		query.Set("force", "1")
+	}
+
+	resp, err := cli.delete(ctx, "/containers/"+containerID, query, nil)
+	defer ensureReaderClosed(resp)
+	return wrapResponseError(err, resp, "container", containerID)
+}
diff --git a/vendor/github.com/docker/docker/client/container_rename.go b/vendor/github.com/docker/docker/client/container_rename.go
new file mode 100644
index 0000000000000..240fdf552b440
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/container_rename.go
@@ -0,0 +1,15 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"net/url"
+)
+
+// ContainerRename changes the name of a given container.
+func (cli *Client) ContainerRename(ctx context.Context, containerID, newContainerName string) error {
+	query := url.Values{}
+	query.Set("name", newContainerName)
+	resp, err := cli.post(ctx, "/containers/"+containerID+"/rename", query, nil, nil)
+	ensureReaderClosed(resp)
+	return err
+}
diff --git a/vendor/github.com/docker/docker/client/container_resize.go b/vendor/github.com/docker/docker/client/container_resize.go
new file mode 100644
index 0000000000000..a9d4c0c79a0d3
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/container_resize.go
@@ -0,0 +1,29 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"net/url"
+	"strconv"
+
+	"github.com/docker/docker/api/types"
+)
+
+// ContainerResize changes the size of the tty for a container.
+func (cli *Client) ContainerResize(ctx context.Context, containerID string, options types.ResizeOptions) error {
+	return cli.resize(ctx, "/containers/"+containerID, options.Height, options.Width)
+}
+
+// ContainerExecResize changes the size of the tty for an exec process running inside a container.
+func (cli *Client) ContainerExecResize(ctx context.Context, execID string, options types.ResizeOptions) error {
+	return cli.resize(ctx, "/exec/"+execID, options.Height, options.Width)
+}
+
+func (cli *Client) resize(ctx context.Context, basePath string, height, width uint) error {
+	query := url.Values{}
+	query.Set("h", strconv.Itoa(int(height)))
+	query.Set("w", strconv.Itoa(int(width)))
+
+	resp, err := cli.post(ctx, basePath+"/resize", query, nil, nil)
+	ensureReaderClosed(resp)
+	return err
+}
diff --git a/vendor/github.com/docker/docker/client/container_restart.go b/vendor/github.com/docker/docker/client/container_restart.go
new file mode 100644
index 0000000000000..41e421969f47a
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/container_restart.go
@@ -0,0 +1,22 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"net/url"
+	"time"
+
+	timetypes "github.com/docker/docker/api/types/time"
+)
+
+// ContainerRestart stops and starts a container again.
+// It makes the daemon wait for the container to be up again for
+// a specific amount of time, given the timeout.
+func (cli *Client) ContainerRestart(ctx context.Context, containerID string, timeout *time.Duration) error {
+	query := url.Values{}
+	if timeout != nil {
+		query.Set("t", timetypes.DurationToSecondsString(*timeout))
+	}
+	resp, err := cli.post(ctx, "/containers/"+containerID+"/restart", query, nil, nil)
+	ensureReaderClosed(resp)
+	return err
+}
diff --git a/vendor/github.com/docker/docker/client/container_start.go b/vendor/github.com/docker/docker/client/container_start.go
new file mode 100644
index 0000000000000..c2e0b15dca8bf
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/container_start.go
@@ -0,0 +1,23 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"net/url"
+
+	"github.com/docker/docker/api/types"
+)
+
+// ContainerStart sends a request to the docker daemon to start a container.
+func (cli *Client) ContainerStart(ctx context.Context, containerID string, options types.ContainerStartOptions) error {
+	query := url.Values{}
+	if len(options.CheckpointID) != 0 {
+		query.Set("checkpoint", options.CheckpointID)
+	}
+	if len(options.CheckpointDir) != 0 {
+		query.Set("checkpoint-dir", options.CheckpointDir)
+	}
+
+	resp, err := cli.post(ctx, "/containers/"+containerID+"/start", query, nil, nil)
+	ensureReaderClosed(resp)
+	return err
+}
diff --git a/vendor/github.com/docker/docker/client/container_stats.go b/vendor/github.com/docker/docker/client/container_stats.go
new file mode 100644
index 0000000000000..0a6488dde8266
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/container_stats.go
@@ -0,0 +1,42 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"net/url"
+
+	"github.com/docker/docker/api/types"
+)
+
+// ContainerStats returns near realtime stats for a given container.
+// It's up to the caller to close the io.ReadCloser returned.
+func (cli *Client) ContainerStats(ctx context.Context, containerID string, stream bool) (types.ContainerStats, error) {
+	query := url.Values{}
+	query.Set("stream", "0")
+	if stream {
+		query.Set("stream", "1")
+	}
+
+	resp, err := cli.get(ctx, "/containers/"+containerID+"/stats", query, nil)
+	if err != nil {
+		return types.ContainerStats{}, err
+	}
+
+	osType := getDockerOS(resp.header.Get("Server"))
+	return types.ContainerStats{Body: resp.body, OSType: osType}, err
+}
+
+// ContainerStatsOneShot gets a single stat entry from a container.
+// It differs from `ContainerStats` in that the API should not wait to prime the stats.
+func (cli *Client) ContainerStatsOneShot(ctx context.Context, containerID string) (types.ContainerStats, error) {
+	query := url.Values{}
+	query.Set("stream", "0")
+	query.Set("one-shot", "1")
+
+	resp, err := cli.get(ctx, "/containers/"+containerID+"/stats", query, nil)
+	if err != nil {
+		return types.ContainerStats{}, err
+	}
+
+	osType := getDockerOS(resp.header.Get("Server"))
+	return types.ContainerStats{Body: resp.body, OSType: osType}, err
+}
diff --git a/vendor/github.com/docker/docker/client/container_stop.go b/vendor/github.com/docker/docker/client/container_stop.go
new file mode 100644
index 0000000000000..629d7ab64c807
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/container_stop.go
@@ -0,0 +1,26 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"net/url"
+	"time"
+
+	timetypes "github.com/docker/docker/api/types/time"
+)
+
+// ContainerStop stops a container. In case the container fails to stop
+// gracefully within a time frame specified by the timeout argument,
+// it is forcefully terminated (killed).
+//
+// If the timeout is nil, the container's StopTimeout value is used, if set,
+// otherwise the engine default. A negative timeout value can be specified,
+// meaning no timeout, i.e. no forceful termination is performed.
+func (cli *Client) ContainerStop(ctx context.Context, containerID string, timeout *time.Duration) error {
+	query := url.Values{}
+	if timeout != nil {
+		query.Set("t", timetypes.DurationToSecondsString(*timeout))
+	}
+	resp, err := cli.post(ctx, "/containers/"+containerID+"/stop", query, nil, nil)
+	ensureReaderClosed(resp)
+	return err
+}
diff --git a/vendor/github.com/docker/docker/client/container_top.go b/vendor/github.com/docker/docker/client/container_top.go
new file mode 100644
index 0000000000000..a5b78999bf0a7
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/container_top.go
@@ -0,0 +1,28 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"encoding/json"
+	"net/url"
+	"strings"
+
+	"github.com/docker/docker/api/types/container"
+)
+
+// ContainerTop shows process information from within a container.
+func (cli *Client) ContainerTop(ctx context.Context, containerID string, arguments []string) (container.ContainerTopOKBody, error) {
+	var response container.ContainerTopOKBody
+	query := url.Values{}
+	if len(arguments) > 0 {
+		query.Set("ps_args", strings.Join(arguments, " "))
+	}
+
+	resp, err := cli.get(ctx, "/containers/"+containerID+"/top", query, nil)
+	defer ensureReaderClosed(resp)
+	if err != nil {
+		return response, err
+	}
+
+	err = json.NewDecoder(resp.body).Decode(&response)
+	return response, err
+}
diff --git a/vendor/github.com/docker/docker/client/container_unpause.go b/vendor/github.com/docker/docker/client/container_unpause.go
new file mode 100644
index 0000000000000..1d8f873169b30
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/container_unpause.go
@@ -0,0 +1,10 @@
+package client // import "github.com/docker/docker/client"
+
+import "context"
+
+// ContainerUnpause resumes the process execution within a container
+func (cli *Client) ContainerUnpause(ctx context.Context, containerID string) error {
+	resp, err := cli.post(ctx, "/containers/"+containerID+"/unpause", nil, nil, nil)
+	ensureReaderClosed(resp)
+	return err
+}
diff --git a/vendor/github.com/docker/docker/client/container_update.go b/vendor/github.com/docker/docker/client/container_update.go
new file mode 100644
index 0000000000000..6917cf9fb36d3
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/container_update.go
@@ -0,0 +1,21 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"encoding/json"
+
+	"github.com/docker/docker/api/types/container"
+)
+
+// ContainerUpdate updates resources of a container
+func (cli *Client) ContainerUpdate(ctx context.Context, containerID string, updateConfig container.UpdateConfig) (container.ContainerUpdateOKBody, error) {
+	var response container.ContainerUpdateOKBody
+	serverResp, err := cli.post(ctx, "/containers/"+containerID+"/update", nil, updateConfig, nil)
+	defer ensureReaderClosed(serverResp)
+	if err != nil {
+		return response, err
+	}
+
+	err = json.NewDecoder(serverResp.body).Decode(&response)
+	return response, err
+}
diff --git a/vendor/github.com/docker/docker/client/container_wait.go b/vendor/github.com/docker/docker/client/container_wait.go
new file mode 100644
index 0000000000000..6ab8c1da96a2c
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/container_wait.go
@@ -0,0 +1,83 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"encoding/json"
+	"net/url"
+
+	"github.com/docker/docker/api/types/container"
+	"github.com/docker/docker/api/types/versions"
+)
+
+// ContainerWait waits until the specified container is in a certain state
+// indicated by the given condition, either "not-running" (default),
+// "next-exit", or "removed".
+//
+// If this client's API version is before 1.30, condition is ignored and
+// ContainerWait will return immediately with the two channels, as the server
+// will wait as if the condition were "not-running".
+//
+// If this client's API version is at least 1.30, ContainerWait blocks until
+// the request has been acknowledged by the server (with a response header),
+// then returns two channels on which the caller can wait for the exit status
+// of the container or an error if there was a problem either beginning the
+// wait request or in getting the response. This allows the caller to
+// synchronize ContainerWait with other calls, such as specifying a
+// "next-exit" condition before issuing a ContainerStart request.
+func (cli *Client) ContainerWait(ctx context.Context, containerID string, condition container.WaitCondition) (<-chan container.ContainerWaitOKBody, <-chan error) {
+	if versions.LessThan(cli.ClientVersion(), "1.30") {
+		return cli.legacyContainerWait(ctx, containerID)
+	}
+
+	resultC := make(chan container.ContainerWaitOKBody)
+	errC := make(chan error, 1)
+
+	query := url.Values{}
+	query.Set("condition", string(condition))
+
+	resp, err := cli.post(ctx, "/containers/"+containerID+"/wait", query, nil, nil)
+	if err != nil {
+		defer ensureReaderClosed(resp)
+		errC <- err
+		return resultC, errC
+	}
+
+	go func() {
+		defer ensureReaderClosed(resp)
+		var res container.ContainerWaitOKBody
+		if err := json.NewDecoder(resp.body).Decode(&res); err != nil {
+			errC <- err
+			return
+		}
+
+		resultC <- res
+	}()
+
+	return resultC, errC
+}
+
+// legacyContainerWait returns immediately and doesn't have an option to wait
+// until the container is removed.
+func (cli *Client) legacyContainerWait(ctx context.Context, containerID string) (<-chan container.ContainerWaitOKBody, <-chan error) {
+	resultC := make(chan container.ContainerWaitOKBody)
+	errC := make(chan error)
+
+	go func() {
+		resp, err := cli.post(ctx, "/containers/"+containerID+"/wait", nil, nil, nil)
+		if err != nil {
+			errC <- err
+			return
+		}
+		defer ensureReaderClosed(resp)
+
+		var res container.ContainerWaitOKBody
+		if err := json.NewDecoder(resp.body).Decode(&res); err != nil {
+			errC <- err
+			return
+		}
+
+		resultC <- res
+	}()
+
+	return resultC, errC
+}
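The two-channel contract of ContainerWait is easiest to see in isolation: exactly one of the two returned channels yields a value, so the caller typically selects on both. A hypothetical sketch of the consuming side, with a local `waitBody` standing in for container.ContainerWaitOKBody:

```go
package main

import "fmt"

// waitBody stands in for container.ContainerWaitOKBody.
type waitBody struct{ StatusCode int64 }

// awaitExit consumes the (result, error) channel pair in the way a caller
// of ContainerWait would: whichever channel fires first decides the outcome.
func awaitExit(resultC <-chan waitBody, errC <-chan error) (int64, error) {
	select {
	case res := <-resultC:
		return res.StatusCode, nil
	case err := <-errC:
		return -1, err
	}
}

func main() {
	resultC := make(chan waitBody, 1)
	errC := make(chan error, 1)
	// Simulate the container exiting with status 0.
	resultC <- waitBody{StatusCode: 0}
	code, err := awaitExit(resultC, errC)
	fmt.Println(code, err)
}
```

With the real client, this select would typically run after issuing ContainerStart, so a "next-exit" wait is registered before the container can exit.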
diff --git a/vendor/github.com/docker/docker/client/disk_usage.go b/vendor/github.com/docker/docker/client/disk_usage.go
new file mode 100644
index 0000000000000..354cd36939a8c
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/disk_usage.go
@@ -0,0 +1,26 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"encoding/json"
+	"fmt"
+
+	"github.com/docker/docker/api/types"
+)
+
+// DiskUsage requests the current data usage from the daemon
+func (cli *Client) DiskUsage(ctx context.Context) (types.DiskUsage, error) {
+	var du types.DiskUsage
+
+	serverResp, err := cli.get(ctx, "/system/df", nil, nil)
+	defer ensureReaderClosed(serverResp)
+	if err != nil {
+		return du, err
+	}
+
+	if err := json.NewDecoder(serverResp.body).Decode(&du); err != nil {
+		return du, fmt.Errorf("Error retrieving disk usage: %v", err)
+	}
+
+	return du, nil
+}
diff --git a/vendor/github.com/docker/docker/client/distribution_inspect.go b/vendor/github.com/docker/docker/client/distribution_inspect.go
new file mode 100644
index 0000000000000..f4e3794cb4c6d
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/distribution_inspect.go
@@ -0,0 +1,38 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"encoding/json"
+	"net/url"
+
+	registrytypes "github.com/docker/docker/api/types/registry"
+)
+
+// DistributionInspect returns the image digest with the full manifest.
+func (cli *Client) DistributionInspect(ctx context.Context, image, encodedRegistryAuth string) (registrytypes.DistributionInspect, error) {
+	// Contact the registry to retrieve digest and platform information
+	var distributionInspect registrytypes.DistributionInspect
+	if image == "" {
+		return distributionInspect, objectNotFoundError{object: "distribution", id: image}
+	}
+
+	if err := cli.NewVersionError("1.30", "distribution inspect"); err != nil {
+		return distributionInspect, err
+	}
+	var headers map[string][]string
+
+	if encodedRegistryAuth != "" {
+		headers = map[string][]string{
+			"X-Registry-Auth": {encodedRegistryAuth},
+		}
+	}
+
+	resp, err := cli.get(ctx, "/distribution/"+image+"/json", url.Values{}, headers)
+	defer ensureReaderClosed(resp)
+	if err != nil {
+		return distributionInspect, err
+	}
+
+	err = json.NewDecoder(resp.body).Decode(&distributionInspect)
+	return distributionInspect, err
+}
diff --git a/vendor/github.com/docker/docker/client/errors.go b/vendor/github.com/docker/docker/client/errors.go
new file mode 100644
index 0000000000000..041bc8d49c44c
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/errors.go
@@ -0,0 +1,138 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"fmt"
+	"net/http"
+
+	"github.com/docker/docker/api/types/versions"
+	"github.com/docker/docker/errdefs"
+	"github.com/pkg/errors"
+)
+
+// errConnectionFailed implements an error returned when connection failed.
+type errConnectionFailed struct {
+	host string
+}
+
+// Error returns a string representation of an errConnectionFailed
+func (err errConnectionFailed) Error() string {
+	if err.host == "" {
+		return "Cannot connect to the Docker daemon. Is the docker daemon running on this host?"
+	}
+	return fmt.Sprintf("Cannot connect to the Docker daemon at %s. Is the docker daemon running?", err.host)
+}
+
+// IsErrConnectionFailed returns true if the error is caused by a failed connection.
+func IsErrConnectionFailed(err error) bool {
+	return errors.As(err, &errConnectionFailed{})
+}
+
+// ErrorConnectionFailed returns an error with host in the error message when connection to docker daemon failed.
+func ErrorConnectionFailed(host string) error {
+	return errConnectionFailed{host: host}
+}
+
+// Deprecated: use the errdefs.NotFound() interface instead. Kept for backward compatibility
+type notFound interface {
+	error
+	NotFound() bool
+}
+
+// IsErrNotFound returns true if the error is a NotFound error, which is returned
+// by the API when some object is not found.
+func IsErrNotFound(err error) bool {
+	var e notFound
+	if errors.As(err, &e) {
+		return true
+	}
+	return errdefs.IsNotFound(err)
+}
+
+type objectNotFoundError struct {
+	object string
+	id     string
+}
+
+func (e objectNotFoundError) NotFound() {}
+
+func (e objectNotFoundError) Error() string {
+	return fmt.Sprintf("Error: No such %s: %s", e.object, e.id)
+}
+
+func wrapResponseError(err error, resp serverResponse, object, id string) error {
+	switch {
+	case err == nil:
+		return nil
+	case resp.statusCode == http.StatusNotFound:
+		return objectNotFoundError{object: object, id: id}
+	case resp.statusCode == http.StatusNotImplemented:
+		return errdefs.NotImplemented(err)
+	default:
+		return err
+	}
+}
+
+// unauthorizedError represents an authorization error in a remote registry.
+type unauthorizedError struct {
+	cause error
+}
+
+// Error returns a string representation of an unauthorizedError
+func (u unauthorizedError) Error() string {
+	return u.cause.Error()
+}
+
+// IsErrUnauthorized returns true if the error is caused
+// when a remote registry authentication fails
+func IsErrUnauthorized(err error) bool {
+	if _, ok := err.(unauthorizedError); ok {
+		return ok
+	}
+	return errdefs.IsUnauthorized(err)
+}
+
+type pluginPermissionDenied struct {
+	name string
+}
+
+func (e pluginPermissionDenied) Error() string {
+	return "Permission denied while installing plugin " + e.name
+}
+
+// IsErrPluginPermissionDenied returns true if the error is caused
+// when a user denies a plugin's permissions
+func IsErrPluginPermissionDenied(err error) bool {
+	_, ok := err.(pluginPermissionDenied)
+	return ok
+}
+
+type notImplementedError struct {
+	message string
+}
+
+func (e notImplementedError) Error() string {
+	return e.message
+}
+
+func (e notImplementedError) NotImplemented() bool {
+	return true
+}
+
+// IsErrNotImplemented returns true if the error is a NotImplemented error.
+// This is returned by the API when a requested feature has not been
+// implemented.
+func IsErrNotImplemented(err error) bool {
+	if _, ok := err.(notImplementedError); ok {
+		return ok
+	}
+	return errdefs.IsNotImplemented(err)
+}
+
+// NewVersionError returns an error if the APIVersion required
+// is less than the current supported version
+func (cli *Client) NewVersionError(APIrequired, feature string) error {
+	if cli.version != "" && versions.LessThan(cli.version, APIrequired) {
+		return fmt.Errorf("%q requires API version %s, but the Docker daemon API version is %s", feature, APIrequired, cli.version)
+	}
+	return nil
+}
diff --git a/vendor/github.com/docker/docker/client/events.go b/vendor/github.com/docker/docker/client/events.go
new file mode 100644
index 0000000000000..f0dc9d9e12f32
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/events.go
@@ -0,0 +1,102 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"encoding/json"
+	"net/url"
+	"time"
+
+	"github.com/docker/docker/api/types"
+	"github.com/docker/docker/api/types/events"
+	"github.com/docker/docker/api/types/filters"
+	timetypes "github.com/docker/docker/api/types/time"
+)
+
+// Events returns a stream of events in the daemon. It's up to the caller to close the stream
+// by cancelling the context. Once the stream has been completely read, an io.EOF error will
+// be sent over the error channel. If an error is sent, all processing will be stopped. It's up
+// to the caller to reopen the stream in the event of an error by reinvoking this method.
+func (cli *Client) Events(ctx context.Context, options types.EventsOptions) (<-chan events.Message, <-chan error) {
+
+	messages := make(chan events.Message)
+	errs := make(chan error, 1)
+
+	started := make(chan struct{})
+	go func() {
+		defer close(errs)
+
+		query, err := buildEventsQueryParams(cli.version, options)
+		if err != nil {
+			close(started)
+			errs <- err
+			return
+		}
+
+		resp, err := cli.get(ctx, "/events", query, nil)
+		if err != nil {
+			close(started)
+			errs <- err
+			return
+		}
+		defer resp.body.Close()
+
+		decoder := json.NewDecoder(resp.body)
+
+		close(started)
+		for {
+			select {
+			case <-ctx.Done():
+				errs <- ctx.Err()
+				return
+			default:
+				var event events.Message
+				if err := decoder.Decode(&event); err != nil {
+					errs <- err
+					return
+				}
+
+				select {
+				case messages <- event:
+				case <-ctx.Done():
+					errs <- ctx.Err()
+					return
+				}
+			}
+		}
+	}()
+	<-started
+
+	return messages, errs
+}
+
+func buildEventsQueryParams(cliVersion string, options types.EventsOptions) (url.Values, error) {
+	query := url.Values{}
+	ref := time.Now()
+
+	if options.Since != "" {
+		ts, err := timetypes.GetTimestamp(options.Since, ref)
+		if err != nil {
+			return nil, err
+		}
+		query.Set("since", ts)
+	}
+
+	if options.Until != "" {
+		ts, err := timetypes.GetTimestamp(options.Until, ref)
+		if err != nil {
+			return nil, err
+		}
+		query.Set("until", ts)
+	}
+
+	if options.Filters.Len() > 0 {
+		//nolint:staticcheck // ignore SA1019 for old code
+		filterJSON, err := filters.ToParamWithVersion(cliVersion, options.Filters)
+		if err != nil {
+			return nil, err
+		}
+		query.Set("filters", filterJSON)
+	}
+
+	return query, nil
+}
diff --git a/vendor/github.com/docker/docker/client/hijack.go b/vendor/github.com/docker/docker/client/hijack.go
new file mode 100644
index 0000000000000..e1dc49ef0f66a
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/hijack.go
@@ -0,0 +1,145 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"bufio"
+	"context"
+	"crypto/tls"
+	"fmt"
+	"net"
+	"net/http"
+	"net/http/httputil"
+	"net/url"
+	"time"
+
+	"github.com/docker/docker/api/types"
+	"github.com/docker/go-connections/sockets"
+	"github.com/pkg/errors"
+)
+
+// postHijacked sends a POST request and hijacks the connection.
+func (cli *Client) postHijacked(ctx context.Context, path string, query url.Values, body interface{}, headers map[string][]string) (types.HijackedResponse, error) {
+	bodyEncoded, err := encodeData(body)
+	if err != nil {
+		return types.HijackedResponse{}, err
+	}
+
+	apiPath := cli.getAPIPath(ctx, path, query)
+	req, err := http.NewRequest(http.MethodPost, apiPath, bodyEncoded)
+	if err != nil {
+		return types.HijackedResponse{}, err
+	}
+	req = cli.addHeaders(req, headers)
+
+	conn, err := cli.setupHijackConn(ctx, req, "tcp")
+	if err != nil {
+		return types.HijackedResponse{}, err
+	}
+
+	return types.HijackedResponse{Conn: conn, Reader: bufio.NewReader(conn)}, err
+}
+
+// DialHijack returns a hijacked connection with negotiated protocol proto.
+func (cli *Client) DialHijack(ctx context.Context, url, proto string, meta map[string][]string) (net.Conn, error) {
+	req, err := http.NewRequest(http.MethodPost, url, nil)
+	if err != nil {
+		return nil, err
+	}
+	req = cli.addHeaders(req, meta)
+
+	return cli.setupHijackConn(ctx, req, proto)
+}
+
+// fallbackDial is used when WithDialer() was not called.
+// See cli.Dialer().
+func fallbackDial(proto, addr string, tlsConfig *tls.Config) (net.Conn, error) {
+	if tlsConfig != nil && proto != "unix" && proto != "npipe" {
+		return tls.Dial(proto, addr, tlsConfig)
+	}
+	if proto == "npipe" {
+		return sockets.DialPipe(addr, 32*time.Second)
+	}
+	return net.Dial(proto, addr)
+}
+
+func (cli *Client) setupHijackConn(ctx context.Context, req *http.Request, proto string) (net.Conn, error) {
+	req.Host = cli.addr
+	req.Header.Set("Connection", "Upgrade")
+	req.Header.Set("Upgrade", proto)
+
+	dialer := cli.Dialer()
+	conn, err := dialer(ctx)
+	if err != nil {
+		return nil, errors.Wrap(err, "cannot connect to the Docker daemon. Is 'docker daemon' running on this host?")
+	}
+
+	// When we set up a TCP connection for hijack, there could be long periods
+	// of inactivity (a long running command with no output) that in certain
+	// network setups may cause ECONNTIMEOUT, leaving the client in an unknown
+	// state. Setting TCP KeepAlive on the socket connection will prohibit
+	// ECONNTIMEOUT unless the socket connection truly is broken
+	if tcpConn, ok := conn.(*net.TCPConn); ok {
+		tcpConn.SetKeepAlive(true)
+		tcpConn.SetKeepAlivePeriod(30 * time.Second)
+	}
+
+	clientconn := httputil.NewClientConn(conn, nil)
+	defer clientconn.Close()
+
+	// Server hijacks the connection, error 'connection closed' expected
+	resp, err := clientconn.Do(req)
+
+	//nolint:staticcheck // ignore SA1019 for connecting to old (pre go1.8) daemons
+	if err != httputil.ErrPersistEOF {
+		if err != nil {
+			return nil, err
+		}
+		if resp.StatusCode != http.StatusSwitchingProtocols {
+			resp.Body.Close()
+			return nil, fmt.Errorf("unable to upgrade to %s, received %d", proto, resp.StatusCode)
+		}
+	}
+
+	c, br := clientconn.Hijack()
+	if br.Buffered() > 0 {
+		// If there is buffered content, wrap the connection.  We return an
+		// object that implements CloseWrite iff the underlying connection
+		// implements it.
+		if _, ok := c.(types.CloseWriter); ok {
+			c = &hijackedConnCloseWriter{&hijackedConn{c, br}}
+		} else {
+			c = &hijackedConn{c, br}
+		}
+	} else {
+		br.Reset(nil)
+	}
+
+	return c, nil
+}
+
+// hijackedConn wraps a net.Conn and is returned by setupHijackConn in the case
+// that a) there was already buffered data in the http layer when Hijack() was
+// called, and b) the underlying net.Conn does *not* implement CloseWrite().
+// hijackedConn does not implement CloseWrite() either.
+type hijackedConn struct {
+	net.Conn
+	r *bufio.Reader
+}
+
+func (c *hijackedConn) Read(b []byte) (int, error) {
+	return c.r.Read(b)
+}
+
+// hijackedConnCloseWriter is a hijackedConn which additionally implements
+// CloseWrite().  It is returned by setupHijackConn in the case that a) there
+// was already buffered data in the http layer when Hijack() was called, and b)
+// the underlying net.Conn *does* implement CloseWrite().
+type hijackedConnCloseWriter struct {
+	*hijackedConn
+}
+
+var _ types.CloseWriter = &hijackedConnCloseWriter{}
+
+func (c *hijackedConnCloseWriter) CloseWrite() error {
+	conn := c.Conn.(types.CloseWriter)
+	return conn.CloseWrite()
+}
diff --git a/vendor/github.com/docker/docker/client/image_build.go b/vendor/github.com/docker/docker/client/image_build.go
new file mode 100644
index 0000000000000..8fcf995036fb0
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/image_build.go
@@ -0,0 +1,146 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"encoding/base64"
+	"encoding/json"
+	"io"
+	"net/http"
+	"net/url"
+	"strconv"
+	"strings"
+
+	"github.com/docker/docker/api/types"
+	"github.com/docker/docker/api/types/container"
+)
+
+// ImageBuild sends a request to the daemon to build images.
+// The Body in the response implements an io.ReadCloser and it's up to the caller to
+// close it.
+func (cli *Client) ImageBuild(ctx context.Context, buildContext io.Reader, options types.ImageBuildOptions) (types.ImageBuildResponse, error) {
+	query, err := cli.imageBuildOptionsToQuery(options)
+	if err != nil {
+		return types.ImageBuildResponse{}, err
+	}
+
+	headers := http.Header(make(map[string][]string))
+	buf, err := json.Marshal(options.AuthConfigs)
+	if err != nil {
+		return types.ImageBuildResponse{}, err
+	}
+	headers.Add("X-Registry-Config", base64.URLEncoding.EncodeToString(buf))
+
+	headers.Set("Content-Type", "application/x-tar")
+
+	serverResp, err := cli.postRaw(ctx, "/build", query, buildContext, headers)
+	if err != nil {
+		return types.ImageBuildResponse{}, err
+	}
+
+	osType := getDockerOS(serverResp.header.Get("Server"))
+
+	return types.ImageBuildResponse{
+		Body:   serverResp.body,
+		OSType: osType,
+	}, nil
+}
+
+func (cli *Client) imageBuildOptionsToQuery(options types.ImageBuildOptions) (url.Values, error) {
+	query := url.Values{
+		"t":           options.Tags,
+		"securityopt": options.SecurityOpt,
+		"extrahosts":  options.ExtraHosts,
+	}
+	if options.SuppressOutput {
+		query.Set("q", "1")
+	}
+	if options.RemoteContext != "" {
+		query.Set("remote", options.RemoteContext)
+	}
+	if options.NoCache {
+		query.Set("nocache", "1")
+	}
+	if options.Remove {
+		query.Set("rm", "1")
+	} else {
+		query.Set("rm", "0")
+	}
+
+	if options.ForceRemove {
+		query.Set("forcerm", "1")
+	}
+
+	if options.PullParent {
+		query.Set("pull", "1")
+	}
+
+	if options.Squash {
+		if err := cli.NewVersionError("1.25", "squash"); err != nil {
+			return query, err
+		}
+		query.Set("squash", "1")
+	}
+
+	if !container.Isolation.IsDefault(options.Isolation) {
+		query.Set("isolation", string(options.Isolation))
+	}
+
+	query.Set("cpusetcpus", options.CPUSetCPUs)
+	query.Set("networkmode", options.NetworkMode)
+	query.Set("cpusetmems", options.CPUSetMems)
+	query.Set("cpushares", strconv.FormatInt(options.CPUShares, 10))
+	query.Set("cpuquota", strconv.FormatInt(options.CPUQuota, 10))
+	query.Set("cpuperiod", strconv.FormatInt(options.CPUPeriod, 10))
+	query.Set("memory", strconv.FormatInt(options.Memory, 10))
+	query.Set("memswap", strconv.FormatInt(options.MemorySwap, 10))
+	query.Set("cgroupparent", options.CgroupParent)
+	query.Set("shmsize", strconv.FormatInt(options.ShmSize, 10))
+	query.Set("dockerfile", options.Dockerfile)
+	query.Set("target", options.Target)
+
+	ulimitsJSON, err := json.Marshal(options.Ulimits)
+	if err != nil {
+		return query, err
+	}
+	query.Set("ulimits", string(ulimitsJSON))
+
+	buildArgsJSON, err := json.Marshal(options.BuildArgs)
+	if err != nil {
+		return query, err
+	}
+	query.Set("buildargs", string(buildArgsJSON))
+
+	labelsJSON, err := json.Marshal(options.Labels)
+	if err != nil {
+		return query, err
+	}
+	query.Set("labels", string(labelsJSON))
+
+	cacheFromJSON, err := json.Marshal(options.CacheFrom)
+	if err != nil {
+		return query, err
+	}
+	query.Set("cachefrom", string(cacheFromJSON))
+	if options.SessionID != "" {
+		query.Set("session", options.SessionID)
+	}
+	if options.Platform != "" {
+		if err := cli.NewVersionError("1.32", "platform"); err != nil {
+			return query, err
+		}
+		query.Set("platform", strings.ToLower(options.Platform))
+	}
+	if options.BuildID != "" {
+		query.Set("buildid", options.BuildID)
+	}
+	query.Set("version", string(options.Version))
+
+	if options.Outputs != nil {
+		outputsJSON, err := json.Marshal(options.Outputs)
+		if err != nil {
+			return query, err
+		}
+		query.Set("outputs", string(outputsJSON))
+	}
+	return query, nil
+}
diff --git a/vendor/github.com/docker/docker/client/image_create.go b/vendor/github.com/docker/docker/client/image_create.go
new file mode 100644
index 0000000000000..239380474e618
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/image_create.go
@@ -0,0 +1,37 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"io"
+	"net/url"
+	"strings"
+
+	"github.com/docker/distribution/reference"
+	"github.com/docker/docker/api/types"
+)
+
+// ImageCreate creates a new image based on the parent options.
+// It returns the JSON content in the response body.
+func (cli *Client) ImageCreate(ctx context.Context, parentReference string, options types.ImageCreateOptions) (io.ReadCloser, error) {
+	ref, err := reference.ParseNormalizedNamed(parentReference)
+	if err != nil {
+		return nil, err
+	}
+
+	query := url.Values{}
+	query.Set("fromImage", reference.FamiliarName(ref))
+	query.Set("tag", getAPITagFromNamedRef(ref))
+	if options.Platform != "" {
+		query.Set("platform", strings.ToLower(options.Platform))
+	}
+	resp, err := cli.tryImageCreate(ctx, query, options.RegistryAuth)
+	if err != nil {
+		return nil, err
+	}
+	return resp.body, nil
+}
+
+func (cli *Client) tryImageCreate(ctx context.Context, query url.Values, registryAuth string) (serverResponse, error) {
+	headers := map[string][]string{"X-Registry-Auth": {registryAuth}}
+	return cli.post(ctx, "/images/create", query, nil, headers)
+}
diff --git a/vendor/github.com/docker/docker/client/image_history.go b/vendor/github.com/docker/docker/client/image_history.go
new file mode 100644
index 0000000000000..b5bea10d8f638
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/image_history.go
@@ -0,0 +1,22 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"encoding/json"
+	"net/url"
+
+	"github.com/docker/docker/api/types/image"
+)
+
+// ImageHistory returns the changes in an image in history format.
+func (cli *Client) ImageHistory(ctx context.Context, imageID string) ([]image.HistoryResponseItem, error) {
+	var history []image.HistoryResponseItem
+	serverResp, err := cli.get(ctx, "/images/"+imageID+"/history", url.Values{}, nil)
+	defer ensureReaderClosed(serverResp)
+	if err != nil {
+		return history, err
+	}
+
+	err = json.NewDecoder(serverResp.body).Decode(&history)
+	return history, err
+}
diff --git a/vendor/github.com/docker/docker/client/image_import.go b/vendor/github.com/docker/docker/client/image_import.go
new file mode 100644
index 0000000000000..d3336d4106a9b
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/image_import.go
@@ -0,0 +1,40 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"io"
+	"net/url"
+	"strings"
+
+	"github.com/docker/distribution/reference"
+	"github.com/docker/docker/api/types"
+)
+
+// ImageImport creates a new image based on the source options.
+// It returns the JSON content in the response body.
+func (cli *Client) ImageImport(ctx context.Context, source types.ImageImportSource, ref string, options types.ImageImportOptions) (io.ReadCloser, error) {
+	if ref != "" {
+		// Check if the given image name can be resolved
+		if _, err := reference.ParseNormalizedNamed(ref); err != nil {
+			return nil, err
+		}
+	}
+
+	query := url.Values{}
+	query.Set("fromSrc", source.SourceName)
+	query.Set("repo", ref)
+	query.Set("tag", options.Tag)
+	query.Set("message", options.Message)
+	if options.Platform != "" {
+		query.Set("platform", strings.ToLower(options.Platform))
+	}
+	for _, change := range options.Changes {
+		query.Add("changes", change)
+	}
+
+	resp, err := cli.postRaw(ctx, "/images/create", query, source.Source, nil)
+	if err != nil {
+		return nil, err
+	}
+	return resp.body, nil
+}
diff --git a/vendor/github.com/docker/docker/client/image_inspect.go b/vendor/github.com/docker/docker/client/image_inspect.go
new file mode 100644
index 0000000000000..03aa12d8b4cdd
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/image_inspect.go
@@ -0,0 +1,32 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"bytes"
+	"context"
+	"encoding/json"
+	"io"
+
+	"github.com/docker/docker/api/types"
+)
+
+// ImageInspectWithRaw returns the image information and its raw representation.
+func (cli *Client) ImageInspectWithRaw(ctx context.Context, imageID string) (types.ImageInspect, []byte, error) {
+	if imageID == "" {
+		return types.ImageInspect{}, nil, objectNotFoundError{object: "image", id: imageID}
+	}
+	serverResp, err := cli.get(ctx, "/images/"+imageID+"/json", nil, nil)
+	defer ensureReaderClosed(serverResp)
+	if err != nil {
+		return types.ImageInspect{}, nil, wrapResponseError(err, serverResp, "image", imageID)
+	}
+
+	body, err := io.ReadAll(serverResp.body)
+	if err != nil {
+		return types.ImageInspect{}, nil, err
+	}
+
+	var response types.ImageInspect
+	rdr := bytes.NewReader(body)
+	err = json.NewDecoder(rdr).Decode(&response)
+	return response, body, err
+}
diff --git a/vendor/github.com/docker/docker/client/image_list.go b/vendor/github.com/docker/docker/client/image_list.go
new file mode 100644
index 0000000000000..a4d7505094cd5
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/image_list.go
@@ -0,0 +1,46 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"encoding/json"
+	"net/url"
+
+	"github.com/docker/docker/api/types"
+	"github.com/docker/docker/api/types/filters"
+	"github.com/docker/docker/api/types/versions"
+)
+
+// ImageList returns a list of images in the docker host.
+func (cli *Client) ImageList(ctx context.Context, options types.ImageListOptions) ([]types.ImageSummary, error) {
+	var images []types.ImageSummary
+	query := url.Values{}
+
+	optionFilters := options.Filters
+	referenceFilters := optionFilters.Get("reference")
+	if versions.LessThan(cli.version, "1.25") && len(referenceFilters) > 0 {
+		query.Set("filter", referenceFilters[0])
+		for _, filterValue := range referenceFilters {
+			optionFilters.Del("reference", filterValue)
+		}
+	}
+	if optionFilters.Len() > 0 {
+		//nolint:staticcheck // ignore SA1019 for old code
+		filterJSON, err := filters.ToParamWithVersion(cli.version, optionFilters)
+		if err != nil {
+			return images, err
+		}
+		query.Set("filters", filterJSON)
+	}
+	if options.All {
+		query.Set("all", "1")
+	}
+
+	serverResp, err := cli.get(ctx, "/images/json", query, nil)
+	defer ensureReaderClosed(serverResp)
+	if err != nil {
+		return images, err
+	}
+
+	err = json.NewDecoder(serverResp.body).Decode(&images)
+	return images, err
+}
diff --git a/vendor/github.com/docker/docker/client/image_load.go b/vendor/github.com/docker/docker/client/image_load.go
new file mode 100644
index 0000000000000..91016e493c441
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/image_load.go
@@ -0,0 +1,29 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"io"
+	"net/url"
+
+	"github.com/docker/docker/api/types"
+)
+
+// ImageLoad loads an image in the docker host from the client host.
+// It's up to the caller to close the io.ReadCloser in the
+// ImageLoadResponse returned by this function.
+func (cli *Client) ImageLoad(ctx context.Context, input io.Reader, quiet bool) (types.ImageLoadResponse, error) {
+	v := url.Values{}
+	v.Set("quiet", "0")
+	if quiet {
+		v.Set("quiet", "1")
+	}
+	headers := map[string][]string{"Content-Type": {"application/x-tar"}}
+	resp, err := cli.postRaw(ctx, "/images/load", v, input, headers)
+	if err != nil {
+		return types.ImageLoadResponse{}, err
+	}
+	return types.ImageLoadResponse{
+		Body: resp.body,
+		JSON: resp.header.Get("Content-Type") == "application/json",
+	}, nil
+}
diff --git a/vendor/github.com/docker/docker/client/image_prune.go b/vendor/github.com/docker/docker/client/image_prune.go
new file mode 100644
index 0000000000000..56af6d7f98f4f
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/image_prune.go
@@ -0,0 +1,36 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"encoding/json"
+	"fmt"
+
+	"github.com/docker/docker/api/types"
+	"github.com/docker/docker/api/types/filters"
+)
+
+// ImagesPrune requests the daemon to delete unused data
+func (cli *Client) ImagesPrune(ctx context.Context, pruneFilters filters.Args) (types.ImagesPruneReport, error) {
+	var report types.ImagesPruneReport
+
+	if err := cli.NewVersionError("1.25", "image prune"); err != nil {
+		return report, err
+	}
+
+	query, err := getFiltersQuery(pruneFilters)
+	if err != nil {
+		return report, err
+	}
+
+	serverResp, err := cli.post(ctx, "/images/prune", query, nil, nil)
+	defer ensureReaderClosed(serverResp)
+	if err != nil {
+		return report, err
+	}
+
+	if err := json.NewDecoder(serverResp.body).Decode(&report); err != nil {
+		return report, fmt.Errorf("Error retrieving disk usage: %v", err)
+	}
+
+	return report, nil
+}
diff --git a/vendor/github.com/docker/docker/client/image_pull.go b/vendor/github.com/docker/docker/client/image_pull.go
new file mode 100644
index 0000000000000..a23975591be27
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/image_pull.go
@@ -0,0 +1,64 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"io"
+	"net/url"
+	"strings"
+
+	"github.com/docker/distribution/reference"
+	"github.com/docker/docker/api/types"
+	"github.com/docker/docker/errdefs"
+)
+
+// ImagePull requests the docker host to pull an image from a remote registry.
+// It executes the privileged function if the operation is unauthorized,
+// and then retries it once more.
+// It's up to the caller to handle the io.ReadCloser and close it properly.
+//
+// FIXME(vdemeester): this is currently used in a few ways in docker/docker
+// - if not in trusted content, ref is used to pass the whole reference, and tag is empty
+// - if in trusted content, ref is used to pass the reference name, and tag for the digest
+func (cli *Client) ImagePull(ctx context.Context, refStr string, options types.ImagePullOptions) (io.ReadCloser, error) {
+	ref, err := reference.ParseNormalizedNamed(refStr)
+	if err != nil {
+		return nil, err
+	}
+
+	query := url.Values{}
+	query.Set("fromImage", reference.FamiliarName(ref))
+	if !options.All {
+		query.Set("tag", getAPITagFromNamedRef(ref))
+	}
+	if options.Platform != "" {
+		query.Set("platform", strings.ToLower(options.Platform))
+	}
+
+	resp, err := cli.tryImageCreate(ctx, query, options.RegistryAuth)
+	if errdefs.IsUnauthorized(err) && options.PrivilegeFunc != nil {
+		newAuthHeader, privilegeErr := options.PrivilegeFunc()
+		if privilegeErr != nil {
+			return nil, privilegeErr
+		}
+		resp, err = cli.tryImageCreate(ctx, query, newAuthHeader)
+	}
+	if err != nil {
+		return nil, err
+	}
+	return resp.body, nil
+}
+
+// getAPITagFromNamedRef returns a tag from the specified reference.
+// This function is necessary as long as the docker "server" api expects
+// digests to be sent as tags and makes a distinction between the name
+// and tag/digest part of a reference.
+func getAPITagFromNamedRef(ref reference.Named) string {
+	if digested, ok := ref.(reference.Digested); ok {
+		return digested.Digest().String()
+	}
+	ref = reference.TagNameOnly(ref)
+	if tagged, ok := ref.(reference.Tagged); ok {
+		return tagged.Tag()
+	}
+	return ""
+}
diff --git a/vendor/github.com/docker/docker/client/image_push.go b/vendor/github.com/docker/docker/client/image_push.go
new file mode 100644
index 0000000000000..845580d4a4cdb
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/image_push.go
@@ -0,0 +1,54 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"errors"
+	"io"
+	"net/url"
+
+	"github.com/docker/distribution/reference"
+	"github.com/docker/docker/api/types"
+	"github.com/docker/docker/errdefs"
+)
+
+// ImagePush requests the docker host to push an image to a remote registry.
+// It executes the privileged function if the operation is unauthorized,
+// and then retries it once more.
+// It's up to the caller to handle the io.ReadCloser and close it properly.
+func (cli *Client) ImagePush(ctx context.Context, image string, options types.ImagePushOptions) (io.ReadCloser, error) {
+	ref, err := reference.ParseNormalizedNamed(image)
+	if err != nil {
+		return nil, err
+	}
+
+	if _, isCanonical := ref.(reference.Canonical); isCanonical {
+		return nil, errors.New("cannot push a digest reference")
+	}
+
+	name := reference.FamiliarName(ref)
+	query := url.Values{}
+	if !options.All {
+		ref = reference.TagNameOnly(ref)
+		if tagged, ok := ref.(reference.Tagged); ok {
+			query.Set("tag", tagged.Tag())
+		}
+	}
+
+	resp, err := cli.tryImagePush(ctx, name, query, options.RegistryAuth)
+	if errdefs.IsUnauthorized(err) && options.PrivilegeFunc != nil {
+		newAuthHeader, privilegeErr := options.PrivilegeFunc()
+		if privilegeErr != nil {
+			return nil, privilegeErr
+		}
+		resp, err = cli.tryImagePush(ctx, name, query, newAuthHeader)
+	}
+	if err != nil {
+		return nil, err
+	}
+	return resp.body, nil
+}
+
+func (cli *Client) tryImagePush(ctx context.Context, imageID string, query url.Values, registryAuth string) (serverResponse, error) {
+	headers := map[string][]string{"X-Registry-Auth": {registryAuth}}
+	return cli.post(ctx, "/images/"+imageID+"/push", query, nil, headers)
+}
diff --git a/vendor/github.com/docker/docker/client/image_remove.go b/vendor/github.com/docker/docker/client/image_remove.go
new file mode 100644
index 0000000000000..84a41af0f2ca7
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/image_remove.go
@@ -0,0 +1,31 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"encoding/json"
+	"net/url"
+
+	"github.com/docker/docker/api/types"
+)
+
+// ImageRemove removes an image from the docker host.
+func (cli *Client) ImageRemove(ctx context.Context, imageID string, options types.ImageRemoveOptions) ([]types.ImageDeleteResponseItem, error) {
+	query := url.Values{}
+
+	if options.Force {
+		query.Set("force", "1")
+	}
+	if !options.PruneChildren {
+		query.Set("noprune", "1")
+	}
+
+	var dels []types.ImageDeleteResponseItem
+	resp, err := cli.delete(ctx, "/images/"+imageID, query, nil)
+	defer ensureReaderClosed(resp)
+	if err != nil {
+		return dels, wrapResponseError(err, resp, "image", imageID)
+	}
+
+	err = json.NewDecoder(resp.body).Decode(&dels)
+	return dels, err
+}
diff --git a/vendor/github.com/docker/docker/client/image_save.go b/vendor/github.com/docker/docker/client/image_save.go
new file mode 100644
index 0000000000000..d1314e4b22fe1
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/image_save.go
@@ -0,0 +1,21 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"io"
+	"net/url"
+)
+
+// ImageSave retrieves one or more images from the docker host as an io.ReadCloser.
+// It's up to the caller to store the images and close the stream.
+func (cli *Client) ImageSave(ctx context.Context, imageIDs []string) (io.ReadCloser, error) {
+	query := url.Values{
+		"names": imageIDs,
+	}
+
+	resp, err := cli.get(ctx, "/images/get", query, nil)
+	if err != nil {
+		return nil, err
+	}
+	return resp.body, nil
+}
diff --git a/vendor/github.com/docker/docker/client/image_search.go b/vendor/github.com/docker/docker/client/image_search.go
new file mode 100644
index 0000000000000..82955a74775b1
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/image_search.go
@@ -0,0 +1,51 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"encoding/json"
+	"fmt"
+	"net/url"
+
+	"github.com/docker/docker/api/types"
+	"github.com/docker/docker/api/types/filters"
+	"github.com/docker/docker/api/types/registry"
+	"github.com/docker/docker/errdefs"
+)
+
+// ImageSearch makes the docker host search by a term in a remote registry.
+// The list of results is not sorted in any fashion.
+func (cli *Client) ImageSearch(ctx context.Context, term string, options types.ImageSearchOptions) ([]registry.SearchResult, error) {
+	var results []registry.SearchResult
+	query := url.Values{}
+	query.Set("term", term)
+	query.Set("limit", fmt.Sprintf("%d", options.Limit))
+
+	if options.Filters.Len() > 0 {
+		filterJSON, err := filters.ToJSON(options.Filters)
+		if err != nil {
+			return results, err
+		}
+		query.Set("filters", filterJSON)
+	}
+
+	resp, err := cli.tryImageSearch(ctx, query, options.RegistryAuth)
+	defer ensureReaderClosed(resp)
+	if errdefs.IsUnauthorized(err) && options.PrivilegeFunc != nil {
+		newAuthHeader, privilegeErr := options.PrivilegeFunc()
+		if privilegeErr != nil {
+			return results, privilegeErr
+		}
+		resp, err = cli.tryImageSearch(ctx, query, newAuthHeader)
+	}
+	if err != nil {
+		return results, err
+	}
+
+	err = json.NewDecoder(resp.body).Decode(&results)
+	return results, err
+}
+
+func (cli *Client) tryImageSearch(ctx context.Context, query url.Values, registryAuth string) (serverResponse, error) {
+	headers := map[string][]string{"X-Registry-Auth": {registryAuth}}
+	return cli.get(ctx, "/images/search", query, headers)
+}
diff --git a/vendor/github.com/docker/docker/client/image_tag.go b/vendor/github.com/docker/docker/client/image_tag.go
new file mode 100644
index 0000000000000..5652bfc252bba
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/image_tag.go
@@ -0,0 +1,37 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"net/url"
+
+	"github.com/docker/distribution/reference"
+	"github.com/pkg/errors"
+)
+
+// ImageTag tags an image in the docker host
+func (cli *Client) ImageTag(ctx context.Context, source, target string) error {
+	if _, err := reference.ParseAnyReference(source); err != nil {
+		return errors.Wrapf(err, "Error parsing reference: %q is not a valid repository/tag", source)
+	}
+
+	ref, err := reference.ParseNormalizedNamed(target)
+	if err != nil {
+		return errors.Wrapf(err, "Error parsing reference: %q is not a valid repository/tag", target)
+	}
+
+	if _, isCanonical := ref.(reference.Canonical); isCanonical {
+		return errors.New("refusing to create a tag with a digest reference")
+	}
+
+	ref = reference.TagNameOnly(ref)
+
+	query := url.Values{}
+	query.Set("repo", reference.FamiliarName(ref))
+	if tagged, ok := ref.(reference.Tagged); ok {
+		query.Set("tag", tagged.Tag())
+	}
+
+	resp, err := cli.post(ctx, "/images/"+source+"/tag", query, nil, nil)
+	ensureReaderClosed(resp)
+	return err
+}
diff --git a/vendor/github.com/docker/docker/client/info.go b/vendor/github.com/docker/docker/client/info.go
new file mode 100644
index 0000000000000..c856704e23f4f
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/info.go
@@ -0,0 +1,26 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"encoding/json"
+	"fmt"
+	"net/url"
+
+	"github.com/docker/docker/api/types"
+)
+
+// Info returns information about the docker server.
+func (cli *Client) Info(ctx context.Context) (types.Info, error) {
+	var info types.Info
+	serverResp, err := cli.get(ctx, "/info", url.Values{}, nil)
+	defer ensureReaderClosed(serverResp)
+	if err != nil {
+		return info, err
+	}
+
+	if err := json.NewDecoder(serverResp.body).Decode(&info); err != nil {
+		return info, fmt.Errorf("Error reading remote info: %v", err)
+	}
+
+	return info, nil
+}
diff --git a/vendor/github.com/docker/docker/client/interface.go b/vendor/github.com/docker/docker/client/interface.go
new file mode 100644
index 0000000000000..aabad4a911050
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/interface.go
@@ -0,0 +1,201 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"io"
+	"net"
+	"net/http"
+	"time"
+
+	"github.com/docker/docker/api/types"
+	containertypes "github.com/docker/docker/api/types/container"
+	"github.com/docker/docker/api/types/events"
+	"github.com/docker/docker/api/types/filters"
+	"github.com/docker/docker/api/types/image"
+	networktypes "github.com/docker/docker/api/types/network"
+	"github.com/docker/docker/api/types/registry"
+	"github.com/docker/docker/api/types/swarm"
+	volumetypes "github.com/docker/docker/api/types/volume"
+	specs "github.com/opencontainers/image-spec/specs-go/v1"
+)
+
+// CommonAPIClient is the common methods between stable and experimental versions of APIClient.
+type CommonAPIClient interface {
+	ConfigAPIClient
+	ContainerAPIClient
+	DistributionAPIClient
+	ImageAPIClient
+	NodeAPIClient
+	NetworkAPIClient
+	PluginAPIClient
+	ServiceAPIClient
+	SwarmAPIClient
+	SecretAPIClient
+	SystemAPIClient
+	VolumeAPIClient
+	ClientVersion() string
+	DaemonHost() string
+	HTTPClient() *http.Client
+	ServerVersion(ctx context.Context) (types.Version, error)
+	NegotiateAPIVersion(ctx context.Context)
+	NegotiateAPIVersionPing(types.Ping)
+	DialHijack(ctx context.Context, url, proto string, meta map[string][]string) (net.Conn, error)
+	Dialer() func(context.Context) (net.Conn, error)
+	Close() error
+}
+
+// ContainerAPIClient defines API client methods for the containers
+type ContainerAPIClient interface {
+	ContainerAttach(ctx context.Context, container string, options types.ContainerAttachOptions) (types.HijackedResponse, error)
+	ContainerCommit(ctx context.Context, container string, options types.ContainerCommitOptions) (types.IDResponse, error)
+	ContainerCreate(ctx context.Context, config *containertypes.Config, hostConfig *containertypes.HostConfig, networkingConfig *networktypes.NetworkingConfig, platform *specs.Platform, containerName string) (containertypes.ContainerCreateCreatedBody, error)
+	ContainerDiff(ctx context.Context, container string) ([]containertypes.ContainerChangeResponseItem, error)
+	ContainerExecAttach(ctx context.Context, execID string, config types.ExecStartCheck) (types.HijackedResponse, error)
+	ContainerExecCreate(ctx context.Context, container string, config types.ExecConfig) (types.IDResponse, error)
+	ContainerExecInspect(ctx context.Context, execID string) (types.ContainerExecInspect, error)
+	ContainerExecResize(ctx context.Context, execID string, options types.ResizeOptions) error
+	ContainerExecStart(ctx context.Context, execID string, config types.ExecStartCheck) error
+	ContainerExport(ctx context.Context, container string) (io.ReadCloser, error)
+	ContainerInspect(ctx context.Context, container string) (types.ContainerJSON, error)
+	ContainerInspectWithRaw(ctx context.Context, container string, getSize bool) (types.ContainerJSON, []byte, error)
+	ContainerKill(ctx context.Context, container, signal string) error
+	ContainerList(ctx context.Context, options types.ContainerListOptions) ([]types.Container, error)
+	ContainerLogs(ctx context.Context, container string, options types.ContainerLogsOptions) (io.ReadCloser, error)
+	ContainerPause(ctx context.Context, container string) error
+	ContainerRemove(ctx context.Context, container string, options types.ContainerRemoveOptions) error
+	ContainerRename(ctx context.Context, container, newContainerName string) error
+	ContainerResize(ctx context.Context, container string, options types.ResizeOptions) error
+	ContainerRestart(ctx context.Context, container string, timeout *time.Duration) error
+	ContainerStatPath(ctx context.Context, container, path string) (types.ContainerPathStat, error)
+	ContainerStats(ctx context.Context, container string, stream bool) (types.ContainerStats, error)
+	ContainerStatsOneShot(ctx context.Context, container string) (types.ContainerStats, error)
+	ContainerStart(ctx context.Context, container string, options types.ContainerStartOptions) error
+	ContainerStop(ctx context.Context, container string, timeout *time.Duration) error
+	ContainerTop(ctx context.Context, container string, arguments []string) (containertypes.ContainerTopOKBody, error)
+	ContainerUnpause(ctx context.Context, container string) error
+	ContainerUpdate(ctx context.Context, container string, updateConfig containertypes.UpdateConfig) (containertypes.ContainerUpdateOKBody, error)
+	ContainerWait(ctx context.Context, container string, condition containertypes.WaitCondition) (<-chan containertypes.ContainerWaitOKBody, <-chan error)
+	CopyFromContainer(ctx context.Context, container, srcPath string) (io.ReadCloser, types.ContainerPathStat, error)
+	CopyToContainer(ctx context.Context, container, path string, content io.Reader, options types.CopyToContainerOptions) error
+	ContainersPrune(ctx context.Context, pruneFilters filters.Args) (types.ContainersPruneReport, error)
+}
+
+// DistributionAPIClient defines API client methods for the registry
+type DistributionAPIClient interface {
+	DistributionInspect(ctx context.Context, image, encodedRegistryAuth string) (registry.DistributionInspect, error)
+}
+
+// ImageAPIClient defines API client methods for the images
+type ImageAPIClient interface {
+	ImageBuild(ctx context.Context, context io.Reader, options types.ImageBuildOptions) (types.ImageBuildResponse, error)
+	BuildCachePrune(ctx context.Context, opts types.BuildCachePruneOptions) (*types.BuildCachePruneReport, error)
+	BuildCancel(ctx context.Context, id string) error
+	ImageCreate(ctx context.Context, parentReference string, options types.ImageCreateOptions) (io.ReadCloser, error)
+	ImageHistory(ctx context.Context, image string) ([]image.HistoryResponseItem, error)
+	ImageImport(ctx context.Context, source types.ImageImportSource, ref string, options types.ImageImportOptions) (io.ReadCloser, error)
+	ImageInspectWithRaw(ctx context.Context, image string) (types.ImageInspect, []byte, error)
+	ImageList(ctx context.Context, options types.ImageListOptions) ([]types.ImageSummary, error)
+	ImageLoad(ctx context.Context, input io.Reader, quiet bool) (types.ImageLoadResponse, error)
+	ImagePull(ctx context.Context, ref string, options types.ImagePullOptions) (io.ReadCloser, error)
+	ImagePush(ctx context.Context, ref string, options types.ImagePushOptions) (io.ReadCloser, error)
+	ImageRemove(ctx context.Context, image string, options types.ImageRemoveOptions) ([]types.ImageDeleteResponseItem, error)
+	ImageSearch(ctx context.Context, term string, options types.ImageSearchOptions) ([]registry.SearchResult, error)
+	ImageSave(ctx context.Context, images []string) (io.ReadCloser, error)
+	ImageTag(ctx context.Context, image, ref string) error
+	ImagesPrune(ctx context.Context, pruneFilter filters.Args) (types.ImagesPruneReport, error)
+}
+
+// NetworkAPIClient defines API client methods for the networks
+type NetworkAPIClient interface {
+	NetworkConnect(ctx context.Context, network, container string, config *networktypes.EndpointSettings) error
+	NetworkCreate(ctx context.Context, name string, options types.NetworkCreate) (types.NetworkCreateResponse, error)
+	NetworkDisconnect(ctx context.Context, network, container string, force bool) error
+	NetworkInspect(ctx context.Context, network string, options types.NetworkInspectOptions) (types.NetworkResource, error)
+	NetworkInspectWithRaw(ctx context.Context, network string, options types.NetworkInspectOptions) (types.NetworkResource, []byte, error)
+	NetworkList(ctx context.Context, options types.NetworkListOptions) ([]types.NetworkResource, error)
+	NetworkRemove(ctx context.Context, network string) error
+	NetworksPrune(ctx context.Context, pruneFilter filters.Args) (types.NetworksPruneReport, error)
+}
+
+// NodeAPIClient defines API client methods for the nodes
+type NodeAPIClient interface {
+	NodeInspectWithRaw(ctx context.Context, nodeID string) (swarm.Node, []byte, error)
+	NodeList(ctx context.Context, options types.NodeListOptions) ([]swarm.Node, error)
+	NodeRemove(ctx context.Context, nodeID string, options types.NodeRemoveOptions) error
+	NodeUpdate(ctx context.Context, nodeID string, version swarm.Version, node swarm.NodeSpec) error
+}
+
+// PluginAPIClient defines API client methods for the plugins
+type PluginAPIClient interface {
+	PluginList(ctx context.Context, filter filters.Args) (types.PluginsListResponse, error)
+	PluginRemove(ctx context.Context, name string, options types.PluginRemoveOptions) error
+	PluginEnable(ctx context.Context, name string, options types.PluginEnableOptions) error
+	PluginDisable(ctx context.Context, name string, options types.PluginDisableOptions) error
+	PluginInstall(ctx context.Context, name string, options types.PluginInstallOptions) (io.ReadCloser, error)
+	PluginUpgrade(ctx context.Context, name string, options types.PluginInstallOptions) (io.ReadCloser, error)
+	PluginPush(ctx context.Context, name string, registryAuth string) (io.ReadCloser, error)
+	PluginSet(ctx context.Context, name string, args []string) error
+	PluginInspectWithRaw(ctx context.Context, name string) (*types.Plugin, []byte, error)
+	PluginCreate(ctx context.Context, createContext io.Reader, options types.PluginCreateOptions) error
+}
+
+// ServiceAPIClient defines API client methods for the services
+type ServiceAPIClient interface {
+	ServiceCreate(ctx context.Context, service swarm.ServiceSpec, options types.ServiceCreateOptions) (types.ServiceCreateResponse, error)
+	ServiceInspectWithRaw(ctx context.Context, serviceID string, options types.ServiceInspectOptions) (swarm.Service, []byte, error)
+	ServiceList(ctx context.Context, options types.ServiceListOptions) ([]swarm.Service, error)
+	ServiceRemove(ctx context.Context, serviceID string) error
+	ServiceUpdate(ctx context.Context, serviceID string, version swarm.Version, service swarm.ServiceSpec, options types.ServiceUpdateOptions) (types.ServiceUpdateResponse, error)
+	ServiceLogs(ctx context.Context, serviceID string, options types.ContainerLogsOptions) (io.ReadCloser, error)
+	TaskLogs(ctx context.Context, taskID string, options types.ContainerLogsOptions) (io.ReadCloser, error)
+	TaskInspectWithRaw(ctx context.Context, taskID string) (swarm.Task, []byte, error)
+	TaskList(ctx context.Context, options types.TaskListOptions) ([]swarm.Task, error)
+}
+
+// SwarmAPIClient defines API client methods for the swarm
+type SwarmAPIClient interface {
+	SwarmInit(ctx context.Context, req swarm.InitRequest) (string, error)
+	SwarmJoin(ctx context.Context, req swarm.JoinRequest) error
+	SwarmGetUnlockKey(ctx context.Context) (types.SwarmUnlockKeyResponse, error)
+	SwarmUnlock(ctx context.Context, req swarm.UnlockRequest) error
+	SwarmLeave(ctx context.Context, force bool) error
+	SwarmInspect(ctx context.Context) (swarm.Swarm, error)
+	SwarmUpdate(ctx context.Context, version swarm.Version, swarm swarm.Spec, flags swarm.UpdateFlags) error
+}
+
+// SystemAPIClient defines API client methods for the system
+type SystemAPIClient interface {
+	Events(ctx context.Context, options types.EventsOptions) (<-chan events.Message, <-chan error)
+	Info(ctx context.Context) (types.Info, error)
+	RegistryLogin(ctx context.Context, auth types.AuthConfig) (registry.AuthenticateOKBody, error)
+	DiskUsage(ctx context.Context) (types.DiskUsage, error)
+	Ping(ctx context.Context) (types.Ping, error)
+}
+
+// VolumeAPIClient defines API client methods for the volumes
+type VolumeAPIClient interface {
+	VolumeCreate(ctx context.Context, options volumetypes.VolumeCreateBody) (types.Volume, error)
+	VolumeInspect(ctx context.Context, volumeID string) (types.Volume, error)
+	VolumeInspectWithRaw(ctx context.Context, volumeID string) (types.Volume, []byte, error)
+	VolumeList(ctx context.Context, filter filters.Args) (volumetypes.VolumeListOKBody, error)
+	VolumeRemove(ctx context.Context, volumeID string, force bool) error
+	VolumesPrune(ctx context.Context, pruneFilter filters.Args) (types.VolumesPruneReport, error)
+}
+
+// SecretAPIClient defines API client methods for secrets
+type SecretAPIClient interface {
+	SecretList(ctx context.Context, options types.SecretListOptions) ([]swarm.Secret, error)
+	SecretCreate(ctx context.Context, secret swarm.SecretSpec) (types.SecretCreateResponse, error)
+	SecretRemove(ctx context.Context, id string) error
+	SecretInspectWithRaw(ctx context.Context, name string) (swarm.Secret, []byte, error)
+	SecretUpdate(ctx context.Context, id string, version swarm.Version, secret swarm.SecretSpec) error
+}
+
+// ConfigAPIClient defines API client methods for configs
+type ConfigAPIClient interface {
+	ConfigList(ctx context.Context, options types.ConfigListOptions) ([]swarm.Config, error)
+	ConfigCreate(ctx context.Context, config swarm.ConfigSpec) (types.ConfigCreateResponse, error)
+	ConfigRemove(ctx context.Context, id string) error
+	ConfigInspectWithRaw(ctx context.Context, name string) (swarm.Config, []byte, error)
+	ConfigUpdate(ctx context.Context, id string, version swarm.Version, config swarm.ConfigSpec) error
+}
diff --git a/vendor/github.com/docker/docker/client/interface_experimental.go b/vendor/github.com/docker/docker/client/interface_experimental.go
new file mode 100644
index 0000000000000..402ffb512cd03
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/interface_experimental.go
@@ -0,0 +1,18 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+
+	"github.com/docker/docker/api/types"
+)
+
+type apiClientExperimental interface {
+	CheckpointAPIClient
+}
+
+// CheckpointAPIClient defines API client methods for the checkpoints
+type CheckpointAPIClient interface {
+	CheckpointCreate(ctx context.Context, container string, options types.CheckpointCreateOptions) error
+	CheckpointDelete(ctx context.Context, container string, options types.CheckpointDeleteOptions) error
+	CheckpointList(ctx context.Context, container string, options types.CheckpointListOptions) ([]types.Checkpoint, error)
+}
diff --git a/vendor/github.com/docker/docker/client/interface_stable.go b/vendor/github.com/docker/docker/client/interface_stable.go
new file mode 100644
index 0000000000000..5502cd7426614
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/interface_stable.go
@@ -0,0 +1,10 @@
+package client // import "github.com/docker/docker/client"
+
+// APIClient is an interface that clients that talk with a docker server must implement.
+type APIClient interface {
+	CommonAPIClient
+	apiClientExperimental
+}
+
+// Ensure that Client always implements APIClient.
+var _ APIClient = &Client{}
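Reviewer note: the `var _ APIClient = &Client{}` line above is the standard Go compile-time assertion that `Client` satisfies the interface. A minimal stdlib-only sketch of the same idiom (the `Greeter`/`impl` names here are illustrative stand-ins, not from the vendored package):

```go
package main

import "fmt"

// Greeter is a small interface standing in for APIClient.
type Greeter interface {
	Greet(name string) string
}

// impl stands in for Client; it is expected to satisfy Greeter.
type impl struct{}

func (impl) Greet(name string) string { return "hello, " + name }

// Compile-time assertion: if impl ever stops satisfying Greeter,
// this line fails to build instead of surfacing at runtime.
var _ Greeter = impl{}

func main() {
	fmt.Println(impl{}.Greet("docker"))
}
```

The blank identifier discards the value, so the assertion costs nothing at runtime; it only forces the compiler to check the conversion.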
diff --git a/vendor/github.com/docker/docker/client/login.go b/vendor/github.com/docker/docker/client/login.go
new file mode 100644
index 0000000000000..f058520638238
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/login.go
@@ -0,0 +1,25 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"encoding/json"
+	"net/url"
+
+	"github.com/docker/docker/api/types"
+	"github.com/docker/docker/api/types/registry"
+)
+
+// RegistryLogin authenticates the docker server with a given docker registry.
+// It returns unauthorizedError when the authentication fails.
+func (cli *Client) RegistryLogin(ctx context.Context, auth types.AuthConfig) (registry.AuthenticateOKBody, error) {
+	resp, err := cli.post(ctx, "/auth", url.Values{}, auth, nil)
+	defer ensureReaderClosed(resp)
+
+	if err != nil {
+		return registry.AuthenticateOKBody{}, err
+	}
+
+	var response registry.AuthenticateOKBody
+	err = json.NewDecoder(resp.body).Decode(&response)
+	return response, err
+}
diff --git a/vendor/github.com/docker/docker/client/network_connect.go b/vendor/github.com/docker/docker/client/network_connect.go
new file mode 100644
index 0000000000000..571894613419e
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/network_connect.go
@@ -0,0 +1,19 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+
+	"github.com/docker/docker/api/types"
+	"github.com/docker/docker/api/types/network"
+)
+
+// NetworkConnect connects a container to an existent network in the docker host.
+func (cli *Client) NetworkConnect(ctx context.Context, networkID, containerID string, config *network.EndpointSettings) error {
+	nc := types.NetworkConnect{
+		Container:      containerID,
+		EndpointConfig: config,
+	}
+	resp, err := cli.post(ctx, "/networks/"+networkID+"/connect", nil, nc, nil)
+	ensureReaderClosed(resp)
+	return err
+}
diff --git a/vendor/github.com/docker/docker/client/network_create.go b/vendor/github.com/docker/docker/client/network_create.go
new file mode 100644
index 0000000000000..278d9383a86b2
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/network_create.go
@@ -0,0 +1,25 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"encoding/json"
+
+	"github.com/docker/docker/api/types"
+)
+
+// NetworkCreate creates a new network in the docker host.
+func (cli *Client) NetworkCreate(ctx context.Context, name string, options types.NetworkCreate) (types.NetworkCreateResponse, error) {
+	networkCreateRequest := types.NetworkCreateRequest{
+		NetworkCreate: options,
+		Name:          name,
+	}
+	var response types.NetworkCreateResponse
+	serverResp, err := cli.post(ctx, "/networks/create", nil, networkCreateRequest, nil)
+	defer ensureReaderClosed(serverResp)
+	if err != nil {
+		return response, err
+	}
+
+	err = json.NewDecoder(serverResp.body).Decode(&response)
+	return response, err
+}
diff --git a/vendor/github.com/docker/docker/client/network_disconnect.go b/vendor/github.com/docker/docker/client/network_disconnect.go
new file mode 100644
index 0000000000000..dd1567665665a
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/network_disconnect.go
@@ -0,0 +1,15 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+
+	"github.com/docker/docker/api/types"
+)
+
+// NetworkDisconnect disconnects a container from an existent network in the docker host.
+func (cli *Client) NetworkDisconnect(ctx context.Context, networkID, containerID string, force bool) error {
+	nd := types.NetworkDisconnect{Container: containerID, Force: force}
+	resp, err := cli.post(ctx, "/networks/"+networkID+"/disconnect", nil, nd, nil)
+	ensureReaderClosed(resp)
+	return err
+}
diff --git a/vendor/github.com/docker/docker/client/network_inspect.go b/vendor/github.com/docker/docker/client/network_inspect.go
new file mode 100644
index 0000000000000..ecf20ceb6e46f
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/network_inspect.go
@@ -0,0 +1,49 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"bytes"
+	"context"
+	"encoding/json"
+	"io"
+	"net/url"
+
+	"github.com/docker/docker/api/types"
+)
+
+// NetworkInspect returns the information for a specific network configured in the docker host.
+func (cli *Client) NetworkInspect(ctx context.Context, networkID string, options types.NetworkInspectOptions) (types.NetworkResource, error) {
+	networkResource, _, err := cli.NetworkInspectWithRaw(ctx, networkID, options)
+	return networkResource, err
+}
+
+// NetworkInspectWithRaw returns the information for a specific network configured in the docker host and its raw representation.
+func (cli *Client) NetworkInspectWithRaw(ctx context.Context, networkID string, options types.NetworkInspectOptions) (types.NetworkResource, []byte, error) {
+	if networkID == "" {
+		return types.NetworkResource{}, nil, objectNotFoundError{object: "network", id: networkID}
+	}
+	var (
+		networkResource types.NetworkResource
+		resp            serverResponse
+		err             error
+	)
+	query := url.Values{}
+	if options.Verbose {
+		query.Set("verbose", "true")
+	}
+	if options.Scope != "" {
+		query.Set("scope", options.Scope)
+	}
+	resp, err = cli.get(ctx, "/networks/"+networkID, query, nil)
+	defer ensureReaderClosed(resp)
+	if err != nil {
+		return networkResource, nil, wrapResponseError(err, resp, "network", networkID)
+	}
+
+	body, err := io.ReadAll(resp.body)
+	if err != nil {
+		return networkResource, nil, err
+	}
+	rdr := bytes.NewReader(body)
+	err = json.NewDecoder(rdr).Decode(&networkResource)
+	return networkResource, body, err
+}
diff --git a/vendor/github.com/docker/docker/client/network_list.go b/vendor/github.com/docker/docker/client/network_list.go
new file mode 100644
index 0000000000000..ed2acb55711dd
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/network_list.go
@@ -0,0 +1,32 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"encoding/json"
+	"net/url"
+
+	"github.com/docker/docker/api/types"
+	"github.com/docker/docker/api/types/filters"
+)
+
+// NetworkList returns the list of networks configured in the docker host.
+func (cli *Client) NetworkList(ctx context.Context, options types.NetworkListOptions) ([]types.NetworkResource, error) {
+	query := url.Values{}
+	if options.Filters.Len() > 0 {
+		//nolint:staticcheck // ignore SA1019 for old code
+		filterJSON, err := filters.ToParamWithVersion(cli.version, options.Filters)
+		if err != nil {
+			return nil, err
+		}
+
+		query.Set("filters", filterJSON)
+	}
+	var networkResources []types.NetworkResource
+	resp, err := cli.get(ctx, "/networks", query, nil)
+	defer ensureReaderClosed(resp)
+	if err != nil {
+		return networkResources, err
+	}
+	err = json.NewDecoder(resp.body).Decode(&networkResources)
+	return networkResources, err
+}
diff --git a/vendor/github.com/docker/docker/client/network_prune.go b/vendor/github.com/docker/docker/client/network_prune.go
new file mode 100644
index 0000000000000..cebb18821925e
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/network_prune.go
@@ -0,0 +1,36 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"encoding/json"
+	"fmt"
+
+	"github.com/docker/docker/api/types"
+	"github.com/docker/docker/api/types/filters"
+)
+
+// NetworksPrune requests the daemon to delete unused networks
+func (cli *Client) NetworksPrune(ctx context.Context, pruneFilters filters.Args) (types.NetworksPruneReport, error) {
+	var report types.NetworksPruneReport
+
+	if err := cli.NewVersionError("1.25", "network prune"); err != nil {
+		return report, err
+	}
+
+	query, err := getFiltersQuery(pruneFilters)
+	if err != nil {
+		return report, err
+	}
+
+	serverResp, err := cli.post(ctx, "/networks/prune", query, nil, nil)
+	defer ensureReaderClosed(serverResp)
+	if err != nil {
+		return report, err
+	}
+
+	if err := json.NewDecoder(serverResp.body).Decode(&report); err != nil {
+		return report, fmt.Errorf("Error retrieving network prune report: %v", err)
+	}
+
+	return report, nil
+}
diff --git a/vendor/github.com/docker/docker/client/network_remove.go b/vendor/github.com/docker/docker/client/network_remove.go
new file mode 100644
index 0000000000000..e71b16d86929a
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/network_remove.go
@@ -0,0 +1,10 @@
+package client // import "github.com/docker/docker/client"
+
+import "context"
+
+// NetworkRemove removes an existent network from the docker host.
+func (cli *Client) NetworkRemove(ctx context.Context, networkID string) error {
+	resp, err := cli.delete(ctx, "/networks/"+networkID, nil, nil)
+	defer ensureReaderClosed(resp)
+	return wrapResponseError(err, resp, "network", networkID)
+}
diff --git a/vendor/github.com/docker/docker/client/node_inspect.go b/vendor/github.com/docker/docker/client/node_inspect.go
new file mode 100644
index 0000000000000..b58db528567bf
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/node_inspect.go
@@ -0,0 +1,32 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"bytes"
+	"context"
+	"encoding/json"
+	"io"
+
+	"github.com/docker/docker/api/types/swarm"
+)
+
+// NodeInspectWithRaw returns the node information.
+func (cli *Client) NodeInspectWithRaw(ctx context.Context, nodeID string) (swarm.Node, []byte, error) {
+	if nodeID == "" {
+		return swarm.Node{}, nil, objectNotFoundError{object: "node", id: nodeID}
+	}
+	serverResp, err := cli.get(ctx, "/nodes/"+nodeID, nil, nil)
+	defer ensureReaderClosed(serverResp)
+	if err != nil {
+		return swarm.Node{}, nil, wrapResponseError(err, serverResp, "node", nodeID)
+	}
+
+	body, err := io.ReadAll(serverResp.body)
+	if err != nil {
+		return swarm.Node{}, nil, err
+	}
+
+	var response swarm.Node
+	rdr := bytes.NewReader(body)
+	err = json.NewDecoder(rdr).Decode(&response)
+	return response, body, err
+}
diff --git a/vendor/github.com/docker/docker/client/node_list.go b/vendor/github.com/docker/docker/client/node_list.go
new file mode 100644
index 0000000000000..c212906bc718e
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/node_list.go
@@ -0,0 +1,36 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"encoding/json"
+	"net/url"
+
+	"github.com/docker/docker/api/types"
+	"github.com/docker/docker/api/types/filters"
+	"github.com/docker/docker/api/types/swarm"
+)
+
+// NodeList returns the list of nodes.
+func (cli *Client) NodeList(ctx context.Context, options types.NodeListOptions) ([]swarm.Node, error) {
+	query := url.Values{}
+
+	if options.Filters.Len() > 0 {
+		filterJSON, err := filters.ToJSON(options.Filters)
+
+		if err != nil {
+			return nil, err
+		}
+
+		query.Set("filters", filterJSON)
+	}
+
+	resp, err := cli.get(ctx, "/nodes", query, nil)
+	defer ensureReaderClosed(resp)
+	if err != nil {
+		return nil, err
+	}
+
+	var nodes []swarm.Node
+	err = json.NewDecoder(resp.body).Decode(&nodes)
+	return nodes, err
+}
diff --git a/vendor/github.com/docker/docker/client/node_remove.go b/vendor/github.com/docker/docker/client/node_remove.go
new file mode 100644
index 0000000000000..03ab878097418
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/node_remove.go
@@ -0,0 +1,20 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"net/url"
+
+	"github.com/docker/docker/api/types"
+)
+
+// NodeRemove removes a Node.
+func (cli *Client) NodeRemove(ctx context.Context, nodeID string, options types.NodeRemoveOptions) error {
+	query := url.Values{}
+	if options.Force {
+		query.Set("force", "1")
+	}
+
+	resp, err := cli.delete(ctx, "/nodes/"+nodeID, query, nil)
+	defer ensureReaderClosed(resp)
+	return wrapResponseError(err, resp, "node", nodeID)
+}
diff --git a/vendor/github.com/docker/docker/client/node_update.go b/vendor/github.com/docker/docker/client/node_update.go
new file mode 100644
index 0000000000000..de32a617fb016
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/node_update.go
@@ -0,0 +1,18 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"net/url"
+	"strconv"
+
+	"github.com/docker/docker/api/types/swarm"
+)
+
+// NodeUpdate updates a Node.
+func (cli *Client) NodeUpdate(ctx context.Context, nodeID string, version swarm.Version, node swarm.NodeSpec) error {
+	query := url.Values{}
+	query.Set("version", strconv.FormatUint(version.Index, 10))
+	resp, err := cli.post(ctx, "/nodes/"+nodeID+"/update", query, node, nil)
+	ensureReaderClosed(resp)
+	return err
+}
diff --git a/vendor/github.com/docker/docker/client/options.go b/vendor/github.com/docker/docker/client/options.go
new file mode 100644
index 0000000000000..6f77f0955f698
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/options.go
@@ -0,0 +1,172 @@
+package client
+
+import (
+	"context"
+	"net"
+	"net/http"
+	"os"
+	"path/filepath"
+	"time"
+
+	"github.com/docker/go-connections/sockets"
+	"github.com/docker/go-connections/tlsconfig"
+	"github.com/pkg/errors"
+)
+
+// Opt is a configuration option to initialize a client
+type Opt func(*Client) error
+
+// FromEnv configures the client with values from environment variables.
+//
+// Supported environment variables:
+// DOCKER_HOST to set the url to the docker server.
+// DOCKER_API_VERSION to set the version of the API to reach, leave empty for latest.
+// DOCKER_CERT_PATH to load the TLS certificates from.
+// DOCKER_TLS_VERIFY to enable or disable TLS verification, off by default.
+func FromEnv(c *Client) error {
+	if dockerCertPath := os.Getenv("DOCKER_CERT_PATH"); dockerCertPath != "" {
+		options := tlsconfig.Options{
+			CAFile:             filepath.Join(dockerCertPath, "ca.pem"),
+			CertFile:           filepath.Join(dockerCertPath, "cert.pem"),
+			KeyFile:            filepath.Join(dockerCertPath, "key.pem"),
+			InsecureSkipVerify: os.Getenv("DOCKER_TLS_VERIFY") == "",
+		}
+		tlsc, err := tlsconfig.Client(options)
+		if err != nil {
+			return err
+		}
+
+		c.client = &http.Client{
+			Transport:     &http.Transport{TLSClientConfig: tlsc},
+			CheckRedirect: CheckRedirect,
+		}
+	}
+
+	if host := os.Getenv("DOCKER_HOST"); host != "" {
+		if err := WithHost(host)(c); err != nil {
+			return err
+		}
+	}
+
+	if version := os.Getenv("DOCKER_API_VERSION"); version != "" {
+		if err := WithVersion(version)(c); err != nil {
+			return err
+		}
+	}
+	return nil
+}
+
+// WithDialer applies the dialer.DialContext to the client transport. This can be
+// used to set the Timeout and KeepAlive settings of the client.
+// Deprecated: use WithDialContext
+func WithDialer(dialer *net.Dialer) Opt {
+	return WithDialContext(dialer.DialContext)
+}
+
+// WithDialContext applies the dialer to the client transport. This can be
+// used to set the Timeout and KeepAlive settings of the client.
+func WithDialContext(dialContext func(ctx context.Context, network, addr string) (net.Conn, error)) Opt {
+	return func(c *Client) error {
+		if transport, ok := c.client.Transport.(*http.Transport); ok {
+			transport.DialContext = dialContext
+			return nil
+		}
+		return errors.Errorf("cannot apply dialer to transport: %T", c.client.Transport)
+	}
+}
+
+// WithHost overrides the client host with the specified one.
+func WithHost(host string) Opt {
+	return func(c *Client) error {
+		hostURL, err := ParseHostURL(host)
+		if err != nil {
+			return err
+		}
+		c.host = host
+		c.proto = hostURL.Scheme
+		c.addr = hostURL.Host
+		c.basePath = hostURL.Path
+		if transport, ok := c.client.Transport.(*http.Transport); ok {
+			return sockets.ConfigureTransport(transport, c.proto, c.addr)
+		}
+		return errors.Errorf("cannot apply host to transport: %T", c.client.Transport)
+	}
+}
+
+// WithHTTPClient overrides the client http client with the specified one
+func WithHTTPClient(client *http.Client) Opt {
+	return func(c *Client) error {
+		if client != nil {
+			c.client = client
+		}
+		return nil
+	}
+}
+
+// WithTimeout configures the time limit for requests made by the HTTP client
+func WithTimeout(timeout time.Duration) Opt {
+	return func(c *Client) error {
+		c.client.Timeout = timeout
+		return nil
+	}
+}
+
+// WithHTTPHeaders overrides the client default http headers
+func WithHTTPHeaders(headers map[string]string) Opt {
+	return func(c *Client) error {
+		c.customHTTPHeaders = headers
+		return nil
+	}
+}
+
+// WithScheme overrides the client scheme with the specified one
+func WithScheme(scheme string) Opt {
+	return func(c *Client) error {
+		c.scheme = scheme
+		return nil
+	}
+}
+
+// WithTLSClientConfig applies a tls config to the client transport.
+func WithTLSClientConfig(cacertPath, certPath, keyPath string) Opt {
+	return func(c *Client) error {
+		opts := tlsconfig.Options{
+			CAFile:             cacertPath,
+			CertFile:           certPath,
+			KeyFile:            keyPath,
+			ExclusiveRootPools: true,
+		}
+		config, err := tlsconfig.Client(opts)
+		if err != nil {
+			return errors.Wrap(err, "failed to create tls config")
+		}
+		if transport, ok := c.client.Transport.(*http.Transport); ok {
+			transport.TLSClientConfig = config
+			return nil
+		}
+		return errors.Errorf("cannot apply tls config to transport: %T", c.client.Transport)
+	}
+}
+
+// WithVersion overrides the client version with the specified one. If an empty
+// version is specified, the value will be ignored to allow version negotiation.
+func WithVersion(version string) Opt {
+	return func(c *Client) error {
+		if version != "" {
+			c.version = version
+			c.manualOverride = true
+		}
+		return nil
+	}
+}
+
+// WithAPIVersionNegotiation enables automatic API version negotiation for the client.
+// With this option enabled, the client automatically negotiates the API version
+// to use when making requests. API version negotiation is performed on the first
+// request; subsequent requests will not re-negotiate.
+func WithAPIVersionNegotiation() Opt {
+	return func(c *Client) error {
+		c.negotiateVersion = true
+		return nil
+	}
+}
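Reviewer note: `options.go` above implements the functional-options pattern (`Opt func(*Client) error`), consumed by the package's constructor. A self-contained, stdlib-only sketch of the same pattern (the `client`/`withHost`/`newClient` names and defaults here are illustrative, not the real package's):

```go
package main

import (
	"errors"
	"fmt"
)

// client is a stand-in for the docker Client; fields are illustrative.
type client struct {
	host    string
	version string
}

// opt mirrors the package's Opt type: a function that mutates the
// client under construction and may fail.
type opt func(*client) error

func withHost(host string) opt {
	return func(c *client) error {
		if host == "" {
			return errors.New("empty host")
		}
		c.host = host
		return nil
	}
}

func withVersion(v string) opt {
	return func(c *client) error {
		c.version = v
		return nil
	}
}

// newClient applies each option in order over sensible defaults,
// the way the real constructor applies its Opt arguments.
func newClient(opts ...opt) (*client, error) {
	c := &client{host: "unix:///var/run/docker.sock", version: "1.40"}
	for _, o := range opts {
		if err := o(c); err != nil {
			return nil, err
		}
	}
	return c, nil
}

func main() {
	c, err := newClient(withHost("tcp://10.0.0.5:2376"), withVersion("1.41"))
	fmt.Println(c.host, c.version, err == nil)
}
```

Because each option can return an error, invalid configuration (a bad host URL, an unsupported transport) fails at construction time rather than on the first request.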
diff --git a/vendor/github.com/docker/docker/client/ping.go b/vendor/github.com/docker/docker/client/ping.go
new file mode 100644
index 0000000000000..a9af001ef46b5
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/ping.go
@@ -0,0 +1,66 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"net/http"
+	"path"
+
+	"github.com/docker/docker/api/types"
+	"github.com/docker/docker/errdefs"
+)
+
+// Ping pings the server and returns the value of the "Docker-Experimental",
+// "Builder-Version", "OS-Type" & "API-Version" headers. It attempts to use
+// a HEAD request on the endpoint, but falls back to GET if HEAD is not supported
+// by the daemon.
+func (cli *Client) Ping(ctx context.Context) (types.Ping, error) {
+	var ping types.Ping
+
+	// Using cli.buildRequest() + cli.doRequest() instead of cli.sendRequest()
+	// because ping requests are used during API version negotiation, so we want
+	// to hit the non-versioned /_ping endpoint, not /v1.xx/_ping
+	req, err := cli.buildRequest(http.MethodHead, path.Join(cli.basePath, "/_ping"), nil, nil)
+	if err != nil {
+		return ping, err
+	}
+	serverResp, err := cli.doRequest(ctx, req)
+	if err == nil {
+		defer ensureReaderClosed(serverResp)
+		switch serverResp.statusCode {
+		case http.StatusOK, http.StatusInternalServerError:
+			// Server handled the request, so parse the response
+			return parsePingResponse(cli, serverResp)
+		}
+	} else if IsErrConnectionFailed(err) {
+		return ping, err
+	}
+
+	req, err = cli.buildRequest(http.MethodGet, path.Join(cli.basePath, "/_ping"), nil, nil)
+	if err != nil {
+		return ping, err
+	}
+	serverResp, err = cli.doRequest(ctx, req)
+	defer ensureReaderClosed(serverResp)
+	if err != nil {
+		return ping, err
+	}
+	return parsePingResponse(cli, serverResp)
+}
+
+func parsePingResponse(cli *Client, resp serverResponse) (types.Ping, error) {
+	var ping types.Ping
+	if resp.header == nil {
+		err := cli.checkResponseErr(resp)
+		return ping, errdefs.FromStatusCode(err, resp.statusCode)
+	}
+	ping.APIVersion = resp.header.Get("API-Version")
+	ping.OSType = resp.header.Get("OSType")
+	if resp.header.Get("Docker-Experimental") == "true" {
+		ping.Experimental = true
+	}
+	if bv := resp.header.Get("Builder-Version"); bv != "" {
+		ping.BuilderVersion = types.BuilderVersion(bv)
+	}
+	err := cli.checkResponseErr(resp)
+	return ping, errdefs.FromStatusCode(err, resp.statusCode)
+}
diff --git a/vendor/github.com/docker/docker/client/plugin_create.go b/vendor/github.com/docker/docker/client/plugin_create.go
new file mode 100644
index 0000000000000..b95dbaf68633b
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/plugin_create.go
@@ -0,0 +1,23 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"io"
+	"net/http"
+	"net/url"
+
+	"github.com/docker/docker/api/types"
+)
+
+// PluginCreate creates a plugin
+func (cli *Client) PluginCreate(ctx context.Context, createContext io.Reader, createOptions types.PluginCreateOptions) error {
+	headers := http.Header(make(map[string][]string))
+	headers.Set("Content-Type", "application/x-tar")
+
+	query := url.Values{}
+	query.Set("name", createOptions.RepoName)
+
+	resp, err := cli.postRaw(ctx, "/plugins/create", query, createContext, headers)
+	ensureReaderClosed(resp)
+	return err
+}
diff --git a/vendor/github.com/docker/docker/client/plugin_disable.go b/vendor/github.com/docker/docker/client/plugin_disable.go
new file mode 100644
index 0000000000000..01f6574f9529b
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/plugin_disable.go
@@ -0,0 +1,19 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"net/url"
+
+	"github.com/docker/docker/api/types"
+)
+
+// PluginDisable disables a plugin
+func (cli *Client) PluginDisable(ctx context.Context, name string, options types.PluginDisableOptions) error {
+	query := url.Values{}
+	if options.Force {
+		query.Set("force", "1")
+	}
+	resp, err := cli.post(ctx, "/plugins/"+name+"/disable", query, nil, nil)
+	ensureReaderClosed(resp)
+	return err
+}
diff --git a/vendor/github.com/docker/docker/client/plugin_enable.go b/vendor/github.com/docker/docker/client/plugin_enable.go
new file mode 100644
index 0000000000000..736da48bd1014
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/plugin_enable.go
@@ -0,0 +1,19 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"net/url"
+	"strconv"
+
+	"github.com/docker/docker/api/types"
+)
+
+// PluginEnable enables a plugin
+func (cli *Client) PluginEnable(ctx context.Context, name string, options types.PluginEnableOptions) error {
+	query := url.Values{}
+	query.Set("timeout", strconv.Itoa(options.Timeout))
+
+	resp, err := cli.post(ctx, "/plugins/"+name+"/enable", query, nil, nil)
+	ensureReaderClosed(resp)
+	return err
+}
diff --git a/vendor/github.com/docker/docker/client/plugin_inspect.go b/vendor/github.com/docker/docker/client/plugin_inspect.go
new file mode 100644
index 0000000000000..4a90bec51a0cb
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/plugin_inspect.go
@@ -0,0 +1,31 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"bytes"
+	"context"
+	"encoding/json"
+	"io"
+
+	"github.com/docker/docker/api/types"
+)
+
+// PluginInspectWithRaw inspects an existing plugin
+func (cli *Client) PluginInspectWithRaw(ctx context.Context, name string) (*types.Plugin, []byte, error) {
+	if name == "" {
+		return nil, nil, objectNotFoundError{object: "plugin", id: name}
+	}
+	resp, err := cli.get(ctx, "/plugins/"+name+"/json", nil, nil)
+	defer ensureReaderClosed(resp)
+	if err != nil {
+		return nil, nil, wrapResponseError(err, resp, "plugin", name)
+	}
+
+	body, err := io.ReadAll(resp.body)
+	if err != nil {
+		return nil, nil, err
+	}
+	var p types.Plugin
+	rdr := bytes.NewReader(body)
+	err = json.NewDecoder(rdr).Decode(&p)
+	return &p, body, err
+}
diff --git a/vendor/github.com/docker/docker/client/plugin_install.go b/vendor/github.com/docker/docker/client/plugin_install.go
new file mode 100644
index 0000000000000..012afe61cacf0
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/plugin_install.go
@@ -0,0 +1,113 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"encoding/json"
+	"io"
+	"net/url"
+
+	"github.com/docker/distribution/reference"
+	"github.com/docker/docker/api/types"
+	"github.com/docker/docker/errdefs"
+	"github.com/pkg/errors"
+)
+
+// PluginInstall installs a plugin
+func (cli *Client) PluginInstall(ctx context.Context, name string, options types.PluginInstallOptions) (rc io.ReadCloser, err error) {
+	query := url.Values{}
+	if _, err := reference.ParseNormalizedNamed(options.RemoteRef); err != nil {
+		return nil, errors.Wrap(err, "invalid remote reference")
+	}
+	query.Set("remote", options.RemoteRef)
+
+	privileges, err := cli.checkPluginPermissions(ctx, query, options)
+	if err != nil {
+		return nil, err
+	}
+
+	// set name for plugin pull, if empty should default to remote reference
+	query.Set("name", name)
+
+	resp, err := cli.tryPluginPull(ctx, query, privileges, options.RegistryAuth)
+	if err != nil {
+		return nil, err
+	}
+
+	name = resp.header.Get("Docker-Plugin-Name")
+
+	pr, pw := io.Pipe()
+	go func() { // todo: the client should probably be designed more around the actual api
+		_, err := io.Copy(pw, resp.body)
+		if err != nil {
+			pw.CloseWithError(err)
+			return
+		}
+		defer func() {
+			if err != nil {
+				delResp, _ := cli.delete(ctx, "/plugins/"+name, nil, nil)
+				ensureReaderClosed(delResp)
+			}
+		}()
+		if len(options.Args) > 0 {
+			if err := cli.PluginSet(ctx, name, options.Args); err != nil {
+				pw.CloseWithError(err)
+				return
+			}
+		}
+
+		if options.Disabled {
+			pw.Close()
+			return
+		}
+
+		enableErr := cli.PluginEnable(ctx, name, types.PluginEnableOptions{Timeout: 0})
+		pw.CloseWithError(enableErr)
+	}()
+	return pr, nil
+}
+
+func (cli *Client) tryPluginPrivileges(ctx context.Context, query url.Values, registryAuth string) (serverResponse, error) {
+	headers := map[string][]string{"X-Registry-Auth": {registryAuth}}
+	return cli.get(ctx, "/plugins/privileges", query, headers)
+}
+
+func (cli *Client) tryPluginPull(ctx context.Context, query url.Values, privileges types.PluginPrivileges, registryAuth string) (serverResponse, error) {
+	headers := map[string][]string{"X-Registry-Auth": {registryAuth}}
+	return cli.post(ctx, "/plugins/pull", query, privileges, headers)
+}
+
+func (cli *Client) checkPluginPermissions(ctx context.Context, query url.Values, options types.PluginInstallOptions) (types.PluginPrivileges, error) {
+	resp, err := cli.tryPluginPrivileges(ctx, query, options.RegistryAuth)
+	if errdefs.IsUnauthorized(err) && options.PrivilegeFunc != nil {
+		// todo: do inspect before to check existing name before checking privileges
+		newAuthHeader, privilegeErr := options.PrivilegeFunc()
+		if privilegeErr != nil {
+			ensureReaderClosed(resp)
+			return nil, privilegeErr
+		}
+		options.RegistryAuth = newAuthHeader
+		resp, err = cli.tryPluginPrivileges(ctx, query, options.RegistryAuth)
+	}
+	if err != nil {
+		ensureReaderClosed(resp)
+		return nil, err
+	}
+
+	var privileges types.PluginPrivileges
+	if err := json.NewDecoder(resp.body).Decode(&privileges); err != nil {
+		ensureReaderClosed(resp)
+		return nil, err
+	}
+	ensureReaderClosed(resp)
+
+	if !options.AcceptAllPermissions && options.AcceptPermissionsFunc != nil && len(privileges) > 0 {
+		accept, err := options.AcceptPermissionsFunc(privileges)
+		if err != nil {
+			return nil, err
+		}
+		if !accept {
+			return nil, pluginPermissionDenied{options.RemoteRef}
+		}
+	}
+	return privileges, nil
+}
diff --git a/vendor/github.com/docker/docker/client/plugin_list.go b/vendor/github.com/docker/docker/client/plugin_list.go
new file mode 100644
index 0000000000000..cf1935e2f5ee5
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/plugin_list.go
@@ -0,0 +1,33 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"encoding/json"
+	"net/url"
+
+	"github.com/docker/docker/api/types"
+	"github.com/docker/docker/api/types/filters"
+)
+
+// PluginList returns the installed plugins
+func (cli *Client) PluginList(ctx context.Context, filter filters.Args) (types.PluginsListResponse, error) {
+	var plugins types.PluginsListResponse
+	query := url.Values{}
+
+	if filter.Len() > 0 {
+		//nolint:staticcheck // ignore SA1019 for old code
+		filterJSON, err := filters.ToParamWithVersion(cli.version, filter)
+		if err != nil {
+			return plugins, err
+		}
+		query.Set("filters", filterJSON)
+	}
+	resp, err := cli.get(ctx, "/plugins", query, nil)
+	defer ensureReaderClosed(resp)
+	if err != nil {
+		return plugins, wrapResponseError(err, resp, "plugin", "")
+	}
+
+	err = json.NewDecoder(resp.body).Decode(&plugins)
+	return plugins, err
+}
diff --git a/vendor/github.com/docker/docker/client/plugin_push.go b/vendor/github.com/docker/docker/client/plugin_push.go
new file mode 100644
index 0000000000000..d20bfe8447909
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/plugin_push.go
@@ -0,0 +1,16 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"io"
+)
+
+// PluginPush pushes a plugin to a registry
+func (cli *Client) PluginPush(ctx context.Context, name string, registryAuth string) (io.ReadCloser, error) {
+	headers := map[string][]string{"X-Registry-Auth": {registryAuth}}
+	resp, err := cli.post(ctx, "/plugins/"+name+"/push", nil, nil, headers)
+	if err != nil {
+		return nil, err
+	}
+	return resp.body, nil
+}
diff --git a/vendor/github.com/docker/docker/client/plugin_remove.go b/vendor/github.com/docker/docker/client/plugin_remove.go
new file mode 100644
index 0000000000000..51ca1040d6d29
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/plugin_remove.go
@@ -0,0 +1,20 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"net/url"
+
+	"github.com/docker/docker/api/types"
+)
+
+// PluginRemove removes a plugin
+func (cli *Client) PluginRemove(ctx context.Context, name string, options types.PluginRemoveOptions) error {
+	query := url.Values{}
+	if options.Force {
+		query.Set("force", "1")
+	}
+
+	resp, err := cli.delete(ctx, "/plugins/"+name, query, nil)
+	defer ensureReaderClosed(resp)
+	return wrapResponseError(err, resp, "plugin", name)
+}
diff --git a/vendor/github.com/docker/docker/client/plugin_set.go b/vendor/github.com/docker/docker/client/plugin_set.go
new file mode 100644
index 0000000000000..dcf5752ca2b1c
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/plugin_set.go
@@ -0,0 +1,12 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+)
+
+// PluginSet modifies settings for an existing plugin
+func (cli *Client) PluginSet(ctx context.Context, name string, args []string) error {
+	resp, err := cli.post(ctx, "/plugins/"+name+"/set", nil, args, nil)
+	ensureReaderClosed(resp)
+	return err
+}
diff --git a/vendor/github.com/docker/docker/client/plugin_upgrade.go b/vendor/github.com/docker/docker/client/plugin_upgrade.go
new file mode 100644
index 0000000000000..115cea945ba89
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/plugin_upgrade.go
@@ -0,0 +1,39 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"io"
+	"net/url"
+
+	"github.com/docker/distribution/reference"
+	"github.com/docker/docker/api/types"
+	"github.com/pkg/errors"
+)
+
+// PluginUpgrade upgrades a plugin
+func (cli *Client) PluginUpgrade(ctx context.Context, name string, options types.PluginInstallOptions) (rc io.ReadCloser, err error) {
+	if err := cli.NewVersionError("1.26", "plugin upgrade"); err != nil {
+		return nil, err
+	}
+	query := url.Values{}
+	if _, err := reference.ParseNormalizedNamed(options.RemoteRef); err != nil {
+		return nil, errors.Wrap(err, "invalid remote reference")
+	}
+	query.Set("remote", options.RemoteRef)
+
+	privileges, err := cli.checkPluginPermissions(ctx, query, options)
+	if err != nil {
+		return nil, err
+	}
+
+	resp, err := cli.tryPluginUpgrade(ctx, query, privileges, name, options.RegistryAuth)
+	if err != nil {
+		return nil, err
+	}
+	return resp.body, nil
+}
+
+func (cli *Client) tryPluginUpgrade(ctx context.Context, query url.Values, privileges types.PluginPrivileges, name, registryAuth string) (serverResponse, error) {
+	headers := map[string][]string{"X-Registry-Auth": {registryAuth}}
+	return cli.post(ctx, "/plugins/"+name+"/upgrade", query, privileges, headers)
+}
diff --git a/vendor/github.com/docker/docker/client/request.go b/vendor/github.com/docker/docker/client/request.go
new file mode 100644
index 0000000000000..d3d9a3fe64bac
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/request.go
@@ -0,0 +1,264 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"bytes"
+	"context"
+	"encoding/json"
+	"fmt"
+	"io"
+	"net"
+	"net/http"
+	"net/url"
+	"os"
+	"strings"
+
+	"github.com/docker/docker/api/types"
+	"github.com/docker/docker/api/types/versions"
+	"github.com/docker/docker/errdefs"
+	"github.com/pkg/errors"
+)
+
+// serverResponse is a wrapper for http API responses.
+type serverResponse struct {
+	body       io.ReadCloser
+	header     http.Header
+	statusCode int
+	reqURL     *url.URL
+}
+
+// head sends an http request to the docker API using the method HEAD.
+func (cli *Client) head(ctx context.Context, path string, query url.Values, headers map[string][]string) (serverResponse, error) {
+	return cli.sendRequest(ctx, http.MethodHead, path, query, nil, headers)
+}
+
+// get sends an http request to the docker API using the method GET with a specific Go context.
+func (cli *Client) get(ctx context.Context, path string, query url.Values, headers map[string][]string) (serverResponse, error) {
+	return cli.sendRequest(ctx, http.MethodGet, path, query, nil, headers)
+}
+
+// post sends an http request to the docker API using the method POST with a specific Go context.
+func (cli *Client) post(ctx context.Context, path string, query url.Values, obj interface{}, headers map[string][]string) (serverResponse, error) {
+	body, headers, err := encodeBody(obj, headers)
+	if err != nil {
+		return serverResponse{}, err
+	}
+	return cli.sendRequest(ctx, http.MethodPost, path, query, body, headers)
+}
+
+func (cli *Client) postRaw(ctx context.Context, path string, query url.Values, body io.Reader, headers map[string][]string) (serverResponse, error) {
+	return cli.sendRequest(ctx, http.MethodPost, path, query, body, headers)
+}
+
+// putRaw sends an http request to the docker API using the method PUT.
+func (cli *Client) putRaw(ctx context.Context, path string, query url.Values, body io.Reader, headers map[string][]string) (serverResponse, error) {
+	return cli.sendRequest(ctx, http.MethodPut, path, query, body, headers)
+}
+
+// delete sends an http request to the docker API using the method DELETE.
+func (cli *Client) delete(ctx context.Context, path string, query url.Values, headers map[string][]string) (serverResponse, error) {
+	return cli.sendRequest(ctx, http.MethodDelete, path, query, nil, headers)
+}
+
+type headers map[string][]string
+
+func encodeBody(obj interface{}, headers headers) (io.Reader, headers, error) {
+	if obj == nil {
+		return nil, headers, nil
+	}
+
+	body, err := encodeData(obj)
+	if err != nil {
+		return nil, headers, err
+	}
+	if headers == nil {
+		headers = make(map[string][]string)
+	}
+	headers["Content-Type"] = []string{"application/json"}
+	return body, headers, nil
+}
+
+func (cli *Client) buildRequest(method, path string, body io.Reader, headers headers) (*http.Request, error) {
+	expectedPayload := (method == http.MethodPost || method == http.MethodPut)
+	if expectedPayload && body == nil {
+		body = bytes.NewReader([]byte{})
+	}
+
+	req, err := http.NewRequest(method, path, body)
+	if err != nil {
+		return nil, err
+	}
+	req = cli.addHeaders(req, headers)
+
+	if cli.proto == "unix" || cli.proto == "npipe" {
+		// For local communications, it doesn't matter what the host is. We just
+		// need a valid and meaningful host name. (See #189)
+		req.Host = "docker"
+	}
+
+	req.URL.Host = cli.addr
+	req.URL.Scheme = cli.scheme
+
+	if expectedPayload && req.Header.Get("Content-Type") == "" {
+		req.Header.Set("Content-Type", "text/plain")
+	}
+	return req, nil
+}
+
+func (cli *Client) sendRequest(ctx context.Context, method, path string, query url.Values, body io.Reader, headers headers) (serverResponse, error) {
+	req, err := cli.buildRequest(method, cli.getAPIPath(ctx, path, query), body, headers)
+	if err != nil {
+		return serverResponse{}, err
+	}
+	resp, err := cli.doRequest(ctx, req)
+	if err != nil {
+		return resp, errdefs.FromStatusCode(err, resp.statusCode)
+	}
+	err = cli.checkResponseErr(resp)
+	return resp, errdefs.FromStatusCode(err, resp.statusCode)
+}
+
+func (cli *Client) doRequest(ctx context.Context, req *http.Request) (serverResponse, error) {
+	serverResp := serverResponse{statusCode: -1, reqURL: req.URL}
+
+	req = req.WithContext(ctx)
+	resp, err := cli.client.Do(req)
+	if err != nil {
+		if cli.scheme != "https" && strings.Contains(err.Error(), "malformed HTTP response") {
+			return serverResp, fmt.Errorf("%v.\n* Are you trying to connect to a TLS-enabled daemon without TLS?", err)
+		}
+
+		if cli.scheme == "https" && strings.Contains(err.Error(), "bad certificate") {
+			return serverResp, errors.Wrap(err, "the server probably has client authentication (--tlsverify) enabled; check your TLS client certification settings")
+		}
+
+		// Don't decorate context sentinel errors; users may be comparing to
+		// them directly.
+		if errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
+			return serverResp, err
+		}
+
+		if nErr, ok := err.(*url.Error); ok {
+			if nErr, ok := nErr.Err.(*net.OpError); ok {
+				if os.IsPermission(nErr.Err) {
+					return serverResp, errors.Wrapf(err, "permission denied while trying to connect to the Docker daemon socket at %v", cli.host)
+				}
+			}
+		}
+
+		if err, ok := err.(net.Error); ok {
+			if err.Timeout() {
+				return serverResp, ErrorConnectionFailed(cli.host)
+			}
+			if strings.Contains(err.Error(), "connection refused") || strings.Contains(err.Error(), "dial unix") {
+				return serverResp, ErrorConnectionFailed(cli.host)
+			}
+		}
+
+		// Although there's not a strongly typed error for this in go-winio,
+		// lots of people are using the default configuration for the docker
+		// daemon on Windows where the daemon is listening on a named pipe
+		// `//./pipe/docker_engine`, and the client must be running elevated.
+		// Give users a clue rather than the not-overly useful message
+		// such as `error during connect: Get http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.26/info:
+		// open //./pipe/docker_engine: The system cannot find the file specified.`.
+		// Note we can't string compare "The system cannot find the file specified" as
+		// this is localised - for example in French the error would be
+		// `open //./pipe/docker_engine: Le fichier spécifié est introuvable.`
+		if strings.Contains(err.Error(), `open //./pipe/docker_engine`) {
+			// Checks if client is running with elevated privileges
+			if f, elevatedErr := os.Open("\\\\.\\PHYSICALDRIVE0"); elevatedErr == nil {
+				f.Close()
+				err = errors.Wrap(err, "this error may indicate that the docker daemon is not running")
+			} else {
+				err = errors.Wrap(err, "in the default daemon configuration on Windows, the docker client must be run with elevated privileges to connect")
+			}
+		}
+
+		return serverResp, errors.Wrap(err, "error during connect")
+	}
+
+	if resp != nil {
+		serverResp.statusCode = resp.StatusCode
+		serverResp.body = resp.Body
+		serverResp.header = resp.Header
+	}
+	return serverResp, nil
+}
+
+func (cli *Client) checkResponseErr(serverResp serverResponse) error {
+	if serverResp.statusCode >= 200 && serverResp.statusCode < 400 {
+		return nil
+	}
+
+	var body []byte
+	var err error
+	if serverResp.body != nil {
+		bodyMax := 1 * 1024 * 1024 // 1 MiB
+		bodyR := &io.LimitedReader{
+			R: serverResp.body,
+			N: int64(bodyMax),
+		}
+		body, err = io.ReadAll(bodyR)
+		if err != nil {
+			return err
+		}
+		if bodyR.N == 0 {
+			return fmt.Errorf("request returned %s with a message (> %d bytes) for API route and version %s, check if the server supports the requested API version", http.StatusText(serverResp.statusCode), bodyMax, serverResp.reqURL)
+		}
+	}
+	if len(body) == 0 {
+		return fmt.Errorf("request returned %s for API route and version %s, check if the server supports the requested API version", http.StatusText(serverResp.statusCode), serverResp.reqURL)
+	}
+
+	var ct string
+	if serverResp.header != nil {
+		ct = serverResp.header.Get("Content-Type")
+	}
+
+	var errorMessage string
+	if (cli.version == "" || versions.GreaterThan(cli.version, "1.23")) && ct == "application/json" {
+		var errorResponse types.ErrorResponse
+		if err := json.Unmarshal(body, &errorResponse); err != nil {
+			return errors.Wrap(err, "Error reading JSON")
+		}
+		errorMessage = strings.TrimSpace(errorResponse.Message)
+	} else {
+		errorMessage = strings.TrimSpace(string(body))
+	}
+
+	return errors.Wrap(errors.New(errorMessage), "Error response from daemon")
+}
+
+func (cli *Client) addHeaders(req *http.Request, headers headers) *http.Request {
+	// Add the CLI config's HTTP headers BEFORE setting the Docker headers,
+	// so the user cannot override the headers the client relies on.
+	for k, v := range cli.customHTTPHeaders {
+		if versions.LessThan(cli.version, "1.25") && k == "User-Agent" {
+			continue
+		}
+		req.Header.Set(k, v)
+	}
+
+	for k, v := range headers {
+		req.Header[k] = v
+	}
+	return req
+}
+
+func encodeData(data interface{}) (*bytes.Buffer, error) {
+	params := bytes.NewBuffer(nil)
+	if data != nil {
+		if err := json.NewEncoder(params).Encode(data); err != nil {
+			return nil, err
+		}
+	}
+	return params, nil
+}
+
+func ensureReaderClosed(response serverResponse) {
+	if response.body != nil {
+		// Drain up to 512 bytes and close the body to let the Transport reuse the connection
+		io.CopyN(io.Discard, response.body, 512)
+		response.body.Close()
+	}
+}
diff --git a/vendor/github.com/docker/docker/client/secret_create.go b/vendor/github.com/docker/docker/client/secret_create.go
new file mode 100644
index 0000000000000..fd5b914136c3a
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/secret_create.go
@@ -0,0 +1,25 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"encoding/json"
+
+	"github.com/docker/docker/api/types"
+	"github.com/docker/docker/api/types/swarm"
+)
+
+// SecretCreate creates a new Secret.
+func (cli *Client) SecretCreate(ctx context.Context, secret swarm.SecretSpec) (types.SecretCreateResponse, error) {
+	var response types.SecretCreateResponse
+	if err := cli.NewVersionError("1.25", "secret create"); err != nil {
+		return response, err
+	}
+	resp, err := cli.post(ctx, "/secrets/create", nil, secret, nil)
+	defer ensureReaderClosed(resp)
+	if err != nil {
+		return response, err
+	}
+
+	err = json.NewDecoder(resp.body).Decode(&response)
+	return response, err
+}
diff --git a/vendor/github.com/docker/docker/client/secret_inspect.go b/vendor/github.com/docker/docker/client/secret_inspect.go
new file mode 100644
index 0000000000000..c07c9550d448d
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/secret_inspect.go
@@ -0,0 +1,36 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"bytes"
+	"context"
+	"encoding/json"
+	"io"
+
+	"github.com/docker/docker/api/types/swarm"
+)
+
+// SecretInspectWithRaw returns the secret information with raw data
+func (cli *Client) SecretInspectWithRaw(ctx context.Context, id string) (swarm.Secret, []byte, error) {
+	if err := cli.NewVersionError("1.25", "secret inspect"); err != nil {
+		return swarm.Secret{}, nil, err
+	}
+	if id == "" {
+		return swarm.Secret{}, nil, objectNotFoundError{object: "secret", id: id}
+	}
+	resp, err := cli.get(ctx, "/secrets/"+id, nil, nil)
+	defer ensureReaderClosed(resp)
+	if err != nil {
+		return swarm.Secret{}, nil, wrapResponseError(err, resp, "secret", id)
+	}
+
+	body, err := io.ReadAll(resp.body)
+	if err != nil {
+		return swarm.Secret{}, nil, err
+	}
+
+	var secret swarm.Secret
+	rdr := bytes.NewReader(body)
+	err = json.NewDecoder(rdr).Decode(&secret)
+
+	return secret, body, err
+}
diff --git a/vendor/github.com/docker/docker/client/secret_list.go b/vendor/github.com/docker/docker/client/secret_list.go
new file mode 100644
index 0000000000000..a0289c9f440f2
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/secret_list.go
@@ -0,0 +1,38 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"encoding/json"
+	"net/url"
+
+	"github.com/docker/docker/api/types"
+	"github.com/docker/docker/api/types/filters"
+	"github.com/docker/docker/api/types/swarm"
+)
+
+// SecretList returns the list of secrets.
+func (cli *Client) SecretList(ctx context.Context, options types.SecretListOptions) ([]swarm.Secret, error) {
+	if err := cli.NewVersionError("1.25", "secret list"); err != nil {
+		return nil, err
+	}
+	query := url.Values{}
+
+	if options.Filters.Len() > 0 {
+		filterJSON, err := filters.ToJSON(options.Filters)
+		if err != nil {
+			return nil, err
+		}
+
+		query.Set("filters", filterJSON)
+	}
+
+	resp, err := cli.get(ctx, "/secrets", query, nil)
+	defer ensureReaderClosed(resp)
+	if err != nil {
+		return nil, err
+	}
+
+	var secrets []swarm.Secret
+	err = json.NewDecoder(resp.body).Decode(&secrets)
+	return secrets, err
+}
diff --git a/vendor/github.com/docker/docker/client/secret_remove.go b/vendor/github.com/docker/docker/client/secret_remove.go
new file mode 100644
index 0000000000000..c16f55580416d
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/secret_remove.go
@@ -0,0 +1,13 @@
+package client // import "github.com/docker/docker/client"
+
+import "context"
+
+// SecretRemove removes a Secret.
+func (cli *Client) SecretRemove(ctx context.Context, id string) error {
+	if err := cli.NewVersionError("1.25", "secret remove"); err != nil {
+		return err
+	}
+	resp, err := cli.delete(ctx, "/secrets/"+id, nil, nil)
+	defer ensureReaderClosed(resp)
+	return wrapResponseError(err, resp, "secret", id)
+}
diff --git a/vendor/github.com/docker/docker/client/secret_update.go b/vendor/github.com/docker/docker/client/secret_update.go
new file mode 100644
index 0000000000000..164256bbc15b5
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/secret_update.go
@@ -0,0 +1,21 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"net/url"
+	"strconv"
+
+	"github.com/docker/docker/api/types/swarm"
+)
+
+// SecretUpdate attempts to update a Secret
+func (cli *Client) SecretUpdate(ctx context.Context, id string, version swarm.Version, secret swarm.SecretSpec) error {
+	if err := cli.NewVersionError("1.25", "secret update"); err != nil {
+		return err
+	}
+	query := url.Values{}
+	query.Set("version", strconv.FormatUint(version.Index, 10))
+	resp, err := cli.post(ctx, "/secrets/"+id+"/update", query, secret, nil)
+	ensureReaderClosed(resp)
+	return err
+}
diff --git a/vendor/github.com/docker/docker/client/service_create.go b/vendor/github.com/docker/docker/client/service_create.go
new file mode 100644
index 0000000000000..e0428bf98b3c3
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/service_create.go
@@ -0,0 +1,178 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"encoding/json"
+	"fmt"
+	"strings"
+
+	"github.com/docker/distribution/reference"
+	"github.com/docker/docker/api/types"
+	"github.com/docker/docker/api/types/swarm"
+	digest "github.com/opencontainers/go-digest"
+	"github.com/pkg/errors"
+)
+
+// ServiceCreate creates a new Service.
+func (cli *Client) ServiceCreate(ctx context.Context, service swarm.ServiceSpec, options types.ServiceCreateOptions) (types.ServiceCreateResponse, error) {
+	var response types.ServiceCreateResponse
+	headers := map[string][]string{
+		"version": {cli.version},
+	}
+
+	if options.EncodedRegistryAuth != "" {
+		headers["X-Registry-Auth"] = []string{options.EncodedRegistryAuth}
+	}
+
+	// Make sure containerSpec is not nil when no runtime is set or the runtime is set to container
+	if service.TaskTemplate.ContainerSpec == nil && (service.TaskTemplate.Runtime == "" || service.TaskTemplate.Runtime == swarm.RuntimeContainer) {
+		service.TaskTemplate.ContainerSpec = &swarm.ContainerSpec{}
+	}
+
+	if err := validateServiceSpec(service); err != nil {
+		return response, err
+	}
+
+	// ensure that the image is tagged
+	var resolveWarning string
+	switch {
+	case service.TaskTemplate.ContainerSpec != nil:
+		if taggedImg := imageWithTagString(service.TaskTemplate.ContainerSpec.Image); taggedImg != "" {
+			service.TaskTemplate.ContainerSpec.Image = taggedImg
+		}
+		if options.QueryRegistry {
+			resolveWarning = resolveContainerSpecImage(ctx, cli, &service.TaskTemplate, options.EncodedRegistryAuth)
+		}
+	case service.TaskTemplate.PluginSpec != nil:
+		if taggedImg := imageWithTagString(service.TaskTemplate.PluginSpec.Remote); taggedImg != "" {
+			service.TaskTemplate.PluginSpec.Remote = taggedImg
+		}
+		if options.QueryRegistry {
+			resolveWarning = resolvePluginSpecRemote(ctx, cli, &service.TaskTemplate, options.EncodedRegistryAuth)
+		}
+	}
+
+	resp, err := cli.post(ctx, "/services/create", nil, service, headers)
+	defer ensureReaderClosed(resp)
+	if err != nil {
+		return response, err
+	}
+
+	err = json.NewDecoder(resp.body).Decode(&response)
+	if resolveWarning != "" {
+		response.Warnings = append(response.Warnings, resolveWarning)
+	}
+
+	return response, err
+}
+
+func resolveContainerSpecImage(ctx context.Context, cli DistributionAPIClient, taskSpec *swarm.TaskSpec, encodedAuth string) string {
+	var warning string
+	if img, imgPlatforms, err := imageDigestAndPlatforms(ctx, cli, taskSpec.ContainerSpec.Image, encodedAuth); err != nil {
+		warning = digestWarning(taskSpec.ContainerSpec.Image)
+	} else {
+		taskSpec.ContainerSpec.Image = img
+		if len(imgPlatforms) > 0 {
+			if taskSpec.Placement == nil {
+				taskSpec.Placement = &swarm.Placement{}
+			}
+			taskSpec.Placement.Platforms = imgPlatforms
+		}
+	}
+	return warning
+}
+
+func resolvePluginSpecRemote(ctx context.Context, cli DistributionAPIClient, taskSpec *swarm.TaskSpec, encodedAuth string) string {
+	var warning string
+	if img, imgPlatforms, err := imageDigestAndPlatforms(ctx, cli, taskSpec.PluginSpec.Remote, encodedAuth); err != nil {
+		warning = digestWarning(taskSpec.PluginSpec.Remote)
+	} else {
+		taskSpec.PluginSpec.Remote = img
+		if len(imgPlatforms) > 0 {
+			if taskSpec.Placement == nil {
+				taskSpec.Placement = &swarm.Placement{}
+			}
+			taskSpec.Placement.Platforms = imgPlatforms
+		}
+	}
+	return warning
+}
+
+func imageDigestAndPlatforms(ctx context.Context, cli DistributionAPIClient, image, encodedAuth string) (string, []swarm.Platform, error) {
+	distributionInspect, err := cli.DistributionInspect(ctx, image, encodedAuth)
+	var platforms []swarm.Platform
+	if err != nil {
+		return "", nil, err
+	}
+
+	imageWithDigest := imageWithDigestString(image, distributionInspect.Descriptor.Digest)
+
+	if len(distributionInspect.Platforms) > 0 {
+		platforms = make([]swarm.Platform, 0, len(distributionInspect.Platforms))
+		for _, p := range distributionInspect.Platforms {
+			// clear architecture field for arm. This is a temporary patch to address
+			// https://github.com/docker/swarmkit/issues/2294. The issue is that while
+			// image manifests report "arm" as the architecture, the node reports
+			// something like "armv7l" (includes the variant), which causes arm images
+			// to stop working with swarm mode. This patch removes the architecture
+			// constraint for arm images to ensure tasks get scheduled.
+			arch := p.Architecture
+			if strings.ToLower(arch) == "arm" {
+				arch = ""
+			}
+			platforms = append(platforms, swarm.Platform{
+				Architecture: arch,
+				OS:           p.OS,
+			})
+		}
+	}
+	return imageWithDigest, platforms, err
+}
+
+// imageWithDigestString takes an image string and a digest, and updates
+// the image string if it didn't originally contain a digest. It returns
+// image unmodified in other situations.
+func imageWithDigestString(image string, dgst digest.Digest) string {
+	namedRef, err := reference.ParseNormalizedNamed(image)
+	if err == nil {
+		if _, isCanonical := namedRef.(reference.Canonical); !isCanonical {
+			// ensure that image gets a default tag if none is provided
+			img, err := reference.WithDigest(namedRef, dgst)
+			if err == nil {
+				return reference.FamiliarString(img)
+			}
+		}
+	}
+	return image
+}
+
+// imageWithTagString takes an image string, and returns a tagged image
+// string, adding a 'latest' tag if one was not provided. It returns an
+// empty string if a canonical reference was provided
+func imageWithTagString(image string) string {
+	namedRef, err := reference.ParseNormalizedNamed(image)
+	if err == nil {
+		return reference.FamiliarString(reference.TagNameOnly(namedRef))
+	}
+	return ""
+}
+
+// digestWarning constructs a formatted warning string using the
+// image name that could not be pinned by digest. The formatting
+// is hardcoded, but could be made smarter in the future
+func digestWarning(image string) string {
+	return fmt.Sprintf("image %s could not be accessed on a registry to record\nits digest. Each node will access %s independently,\npossibly leading to different nodes running different\nversions of the image.\n", image, image)
+}
+
+func validateServiceSpec(s swarm.ServiceSpec) error {
+	if s.TaskTemplate.ContainerSpec != nil && s.TaskTemplate.PluginSpec != nil {
+		return errors.New("must not specify both a container spec and a plugin spec in the task template")
+	}
+	if s.TaskTemplate.PluginSpec != nil && s.TaskTemplate.Runtime != swarm.RuntimePlugin {
+		return errors.New("mismatched runtime with plugin spec")
+	}
+	if s.TaskTemplate.ContainerSpec != nil && (s.TaskTemplate.Runtime != "" && s.TaskTemplate.Runtime != swarm.RuntimeContainer) {
+		return errors.New("mismatched runtime with container spec")
+	}
+	return nil
+}
diff --git a/vendor/github.com/docker/docker/client/service_inspect.go b/vendor/github.com/docker/docker/client/service_inspect.go
new file mode 100644
index 0000000000000..c5368bab1e3c8
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/service_inspect.go
@@ -0,0 +1,37 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"bytes"
+	"context"
+	"encoding/json"
+	"fmt"
+	"io"
+	"net/url"
+
+	"github.com/docker/docker/api/types"
+	"github.com/docker/docker/api/types/swarm"
+)
+
+// ServiceInspectWithRaw returns the service information and the raw data.
+func (cli *Client) ServiceInspectWithRaw(ctx context.Context, serviceID string, opts types.ServiceInspectOptions) (swarm.Service, []byte, error) {
+	if serviceID == "" {
+		return swarm.Service{}, nil, objectNotFoundError{object: "service", id: serviceID}
+	}
+	query := url.Values{}
+	query.Set("insertDefaults", fmt.Sprintf("%v", opts.InsertDefaults))
+	serverResp, err := cli.get(ctx, "/services/"+serviceID, query, nil)
+	defer ensureReaderClosed(serverResp)
+	if err != nil {
+		return swarm.Service{}, nil, wrapResponseError(err, serverResp, "service", serviceID)
+	}
+
+	body, err := io.ReadAll(serverResp.body)
+	if err != nil {
+		return swarm.Service{}, nil, err
+	}
+
+	var response swarm.Service
+	rdr := bytes.NewReader(body)
+	err = json.NewDecoder(rdr).Decode(&response)
+	return response, body, err
+}
diff --git a/vendor/github.com/docker/docker/client/service_list.go b/vendor/github.com/docker/docker/client/service_list.go
new file mode 100644
index 0000000000000..f97ec75a5cb76
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/service_list.go
@@ -0,0 +1,39 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"encoding/json"
+	"net/url"
+
+	"github.com/docker/docker/api/types"
+	"github.com/docker/docker/api/types/filters"
+	"github.com/docker/docker/api/types/swarm"
+)
+
+// ServiceList returns the list of services.
+func (cli *Client) ServiceList(ctx context.Context, options types.ServiceListOptions) ([]swarm.Service, error) {
+	query := url.Values{}
+
+	if options.Filters.Len() > 0 {
+		filterJSON, err := filters.ToJSON(options.Filters)
+		if err != nil {
+			return nil, err
+		}
+
+		query.Set("filters", filterJSON)
+	}
+
+	if options.Status {
+		query.Set("status", "true")
+	}
+
+	resp, err := cli.get(ctx, "/services", query, nil)
+	defer ensureReaderClosed(resp)
+	if err != nil {
+		return nil, err
+	}
+
+	var services []swarm.Service
+	err = json.NewDecoder(resp.body).Decode(&services)
+	return services, err
+}
diff --git a/vendor/github.com/docker/docker/client/service_logs.go b/vendor/github.com/docker/docker/client/service_logs.go
new file mode 100644
index 0000000000000..906fd4059e6ab
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/service_logs.go
@@ -0,0 +1,52 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"io"
+	"net/url"
+	"time"
+
+	"github.com/docker/docker/api/types"
+	timetypes "github.com/docker/docker/api/types/time"
+	"github.com/pkg/errors"
+)
+
+// ServiceLogs returns the logs generated by a service in an io.ReadCloser.
+// It's up to the caller to close the stream.
+func (cli *Client) ServiceLogs(ctx context.Context, serviceID string, options types.ContainerLogsOptions) (io.ReadCloser, error) {
+	query := url.Values{}
+	if options.ShowStdout {
+		query.Set("stdout", "1")
+	}
+
+	if options.ShowStderr {
+		query.Set("stderr", "1")
+	}
+
+	if options.Since != "" {
+		ts, err := timetypes.GetTimestamp(options.Since, time.Now())
+		if err != nil {
+			return nil, errors.Wrap(err, `invalid value for "since"`)
+		}
+		query.Set("since", ts)
+	}
+
+	if options.Timestamps {
+		query.Set("timestamps", "1")
+	}
+
+	if options.Details {
+		query.Set("details", "1")
+	}
+
+	if options.Follow {
+		query.Set("follow", "1")
+	}
+	query.Set("tail", options.Tail)
+
+	resp, err := cli.get(ctx, "/services/"+serviceID+"/logs", query, nil)
+	if err != nil {
+		return nil, err
+	}
+	return resp.body, nil
+}
diff --git a/vendor/github.com/docker/docker/client/service_remove.go b/vendor/github.com/docker/docker/client/service_remove.go
new file mode 100644
index 0000000000000..953a2adf5aec4
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/service_remove.go
@@ -0,0 +1,10 @@
+package client // import "github.com/docker/docker/client"
+
+import "context"
+
+// ServiceRemove kills and removes a service.
+func (cli *Client) ServiceRemove(ctx context.Context, serviceID string) error {
+	resp, err := cli.delete(ctx, "/services/"+serviceID, nil, nil)
+	defer ensureReaderClosed(resp)
+	return wrapResponseError(err, resp, "service", serviceID)
+}
diff --git a/vendor/github.com/docker/docker/client/service_update.go b/vendor/github.com/docker/docker/client/service_update.go
new file mode 100644
index 0000000000000..c63895f74f252
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/service_update.go
@@ -0,0 +1,75 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"encoding/json"
+	"net/url"
+	"strconv"
+
+	"github.com/docker/docker/api/types"
+	"github.com/docker/docker/api/types/swarm"
+)
+
+// ServiceUpdate updates a Service. The version number is required to avoid conflicting writes.
+// It should be the value as set *before* the update. You can find this value in the Meta field
+// of swarm.Service, which can be found using ServiceInspectWithRaw.
+func (cli *Client) ServiceUpdate(ctx context.Context, serviceID string, version swarm.Version, service swarm.ServiceSpec, options types.ServiceUpdateOptions) (types.ServiceUpdateResponse, error) {
+	var (
+		query    = url.Values{}
+		response = types.ServiceUpdateResponse{}
+	)
+
+	headers := map[string][]string{
+		"version": {cli.version},
+	}
+
+	if options.EncodedRegistryAuth != "" {
+		headers["X-Registry-Auth"] = []string{options.EncodedRegistryAuth}
+	}
+
+	if options.RegistryAuthFrom != "" {
+		query.Set("registryAuthFrom", options.RegistryAuthFrom)
+	}
+
+	if options.Rollback != "" {
+		query.Set("rollback", options.Rollback)
+	}
+
+	query.Set("version", strconv.FormatUint(version.Index, 10))
+
+	if err := validateServiceSpec(service); err != nil {
+		return response, err
+	}
+
+	// ensure that the image is tagged
+	var resolveWarning string
+	switch {
+	case service.TaskTemplate.ContainerSpec != nil:
+		if taggedImg := imageWithTagString(service.TaskTemplate.ContainerSpec.Image); taggedImg != "" {
+			service.TaskTemplate.ContainerSpec.Image = taggedImg
+		}
+		if options.QueryRegistry {
+			resolveWarning = resolveContainerSpecImage(ctx, cli, &service.TaskTemplate, options.EncodedRegistryAuth)
+		}
+	case service.TaskTemplate.PluginSpec != nil:
+		if taggedImg := imageWithTagString(service.TaskTemplate.PluginSpec.Remote); taggedImg != "" {
+			service.TaskTemplate.PluginSpec.Remote = taggedImg
+		}
+		if options.QueryRegistry {
+			resolveWarning = resolvePluginSpecRemote(ctx, cli, &service.TaskTemplate, options.EncodedRegistryAuth)
+		}
+	}
+
+	resp, err := cli.post(ctx, "/services/"+serviceID+"/update", query, service, headers)
+	defer ensureReaderClosed(resp)
+	if err != nil {
+		return response, err
+	}
+
+	err = json.NewDecoder(resp.body).Decode(&response)
+	if resolveWarning != "" {
+		response.Warnings = append(response.Warnings, resolveWarning)
+	}
+
+	return response, err
+}
diff --git a/vendor/github.com/docker/docker/client/swarm_get_unlock_key.go b/vendor/github.com/docker/docker/client/swarm_get_unlock_key.go
new file mode 100644
index 0000000000000..19f59dd582a90
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/swarm_get_unlock_key.go
@@ -0,0 +1,21 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"encoding/json"
+
+	"github.com/docker/docker/api/types"
+)
+
+// SwarmGetUnlockKey retrieves the swarm's unlock key.
+func (cli *Client) SwarmGetUnlockKey(ctx context.Context) (types.SwarmUnlockKeyResponse, error) {
+	serverResp, err := cli.get(ctx, "/swarm/unlockkey", nil, nil)
+	defer ensureReaderClosed(serverResp)
+	if err != nil {
+		return types.SwarmUnlockKeyResponse{}, err
+	}
+
+	var response types.SwarmUnlockKeyResponse
+	err = json.NewDecoder(serverResp.body).Decode(&response)
+	return response, err
+}
diff --git a/vendor/github.com/docker/docker/client/swarm_init.go b/vendor/github.com/docker/docker/client/swarm_init.go
new file mode 100644
index 0000000000000..da3c1637ef049
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/swarm_init.go
@@ -0,0 +1,21 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"encoding/json"
+
+	"github.com/docker/docker/api/types/swarm"
+)
+
+// SwarmInit initializes the swarm.
+func (cli *Client) SwarmInit(ctx context.Context, req swarm.InitRequest) (string, error) {
+	serverResp, err := cli.post(ctx, "/swarm/init", nil, req, nil)
+	defer ensureReaderClosed(serverResp)
+	if err != nil {
+		return "", err
+	}
+
+	var response string
+	err = json.NewDecoder(serverResp.body).Decode(&response)
+	return response, err
+}
diff --git a/vendor/github.com/docker/docker/client/swarm_inspect.go b/vendor/github.com/docker/docker/client/swarm_inspect.go
new file mode 100644
index 0000000000000..b52b67a8849bc
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/swarm_inspect.go
@@ -0,0 +1,21 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"encoding/json"
+
+	"github.com/docker/docker/api/types/swarm"
+)
+
+// SwarmInspect inspects the swarm.
+func (cli *Client) SwarmInspect(ctx context.Context) (swarm.Swarm, error) {
+	serverResp, err := cli.get(ctx, "/swarm", nil, nil)
+	defer ensureReaderClosed(serverResp)
+	if err != nil {
+		return swarm.Swarm{}, err
+	}
+
+	var response swarm.Swarm
+	err = json.NewDecoder(serverResp.body).Decode(&response)
+	return response, err
+}
diff --git a/vendor/github.com/docker/docker/client/swarm_join.go b/vendor/github.com/docker/docker/client/swarm_join.go
new file mode 100644
index 0000000000000..a1cf0455d2b98
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/swarm_join.go
@@ -0,0 +1,14 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+
+	"github.com/docker/docker/api/types/swarm"
+)
+
+// SwarmJoin joins the swarm.
+func (cli *Client) SwarmJoin(ctx context.Context, req swarm.JoinRequest) error {
+	resp, err := cli.post(ctx, "/swarm/join", nil, req, nil)
+	ensureReaderClosed(resp)
+	return err
+}
diff --git a/vendor/github.com/docker/docker/client/swarm_leave.go b/vendor/github.com/docker/docker/client/swarm_leave.go
new file mode 100644
index 0000000000000..90ca84b363baf
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/swarm_leave.go
@@ -0,0 +1,17 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"net/url"
+)
+
+// SwarmLeave leaves the swarm.
+func (cli *Client) SwarmLeave(ctx context.Context, force bool) error {
+	query := url.Values{}
+	if force {
+		query.Set("force", "1")
+	}
+	resp, err := cli.post(ctx, "/swarm/leave", query, nil, nil)
+	ensureReaderClosed(resp)
+	return err
+}
diff --git a/vendor/github.com/docker/docker/client/swarm_unlock.go b/vendor/github.com/docker/docker/client/swarm_unlock.go
new file mode 100644
index 0000000000000..d2412f7d441dc
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/swarm_unlock.go
@@ -0,0 +1,14 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+
+	"github.com/docker/docker/api/types/swarm"
+)
+
+// SwarmUnlock unlocks a locked swarm.
+func (cli *Client) SwarmUnlock(ctx context.Context, req swarm.UnlockRequest) error {
+	serverResp, err := cli.post(ctx, "/swarm/unlock", nil, req, nil)
+	ensureReaderClosed(serverResp)
+	return err
+}
diff --git a/vendor/github.com/docker/docker/client/swarm_update.go b/vendor/github.com/docker/docker/client/swarm_update.go
new file mode 100644
index 0000000000000..56a5bea761e6d
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/swarm_update.go
@@ -0,0 +1,22 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"fmt"
+	"net/url"
+	"strconv"
+
+	"github.com/docker/docker/api/types/swarm"
+)
+
+// SwarmUpdate updates the swarm.
+func (cli *Client) SwarmUpdate(ctx context.Context, version swarm.Version, swarm swarm.Spec, flags swarm.UpdateFlags) error {
+	query := url.Values{}
+	query.Set("version", strconv.FormatUint(version.Index, 10))
+	query.Set("rotateWorkerToken", fmt.Sprintf("%v", flags.RotateWorkerToken))
+	query.Set("rotateManagerToken", fmt.Sprintf("%v", flags.RotateManagerToken))
+	query.Set("rotateManagerUnlockKey", fmt.Sprintf("%v", flags.RotateManagerUnlockKey))
+	resp, err := cli.post(ctx, "/swarm/update", query, swarm, nil)
+	ensureReaderClosed(resp)
+	return err
+}
diff --git a/vendor/github.com/docker/docker/client/task_inspect.go b/vendor/github.com/docker/docker/client/task_inspect.go
new file mode 100644
index 0000000000000..fb0949da5be9e
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/task_inspect.go
@@ -0,0 +1,32 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"bytes"
+	"context"
+	"encoding/json"
+	"io"
+
+	"github.com/docker/docker/api/types/swarm"
+)
+
+// TaskInspectWithRaw returns the task information and its raw representation.
+func (cli *Client) TaskInspectWithRaw(ctx context.Context, taskID string) (swarm.Task, []byte, error) {
+	if taskID == "" {
+		return swarm.Task{}, nil, objectNotFoundError{object: "task", id: taskID}
+	}
+	serverResp, err := cli.get(ctx, "/tasks/"+taskID, nil, nil)
+	defer ensureReaderClosed(serverResp)
+	if err != nil {
+		return swarm.Task{}, nil, wrapResponseError(err, serverResp, "task", taskID)
+	}
+
+	body, err := io.ReadAll(serverResp.body)
+	if err != nil {
+		return swarm.Task{}, nil, err
+	}
+
+	var response swarm.Task
+	rdr := bytes.NewReader(body)
+	err = json.NewDecoder(rdr).Decode(&response)
+	return response, body, err
+}
diff --git a/vendor/github.com/docker/docker/client/task_list.go b/vendor/github.com/docker/docker/client/task_list.go
new file mode 100644
index 0000000000000..4869b44493b1e
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/task_list.go
@@ -0,0 +1,35 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"encoding/json"
+	"net/url"
+
+	"github.com/docker/docker/api/types"
+	"github.com/docker/docker/api/types/filters"
+	"github.com/docker/docker/api/types/swarm"
+)
+
+// TaskList returns the list of tasks.
+func (cli *Client) TaskList(ctx context.Context, options types.TaskListOptions) ([]swarm.Task, error) {
+	query := url.Values{}
+
+	if options.Filters.Len() > 0 {
+		filterJSON, err := filters.ToJSON(options.Filters)
+		if err != nil {
+			return nil, err
+		}
+
+		query.Set("filters", filterJSON)
+	}
+
+	resp, err := cli.get(ctx, "/tasks", query, nil)
+	defer ensureReaderClosed(resp)
+	if err != nil {
+		return nil, err
+	}
+
+	var tasks []swarm.Task
+	err = json.NewDecoder(resp.body).Decode(&tasks)
+	return tasks, err
+}
diff --git a/vendor/github.com/docker/docker/client/task_logs.go b/vendor/github.com/docker/docker/client/task_logs.go
new file mode 100644
index 0000000000000..6222fab577d17
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/task_logs.go
@@ -0,0 +1,51 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"io"
+	"net/url"
+	"time"
+
+	"github.com/docker/docker/api/types"
+	timetypes "github.com/docker/docker/api/types/time"
+)
+
+// TaskLogs returns the logs generated by a task in an io.ReadCloser.
+// It's up to the caller to close the stream.
+func (cli *Client) TaskLogs(ctx context.Context, taskID string, options types.ContainerLogsOptions) (io.ReadCloser, error) {
+	query := url.Values{}
+	if options.ShowStdout {
+		query.Set("stdout", "1")
+	}
+
+	if options.ShowStderr {
+		query.Set("stderr", "1")
+	}
+
+	if options.Since != "" {
+		ts, err := timetypes.GetTimestamp(options.Since, time.Now())
+		if err != nil {
+			return nil, err
+		}
+		query.Set("since", ts)
+	}
+
+	if options.Timestamps {
+		query.Set("timestamps", "1")
+	}
+
+	if options.Details {
+		query.Set("details", "1")
+	}
+
+	if options.Follow {
+		query.Set("follow", "1")
+	}
+	query.Set("tail", options.Tail)
+
+	resp, err := cli.get(ctx, "/tasks/"+taskID+"/logs", query, nil)
+	if err != nil {
+		return nil, err
+	}
+	return resp.body, nil
+}
diff --git a/vendor/github.com/docker/docker/client/transport.go b/vendor/github.com/docker/docker/client/transport.go
new file mode 100644
index 0000000000000..5541344366b61
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/transport.go
@@ -0,0 +1,17 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"crypto/tls"
+	"net/http"
+)
+
+// resolveTLSConfig attempts to resolve the TLS configuration from the
+// RoundTripper.
+func resolveTLSConfig(transport http.RoundTripper) *tls.Config {
+	switch tr := transport.(type) {
+	case *http.Transport:
+		return tr.TLSClientConfig
+	default:
+		return nil
+	}
+}
diff --git a/vendor/github.com/docker/docker/client/utils.go b/vendor/github.com/docker/docker/client/utils.go
new file mode 100644
index 0000000000000..7f3ff44eb80b8
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/utils.go
@@ -0,0 +1,34 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"net/url"
+	"regexp"
+
+	"github.com/docker/docker/api/types/filters"
+)
+
+var headerRegexp = regexp.MustCompile(`\ADocker/.+\s\((.+)\)\z`)
+
+// getDockerOS returns the operating system based on the server header from the daemon.
+func getDockerOS(serverHeader string) string {
+	var osType string
+	matches := headerRegexp.FindStringSubmatch(serverHeader)
+	if len(matches) > 0 {
+		osType = matches[1]
+	}
+	return osType
+}
+
+// getFiltersQuery returns a url query with "filters" query term, based on the
+// filters provided.
+func getFiltersQuery(f filters.Args) (url.Values, error) {
+	query := url.Values{}
+	if f.Len() > 0 {
+		filterJSON, err := filters.ToJSON(f)
+		if err != nil {
+			return query, err
+		}
+		query.Set("filters", filterJSON)
+	}
+	return query, nil
+}
diff --git a/vendor/github.com/docker/docker/client/version.go b/vendor/github.com/docker/docker/client/version.go
new file mode 100644
index 0000000000000..8f17ff4e87af1
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/version.go
@@ -0,0 +1,21 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"encoding/json"
+
+	"github.com/docker/docker/api/types"
+)
+
+// ServerVersion returns version information about the docker client and server host.
+func (cli *Client) ServerVersion(ctx context.Context) (types.Version, error) {
+	resp, err := cli.get(ctx, "/version", nil, nil)
+	defer ensureReaderClosed(resp)
+	if err != nil {
+		return types.Version{}, err
+	}
+
+	var server types.Version
+	err = json.NewDecoder(resp.body).Decode(&server)
+	return server, err
+}
diff --git a/vendor/github.com/docker/docker/client/volume_create.go b/vendor/github.com/docker/docker/client/volume_create.go
new file mode 100644
index 0000000000000..92761b3c639ea
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/volume_create.go
@@ -0,0 +1,21 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"encoding/json"
+
+	"github.com/docker/docker/api/types"
+	volumetypes "github.com/docker/docker/api/types/volume"
+)
+
+// VolumeCreate creates a volume in the docker host.
+func (cli *Client) VolumeCreate(ctx context.Context, options volumetypes.VolumeCreateBody) (types.Volume, error) {
+	var volume types.Volume
+	resp, err := cli.post(ctx, "/volumes/create", nil, options, nil)
+	defer ensureReaderClosed(resp)
+	if err != nil {
+		return volume, err
+	}
+	err = json.NewDecoder(resp.body).Decode(&volume)
+	return volume, err
+}
diff --git a/vendor/github.com/docker/docker/client/volume_inspect.go b/vendor/github.com/docker/docker/client/volume_inspect.go
new file mode 100644
index 0000000000000..5c5b3f905c54c
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/volume_inspect.go
@@ -0,0 +1,38 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"bytes"
+	"context"
+	"encoding/json"
+	"io"
+
+	"github.com/docker/docker/api/types"
+)
+
+// VolumeInspect returns the information about a specific volume in the docker host.
+func (cli *Client) VolumeInspect(ctx context.Context, volumeID string) (types.Volume, error) {
+	volume, _, err := cli.VolumeInspectWithRaw(ctx, volumeID)
+	return volume, err
+}
+
+// VolumeInspectWithRaw returns the information about a specific volume in the docker host and its raw representation
+func (cli *Client) VolumeInspectWithRaw(ctx context.Context, volumeID string) (types.Volume, []byte, error) {
+	if volumeID == "" {
+		return types.Volume{}, nil, objectNotFoundError{object: "volume", id: volumeID}
+	}
+
+	var volume types.Volume
+	resp, err := cli.get(ctx, "/volumes/"+volumeID, nil, nil)
+	defer ensureReaderClosed(resp)
+	if err != nil {
+		return volume, nil, wrapResponseError(err, resp, "volume", volumeID)
+	}
+
+	body, err := io.ReadAll(resp.body)
+	if err != nil {
+		return volume, nil, err
+	}
+	rdr := bytes.NewReader(body)
+	err = json.NewDecoder(rdr).Decode(&volume)
+	return volume, body, err
+}
diff --git a/vendor/github.com/docker/docker/client/volume_list.go b/vendor/github.com/docker/docker/client/volume_list.go
new file mode 100644
index 0000000000000..942498dde2c74
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/volume_list.go
@@ -0,0 +1,33 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"encoding/json"
+	"net/url"
+
+	"github.com/docker/docker/api/types/filters"
+	volumetypes "github.com/docker/docker/api/types/volume"
+)
+
+// VolumeList returns the volumes configured in the docker host.
+func (cli *Client) VolumeList(ctx context.Context, filter filters.Args) (volumetypes.VolumeListOKBody, error) {
+	var volumes volumetypes.VolumeListOKBody
+	query := url.Values{}
+
+	if filter.Len() > 0 {
+		//nolint:staticcheck // ignore SA1019 for old code
+		filterJSON, err := filters.ToParamWithVersion(cli.version, filter)
+		if err != nil {
+			return volumes, err
+		}
+		query.Set("filters", filterJSON)
+	}
+	resp, err := cli.get(ctx, "/volumes", query, nil)
+	defer ensureReaderClosed(resp)
+	if err != nil {
+		return volumes, err
+	}
+
+	err = json.NewDecoder(resp.body).Decode(&volumes)
+	return volumes, err
+}
diff --git a/vendor/github.com/docker/docker/client/volume_prune.go b/vendor/github.com/docker/docker/client/volume_prune.go
new file mode 100644
index 0000000000000..6e324708f2b13
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/volume_prune.go
@@ -0,0 +1,36 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"encoding/json"
+	"fmt"
+
+	"github.com/docker/docker/api/types"
+	"github.com/docker/docker/api/types/filters"
+)
+
+// VolumesPrune requests the daemon to delete unused data
+func (cli *Client) VolumesPrune(ctx context.Context, pruneFilters filters.Args) (types.VolumesPruneReport, error) {
+	var report types.VolumesPruneReport
+
+	if err := cli.NewVersionError("1.25", "volume prune"); err != nil {
+		return report, err
+	}
+
+	query, err := getFiltersQuery(pruneFilters)
+	if err != nil {
+		return report, err
+	}
+
+	serverResp, err := cli.post(ctx, "/volumes/prune", query, nil, nil)
+	defer ensureReaderClosed(serverResp)
+	if err != nil {
+		return report, err
+	}
+
+	if err := json.NewDecoder(serverResp.body).Decode(&report); err != nil {
+		return report, fmt.Errorf("Error retrieving volume prune report: %v", err)
+	}
+
+	return report, nil
+}
diff --git a/vendor/github.com/docker/docker/client/volume_remove.go b/vendor/github.com/docker/docker/client/volume_remove.go
new file mode 100644
index 0000000000000..79decdafab88d
--- /dev/null
+++ b/vendor/github.com/docker/docker/client/volume_remove.go
@@ -0,0 +1,21 @@
+package client // import "github.com/docker/docker/client"
+
+import (
+	"context"
+	"net/url"
+
+	"github.com/docker/docker/api/types/versions"
+)
+
+// VolumeRemove removes a volume from the docker host.
+func (cli *Client) VolumeRemove(ctx context.Context, volumeID string, force bool) error {
+	query := url.Values{}
+	if versions.GreaterThanOrEqualTo(cli.version, "1.25") {
+		if force {
+			query.Set("force", "1")
+		}
+	}
+	resp, err := cli.delete(ctx, "/volumes/"+volumeID, query, nil)
+	defer ensureReaderClosed(resp)
+	return wrapResponseError(err, resp, "volume", volumeID)
+}
diff --git a/vendor/github.com/docker/docker/errdefs/defs.go b/vendor/github.com/docker/docker/errdefs/defs.go
new file mode 100644
index 0000000000000..61e7456b4ebfe
--- /dev/null
+++ b/vendor/github.com/docker/docker/errdefs/defs.go
@@ -0,0 +1,69 @@
+package errdefs // import "github.com/docker/docker/errdefs"
+
+// ErrNotFound signals that the requested object doesn't exist
+type ErrNotFound interface {
+	NotFound()
+}
+
+// ErrInvalidParameter signals that the user input is invalid
+type ErrInvalidParameter interface {
+	InvalidParameter()
+}
+
+// ErrConflict signals that some internal state conflicts with the requested action and can't be performed.
+// A change in state should be able to clear this error.
+type ErrConflict interface {
+	Conflict()
+}
+
+// ErrUnauthorized is used to signify that the user is not authorized to perform a specific action
+type ErrUnauthorized interface {
+	Unauthorized()
+}
+
+// ErrUnavailable signals that the requested action/subsystem is not available.
+type ErrUnavailable interface {
+	Unavailable()
+}
+
+// ErrForbidden signals that the requested action cannot be performed under any circumstances.
+// When a ErrForbidden is returned, the caller should never retry the action.
+type ErrForbidden interface {
+	Forbidden()
+}
+
+// ErrSystem signals that some internal error occurred.
+// An example of this would be a failed mount request.
+type ErrSystem interface {
+	System()
+}
+
+// ErrNotModified signals that an action can't be performed because it's already in the desired state
+type ErrNotModified interface {
+	NotModified()
+}
+
+// ErrNotImplemented signals that the requested action/feature is not implemented on the system as configured.
+type ErrNotImplemented interface {
+	NotImplemented()
+}
+
+// ErrUnknown signals that the kind of error that occurred is not known.
+type ErrUnknown interface {
+	Unknown()
+}
+
+// ErrCancelled signals that the action was cancelled.
+type ErrCancelled interface {
+	Cancelled()
+}
+
+// ErrDeadline signals that the deadline was reached before the action completed.
+type ErrDeadline interface {
+	DeadlineExceeded()
+}
+
+// ErrDataLoss indicates that data was lost or there is data corruption.
+type ErrDataLoss interface {
+	DataLoss()
+}
diff --git a/vendor/github.com/docker/docker/errdefs/doc.go b/vendor/github.com/docker/docker/errdefs/doc.go
new file mode 100644
index 0000000000000..c211f174fc117
--- /dev/null
+++ b/vendor/github.com/docker/docker/errdefs/doc.go
@@ -0,0 +1,8 @@
+// Package errdefs defines a set of error interfaces that packages should use for communicating classes of errors.
+// Errors that cross the package boundary should implement one (and only one) of these interfaces.
+//
+// Packages should not reference these interfaces directly, only implement them.
+// To check if a particular error implements one of these interfaces, there are helper
+// functions provided (e.g. `Is<SomeError>`) which can be used rather than asserting the interfaces directly.
+// If you must assert on these interfaces, be sure to check the causal chain (`err.Cause()`).
+package errdefs // import "github.com/docker/docker/errdefs"
diff --git a/vendor/github.com/docker/docker/errdefs/helpers.go b/vendor/github.com/docker/docker/errdefs/helpers.go
new file mode 100644
index 0000000000000..fe06fb6f703b1
--- /dev/null
+++ b/vendor/github.com/docker/docker/errdefs/helpers.go
@@ -0,0 +1,279 @@
+package errdefs // import "github.com/docker/docker/errdefs"
+
+import "context"
+
+type errNotFound struct{ error }
+
+func (errNotFound) NotFound() {}
+
+func (e errNotFound) Cause() error {
+	return e.error
+}
+
+func (e errNotFound) Unwrap() error {
+	return e.error
+}
+
+// NotFound is a helper to create an error of the class with the same name from any error type
+func NotFound(err error) error {
+	if err == nil || IsNotFound(err) {
+		return err
+	}
+	return errNotFound{err}
+}
+
+type errInvalidParameter struct{ error }
+
+func (errInvalidParameter) InvalidParameter() {}
+
+func (e errInvalidParameter) Cause() error {
+	return e.error
+}
+
+func (e errInvalidParameter) Unwrap() error {
+	return e.error
+}
+
+// InvalidParameter is a helper to create an error of the class with the same name from any error type
+func InvalidParameter(err error) error {
+	if err == nil || IsInvalidParameter(err) {
+		return err
+	}
+	return errInvalidParameter{err}
+}
+
+type errConflict struct{ error }
+
+func (errConflict) Conflict() {}
+
+func (e errConflict) Cause() error {
+	return e.error
+}
+
+func (e errConflict) Unwrap() error {
+	return e.error
+}
+
+// Conflict is a helper to create an error of the class with the same name from any error type
+func Conflict(err error) error {
+	if err == nil || IsConflict(err) {
+		return err
+	}
+	return errConflict{err}
+}
+
+type errUnauthorized struct{ error }
+
+func (errUnauthorized) Unauthorized() {}
+
+func (e errUnauthorized) Cause() error {
+	return e.error
+}
+
+func (e errUnauthorized) Unwrap() error {
+	return e.error
+}
+
+// Unauthorized is a helper to create an error of the class with the same name from any error type
+func Unauthorized(err error) error {
+	if err == nil || IsUnauthorized(err) {
+		return err
+	}
+	return errUnauthorized{err}
+}
+
+type errUnavailable struct{ error }
+
+func (errUnavailable) Unavailable() {}
+
+func (e errUnavailable) Cause() error {
+	return e.error
+}
+
+func (e errUnavailable) Unwrap() error {
+	return e.error
+}
+
+// Unavailable is a helper to create an error of the class with the same name from any error type
+func Unavailable(err error) error {
+	if err == nil || IsUnavailable(err) {
+		return err
+	}
+	return errUnavailable{err}
+}
+
+type errForbidden struct{ error }
+
+func (errForbidden) Forbidden() {}
+
+func (e errForbidden) Cause() error {
+	return e.error
+}
+
+func (e errForbidden) Unwrap() error {
+	return e.error
+}
+
+// Forbidden is a helper to create an error of the class with the same name from any error type
+func Forbidden(err error) error {
+	if err == nil || IsForbidden(err) {
+		return err
+	}
+	return errForbidden{err}
+}
+
+type errSystem struct{ error }
+
+func (errSystem) System() {}
+
+func (e errSystem) Cause() error {
+	return e.error
+}
+
+func (e errSystem) Unwrap() error {
+	return e.error
+}
+
+// System is a helper to create an error of the class with the same name from any error type
+func System(err error) error {
+	if err == nil || IsSystem(err) {
+		return err
+	}
+	return errSystem{err}
+}
+
+type errNotModified struct{ error }
+
+func (errNotModified) NotModified() {}
+
+func (e errNotModified) Cause() error {
+	return e.error
+}
+
+func (e errNotModified) Unwrap() error {
+	return e.error
+}
+
+// NotModified is a helper to create an error of the class with the same name from any error type
+func NotModified(err error) error {
+	if err == nil || IsNotModified(err) {
+		return err
+	}
+	return errNotModified{err}
+}
+
+type errNotImplemented struct{ error }
+
+func (errNotImplemented) NotImplemented() {}
+
+func (e errNotImplemented) Cause() error {
+	return e.error
+}
+
+func (e errNotImplemented) Unwrap() error {
+	return e.error
+}
+
+// NotImplemented is a helper to create an error of the class with the same name from any error type
+func NotImplemented(err error) error {
+	if err == nil || IsNotImplemented(err) {
+		return err
+	}
+	return errNotImplemented{err}
+}
+
+type errUnknown struct{ error }
+
+func (errUnknown) Unknown() {}
+
+func (e errUnknown) Cause() error {
+	return e.error
+}
+
+func (e errUnknown) Unwrap() error {
+	return e.error
+}
+
+// Unknown is a helper to create an error of the class with the same name from any error type
+func Unknown(err error) error {
+	if err == nil || IsUnknown(err) {
+		return err
+	}
+	return errUnknown{err}
+}
+
+type errCancelled struct{ error }
+
+func (errCancelled) Cancelled() {}
+
+func (e errCancelled) Cause() error {
+	return e.error
+}
+
+func (e errCancelled) Unwrap() error {
+	return e.error
+}
+
+// Cancelled is a helper to create an error of the class with the same name from any error type
+func Cancelled(err error) error {
+	if err == nil || IsCancelled(err) {
+		return err
+	}
+	return errCancelled{err}
+}
+
+type errDeadline struct{ error }
+
+func (errDeadline) DeadlineExceeded() {}
+
+func (e errDeadline) Cause() error {
+	return e.error
+}
+
+func (e errDeadline) Unwrap() error {
+	return e.error
+}
+
+// Deadline is a helper to create an error of the class with the same name from any error type
+func Deadline(err error) error {
+	if err == nil || IsDeadline(err) {
+		return err
+	}
+	return errDeadline{err}
+}
+
+type errDataLoss struct{ error }
+
+func (errDataLoss) DataLoss() {}
+
+func (e errDataLoss) Cause() error {
+	return e.error
+}
+
+func (e errDataLoss) Unwrap() error {
+	return e.error
+}
+
+// DataLoss is a helper to create an error of the class with the same name from any error type
+func DataLoss(err error) error {
+	if err == nil || IsDataLoss(err) {
+		return err
+	}
+	return errDataLoss{err}
+}
+
+// FromContext returns the error class from the passed in context
+func FromContext(ctx context.Context) error {
+	e := ctx.Err()
+	if e == nil {
+		return nil
+	}
+
+	if e == context.Canceled {
+		return Cancelled(e)
+	}
+	if e == context.DeadlineExceeded {
+		return Deadline(e)
+	}
+	return Unknown(e)
+}
diff --git a/vendor/github.com/docker/docker/errdefs/http_helpers.go b/vendor/github.com/docker/docker/errdefs/http_helpers.go
new file mode 100644
index 0000000000000..5afe486779db5
--- /dev/null
+++ b/vendor/github.com/docker/docker/errdefs/http_helpers.go
@@ -0,0 +1,53 @@
+package errdefs // import "github.com/docker/docker/errdefs"
+
+import (
+	"net/http"
+
+	"github.com/sirupsen/logrus"
+)
+
+// FromStatusCode creates an errdef error, based on the provided HTTP status-code
+func FromStatusCode(err error, statusCode int) error {
+	if err == nil {
+		return err
+	}
+	switch statusCode {
+	case http.StatusNotFound:
+		err = NotFound(err)
+	case http.StatusBadRequest:
+		err = InvalidParameter(err)
+	case http.StatusConflict:
+		err = Conflict(err)
+	case http.StatusUnauthorized:
+		err = Unauthorized(err)
+	case http.StatusServiceUnavailable:
+		err = Unavailable(err)
+	case http.StatusForbidden:
+		err = Forbidden(err)
+	case http.StatusNotModified:
+		err = NotModified(err)
+	case http.StatusNotImplemented:
+		err = NotImplemented(err)
+	case http.StatusInternalServerError:
+		if !IsSystem(err) && !IsUnknown(err) && !IsDataLoss(err) && !IsDeadline(err) && !IsCancelled(err) {
+			err = System(err)
+		}
+	default:
+		logrus.WithError(err).WithFields(logrus.Fields{
+			"module":      "api",
+			"status_code": statusCode,
+		}).Debug("FIXME: Got a status-code for which error does not match any expected type!!!")
+
+		switch {
+		case statusCode >= 200 && statusCode < 400:
+			// success or redirect; leave the error unclassified
+		case statusCode >= 400 && statusCode < 500:
+			err = InvalidParameter(err)
+		case statusCode >= 500 && statusCode < 600:
+			err = System(err)
+		default:
+			err = Unknown(err)
+		}
+	}
+	return err
+}
diff --git a/vendor/github.com/docker/docker/errdefs/is.go b/vendor/github.com/docker/docker/errdefs/is.go
new file mode 100644
index 0000000000000..3abf07d0c3570
--- /dev/null
+++ b/vendor/github.com/docker/docker/errdefs/is.go
@@ -0,0 +1,107 @@
+package errdefs // import "github.com/docker/docker/errdefs"
+
+type causer interface {
+	Cause() error
+}
+
+func getImplementer(err error) error {
+	switch e := err.(type) {
+	case
+		ErrNotFound,
+		ErrInvalidParameter,
+		ErrConflict,
+		ErrUnauthorized,
+		ErrUnavailable,
+		ErrForbidden,
+		ErrSystem,
+		ErrNotModified,
+		ErrNotImplemented,
+		ErrCancelled,
+		ErrDeadline,
+		ErrDataLoss,
+		ErrUnknown:
+		return err
+	case causer:
+		return getImplementer(e.Cause())
+	default:
+		return err
+	}
+}
+
+// IsNotFound returns if the passed in error is an ErrNotFound
+func IsNotFound(err error) bool {
+	_, ok := getImplementer(err).(ErrNotFound)
+	return ok
+}
+
+// IsInvalidParameter returns if the passed in error is an ErrInvalidParameter
+func IsInvalidParameter(err error) bool {
+	_, ok := getImplementer(err).(ErrInvalidParameter)
+	return ok
+}
+
+// IsConflict returns if the passed in error is an ErrConflict
+func IsConflict(err error) bool {
+	_, ok := getImplementer(err).(ErrConflict)
+	return ok
+}
+
+// IsUnauthorized returns if the passed in error is an ErrUnauthorized
+func IsUnauthorized(err error) bool {
+	_, ok := getImplementer(err).(ErrUnauthorized)
+	return ok
+}
+
+// IsUnavailable reports whether the passed-in error is an ErrUnavailable
+func IsUnavailable(err error) bool {
+	_, ok := getImplementer(err).(ErrUnavailable)
+	return ok
+}
+
+// IsForbidden reports whether the passed-in error is an ErrForbidden
+func IsForbidden(err error) bool {
+	_, ok := getImplementer(err).(ErrForbidden)
+	return ok
+}
+
+// IsSystem reports whether the passed-in error is an ErrSystem
+func IsSystem(err error) bool {
+	_, ok := getImplementer(err).(ErrSystem)
+	return ok
+}
+
+// IsNotModified reports whether the passed-in error is an ErrNotModified
+func IsNotModified(err error) bool {
+	_, ok := getImplementer(err).(ErrNotModified)
+	return ok
+}
+
+// IsNotImplemented reports whether the passed-in error is an ErrNotImplemented
+func IsNotImplemented(err error) bool {
+	_, ok := getImplementer(err).(ErrNotImplemented)
+	return ok
+}
+
+// IsUnknown reports whether the passed-in error is an ErrUnknown
+func IsUnknown(err error) bool {
+	_, ok := getImplementer(err).(ErrUnknown)
+	return ok
+}
+
+// IsCancelled reports whether the passed-in error is an ErrCancelled
+func IsCancelled(err error) bool {
+	_, ok := getImplementer(err).(ErrCancelled)
+	return ok
+}
+
+// IsDeadline reports whether the passed-in error is an ErrDeadline
+func IsDeadline(err error) bool {
+	_, ok := getImplementer(err).(ErrDeadline)
+	return ok
+}
+
+// IsDataLoss reports whether the passed-in error is an ErrDataLoss
+func IsDataLoss(err error) bool {
+	_, ok := getImplementer(err).(ErrDataLoss)
+	return ok
+}
diff --git a/vendor/github.com/docker/go-connections/LICENSE b/vendor/github.com/docker/go-connections/LICENSE
new file mode 100644
index 0000000000000..b55b37bc31620
--- /dev/null
+++ b/vendor/github.com/docker/go-connections/LICENSE
@@ -0,0 +1,191 @@
+
+                                 Apache License
+                           Version 2.0, January 2004
+                        https://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   Copyright 2015 Docker, Inc.
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       https://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
diff --git a/vendor/github.com/docker/go-connections/nat/nat.go b/vendor/github.com/docker/go-connections/nat/nat.go
new file mode 100644
index 0000000000000..bb7e4e336950b
--- /dev/null
+++ b/vendor/github.com/docker/go-connections/nat/nat.go
@@ -0,0 +1,242 @@
+// Package nat is a convenience package for manipulation of strings describing network ports.
+package nat
+
+import (
+	"fmt"
+	"net"
+	"strconv"
+	"strings"
+)
+
+const (
+	// portSpecTemplate is the expected format for port specifications
+	portSpecTemplate = "ip:hostPort:containerPort"
+)
+
+// PortBinding represents a binding between a Host IP address and a Host Port
+type PortBinding struct {
+	// HostIP is the host IP Address
+	HostIP string `json:"HostIp"`
+	// HostPort is the host port number
+	HostPort string
+}
+
+// PortMap is a collection of PortBinding indexed by Port
+type PortMap map[Port][]PortBinding
+
+// PortSet is a collection of structs indexed by Port
+type PortSet map[Port]struct{}
+
+// Port is a string containing port number and protocol in the format "80/tcp"
+type Port string
+
+// NewPort creates a new instance of a Port given a protocol and port number or port range
+func NewPort(proto, port string) (Port, error) {
+	// Check for parsing issues on "port" now so we can avoid having
+	// to check it later on.
+
+	portStartInt, portEndInt, err := ParsePortRangeToInt(port)
+	if err != nil {
+		return "", err
+	}
+
+	if portStartInt == portEndInt {
+		return Port(fmt.Sprintf("%d/%s", portStartInt, proto)), nil
+	}
+	return Port(fmt.Sprintf("%d-%d/%s", portStartInt, portEndInt, proto)), nil
+}
+
+// ParsePort parses the port number string and returns an int
+func ParsePort(rawPort string) (int, error) {
+	if len(rawPort) == 0 {
+		return 0, nil
+	}
+	port, err := strconv.ParseUint(rawPort, 10, 16)
+	if err != nil {
+		return 0, err
+	}
+	return int(port), nil
+}
+
+// ParsePortRangeToInt parses the port range string and returns start/end ints
+func ParsePortRangeToInt(rawPort string) (int, int, error) {
+	if len(rawPort) == 0 {
+		return 0, 0, nil
+	}
+	start, end, err := ParsePortRange(rawPort)
+	if err != nil {
+		return 0, 0, err
+	}
+	return int(start), int(end), nil
+}
+
+// Proto returns the protocol of a Port
+func (p Port) Proto() string {
+	proto, _ := SplitProtoPort(string(p))
+	return proto
+}
+
+// Port returns the port number of a Port
+func (p Port) Port() string {
+	_, port := SplitProtoPort(string(p))
+	return port
+}
+
+// Int returns the port number of a Port as an int
+func (p Port) Int() int {
+	portStr := p.Port()
+	// We don't need to check for an error because we're going to
+	// assume that any error would have been found, and reported, in NewPort()
+	port, _ := ParsePort(portStr)
+	return port
+}
+
+// Range returns the start/end port numbers of a Port range as ints
+func (p Port) Range() (int, int, error) {
+	return ParsePortRangeToInt(p.Port())
+}
+
+// SplitProtoPort splits a port in the format of proto/port
+func SplitProtoPort(rawPort string) (string, string) {
+	parts := strings.Split(rawPort, "/")
+	l := len(parts)
+	if len(rawPort) == 0 || l == 0 || len(parts[0]) == 0 {
+		return "", ""
+	}
+	if l == 1 {
+		return "tcp", rawPort
+	}
+	if len(parts[1]) == 0 {
+		return "tcp", parts[0]
+	}
+	return parts[1], parts[0]
+}
+
+func validateProto(proto string) bool {
+	for _, availableProto := range []string{"tcp", "udp", "sctp"} {
+		if availableProto == proto {
+			return true
+		}
+	}
+	return false
+}
+
+// ParsePortSpecs receives port specs in the format of ip:public:private/proto and
+// parses these into the internal types
+func ParsePortSpecs(ports []string) (map[Port]struct{}, map[Port][]PortBinding, error) {
+	var (
+		exposedPorts = make(map[Port]struct{}, len(ports))
+		bindings     = make(map[Port][]PortBinding)
+	)
+	for _, rawPort := range ports {
+		portMappings, err := ParsePortSpec(rawPort)
+		if err != nil {
+			return nil, nil, err
+		}
+
+		for _, portMapping := range portMappings {
+			port := portMapping.Port
+			if _, exists := exposedPorts[port]; !exists {
+				exposedPorts[port] = struct{}{}
+			}
+			bslice, exists := bindings[port]
+			if !exists {
+				bslice = []PortBinding{}
+			}
+			bindings[port] = append(bslice, portMapping.Binding)
+		}
+	}
+	return exposedPorts, bindings, nil
+}
+
+// PortMapping is a data object mapping a Port to a PortBinding
+type PortMapping struct {
+	Port    Port
+	Binding PortBinding
+}
+
+func splitParts(rawport string) (string, string, string) {
+	parts := strings.Split(rawport, ":")
+	n := len(parts)
+	containerport := parts[n-1]
+
+	switch n {
+	case 1:
+		return "", "", containerport
+	case 2:
+		return "", parts[0], containerport
+	case 3:
+		return parts[0], parts[1], containerport
+	default:
+		return strings.Join(parts[:n-2], ":"), parts[n-2], containerport
+	}
+}
+
+// ParsePortSpec parses a port specification string into a slice of PortMappings
+func ParsePortSpec(rawPort string) ([]PortMapping, error) {
+	var proto string
+	rawIP, hostPort, containerPort := splitParts(rawPort)
+	proto, containerPort = SplitProtoPort(containerPort)
+
+	// Strip [] from IPv6 addresses
+	ip, _, err := net.SplitHostPort(rawIP + ":")
+	if err != nil {
+		return nil, fmt.Errorf("Invalid ip address %v: %s", rawIP, err)
+	}
+	if ip != "" && net.ParseIP(ip) == nil {
+		return nil, fmt.Errorf("Invalid ip address: %s", ip)
+	}
+	if containerPort == "" {
+		return nil, fmt.Errorf("No port specified: %s<empty>", rawPort)
+	}
+
+	startPort, endPort, err := ParsePortRange(containerPort)
+	if err != nil {
+		return nil, fmt.Errorf("Invalid containerPort: %s", containerPort)
+	}
+
+	var startHostPort, endHostPort uint64 = 0, 0
+	if len(hostPort) > 0 {
+		startHostPort, endHostPort, err = ParsePortRange(hostPort)
+		if err != nil {
+			return nil, fmt.Errorf("Invalid hostPort: %s", hostPort)
+		}
+	}
+
+	if hostPort != "" && (endPort-startPort) != (endHostPort-startHostPort) {
+		// Allow host port range iff containerPort is not a range.
+		// In this case, use the host port range as the dynamic
+		// host port range to allocate into.
+		if endPort != startPort {
+			return nil, fmt.Errorf("Invalid ranges specified for container and host Ports: %s and %s", containerPort, hostPort)
+		}
+	}
+
+	if !validateProto(strings.ToLower(proto)) {
+		return nil, fmt.Errorf("Invalid proto: %s", proto)
+	}
+
+	ports := []PortMapping{}
+	for i := uint64(0); i <= (endPort - startPort); i++ {
+		containerPort = strconv.FormatUint(startPort+i, 10)
+		if len(hostPort) > 0 {
+			hostPort = strconv.FormatUint(startHostPort+i, 10)
+		}
+		// Set hostPort to a range only if there is a single container port
+		// and a dynamic host port.
+		if startPort == endPort && startHostPort != endHostPort {
+			hostPort = fmt.Sprintf("%s-%s", hostPort, strconv.FormatUint(endHostPort, 10))
+		}
+		port, err := NewPort(strings.ToLower(proto), containerPort)
+		if err != nil {
+			return nil, err
+		}
+
+		binding := PortBinding{
+			HostIP:   ip,
+			HostPort: hostPort,
+		}
+		ports = append(ports, PortMapping{Port: port, Binding: binding})
+	}
+	return ports, nil
+}
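The trickiest part of `ParsePortSpec` is `splitParts`, whose `default` case exists so that the colons inside an IPv6 address are not mistaken for field separators. A self-contained sketch reproducing that logic (same algorithm as the vendored `splitParts` above, copied here only for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

// splitParts splits a raw "ip:hostPort:containerPort" spec. The container
// port is always the last colon-separated field; when there are more than
// three fields, everything before the last two is rejoined as an IPv6 ip.
func splitParts(rawport string) (ip, hostPort, containerPort string) {
	parts := strings.Split(rawport, ":")
	n := len(parts)
	containerPort = parts[n-1]
	switch n {
	case 1:
		return "", "", containerPort
	case 2:
		return "", parts[0], containerPort
	case 3:
		return parts[0], parts[1], containerPort
	default: // IPv6: rejoin all fields before the last two
		return strings.Join(parts[:n-2], ":"), parts[n-2], containerPort
	}
}

func main() {
	fmt.Println(splitParts("80"))                  // "", "", "80"
	fmt.Println(splitParts("8080:80"))             // "", "8080", "80"
	fmt.Println(splitParts("127.0.0.1:8080:80"))   // "127.0.0.1", "8080", "80"
	fmt.Println(splitParts("2001:db8::1:8080:80")) // "2001:db8::1", "8080", "80"
}
```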
diff --git a/vendor/github.com/docker/go-connections/nat/parse.go b/vendor/github.com/docker/go-connections/nat/parse.go
new file mode 100644
index 0000000000000..892adf8c6673e
--- /dev/null
+++ b/vendor/github.com/docker/go-connections/nat/parse.go
@@ -0,0 +1,57 @@
+package nat
+
+import (
+	"fmt"
+	"strconv"
+	"strings"
+)
+
+// PartParser parses and validates the specified string (data) using the specified template
+// e.g. ip:public:private -> 192.168.0.1:80:8000
+// DEPRECATED: do not use, this function may be removed in a future version
+func PartParser(template, data string) (map[string]string, error) {
+	// ip:public:private
+	var (
+		templateParts = strings.Split(template, ":")
+		parts         = strings.Split(data, ":")
+		out           = make(map[string]string, len(templateParts))
+	)
+	if len(parts) != len(templateParts) {
+		return nil, fmt.Errorf("Invalid format to parse. %s should match template %s", data, template)
+	}
+
+	for i, t := range templateParts {
+		value := ""
+		if len(parts) > i {
+			value = parts[i]
+		}
+		out[t] = value
+	}
+	return out, nil
+}
+
+// ParsePortRange parses and validates the specified string as a port-range (8000-9000)
+func ParsePortRange(ports string) (uint64, uint64, error) {
+	if ports == "" {
+		return 0, 0, fmt.Errorf("Empty string specified for ports.")
+	}
+	if !strings.Contains(ports, "-") {
+		start, err := strconv.ParseUint(ports, 10, 16)
+		end := start
+		return start, end, err
+	}
+
+	parts := strings.Split(ports, "-")
+	start, err := strconv.ParseUint(parts[0], 10, 16)
+	if err != nil {
+		return 0, 0, err
+	}
+	end, err := strconv.ParseUint(parts[1], 10, 16)
+	if err != nil {
+		return 0, 0, err
+	}
+	if end < start {
+		return 0, 0, fmt.Errorf("Invalid range specified for the Port: %s", ports)
+	}
+	return start, end, nil
+}
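To make the range semantics concrete, here is a self-contained sketch mirroring `ParsePortRange` (a local reimplementation, not the vendored function): a bare port yields identical start/end bounds, and the 16-bit width passed to `strconv.ParseUint` is what rejects anything above 65535.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parsePortRange mirrors nat.ParsePortRange: "8000" yields (8000, 8000),
// "8000-9000" yields its bounds, and ParseUint's 16-bit limit rejects
// out-of-range ports.
func parsePortRange(ports string) (uint64, uint64, error) {
	if ports == "" {
		return 0, 0, fmt.Errorf("empty string specified for ports")
	}
	if !strings.Contains(ports, "-") {
		start, err := strconv.ParseUint(ports, 10, 16)
		return start, start, err
	}
	parts := strings.SplitN(ports, "-", 2)
	start, err := strconv.ParseUint(parts[0], 10, 16)
	if err != nil {
		return 0, 0, err
	}
	end, err := strconv.ParseUint(parts[1], 10, 16)
	if err != nil {
		return 0, 0, err
	}
	if end < start {
		return 0, 0, fmt.Errorf("invalid range specified for the port: %s", ports)
	}
	return start, end, nil
}

func main() {
	s, e, _ := parsePortRange("8000-9000")
	fmt.Println(s, e) // 8000 9000
	_, _, err := parsePortRange("70000")
	fmt.Println(err != nil) // true: above the uint16 limit
}
```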
diff --git a/vendor/github.com/docker/go-connections/nat/sort.go b/vendor/github.com/docker/go-connections/nat/sort.go
new file mode 100644
index 0000000000000..ce950171e3154
--- /dev/null
+++ b/vendor/github.com/docker/go-connections/nat/sort.go
@@ -0,0 +1,96 @@
+package nat
+
+import (
+	"sort"
+	"strings"
+)
+
+type portSorter struct {
+	ports []Port
+	by    func(i, j Port) bool
+}
+
+func (s *portSorter) Len() int {
+	return len(s.ports)
+}
+
+func (s *portSorter) Swap(i, j int) {
+	s.ports[i], s.ports[j] = s.ports[j], s.ports[i]
+}
+
+func (s *portSorter) Less(i, j int) bool {
+	ip := s.ports[i]
+	jp := s.ports[j]
+
+	return s.by(ip, jp)
+}
+
+// Sort sorts a list of ports using the provided predicate.
+// The predicate should compare `i` and `j`, returning true if `i` is
+// considered to be less than `j`.
+func Sort(ports []Port, predicate func(i, j Port) bool) {
+	s := &portSorter{ports, predicate}
+	sort.Sort(s)
+}
+
+type portMapEntry struct {
+	port    Port
+	binding PortBinding
+}
+
+type portMapSorter []portMapEntry
+
+func (s portMapSorter) Len() int      { return len(s) }
+func (s portMapSorter) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
+
+// Less sorts entries so that the order is:
+// 1. entry with the larger specified host port
+// 2. entry with the larger container port number
+// 3. tcp over other protocols when the port numbers are equal
+func (s portMapSorter) Less(i, j int) bool {
+	pi, pj := s[i].port, s[j].port
+	hpi, hpj := toInt(s[i].binding.HostPort), toInt(s[j].binding.HostPort)
+	return hpi > hpj || pi.Int() > pj.Int() || (pi.Int() == pj.Int() && strings.ToLower(pi.Proto()) == "tcp")
+}
+
+// SortPortMap sorts the list of ports and their respective mappings. Ports
+// with an explicit HostPort are placed first.
+func SortPortMap(ports []Port, bindings PortMap) {
+	s := portMapSorter{}
+	for _, p := range ports {
+		if binding, ok := bindings[p]; ok {
+			for _, b := range binding {
+				s = append(s, portMapEntry{port: p, binding: b})
+			}
+			bindings[p] = []PortBinding{}
+		} else {
+			s = append(s, portMapEntry{port: p})
+		}
+	}
+
+	sort.Sort(s)
+	var (
+		i  int
+		pm = make(map[Port]struct{})
+	)
+	// reorder ports
+	for _, entry := range s {
+		if _, ok := pm[entry.port]; !ok {
+			ports[i] = entry.port
+			pm[entry.port] = struct{}{}
+			i++
+		}
+		// reorder bindings for this port
+		if _, ok := bindings[entry.port]; ok {
+			bindings[entry.port] = append(bindings[entry.port], entry.binding)
+		}
+	}
+}
+
+func toInt(s string) uint64 {
+	i, _, err := ParsePortRange(s)
+	if err != nil {
+		i = 0
+	}
+	return i
+}
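The ordering used by `portMapSorter.Less` can be demonstrated with a trimmed-down sketch (hypothetical local `entry`/`sortEntries` names). Note that this predicate is not a strict weak ordering in general, so inputs where the host-port and container-port criteria disagree can sort unpredictably; the sample data below is chosen so the criteria agree:

```go
package main

import (
	"fmt"
	"sort"
	"strconv"
	"strings"
)

// entry is a trimmed-down portMapEntry: a "num/proto" port plus the
// numeric host port it is bound to (0 when unspecified).
type entry struct {
	port     string
	hostPort uint64
}

func portInt(p string) int {
	n, _ := strconv.Atoi(strings.SplitN(p, "/", 2)[0])
	return n
}

// sortEntries applies the same ordering as portMapSorter.Less: larger host
// port first, then larger container port, with tcp winning exact ties.
func sortEntries(es []entry) []entry {
	sort.Slice(es, func(i, j int) bool {
		pi, pj := es[i], es[j]
		return pi.hostPort > pj.hostPort ||
			portInt(pi.port) > portInt(pj.port) ||
			(portInt(pi.port) == portInt(pj.port) && strings.HasSuffix(pi.port, "/tcp"))
	})
	return es
}

func main() {
	es := sortEntries([]entry{
		{"53/udp", 0},
		{"80/tcp", 8080},
		{"443/tcp", 9000},
	})
	fmt.Println(es[0].port) // 443/tcp: entries with explicit host ports come first
}
```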
diff --git a/vendor/github.com/docker/go-connections/sockets/README.md b/vendor/github.com/docker/go-connections/sockets/README.md
new file mode 100644
index 0000000000000..e69de29bb2d1d
diff --git a/vendor/github.com/docker/go-connections/sockets/inmem_socket.go b/vendor/github.com/docker/go-connections/sockets/inmem_socket.go
new file mode 100644
index 0000000000000..99846ffddb1a3
--- /dev/null
+++ b/vendor/github.com/docker/go-connections/sockets/inmem_socket.go
@@ -0,0 +1,81 @@
+package sockets
+
+import (
+	"errors"
+	"net"
+	"sync"
+)
+
+var errClosed = errors.New("use of closed network connection")
+
+// InmemSocket implements net.Listener using in-memory only connections.
+type InmemSocket struct {
+	chConn  chan net.Conn
+	chClose chan struct{}
+	addr    string
+	mu      sync.Mutex
+}
+
+// dummyAddr satisfies net.Addr for the in-mem socket.
+// It is stored as a string and returns that string for all calls.
+type dummyAddr string
+
+// NewInmemSocket creates an in-memory only net.Listener
+// The addr argument can be any string, but is used to satisfy the `Addr()` part
+// of the net.Listener interface
+func NewInmemSocket(addr string, bufSize int) *InmemSocket {
+	return &InmemSocket{
+		chConn:  make(chan net.Conn, bufSize),
+		chClose: make(chan struct{}),
+		addr:    addr,
+	}
+}
+
+// Addr returns the socket's addr string to satisfy net.Listener
+func (s *InmemSocket) Addr() net.Addr {
+	return dummyAddr(s.addr)
+}
+
+// Accept implements the Accept method in the Listener interface; it waits for the next call and returns a generic Conn.
+func (s *InmemSocket) Accept() (net.Conn, error) {
+	select {
+	case conn := <-s.chConn:
+		return conn, nil
+	case <-s.chClose:
+		return nil, errClosed
+	}
+}
+
+// Close closes the listener. It will be unavailable for use once closed.
+func (s *InmemSocket) Close() error {
+	s.mu.Lock()
+	defer s.mu.Unlock()
+	select {
+	case <-s.chClose:
+	default:
+		close(s.chClose)
+	}
+	return nil
+}
+
+// Dial is used to establish a connection with the in-mem server
+func (s *InmemSocket) Dial(network, addr string) (net.Conn, error) {
+	srvConn, clientConn := net.Pipe()
+	select {
+	case s.chConn <- srvConn:
+	case <-s.chClose:
+		return nil, errClosed
+	}
+
+	return clientConn, nil
+}
+
+// Network returns the addr string, satisfies net.Addr
+func (a dummyAddr) Network() string {
+	return string(a)
+}
+
+// String returns the string form
+func (a dummyAddr) String() string {
+	return string(a)
+}
diff --git a/vendor/github.com/docker/go-connections/sockets/proxy.go b/vendor/github.com/docker/go-connections/sockets/proxy.go
new file mode 100644
index 0000000000000..98e9a1dc61b54
--- /dev/null
+++ b/vendor/github.com/docker/go-connections/sockets/proxy.go
@@ -0,0 +1,51 @@
+package sockets
+
+import (
+	"net"
+	"net/url"
+	"os"
+	"strings"
+
+	"golang.org/x/net/proxy"
+)
+
+// GetProxyEnv allows access to the uppercase and the lowercase forms of
+// proxy-related variables. See the Go net/http documentation for details
+// on these variables: https://golang.org/pkg/net/http/
+func GetProxyEnv(key string) string {
+	proxyValue := os.Getenv(strings.ToUpper(key))
+	if proxyValue == "" {
+		return os.Getenv(strings.ToLower(key))
+	}
+	return proxyValue
+}
+
+// DialerFromEnvironment takes in a "direct" *net.Dialer and returns a
+// proxy.Dialer which will route the connections through the proxy using the
+// given dialer.
+func DialerFromEnvironment(direct *net.Dialer) (proxy.Dialer, error) {
+	allProxy := GetProxyEnv("all_proxy")
+	if len(allProxy) == 0 {
+		return direct, nil
+	}
+
+	proxyURL, err := url.Parse(allProxy)
+	if err != nil {
+		return direct, err
+	}
+
+	proxyFromURL, err := proxy.FromURL(proxyURL, direct)
+	if err != nil {
+		return direct, err
+	}
+
+	noProxy := GetProxyEnv("no_proxy")
+	if len(noProxy) == 0 {
+		return proxyFromURL, nil
+	}
+
+	perHost := proxy.NewPerHost(proxyFromURL, direct)
+	perHost.AddFromString(noProxy)
+
+	return perHost, nil
+}
diff --git a/vendor/github.com/docker/go-connections/sockets/sockets.go b/vendor/github.com/docker/go-connections/sockets/sockets.go
new file mode 100644
index 0000000000000..a1d7beb4d8059
--- /dev/null
+++ b/vendor/github.com/docker/go-connections/sockets/sockets.go
@@ -0,0 +1,38 @@
+// Package sockets provides helper functions to create and configure Unix or TCP sockets.
+package sockets
+
+import (
+	"errors"
+	"net"
+	"net/http"
+	"time"
+)
+
+// Why 32? See https://github.com/docker/docker/pull/8035.
+const defaultTimeout = 32 * time.Second
+
+// ErrProtocolNotAvailable is returned when a given transport protocol is not provided by the operating system.
+var ErrProtocolNotAvailable = errors.New("protocol not available")
+
+// ConfigureTransport configures the specified Transport according to the
+// specified proto and addr.
+// If the proto is unix (using a unix socket to communicate) or npipe,
+// compression is disabled.
+func ConfigureTransport(tr *http.Transport, proto, addr string) error {
+	switch proto {
+	case "unix":
+		return configureUnixTransport(tr, proto, addr)
+	case "npipe":
+		return configureNpipeTransport(tr, proto, addr)
+	default:
+		tr.Proxy = http.ProxyFromEnvironment
+		dialer, err := DialerFromEnvironment(&net.Dialer{
+			Timeout: defaultTimeout,
+		})
+		if err != nil {
+			return err
+		}
+		tr.Dial = dialer.Dial
+	}
+	return nil
+}
diff --git a/vendor/github.com/docker/go-connections/sockets/sockets_unix.go b/vendor/github.com/docker/go-connections/sockets/sockets_unix.go
new file mode 100644
index 0000000000000..386cf0dbbdecb
--- /dev/null
+++ b/vendor/github.com/docker/go-connections/sockets/sockets_unix.go
@@ -0,0 +1,35 @@
+// +build !windows
+
+package sockets
+
+import (
+	"fmt"
+	"net"
+	"net/http"
+	"syscall"
+	"time"
+)
+
+const maxUnixSocketPathSize = len(syscall.RawSockaddrUnix{}.Path)
+
+func configureUnixTransport(tr *http.Transport, proto, addr string) error {
+	if len(addr) > maxUnixSocketPathSize {
+		return fmt.Errorf("Unix socket path %q is too long", addr)
+	}
+	// No need for compression in local communications.
+	tr.DisableCompression = true
+	tr.Dial = func(_, _ string) (net.Conn, error) {
+		return net.DialTimeout(proto, addr, defaultTimeout)
+	}
+	return nil
+}
+
+func configureNpipeTransport(tr *http.Transport, proto, addr string) error {
+	return ErrProtocolNotAvailable
+}
+
+// DialPipe connects to a Windows named pipe.
+// This is not supported on other OSes.
+func DialPipe(_ string, _ time.Duration) (net.Conn, error) {
+	return nil, syscall.EAFNOSUPPORT
+}
diff --git a/vendor/github.com/docker/go-connections/sockets/sockets_windows.go b/vendor/github.com/docker/go-connections/sockets/sockets_windows.go
new file mode 100644
index 0000000000000..5c21644e1fe7b
--- /dev/null
+++ b/vendor/github.com/docker/go-connections/sockets/sockets_windows.go
@@ -0,0 +1,27 @@
+package sockets
+
+import (
+	"net"
+	"net/http"
+	"time"
+
+	"github.com/Microsoft/go-winio"
+)
+
+func configureUnixTransport(tr *http.Transport, proto, addr string) error {
+	return ErrProtocolNotAvailable
+}
+
+func configureNpipeTransport(tr *http.Transport, proto, addr string) error {
+	// No need for compression in local communications.
+	tr.DisableCompression = true
+	tr.Dial = func(_, _ string) (net.Conn, error) {
+		return DialPipe(addr, defaultTimeout)
+	}
+	return nil
+}
+
+// DialPipe connects to a Windows named pipe.
+func DialPipe(addr string, timeout time.Duration) (net.Conn, error) {
+	return winio.DialPipe(addr, &timeout)
+}
diff --git a/vendor/github.com/docker/go-connections/sockets/tcp_socket.go b/vendor/github.com/docker/go-connections/sockets/tcp_socket.go
new file mode 100644
index 0000000000000..53cbb6c79e476
--- /dev/null
+++ b/vendor/github.com/docker/go-connections/sockets/tcp_socket.go
@@ -0,0 +1,22 @@
+// Package sockets provides helper functions to create and configure Unix or TCP sockets.
+package sockets
+
+import (
+	"crypto/tls"
+	"net"
+)
+
+// NewTCPSocket creates a TCP socket listener with the specified address and
+// the specified tls configuration. If TLSConfig is set, will encapsulate the
+// TCP listener inside a TLS one.
+func NewTCPSocket(addr string, tlsConfig *tls.Config) (net.Listener, error) {
+	l, err := net.Listen("tcp", addr)
+	if err != nil {
+		return nil, err
+	}
+	if tlsConfig != nil {
+		tlsConfig.NextProtos = []string{"http/1.1"}
+		l = tls.NewListener(l, tlsConfig)
+	}
+	return l, nil
+}
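`NewTCPSocket` conditionally layers TLS on top of a plain listener. A sketch of that shape, testable with a nil config (the helper name is ours, not part of the package API):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net"
)

// newTCPSocket mirrors NewTCPSocket above: listen on TCP, and wrap the
// listener in TLS only when a config is supplied.
func newTCPSocket(addr string, tlsConfig *tls.Config) (net.Listener, error) {
	l, err := net.Listen("tcp", addr)
	if err != nil {
		return nil, err
	}
	if tlsConfig != nil {
		// Pin HTTP/1.1 during ALPN, as the vendored code does.
		tlsConfig.NextProtos = []string{"http/1.1"}
		l = tls.NewListener(l, tlsConfig)
	}
	return l, nil
}

func main() {
	// nil config: plain TCP listener on an ephemeral port.
	l, err := newTCPSocket("127.0.0.1:0", nil)
	if err != nil {
		panic(err)
	}
	defer l.Close()
	fmt.Println(l.Addr().Network()) // tcp
}
```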
diff --git a/vendor/github.com/docker/go-connections/sockets/unix_socket.go b/vendor/github.com/docker/go-connections/sockets/unix_socket.go
new file mode 100644
index 0000000000000..a8b5dbb6fdc04
--- /dev/null
+++ b/vendor/github.com/docker/go-connections/sockets/unix_socket.go
@@ -0,0 +1,32 @@
+// +build !windows
+
+package sockets
+
+import (
+	"net"
+	"os"
+	"syscall"
+)
+
+// NewUnixSocket creates a unix socket with the specified path and group.
+func NewUnixSocket(path string, gid int) (net.Listener, error) {
+	if err := syscall.Unlink(path); err != nil && !os.IsNotExist(err) {
+		return nil, err
+	}
+	mask := syscall.Umask(0777)
+	defer syscall.Umask(mask)
+
+	l, err := net.Listen("unix", path)
+	if err != nil {
+		return nil, err
+	}
+	if err := os.Chown(path, 0, gid); err != nil {
+		l.Close()
+		return nil, err
+	}
+	if err := os.Chmod(path, 0660); err != nil {
+		l.Close()
+		return nil, err
+	}
+	return l, nil
+}
diff --git a/vendor/github.com/docker/go-connections/tlsconfig/certpool_go17.go b/vendor/github.com/docker/go-connections/tlsconfig/certpool_go17.go
new file mode 100644
index 0000000000000..1ca0965e06ea5
--- /dev/null
+++ b/vendor/github.com/docker/go-connections/tlsconfig/certpool_go17.go
@@ -0,0 +1,18 @@
+// +build go1.7
+
+package tlsconfig
+
+import (
+	"crypto/x509"
+	"runtime"
+)
+
+// SystemCertPool returns a copy of the system cert pool, or an error if it
+// could not be loaded. On Windows it falls back to an empty pool instead.
+func SystemCertPool() (*x509.CertPool, error) {
+	certpool, err := x509.SystemCertPool()
+	if err != nil && runtime.GOOS == "windows" {
+		return x509.NewCertPool(), nil
+	}
+	return certpool, err
+}
diff --git a/vendor/github.com/docker/go-connections/tlsconfig/certpool_other.go b/vendor/github.com/docker/go-connections/tlsconfig/certpool_other.go
new file mode 100644
index 0000000000000..1ff81c333c369
--- /dev/null
+++ b/vendor/github.com/docker/go-connections/tlsconfig/certpool_other.go
@@ -0,0 +1,13 @@
+// +build !go1.7
+
+package tlsconfig
+
+import (
+	"crypto/x509"
+)
+
+// SystemCertPool returns a new empty cert pool; accessing the system cert
+// pool is only supported in Go 1.7 and later.
+func SystemCertPool() (*x509.CertPool, error) {
+	return x509.NewCertPool(), nil
+}
diff --git a/vendor/github.com/docker/go-connections/tlsconfig/config.go b/vendor/github.com/docker/go-connections/tlsconfig/config.go
new file mode 100644
index 0000000000000..0ef3fdcb46906
--- /dev/null
+++ b/vendor/github.com/docker/go-connections/tlsconfig/config.go
@@ -0,0 +1,254 @@
+// Package tlsconfig provides primitives to retrieve secure-enough TLS configurations for both clients and servers.
+//
+// As a reminder from https://golang.org/pkg/crypto/tls/#Config:
+//	A Config structure is used to configure a TLS client or server. After one has been passed to a TLS function it must not be modified.
+//	A Config may be reused; the tls package will also not modify it.
+package tlsconfig
+
+import (
+	"crypto/tls"
+	"crypto/x509"
+	"encoding/pem"
+	"fmt"
+	"io/ioutil"
+	"os"
+
+	"github.com/pkg/errors"
+)
+
+// Options represents the information needed to create client and server TLS configurations.
+type Options struct {
+	CAFile string
+
+	// If either CertFile or KeyFile is empty, Client() will not load them
+	// preventing the client from authenticating to the server.
+	// However, Server() requires them and will error out if they are empty.
+	CertFile string
+	KeyFile  string
+
+	// client-only option
+	InsecureSkipVerify bool
+	// server-only option
+	ClientAuth tls.ClientAuthType
+	// If ExclusiveRootPools is set, then if a CA file is provided, the root pool used for TLS
+	// creds will include exclusively the roots in that CA file.  If no CA file is provided,
+	// the system pool will be used.
+	ExclusiveRootPools bool
+	MinVersion         uint16
+	// If Passphrase is set, it will be used to decrypt a TLS private key
+	// if the key is encrypted
+	Passphrase string
+}
+
+// Extra (server-side) accepted CBC cipher suites - will phase out in the future
+var acceptedCBCCiphers = []uint16{
+	tls.TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
+	tls.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
+	tls.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,
+	tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,
+}
+
+// DefaultServerAcceptedCiphers should be used by code which already has a crypto/tls
+// options struct but wants to use a commonly accepted set of TLS cipher suites, with
+// known weak algorithms removed.
+var DefaultServerAcceptedCiphers = append(clientCipherSuites, acceptedCBCCiphers...)
+
+// allTLSVersions lists all the TLS versions and is used by the code that validates
+// a uint16 value as a TLS version.
+var allTLSVersions = map[uint16]struct{}{
+	tls.VersionSSL30: {},
+	tls.VersionTLS10: {},
+	tls.VersionTLS11: {},
+	tls.VersionTLS12: {},
+}
+
+// ServerDefault returns a secure-enough TLS configuration for the server TLS configuration.
+func ServerDefault(ops ...func(*tls.Config)) *tls.Config {
+	tlsconfig := &tls.Config{
+		// Avoid fallback by default to SSL protocols < TLS1.2
+		MinVersion:               tls.VersionTLS12,
+		PreferServerCipherSuites: true,
+		CipherSuites:             DefaultServerAcceptedCiphers,
+	}
+
+	for _, op := range ops {
+		op(tlsconfig)
+	}
+
+	return tlsconfig
+}
+
+// ClientDefault returns a secure-enough TLS configuration for the client TLS configuration.
+func ClientDefault(ops ...func(*tls.Config)) *tls.Config {
+	tlsconfig := &tls.Config{
+		// Prefer TLS1.2 as the client minimum
+		MinVersion:   tls.VersionTLS12,
+		CipherSuites: clientCipherSuites,
+	}
+
+	for _, op := range ops {
+		op(tlsconfig)
+	}
+
+	return tlsconfig
+}
+
+// certPool returns an X.509 certificate pool from `caFile`, the certificate file.
+func certPool(caFile string, exclusivePool bool) (*x509.CertPool, error) {
+	// If we should verify the server, we need to load a trusted ca
+	var (
+		certPool *x509.CertPool
+		err      error
+	)
+	if exclusivePool {
+		certPool = x509.NewCertPool()
+	} else {
+		certPool, err = SystemCertPool()
+		if err != nil {
+			return nil, fmt.Errorf("failed to read system certificates: %v", err)
+		}
+	}
+	pem, err := ioutil.ReadFile(caFile)
+	if err != nil {
+		return nil, fmt.Errorf("could not read CA certificate %q: %v", caFile, err)
+	}
+	if !certPool.AppendCertsFromPEM(pem) {
+		return nil, fmt.Errorf("failed to append certificates from PEM file: %q", caFile)
+	}
+	return certPool, nil
+}
+
+// isValidMinVersion checks that the input value is a valid tls minimum version
+func isValidMinVersion(version uint16) bool {
+	_, ok := allTLSVersions[version]
+	return ok
+}
+
+// adjustMinVersion sets the MinVersion on `config`, the input configuration.
+// It assumes the current MinVersion on the `config` is the lowest allowed.
+func adjustMinVersion(options Options, config *tls.Config) error {
+	if options.MinVersion > 0 {
+		if !isValidMinVersion(options.MinVersion) {
+			return fmt.Errorf("Invalid minimum TLS version: %x", options.MinVersion)
+		}
+		if options.MinVersion < config.MinVersion {
+			return fmt.Errorf("Requested minimum TLS version is too low. Should be at-least: %x", config.MinVersion)
+		}
+		config.MinVersion = options.MinVersion
+	}
+
+	return nil
+}
+
+// IsErrEncryptedKey returns true if the 'err' is an error of incorrect
+// password when trying to decrypt a TLS private key
+func IsErrEncryptedKey(err error) bool {
+	return errors.Cause(err) == x509.IncorrectPasswordError
+}
+
+// getPrivateKey returns the private key in 'keyBytes', in PEM-encoded format.
+// If the private key is encrypted, 'passphrase' is used to decrypt the
+// private key.
+func getPrivateKey(keyBytes []byte, passphrase string) ([]byte, error) {
+	// this section makes some small changes to code from notary/tuf/utils/x509.go
+	pemBlock, _ := pem.Decode(keyBytes)
+	if pemBlock == nil {
+		return nil, fmt.Errorf("no valid private key found")
+	}
+
+	var err error
+	if x509.IsEncryptedPEMBlock(pemBlock) {
+		keyBytes, err = x509.DecryptPEMBlock(pemBlock, []byte(passphrase))
+		if err != nil {
+			return nil, errors.Wrap(err, "private key is encrypted, but could not decrypt it")
+		}
+		keyBytes = pem.EncodeToMemory(&pem.Block{Type: pemBlock.Type, Bytes: keyBytes})
+	}
+
+	return keyBytes, nil
+}
+
+// getCert returns a Certificate from the CertFile and KeyFile in 'options',
+// if the key is encrypted, the Passphrase in 'options' will be used to
+// decrypt it.
+func getCert(options Options) ([]tls.Certificate, error) {
+	if options.CertFile == "" && options.KeyFile == "" {
+		return nil, nil
+	}
+
+	errMessage := "Could not load X509 key pair"
+
+	cert, err := ioutil.ReadFile(options.CertFile)
+	if err != nil {
+		return nil, errors.Wrap(err, errMessage)
+	}
+
+	prKeyBytes, err := ioutil.ReadFile(options.KeyFile)
+	if err != nil {
+		return nil, errors.Wrap(err, errMessage)
+	}
+
+	prKeyBytes, err = getPrivateKey(prKeyBytes, options.Passphrase)
+	if err != nil {
+		return nil, errors.Wrap(err, errMessage)
+	}
+
+	tlsCert, err := tls.X509KeyPair(cert, prKeyBytes)
+	if err != nil {
+		return nil, errors.Wrap(err, errMessage)
+	}
+
+	return []tls.Certificate{tlsCert}, nil
+}
+
+// Client returns a TLS configuration meant to be used by a client.
+func Client(options Options) (*tls.Config, error) {
+	tlsConfig := ClientDefault()
+	tlsConfig.InsecureSkipVerify = options.InsecureSkipVerify
+	if !options.InsecureSkipVerify && options.CAFile != "" {
+		CAs, err := certPool(options.CAFile, options.ExclusiveRootPools)
+		if err != nil {
+			return nil, err
+		}
+		tlsConfig.RootCAs = CAs
+	}
+
+	tlsCerts, err := getCert(options)
+	if err != nil {
+		return nil, err
+	}
+	tlsConfig.Certificates = tlsCerts
+
+	if err := adjustMinVersion(options, tlsConfig); err != nil {
+		return nil, err
+	}
+
+	return tlsConfig, nil
+}
+
+// Server returns a TLS configuration meant to be used by a server.
+func Server(options Options) (*tls.Config, error) {
+	tlsConfig := ServerDefault()
+	tlsConfig.ClientAuth = options.ClientAuth
+	tlsCert, err := tls.LoadX509KeyPair(options.CertFile, options.KeyFile)
+	if err != nil {
+		if os.IsNotExist(err) {
+			return nil, fmt.Errorf("Could not load X509 key pair (cert: %q, key: %q): %v", options.CertFile, options.KeyFile, err)
+		}
+		return nil, fmt.Errorf("Error reading X509 key pair (cert: %q, key: %q): %v. Make sure the key is not encrypted.", options.CertFile, options.KeyFile, err)
+	}
+	tlsConfig.Certificates = []tls.Certificate{tlsCert}
+	if options.ClientAuth >= tls.VerifyClientCertIfGiven && options.CAFile != "" {
+		CAs, err := certPool(options.CAFile, options.ExclusiveRootPools)
+		if err != nil {
+			return nil, err
+		}
+		tlsConfig.ClientCAs = CAs
+	}
+
+	if err := adjustMinVersion(options, tlsConfig); err != nil {
+		return nil, err
+	}
+
+	return tlsConfig, nil
+}
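The `certPool` helper above builds a root pool via `AppendCertsFromPEM`. A self-contained sketch of that step using a throwaway in-memory CA instead of a `CAFile` on disk (the generated certificate is demo-only and mirrors the `ExclusiveRootPools` path, which starts from an empty pool):

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"time"
)

func main() {
	// Generate a throwaway self-signed CA in memory.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "demo-ca"},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().Add(time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	caPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})

	// As in certPool with ExclusiveRootPools: start empty, append the CA PEM.
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		panic("failed to append CA certificate")
	}
	fmt.Println("appended CA to pool")
}
```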
diff --git a/vendor/github.com/docker/go-connections/tlsconfig/config_client_ciphers.go b/vendor/github.com/docker/go-connections/tlsconfig/config_client_ciphers.go
new file mode 100644
index 0000000000000..6b4c6a7c0d06d
--- /dev/null
+++ b/vendor/github.com/docker/go-connections/tlsconfig/config_client_ciphers.go
@@ -0,0 +1,17 @@
+// +build go1.5
+
+// Package tlsconfig provides primitives to retrieve secure-enough TLS configurations for both clients and servers.
+//
+package tlsconfig
+
+import (
+	"crypto/tls"
+)
+
+// Client TLS cipher suites (dropping CBC ciphers for client preferred suite set)
+var clientCipherSuites = []uint16{
+	tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
+	tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
+	tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
+	tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
+}
diff --git a/vendor/github.com/docker/go-connections/tlsconfig/config_legacy_client_ciphers.go b/vendor/github.com/docker/go-connections/tlsconfig/config_legacy_client_ciphers.go
new file mode 100644
index 0000000000000..ee22df47cb29b
--- /dev/null
+++ b/vendor/github.com/docker/go-connections/tlsconfig/config_legacy_client_ciphers.go
@@ -0,0 +1,15 @@
+// +build !go1.5
+
+// Package tlsconfig provides primitives to retrieve secure-enough TLS configurations for both clients and servers.
+//
+package tlsconfig
+
+import (
+	"crypto/tls"
+)
+
+// Client TLS cipher suites (dropping CBC ciphers for client preferred suite set)
+var clientCipherSuites = []uint16{
+	tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
+	tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
+}
diff --git a/vendor/github.com/google/cadvisor/container/docker/client.go b/vendor/github.com/google/cadvisor/container/docker/client.go
new file mode 100644
index 0000000000000..295e647cd740c
--- /dev/null
+++ b/vendor/github.com/google/cadvisor/container/docker/client.go
@@ -0,0 +1,61 @@
+// Copyright 2015 Google Inc. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+// Handler for /validate content.
+// Validates cadvisor dependencies - kernel, os, docker setup.
+
+package docker
+
+import (
+	"net/http"
+	"sync"
+
+	dclient "github.com/docker/docker/client"
+	"github.com/docker/go-connections/tlsconfig"
+)
+
+var (
+	dockerClient     *dclient.Client
+	dockerClientErr  error
+	dockerClientOnce sync.Once
+)
+
+// Client creates a Docker API client based on the given Docker flags
+func Client() (*dclient.Client, error) {
+	dockerClientOnce.Do(func() {
+		var client *http.Client
+		if *ArgDockerTLS {
+			client = &http.Client{}
+			options := tlsconfig.Options{
+				CAFile:             *ArgDockerCA,
+				CertFile:           *ArgDockerCert,
+				KeyFile:            *ArgDockerKey,
+				InsecureSkipVerify: false,
+			}
+			tlsc, err := tlsconfig.Client(options)
+			if err != nil {
+				dockerClientErr = err
+				return
+			}
+			client.Transport = &http.Transport{
+				TLSClientConfig: tlsc,
+			}
+		}
+		dockerClient, dockerClientErr = dclient.NewClientWithOpts(
+			dclient.WithHost(*ArgDockerEndpoint),
+			dclient.WithHTTPClient(client),
+			dclient.WithAPIVersionNegotiation())
+	})
+	return dockerClient, dockerClientErr
+}
diff --git a/vendor/github.com/google/cadvisor/container/docker/docker.go b/vendor/github.com/google/cadvisor/container/docker/docker.go
new file mode 100644
index 0000000000000..4c01c370d7cec
--- /dev/null
+++ b/vendor/github.com/google/cadvisor/container/docker/docker.go
@@ -0,0 +1,192 @@
+// Copyright 2016 Google Inc. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+// Provides global docker information.
+package docker
+
+import (
+	"fmt"
+	"regexp"
+	"strconv"
+	"time"
+
+	dockertypes "github.com/docker/docker/api/types"
+	"golang.org/x/net/context"
+
+	"github.com/google/cadvisor/container/docker/utils"
+	v1 "github.com/google/cadvisor/info/v1"
+	"github.com/google/cadvisor/machine"
+)
+
+var dockerTimeout = 10 * time.Second
+
+func defaultContext() context.Context {
+	ctx, _ := context.WithTimeout(context.Background(), dockerTimeout)
+	return ctx
+}
+
+func SetTimeout(timeout time.Duration) {
+	dockerTimeout = timeout
+}
+
+func Status() (v1.DockerStatus, error) {
+	return StatusWithContext(defaultContext())
+}
+
+func StatusWithContext(ctx context.Context) (v1.DockerStatus, error) {
+	client, err := Client()
+	if err != nil {
+		return v1.DockerStatus{}, fmt.Errorf("unable to communicate with docker daemon: %v", err)
+	}
+	dockerInfo, err := client.Info(ctx)
+	if err != nil {
+		return v1.DockerStatus{}, err
+	}
+	return StatusFromDockerInfo(dockerInfo)
+}
+
+func StatusFromDockerInfo(dockerInfo dockertypes.Info) (v1.DockerStatus, error) {
+	out := v1.DockerStatus{}
+	out.KernelVersion = machine.KernelVersion()
+	out.OS = dockerInfo.OperatingSystem
+	out.Hostname = dockerInfo.Name
+	out.RootDir = dockerInfo.DockerRootDir
+	out.Driver = dockerInfo.Driver
+	out.NumImages = dockerInfo.Images
+	out.NumContainers = dockerInfo.Containers
+	out.DriverStatus = make(map[string]string, len(dockerInfo.DriverStatus))
+	for _, v := range dockerInfo.DriverStatus {
+		out.DriverStatus[v[0]] = v[1]
+	}
+	var err error
+	ver, err := VersionString()
+	if err != nil {
+		return out, err
+	}
+	out.Version = ver
+	ver, err = APIVersionString()
+	if err != nil {
+		return out, err
+	}
+	out.APIVersion = ver
+	return out, nil
+}
+
+func Images() ([]v1.DockerImage, error) {
+	client, err := Client()
+	if err != nil {
+		return nil, fmt.Errorf("unable to communicate with docker daemon: %v", err)
+	}
+	summaries, err := client.ImageList(defaultContext(), dockertypes.ImageListOptions{All: false})
+	if err != nil {
+		return nil, err
+	}
+	return utils.SummariesToImages(summaries)
+}
+
+// Checks whether the dockerInfo reflects a valid docker setup, and returns it if it does, or an
+// error otherwise.
+func ValidateInfo(GetInfo func() (*dockertypes.Info, error), ServerVersion func() (string, error)) (*dockertypes.Info, error) {
+	info, err := GetInfo()
+	if err != nil {
+		return nil, err
+	}
+
+	// Fall back to version API if ServerVersion is not set in info.
+	if info.ServerVersion == "" {
+		var err error
+		info.ServerVersion, err = ServerVersion()
+		if err != nil {
+			return nil, fmt.Errorf("unable to get runtime version: %v", err)
+		}
+	}
+
+	version, err := ParseVersion(info.ServerVersion, VersionRe, 3)
+	if err != nil {
+		return nil, err
+	}
+
+	if version[0] < 1 {
+		return nil, fmt.Errorf("cAdvisor requires runtime version %v or above but we have found version %v reported as %q", []int{1, 0, 0}, version, info.ServerVersion)
+	}
+
+	if info.Driver == "" {
+		return nil, fmt.Errorf("failed to find runtime storage driver")
+	}
+
+	return info, nil
+}
+
+func Info() (*dockertypes.Info, error) {
+	client, err := Client()
+	if err != nil {
+		return nil, fmt.Errorf("unable to communicate with docker daemon: %v", err)
+	}
+
+	dockerInfo, err := client.Info(defaultContext())
+	if err != nil {
+		return nil, fmt.Errorf("failed to detect Docker info: %v", err)
+	}
+
+	return &dockerInfo, nil
+}
+
+func APIVersion() ([]int, error) {
+	ver, err := APIVersionString()
+	if err != nil {
+		return nil, err
+	}
+	return ParseVersion(ver, apiVersionRe, 2)
+}
+
+func VersionString() (string, error) {
+	dockerVersion := "Unknown"
+	client, err := Client()
+	if err == nil {
+		version, err := client.ServerVersion(defaultContext())
+		if err == nil {
+			dockerVersion = version.Version
+		}
+	}
+	return dockerVersion, err
+}
+
+func APIVersionString() (string, error) {
+	apiVersion := "Unknown"
+	client, err := Client()
+	if err == nil {
+		version, err := client.ServerVersion(defaultContext())
+		if err == nil {
+			apiVersion = version.APIVersion
+		}
+	}
+	return apiVersion, err
+}
+
+func ParseVersion(versionString string, regex *regexp.Regexp, length int) ([]int, error) {
+	matches := regex.FindAllStringSubmatch(versionString, -1)
+	if len(matches) != 1 {
+		return nil, fmt.Errorf("version string \"%v\" doesn't match expected regular expression: \"%v\"", versionString, regex.String())
+	}
+	versionStringArray := matches[0][1:]
+	versionArray := make([]int, length)
+	for index, versionStr := range versionStringArray {
+		version, err := strconv.Atoi(versionStr)
+		if err != nil {
+			return nil, fmt.Errorf("error while parsing \"%v\" in \"%v\"", versionStr, versionString)
+		}
+		versionArray[index] = version
+	}
+	return versionArray, nil
+}
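`ParseVersion` requires exactly one regex match and converts each captured group to an int. A standalone sketch of the same logic (helper name is ours; the regex matches `VersionRe` from factory.go):

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

var versionRe = regexp.MustCompile(`(\d+)\.(\d+)\.(\d+)`)

// parseVersion mirrors ParseVersion above: require exactly one occurrence of
// the version pattern, then convert each captured group to an int.
func parseVersion(s string, re *regexp.Regexp, length int) ([]int, error) {
	matches := re.FindAllStringSubmatch(s, -1)
	if len(matches) != 1 {
		return nil, fmt.Errorf("version string %q doesn't match %q", s, re.String())
	}
	out := make([]int, length)
	for i, part := range matches[0][1:] {
		n, err := strconv.Atoi(part)
		if err != nil {
			return nil, fmt.Errorf("error parsing %q in %q", part, s)
		}
		out[i] = n
	}
	return out, nil
}

func main() {
	v, err := parseVersion("20.10.17", versionRe, 3)
	if err != nil {
		panic(err)
	}
	fmt.Println(v) // [20 10 17]
}
```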
diff --git a/vendor/github.com/google/cadvisor/container/docker/factory.go b/vendor/github.com/google/cadvisor/container/docker/factory.go
new file mode 100644
index 0000000000000..d9a371616d590
--- /dev/null
+++ b/vendor/github.com/google/cadvisor/container/docker/factory.go
@@ -0,0 +1,370 @@
+// Copyright 2014 Google Inc. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package docker
+
+import (
+	"flag"
+	"fmt"
+	"regexp"
+	"strconv"
+	"strings"
+	"sync"
+	"time"
+
+	"github.com/blang/semver/v4"
+	dockertypes "github.com/docker/docker/api/types"
+
+	"github.com/google/cadvisor/container"
+	dockerutil "github.com/google/cadvisor/container/docker/utils"
+	"github.com/google/cadvisor/container/libcontainer"
+	"github.com/google/cadvisor/devicemapper"
+	"github.com/google/cadvisor/fs"
+	info "github.com/google/cadvisor/info/v1"
+	"github.com/google/cadvisor/machine"
+	"github.com/google/cadvisor/watcher"
+	"github.com/google/cadvisor/zfs"
+
+	docker "github.com/docker/docker/client"
+	"golang.org/x/net/context"
+	"k8s.io/klog/v2"
+)
+
+var ArgDockerEndpoint = flag.String("docker", "unix:///var/run/docker.sock", "docker endpoint")
+var ArgDockerTLS = flag.Bool("docker-tls", false, "use TLS to connect to docker")
+var ArgDockerCert = flag.String("docker-tls-cert", "cert.pem", "path to client certificate")
+var ArgDockerKey = flag.String("docker-tls-key", "key.pem", "path to private key")
+var ArgDockerCA = flag.String("docker-tls-ca", "ca.pem", "path to trusted CA")
+
+var dockerEnvMetadataWhiteList = flag.String("docker_env_metadata_whitelist", "", "DEPRECATED: this flag will be removed, please use `env_metadata_whitelist`. A comma-separated list of environment variable keys matched with specified prefix that needs to be collected for docker containers")
+
+// The namespace under which Docker aliases are unique.
+const DockerNamespace = "docker"
+
+// The retry times for getting docker root dir
+const rootDirRetries = 5
+
+// The retry period between attempts to get the docker root dir, in milliseconds
+const rootDirRetryPeriod time.Duration = 1000 * time.Millisecond
+
+var (
+	// Basepath to all container specific information that libcontainer stores.
+	dockerRootDir string
+
+	dockerRootDirFlag = flag.String("docker_root", "/var/lib/docker", "DEPRECATED: docker root is read from docker info (this is a fallback, default: /var/lib/docker)")
+
+	dockerRootDirOnce sync.Once
+
+	// flag that controls globally disabling thin_ls pending future enhancements.
+	// in production, it has been found that thin_ls makes excessive use of iops.
+	// in an iops restricted environment, usage of thin_ls must be controlled via blkio.
+	// pending that enhancement, disable its usage.
+	disableThinLs = true
+)
+
+func RootDir() string {
+	dockerRootDirOnce.Do(func() {
+		for i := 0; i < rootDirRetries; i++ {
+			status, err := Status()
+			if err == nil && status.RootDir != "" {
+				dockerRootDir = status.RootDir
+				break
+			} else {
+				time.Sleep(rootDirRetryPeriod)
+			}
+		}
+		if dockerRootDir == "" {
+			dockerRootDir = *dockerRootDirFlag
+		}
+	})
+	return dockerRootDir
+}
+
+type StorageDriver string
+
+const (
+	DevicemapperStorageDriver StorageDriver = "devicemapper"
+	AufsStorageDriver         StorageDriver = "aufs"
+	OverlayStorageDriver      StorageDriver = "overlay"
+	Overlay2StorageDriver     StorageDriver = "overlay2"
+	ZfsStorageDriver          StorageDriver = "zfs"
+	VfsStorageDriver          StorageDriver = "vfs"
+)
+
+type dockerFactory struct {
+	machineInfoFactory info.MachineInfoFactory
+
+	storageDriver StorageDriver
+	storageDir    string
+
+	client *docker.Client
+
+	// Information about the mounted cgroup subsystems.
+	cgroupSubsystems map[string]string
+
+	// Information about mounted filesystems.
+	fsInfo fs.FsInfo
+
+	dockerVersion []int
+
+	dockerAPIVersion []int
+
+	includedMetrics container.MetricSet
+
+	thinPoolName    string
+	thinPoolWatcher *devicemapper.ThinPoolWatcher
+
+	zfsWatcher *zfs.ZfsWatcher
+}
+
+func (f *dockerFactory) String() string {
+	return DockerNamespace
+}
+
+func (f *dockerFactory) NewContainerHandler(name string, metadataEnvAllowList []string, inHostNamespace bool) (handler container.ContainerHandler, err error) {
+	client, err := Client()
+	if err != nil {
+		return
+	}
+
+	dockerMetadataEnvAllowList := strings.Split(*dockerEnvMetadataWhiteList, ",")
+
+	// prefer using the unified metadataEnvAllowList
+	if len(metadataEnvAllowList) != 0 {
+		dockerMetadataEnvAllowList = metadataEnvAllowList
+	}
+
+	handler, err = newDockerContainerHandler(
+		client,
+		name,
+		f.machineInfoFactory,
+		f.fsInfo,
+		f.storageDriver,
+		f.storageDir,
+		f.cgroupSubsystems,
+		inHostNamespace,
+		dockerMetadataEnvAllowList,
+		f.dockerVersion,
+		f.includedMetrics,
+		f.thinPoolName,
+		f.thinPoolWatcher,
+		f.zfsWatcher,
+	)
+	return
+}
+
+// Docker handles all containers under /docker
+func (f *dockerFactory) CanHandleAndAccept(name string) (bool, bool, error) {
+	// if the container is not associated with docker, we can't handle it or accept it.
+	if !dockerutil.IsContainerName(name) {
+		return false, false, nil
+	}
+
+	// Check if the container is known to docker and it is active.
+	id := dockerutil.ContainerNameToId(name)
+
+	// We assume that if Inspect fails then the container is not known to docker.
+	ctnr, err := f.client.ContainerInspect(context.Background(), id)
+	if err != nil || !ctnr.State.Running {
+		return false, true, fmt.Errorf("error inspecting container: %v", err)
+	}
+
+	return true, true, nil
+}
+
+func (f *dockerFactory) DebugInfo() map[string][]string {
+	return map[string][]string{}
+}
+
+var (
+	versionRegexpString    = `(\d+)\.(\d+)\.(\d+)`
+	VersionRe              = regexp.MustCompile(versionRegexpString)
+	apiVersionRegexpString = `(\d+)\.(\d+)`
+	apiVersionRe           = regexp.MustCompile(apiVersionRegexpString)
+)
+
+func StartThinPoolWatcher(dockerInfo *dockertypes.Info) (*devicemapper.ThinPoolWatcher, error) {
+	_, err := devicemapper.ThinLsBinaryPresent()
+	if err != nil {
+		return nil, err
+	}
+
+	if err := ensureThinLsKernelVersion(machine.KernelVersion()); err != nil {
+		return nil, err
+	}
+
+	if disableThinLs {
+		return nil, fmt.Errorf("usage of thin_ls is disabled to preserve iops")
+	}
+
+	dockerThinPoolName, err := dockerutil.DockerThinPoolName(*dockerInfo)
+	if err != nil {
+		return nil, err
+	}
+
+	dockerMetadataDevice, err := dockerutil.DockerMetadataDevice(*dockerInfo)
+	if err != nil {
+		return nil, err
+	}
+
+	thinPoolWatcher, err := devicemapper.NewThinPoolWatcher(dockerThinPoolName, dockerMetadataDevice)
+	if err != nil {
+		return nil, err
+	}
+
+	go thinPoolWatcher.Start()
+	return thinPoolWatcher, nil
+}
+
+func StartZfsWatcher(dockerInfo *dockertypes.Info) (*zfs.ZfsWatcher, error) {
+	filesystem, err := dockerutil.DockerZfsFilesystem(*dockerInfo)
+	if err != nil {
+		return nil, err
+	}
+
+	zfsWatcher, err := zfs.NewZfsWatcher(filesystem)
+	if err != nil {
+		return nil, err
+	}
+
+	go zfsWatcher.Start()
+	return zfsWatcher, nil
+}
+
+func ensureThinLsKernelVersion(kernelVersion string) error {
+	// kernel 4.4.0 has the proper bug fixes to allow thin_ls to work without corrupting the thin pool
+	minKernelVersion := semver.MustParse("4.4.0")
+	// RHEL 7 kernel 3.10.0 release >= 366 has the proper bug fixes backported from 4.4.0 to allow
+	// thin_ls to work without corrupting the thin pool
+	minRhel7KernelVersion := semver.MustParse("3.10.0")
+
+	matches := VersionRe.FindStringSubmatch(kernelVersion)
+	if len(matches) < 4 {
+		return fmt.Errorf("error parsing kernel version: %q is not a semver", kernelVersion)
+	}
+
+	sem, err := semver.Make(matches[0])
+	if err != nil {
+		return err
+	}
+
+	if sem.GTE(minKernelVersion) {
+		// kernel 4.4+ - good
+		return nil
+	}
+
+	// Certain RHEL/Centos 7.x kernels have a backport to fix the corruption bug
+	if !strings.Contains(kernelVersion, ".el7") {
+		// not a RHEL 7.x kernel - won't work
+		return fmt.Errorf("kernel version 4.4.0 or later is required to use thin_ls - you have %q", kernelVersion)
+	}
+
+	// RHEL/Centos 7.x from here on
+	if sem.Major != 3 {
+		// only 3.x kernels *may* work correctly
+		return fmt.Errorf("RHEL/Centos 7.x kernel version 3.10.0-366 or later is required to use thin_ls - you have %q", kernelVersion)
+	}
+
+	if sem.GT(minRhel7KernelVersion) {
+		// 3.10.1+ - good
+		return nil
+	}
+
+	if sem.EQ(minRhel7KernelVersion) {
+		// need to check release
+		releaseRE := regexp.MustCompile(`^[^-]+-([0-9]+)\.`)
+		releaseMatches := releaseRE.FindStringSubmatch(kernelVersion)
+		if len(releaseMatches) != 2 {
+			return fmt.Errorf("unable to determine RHEL/Centos 7.x kernel release from %q", kernelVersion)
+		}
+
+		release, err := strconv.Atoi(releaseMatches[1])
+		if err != nil {
+			return fmt.Errorf("error parsing release %q: %v", releaseMatches[1], err)
+		}
+
+		if release >= 366 {
+			return nil
+		}
+	}
+
+	return fmt.Errorf("RHEL/Centos 7.x kernel version 3.10.0-366 or later is required to use thin_ls - you have %q", kernelVersion)
+}
+
+// Register root container before running this function!
+func Register(factory info.MachineInfoFactory, fsInfo fs.FsInfo, includedMetrics container.MetricSet) error {
+	client, err := Client()
+	if err != nil {
+		return fmt.Errorf("unable to communicate with docker daemon: %v", err)
+	}
+
+	dockerInfo, err := ValidateInfo(Info, VersionString)
+	if err != nil {
+		return fmt.Errorf("failed to validate Docker info: %v", err)
+	}
+
+	// Version already validated above, assume no error here.
+	dockerVersion, _ := ParseVersion(dockerInfo.ServerVersion, VersionRe, 3)
+
+	dockerAPIVersion, _ := APIVersion()
+
+	cgroupSubsystems, err := libcontainer.GetCgroupSubsystems(includedMetrics)
+	if err != nil {
+		return fmt.Errorf("failed to get cgroup subsystems: %v", err)
+	}
+
+	var (
+		thinPoolWatcher *devicemapper.ThinPoolWatcher
+		thinPoolName    string
+		zfsWatcher      *zfs.ZfsWatcher
+	)
+	if includedMetrics.Has(container.DiskUsageMetrics) {
+		if StorageDriver(dockerInfo.Driver) == DevicemapperStorageDriver {
+			thinPoolWatcher, err = StartThinPoolWatcher(dockerInfo)
+			if err != nil {
+				klog.Errorf("devicemapper filesystem stats will not be reported: %v", err)
+			}
+
+			// Safe to ignore error - driver status should always be populated.
+			status, _ := StatusFromDockerInfo(*dockerInfo)
+			thinPoolName = status.DriverStatus[dockerutil.DriverStatusPoolName]
+		}
+
+		if StorageDriver(dockerInfo.Driver) == ZfsStorageDriver {
+			zfsWatcher, err = StartZfsWatcher(dockerInfo)
+			if err != nil {
+				klog.Errorf("zfs filesystem stats will not be reported: %v", err)
+			}
+		}
+	}
+
+	klog.V(1).Infof("Registering Docker factory")
+	f := &dockerFactory{
+		cgroupSubsystems:   cgroupSubsystems,
+		client:             client,
+		dockerVersion:      dockerVersion,
+		dockerAPIVersion:   dockerAPIVersion,
+		fsInfo:             fsInfo,
+		machineInfoFactory: factory,
+		storageDriver:      StorageDriver(dockerInfo.Driver),
+		storageDir:         RootDir(),
+		includedMetrics:    includedMetrics,
+		thinPoolName:       thinPoolName,
+		thinPoolWatcher:    thinPoolWatcher,
+		zfsWatcher:         zfsWatcher,
+	}
+
+	container.RegisterContainerHandlerFactory(f, []watcher.ContainerWatchSource{watcher.Raw})
+	return nil
+}
diff --git a/vendor/github.com/google/cadvisor/container/docker/fs.go b/vendor/github.com/google/cadvisor/container/docker/fs.go
new file mode 100644
index 0000000000000..79384d0e40c2f
--- /dev/null
+++ b/vendor/github.com/google/cadvisor/container/docker/fs.go
@@ -0,0 +1,173 @@
+// Copyright 2022 Google Inc. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package docker
+
+import (
+	"fmt"
+
+	"k8s.io/klog/v2"
+
+	"github.com/google/cadvisor/container"
+	"github.com/google/cadvisor/container/common"
+	"github.com/google/cadvisor/devicemapper"
+	"github.com/google/cadvisor/fs"
+	info "github.com/google/cadvisor/info/v1"
+	"github.com/google/cadvisor/zfs"
+)
+
+func FsStats(
+	stats *info.ContainerStats,
+	machineInfoFactory info.MachineInfoFactory,
+	metrics container.MetricSet,
+	storageDriver StorageDriver,
+	fsHandler common.FsHandler,
+	globalFsInfo fs.FsInfo,
+	poolName string,
+	rootfsStorageDir string,
+	zfsParent string,
+) error {
+	mi, err := machineInfoFactory.GetMachineInfo()
+	if err != nil {
+		return err
+	}
+
+	if metrics.Has(container.DiskIOMetrics) {
+		common.AssignDeviceNamesToDiskStats((*common.MachineInfoNamer)(mi), &stats.DiskIo)
+	}
+
+	if metrics.Has(container.DiskUsageMetrics) {
+		var device string
+		switch storageDriver {
+		case DevicemapperStorageDriver:
+			device = poolName
+		case AufsStorageDriver, OverlayStorageDriver, Overlay2StorageDriver, VfsStorageDriver:
+			deviceInfo, err := globalFsInfo.GetDirFsDevice(rootfsStorageDir)
+			if err != nil {
+				return fmt.Errorf("unable to determine device info for dir: %v: %v", rootfsStorageDir, err)
+			}
+			device = deviceInfo.Device
+		case ZfsStorageDriver:
+			device = zfsParent
+		default:
+			return nil
+		}
+
+		for _, fs := range mi.Filesystems {
+			if fs.Device == device {
+				usage := fsHandler.Usage()
+				fsStat := info.FsStats{
+					Device:    device,
+					Type:      fs.Type,
+					Limit:     fs.Capacity,
+					BaseUsage: usage.BaseUsageBytes,
+					Usage:     usage.TotalUsageBytes,
+					Inodes:    usage.InodeUsage,
+				}
+				fileSystems, err := globalFsInfo.GetGlobalFsInfo()
+				if err != nil {
+					return fmt.Errorf("unable to obtain diskstats for filesystem %s: %v", fsStat.Device, err)
+				}
+				addDiskStats(fileSystems, &fs, &fsStat)
+				stats.Filesystem = append(stats.Filesystem, fsStat)
+				break
+			}
+		}
+	}
+
+	return nil
+}
+
+func addDiskStats(fileSystems []fs.Fs, fsInfo *info.FsInfo, fsStats *info.FsStats) {
+	if fsInfo == nil {
+		return
+	}
+
+	for _, fileSys := range fileSystems {
+		if fsInfo.DeviceMajor == fileSys.DiskStats.Major &&
+			fsInfo.DeviceMinor == fileSys.DiskStats.Minor {
+			fsStats.ReadsCompleted = fileSys.DiskStats.ReadsCompleted
+			fsStats.ReadsMerged = fileSys.DiskStats.ReadsMerged
+			fsStats.SectorsRead = fileSys.DiskStats.SectorsRead
+			fsStats.ReadTime = fileSys.DiskStats.ReadTime
+			fsStats.WritesCompleted = fileSys.DiskStats.WritesCompleted
+			fsStats.WritesMerged = fileSys.DiskStats.WritesMerged
+			fsStats.SectorsWritten = fileSys.DiskStats.SectorsWritten
+			fsStats.WriteTime = fileSys.DiskStats.WriteTime
+			fsStats.IoInProgress = fileSys.DiskStats.IoInProgress
+			fsStats.IoTime = fileSys.DiskStats.IoTime
+			fsStats.WeightedIoTime = fileSys.DiskStats.WeightedIoTime
+			break
+		}
+	}
+}
+
+// FsHandler is a composite FsHandler implementation that incorporates
+// the common fs handler, a devicemapper ThinPoolWatcher, and a ZfsWatcher.
+type FsHandler struct {
+	FsHandler common.FsHandler
+
+	// thinPoolWatcher is the devicemapper thin pool watcher
+	ThinPoolWatcher *devicemapper.ThinPoolWatcher
+	// deviceID is the id of the container's fs device
+	DeviceID string
+
+	// zfsWatcher is the zfs filesystem watcher
+	ZfsWatcher *zfs.ZfsWatcher
+	// zfsFilesystem is the docker zfs filesystem
+	ZfsFilesystem string
+}
+
+var _ common.FsHandler = &FsHandler{}
+
+func (h *FsHandler) Start() {
+	h.FsHandler.Start()
+}
+
+func (h *FsHandler) Stop() {
+	h.FsHandler.Stop()
+}
+
+func (h *FsHandler) Usage() common.FsUsage {
+	usage := h.FsHandler.Usage()
+
+	// When devicemapper is the storage driver, the base usage of the container comes from the thin pool.
+	// We still need the result of the fsHandler for any extra storage associated with the container.
+	// To correctly factor in the thin pool usage, we should:
+	// * Use the thin pool usage as the base usage
+	// * Calculate the overall usage by adding the overall usage from the fs handler to the thin pool usage
+	if h.ThinPoolWatcher != nil {
+		thinPoolUsage, err := h.ThinPoolWatcher.GetUsage(h.DeviceID)
+		if err != nil {
+			// TODO: ideally we should keep track of how many times we failed to get the usage for this
+			// device vs how many refreshes of the cache there have been, and display an error e.g. if we've
+			// had at least 1 refresh and we still can't find the device.
+			klog.V(5).Infof("unable to get fs usage from thin pool for device %s: %v", h.DeviceID, err)
+		} else {
+			usage.BaseUsageBytes = thinPoolUsage
+			usage.TotalUsageBytes += thinPoolUsage
+		}
+	}
+
+	if h.ZfsWatcher != nil {
+		zfsUsage, err := h.ZfsWatcher.GetUsage(h.ZfsFilesystem)
+		if err != nil {
+			klog.V(5).Infof("unable to get fs usage from zfs for filesystem %s: %v", h.ZfsFilesystem, err)
+		} else {
+			usage.BaseUsageBytes = zfsUsage
+			usage.TotalUsageBytes += zfsUsage
+		}
+	}
+	return usage
+}
diff --git a/vendor/github.com/google/cadvisor/container/docker/handler.go b/vendor/github.com/google/cadvisor/container/docker/handler.go
new file mode 100644
index 0000000000000..fc66641f6fdb3
--- /dev/null
+++ b/vendor/github.com/google/cadvisor/container/docker/handler.go
@@ -0,0 +1,355 @@
+// Copyright 2014 Google Inc. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+// Handler for Docker containers.
+package docker
+
+import (
+	"fmt"
+	"os"
+	"path"
+	"strconv"
+	"strings"
+	"time"
+
+	"github.com/google/cadvisor/container"
+	"github.com/google/cadvisor/container/common"
+	dockerutil "github.com/google/cadvisor/container/docker/utils"
+	containerlibcontainer "github.com/google/cadvisor/container/libcontainer"
+	"github.com/google/cadvisor/devicemapper"
+	"github.com/google/cadvisor/fs"
+	info "github.com/google/cadvisor/info/v1"
+	"github.com/google/cadvisor/zfs"
+	"github.com/opencontainers/runc/libcontainer/cgroups"
+
+	docker "github.com/docker/docker/client"
+	"golang.org/x/net/context"
+)
+
+const (
+	// The read write layers exist here.
+	aufsRWLayer     = "diff"
+	overlayRWLayer  = "upper"
+	overlay2RWLayer = "diff"
+
+	// Path to the directory where docker stores log files if the json logging driver is enabled.
+	pathToContainersDir = "containers"
+)
+
+type dockerContainerHandler struct {
+	// machineInfoFactory provides info.MachineInfo
+	machineInfoFactory info.MachineInfoFactory
+
+	// Absolute path to the cgroup hierarchies of this container.
+	// (e.g.: "cpu" -> "/sys/fs/cgroup/cpu/test")
+	cgroupPaths map[string]string
+
+	// the docker storage driver
+	storageDriver    StorageDriver
+	fsInfo           fs.FsInfo
+	rootfsStorageDir string
+
+	// Time at which this container was created.
+	creationTime time.Time
+
+	// Metadata associated with the container.
+	envs   map[string]string
+	labels map[string]string
+
+	// Image name used for this container.
+	image string
+
+	// Filesystem handler.
+	fsHandler common.FsHandler
+
+	// The IP address of the container
+	ipAddress string
+
+	includedMetrics container.MetricSet
+
+	// the devicemapper poolname
+	poolName string
+
+	// zfsParent is the parent for docker zfs
+	zfsParent string
+
+	// Reference to the container
+	reference info.ContainerReference
+
+	libcontainerHandler *containerlibcontainer.Handler
+}
+
+var _ container.ContainerHandler = &dockerContainerHandler{}
+
+func getRwLayerID(containerID, storageDir string, sd StorageDriver, dockerVersion []int) (string, error) {
+	const (
+		// Docker version >=1.10.0 have a randomized ID for the root fs of a container.
+		randomizedRWLayerMinorVersion = 10
+		rwLayerIDFile                 = "mount-id"
+	)
+	if (dockerVersion[0] <= 1) && (dockerVersion[1] < randomizedRWLayerMinorVersion) {
+		return containerID, nil
+	}
+
+	bytes, err := os.ReadFile(path.Join(storageDir, "image", string(sd), "layerdb", "mounts", containerID, rwLayerIDFile))
+	if err != nil {
+		return "", fmt.Errorf("failed to identify the read-write layer ID for container %q: %v", containerID, err)
+	}
+	return string(bytes), err
+}
+
+// newDockerContainerHandler returns a new container.ContainerHandler
+func newDockerContainerHandler(
+	client *docker.Client,
+	name string,
+	machineInfoFactory info.MachineInfoFactory,
+	fsInfo fs.FsInfo,
+	storageDriver StorageDriver,
+	storageDir string,
+	cgroupSubsystems map[string]string,
+	inHostNamespace bool,
+	metadataEnvAllowList []string,
+	dockerVersion []int,
+	includedMetrics container.MetricSet,
+	thinPoolName string,
+	thinPoolWatcher *devicemapper.ThinPoolWatcher,
+	zfsWatcher *zfs.ZfsWatcher,
+) (container.ContainerHandler, error) {
+	// Create the cgroup paths.
+	cgroupPaths := common.MakeCgroupPaths(cgroupSubsystems, name)
+
+	// Generate the equivalent cgroup manager for this container.
+	cgroupManager, err := containerlibcontainer.NewCgroupManager(name, cgroupPaths)
+	if err != nil {
+		return nil, err
+	}
+
+	rootFs := "/"
+	if !inHostNamespace {
+		rootFs = "/rootfs"
+		storageDir = path.Join(rootFs, storageDir)
+	}
+
+	id := dockerutil.ContainerNameToId(name)
+
+	// Add the Containers dir where the log files are stored.
+	// FIXME: Give `otherStorageDir` a more descriptive name.
+	otherStorageDir := path.Join(storageDir, pathToContainersDir, id)
+
+	rwLayerID, err := getRwLayerID(id, storageDir, storageDriver, dockerVersion)
+	if err != nil {
+		return nil, err
+	}
+
+	// Determine the rootfs storage dir OR the pool name to determine the device.
+	// For devicemapper, we only need the thin pool name, and that is passed in to this call
+	rootfsStorageDir, zfsFilesystem, zfsParent, err := DetermineDeviceStorage(storageDriver, storageDir, rwLayerID)
+	if err != nil {
+		return nil, fmt.Errorf("unable to determine device storage: %v", err)
+	}
+
+	// We assume that if Inspect fails then the container is not known to docker.
+	ctnr, err := client.ContainerInspect(context.Background(), id)
+	if err != nil {
+		return nil, fmt.Errorf("failed to inspect container %q: %v", id, err)
+	}
+
+	// Do not report network metrics for containers that share netns with another container.
+	metrics := common.RemoveNetMetrics(includedMetrics, ctnr.HostConfig.NetworkMode.IsContainer())
+
+	// TODO: extract object mother method
+	handler := &dockerContainerHandler{
+		machineInfoFactory: machineInfoFactory,
+		cgroupPaths:        cgroupPaths,
+		fsInfo:             fsInfo,
+		storageDriver:      storageDriver,
+		poolName:           thinPoolName,
+		rootfsStorageDir:   rootfsStorageDir,
+		envs:               make(map[string]string),
+		labels:             ctnr.Config.Labels,
+		includedMetrics:    metrics,
+		zfsParent:          zfsParent,
+	}
+	// Timestamp returned by Docker is in time.RFC3339Nano format.
+	handler.creationTime, err = time.Parse(time.RFC3339Nano, ctnr.Created)
+	if err != nil {
+		// This should not happen, report the error just in case
+		return nil, fmt.Errorf("failed to parse the create timestamp %q for container %q: %v", ctnr.Created, id, err)
+	}
+	handler.libcontainerHandler = containerlibcontainer.NewHandler(cgroupManager, rootFs, ctnr.State.Pid, metrics)
+
+	// Add the name and bare ID as aliases of the container.
+	handler.reference = info.ContainerReference{
+		Id:        id,
+		Name:      name,
+		Aliases:   []string{strings.TrimPrefix(ctnr.Name, "/"), id},
+		Namespace: DockerNamespace,
+	}
+	handler.image = ctnr.Config.Image
+	// Only adds restartcount label if it's greater than 0
+	if ctnr.RestartCount > 0 {
+		handler.labels["restartcount"] = strconv.Itoa(ctnr.RestartCount)
+	}
+
+	// Obtain the IP address for the container.
+	// If the NetworkMode starts with 'container:' then we need to use the IP address of the container specified.
+	// This happens in cases such as kubernetes, where the container doesn't have an IP address itself and we need to use the pod's address.
+	ipAddress := ctnr.NetworkSettings.IPAddress
+	networkMode := string(ctnr.HostConfig.NetworkMode)
+	if ipAddress == "" && strings.HasPrefix(networkMode, "container:") {
+		containerID := strings.TrimPrefix(networkMode, "container:")
+		c, err := client.ContainerInspect(context.Background(), containerID)
+		if err != nil {
+			return nil, fmt.Errorf("failed to inspect container %q: %v", containerID, err)
+		}
+		ipAddress = c.NetworkSettings.IPAddress
+	}
+
+	handler.ipAddress = ipAddress
+
+	if includedMetrics.Has(container.DiskUsageMetrics) {
+		handler.fsHandler = &FsHandler{
+			FsHandler:       common.NewFsHandler(common.DefaultPeriod, rootfsStorageDir, otherStorageDir, fsInfo),
+			ThinPoolWatcher: thinPoolWatcher,
+			ZfsWatcher:      zfsWatcher,
+			DeviceID:        ctnr.GraphDriver.Data["DeviceId"],
+			ZfsFilesystem:   zfsFilesystem,
+		}
+	}
+
+	// split env vars to get metadata map.
+	for _, exposedEnv := range metadataEnvAllowList {
+		if exposedEnv == "" {
+			// if no dockerEnvWhitelist provided, len(metadataEnvAllowList) == 1, metadataEnvAllowList[0] == ""
+			continue
+		}
+
+		for _, envVar := range ctnr.Config.Env {
+			if envVar != "" {
+				splits := strings.SplitN(envVar, "=", 2)
+				if len(splits) == 2 && strings.HasPrefix(splits[0], exposedEnv) {
+					handler.envs[strings.ToLower(splits[0])] = splits[1]
+				}
+			}
+		}
+	}
+
+	return handler, nil
+}
+
+func DetermineDeviceStorage(storageDriver StorageDriver, storageDir string, rwLayerID string) (
+	rootfsStorageDir string, zfsFilesystem string, zfsParent string, err error) {
+	switch storageDriver {
+	case AufsStorageDriver:
+		rootfsStorageDir = path.Join(storageDir, string(AufsStorageDriver), aufsRWLayer, rwLayerID)
+	case OverlayStorageDriver:
+		rootfsStorageDir = path.Join(storageDir, string(storageDriver), rwLayerID, overlayRWLayer)
+	case Overlay2StorageDriver:
+		rootfsStorageDir = path.Join(storageDir, string(storageDriver), rwLayerID, overlay2RWLayer)
+	case VfsStorageDriver:
+		rootfsStorageDir = path.Join(storageDir)
+	case ZfsStorageDriver:
+		var status info.DockerStatus
+		status, err = Status()
+		if err != nil {
+			return
+		}
+		zfsParent = status.DriverStatus[dockerutil.DriverStatusParentDataset]
+		zfsFilesystem = path.Join(zfsParent, rwLayerID)
+	}
+	return
+}
+
+func (h *dockerContainerHandler) Start() {
+	if h.fsHandler != nil {
+		h.fsHandler.Start()
+	}
+}
+
+func (h *dockerContainerHandler) Cleanup() {
+	if h.fsHandler != nil {
+		h.fsHandler.Stop()
+	}
+}
+
+func (h *dockerContainerHandler) ContainerReference() (info.ContainerReference, error) {
+	return h.reference, nil
+}
+
+func (h *dockerContainerHandler) GetSpec() (info.ContainerSpec, error) {
+	hasFilesystem := h.includedMetrics.Has(container.DiskUsageMetrics)
+	hasNetwork := h.includedMetrics.Has(container.NetworkUsageMetrics)
+	spec, err := common.GetSpec(h.cgroupPaths, h.machineInfoFactory, hasNetwork, hasFilesystem)
+
+	spec.Labels = h.labels
+	spec.Envs = h.envs
+	spec.Image = h.image
+	spec.CreationTime = h.creationTime
+
+	return spec, err
+}
+
+// TODO(vmarmol): Get from libcontainer API instead of cgroup manager when we don't have to support older Dockers.
+func (h *dockerContainerHandler) GetStats() (*info.ContainerStats, error) {
+	stats, err := h.libcontainerHandler.GetStats()
+	if err != nil {
+		return stats, err
+	}
+
+	// Get filesystem stats.
+	err = FsStats(stats, h.machineInfoFactory, h.includedMetrics, h.storageDriver,
+		h.fsHandler, h.fsInfo, h.poolName, h.rootfsStorageDir, h.zfsParent)
+	if err != nil {
+		return stats, err
+	}
+
+	return stats, nil
+}
+
+func (h *dockerContainerHandler) ListContainers(listType container.ListType) ([]info.ContainerReference, error) {
+	// No-op for Docker driver.
+	return []info.ContainerReference{}, nil
+}
+
+func (h *dockerContainerHandler) GetCgroupPath(resource string) (string, error) {
+	var res string
+	if !cgroups.IsCgroup2UnifiedMode() {
+		res = resource
+	}
+	path, ok := h.cgroupPaths[res]
+	if !ok {
+		return "", fmt.Errorf("could not find path for resource %q for container %q", resource, h.reference.Name)
+	}
+	return path, nil
+}
+
+func (h *dockerContainerHandler) GetContainerLabels() map[string]string {
+	return h.labels
+}
+
+func (h *dockerContainerHandler) GetContainerIPAddress() string {
+	return h.ipAddress
+}
+
+func (h *dockerContainerHandler) ListProcesses(listType container.ListType) ([]int, error) {
+	return h.libcontainerHandler.GetProcesses()
+}
+
+func (h *dockerContainerHandler) Exists() bool {
+	return common.CgroupExists(h.cgroupPaths)
+}
+
+func (h *dockerContainerHandler) Type() container.ContainerType {
+	return container.ContainerTypeDocker
+}
diff --git a/vendor/github.com/google/cadvisor/container/docker/install/install.go b/vendor/github.com/google/cadvisor/container/docker/install/install.go
new file mode 100644
index 0000000000000..81346f68806c9
--- /dev/null
+++ b/vendor/github.com/google/cadvisor/container/docker/install/install.go
@@ -0,0 +1,30 @@
+// Copyright 2019 Google Inc. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+// The install package registers docker.NewPlugin() as the "docker" container provider when imported
+package install
+
+import (
+	"k8s.io/klog/v2"
+
+	"github.com/google/cadvisor/container"
+	"github.com/google/cadvisor/container/docker"
+)
+
+func init() {
+	err := container.RegisterPlugin("docker", docker.NewPlugin())
+	if err != nil {
+		klog.Fatalf("Failed to register docker plugin: %v", err)
+	}
+}
diff --git a/vendor/github.com/google/cadvisor/container/docker/plugin.go b/vendor/github.com/google/cadvisor/container/docker/plugin.go
new file mode 100644
index 0000000000000..07e471a732f86
--- /dev/null
+++ b/vendor/github.com/google/cadvisor/container/docker/plugin.go
@@ -0,0 +1,78 @@
+// Copyright 2019 Google Inc. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package docker
+
+import (
+	"time"
+
+	"golang.org/x/net/context"
+	"k8s.io/klog/v2"
+
+	"github.com/google/cadvisor/container"
+	"github.com/google/cadvisor/fs"
+	info "github.com/google/cadvisor/info/v1"
+	"github.com/google/cadvisor/watcher"
+)
+
+const dockerClientTimeout = 10 * time.Second
+
+// NewPlugin returns an implementation of container.Plugin suitable for passing to container.RegisterPlugin()
+func NewPlugin() container.Plugin {
+	return &plugin{}
+}
+
+type plugin struct{}
+
+func (p *plugin) InitializeFSContext(context *fs.Context) error {
+	SetTimeout(dockerClientTimeout)
+	// Try to connect to docker on startup, retrying indefinitely on timeouts; other errors return an empty status.
+	dockerStatus := retryDockerStatus()
+	context.Docker = fs.DockerContext{
+		Root:         RootDir(),
+		Driver:       dockerStatus.Driver,
+		DriverStatus: dockerStatus.DriverStatus,
+	}
+	return nil
+}
+
+func (p *plugin) Register(factory info.MachineInfoFactory, fsInfo fs.FsInfo, includedMetrics container.MetricSet) (watcher.ContainerWatcher, error) {
+	err := Register(factory, fsInfo, includedMetrics)
+	return nil, err
+}
+
+func retryDockerStatus() info.DockerStatus {
+	startupTimeout := dockerClientTimeout
+	maxTimeout := 4 * startupTimeout
+	for {
+		ctx, _ := context.WithTimeout(context.Background(), startupTimeout)
+		dockerStatus, err := StatusWithContext(ctx)
+		if err == nil {
+			return dockerStatus
+		}
+
+		switch err {
+		case context.DeadlineExceeded:
+			klog.Warningf("Timeout trying to communicate with docker during initialization, will retry")
+		default:
+			klog.V(5).Infof("Docker not connected: %v", err)
+			return info.DockerStatus{}
+		}
+
+		startupTimeout = 2 * startupTimeout
+		if startupTimeout > maxTimeout {
+			startupTimeout = maxTimeout
+		}
+	}
+}
diff --git a/vendor/github.com/google/cadvisor/container/docker/utils/docker.go b/vendor/github.com/google/cadvisor/container/docker/utils/docker.go
new file mode 100644
index 0000000000000..11a0c9e9f1371
--- /dev/null
+++ b/vendor/github.com/google/cadvisor/container/docker/utils/docker.go
@@ -0,0 +1,125 @@
+// Copyright 2016 Google Inc. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package utils
+
+import (
+	"fmt"
+	"os"
+	"path"
+	"regexp"
+	"strings"
+
+	dockertypes "github.com/docker/docker/api/types"
+	v1 "github.com/google/cadvisor/info/v1"
+)
+
+const (
+	DriverStatusPoolName      = "Pool Name"
+	DriverStatusMetadataFile  = "Metadata file"
+	DriverStatusParentDataset = "Parent Dataset"
+)
+
+// Regexp that identifies docker cgroups; containers started with
+// --cgroup-parent have a prefix other than 'docker'.
+var cgroupRegexp = regexp.MustCompile(`([a-z0-9]{64})`)
+
+func DriverStatusValue(status [][2]string, target string) string {
+	for _, v := range status {
+		if strings.EqualFold(v[0], target) {
+			return v[1]
+		}
+	}
+
+	return ""
+}
+
+func DockerThinPoolName(info dockertypes.Info) (string, error) {
+	poolName := DriverStatusValue(info.DriverStatus, DriverStatusPoolName)
+	if len(poolName) == 0 {
+		return "", fmt.Errorf("could not get devicemapper pool name")
+	}
+
+	return poolName, nil
+}
+
+func DockerMetadataDevice(info dockertypes.Info) (string, error) {
+	metadataDevice := DriverStatusValue(info.DriverStatus, DriverStatusMetadataFile)
+	if len(metadataDevice) != 0 {
+		return metadataDevice, nil
+	}
+
+	poolName, err := DockerThinPoolName(info)
+	if err != nil {
+		return "", err
+	}
+
+	metadataDevice = fmt.Sprintf("/dev/mapper/%s_tmeta", poolName)
+
+	if _, err := os.Stat(metadataDevice); err != nil {
+		return "", err
+	}
+
+	return metadataDevice, nil
+}
+
+func DockerZfsFilesystem(info dockertypes.Info) (string, error) {
+	filesystem := DriverStatusValue(info.DriverStatus, DriverStatusParentDataset)
+	if len(filesystem) == 0 {
+		return "", fmt.Errorf("could not get zfs filesystem")
+	}
+
+	return filesystem, nil
+}
+
+func SummariesToImages(summaries []dockertypes.ImageSummary) ([]v1.DockerImage, error) {
+	var out []v1.DockerImage
+	const unknownTag = "<none>:<none>"
+	for _, summary := range summaries {
+		if len(summary.RepoTags) == 1 && summary.RepoTags[0] == unknownTag {
+			// images without repo tags are uninteresting.
+			continue
+		}
+		di := v1.DockerImage{
+			ID:          summary.ID,
+			RepoTags:    summary.RepoTags,
+			Created:     summary.Created,
+			VirtualSize: summary.VirtualSize,
+			Size:        summary.Size,
+		}
+		out = append(out, di)
+	}
+	return out, nil
+}
+
+// Returns the ID from the full container name.
+func ContainerNameToId(name string) string {
+	id := path.Base(name)
+
+	if matches := cgroupRegexp.FindStringSubmatch(id); matches != nil {
+		return matches[1]
+	}
+
+	return id
+}
+
+// IsContainerName returns true if the cgroup with associated name
+// corresponds to a container.
+func IsContainerName(name string) bool {
+	// Always ignore .mount cgroups, even if associated with docker; they are delegated to systemd.
+	if strings.HasSuffix(name, ".mount") {
+		return false
+	}
+	return cgroupRegexp.MatchString(path.Base(name))
+}
diff --git a/vendor/github.com/google/cadvisor/zfs/watcher.go b/vendor/github.com/google/cadvisor/zfs/watcher.go
new file mode 100644
index 0000000000000..0edbcc91a02bd
--- /dev/null
+++ b/vendor/github.com/google/cadvisor/zfs/watcher.go
@@ -0,0 +1,114 @@
+// Copyright 2016 Google Inc. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package zfs
+
+import (
+	"fmt"
+	"sync"
+	"time"
+
+	zfs "github.com/mistifyio/go-zfs"
+	"k8s.io/klog/v2"
+)
+
+// ZfsWatcher maintains a cache of filesystem -> usage stats for a
+// zfs filesystem.
+type ZfsWatcher struct {
+	filesystem string
+	lock       *sync.RWMutex
+	cache      map[string]uint64
+	period     time.Duration
+	stopChan   chan struct{}
+}
+
+// NewZfsWatcher returns a new ZfsWatcher for the given zfs
+// filesystem, or an error.
+func NewZfsWatcher(filesystem string) (*ZfsWatcher, error) {
+
+	return &ZfsWatcher{
+		filesystem: filesystem,
+		lock:       &sync.RWMutex{},
+		cache:      make(map[string]uint64),
+		period:     15 * time.Second,
+		stopChan:   make(chan struct{}),
+	}, nil
+}
+
+// Start starts the ZfsWatcher.
+func (w *ZfsWatcher) Start() {
+	err := w.Refresh()
+	if err != nil {
+		klog.Errorf("encountered error refreshing zfs watcher: %v", err)
+	}
+
+	for {
+		select {
+		case <-w.stopChan:
+			return
+		case <-time.After(w.period):
+			start := time.Now()
+			err = w.Refresh()
+			if err != nil {
+				klog.Errorf("encountered error refreshing zfs watcher: %v", err)
+			}
+
+			// print latency for refresh
+			duration := time.Since(start)
+			klog.V(5).Infof("zfs(%d) took %s", start.Unix(), duration)
+		}
+	}
+}
+
+// Stop stops the ZfsWatcher.
+func (w *ZfsWatcher) Stop() {
+	close(w.stopChan)
+}
+
+// GetUsage gets the cached usage value of the given filesystem.
+func (w *ZfsWatcher) GetUsage(filesystem string) (uint64, error) {
+	w.lock.RLock()
+	defer w.lock.RUnlock()
+
+	v, ok := w.cache[filesystem]
+	if !ok {
+		return 0, fmt.Errorf("no cached value for usage of filesystem %v", filesystem)
+	}
+
+	return v, nil
+}
+
+// Refresh performs a zfs get and updates the usage cache.
+func (w *ZfsWatcher) Refresh() error {
+	w.lock.Lock()
+	defer w.lock.Unlock()
+	newCache := make(map[string]uint64)
+	parent, err := zfs.GetDataset(w.filesystem)
+	if err != nil {
+		klog.Errorf("encountered error getting zfs filesystem: %s: %v", w.filesystem, err)
+		return err
+	}
+	children, err := parent.Children(0)
+	if err != nil {
+		klog.Errorf("encountered error getting children of zfs filesystem: %s: %v", w.filesystem, err)
+		return err
+	}
+
+	for _, ds := range children {
+		newCache[ds.Name] = ds.Used
+	}
+
+	w.cache = newCache
+	return nil
+}
diff --git a/vendor/github.com/opencontainers/image-spec/LICENSE b/vendor/github.com/opencontainers/image-spec/LICENSE
new file mode 100644
index 0000000000000..9fdc20fdb6a80
--- /dev/null
+++ b/vendor/github.com/opencontainers/image-spec/LICENSE
@@ -0,0 +1,191 @@
+
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   Copyright 2016 The Linux Foundation.
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
diff --git a/vendor/github.com/opencontainers/image-spec/specs-go/v1/annotations.go b/vendor/github.com/opencontainers/image-spec/specs-go/v1/annotations.go
new file mode 100644
index 0000000000000..35d8108958ff0
--- /dev/null
+++ b/vendor/github.com/opencontainers/image-spec/specs-go/v1/annotations.go
@@ -0,0 +1,56 @@
+// Copyright 2016 The Linux Foundation
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package v1
+
+const (
+	// AnnotationCreated is the annotation key for the date and time on which the image was built (date-time string as defined by RFC 3339).
+	AnnotationCreated = "org.opencontainers.image.created"
+
+	// AnnotationAuthors is the annotation key for the contact details of the people or organization responsible for the image (freeform string).
+	AnnotationAuthors = "org.opencontainers.image.authors"
+
+	// AnnotationURL is the annotation key for the URL to find more information on the image.
+	AnnotationURL = "org.opencontainers.image.url"
+
+	// AnnotationDocumentation is the annotation key for the URL to get documentation on the image.
+	AnnotationDocumentation = "org.opencontainers.image.documentation"
+
+	// AnnotationSource is the annotation key for the URL to get source code for building the image.
+	AnnotationSource = "org.opencontainers.image.source"
+
+	// AnnotationVersion is the annotation key for the version of the packaged software.
+	// The version MAY match a label or tag in the source code repository.
+	// The version MAY be Semantic versioning-compatible.
+	AnnotationVersion = "org.opencontainers.image.version"
+
+	// AnnotationRevision is the annotation key for the source control revision identifier for the packaged software.
+	AnnotationRevision = "org.opencontainers.image.revision"
+
+	// AnnotationVendor is the annotation key for the name of the distributing entity, organization or individual.
+	AnnotationVendor = "org.opencontainers.image.vendor"
+
+	// AnnotationLicenses is the annotation key for the license(s) under which contained software is distributed as an SPDX License Expression.
+	AnnotationLicenses = "org.opencontainers.image.licenses"
+
+	// AnnotationRefName is the annotation key for the name of the reference for a target.
+	// SHOULD only be considered valid when on descriptors on `index.json` within image layout.
+	AnnotationRefName = "org.opencontainers.image.ref.name"
+
+	// AnnotationTitle is the annotation key for the human-readable title of the image.
+	AnnotationTitle = "org.opencontainers.image.title"
+
+	// AnnotationDescription is the annotation key for the human-readable description of the software packaged in the image.
+	AnnotationDescription = "org.opencontainers.image.description"
+)
diff --git a/vendor/github.com/opencontainers/image-spec/specs-go/v1/config.go b/vendor/github.com/opencontainers/image-spec/specs-go/v1/config.go
new file mode 100644
index 0000000000000..fe799bd698c71
--- /dev/null
+++ b/vendor/github.com/opencontainers/image-spec/specs-go/v1/config.go
@@ -0,0 +1,103 @@
+// Copyright 2016 The Linux Foundation
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package v1
+
+import (
+	"time"
+
+	digest "github.com/opencontainers/go-digest"
+)
+
+// ImageConfig defines the execution parameters which should be used as a base when running a container using an image.
+type ImageConfig struct {
+	// User defines the username or UID which the process in the container should run as.
+	User string `json:"User,omitempty"`
+
+	// ExposedPorts a set of ports to expose from a container running this image.
+	ExposedPorts map[string]struct{} `json:"ExposedPorts,omitempty"`
+
+	// Env is a list of environment variables to be used in a container.
+	Env []string `json:"Env,omitempty"`
+
+	// Entrypoint defines a list of arguments to use as the command to execute when the container starts.
+	Entrypoint []string `json:"Entrypoint,omitempty"`
+
+	// Cmd defines the default arguments to the entrypoint of the container.
+	Cmd []string `json:"Cmd,omitempty"`
+
+	// Volumes is a set of directories describing where the process is likely to write data specific to a container instance.
+	Volumes map[string]struct{} `json:"Volumes,omitempty"`
+
+	// WorkingDir sets the current working directory of the entrypoint process in the container.
+	WorkingDir string `json:"WorkingDir,omitempty"`
+
+	// Labels contains arbitrary metadata for the container.
+	Labels map[string]string `json:"Labels,omitempty"`
+
+	// StopSignal contains the system call signal that will be sent to the container to exit.
+	StopSignal string `json:"StopSignal,omitempty"`
+}
+
+// RootFS describes the layer content addresses of an image.
+type RootFS struct {
+	// Type is the type of the rootfs.
+	Type string `json:"type"`
+
+	// DiffIDs is an array of layer content hashes (DiffIDs), in order from bottom-most to top-most.
+	DiffIDs []digest.Digest `json:"diff_ids"`
+}
+
+// History describes the history of a layer.
+type History struct {
+	// Created is the combined date and time at which the layer was created, formatted as defined by RFC 3339, section 5.6.
+	Created *time.Time `json:"created,omitempty"`
+
+	// CreatedBy is the command which created the layer.
+	CreatedBy string `json:"created_by,omitempty"`
+
+	// Author is the author of the build point.
+	Author string `json:"author,omitempty"`
+
+	// Comment is a custom message set when creating the layer.
+	Comment string `json:"comment,omitempty"`
+
+	// EmptyLayer is used to mark if the history item created a filesystem diff.
+	EmptyLayer bool `json:"empty_layer,omitempty"`
+}
+
+// Image is the JSON structure which describes some basic information about the image.
+// This provides the `application/vnd.oci.image.config.v1+json` mediatype when marshalled to JSON.
+type Image struct {
+	// Created is the combined date and time at which the image was created, formatted as defined by RFC 3339, section 5.6.
+	Created *time.Time `json:"created,omitempty"`
+
+	// Author defines the name and/or email address of the person or entity which created and is responsible for maintaining the image.
+	Author string `json:"author,omitempty"`
+
+	// Architecture is the CPU architecture which the binaries in this image are built to run on.
+	Architecture string `json:"architecture"`
+
+	// OS is the name of the operating system which the image is built to run on.
+	OS string `json:"os"`
+
+	// Config defines the execution parameters which should be used as a base when running a container using the image.
+	Config ImageConfig `json:"config,omitempty"`
+
+	// RootFS references the layer content addresses used by the image.
+	RootFS RootFS `json:"rootfs"`
+
+	// History describes the history of each layer.
+	History []History `json:"history,omitempty"`
+}
diff --git a/vendor/github.com/opencontainers/image-spec/specs-go/v1/descriptor.go b/vendor/github.com/opencontainers/image-spec/specs-go/v1/descriptor.go
new file mode 100644
index 0000000000000..6e442a0853f4b
--- /dev/null
+++ b/vendor/github.com/opencontainers/image-spec/specs-go/v1/descriptor.go
@@ -0,0 +1,64 @@
+// Copyright 2016 The Linux Foundation
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package v1
+
+import digest "github.com/opencontainers/go-digest"
+
+// Descriptor describes the disposition of targeted content.
+// This structure provides `application/vnd.oci.descriptor.v1+json` mediatype
+// when marshalled to JSON.
+type Descriptor struct {
+	// MediaType is the media type of the object this schema refers to.
+	MediaType string `json:"mediaType,omitempty"`
+
+	// Digest is the digest of the targeted content.
+	Digest digest.Digest `json:"digest"`
+
+	// Size specifies the size in bytes of the blob.
+	Size int64 `json:"size"`
+
+	// URLs specifies a list of URLs from which this object MAY be downloaded.
+	URLs []string `json:"urls,omitempty"`
+
+	// Annotations contains arbitrary metadata relating to the targeted content.
+	Annotations map[string]string `json:"annotations,omitempty"`
+
+	// Platform describes the platform which the image in the manifest runs on.
+	//
+	// This should only be used when referring to a manifest.
+	Platform *Platform `json:"platform,omitempty"`
+}
+
+// Platform describes the platform which the image in the manifest runs on.
+type Platform struct {
+	// Architecture field specifies the CPU architecture, for example
+	// `amd64` or `ppc64`.
+	Architecture string `json:"architecture"`
+
+	// OS specifies the operating system, for example `linux` or `windows`.
+	OS string `json:"os"`
+
+	// OSVersion is an optional field specifying the operating system
+	// version, for example on Windows `10.0.14393.1066`.
+	OSVersion string `json:"os.version,omitempty"`
+
+	// OSFeatures is an optional field specifying an array of strings,
+	// each listing a required OS feature (for example on Windows `win32k`).
+	OSFeatures []string `json:"os.features,omitempty"`
+
+	// Variant is an optional field specifying a variant of the CPU, for
+	// example `v7` to specify ARMv7 when architecture is `arm`.
+	Variant string `json:"variant,omitempty"`
+}
diff --git a/vendor/github.com/opencontainers/image-spec/specs-go/v1/index.go b/vendor/github.com/opencontainers/image-spec/specs-go/v1/index.go
new file mode 100644
index 0000000000000..82da6c6a89896
--- /dev/null
+++ b/vendor/github.com/opencontainers/image-spec/specs-go/v1/index.go
@@ -0,0 +1,32 @@
+// Copyright 2016 The Linux Foundation
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package v1
+
+import "github.com/opencontainers/image-spec/specs-go"
+
+// Index references manifests for various platforms.
+// This structure provides `application/vnd.oci.image.index.v1+json` mediatype when marshalled to JSON.
+type Index struct {
+	specs.Versioned
+
+	// MediaType specifies the type of this document data structure e.g. `application/vnd.oci.image.index.v1+json`
+	MediaType string `json:"mediaType,omitempty"`
+
+	// Manifests references platform specific manifests.
+	Manifests []Descriptor `json:"manifests"`
+
+	// Annotations contains arbitrary metadata for the image index.
+	Annotations map[string]string `json:"annotations,omitempty"`
+}
diff --git a/vendor/github.com/opencontainers/image-spec/specs-go/v1/layout.go b/vendor/github.com/opencontainers/image-spec/specs-go/v1/layout.go
new file mode 100644
index 0000000000000..fc79e9e0d140f
--- /dev/null
+++ b/vendor/github.com/opencontainers/image-spec/specs-go/v1/layout.go
@@ -0,0 +1,28 @@
+// Copyright 2016 The Linux Foundation
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package v1
+
+const (
+	// ImageLayoutFile is the file name of the OCI image layout file
+	ImageLayoutFile = "oci-layout"
+	// ImageLayoutVersion is the version of ImageLayout
+	ImageLayoutVersion = "1.0.0"
+)
+
+// ImageLayout is the structure in the "oci-layout" file, found in the root
+// of an OCI Image-layout directory.
+type ImageLayout struct {
+	Version string `json:"imageLayoutVersion"`
+}
diff --git a/vendor/github.com/opencontainers/image-spec/specs-go/v1/manifest.go b/vendor/github.com/opencontainers/image-spec/specs-go/v1/manifest.go
new file mode 100644
index 0000000000000..d72d15ce4bb8b
--- /dev/null
+++ b/vendor/github.com/opencontainers/image-spec/specs-go/v1/manifest.go
@@ -0,0 +1,35 @@
+// Copyright 2016 The Linux Foundation
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package v1
+
+import "github.com/opencontainers/image-spec/specs-go"
+
+// Manifest provides `application/vnd.oci.image.manifest.v1+json` mediatype structure when marshalled to JSON.
+type Manifest struct {
+	specs.Versioned
+
+	// MediaType specifies the type of this document data structure e.g. `application/vnd.oci.image.manifest.v1+json`
+	MediaType string `json:"mediaType,omitempty"`
+
+	// Config references a configuration object for a container, by digest.
+	// The referenced configuration object is a JSON blob that the runtime uses to set up the container.
+	Config Descriptor `json:"config"`
+
+	// Layers is an indexed list of layers referenced by the manifest.
+	Layers []Descriptor `json:"layers"`
+
+	// Annotations contains arbitrary metadata for the image manifest.
+	Annotations map[string]string `json:"annotations,omitempty"`
+}
diff --git a/vendor/github.com/opencontainers/image-spec/specs-go/v1/mediatype.go b/vendor/github.com/opencontainers/image-spec/specs-go/v1/mediatype.go
new file mode 100644
index 0000000000000..bad7bb97f4734
--- /dev/null
+++ b/vendor/github.com/opencontainers/image-spec/specs-go/v1/mediatype.go
@@ -0,0 +1,48 @@
+// Copyright 2016 The Linux Foundation
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package v1
+
+const (
+	// MediaTypeDescriptor specifies the media type for a content descriptor.
+	MediaTypeDescriptor = "application/vnd.oci.descriptor.v1+json"
+
+	// MediaTypeLayoutHeader specifies the media type for the oci-layout.
+	MediaTypeLayoutHeader = "application/vnd.oci.layout.header.v1+json"
+
+	// MediaTypeImageManifest specifies the media type for an image manifest.
+	MediaTypeImageManifest = "application/vnd.oci.image.manifest.v1+json"
+
+	// MediaTypeImageIndex specifies the media type for an image index.
+	MediaTypeImageIndex = "application/vnd.oci.image.index.v1+json"
+
+	// MediaTypeImageLayer is the media type used for layers referenced by the manifest.
+	MediaTypeImageLayer = "application/vnd.oci.image.layer.v1.tar"
+
+	// MediaTypeImageLayerGzip is the media type used for gzipped layers
+	// referenced by the manifest.
+	MediaTypeImageLayerGzip = "application/vnd.oci.image.layer.v1.tar+gzip"
+
+	// MediaTypeImageLayerNonDistributable is the media type for layers referenced by
+	// the manifest but with distribution restrictions.
+	MediaTypeImageLayerNonDistributable = "application/vnd.oci.image.layer.nondistributable.v1.tar"
+
+	// MediaTypeImageLayerNonDistributableGzip is the media type for
+	// gzipped layers referenced by the manifest but with distribution
+	// restrictions.
+	MediaTypeImageLayerNonDistributableGzip = "application/vnd.oci.image.layer.nondistributable.v1.tar+gzip"
+
+	// MediaTypeImageConfig specifies the media type for the image configuration.
+	MediaTypeImageConfig = "application/vnd.oci.image.config.v1+json"
+)
diff --git a/vendor/github.com/opencontainers/image-spec/specs-go/version.go b/vendor/github.com/opencontainers/image-spec/specs-go/version.go
new file mode 100644
index 0000000000000..0d9543f16000d
--- /dev/null
+++ b/vendor/github.com/opencontainers/image-spec/specs-go/version.go
@@ -0,0 +1,32 @@
+// Copyright 2016 The Linux Foundation
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package specs
+
+import "fmt"
+
+const (
+	// VersionMajor is for API-incompatible changes
+	VersionMajor = 1
+	// VersionMinor is for functionality added in a backwards-compatible manner
+	VersionMinor = 0
+	// VersionPatch is for backwards-compatible bug fixes
+	VersionPatch = 2
+
+	// VersionDev indicates development branch. Releases will be empty string.
+	VersionDev = ""
+)
+
+// Version is the specification version that the package types support.
+var Version = fmt.Sprintf("%d.%d.%d%s", VersionMajor, VersionMinor, VersionPatch, VersionDev)
diff --git a/vendor/github.com/opencontainers/image-spec/specs-go/versioned.go b/vendor/github.com/opencontainers/image-spec/specs-go/versioned.go
new file mode 100644
index 0000000000000..58a1510f33e94
--- /dev/null
+++ b/vendor/github.com/opencontainers/image-spec/specs-go/versioned.go
@@ -0,0 +1,23 @@
+// Copyright 2016 The Linux Foundation
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package specs
+
+// Versioned provides a struct with the manifest schemaVersion and mediaType.
+// Incoming content with unknown schema version can be decoded against this
+// struct to check the version.
+type Versioned struct {
+	// SchemaVersion is the image manifest schema that this image follows
+	SchemaVersion int `json:"schemaVersion"`
+}
diff --git a/vendor/modules.txt b/vendor/modules.txt
index 0283c1d04321a..5510c26b5e9c2 100644
--- a/vendor/modules.txt
+++ b/vendor/modules.txt
@@ -191,6 +191,35 @@ github.com/daviddengcn/go-colortext
 # github.com/distribution/reference v0.5.0
 ## explicit; go 1.20
 github.com/distribution/reference
+# github.com/docker/distribution v2.8.2+incompatible
+## explicit
+github.com/docker/distribution/digestset
+github.com/docker/distribution/reference
+# github.com/docker/docker v20.10.24+incompatible
+## explicit
+github.com/docker/docker/api
+github.com/docker/docker/api/types
+github.com/docker/docker/api/types/blkiodev
+github.com/docker/docker/api/types/container
+github.com/docker/docker/api/types/events
+github.com/docker/docker/api/types/filters
+github.com/docker/docker/api/types/image
+github.com/docker/docker/api/types/mount
+github.com/docker/docker/api/types/network
+github.com/docker/docker/api/types/registry
+github.com/docker/docker/api/types/strslice
+github.com/docker/docker/api/types/swarm
+github.com/docker/docker/api/types/swarm/runtime
+github.com/docker/docker/api/types/time
+github.com/docker/docker/api/types/versions
+github.com/docker/docker/api/types/volume
+github.com/docker/docker/client
+github.com/docker/docker/errdefs
+# github.com/docker/go-connections v0.4.0
+## explicit
+github.com/docker/go-connections/nat
+github.com/docker/go-connections/sockets
+github.com/docker/go-connections/tlsconfig
 # github.com/docker/go-units v0.5.0
 ## explicit
 github.com/docker/go-units
@@ -323,6 +352,9 @@ github.com/google/cadvisor/container/containerd/namespaces
 github.com/google/cadvisor/container/containerd/pkg/dialer
 github.com/google/cadvisor/container/crio
 github.com/google/cadvisor/container/crio/install
+github.com/google/cadvisor/container/docker
+github.com/google/cadvisor/container/docker/install
+github.com/google/cadvisor/container/docker/utils
 github.com/google/cadvisor/container/libcontainer
 github.com/google/cadvisor/container/raw
 github.com/google/cadvisor/container/systemd
@@ -357,6 +389,7 @@ github.com/google/cadvisor/utils/sysfs
 github.com/google/cadvisor/utils/sysinfo
 github.com/google/cadvisor/version
 github.com/google/cadvisor/watcher
+github.com/google/cadvisor/zfs
 # github.com/google/cel-go v0.17.7
 ## explicit; go 1.18
 github.com/google/cel-go/cel
@@ -591,6 +624,10 @@ github.com/onsi/gomega/types
 # github.com/opencontainers/go-digest v1.0.0
 ## explicit; go 1.13
 github.com/opencontainers/go-digest
+# github.com/opencontainers/image-spec v1.0.2
+## explicit
+github.com/opencontainers/image-spec/specs-go
+github.com/opencontainers/image-spec/specs-go/v1
 # github.com/opencontainers/runc v1.1.10
 ## explicit; go 1.17
 github.com/opencontainers/runc/libcontainer