',
+ description:
+ 'Our website uses essential cookies to ensure its proper operation and tracking cookies to understand how you interact with it. The latter will be set only after consent. Please see our privacy policy.',
+ acceptAllBtn: "Accept all",
+ showPreferencesBtn: "Settings",
+ revisionMessage:
+ "Dear user, terms and conditions have changed since the last time you visited!",
+ },
+ preferencesModal: {
+ title: "Cookie settings",
+ acceptAllBtn: "Accept all",
+ acceptNecessaryBtn: "Reject all",
+ savePreferencesBtn: "Save current selection",
+ closeIconLabel: "Close",
+ sections: [
+ {
+ title: "Cookie usage",
+ description:
+ 'Our website uses essential cookies to ensure its proper operation and tracking cookies to understand how you interact with it. The latter will be set only after consent. Please see our privacy policy.',
+ },
+ {
+ title: "Strictly necessary cookies",
+ description:
+ "These cookies are strictly necessary for the website to function. They are usually set to handle only your actions in response to a service request, such as setting your privacy preferences, navigating between pages, and setting your preferred version. You can set your browser to block these cookies or to alert you to their presence, but some parts of the website will not function without them. These cookies do not store any personally identifiable information.",
+ linkedCategory: "necessary",
+ },
+ {
+ title: "Analytics & Performance cookies",
+ description:
+ "These cookies are used for analytics and performance metering purposes. They are used to collect information about how visitors use our website, which helps us improve it over time. They do not collect any information that identifies a visitor. The information collected is aggregated and anonymous.",
+ linkedCategory: "analytics",
+ },
+ {
+ title: "More information",
+ description:
+ 'For more information about cookie usage, privacy, and how we use the data we collect, please refer to our privacy policy and terms of use.',
+ },
+ ],
+ },
+ },
+ },
+ },
+});
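
The translation strings above are one locale of a larger consent configuration. A minimal sketch of how they slot together (structure assumed from the vanilla-cookieconsent v3 documentation; the `necessary` and `analytics` category names mirror the `linkedCategory` values used in the sections, and the shortened strings are placeholders):

```javascript
// Sketch of the surrounding config object (assumed v3-style structure).
const config = {
  categories: {
    necessary: { enabled: true, readOnly: true }, // strictly necessary, cannot be opted out
    analytics: {},
  },
  language: {
    default: "en",
    translations: {
      en: {
        consentModal: {
          title: "We use cookies",
          acceptAllBtn: "Accept all",
          showPreferencesBtn: "Settings",
        },
        preferencesModal: {
          title: "Cookie settings",
          acceptAllBtn: "Accept all",
          acceptNecessaryBtn: "Reject all",
          sections: [
            { title: "Strictly necessary cookies", linkedCategory: "necessary" },
            { title: "Analytics & Performance cookies", linkedCategory: "analytics" },
          ],
        },
      },
    },
  },
};

// Every linkedCategory referenced by a section must name a declared category.
const linked = config.language.translations.en.preferencesModal.sections
  .map((s) => s.linkedCategory)
  .filter(Boolean);
const valid = linked.every((c) => c in config.categories);
console.log(valid); // → true
```

The consistency check at the end catches the most common mistake with this shape of config: a section pointing at a category that was never declared.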
diff --git a/v1.46/assets/js/copy-code.js b/v1.46/assets/js/copy-code.js
new file mode 100644
index 000000000..d64200213
--- /dev/null
+++ b/v1.46/assets/js/copy-code.js
@@ -0,0 +1,22 @@
+$(() => {
+  // Selector is an assumption; the original line was truncated in the source.
+  let copyCodeContainer = $("div.highlight");
+});
lakeFS Cloud is a single-tenant, fully managed lakeFS solution, providing high availability, auto-scaling, support, and production-grade features.

Why did we build lakeFS Cloud?

We built lakeFS Cloud for three main reasons:

- We wanted to provide organizations with the benefits of lakeFS without the need to manage it, saving them the investment in infrastructure and the work related to installation, upgrades, uptime, and scale.
- We wanted to provide lakeFS Cloud users with security that meets their needs, including SSO, SCIM, and RBAC.
- We wanted to provide additional functionality that reduces friction and enables fast implementation of version-controlled data/ML/AI pipelines throughout the data lifecycle.

What is the value of using lakeFS Cloud as a managed service?

The main advantages of using lakeFS Cloud, the managed lakeFS service, are:

- No installation required: no cloud costs or DevOps effort for installing and maintaining a lakeFS deployment.
- lakeFS Cloud version-controls your data without accessing it, using pre-signed URLs.
- When using lakeFS Cloud, you get rich Role-Based Access Control functionality that allows fine-grained control by associating permissions with users and groups, granting them specific actions on specific resources. This ensures data security and compliance within an organization.
- To easily manage users and groups, lakeFS Cloud provides SSO integration (including support for SAML, OIDC, AD FS, Okta, and Azure AD), supporting existing credentials from a trusted provider and eliminating separate logins.
- lakeFS Cloud supports SCIM for automatically provisioning and deprovisioning users and group memberships, allowing organizations to maintain a single source of truth for their user database.
- STS Auth offers temporary, secure logins using an identity provider, simplifying user access and enhancing security.
- Authentication with AWS IAM roles allows authenticating with IAM roles instead of lakeFS credentials, removing the need to maintain static credentials for lakeFS Enterprise users running on AWS.
- Auditing provides a detailed action log of events happening within lakeFS, including who performed which action, on which resource, and when.
- Private-Link support ensures network security by only allowing access to your lakeFS Cloud installation from your cloud accounts.
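
The RBAC model described above pairs allowed actions with resources. A hypothetical policy sketch, with an evaluation helper (the action names, ARN format, group name, and repository name are illustrative assumptions, not copied from the lakeFS reference):

```javascript
// Hypothetical RBAC policy: lets a "data-scientists" group read objects and
// create branches in one repository, and nothing else. All identifiers here
// are assumed for illustration.
const policy = {
  name: "DataScienceReadAndBranch",
  statement: [
    {
      effect: "allow",
      action: ["fs:ReadObject", "fs:ListObjects", "fs:CreateBranch"],
      resource: "arn:lakefs:fs:::repository/example-repo/*",
    },
  ],
};

// A request is allowed only if some "allow" statement covers the action.
function isAllowed(policy, action) {
  return policy.statement.some(
    (s) => s.effect === "allow" && s.action.includes(action)
  );
}

console.log(isAllowed(policy, "fs:ReadObject")); // → true
console.log(isAllowed(policy, "fs:DeleteRepository")); // → false
```

Attaching such a policy to a group rather than to individual users is what keeps permissions manageable as an organization grows.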

What additional functionality does lakeFS Cloud provide?

Using lakeFS Cloud is not just a secure, managed way of using lakeFS OSS; it is much more than that. With lakeFS Cloud you also get:

- lakeFS Mount - Allows users to virtually mount a remote lakeFS repository onto a local directory. Once mounted, users can access the data as if it resides on their local filesystem, using any tool, library, or framework that reads from a local filesystem.
- lakeFS Metadata Search - Provides a granular search API to filter and query versioned objects based on attached metadata. This is especially useful in machine learning environments for filtering by labels and file attributes.
- lakeFS for Databricks - Provides a turnkey solution for Databricks customers for analytics, machine learning, and business intelligence use cases, including full support for Delta Lake tables, Unity Catalog, MLflow, and the rest of the Databricks product suite.
- lakeFS for Snowflake - Provides full integration with the Snowflake ecosystem, including full support for Iceberg managed tables.
- lakeFS Cross Cloud - Allows central management of repositories that span multiple cloud providers, including Azure, AWS, GCP, and on-prem environments.
- Transactional Mirroring - Allows replicating lakeFS repositories into consistent read-only copies in remote locations.

- 1,500 read operations/second across all branches on all repositories within a region
- 1,500 write operations/second across all branches on all repositories within a region

These limits can be increased by contacting support.

Each lakeFS branch can sustain up to 1,000 write operations/second and 3,000 read operations/second. This scales horizontally: for example, with 10 concurrent branches, a repository could sustain 10,000 writes/second and 30,000 reads/second, assuming load is distributed evenly between them.

Reading committed data (e.g. from a commit ID or tag) can be scaled horizontally to any desired capacity, and defaults to ~5,000 reads/second.
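
The per-branch limits compose linearly. A quick sketch of the arithmetic behind the example above (constants taken from the figures quoted in this section):

```javascript
// Per-branch sustained limits quoted above.
const WRITES_PER_BRANCH = 1000;
const READS_PER_BRANCH = 3000;

// Aggregate repository capacity, assuming load is spread evenly
// across the given number of concurrent branches.
function repoCapacity(branches) {
  return {
    writes: branches * WRITES_PER_BRANCH,
    reads: branches * READS_PER_BRANCH,
  };
}

console.log(repoCapacity(10)); // → { writes: 10000, reads: 30000 }
```

Note that this models the per-branch ceiling only; the regional limits above still cap total throughput unless raised by support.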