
Storage Account Limitation #41

Open
benofben opened this issue Sep 13, 2015 · 8 comments

Comments

@benofben
Contributor

We understand that storage accounts have a limit of 40 attached drives. This forces us to split nodes across different subnets for clusters larger than 40 nodes, which introduces a lot of complexity.
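Concretely, the per-account cap turns cluster size into a storage-account count. A back-of-the-envelope check (assuming one attached drive per node; the constant and function names here are illustrative, not from the repo):

```python
import math

DISKS_PER_ACCOUNT = 40  # the attached-drive limit described above

def accounts_needed(node_count, disks_per_node=1):
    # e.g. a 100-node cluster with one disk per node needs ceil(100 / 40) = 3 accounts
    return math.ceil(node_count * disks_per_node / DISKS_PER_ACCOUNT)

print(accounts_needed(40), accounts_needed(41), accounts_needed(100))  # → 1 2 3
```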

We would really like to see this limitation abstracted away/otherwise removed in Azure as it will simplify the templates substantially.

@benofben benofben changed the title Azure Storage Account Limitation Storage Account Limitation Sep 13, 2015
@benofben benofben self-assigned this Sep 14, 2015
@benofben
Contributor Author

We're working around this in main by generating the VM resources in Python. We no longer have the multiple-subnet issue, but eliminating the 40-node restriction (or automatically allocating OS disks) would simplify things considerably.
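A minimal sketch of that kind of generator (the names, prefix, and stripped-down ARM-style resource shape are illustrative, not the repo's actual code), assigning each VM's OS disk VHD to a storage account in blocks of 40:

```python
DISKS_PER_ACCOUNT = 40  # per-account attached-drive budget described above

def storage_account_for(node_index, prefix="cluster"):
    # Nodes 0-39 land in account "cluster0", nodes 40-79 in "cluster1", and so on.
    return f"{prefix}{node_index // DISKS_PER_ACCOUNT}"

def generate_vm_resources(node_count, prefix="cluster"):
    # Emit one heavily simplified VM resource stanza per node,
    # pointing each OS disk at its block's storage account.
    resources = []
    for i in range(node_count):
        account = storage_account_for(i, prefix)
        resources.append({
            "type": "Microsoft.Compute/virtualMachines",
            "name": f"{prefix}-node{i}",
            "properties": {
                "storageProfile": {
                    "osDisk": {
                        "vhd": {"uri": f"https://{account}.blob.core.windows.net/vhds/node{i}.vhd"}
                    }
                }
            },
        })
    return resources

print(storage_account_for(0), storage_account_for(39), storage_account_for(40))
# → cluster0 cluster0 cluster1
```

With all VMs in one template, the generator keeps nodes on a single subnet while still spreading disks across accounts.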

@benofben
Contributor Author

It sounds like the need to manually specify a storage account for a VM is going away in the future, so this issue will become moot/abstracted away.

@benofben
Contributor Author

benofben commented Feb 6, 2016

We're expecting the 40-drive storage account limitation to be abstracted away by a future release of VM scale sets. This is not yet available.

@benofben benofben removed their assignment May 6, 2016
@benofben
Contributor Author

Apparently managed disks will fix this.

@benofben
Contributor Author

Managed disks are in preview right now. We need to assign someone to test this.

@benofben
Contributor Author

Now wondering if the new H-series machines will let us sidestep this issue entirely by using ephemeral disks.

@benofben
Contributor Author

It seems like the H8 and H16 might end up being our all-around choice. The memory is still a bit high, but the disk sizes are good. https://azure.microsoft.com/en-us/blog/availability-of-h-series-vms-in-microsoft-azure/

@benofben
Contributor Author

We're doing a refactor to managed disks on 2/23/17 for OS disks. That should (finally) resolve this.
