Add option for custom root volume storage size #174
base: main
Conversation
Nice, good for when a vanilla AMI is used. Too bad I'm not a maintainer and can't merge all these goodies from incoming PRs 😄, I'm picking the useful ones to merge into my fork lol.
Perfect. Thank you.
The test pipeline is broken, who can fix it? lint-code is 💀
@Dmitry1987 it looks like @machulav has disabled automatic actions, so the lint-code action can only be triggered by him. It would be good if there were some other maintainers on this.
By the way, I saw @Dmitry1987 offered to be a maintainer in the discussions here, and one other guy did too, but they have had no response so far: #172
@machulav Any way this PR can get merged ASAP? The changes seem very simple, and I need this desperately!
You can always use a fork instead.
Really, the best way is to fork and maintain your own version of the plugin so you can control everything easily. It's just the AWS JS SDK, so all you need is to keep the src/aws.js parameters of the instance run command the way you need, without even passing them through the YAMLs, in case the infra managed by the plugin is more or less the same. That way you only pass through the YAML config the options that differ from one pipeline to another, and everything else can be filled in in the JS code the way you need.
FWIW, I forked the update, but a workaround I found to be easier is to just introduce a step in your workflow that expands the root volume size. The first step in my job that runs on the EC2 instance is:

```yaml
- name: Expand root volume
  run: |
    TOKEN=$(curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
    INSTANCE_ID=$(curl -H "X-aws-ec2-metadata-token: $TOKEN" -v http://169.254.169.254/latest/meta-data/instance-id)
    VOLUME_ID=$(aws ec2 describe-volumes --filters Name=attachment.instance-id,Values=$INSTANCE_ID | jq -r '.Volumes[0].Attachments[0].VolumeId')
    echo $INSTANCE_ID
    echo $VOLUME_ID
    aws ec2 modify-volume --volume-id $VOLUME_ID --size 256
    sleep 15  # give the volume modification time to take effect
    growpart /dev/nvme0n1 1
    lsblk
    xfs_growfs -d /
    df -hT
```

This successfully resizes the EBS volume to 256 GB, expands the partition, and extends the file system to use the new space.
@myz540 I believe it depends on your VM image. On the stock Ubuntu image, for example, this directly changes the root volume. An additional option could be added for specifying the device path, though, which would support cases where the device name is different. I didn't bother because I use the stock Ubuntu image.
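For AMIs where the device path or filesystem differs from the hardcoded values above, a more device-agnostic version of the grow step is possible. This is only a rough sketch, not part of this PR: it assumes growpart (from cloud-guest-utils / cloud-utils-growpart), findmnt and lsblk are available on the image, and it derives the root disk, partition number and filesystem type at runtime instead of hardcoding /dev/nvme0n1 and xfs. It does not handle LVM-based roots.

```yaml
- name: Grow root partition and filesystem (device-agnostic sketch)
  run: |
    # Root partition mounted at /, e.g. /dev/nvme0n1p1 or /dev/xvda1
    ROOT_SRC=$(findmnt -n -o SOURCE /)
    # Parent disk of that partition, e.g. nvme0n1 or xvda
    ROOT_DISK=/dev/$(lsblk -no PKNAME "$ROOT_SRC")
    # Partition number is the trailing digits of the partition name
    PART_NUM=$(echo "$ROOT_SRC" | grep -o '[0-9]*$')
    FS_TYPE=$(findmnt -n -o FSTYPE /)
    growpart "$ROOT_DISK" "$PART_NUM"
    if [ "$FS_TYPE" = "xfs" ]; then
      xfs_growfs -d /        # XFS grows via the mount point
    else
      resize2fs "$ROOT_SRC"  # ext4 grows via the partition device
    fi
```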
Force-pushed from e362c4f to 11d0042
Just updated to add another option.
The default 8 GB root volume is too small for certain workflows. This PR adds a `storage-size` option that overrides the default size. Tested and works.
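For illustration only (the snippet is not taken from the PR itself), usage would presumably look something like the step below. The fork reference, AMI, subnet and security group IDs are placeholders; the other inputs are the action's existing start-mode inputs, with the new `storage-size` input added.

```yaml
- name: Start EC2 runner
  id: start-ec2-runner
  # Point at a ref that actually contains this change (e.g. a fork branch);
  # upstream does not include the storage-size input until this PR is merged.
  uses: your-org/ec2-github-runner@custom-root-volume-size
  with:
    mode: start
    github-token: ${{ secrets.GH_PERSONAL_ACCESS_TOKEN }}
    ec2-image-id: ami-0123456789abcdef0
    ec2-instance-type: t3.large
    subnet-id: subnet-0123456789abcdef0
    security-group-id: sg-0123456789abcdef0
    storage-size: 100   # root EBS volume size in GB, overriding the 8 GB default
```

How the value maps onto the block device mappings of the underlying run-instances call (and which volume type it applies to) depends on the PR's implementation in src/aws.js.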