Contrib proposal GitOneFlow #124
base: master
Conversation
Now that's a proposal. I'm not usually around on the weekends, and this will take me time to get to, so don't take my slowness as negativity. Thanks for all the details, and I look forward to checking it out in depth.
Don't know if you want to take a look as well, @kuzi-moto or @mitchray?
If I'm understanding this correctly then the shell script is fantastic and would save a lot of manual build time. There are currently docker tags

Queries about the New Features:

GitHub Workflow
I take it this would automatically update the

Git/OneFlow
I'm not familiar with this at all, but reading that article they mention when not to use it:

Again, if I'm understanding correctly, I have a feeling this matches Ampache... it's not just a continuously rolling project but one that has major versions with breaking changes. A good chunk of fixes are 'back-ported', so perhaps it is still applicable; I'll let @lachlan-00 make that call.

Docker tag naming
I need to think this one over some more. I like that with
@mitchray Thank you very much for your comments, which I generally agree with! I will try to respond inline, and soon I will also go and fix the code based on your comments. Thank you!
I hope so, but as of today it does not push to Docker Hub, because I still have to work out how to test the image before pushing it to Docker Hub!
Yes, not in full automation, but at least someone has to run the script update_maintained_versions.sh once in order to pick up all the major versions equal to or greater than the one indicated in the configuration... theoretically the idea is that only supported versions are maintained here.
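For illustration, the filtering step could look roughly like this (a sketch; the variable and function names are hypothetical, not the ones in update_maintained_versions.sh):

```shell
#!/bin/sh
# Lowest version still maintained (illustrative; in the real script this
# would come from the configuration).
MIN_VERSION="6.4.0"

maintained_versions() {
    # Read candidate versions on stdin, one per line, and print those
    # greater than or equal to MIN_VERSION, using version-aware sort.
    while read -r v; do
        lowest=$(printf '%s\n%s\n' "$MIN_VERSION" "$v" | sort -V | head -n1)
        [ "$lowest" = "$MIN_VERSION" ] && echo "$v"
    done
}

# Example: only 6.4.0 and 6.5.0 survive the filter
printf '6.3.1\n6.4.0\n6.5.0\n' | maintained_versions
```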
Not as of today, but it is possible to extend it to have the latest (I guess that's develop) and the RC (which I guess is the Preview)... for this, pre-coordination with the repository that develops Ampache is needed.
You're absolutely right! Let me say that this discussion makes a lot of sense for more dynamic code development, a little less for the management of IaC, where the mechanism of variants and versions already mitigates the problem out of the box. Clearly it always has to be evaluated case by case, but look at how the PHP version is managed (https://github.com/Odyno/my-ampache-docker-env/blob/contrib-proposal-gitoneflow/update_maintained_versions.sh#L40) for versions 6.0 and 6.1, how the base docker image is chosen based on the variant (https://github.com/Odyno/my-ampache-docker-env/blob/contrib-proposal-gitoneflow/update_maintained_versions.sh#L57), or how the configuration is chosen (e.g. supervisord.conf)... as I said, to be evaluated!
Here too, all true! It's a choice I respect. What I wanted to say is that nosql expresses that there is no mysql, but it hides something else, such as the fact that apache is being used (today)... I prefer the logic of saying what there is rather than what there isn't... but I repeat, it's purely personal taste.
Right now I use my ampache-administrator repo to build docker images. I build an image after each release, and then develop (which is combined with the preview tag now) whenever I feel like it, every few weeks.

Because of multi-architecture I haven't seen an easy way of pushing those containers to Docker Hub, as their auto build only seems to do one cpu architecture at a time. I'm using build_docker7.sh and build_docker6.sh

No testing before builds, outside of changes to the dockerfile being tested off the main tags and build failures. I test Ampache releases using docker-release-test6.sh and running my python scripts to test function. The docker compose files I use there are extracting the latest zips, configuring the server and running the python check scripts.

For me, re: naming, the base assumption is that ampache is using apache+mysql. That's been the Ampache 'default' since it started, so a tag name would be along the lines of this. latest = apache+mysql

The only reason there are multiple branches in the repo is I made them separately to build them. So as long as update_maintained_versions.sh allows me to build all those tags I'm happy. Does there have to be a folder for each version, or can we use the VERSION arg and just have a folder for each structural tag?
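As an aside, the multi-architecture limitation of Docker Hub's auto build is usually worked around by building and pushing the manifest locally with docker buildx; a rough sketch (the image name and platform list are illustrative, and it requires a configured buildx builder):

```shell
# One-time: create and select a builder capable of cross-building
docker buildx create --name multiarch --use

# Build for both architectures and push a combined manifest in one step
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --tag ampache/ampache:6.5.0-apache \
  --push .
```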
I half made updates on this. Love that docker has failed the builds (renamed the files now). Sorry for part-finishing, but I'll keep going tomorrow.
Hello @lachlan-00, I'm a bit away and can't touch the repo due to a network problem... I'll be present at least on Thursday; if you need something, just write it and I'll take care of it as soon as I can. Bye
All good, I realised I hadn't renamed the template files properly. It's been pretty easy to pick up so far.
Really like the idea, for what it's worth! I'm not too familiar with this way of doing things, but happy to help where I can, even if it's just providing input.
I'm back, @lachlan-00. I see a couple of contributions on the branch and I'm very happy!
The big aspect that's missing is the build of develop / git branches without a release tag. If I could maintain versions and build develop images, I think this would cover me.

Then the download URL for develop would be different, so the URL variables would be different for that template.

Then I think we need a simple way for someone to download the repo and run a compose command for themselves as well, so a root compose setup would mean people can still use the repo.

Other than that, I think that's it.
I think it is easy to do, but apart from develop, again I would discourage branch builds... If I understand correctly, before my PR this repository was used to provision the stable version outside the development cycle, and to clearly show how the environment is assembled around version X of Ampache stable, not Ampache itself (which has an ad-hoc repository for that).

Possible development directions for this repo are how to install Ampache on k8s with helm, or providing a version that can be installed directly onto a NAS, or provisioning with Ansible... in short, not testing Ampache commit-xyz with docker locally.

For Ampache develop (or a daily build) it is just a preview of what is to come, and we are already at the limit because it can be interpreted as a release candidate; creating an artefact for each branch is something that confuses people outside. However, just a confirmation: is the develop link https://github.com/ampache/ampache/archive/refs/heads/develop.zip (branch develop)? If yes, it can just be a special "version".
Then it is not clear to me what you mean... For the default variant no compose file is required; by the way, I provided the current compose file in the data directory... For no-sql the compose file needs to be really specific, so it is unnecessary to provide something. There is already an example folder with an outline of configurations; maybe it makes sense to point it out in the readme.

What I will do is put a build.sh inside each release folder so that by downloading the repo a person can also build the image locally (it is already done in another branch https://github.com/Odyno/my-ampache-docker-env/tree/docker_hub_push where I am working on the no-sql dockerfile).
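The branch-versus-tag distinction for develop discussed above could be handled by a small helper; a sketch under the assumption that stable releases are tagged with the plain version number (the function name is hypothetical, and the actual Ampache tag names should be checked):

```shell
#!/bin/sh
# Map a "version" to its source archive URL: "develop" points at the
# branch archive, anything else at a release tag.
source_url() {
    case "$1" in
        develop) echo "https://github.com/ampache/ampache/archive/refs/heads/develop.zip" ;;
        *)       echo "https://github.com/ampache/ampache/archive/refs/tags/$1.zip" ;;
    esac
}
```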
Following the discussion in issue #123, I started reorganizing the repository to make the NoSQL version explicit. The result is functionally identical; there are no changes except that the configurations of all variants and the Docker images are now explicit on a single branch.
This approach is heavily inspired by how it is implemented in the Nextcloud docker project.
The idea
The idea is to have all the Docker config files in the ./data folders and the root directory as the base. We will use a shell script (specifically update_maintained_versions.sh) to generate one instance of the Docker files into specific folders, structured as version/variant.

What is really different?
Therefore, the most interesting part to review is the file structure and the shell script update_maintained_versions.sh. All the contents in the 6.4.0 and 6.5.0 folders are generated and are, theoretically, the versions maintained in this repository (for simplicity, I have only gone back two versions, but it is possible to add any minor version as needed through configuration).

Modifications and New Features
GitHub Workflow: The addition of a GitHub workflow enables a continuous image build for each commit or pull request (to do: push only on master commits).
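A minimal workflow along these lines might look like the following sketch (the file name, job layout and build context are assumptions for illustration, not the actual workflow in this PR; pushing could later be gated on the master branch):

```yaml
# .github/workflows/build.yml (illustrative)
name: Continuous image build
on:
  push:
    branches: [master]
  pull_request:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image (no push on pull requests)
        run: docker build -t ampache:ci "6.5.0/apache"
```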
Explicit Docker variant management: This refactoring enhances the control and management of Docker variants, shifting the paradigm from variant management via branches (which comes with challenges related to code synchronization, easy comparison, and workflow improvement) to variant management using the more prevalent GitOneFlow paradigm. This change allows for more efficient and streamlined Docker variant management.
Docker tag naming: This refactoring follows the de facto standard naming convention for Docker tags. Specifically, the tags ampache:<version>-aio (all in one) and ampache:<version>-apache are used. These tags offer a clear and consistent versioning system, setting a pattern for future versioning, such as for php-fpm. This approach ensures clarity and consistency in version management.

Note:
I realize that this is a paradigm shift and it may be met with reservations, especially if there is an established process that has been in place for years. Therefore, it is not my intention to push for this solution if you are not convinced. If that is the case, I would ask you to simply close the PR with a Reject. On my side, I will use this solution to proceed with the creation of a Dockerized environment as I mentioned in the ticket. No worries at all, and looking forward to catching up soon!