
Feature/devops ci #3799

Merged: 37 commits into develop2 from feature/devops_ci, Dec 10, 2024

Commits
3a82b34  initial CI tutorial (memsharded, Jul 9, 2024)
dfcc0b6  wip (memsharded, Jul 15, 2024)
caaa5f0  moved (memsharded, Jul 17, 2024)
559ec59  wip (memsharded, Jul 18, 2024)
bd2a6e4  wip (memsharded, Jul 18, 2024)
50ca2f1  Merge branch 'develop2' into feature/devops_ci (memsharded, Aug 22, 2024)
435f2d7  wip (memsharded, Aug 22, 2024)
845fb1b  wip (memsharded, Aug 22, 2024)
acaebc5  moved default versioning (memsharded, Aug 23, 2024)
a172513  Merge branch 'develop2' into feature/devops_ci (memsharded, Sep 30, 2024)
c22e05a  products pipeline (memsharded, Sep 30, 2024)
e7b5888  Merge branch 'develop2' into feature/devops_ci (memsharded, Sep 30, 2024)
ea4453e  wip (memsharded, Oct 2, 2024)
62edc7e  final draft (memsharded, Oct 2, 2024)
fa8b872  Update devops/versioning/default.rst (memsharded, Oct 28, 2024)
422f9d3  Update devops/package_promotions.rst (memsharded, Oct 28, 2024)
62185d5  Update devops/package_promotions.rst (memsharded, Oct 28, 2024)
e8e77a2  Update devops/devops.rst (memsharded, Oct 28, 2024)
18962cb  Update ci_tutorial/tutorial.rst (memsharded, Oct 28, 2024)
98fc4a2  Update ci_tutorial/tutorial.rst (memsharded, Oct 28, 2024)
19cdfd6  Update ci_tutorial/packages_pipeline.rst (memsharded, Oct 28, 2024)
f598c81  Update ci_tutorial/packages_pipeline/multi_configuration_lockfile.rst (memsharded, Oct 28, 2024)
380ac46  Update ci_tutorial/tutorial.rst (memsharded, Oct 28, 2024)
6d6d6b0  Update ci_tutorial/products_pipeline/distributed_build.rst (memsharded, Oct 28, 2024)
11d5886  Apply suggestions from code review (memsharded, Oct 28, 2024)
c2e87f6  Merge branch 'develop2' into feature/devops_ci (memsharded, Nov 20, 2024)
91b52df  Merge branch 'develop2' into feature/devops_ci (memsharded, Nov 25, 2024)
06e6a77  review (memsharded, Nov 25, 2024)
333753b  multiline cmdlines -> singleline (memsharded, Nov 25, 2024)
d7322d9  Update devops/package_promotions.rst (memsharded, Nov 25, 2024)
941f54a  review (memsharded, Nov 25, 2024)
608b9b1  review (memsharded, Nov 25, 2024)
45a4327  Update ci_tutorial/products_pipeline/multi_product.rst (memsharded, Nov 25, 2024)
2d48338  lockfile storing (memsharded, Nov 25, 2024)
b885dbe  final remarks (memsharded, Nov 26, 2024)
5a2e768  Update ci_tutorial/products_pipeline/full_pipeline.rst (memsharded, Nov 26, 2024)
9fa5877  Update ci_tutorial/products_pipeline/full_pipeline.rst (memsharded, Nov 26, 2024)
41 changes: 41 additions & 0 deletions ci_tutorial/packages_pipeline.rst
@@ -0,0 +1,41 @@
Packages pipeline
==================


The **packages pipeline** builds, creates and uploads the package binaries for the different configurations and platforms whenever a
developer submits changes to the source code of one of the organization's repositories. For example, a developer might make some changes
to the ``ai`` package, improving some of the library functionality, and bump the version to ``ai/1.1.0``. If the organization needs to
support both Windows and Linux platforms, then the packages pipeline will build the new ``ai/1.1.0`` for both Windows and Linux before
considering the changes valid. If any of the configurations fail to build on a specific platform, it is common to consider the
changes invalid and stop processing them until the code is fixed.
Review comment (on lines +5 to +10):
What I'm missing in this workflow is a solution for 'nightly' and MR/PR builds. We're usually implementing new features on feature branches which then run MR pipelines to verify compilation and unit test integrity. Packages generated from these feature branches should be consumable by others for integration testing, etc. Also, after integrating an MR into the mainline branch we don't want to create a release yet. We want to have a nightly build and package that can also be consumed by other projects. In Conan 1 we used channels to distinguish between the different package types, but as far as I understood, these should no longer be used for this purpose. However, I don't see a way to do the same thing by leveraging multiple repositories, because in this scenario there is no way to mix dependencies from release, feature branch and nightly channels without creating temporary repositories and fiddling around with the remote order.

@jasal82 (Oct 10, 2024):

It's also unclear how to version nightly and MR builds. At that point we haven't decided on a release version yet, so we could use either:

- prerelease suffixes like pkg/1.2-pre.1, but they can only be enabled/disabled in the consumer conanfile (which means changing the conanfile all the time) or globally for all packages, which we don't want because it's not selective enough
- 'nightly' as a version string, like pkg/nightly; this is difficult for the consumer because it's unclear what the current nightly revision is based on, the string is not semver compatible, older revisions cannot easily be told apart, and no automatic resolution against '*' or version ranges is possible
- user or channels, like pkg/1.2@nightly (discouraged)

Another review comment:

And another open question is how to disambiguate between packages with the same name coming from different contexts. For example, we internally mirror Conan Center Index and make these recipes available for consumption in our recipes. However, there are some cases where we have existing company-specific recipes that happen to have the same name as recipes in the Conan Center Index. This would cause clashes when trying to use both the index mirror and the internal deployment remotes in the same configuration. I think it would be good to be able to specify a source remote for each of the requirements to make Conan consider only that remote. Other dependency managers use this solution (e.g. Cargo).

Reply from @jcar87 (Contributor):

> This would cause clashes when trying to use both the index mirror and the internal deployment remotes in the same configuration. I think it would be good to be able to specify a source remote for each of the requirements to make Conan consider only that remote. Other dependency managers use this solution (e.g. Cargo).

There are a few ways of handling that with Conan currently: when solving a graph to generate a lockfile, you can specify which remotes are considered, so the lockfile would then only contain the revisions available in the remote you want, and subsequent calls would only resolve those.
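
For instance, a minimal sketch of that first approach (the remote name "internal" is an assumption):

$ conan lock create . -r=internal --lockfile-out=conan.lock   # resolve only against that remote
$ conan install . --lockfile=conan.lock                       # later resolutions stick to the locked revisions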

Alternatively, remote configurations support filtering which packages may be resolved from a remote; see the --allowed-packages option in conan remote add and conan remote update.
You can restrict a remote to only serve certain packages, or to exclude some packages. For example, the following would prevent the cmake package from being considered from the Conan Center remote:

conan remote update --allowed-packages='~cmake/*' conancenter

Reply from @jasal82:

@jcar87 Thanks for the quick answer. Both options could work, but I think they move the decision to the wrong layer from an architectural point of view. I'd rather have the recipe decide about the package sources, because only the recipe knows which packages it needs to be buildable. This should not be injected via environment or Conan config. It is good that it CAN be injected that way, because sometimes you want to influence the build from the pipeline, for example, but it should not be the recommended way of handling package source selection for recipe dependencies.

Reply from @memsharded (Member, Author):

> However, I don't see a way to do the same thing by leveraging multiple repositories, because in this scenario there is no way to mix dependencies from release, feature branch and nightly channels without creating temporary repositories and fiddling around with the remote order.

No need to create temporary repositories or to fiddle with the remote order. For example, you can have just one releases repo and put there everything needed for a release build: copy everything, including the dependencies, so there is no need to rely on the ordering of different repos, and use a lockfile to represent that build, so multiple releases can be built in parallel.
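
A rough sketch of that single releases-repo approach (the "releases" repo name and the game/1.0 reference are assumptions based on this tutorial's example project):

$ conan lock create --requires=game/1.0 -r=releases --lockfile-out=release.lock
$ conan install --requires=game/1.0 --lockfile=release.lock -r=releases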

> It's also unclear how to version nightly and MR builds. At that point we haven't decided about a release version yet so we could either use

The normal MR builds would mostly be the "packages pipeline", and the nightly build would be the "products pipeline" running on the "products" repository, integrating all pull requests merged that day. This is not about the version of a release product that might gather many different versions of many different transitive dependencies; it is about how new revisions/versions of dependencies are built against the products in a normal internal dev process. The release process typically starts from the develop repository, and it is different from the CI process presented in this PR. This CI tutorial will most likely later include new sections illustrating other processes like this one, as well as different pull request, branching, versioning and merging strategies.

> prerelease suffixes like pkg/1.2-pre.1, but they can only be enabled/disabled in the consumer conanfile - which means changing the conanfile all the time - or globally for all packages, which we don't want, because it's not selective enough

Not really: there is now a conf that enables/disables pre-releases, core.version_ranges:resolve_prereleases, so it is not necessary to modify the conanfile at all.
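
For reference, a minimal sketch of enabling it (core confs live in global.conf; the default Conan home location is assumed here):

$ echo "core.version_ranges:resolve_prereleases=True" >> ~/.conan2/global.conf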

> For example, we internally mirror Conan Center Index and make these recipe available for consumption in our recipes. However, there are some cases where we have existing company-specific recipes that happen to have the same name as recipes in the Conan Center Index. This would cause clashes when trying to use both the index mirror and the internal deployment remotes in the same configuration. I think it would be good to be able to specify a source remote for each of the requirements to make Conan consider only that remote. Other dependency managers use this solution (e.g. Cargo).

This is one of the valid reasons to use @user or @user/channel; there is no problem with that. As long as zlib/version@user is always @user and zlib/version without it is always the one from ConanCenter, things are good, because they are constant and immutable. What is discouraged is using user/channel as a dynamic maturity qualifier that evolves testing->alpha->beta->stable along the pipeline or CI process.
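
A hypothetical illustration (the version and the "mycompany" user are made up):

$ conan install --requires="zlib/1.3@mycompany"   # always the internal variant
$ conan install --requires="zlib/1.3"             # always the ConanCenter one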

I don't think there is anything intrinsically bad, wrong or incorrect in the current PR. We have received positive feedback about it from new users, not to mention that it was built from the experience we gathered in the past from many other users. As described in the text, this CI tutorial doesn't aim to be a silver bullet for all organizations and projects, but rather to present different concepts, practices and tools.
I think most of the feedback is about different implementation techniques and flows, something that we will try to add to this CI tutorial in future PRs. We even aim to create a full, real working example in GitHub Actions or some other CI system.
So for open discussions I'd recommend creating new tickets, one focused on each issue. We can continue the discussion there, and that will be useful for designing and adding those new sections to the CI tutorial. Thanks very much for the feedback.



For the ``package pipeline`` we will start with a simple source code change in the ``ai`` recipe, simulating some improvements
in the ``ai`` package, providing some better algorithms for our game.

✍️ **Let's make the following changes in the ai package**:

- Change the implementation of the ``ai/src/ai.cpp`` function, replacing the message ``Some Artificial`` with ``SUPER BETTER Artificial``.
- Change the default ``intelligence=0`` value in ``ai/include/ai.h`` to a new ``intelligence=50`` default.
- Finally, bump the version. As we made some changes to the package public headers, it would be advisable to bump the ``minor`` version,
  so let's edit the ``ai/conanfile.py`` file and define ``version = "1.1.0"`` there (instead of the previous ``1.0``). Note that if we
  had made breaking changes to the ``ai`` public API, the recommendation would be to bump the major version instead and create a new ``2.0`` version.
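
For reference, one possible shell version of these edits could look like this (a sketch only: it assumes GNU ``sed`` and the exact strings present in the tutorial sources):

.. code-block:: bash

# Sketch only: assumes GNU sed and the exact strings in the tutorial sources
$ sed -i 's/Some Artificial/SUPER BETTER Artificial/' ai/src/ai.cpp
$ sed -i 's/intelligence=0/intelligence=50/' ai/include/ai.h
$ sed -i 's/version = "1.0"/version = "1.1.0"/' ai/conanfile.py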


The **packages pipeline** will take care of building the different package binaries for the new ``ai/1.1.0`` and uploading them to the ``packages``
binary repository, to avoid disrupting or causing potential issues for other developers and CI jobs.
If the pipeline succeeds, it will promote (copy) them to the ``products`` binary repository; otherwise it stops.

There are different aspects that need to be taken into account when building these binary packages for ``ai/1.1.0``. The following tutorial sections do the same
job, but under different assumptions. They are explained in increasing order of complexity.

Note that all of the commands can be found in the repository ``run_example.py`` file. This file is mostly intended for maintainers and testing,
but it might be useful as a reference in case of issues.
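
For example, running it from a local clone of the examples repository might look like this (a hypothetical invocation):

.. code-block:: bash

# Hypothetical invocation, assuming a local clone of the examples repository
$ python run_example.py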


.. toctree::
:maxdepth: 1

packages_pipeline/single_configuration
packages_pipeline/multi_configuration
packages_pipeline/multi_configuration_lockfile
195 changes: 195 additions & 0 deletions ci_tutorial/packages_pipeline/multi_configuration.rst
@@ -0,0 +1,195 @@
Package pipeline: multi configuration
=====================================

In the previous section we built just one configuration. This section covers the case in which we need to build more
than one configuration. We will use the ``Release`` and ``Debug`` configurations here for convenience, as they are easier to
follow, but in real cases these configurations would be more like Windows, Linux and macOS, building for different architectures,
cross-building, etc.

Let's begin by cleaning our cache and initializing only the ``develop`` repo:


.. code-block:: bash

$ conan remove "*" -c # Make sure no packages from last run
$ conan remote remove "*" # Make sure no other remotes defined
# Add develop repo, you might need to adjust this URL
$ conan remote add develop http://localhost:8081/artifactory/api/conan/develop


We will create the packages for the two configurations sequentially on our computer, but note that these builds would typically run
on different machines, so it is common for CI systems to launch the builds of the different configurations in parallel.

.. code-block:: bash
:caption: Release build

$ cd ai
$ conan create . --build="missing:ai/*" -s build_type=Release --format=json > graph.json
$ conan list --graph=graph.json --graph-binaries=build --format=json > built.json
# Add packages repo, you might need to adjust this URL
$ conan remote add packages http://localhost:8081/artifactory/api/conan/packages
$ conan upload -l=built.json -r=packages -c --format=json > uploaded_release.json

Review comment (Contributor):

Side note: if we do --build=missing, it'd fail to pre-upload some of our dependencies if we fail during our root package build. It could be frustrating if the dependencies are something like Qt or larger.

Reply from @memsharded (Member, Author):

Not sure what this means in this context. We are not doing --build=missing, but just for ai. Dependencies aren't built and won't be uploaded.

Reply from @Todiq (Dec 5, 2024):

@memsharded
Would it be possible to also show how to decouple conan create into conan build + conan export-pkg with the JSON formatting? Should we create a graph for both of these commands and then merge them?

It can be especially interesting in CI, since creating an artifact with the whole Conan cache (if built with conan create) can be cumbersome. This allows cases where one can create an artifact with only the build folder (if built with conan build).

Reply from @memsharded (Member, Author):

Hi @Todiq

This is a different topic than the one covered in this tutorial. This kind of thing will probably follow in later versions, talking about implementation strategies like this.

However, in CI a conan build + conan export-pkg would be identical to conan create: there would be no difference in computing time, storage on disk, or anything like that. It only changes the location of the build folder. Actually, it is very much recommended to use create, because that guarantees that future re-builds with conan install --requires=thispkg/version --build=thispkg/version work correctly, because those are also executed in the cache.

So I am not sure about the value of your proposal. conan build + conan export-pkg has some value for developers trying things locally, but it shouldn't have any advantage in CI. I might be missing something.
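
For reference, a minimal sketch of the local flow discussed above (per the reply, conan create remains the recommended path in CI):

$ conan build .         # build in the local folder
$ conan export-pkg .    # package the locally built artifacts into the cache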

We have made a few changes and added extra steps:

- The first step is similar to the one in the previous section: a ``conan create``, just making our configuration explicit with
  ``-s build_type=Release`` for clarity, and capturing the output of the ``conan create`` in a ``graph.json`` file.
- The second step creates from the ``graph.json`` a ``built.json`` **package list** file with the packages that need to be uploaded.
  In this case, only the packages that have been built from source (``--graph-binaries=build``) will be uploaded. This is
  done for efficiency and faster uploads.
- The third step defines the ``packages`` repository.
- Finally, we upload the ``built.json`` package list to the ``packages`` repository, creating the ``uploaded_release.json``
  package list with the new location of the packages (the server repository).
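
Before the upload, the same package list filter can also be printed in human-readable form to double-check what will be uploaded:

.. code-block:: bash

# Same filter as above, without --format=json: prints a readable summary
$ conan list --graph=graph.json --graph-binaries=build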

Likewise, the Debug build will do the same steps:


.. code-block:: bash
:caption: Debug build

$ conan create . --build="missing:ai/*" -s build_type=Debug --format=json > graph.json
$ conan list --graph=graph.json --graph-binaries=build --format=json > built.json
# Remote definition can be omitted in the tutorial, it was defined above (-f == force)
$ conan remote add packages http://localhost:8081/artifactory/api/conan/packages -f
$ conan upload -l=built.json -r=packages -c --format=json > uploaded_debug.json


When both the Release and Debug configurations finish successfully, we will have these packages in the repositories:

.. graphviz::
:align: center

digraph repositories {
node [fillcolor="lightskyblue", style=filled, shape=box]
rankdir="LR";
subgraph cluster_0 {
label="Packages server";
style=filled;
color=lightgrey;
subgraph cluster_1 {
label = "packages\n repository"
shape = "box";
style=filled;
color=lightblue;
"packages" [style=invis];
"ai/1.1.0\n (Release)";
"ai/1.1.0\n (Debug)";
}
subgraph cluster_2 {
label = "products\n repository"
shape = "box";
style=filled;
color=lightblue;
"products" [style=invis];
}
subgraph cluster_3 {
rankdir="BT";
shape = "box";
label = "develop repository";
color=lightblue;
rankdir="BT";

node [fillcolor="lightskyblue", style=filled, shape=box]
"game/1.0" -> "engine/1.0" -> "ai/1.0" -> "mathlib/1.0";
"engine/1.0" -> "graphics/1.0" -> "mathlib/1.0";
"mapviewer/1.0" -> "graphics/1.0";
"game/1.0" [fillcolor="lightgreen"];
"mapviewer/1.0" [fillcolor="lightgreen"];
}
{
edge[style=invis];
"packages" -> "products" -> "game/1.0" ;
rankdir="BT";
}
}
}


When all the different binaries for ``ai/1.1.0`` have been built correctly, the ``packages pipeline`` can consider its job successful and decide
to promote those binaries. But since further package builds and checks are necessary, instead of promoting them to the ``develop`` repository,
the ``packages pipeline`` promotes them to the ``products`` binary repository. As all other developers and CI jobs use the ``develop`` repository,
no one is broken at this stage either:

.. code-block:: bash
:caption: Promoting from packages->products

# aggregate the package list
$ conan pkglist merge -l uploaded_release.json -l uploaded_debug.json --format=json > uploaded.json

# Promotion using Conan download/upload commands
# (slow, can be improved with art:promote custom command)
$ conan download --list=uploaded.json -r=packages --format=json > promote.json
$ conan upload --list=promote.json -r=products -c


The first step uses the ``conan pkglist merge`` command to merge the package lists from the ``Release`` and ``Debug`` configurations
into a single ``uploaded.json`` package list.
This is the list that will be used to run the promotion.

In this example we are using a slow ``conan download`` + ``conan upload`` promotion. This can be made much more efficient with
the ``conan art:promote`` extension command.
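
A hedged sketch of what that promotion could look like (the exact flags shown here are an assumption; check the conan-extensions documentation for the actual interface):

.. code-block:: bash

# Assumed interface of the art:promote extension command; verify against
# the conan-extensions documentation before use
$ conan art:promote uploaded.json --from=packages --to=products --url=http://localhost:8081/artifactory --user=admin --password=<password>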

After running the promotion we will have the following packages in the server:

.. graphviz::
:align: center

digraph repositories {
node [fillcolor="lightskyblue", style=filled, shape=box]
rankdir="LR";
subgraph cluster_0 {
label="Packages server";
style=filled;
color=lightgrey;
subgraph cluster_1 {
label = "packages\n repository"
shape = "box";
style=filled;
color=lightblue;
"packages" [style=invis];
"ai/1.1.0\n (Release)";
"ai/1.1.0\n (Debug)";
}
subgraph cluster_2 {
label = "products\n repository"
shape = "box";
style=filled;
color=lightblue;
"products" [style=invis];
"ai/promoted release" [label="ai/1.1.0\n (Release)"];
"ai/promoted debug" [label="ai/1.1.0\n (Debug)"];
}
subgraph cluster_3 {
rankdir="BT";
shape = "box";
label = "develop repository";
color=lightblue;
rankdir="BT";

node [fillcolor="lightskyblue", style=filled, shape=box]
"game/1.0" -> "engine/1.0" -> "ai/1.0" -> "mathlib/1.0";
"engine/1.0" -> "graphics/1.0" -> "mathlib/1.0";
"mapviewer/1.0" -> "graphics/1.0";
"game/1.0" [fillcolor="lightgreen"];
"mapviewer/1.0" [fillcolor="lightgreen"];
}
{
edge[style=invis];
"packages" -> "products" -> "game/1.0" ;
rankdir="BT";
}
}
}


To summarize:

- We built 2 different configurations, ``Release`` and ``Debug`` (could have been Windows/Linux or others), and uploaded them
to the ``packages`` repository.
- When all package binaries for all configurations were successfully built, we promoted them from the ``packages`` to the
``products`` repository, to make them available for the ``products pipeline``.
- **Package lists** were captured in the package creation process and merged into a single one to run the promotion.


There is still an aspect we haven't considered yet: the possibility that the dependencies of ``ai/1.1.0`` change
during the build. Move on to the next section to see how lockfiles can be used to achieve more consistent multi-configuration builds.
147 changes: 147 additions & 0 deletions ci_tutorial/packages_pipeline/multi_configuration_lockfile.rst
@@ -0,0 +1,147 @@
Package pipeline: multi configuration using lockfiles
=====================================================

In the previous example, we built both ``Debug`` and ``Release`` package binaries for ``ai/1.1.0``. In real world scenarios the binaries to build would cover different platforms (Windows, Linux, embedded), different architectures, and very often it will not be possible to build them all on the same machine, requiring different computers.

The previous example made an important assumption: the dependencies of ``ai/1.1.0`` do not change at all during the building process. In many scenarios this assumption will not hold, for example if there are other concurrent CI jobs and one successful job publishes a new ``mathlib/1.1`` version in the ``develop`` repo.

Then it is possible that one build of ``ai/1.1.0``, for example the one running on the Linux servers, starts earlier and uses the previous ``mathlib/1.0`` version as a dependency, while the Windows servers start a bit later and their build uses the more recent ``mathlib/1.1`` version. This is a very undesirable situation: binaries for the same ``ai/1.1.0`` version using different dependency versions. It can lead to graph resolution problems later or, even worse, ship a release with different behavior on different platforms.

The way to avoid this discrepancy in dependencies is to force the usage of the same dependency versions and revisions, something that can be done with :ref:`lockfiles<tutorial_versioning_lockfiles>`.

Creating and applying lockfiles is relatively straightforward. The process of creating and promoting the configurations will be identical to the previous section, only now applying the lockfile.

Creating the lockfile
---------------------

Let's make sure as usual that we start from a clean state:

.. code-block:: bash

$ conan remove "*" -c # Make sure no packages from last run
$ conan remote remove "*" # Make sure no other remotes defined
# Add develop repo, you might need to adjust this URL
$ conan remote add develop http://localhost:8081/artifactory/api/conan/develop


Then we can create the ``conan.lock`` lockfile:

.. code-block:: bash

# Capture a lockfile for the Release configuration
$ conan lock create . -s build_type=Release --lockfile-out=conan.lock
# extend the lockfile so it also covers the Debug configuration
# in case there are Debug-specific dependencies
$ conan lock create . -s build_type=Debug --lockfile=conan.lock --lockfile-out=conan.lock

Note that different configurations, using different profiles or settings, could result in different dependency graphs. A single lockfile can be used to lock the different configurations, but it is important to iterate over the different configurations/profiles and capture their information in the lockfile.

.. note::

``conan.lock`` is the default lockfile argument, so if a ``conan.lock`` file exists it might be automatically used by ``conan install/create`` and other graph commands. This can simplify many of the commands, but this tutorial uses the fully explicit commands for clarity and didactic reasons.
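
For example, once ``conan.lock`` exists next to the recipe, the explicit argument could be dropped:

.. code-block:: bash

# Might be picked up automatically, equivalent to passing --lockfile=conan.lock
$ conan create . --build="missing:ai/*" -s build_type=Release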

The ``conan.lock`` file can be inspected; it will look something like this:

.. code-block:: json

{
"version": "0.5",
"requires": [
"mathlib/1.0#f2b05681ed843bf50d8b7b7bdb5163ea%1724319985.398"
],
"build_requires": [],
"python_requires": [],
"config_requires": []
}

As we can see, it is locking the ``mathlib/1.0`` dependency version and revision.


With the lockfile, creating the different configurations is exactly the same, but providing the ``--lockfile=conan.lock`` argument to the ``conan create`` step. This guarantees that ``mathlib/1.0#f2b05681ed843bf50d8b7b7bdb5163ea`` will always be the exact dependency used, irrespective of whether new ``mathlib/1.1`` versions or new revisions become available. The following builds could be launched in parallel and executed at different times, and they will still always use the same ``mathlib/1.0`` dependency:


.. code-block:: bash
:caption: Release build

$ cd ai
$ conan create . --build="missing:ai/*" --lockfile=conan.lock -s build_type=Release --format=json > graph.json
$ conan list --graph=graph.json --graph-binaries=build --format=json > built.json
# Add packages repo, you might need to adjust this URL
$ conan remote add packages http://localhost:8081/artifactory/api/conan/packages
$ conan upload -l=built.json -r=packages -c --format=json > uploaded_release.json

.. code-block:: bash
:caption: Debug build

$ conan create . --build="missing:ai/*" --lockfile=conan.lock -s build_type=Debug --format=json > graph.json
$ conan list --graph=graph.json --graph-binaries=build --format=json > built.json
# Remote definition can be omitted in the tutorial, it was defined above (-f == force)
$ conan remote add packages http://localhost:8081/artifactory/api/conan/packages -f
$ conan upload -l=built.json -r=packages -c --format=json > uploaded_debug.json

Note the only modification to the previous example is the addition of ``--lockfile=conan.lock``. The promotion will also be identical to the previous one:

.. code-block:: bash
:caption: Promoting from packages->products

# aggregate the package list
$ conan pkglist merge -l uploaded_release.json -l uploaded_debug.json --format=json > uploaded.json

# Promotion using Conan download/upload commands
# (slow, can be improved with art:promote custom command)
$ conan download --list=uploaded.json -r=packages --format=json > promote.json
$ conan upload --list=promote.json -r=products -c

And the final result will be the same as in the previous section, but this time with the guarantee that both ``Debug`` and ``Release`` binaries were built using exactly the same ``mathlib`` version:

.. graphviz::
:align: center

digraph repositories {
node [fillcolor="lightskyblue", style=filled, shape=box]
rankdir="LR";
subgraph cluster_0 {
label="Packages server";
style=filled;
color=lightgrey;
subgraph cluster_1 {
label = "packages\n repository"
shape = "box";
style=filled;
color=lightblue;
"packages" [style=invis];
"ai/1.1.0\n (Release)";
"ai/1.1.0\n (Debug)";
}
subgraph cluster_2 {
label = "products\n repository"
shape = "box";
style=filled;
color=lightblue;
"products" [style=invis];
"ai/promoted release" [label="ai/1.1.0\n (Release)"];
"ai/promoted debug" [label="ai/1.1.0\n (Debug)"];
}
subgraph cluster_3 {
rankdir="BT";
shape = "box";
label = "develop repository";
color=lightblue;
rankdir="BT";

node [fillcolor="lightskyblue", style=filled, shape=box]
"game/1.0" -> "engine/1.0" -> "ai/1.0" -> "mathlib/1.0";
"engine/1.0" -> "graphics/1.0" -> "mathlib/1.0";
"mapviewer/1.0" -> "graphics/1.0";
"game/1.0" [fillcolor="lightgreen"];
"mapviewer/1.0" [fillcolor="lightgreen"];
}
{
edge[style=invis];
"packages" -> "products" -> "game/1.0" ;
rankdir="BT";
}
}
}

Now that we have the new ``ai/1.1.0`` binaries in the ``products`` repo, we can consider the ``packages pipeline`` finished. Let's move to the next section to build and check our products and see whether this new ``ai/1.1.0`` version integrates correctly.