# Adding new C++ Dependencies

## Step 1: Make the package available to the build

First, decide whether you must install the package in the container or
whether you can defer fetching until the build phase. In general, *prefer to
fetch packages during the build phase*. You may be required to install
packages into the container, however, if there is a runtime component
(e.g. shared objects) that cannot reasonably be distributed with the
wheel.

### Install in the container

#### OS Packages via the package manager (e.g. apt, dnf)

Add your package to one of the existing shell scripts used by the docker build
under [docker/common/][1]. Find the location where the package manager is
invoked, and add the name of your package there.

NOTE: nspect tooling will automatically detect the installation of this
package and fetch sources using the source-fetching facilities of the OS
package manager.

[1]: https://github.com/NVIDIA/TensorRT-LLM/tree/main/docker/common
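
For example, a package can be appended to an existing `apt-get install`
invocation in one of those scripts. This is only an illustrative fragment;
the script contents and the package name `libfoo-dev` are hypothetical:

```shell
# Illustrative excerpt from a docker/common install script (runs as root
# during the container build; not a complete script).
apt-get update
# libfoo-dev: needed by <feature> where it is used for <reason>
apt-get install -y --no-install-recommends \
    existing-package \
    libfoo-dev
rm -rf /var/lib/apt/lists/*
```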

#### Python Packages via pip

If it makes sense, add your package to one of the existing shell scripts used by
the docker build under [docker/common/][2]. Grep for "pip3 install" to see
existing invocations. If none of the existing shell scripts make sense, add a
new shell script to install your package and then invoke that script in
Dockerfile.multi.

NOTE: If the new python package you are adding has a compiled component (e.g. a
python extension module), you must coordinate with the [Security Team][20] to
ensure that the source for this component is managed correctly.

[2]: https://github.com/NVIDIA/TensorRT-LLM/tree/main/docker/common
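
For instance, a pinned install could be added to one of those scripts. The
package name and version below are illustrative:

```shell
# Illustrative excerpt from a docker/common install script.
# Pin the version so container builds are reproducible.
pip3 install --no-cache-dir "my-package==1.2.24"
```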

#### Tarball packages via HTTP/FTP

Invoke `wget` in a shell script which is called from the docker build file.
When it makes sense, please prefer to extend an existing script in
[docker/common/][3] rather than creating a new one. If you are downloading a
binary package, you must also download the source package that produced that
binary.

Ensure that the source package is copied to /third-party-source and retained
after all cleanup within the docker image layer.

[3]: https://github.com/NVIDIA/TensorRT-LLM/tree/main/docker/common
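
A hypothetical sketch of such a script (the URLs and package names are
illustrative placeholders, not real artifacts):

```shell
# Fetch the binary tarball and the source tarball that produced it.
wget https://example.com/releases/my-dep-1.2.24-linux-x86_64.tar.gz
wget https://example.com/releases/my-dep-1.2.24-src.tar.gz
tar -xzf my-dep-1.2.24-linux-x86_64.tar.gz -C /usr/local
# Retain the source package so it survives image-layer cleanup.
mkdir -p /third-party-source
cp my-dep-1.2.24-src.tar.gz /third-party-source/
rm my-dep-1.2.24-*.tar.gz
```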

### Fetch during the build

#### Python Packages via pip

Add an entry to [requirements-dev.txt][4] and/or
[requirements-dev-windows.txt][5]. The package will be installed by
build\_wheel.py during virtual environment initialization, prior to
configuring the build with cmake. Include a comment indicating the intended
usage of the package.

[4]: https://github.com/NVIDIA/TensorRT-LLM/blob/main/requirements-dev.txt
[5]: https://github.com/NVIDIA/TensorRT-LLM/blob/main/requirements-dev-windows.txt

**Example:**

`requirements-dev.txt`:

```text
# my-package is needed by <feature> where it is used for <reason>
my-package==1.2.24
```

#### C/C++ Packages via conan

Add a new entry to [conandata.yml][6] indicating the package version for the
dependency you are adding. Include a yaml comment indicating the intended usage
of the package. Then add a new invocation of `self.requires()` within the `def
requirements(self)` method of [conanfile.py][7], referencing the version you
added to conandata.

[6]: https://github.com/NVIDIA/TensorRT-LLM/blob/main/cpp/conandata.yml
[7]: https://github.com/NVIDIA/TensorRT-LLM/blob/main/cpp/conanfile.py

**Example:**

`conandata.yml`:

```yaml
# my_dependency is needed by <feature> where it is used for <reason>
my_dependency: 1.2.24+1
```

`conanfile.py`:

```python
def requirements(self):
    ...
    my_dependency_version = self.conan_data["my_dependency"]
    self.requires(f"my_dependency/{my_dependency_version}")
```

#### Source integration via CMake

If you have a package you need to build from source, then use CMake
[FetchContent][8] or [ExternalProject][9] to fetch the package sources and
integrate them with the build. See the details in the next section.

[8]: https://cmake.org/cmake/help/latest/module/FetchContent.html
[9]: https://cmake.org/cmake/help/latest/module/ExternalProject.html

#### git Submodule - Don't Use

Please *avoid use of git-submodule*. If, for some reason, the CMake integrations
described below don't work and git-submodule is absolutely required, please add
the submodule under the 3rdparty directory.

**Rationale:**

For a source-code dependency distributed via git,
FetchContent/ExternalProject and git submodules both ultimately contain
the same referential information (repository URL, commit sha) and, at
the end of the day, do the same things. However,
FetchContent/ExternalProject have the following advantages:

1. The git operations happen during the build and are interleaved with the rest
   of the build processing, rather than requiring an additional step managed
   outside of CMake.

2. The fetch, patch, and build steps for the subproject are individually named
   in the build, so any failures are more clearly identified.

3. The build state is better contained within the build tree, where it is less
   prone to interference by development actions.

4. For source code that is modified, FetchContent/ExternalProject can manage
   application of the patches, making it clear what modifications are present.

5. The build does not have to make assumptions about the version control
   configuration of the source tree, which may be incorrect due to the fact
   that it is bind-mounted in a container. For example, `git submodule update
   --init` inside a container will corrupt the git configuration outside the
   container if the source tree is a git worktree.

6. External project references and their patches are collected under a
   narrower surface, rather than being spread across different tools. This
   makes it easier to track third-party dependencies as well as to recognize
   them during code review.

**Example** (only if a submodule is truly unavoidable):

```bash
git submodule add https://github.com/some-organization/some-project.git 3rdparty/some-project
```

## Step 2: Integrate the package

There are many ways to integrate a package with the build through cmake.

### find\_package for binary packages

For binary packages (os-provided via apt-get or yum, or conan-provided), prefer
the use of [find\_package][10] to integrate the package into the build. Conan
will generate a find script for packages that don't already come with a CMake
configuration file, and the conan-specific logic is provided through the
conan-generated toolchain already used in our build.

For any packages which do not have provided find modules (either built-in or
available from conan), please implement one in [cpp/cmake/modules][11]. Please
do not add "direct" invocations of `find_library` / `add_library` / `find_file`
/ `find_path` for the package outside of a find module.

Please add invocations of `find_package` directly in the root CMake file.

[10]: https://cmake.org/cmake/help/latest/command/find_package.html
[11]: https://github.com/NVIDIA/TensorRT-LLM/tree/main/cpp/cmake/modules

**Example:**

`cpp/CMakeLists.txt`:

```cmake
find_package(NIXL)
```

`cpp/cmake/modules/FindNIXL.cmake`:

```cmake
...
find_library(
  NIXL_LIBRARY nixl
  HINTS ${NIXL_ROOT}/lib/${NIXL_TARGET_ARCH}
        ${NIXL_ROOT}/lib64)
...
add_library(NIXL::nixl SHARED IMPORTED)
set_target_properties(
  NIXL::nixl
  PROPERTIES INTERFACE_INCLUDE_DIRECTORIES ${NIXL_INCLUDE_DIR}
             IMPORTED_LOCATION ${NIXL_LIBRARY}
             ${NIXL_BUILD_LIBRARY}
             ${SERDES_LIBRARY})
...
```

### FetchContent for source packages with compatible cmake builds

For source packages that have a compatible cmake build (e.g. where
add\_subdirectory will work correctly), please use [FetchContent][12] to
download the sources and integrate them into the build. Please add new
invocations of FetchContent\_Declare in [3rdparty/CMakeLists.txt][13]. Add new
invocations of FetchContent\_MakeAvailable wherever it makes sense in the build
where you are integrating it, but prefer the root listfile for that build
([cpp/CMakeLists.txt][14] for the primary build).

CODEOWNERS for this file will consist of PLC reviewers who verify that
third-party license compliance strategies are being followed.

If the dependency you are adding has modified sources, please do the
following:

1. Create a repository on gitlab to mirror the upstream source files. If the
   upstream is also in git, please use the gitlab "mirror" repository option.
   Otherwise, please use branches/tags to help identify the upstream source
   versions.

2. Track nvidia changes in a branch. Use a linear (trunk-based) development
   strategy. Use meaningful, concise commit message subjects and
   comprehensive commit messages for the changes applied.

3. Use `git format-patch <upstream-commit>...HEAD` to create a list of
   patches, one file per commit.

4. Add your patches under 3rdparty/patches/\<package-name\>.

5. Use CMake's [PATCH\_COMMAND][15] option to apply the patches during the
   build process.

[12]: https://cmake.org/cmake/help/latest/module/FetchContent.html
[13]: https://github.com/NVIDIA/TensorRT-LLM/tree/main/3rdparty/CMakeLists.txt
[14]: https://github.com/NVIDIA/TensorRT-LLM/blob/main/cpp/CMakeLists.txt
[15]: https://cmake.org/cmake/help/latest/module/ExternalProject.html#patch-step-options
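
The patch-generation part of that workflow can be sketched end to end. Every
detail below (the repository, its contents, the commit message) is a
hypothetical stand-in for a real mirrored dependency:

```shell
# Sketch of producing per-commit patch files with git format-patch.
# The repository and commits here are illustrative placeholders.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "upstream baseline"
upstream=$(git rev-parse HEAD)   # the consumed upstream commit
echo "local change" > fix.txt
git add fix.txt
git -c user.name=dev -c user.email=dev@example.com \
    commit -q -m "Apply local fix for TensorRT-LLM"
# One .patch file per commit since the upstream baseline:
git format-patch "$upstream"...HEAD -o patches >/dev/null
ls patches   # -> 0001-Apply-local-fix-for-TensorRT-LLM.patch
```

The resulting files would then be copied under 3rdparty/patches/ and applied
via PATCH\_COMMAND during the build.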

**Example:**

`3rdparty/CMakeLists.txt`:

```cmake
FetchContent_Declare(
  pybind11
  GIT_REPOSITORY https://github.com/pybind/pybind11.git
  GIT_TAG f99ffd7e03001810a3e722bf48ad1a9e08415d7d
)
```

`cpp/CMakeLists.txt`:

```cmake
FetchContent_MakeAvailable(pybind11)
```

### ExternalProject

If the package you are adding doesn't support FetchContent (e.g. if it's not
built by CMake or if its CMake configuration doesn't nest well), then please use
[ExternalProject][16]. In this case, that project's build system will be invoked
as a build step of the primary build system. Note that, unless both the primary
and child build systems are GNU Make, they will not share a job server and will
independently schedule parallelism (e.g. `-j` flags).

[16]: https://cmake.org/cmake/help/latest/module/ExternalProject.html

**Example:**

```cmake
ExternalProject_Add(
  nvshmem_project
  URL https://developer.download.nvidia.com/compute/nvshmem/redist/libnvshmem/linux-x86_64/libnvshmem-linux-x86_64-3.2.5_cuda12-archive.tar.xz
  URL_HASH ${NVSHMEM_URL_HASH}
  PATCH_COMMAND patch -p1 --forward --batch -i
                ${DEEP_EP_SOURCE_DIR}/third-party/nvshmem.patch
  ...
  CMAKE_CACHE_ARGS
    -DCMAKE_C_COMPILER:STRING=${CMAKE_C_COMPILER}
    -DCMAKE_C_COMPILER_LAUNCHER:STRING=${CMAKE_C_COMPILER_LAUNCHER}
  ...
  BINARY_DIR ${CMAKE_CURRENT_BINARY_DIR}/nvshmem-build
  BUILD_BYPRODUCTS
    ${CMAKE_CURRENT_BINARY_DIR}/nvshmem-build/src/lib/libnvshmem.a
)
add_library(nvshmem_project::nvshmem STATIC IMPORTED)
add_dependencies(nvshmem_project::nvshmem nvshmem_project)
...
set_target_properties(
  nvshmem_project::nvshmem
  PROPERTIES IMPORTED_LOCATION
             ${CMAKE_CURRENT_BINARY_DIR}/nvshmem-build/src/lib/libnvshmem.a
             INTERFACE_INCLUDE_DIRECTORIES
             ${CMAKE_CURRENT_BINARY_DIR}/nvshmem-build/src/include)
```

## Step 3: Update third-party attributions and license tracking

1. Clone the dependency source code to an NVIDIA-controlled repository. The
   consumed commit must be stored as-received (ensure the consumed commit sha
   is present in the clone). For sources available via git (or git-adaptable)
   SCM, mirror the repository in the [oss-components][18] gitlab project.

2. Collect the license text of the consumed commit.

3. If the license does not include a copyright notice, collect any copyright
   notices that were originally published with the dependency (these may be at
   the individual file level, in metadata files, or in packaging control
   files).

4. Add the license and copyright notices to the ATTRIBUTIONS-CPP-x86\_64.md and
   ATTRIBUTIONS-CPP-aarch64.md files.
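
Step 1 can be done with git's mirror clone/push; the repository URLs below
are hypothetical placeholders, not real project paths:

```shell
# Illustrative: store the consumed commit as-received by mirroring the
# upstream repository into the oss-components project.
git clone --mirror https://github.com/upstream-org/my-dep.git
cd my-dep.git
git push --mirror https://gitlab.com/nvidia/tensorrt-llm/oss-components/my-dep.git
```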

CODEOWNERS for ATTRIBUTIONS-CPP-\*.md are members of the PLC team, and
modifying these files will signal to reviewers that they should verify that
your change follows the process in this document.

[18]: https://gitlab.com/nvidia/tensorrt-llm/oss-components

## Step 4: File a JIRA ticket if you need help from the Security team

This step is optional; it applies only if you need assistance from the
Security team.

File a Jira ticket using the issue template [TRTLLM-8383][19] to request
inclusion of this new dependency and initiate license and/or security review.
The Security Team will triage and assign the ticket.

If you don't have access to the JIRA project, please email the [Security
Team][20].

[19]: https://jirasw.nvidia.com/browse/TRTLLM-8383
| 341 | + |