OE4T Meeting Notes 2021 11 11
Dan Walkes edited this page Nov 18, 2021
- Deepstream 6 release
  - PRs are done; see https://github.com/OE4T/meta-tegra/issues/837
  - Currently only available in Honister and Dunfell.
  - Python bindings for Deepstream are now published and should be possible to add; looking for someone to sponsor this work.
  - It's not currently possible to use Deepstream 6 on master: Deepstream 6.0 ships libraries linked against OpenSSL 1.1.1, while OE-Core master has updated to OpenSSL 3, which is not backward compatible.
    - Have deleted the Deepstream recipes on master and opened an issue to track this at https://github.com/OE4T/meta-tegra/issues/845
    - May be able to address this with a new recipe for OpenSSL 1.1.1 that can be built and installed alongside OpenSSL 3. Unless NVIDIA updates the libraries, this (or running in Docker) may be the only way to address it.
  - Master will become Kirkstone, so there will be potential compatibility issues like the OpenSSL version mismatch between now and the Kirkstone release. We need to be prepared to provide support for several years for existing hardware, since L4T R32.6 will be the last release for these platforms.
  - Triton Inference Server dependencies don't impact builds on Dunfell or Honister, but do impact master due to dependency tracking (Triton Inference Server SDK and Rivermax SDK dependencies). Recipes for Triton Inference Server and Rivermax are needed in order to add Deepstream to master.
  - Deepstream 6 and Triton support may ultimately be migrated to meta-tegra-community, since they aren't part of the base BSP.
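As a rough illustration of the parallel-install idea, a compatibility recipe might look like the sketch below. This is hypothetical and not an actual meta-tegra recipe: the recipe name, version, configure target, and checksum are placeholders, and the essential point is installing only the versioned 1.1 runtime libraries so nothing collides with the OpenSSL 3 recipe's files.

```bitbake
# Hypothetical sketch: openssl11_1.1.1l.bb -- parallel-installable
# OpenSSL 1.1 runtime libraries for binaries (e.g. Deepstream 6) that
# still link against libssl.so.1.1 / libcrypto.so.1.1.
# All names, versions, and checksums here are illustrative placeholders.
SUMMARY = "OpenSSL 1.1 compatibility runtime libraries"
LICENSE = "OpenSSL"
LIC_FILES_CHKSUM = "file://LICENSE;md5=<placeholder>"

SRC_URI = "https://www.openssl.org/source/openssl-${PV}.tar.gz"
S = "${WORKDIR}/openssl-${PV}"

do_configure () {
    ./Configure linux-generic64 shared --prefix=${prefix} --libdir=${libdir}
}

do_install () {
    # Install only the versioned shared libraries; headers, the openssl
    # binary, and the unversioned .so symlinks stay with the OpenSSL 3
    # recipe, avoiding file conflicts between the two.
    install -d ${D}${libdir}
    install -m 0755 ${B}/libcrypto.so.1.1 ${D}${libdir}
    install -m 0755 ${B}/libssl.so.1.1 ${D}${libdir}
}

FILES:${PN} = "${libdir}/libcrypto.so.1.1 ${libdir}/libssl.so.1.1"
INSANE_SKIP:${PN} += "dev-so"
```

Whether this satisfies Deepstream's runtime linkage in practice would need to be verified against the actual library SONAMEs it expects.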
- CI/CD setups and Docker builds
  - Members have shared some thoughts and ideas in the Slack thread (not specifically discussed during the meeting); see the link shared on the homepage to access the Slack thread.
  - See https://github.com/crops/poky-container and the associated video discussion
  - See https://github.com/siemens/kas/blob/master/Dockerfile and relevant Docker Compose files here, here and here
  - See the Docker support scripts at https://github.com/mandraga/tegra-demo-distro/tree/sumo-l4t-r28.4.0
  - One member is using GitLab pipelines, with builds triggered on check-in. Not doing CD, but CI pipelines are in place, with some testing on edge compute platforms.
  - Brief discussion of the state of QEMU emulation: it is not currently possible to simulate enough of the platform in QEMU to make it useful for automated CI testing.
  - Matt uses spot instances in AWS and https://github.com/madisongh/autobuilder (based on Buildbot) to run all builds. Shared state and downloads are mirrored in S3 via a special S3 fetcher that replaces the standard one in OE-Core; this allows connections to AWS to be shared and speeds things up considerably. See more detail in this page. Containers are not typically used for builds; they are only used to reproduce older builds.
  - Brief discussion about LAVA, which is meant for hardware testing: a central server controls devices, with distributed nodes connected to the hardware. It may be useful for coordinating groups of hardware devices for testing.
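As a rough illustration of the S3 mirroring approach described above, a `local.conf` fragment might look like the following. The bucket names are placeholders, and the `s3://` URL scheme is assumed to be provided by the replacement fetcher from the autobuilder setup, not by stock OE-Core.

```conf
# Hypothetical local.conf fragment for S3-backed mirrors.
# Bucket names are placeholders; the s3:// scheme requires the
# replacement S3 fetcher mentioned in the notes (not stock OE-Core).
SSTATE_MIRRORS = "file://.* s3://example-sstate-bucket/sstate-cache/PATH"

INHERIT += "own-mirrors"
SOURCE_MIRROR_URL = "s3://example-downloads-bucket/downloads"

# CI would then push build results back to the mirrors, e.g. with:
#   aws s3 sync sstate-cache s3://example-sstate-bucket/sstate-cache
```

The benefit of this arrangement is that ephemeral spot-instance builders start with a warm shared-state cache instead of rebuilding everything from scratch.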
- Licensing on libnvidia-container
  - See this GitHub issue
  - Elfutils specifically says its applications are GPLv3 but its libraries are not; suspect GPLv3 is not the correct license for this use case.
  - This is a moot point for meta-tegra, since we don't build with elfutils. Matt has updated the license metadata with https://github.com/OE4T/meta-tegra/pull/847 to remove the GPLv3 references.
- Orin AGX
  - See https://forums.developer.nvidia.com/t/announcing-jetson-agx-orin-next-level-ai-performance-for-next-gen-robotics/194509
  - Initial release will be JetPack 5.
  - Pin compatibility with AGX Xavier: there are some differences that may require a new hardware layout, so carrier cards will need to be redone.
  - Haven't checked into dual-boot support in the previous UEFI example release; didn't see anything about it in the README. This is a big question mark.
  - Hopefully we will have source code for the bootloader going forward; will attempt to clarify this with NVIDIA.
  - Security hardware support is another risk item/unknown.
  - Cost is currently unknown. Xavier AGX was also very expensive early on, then dropped in price; there was a 50%-off promotion for development boards in the past.
- Moving from pre-L4T 32.5 to 32.5: IDs and partition layouts
  - Were on 32.4.3 for quite a while; just moved to 32.5.2.
  - When updating to new code (with swupdate) on the Nano platform, the update completes but the device does not boot afterward, complaining that it can't find the Linux partition. Looking at the deltas in the partitioning layout between 32.4.3 and 32.5.2, there are some changes related to partitions having IDs vs. not having them, plus some size changes. The GPT size had changed to something smaller; went back to the prior size. Wondering if there's something about the IDs that can impact how partitions are found.
  - Didn't have this problem with the NX.
  - SD card Nano builds are different between 32.4 and 32.5: in 32.5, the flashing tools treat the ID parameter in the XML file as the partition number, instead of just starting at 1 and assigning numbers incrementally.
  - The eMMC partition numbers get assigned from the ID. Matt believes this is working in his test distro; if you aren't using exactly the same setup, there may be subtle differences.
  - Not all partitions have IDs; they were only added where needed. Where an ID is absent, numbers are likely assigned sequentially.
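To make the numbering change concrete, here is a small hypothetical sketch (plain Python, not the actual flashing-tool code) of the 32.5-style behavior described above: a partition with an explicit `id` in the layout XML keeps that number, while partitions without one are filled in sequentially. The XML fragment is an invented example modeled loosely on an L4T flash layout file, and the sketch ignores possible collisions between sequential and explicit numbers.

```python
# Sketch of r32.5-style partition numbering from a flash layout XML.
# The layout below is an invented example, not a real L4T file.
import xml.etree.ElementTree as ET

LAYOUT = """\
<partition_layout version="01.00.0000">
  <device type="sdmmc_user" instance="3">
    <partition name="APP" id="1" type="data"/>
    <partition name="kernel" type="data"/>
    <partition name="kernel-dtb" type="data"/>
    <partition name="UDA" id="15" type="data"/>
  </device>
</partition_layout>
"""

def assign_numbers(xml_text):
    """Return {partition name: partition number}.

    A partition with an explicit id attribute keeps that number;
    partitions without one get the next free sequential number.
    """
    numbers = {}
    next_free = 1
    for part in ET.fromstring(xml_text).iter("partition"):
        pid = part.get("id")
        if pid is not None:
            numbers[part.get("name")] = int(pid)
            next_free = max(next_free, int(pid) + 1)
        else:
            numbers[part.get("name")] = next_free
            next_free += 1
    return numbers

print(assign_numbers(LAYOUT))
# {'APP': 1, 'kernel': 2, 'kernel-dtb': 3, 'UDA': 15}
```

This illustrates why reusing a 32.4-era layout, where numbers were always assigned incrementally from 1, can silently shift partition numbers under 32.5 once IDs are honored.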
- Signing code with an HSM and protecting the private key
  - Still looking for a response to https://forums.developer.nvidia.com/t/signing-firmware-with-an-hsm-protected-private-key/191616; it's still not clear how to avoid placing the key on the filesystem.
  - Following up with someone who reached out privately on the forum about this.
- NVMe support and mass flash support
  - Will check in with RidgeRun for an update on these, based on last month's discussion.