linuxPackages.nvidiaPackages.beta: 555.52.04 -> 560.28.03 #329450
Conversation
Some other changes besides the version bump:
There is also a bunch of new components for VulkanSC.
But since I'm not sure how these are used, I'm not making any changes to account for those components for now.
Details
nvidia-open> make -C src/nvidia
nvidia-open> make -C src/nvidia-modeset
nvidia-open> make[1]: Entering directory '/build/source/src/nvidia'
nvidia-open> make[1]: Entering directory '/build/source/src/nvidia-modeset'
nvidia-open> [ nvidia-modeset ] XZ _out/Linux_x86_64/maxwell_shaders.xz
nvidia-open> [ nvidia-modeset ] XZ _out/Linux_x86_64/pascal_shaders.xz
nvidia-open> [ nvidia-modeset ] XZ _out/Linux_x86_64/volta_shaders.xz
nvidia-open> [ nvidia-modeset ] XZ _out/Linux_x86_64/turing_shaders.xz
nvidia-open> [ nvidia-modeset ] XZ _out/Linux_x86_64/ampere_shaders.xz
nvidia-open> [ nvidia-modeset ] XZ _out/Linux_x86_64/hopper_shaders.xz
nvidia-open> [ nvidia-modeset ] XZ _out/Linux_x86_64/blackwell_shaders.xz
nvidia-open> [ nvidia-modeset ] CC ../common/shared/nvstatus/nvstatus.c
nvidia-open> [ nvidia-modeset ] CC ../common/softfloat/source/8086-SSE/s_commonNaNToF32UI.c
nvidia-open> [ nvidia-modeset ] CC ../common/softfloat/source/8086-SSE/s_commonNaNToF16UI.c
nvidia-open> [ nvidia-modeset ] CC ../common/softfloat/source/8086-SSE/s_commonNaNToF64UI.c
nvidia-open> [ nvidia-modeset ] CC ../common/softfloat/source/8086-SSE/s_f32UIToCommonNaN.c
nvidia-open> [ nvidia-modeset ] CC ../common/softfloat/source/8086-SSE/s_f64UIToCommonNaN.c
nvidia-open> [ nvidia-modeset ] CC ../common/softfloat/source/8086-SSE/s_propagateNaNF32UI.c
nvidia-open> [ nvidia-modeset ] CC ../common/softfloat/source/8086-SSE/s_propagateNaNF64UI.c
nvidia-open> [ nvidia-modeset ] CC ../common/softfloat/source/8086-SSE/softfloat_raiseFlags.c
nvidia-open> [ nvidia-modeset ] CC ../common/softfloat/source/f32_div.c
nvidia-open> [ nvidia-modeset ] CC ../common/softfloat/source/f32_add.c
nvidia-open> [ nvidia-modeset ] CC ../common/softfloat/source/f32_eq.c
nvidia-open> [ nvidia-modeset ] CC ../common/softfloat/source/f32_isSignalingNaN.c
nvidia-open> [ nvidia-modeset ] CC ../common/softfloat/source/f32_eq_signaling.c
nvidia-open> [ nvidia ] CC generated/g_access_cntr_buffer_nvoc.c
nvidia-open> ../common/shared/nvstatus/nvstatus.c:24:10: fatal error: 'nvstatus.h' file not found
nvidia-open> 24 | #include "nvstatus.h"
nvidia-open> | ^~~~~~~~~~~~
nvidia-open> 1 error generated.
nvidia-open> ../common/softfloat/source/8086-SSE/softfloat_raiseFlags.c:37:10: fatal error: 'platform.h' file not found
nvidia-open> 37 | #include "platform.h"
nvidia-open> | ^~~~~~~~~~~~
nvidia-open> 1 error generated.
nvidia-open> ../common/softfloat/source/f32_isSignalingNaN.c:38:10: fatal error: 'platform.h' file not found
nvidia-open> 38 | #include "platform.h"
nvidia-open> | ^~~~~~~~~~~~
nvidia-open> 1 error generated.
nvidia-open> ../common/softfloat/source/f32_add.c:39:10: fatal error: 'platform.h' file not found
nvidia-open> 39 | #include "platform.h"
nvidia-open> | ^~~~~~~~~~~~
nvidia-open> ../common/softfloat/source/8086-SSE/s_commonNaNToF64UI.c:38:10: fatal error: 'platform.h' file not found
nvidia-open> 38 | #include "platform.h"
nvidia-open> | ^~~~~~~~~~~~
nvidia-open> ../common/softfloat/source/f32_eq_signaling.c:39:10: fatal error: 'platform.h' file not found
nvidia-open> 39 | #include "platform.h"
nvidia-open> | ^~~~~~~~~~~~
nvidia-open> ../common/softfloat/source/f32_eq.c:39:10: fatal error: 'platform.h' file not found
nvidia-open> 39 | #include "platform.h"
nvidia-open> | ^~~~~~~~~~~~
nvidia-open> ../common/softfloat/source/8086-SSE/s_propagateNaNF32UI.c:39:10: fatal error: 'platform.h' file not found
nvidia-open> 39 | #include "platform.h"
nvidia-open> | ^~~~~~~~~~~~
nvidia-open> make[1]: *** [Makefile:215: _out/Linux_x86_64/softfloat_raiseFlags.o] Error 1
nvidia-open> make[1]: *** Waiting for unfinished jobs....
nvidia-open> ../common/softfloat/source/8086-SSE/s_commonNaNToF16UI.c:38:10: fatal error: 'platform.h' file not found
nvidia-open> 38 | #include "platform.h"
nvidia-open> | ^~~~~~~~~~~~
nvidia-open> ../common/softfloat/source/f32_div.c:39:10: fatal error: 'platform.h' file not found
nvidia-open> 39 | #include "platform.h"
nvidia-open> | ^~~~~~~~~~~~
nvidia-open> ../common/softfloat/source/8086-SSE/s_f64UIToCommonNaN.c:38:10: fatal error: 'platform.h' file not found
nvidia-open> 38 | #include "platform.h"
nvidia-open> | ^~~~~~~~~~~~
nvidia-open> make[1]: *** [Makefile:215: _out/Linux_x86_64/nvstatus.o] Error 1
nvidia-open> ../common/softfloat/source/8086-SSE/s_commonNaNToF32UI.c:38:10: fatal error: 'platform.h' file not found
nvidia-open> 38 | #include "platform.h"
nvidia-open> | ^~~~~~~~~~~~
nvidia-open> 1 error generated.
nvidia-open> ../common/softfloat/source/8086-SSE/s_f32UIToCommonNaN.c:38:10: fatal error: 'platform.h' file not found
nvidia-open> 38 | #include "platform.h"
nvidia-open> | ^~~~~~~~~~~~
nvidia-open> 1 error generated.
nvidia-open> 1 error generated.
nvidia-open> 1 error generated.
nvidia-open> ../common/softfloat/source/8086-SSE/s_propagateNaNF64UI.c:39:10: fatal error: 'platform.h' file not found
nvidia-open> 39 | #include "platform.h"
nvidia-open> | ^~~~~~~~~~~~
nvidia-open> 1 error generated.
nvidia-open> 1 error generated.
nvidia-open> 1 error generated.
nvidia-open> 1 error generated.
nvidia-open> 1 error generated.
nvidia-open> 1 error generated.
nvidia-open> 1 error generated.
nvidia-open> [ nvidia ] CC generated/g_binary_api_nvoc.c
nvidia-open> [ nvidia ] CC generated/g_all_dcl_pb.c
nvidia-open> make[1]: *** [Makefile:215: _out/Linux_x86_64/f32_isSignalingNaN.o] Error 1
nvidia-open> make[1]: *** [Makefile:215: _out/Linux_x86_64/f32_div.o] Error 1
nvidia-open> make[1]: *** [Makefile:215: _out/Linux_x86_64/f32_eq.o] Error 1
nvidia-open> make[1]: *** [Makefile:215: _out/Linux_x86_64/f32_add.o] Error 1
nvidia-open> make[1]: *** [Makefile:215: _out/Linux_x86_64/s_commonNaNToF16UI.o] Error 1
nvidia-open> make[1]: *** [Makefile:215: _out/Linux_x86_64/s_commonNaNToF64UI.o] Error 1
nvidia-open> make[1]: *** [Makefile:215: _out/Linux_x86_64/s_f64UIToCommonNaN.o] Error 1
nvidia-open> make[1]: *** [Makefile:215: _out/Linux_x86_64/s_f32UIToCommonNaN.o] Error 1
nvidia-open> [ nvidia ] CC generated/g_ccsl_nvoc.c
nvidia-open> make[1]: *** [Makefile:215: _out/Linux_x86_64/s_commonNaNToF32UI.o] Error 1
nvidia-open> make[1]: *** [Makefile:215: _out/Linux_x86_64/s_propagateNaNF32UI.o] Error 1
nvidia-open> make[1]: *** [Makefile:215: _out/Linux_x86_64/f32_eq_signaling.o] Error 1
nvidia-open> make[1]: *** [Makefile:215: _out/Linux_x86_64/s_propagateNaNF64UI.o] Error 1
nvidia-open> [ nvidia ] CC generated/g_bindata.c
nvidia-open> [ nvidia ] CC generated/g_ce_utils_nvoc.c
nvidia-open> [ nvidia ] CC generated/g_channel_descendant_nvoc.c
nvidia-open> [ nvidia ] CC generated/g_chips2halspec_nvoc.c
nvidia-open> [ nvidia ] CC generated/g_chipset_nvoc.c
nvidia-open> generated/g_access_cntr_buffer_nvoc.c:2:10: fatal error: 'nvoc/runtime.h' file not found
nvidia-open> 2 | #include "nvoc/runtime.h"
nvidia-open> | ^~~~~~~~~~~~~~~~
nvidia-open> [ nvidia ] CC generated/g_client_nvoc.c
nvidia-open> [ nvidia ] CC generated/g_client_resource_nvoc.c
nvidia-open> [ nvidia ] CC generated/g_conf_compute_api_nvoc.c
nvidia-open> [ nvidia ] CC generated/g_compute_instance_subscription_nvoc.c
nvidia-open> [ nvidia ] CC generated/g_conf_compute_nvoc.c
nvidia-open> 1 error generated.
nvidia-open> [ nvidia ] CC generated/g_console_mem_nvoc.c
nvidia-open> [ nvidia ] CC generated/g_context_dma_nvoc.c
nvidia-open> [ nvidia ] CC generated/g_crashcat_engine_nvoc.c
nvidia-open> make[1]: *** [Makefile:203: _out/Linux_x86_64/g_access_cntr_buffer_nvoc.o] Error 1
nvidia-open> make[1]: *** Waiting for unfinished jobs....
nvidia-open> generated/g_binary_api_nvoc.c:2:10: fatal error: 'nvoc/runtime.h' file not found
nvidia-open> 2 | #include "nvoc/runtime.h"
nvidia-open> | ^~~~~~~~~~~~~~~~
nvidia-open> generated/g_chips2halspec_nvoc.c:2:10: fatal error: 'nvoc/runtime.h' file not found
nvidia-open> 2 | #include "nvoc/runtime.h"
nvidia-open> | ^~~~~~~~~~~~~~~~
nvidia-open> generated/g_channel_descendant_nvoc.c:2:10: fatal error: 'nvoc/runtime.h' file not found
nvidia-open> 2 | #include "nvoc/runtime.h"
nvidia-open> | ^~~~~~~~~~~~~~~~
nvidia-open> generated/g_ce_utils_nvoc.c:2:10: fatal error: 'nvoc/runtime.h' file not found
nvidia-open> 2 | #include "nvogenerated/g_all_dcl_pb.cc:/3r:u10n:t imefatal error: .h"'nvtypes.h' file not found
nvidia-open>
nvidia-open> | ^~~~~~~~~~~~~~~~
nvidia-open> 3 | #include generated/g_chipset_nvoc.c":n2v:t10y:p es.fatal error: h"'nvoc/runtime.h' file not found
nvidia-open>
nvidia-open> | ^~~~~~~~~~~
nvidia-open> 2 | #generated/g_bindata.ci:n26c:l10u:d e "fatal error: nvo'core/bin_data.h' file not foundc/
nvidia-open> runtim e26. | h#"in
nvidia-open> c| lu ^~~~~~~~~~~~~~~~d
nvidia-open> e generated/g_ccsl_nvoc.c":c2o:r10e:/ binfatal error: _da'nvoc/runtime.h' file not foundta
nvidia-open> .h"
nvidia-open> | 2 ^~~~~~~~~~~~~~~~~ |
nvidia-open> #include "nvoc/runtime.h"
nvidia-open> | ^~~~~~~~~~~~~~~~
nvidia-open> 1 error generated.
nvidia-open> 1 error generated.
nvidia-open> 1 error generated.
nvidia-open> 1 error generated.
nvidia-open> 1 error generated.
nvidia-open> 1 error generated.
nvidia-open> 1 error generated.
nvidia-open> generated/g_client_nvoc.c:2:10: fatal error: 'nvoc/runtime.h' file not found
nvidia-open> 2 | #include "nvoc/rungenerated/g_console_mem_nvoc.ct:i2m:e10.:h "
nvidia-open> fatal error: | ^~~~~~~~~~~~~~~~'nvoc/runtime.h' file not found
nvidia-open>
nvidia-open> 2 | #include "nvoc/runtime.h"
nvidia-open> | ^~~~~~~~~~~~~~~~
nvidia-open> generated/g_conf_compute_api_nvoc.c:2:10: fatal error: 'nvoc/runtime.h' file not found
nvidia-open> 2 | #include "nvoc/runtime.h"
nvidia-open> | ^~~~~~~~~~~~~~~~
nvidia-open> generated/g_compute_instance_subscription_nvoc.c:2:10: fatal error: 'nvoc/runtime.h' file not found
nvidia-open> 2 | #include "nvoc/runtime.h"
nvidia-open> | ^~~~~~~~~~~~~~~~
nvidia-open> generated/g_conf_compute_nvoc.c:2:10: fatal error: 'nvoc/runtime.h' file not found
nvidia-open> 2 | #include "nvoc/runtime.h"
nvidia-open> | ^~~~~~~~~~~~~~~~
nvidia-open> generated/g_client_resource_nvoc.c:2:10: fatal error: 'nvoc/runtime.h' file not found
nvidia-open> 2 | #include "nvoc/runtime.h"
nvidia-open> | ^~~~~~~~~~~~~~~~
nvidia-open> 1 error generated.
nvidia-open> 1 error generated.
nvidia-open> make[1]: *** [Makefile:203: _out/Linux_x86_64/g_ccsl_nvoc.o] Error 1
nvidia-open> make[1]: *** [Makefile:203: _out/Linux_x86_64/g_chips2halspec_nvoc.o] Error 1
nvidia-open> 1 error generated.
nvidia-open> make[1]: *** [Makefile:203: _out/Linux_x86_64/g_all_dcl_pb.o] Error 1
nvidia-open> 1 error generated.
nvidia-open> make[1]: Leaving directory '/build/source/src/nvidia-modeset'
nvidia-open> make[1]: *** [Makefile:203: _out/Linux_x86_64/g_chipset_nvoc.o] Error 1
nvidia-open> make: *** [Makefile:46: src/nvidia-modeset/_out/Linux_x86_64/nv-modeset-kernel.o] Error 2
nvidia-open> make: *** Waiting for unfinished jobs....
nvidia-open> make[1]: *** [Makefile:203: _out/Linux_x86_64/g_channel_descendant_nvoc.o] Error 1
nvidia-open> make[1]: *** [Makefile:203: _out/Linux_x86_64/g_binary_api_nvoc.o] Error 1
nvidia-open> make[1]: *** [Makefile:203: _out/Linux_x86_64/g_ce_utils_nvoc.o] Error 1
nvidia-open> 1 error generated.
nvidia-open> generated/g_context_dma_nvoc.c:2:10: fatal error: 'nvoc/runtime.h' file not found
nvidia-open> 2 | #include "nvoc/runtime.h"
nvidia-open> | ^~~~~~~~~~~~~~~~
nvidia-open> make[1]: *** [Makefile:203: _out/Linux_x86_64/g_console_mem_nvoc.o] Error 1
nvidia-open> 1 error generated.
nvidia-open> make[1]: *** [Makefile:203: _out/Linux_x86_64/g_client_nvoc.o] Error 1
nvidia-open> make[1]: *** [Makefile:203: _out/Linux_x86_64/g_conf_compute_api_nvoc.o] Error 1
nvidia-open> make[1]: *** [Makefile:203: _out/Linux_x86_64/g_compute_instance_subscription_nvoc.o] Error 1
nvidia-open> 1 error generated.
nvidia-open> make[1]: *** [Makefile:203: _out/Linux_x86_64/g_conf_compute_nvoc.o] Error 1
nvidia-open> make[1]: *** [Makefile:203: _out/Linux_x86_64/g_client_resource_nvoc.o] Error 1
nvidia-open> generated/g_crashcat_engine_nvoc.c:2:10: fatal error: 'nvoc/runtime.h' file not found
nvidia-open> 2 | #include "nvoc/runtime.h"
nvidia-open> | ^~~~~~~~~~~~~~~~
nvidia-open> make[1]: *** [Makefile:203: _out/Linux_x86_64/g_context_dma_nvoc.o] Error 1
nvidia-open> 1 error generated.
nvidia-open> make[1]: *** [Makefile:203: _out/Linux_x86_64/g_crashcat_engine_nvoc.o] Error 1
nvidia-open> 1 error generated.
nvidia-open> make[1]: *** [Makefile:203: _out/Linux_x86_64/g_bindata.o] Error 1
nvidia-open> make[1]: Leaving directory '/build/source/src/nvidia'
nvidia-open> make: *** [Makefile:34: src/nvidia/_out/Linux_x86_64/nv-kernel.o] Error 2
error: builder for '/nix/store/nlyawfgxkjjm2v0cyq8yz2ksz41axbdj-nvidia-open-6.10.0-560.28.03.drv' failed with exit code 2;
For full logs, run 'nix log /nix/store/nlyawfgxkjjm2v0cyq8yz2ksz41axbdj-nvidia-open-6.10.0-560.28.03.drv'.
Result of nixpkgs-review:
1 package blacklisted
26 packages failed to build
486 packages built
@VeilSilence Which kernelPackages is that? Because…
hardware = {
  nvidia = {
    #package = encode-patch nvidiaPackage;
    package = config.boot.kernelPackages.nvidiaPackages.mkDriver {
      version = "560.28.03";
      sha256_64bit = "sha256-martv18vngYBJw1IFUCAaYr+uc65KtlHAMdLMdtQJ+Y=";
      sha256_aarch64 = "sha256-+u0ZolZcZoej4nqPGmZn5qpyynLvu2QSm9Rd3wLdDmM=";
      openSha256 = "sha256-asGpqOpU0tIO9QqceA8XRn5L27OiBFuI9RZ1NjSVwaM=";
      settingsSha256 = "sha256-b4nhUMCzZc3VANnNb0rmcEH6H7SK2D5eZIplgPV59c8=";
      persistencedSha256 = "sha256-MhITuC8tH/IPhCOUm60SrPOldOpitk78mH0rg+egkTE=";
    };
  };
};

Used this override. Maybe that's why.
Pretty sure none of the changes are required for the kernel module build to work, so I'm at a loss as to where your failure is coming from.
Hmm, looks like it's because I'm building the Linux kernel with a clang 17 stdenv. Here is the stdenv override:
{
llvmPackages_17,
patchelf,
overrideCC,
pkgs,
}: let
noBintools = {
bootBintools = null;
bootBintoolsNoLibc = null;
};
hostLLVM = llvmPackages_17.override noBintools;
buildLLVM = llvmPackages_17.override noBintools;
mkLLVMPlatform = platform:
platform
// {
linux-kernel =
platform.linux-kernel
// {
makeFlags =
(platform.linux-kernel.makeFlags or [])
++ [
"LLVM=1"
"LLVM_IAS=1"
#"V=1" #to test if flags applied
"CC=${buildLLVM.clangUseLLVM}/bin/clang"
"LD=${buildLLVM.lld}/bin/ld.lld"
"HOSTLD=${hostLLVM.lld}/bin/ld.lld"
"AR=${buildLLVM.llvm}/bin/llvm-ar"
"HOSTAR=${hostLLVM.llvm}/bin/llvm-ar"
"NM=${buildLLVM.llvm}/bin/llvm-nm"
"STRIP=${buildLLVM.llvm}/bin/llvm-strip"
"OBJCOPY=${buildLLVM.llvm}/bin/llvm-objcopy"
"OBJDUMP=${buildLLVM.llvm}/bin/llvm-objdump"
"READELF=${buildLLVM.llvm}/bin/llvm-readelf"
"HOSTCC=${hostLLVM.clangUseLLVM}/bin/clang"
"HOSTCXX=${hostLLVM.clangUseLLVM}/bin/clang++"
];
};
};
stdenv' = pkgs.overrideCC hostLLVM.stdenv hostLLVM.clangUseLLVM;
in
stdenv'.override (old: {
hostPlatform = mkLLVMPlatform old.hostPlatform;
buildPlatform = mkLLVMPlatform old.buildPlatform;
extraNativeBuildInputs = [hostLLVM.lld patchelf];
})
I disabled my kernel override, which fixed this issue. The error was on my side.
And:

open = lib.mkEnableOption ''
  the open source NVIDIA kernel module
'' // {
  defaultText = lib.literalExpression ''lib.versionAtLeast config.hardware.nvidia.package.version "560"'';
};
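Presumably that defaultText mirrors an actual default computed from the driver version. A minimal sketch of what such a pairing could look like (illustrative only, not necessarily the exact module code in this PR):

open = lib.mkEnableOption ''
  the open source NVIDIA kernel module
'' // {
  # Sketch: the default follows the driver version, as the defaultText describes.
  default = lib.versionAtLeast config.hardware.nvidia.package.version "560";
  defaultText = lib.literalExpression ''lib.versionAtLeast config.hardware.nvidia.package.version "560"'';
};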
This pull request has been mentioned on NixOS Discourse. There might be relevant details there: https://discourse.nixos.org/t/how-to-specify-nvidia-open-source-kenel-module-version/49515/2
Just tested with vainfo:
vainfo
Trying display: wayland
libva info: VA-API version 1.21.0
libva info: User environment variable requested driver 'nvidia'
libva info: Trying to open /run/opengl-driver/lib/dri/nvidia_drv_video.so
libva info: Found init function __vaDriverInit_1_0
libva info: va_openDriver() returns 0
vainfo: VA-API version: 1.21 (libva 2.22.0)
vainfo: Driver version: VA-API NVDEC driver [direct backend]
vainfo: Supported profile and entrypoints
VAProfileMPEG2Simple : VAEntrypointVLD
VAProfileMPEG2Main : VAEntrypointVLD
VAProfileVC1Simple : VAEntrypointVLD
VAProfileVC1Main : VAEntrypointVLD
VAProfileVC1Advanced : VAEntrypointVLD
VAProfileH264Main : VAEntrypointVLD
VAProfileH264High : VAEntrypointVLD
VAProfileH264ConstrainedBaseline: VAEntrypointVLD
VAProfileHEVCMain : VAEntrypointVLD
VAProfileVP8Version0_3 : VAEntrypointVLD
VAProfileVP9Profile0 : VAEntrypointVLD
VAProfileAV1Profile0 : VAEntrypointVLD
VAProfileHEVCMain10 : VAEntrypointVLD
VAProfileHEVCMain12 : VAEntrypointVLD
VAProfileVP9Profile2 : VAEntrypointVLD
VAProfileHEVCMain444 : VAEntrypointVLD
VAProfileHEVCMain444_10 : VAEntrypointVLD
VAProfileHEVCMain444_12 : VAEntrypointVLD
clinfo
Number of platforms 1
Platform Name NVIDIA CUDA
Platform Vendor NVIDIA Corporation
Platform Version OpenCL 3.0 CUDA 12.6.32
Platform Profile FULL_PROFILE
Platform Extensions cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_fp64 cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_icd cl_khr_gl_sharing cl_nv_compiler_options cl_nv_device_attribute_query cl_nv_pragma_unroll cl_nv_copy_opts cl_nv_create_buffer cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_device_uuid cl_khr_pci_bus_info cl_khr_external_semaphore cl_khr_external_memory cl_khr_external_semaphore_opaque_fd cl_khr_external_memory_opaque_fd

But I got a different kind of issue: any games I try to run with Proton just get stuck at a black screen.
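For reference, the "User environment variable requested driver 'nvidia'" line in the vainfo output usually means LIBVA_DRIVER_NAME=nvidia is exported. On NixOS that is commonly wired up roughly like this; this is an assumption for illustration, not taken from the poster's config:

{ pkgs, ... }:
{
  environment.sessionVariables.LIBVA_DRIVER_NAME = "nvidia";
  # nvidia-vaapi-driver provides the "[direct backend]" NVDEC VA-API driver seen above;
  # on older releases this option is hardware.opengl.extraPackages instead.
  hardware.graphics.extraPackages = [ pkgs.nvidia-vaapi-driver ];
}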
Yes, my issue is one of a kind; I haven't found anyone else who has it. I really want a new GPU.
Have been running this version for a few days with the open kernel modules without any issues.
We should eventually migrate away from the custom builder script.
I've made a tool to turn the nvidia-installer manifest into a JSON document listing all the files and where they should be installed, but I haven't had the time to turn it into a full installer.
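Not the actual tool mentioned above, just a rough sketch of the idea in Nix. It assumes, purely for illustration, that each manifest entry is a whitespace-separated line whose first three fields are the file path, permissions, and an install type; the real .manifest layout may differ, and the file name manifest-to-json.nix and the manifestPath argument are hypothetical:

# manifest-to-json.nix (hypothetical sketch)
{ lib ? (import <nixpkgs> { }).lib, manifestPath }:
let
  lines = lib.splitString "\n" (builtins.readFile manifestPath);
  # Keep only lines that look like entries (at least three fields); header lines fall through.
  fields = lib.filter (fs: lib.length fs >= 3)
    (map (l: lib.filter (f: f != "") (lib.splitString " " l)) lines);
  toEntry = fs: {
    path = lib.elemAt fs 0;
    mode = lib.elemAt fs 1;
    type = lib.elemAt fs 2;
  };
in
builtins.toJSON (map toEntry fields)

Evaluated, for example, with nix-instantiate --eval --strict, this yields a JSON array that an installer script could consume.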
@Kiskae I like where this is going. I had a similar approach some time ago but didn't have time to finish. I pushed the changes here in case there is something useful: baracoder@5fcda66. It builds, but something was not quite working yet.
This needs to be addressed somehow before the next release. As it stands, this PR making open the default will break every setup that uses a pre-Turing NVIDIA card, and users may not even notice the problems immediately.
It looks like the nvidia-installer handles this through runtime inspection, using a list of device IDs that aren't supported: https://github.com/NVIDIA/nvidia-installer/blob/560.28.03/nvGpus.h
The default SHOULD be open after this driver release, but I'm not sure we can do anything for older devices besides adding a release note that they should explicitly disable the open kernel modules.
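For users on pre-Turing cards, the explicit opt-out being suggested here would just be the existing option:

{
  # Pre-Turing GPUs (Pascal, Maxwell, …) are not supported by the open kernel modules.
  hardware.nvidia.open = false;
}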
Or we can drop the default and make the choice mandatory for the user.
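A rough sketch of what "dropping the default" could look like, illustrative only and not this PR's code: make the option nullable and assert that it has been set explicitly.

{ config, lib, ... }:
let
  cfg = config.hardware.nvidia;
in {
  # Sketch: hypothetical redefinition of the option with no usable default.
  options.hardware.nvidia.open = lib.mkOption {
    type = lib.types.nullOr lib.types.bool;
    default = null;
    description = "Whether to use the open source NVIDIA kernel module.";
  };

  config.assertions = [{
    assertion = cfg.open != null;
    message = ''
      hardware.nvidia.open must be set explicitly:
        true  for Turing (RTX 20xx / GTX 16xx) or newer GPUs;
        false for older GPUs, which the open kernel modules do not support.
    '';
  }];
}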
NO IT SHOULD NOT. At least not in the way you did it. My suggestion:
Currently, there seems to be no assertion prompting you that you need to manually set whether to use the open driver. It will fail with…
I think that's enough to prompt you to set the option? Albeit the wording is somewhat vague.
It definitely is not clear enough for an inexperienced user to know what this means (source: myself). Thankfully, I have learned by this point that the best way to troubleshoot NixOS is to search the issues in this repo. Heck, even the option name is a bit confusing: I did not immediately realize that "open" meant open source.
Instead of adding assertions everywhere, we can just improve the error message: #338362