
[WIP] Try to bump to GCC 14.2 #5602

Open · davidrohr wants to merge 2 commits into master
Conversation

davidrohr (Contributor)

This will at least fail with the current CUDA 12.6, but I want to check for other failures.

davidrohr requested a review from a team as a code owner on September 5, 2024, 10:24
davidrohr (Contributor, Author)

Currently failing due to an old json-c. We need to bump json-c, but the new version needs CMake instead of autoconf, so the recipe must be adapted.

davidrohr (Contributor, Author)

Now fails in AliAlfred/DimRpcParallel with:

/sw/SOURCES/DimRpcParallel/v0.1.2/v0.1.2/src/dimrpcqueue.cpp: In member function 'void DimRpcQueue::processRequests()':
/sw/SOURCES/DimRpcParallel/v0.1.2/v0.1.2/src/dimrpcqueue.cpp:59:62: error: ignoring return value of 'std::lock_guard<_Mutex>::lock_guard(mutex_type&) [with _Mutex = std::mutex; mutex_type = std::mutex]', declared with attribute 'nodiscard' [-Werror=unused-result]
   59 |                 std::lock_guard<std::mutex>(this->accessMutex);
      |                                                              ^
In file included from /sw/slc7_x86-64/GCC-Toolchain/v14.2.0-alice2-local1/include/c++/14.2.0/mutex:47,
                 from /sw/SOURCES/DimRpcParallel/v0.1.2/v0.1.2/include/DimRpcParallel/dimrpcqueue.h:5,
                 from /sw/SOURCES/DimRpcParallel/v0.1.2/v0.1.2/src/dimrpcqueue.cpp:1:
/sw/slc7_x86-64/GCC-Toolchain/v14.2.0-alice2-local1/include/c++/14.2.0/bits/std_mutex.h:249:16: note: declared here
  249 |       explicit lock_guard(mutex_type& __m) : _M_device(__m)

I filed a bug report here: https://its.cern.ch/jira/browse/ALF-83 .
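
For context, the diagnostic points at a real bug that GCC 14 now surfaces because the std::lock_guard constructor is marked nodiscard: the statement constructs an unnamed temporary guard that is destroyed at the end of the full expression, so accessMutex is never actually held across the critical section. A minimal sketch of the problem and the usual fix (class and member names are illustrative, not the actual DimRpcParallel code):

```cpp
#include <mutex>
#include <queue>

class RpcQueue {
  std::mutex accessMutex;
  std::queue<int> requests;

public:
  void processRequests() {
    // Buggy pattern flagged by GCC 14: the unnamed lock_guard is a temporary
    // destroyed at the end of this statement, so the mutex is released
    // immediately and the queue access below runs unprotected:
    //   std::lock_guard<std::mutex>(this->accessMutex);

    // Fix: name the guard so it holds the mutex until the end of the scope.
    std::lock_guard<std::mutex> lock(this->accessMutex);
    while (!requests.empty()) {
      requests.pop();
    }
  }
};
```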

Also, as discussed with @ktf: binutils compilation fails randomly. We should probably downgrade to the binutils of gcc-toolchain-13.2-alice1, which was working.

davidrohr (Contributor, Author)

@singiamtel @ktf: the slc9-aarch CI fails with:

Downlod reference files in ci_test_dir .. OK
Uploading the ci_test_dir to GRID .. Could not upload reference files in 005_cp_dir.test/test.sh
Exception encountered! it will be logged to log.txt
Please report the error and send the log file and "alien.py version" output to [email protected]
If the exception is reproductible including on lxplus, please create a detailed debug report this way:
ALIENPY_DEBUG=1 ALIENPY_DEBUG_FILE=log.txt your_command_line
Failed test!!! Exitcode == 1

adriansev (Contributor)

@davidrohr So, for the alien.py errors in alidist-slc9-aarch64 I would need the log file to see what happened. What is odd is that tests 004 and 006, both cp related, worked (and on x86_64 Alma9 everything seems to work without problems), so if the actual log file is not available for debugging, just restart the failed test.

davidrohr (Contributor, Author)

Well, I don't know how to get a log file beyond the build log I get from the CI.
What log file are you actually referring to?

adriansev (Contributor)

For xjalienfs/alien.py, these tests are run: https://github.com/adriansev/jalien_py/tree/master/tests
In case of failure, a log.txt file will be found in the respective test directory. But I suspect this is a transient error, so it should be enough to restart the failed test.

ktf (Member) commented Oct 17, 2024

@davidrohr AliRoot is now fine.

ktf (Member) commented Oct 29, 2024

@davidrohr do you understand the issue with CUDA and the one with xmmintrin.h? They both seem legitimate, and I do not understand why we did not see them with GCC 13.

davidrohr (Contributor, Author)

For CUDA it is clear, since GCC 14 is not yet supported. We have to wait for a new CUDA release.
For the GPU Standalone benchmark, it is because I am using x86 intrinsics. I will just disable that build on ARM, as I do for macOS here: https://github.com/AliceO2Group/AliceO2/blob/fb8e068eff4fba325c75b3fa9c77e59db10a50a6/GPU/GPUTracking/CMakeLists.txt#L562.
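
The CMake-level disable linked above is the actual plan here; purely as an illustration of why the build breaks on aarch64, a source-level guard around x86-only intrinsics would look roughly like this (hypothetical helper, not code taken from GPUTracking):

```cpp
// Sketch only: keep SSE intrinsics out of non-x86 builds with a preprocessor
// guard, instead of disabling the whole target in CMake.
#if defined(__x86_64__) || defined(_M_X64)
#include <xmmintrin.h>

static void prefetchHint(const void* p)
{
  // SSE prefetch intrinsic; only available on x86.
  _mm_prefetch(static_cast<const char*>(p), _MM_HINT_T0);
}
#else
static void prefetchHint(const void*)
{
  // No-op fallback on non-x86 targets such as aarch64.
}
#endif

int main()
{
  int value = 42;
  prefetchHint(&value);  // harmless on either architecture
  return 0;
}
```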

davidrohr (Contributor, Author)

@ktf: Now only the FullCI remains red; it will stay like this until we bump CUDA.

ktf changed the title from "Try to bump to GCC 14.2" to "[WIP] Try to bump to GCC 14.2" on Oct 30, 2024
ktf (Member) commented Oct 30, 2024

Changed to WIP to avoid retesting.
