help for weak scalability test #2
Hello, in general we use the benchmarks only to verify solver correctness. Nevertheless, you can also use them for scalability testing. In your case, the increasing solver runtime is probably caused by domain sizes that are set too small (only 4 x 4 x 4 = 64 elements per domain), so try increasing the domain sizes. We evaluated this benchmark on our cluster (with the DIRICHLET preconditioner instead of LUMPED) and got the following time:

mpirun -n 216 ./build/espreso -c benchmarks/linearElasticity3D/steadystate/pressure/espreso.ecf -vvv HEXA8 6 6 6 6 6 6 4 4 4 FETI HYBRID_FETI
5.2s

On which machine are you evaluating this benchmark?

Best regards
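As an illustration of that advice (assuming, as the command layout suggests, that the last three HEXA8 arguments set the number of elements per domain in x, y, and z), the same 216-process run with 8 x 8 x 8 = 512 elements per domain would be:

mpirun -n 216 ./build/espreso -c benchmarks/linearElasticity3D/steadystate/pressure/espreso.ecf -vvv HEXA8 6 6 6 6 6 6 8 8 8 FETI HYBRID_FETI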
Thank you! Another question: neither espreso-master nor espreso-readex supports the GPU; the GPU code is commented out in itersolverGPU.cpp. Is there an espreso version that supports the GPU?
The GPU version is currently available only for our internal use. The public version should be available in August.
Hello! The HTFETI method is efficient for large-scale problems. I want to use espreso to test weak scalability, and I run espreso with the following commands:
./waf configure -m release --intwidth=64
./waf -j16
source /env/threading.default 1
mpirun -n 216 ./build/espreso -c benchmarks/linearElasticity3D/steadystate/pressure/espreso.ecf -vvv HEXA8 6 6 6 6 6 6 4 4 4 FETI HYBRID_FETI
mpirun -n 512 ./build/espreso -c benchmarks/linearElasticity3D/steadystate/pressure/espreso.ecf -vvv HEXA8 8 8 8 6 6 6 4 4 4 FETI HYBRID_FETI
mpirun -n 1000 ./build/espreso -c benchmarks/linearElasticity3D/steadystate/pressure/espreso.ecf -vvv HEXA8 10 10 10 6 6 6 4 4 4 FETI HYBRID_FETI
mpirun -n 1728 ./build/espreso -c benchmarks/linearElasticity3D/steadystate/pressure/espreso.ecf -vvv HEXA8 12 12 12 6 6 6 4 4 4 FETI HYBRID_FETI
mpirun -n 2744 ./build/espreso -c benchmarks/linearElasticity3D/steadystate/pressure/espreso.ecf -vvv HEXA8 14 14 14 6 6 6 4 4 4 FETI HYBRID_FETI
PROCESSES    SOLVER TIME
216          8.206s
512          8.66s
1000         9.563s
1728         10.59s
2744         17.021s
As the number of processes increases, the solver time grows too quickly. Is the example I chose wrong? Are there 3D structural mechanics examples suitable for testing weak scalability?
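For reference, a quick sanity check (assuming the nine HEXA8 arguments are clusters, domains per cluster, and elements per domain in x, y, and z, with one cluster per MPI process) confirms that each run above keeps the per-process problem size constant, so this is a valid weak-scaling series:

# elements per MPI process: 6*6*6 domains per cluster x 4*4*4 elements per domain,
# identical for every run above
echo $((6*6*6 * 4*4*4))          # prints 13824
# the total problem grows with the process count, e.g. for the 2744-process run:
echo $((2744 * 6*6*6 * 4*4*4))   # prints 37933056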