Why does the test program exit when executing multiple parallel copies of tests? #67
Comments
I don't think you should use this program for that purpose any longer. I'd look for other solutions. UnixBench hasn't been updated in quite some time. My guess is the main reason most people use it today is for historical or learning use cases.
Lucas
kdLucas
…On Tue, Aug 11, 2020 at 2:22 AM xingxing ***@***.***> wrote:
Hello, I just installed UnixBench on my machine and ran ./Run.
After 30 minutes, I got a result for the single-process run, and it seemed to
work well. But when running 64 parallel copies of tests, I just got
------------------------------------------------------------------------
Benchmark Run: Tue Aug 11 2020 16:46:02 - 16:46:02 64 CPUs in system;
running 64 parallel copies of tests
and then the testing exited.
Could you help me figure out the reason?
Thanks for your kind reminder.
A similar issue was reported in #74.
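For anyone who hits this before moving to another tool: the stock UnixBench `Run` script accepts `-c N` to set the number of parallel copies and `-i N` to set iterations, and it writes its output under a `results/` directory. A minimal diagnostic sketch, assuming a standard UnixBench checkout (the test name `dhry2reg` and the `results/` log naming may vary by version), is to rerun a single lightweight test at the failing copy count and then read the newest log for the real error instead of the truncated console banner:

```shell
# Example invocation (run from the UnixBench directory; commented out
# here because it requires a built UnixBench tree):
#   ./Run -c 64 -i 1 dhry2reg
#
# Then inspect the newest log under results/ for the actual failure:
latest=$(ls -t results/*.log 2>/dev/null | head -n 1)
if [ -n "$latest" ]; then
    # Show the tail of the most recent run log
    tail -n 20 "$latest"
else
    echo "no logs found under results/"
fi
```

If the log ends immediately after the "running 64 parallel copies" banner, the underlying cause is often a resource limit (open files, processes) or a missing build step rather than the benchmark logic itself.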