4 errors, 74 fail, 110 skipped, 3,633 pass in 10h 12m 5s
Annotations
Check warning on line 0 in distributed.tests.test_client
github-actions / Unit Test Results
All 10 runs failed: test_aliases_2 (distributed.tests.test_client)
artifacts/macos-latest-3.11-queue-ci1/pytest.xml [took 1s]
artifacts/ubuntu-latest-3.10-queue-ci1/pytest.xml [took 0s]
artifacts/ubuntu-latest-3.11-queue-ci1/pytest.xml [took 0s]
artifacts/ubuntu-latest-3.9-no_queue-ci1/pytest.xml [took 0s]
artifacts/ubuntu-latest-3.9-queue-ci1/pytest.xml [took 0s]
artifacts/ubuntu-latest-mindeps-numpy-ci1/pytest.xml [took 0s]
artifacts/ubuntu-latest-mindeps-queue-ci1/pytest.xml [took 0s]
artifacts/windows-latest-3.10-queue-ci1/pytest.xml [took 1s]
artifacts/windows-latest-3.11-queue-ci1/pytest.xml [took 0s]
artifacts/windows-latest-3.9-queue-ci1/pytest.xml [took 0s]
Raw output
assert [2] == [1]
At index 0 diff: 2 != 1
Full diff:
- [1]
+ [2]
c = <Client: No scheduler connected>
s = <Scheduler 'tcp://127.0.0.1:54472', workers: 0, cores: 0, tasks: 0>
a = <Worker 'tcp://127.0.0.1:54473', name: 0, status: closed, stored: 0, running: 0/1, ready: 0, comm: 0, waiting: 0>
b = <Worker 'tcp://127.0.0.1:54475', name: 1, status: closed, stored: 0, running: 0/2, ready: 0, comm: 0, waiting: 0>
@gen_cluster(client=True)
async def test_aliases_2(c, s, a, b):
dsk_keys = [
({"x": (inc, 1), "y": "x", "z": "x", "w": (add, "y", "z")}, ["y", "w"]),
({"x": "y", "y": 1}, ["x"]),
({"x": 1, "y": "x", "z": "y", "w": (inc, "z")}, ["w"]),
]
for dsk, keys in dsk_keys:
result = await c.gather(c.get(dsk, keys, sync=False))
> assert list(result) == list(dask.get(dsk, keys))
E assert [2] == [1]
E At index 0 diff: 2 != 1
E Full diff:
E - [1]
E + [2]
distributed\tests\test_client.py:1050: AssertionError
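Note: the expected values in the assertion above come from the plain synchronous scheduler. A minimal local sketch of the same three alias graphs (inc and add redefined here so the snippet is self-contained) shows what dask.get returns for each:

import dask
from operator import add

def inc(x):
    return x + 1

cases = [
    ({"x": (inc, 1), "y": "x", "z": "x", "w": (add, "y", "z")}, ["y", "w"]),  # -> (2, 4)
    ({"x": "y", "y": 1}, ["x"]),                                              # -> (1,)
    ({"x": 1, "y": "x", "z": "y", "w": (inc, "z")}, ["w"]),                   # -> (2,)
]
for dsk, keys in cases:
    # string values that are themselves keys in the graph act as aliases
    print(dask.get(dsk, keys))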
Check warning on line 0 in distributed.tests.test_client
github-actions / Unit Test Results
All 10 runs failed: test_if_intermediates_clear_on_error (distributed.tests.test_client)
artifacts/macos-latest-3.11-queue-ci1/pytest.xml [took 1s]
artifacts/ubuntu-latest-3.10-queue-ci1/pytest.xml [took 0s]
artifacts/ubuntu-latest-3.11-queue-ci1/pytest.xml [took 0s]
artifacts/ubuntu-latest-3.9-no_queue-ci1/pytest.xml [took 0s]
artifacts/ubuntu-latest-3.9-queue-ci1/pytest.xml [took 0s]
artifacts/ubuntu-latest-mindeps-numpy-ci1/pytest.xml [took 0s]
artifacts/ubuntu-latest-mindeps-queue-ci1/pytest.xml [took 0s]
artifacts/windows-latest-3.10-queue-ci1/pytest.xml [took 0s]
artifacts/windows-latest-3.11-queue-ci1/pytest.xml [took 0s]
artifacts/windows-latest-3.9-queue-ci1/pytest.xml [took 0s]
Raw output
assert not True
+ where True = any(<generator object test_if_intermediates_clear_on_error.<locals>.<genexpr> at 0x0000022B36842C00>)
c = <Client: No scheduler connected>
s = <Scheduler 'tcp://127.0.0.1:54786', workers: 0, cores: 0, tasks: 0>
a = <Worker 'tcp://127.0.0.1:54787', name: 0, status: closed, stored: 0, running: 0/1, ready: 0, comm: 0, waiting: 0>
b = <Worker 'tcp://127.0.0.1:54789', name: 1, status: closed, stored: 0, running: 0/2, ready: 0, comm: 0, waiting: 0>
@gen_cluster(client=True)
async def test_if_intermediates_clear_on_error(c, s, a, b):
x = delayed(div, pure=True)(1, 0)
y = delayed(div, pure=True)(1, 2)
z = delayed(add, pure=True)(x, y)
f = c.compute(z)
with pytest.raises(ZeroDivisionError):
await f
s.validate_state()
> assert not any(ts.who_has for ts in s.tasks.values())
E assert not True
E + where True = any(<generator object test_if_intermediates_clear_on_error.<locals>.<genexpr> at 0x0000022B36842C00>)
distributed\tests\test_client.py:1424: AssertionError
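Only the x branch of the graph above fails; a minimal local sketch (div redefined here as a stand-in for the test helper) confirms the error surfaces before add ever runs:

from operator import add
from dask import delayed

def div(a, b):  # stand-in for the test helper of the same name
    return a / b

x = delayed(div, pure=True)(1, 0)
y = delayed(div, pure=True)(1, 2)
z = delayed(add, pure=True)(x, y)
try:
    z.compute()  # the x branch raises, so add never executes
except ZeroDivisionError as exc:
    print("failed as expected:", exc)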
Check warning on line 0 in distributed.tests.test_client
github-actions / Unit Test Results
7 out of 10 runs failed: test_forget_in_flight (distributed.tests.test_client)
artifacts/macos-latest-3.11-queue-ci1/pytest.xml [took 1s]
artifacts/ubuntu-latest-3.10-queue-ci1/pytest.xml [took 0s]
artifacts/ubuntu-latest-3.11-queue-ci1/pytest.xml [took 0s]
artifacts/ubuntu-latest-3.9-no_queue-ci1/pytest.xml [took 0s]
artifacts/ubuntu-latest-3.9-queue-ci1/pytest.xml [took 0s]
artifacts/ubuntu-latest-mindeps-numpy-ci1/pytest.xml [took 0s]
artifacts/ubuntu-latest-mindeps-queue-ci1/pytest.xml [took 0s]
Raw output
AssertionError: assert 'ab' not in {'ab': <TaskState 'ab' processing>, 'ac': <TaskState 'ac' processing>, 'slowinc-664c9cf9a2b0c4108ad86e0378f1a03d': <TaskState 'slowinc-664c9cf9a2b0c4108ad86e0378f1a03d' memory>, 'slowinc-76db02cfa5d6770cc340c878ad391524': <TaskState 'slowinc-76db02cfa5d6770cc340c878ad391524' memory>, ...}
+ where {'ab': <TaskState 'ab' processing>, 'ac': <TaskState 'ac' processing>, 'slowinc-664c9cf9a2b0c4108ad86e0378f1a03d': <TaskState 'slowinc-664c9cf9a2b0c4108ad86e0378f1a03d' memory>, 'slowinc-76db02cfa5d6770cc340c878ad391524': <TaskState 'slowinc-76db02cfa5d6770cc340c878ad391524' memory>, ...} = <Scheduler 'tcp://127.0.0.1:33277', workers: 2, cores: 3, tasks: 5>.tasks
e = <Client: No scheduler connected>
s = <Scheduler 'tcp://127.0.0.1:33277', workers: 0, cores: 0, tasks: 0>
A = <Worker 'tcp://127.0.0.1:33243', name: 0, status: closed, stored: 0, running: 0/1, ready: 0, comm: 0, waiting: 0>
B = <Worker 'tcp://127.0.0.1:44357', name: 1, status: closed, stored: 0, running: 0/2, ready: 0, comm: 0, waiting: 0>
@gen_cluster(client=True)
async def test_forget_in_flight(e, s, A, B):
delayed2 = partial(delayed, pure=True)
a, b, c, d = (delayed2(slowinc)(i) for i in range(4))
ab = delayed2(slowadd)(a, b, dask_key_name="ab")
cd = delayed2(slowadd)(c, d, dask_key_name="cd")
ac = delayed2(slowadd)(a, c, dask_key_name="ac")
acab = delayed2(slowadd)(ac, ab, dask_key_name="acab")
x, y = e.compute([ac, acab])
s.validate_state()
for _ in range(5):
await asyncio.sleep(0.01)
s.validate_state()
s.client_releases_keys(keys=[y.key], client=e.id)
s.validate_state()
for k in [acab.key, ab.key, b.key]:
> assert k not in s.tasks
E AssertionError: assert 'ab' not in {'ab': <TaskState 'ab' processing>, 'ac': <TaskState 'ac' processing>, 'slowinc-664c9cf9a2b0c4108ad86e0378f1a03d': <TaskState 'slowinc-664c9cf9a2b0c4108ad86e0378f1a03d' memory>, 'slowinc-76db02cfa5d6770cc340c878ad391524': <TaskState 'slowinc-76db02cfa5d6770cc340c878ad391524' memory>, ...}
E + where {'ab': <TaskState 'ab' processing>, 'ac': <TaskState 'ac' processing>, 'slowinc-664c9cf9a2b0c4108ad86e0378f1a03d': <TaskState 'slowinc-664c9cf9a2b0c4108ad86e0378f1a03d' memory>, 'slowinc-76db02cfa5d6770cc340c878ad391524': <TaskState 'slowinc-76db02cfa5d6770cc340c878ad391524' memory>, ...} = <Scheduler 'tcp://127.0.0.1:33277', workers: 2, cores: 3, tasks: 5>.tasks
distributed/tests/test_client.py:2195: AssertionError
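The literal keys the assertion looks up ('ab', 'acab', ...) come from dask_key_name; a small self-contained sketch (inc redefined here) of how such keys end up attached to delayed calls:

from functools import partial
from operator import add
from dask import delayed

def inc(x):
    return x + 1

delayed2 = partial(delayed, pure=True)
a, b = delayed2(inc)(1), delayed2(inc)(2)
ab = delayed2(add)(a, b, dask_key_name="ab")
print(ab.key)        # 'ab' -- the key the test expects to be forgotten from s.tasks
print(ab.compute())  # 5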
Check warning on line 0 in distributed.tests.test_client
github-actions / Unit Test Results
All 10 runs failed: test_recreate_error_delayed (distributed.tests.test_client)
artifacts/macos-latest-3.11-queue-ci1/pytest.xml [took 31s]
artifacts/ubuntu-latest-3.10-queue-ci1/pytest.xml [took 30s]
artifacts/ubuntu-latest-3.11-queue-ci1/pytest.xml [took 30s]
artifacts/ubuntu-latest-3.9-no_queue-ci1/pytest.xml [took 30s]
artifacts/ubuntu-latest-3.9-queue-ci1/pytest.xml [took 30s]
artifacts/ubuntu-latest-mindeps-numpy-ci1/pytest.xml [took 30s]
artifacts/ubuntu-latest-mindeps-queue-ci1/pytest.xml [took 30s]
artifacts/windows-latest-3.10-queue-ci1/pytest.xml [took 30s]
artifacts/windows-latest-3.11-queue-ci1/pytest.xml [took 30s]
artifacts/windows-latest-3.9-queue-ci1/pytest.xml [took 30s]
Raw output
asyncio.exceptions.TimeoutError: Test timeout (30) hit after 29.99010705947876s.
========== Test stack trace starts here ==========
Stack for <Task pending name='Task-90992' coro=<test_recreate_error_delayed() running at D:\a\distributed\distributed\distributed\tests\test_client.py:4772> wait_for=<Future pending cb=[Task.task_wakeup()]>> (most recent call last):
File "D:\a\distributed\distributed\distributed\tests\test_client.py", line 4772, in test_recreate_error_delayed
function, args, kwargs = await c._get_components_from_future(error_f)
args = (), kwds = {}
@wraps(func)
def inner(*args, **kwds):
with self._recreate_cm():
> return func(*args, **kwds)
C:\Miniconda3\envs\dask-distributed\lib\contextlib.py:79:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
C:\Miniconda3\envs\dask-distributed\lib\contextlib.py:79: in inner
return func(*args, **kwds)
distributed\utils_test.py:1101: in test_func
return _run_and_close_tornado(async_fn_outer)
distributed\utils_test.py:378: in _run_and_close_tornado
return asyncio_run(inner_fn(), loop_factory=get_loop_factory())
distributed\compatibility.py:236: in asyncio_run
return loop.run_until_complete(main)
C:\Miniconda3\envs\dask-distributed\lib\asyncio\base_events.py:649: in run_until_complete
return future.result()
distributed\utils_test.py:375: in inner_fn
return await async_fn(*args, **kwargs)
distributed\utils_test.py:1098: in async_fn_outer
return await utils_wait_for(async_fn(), timeout=timeout * 2)
distributed\utils.py:1920: in wait_for
return await asyncio.wait_for(fut, timeout)
C:\Miniconda3\envs\dask-distributed\lib\asyncio\tasks.py:445: in wait_for
return fut.result()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
async def async_fn():
result = None
with dask.config.set(config):
async with _cluster_factory() as (s, workers), _client_factory(
s
) as c:
args = [s] + workers
if c is not None:
args = [c] + args
try:
coro = func(*args, *outer_args, **kwargs)
task = asyncio.create_task(coro)
coro2 = utils_wait_for(
asyncio.shield(task), timeout=deadline.remaining
)
result = await coro2
validate_state(s, *workers)
except asyncio.TimeoutError:
assert task
elapsed = deadline.elapsed
buffer = io.StringIO()
# This stack indicates where the coro/test is suspended
task.print_stack(file=buffer)
if cluster_dump_directory:
await dump_cluster_state(
s=s,
ws=workers,
output_dir=cluster_dump_directory,
func_name=func.__name__,
)
task.cancel()
while not task.cancelled():
await asyncio.sleep(0.01)
# Hopefully, the hang has been caused by inconsistent
# state, which should be much more meaningful than the
# timeout
validate_state(s, *workers)
# Remove as much of the traceback as possible; it's
# uninteresting boilerplate from utils_test and asyncio
# and not from the code being tested.
> raise asyncio.TimeoutError(
f"Test timeout ({timeout}) hit after {elapsed}s.\n"
"========== Test stack trace starts here ==========\n"
f"{buffer.getvalue()}"
) from None
E asyncio.exceptions.TimeoutError: Test timeout (30) hit after 29.99010705947876s.
E ========== Test stack trace starts here ==========
E Stack for <Task pending name='Task-90992' coro=<test_recreate_error_delayed() running at D:\a\distributed\distributed\distributed\tests\test_client.py:4772> wait_for=<Future pending cb=[Task.task_wakeup()]>> (most recent call last):
E File "D:\a\distributed\distributed\distributed\tests\test_client.py", line 4772, in test_recreate_error_delayed
E function, args, kwargs = await c._get_components_from_future(error_f)
distributed\utils_test.py:1040: TimeoutError
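The hang occurs inside the private Client._get_components_from_future helper. For reference, the public entry point built on that machinery is Client.recreate_error_locally; a hedged usage sketch (in-process cluster, with div defined here only for illustration):

from dask import delayed
from dask.distributed import Client

def div(a, b):
    return a / b

client = Client(processes=False)
future = client.compute(delayed(div)(1, 0))
try:
    # Fetches the failing task's function and arguments from the scheduler
    # and re-runs them in this process so the error can be debugged locally.
    client.recreate_error_locally(future)
except ZeroDivisionError:
    print("reproduced the remote error locally")
client.close()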
Check warning on line 0 in distributed.tests.test_client
github-actions / Unit Test Results
8 out of 10 runs failed: test_recreate_error_collection (distributed.tests.test_client)
artifacts/macos-latest-3.11-queue-ci1/pytest.xml [took 31s]
artifacts/ubuntu-latest-3.10-queue-ci1/pytest.xml [took 31s]
artifacts/ubuntu-latest-3.11-queue-ci1/pytest.xml [took 31s]
artifacts/ubuntu-latest-3.9-no_queue-ci1/pytest.xml [took 32s]
artifacts/ubuntu-latest-3.9-queue-ci1/pytest.xml [took 31s]
artifacts/windows-latest-3.10-queue-ci1/pytest.xml [took 31s]
artifacts/windows-latest-3.11-queue-ci1/pytest.xml [took 31s]
artifacts/windows-latest-3.9-queue-ci1/pytest.xml [took 31s]
Raw output
asyncio.exceptions.TimeoutError: Test timeout (30) hit after 30.014450311660767s.
========== Test stack trace starts here ==========
Stack for <Task pending name='Task-92949' coro=<test_recreate_error_collection() running at D:\a\distributed\distributed\distributed\tests\test_client.py:4826> wait_for=<Future pending cb=[Task.task_wakeup()]>> (most recent call last):
File "D:\a\distributed\distributed\distributed\tests\test_client.py", line 4826, in test_recreate_error_collection
function, args, kwargs = await c._get_components_from_future(error_f)
args = (), kwds = {}
@wraps(func)
def inner(*args, **kwds):
with self._recreate_cm():
> return func(*args, **kwds)
C:\Miniconda3\envs\dask-distributed\lib\contextlib.py:79:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
C:\Miniconda3\envs\dask-distributed\lib\contextlib.py:79: in inner
return func(*args, **kwds)
distributed\utils_test.py:1101: in test_func
return _run_and_close_tornado(async_fn_outer)
distributed\utils_test.py:378: in _run_and_close_tornado
return asyncio_run(inner_fn(), loop_factory=get_loop_factory())
distributed\compatibility.py:236: in asyncio_run
return loop.run_until_complete(main)
C:\Miniconda3\envs\dask-distributed\lib\asyncio\base_events.py:649: in run_until_complete
return future.result()
distributed\utils_test.py:375: in inner_fn
return await async_fn(*args, **kwargs)
distributed\utils_test.py:1098: in async_fn_outer
return await utils_wait_for(async_fn(), timeout=timeout * 2)
distributed\utils.py:1920: in wait_for
return await asyncio.wait_for(fut, timeout)
C:\Miniconda3\envs\dask-distributed\lib\asyncio\tasks.py:445: in wait_for
return fut.result()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
async def async_fn():
result = None
with dask.config.set(config):
async with _cluster_factory() as (s, workers), _client_factory(
s
) as c:
args = [s] + workers
if c is not None:
args = [c] + args
try:
coro = func(*args, *outer_args, **kwargs)
task = asyncio.create_task(coro)
coro2 = utils_wait_for(
asyncio.shield(task), timeout=deadline.remaining
)
result = await coro2
validate_state(s, *workers)
except asyncio.TimeoutError:
assert task
elapsed = deadline.elapsed
buffer = io.StringIO()
# This stack indicates where the coro/test is suspended
task.print_stack(file=buffer)
if cluster_dump_directory:
await dump_cluster_state(
s=s,
ws=workers,
output_dir=cluster_dump_directory,
func_name=func.__name__,
)
task.cancel()
while not task.cancelled():
await asyncio.sleep(0.01)
# Hopefully, the hang has been caused by inconsistent
# state, which should be much more meaningful than the
# timeout
validate_state(s, *workers)
# Remove as much of the traceback as possible; it's
# uninteresting boilerplate from utils_test and asyncio
# and not from the code being tested.
> raise asyncio.TimeoutError(
f"Test timeout ({timeout}) hit after {elapsed}s.\n"
"========== Test stack trace starts here ==========\n"
f"{buffer.getvalue()}"
) from None
E asyncio.exceptions.TimeoutError: Test timeout (30) hit after 30.014450311660767s.
E ========== Test stack trace starts here ==========
E Stack for <Task pending name='Task-92949' coro=<test_recreate_error_collection() running at D:\a\distributed\distributed\distributed\tests\test_client.py:4826> wait_for=<Future pending cb=[Task.task_wakeup()]>> (most recent call last):
E File "D:\a\distributed\distributed\distributed\tests\test_client.py", line 4826, in test_recreate_error_collection
E function, args, kwargs = await c._get_components_from_future(error_f)
distributed\utils_test.py:1040: TimeoutError
Check warning on line 0 in distributed.tests.test_client
github-actions / Unit Test Results
5 out of 7 runs failed: test_recreate_error_array (distributed.tests.test_client)
artifacts/ubuntu-latest-3.10-queue-ci1/pytest.xml [took 30s]
artifacts/ubuntu-latest-3.9-no_queue-ci1/pytest.xml [took 31s]
artifacts/ubuntu-latest-3.9-queue-ci1/pytest.xml [took 31s]
artifacts/windows-latest-3.10-queue-ci1/pytest.xml [took 31s]
artifacts/windows-latest-3.9-queue-ci1/pytest.xml [took 31s]
Raw output
asyncio.exceptions.TimeoutError: Test timeout (30) hit after 29.980392456054688s.
========== Test stack trace starts here ==========
Stack for <Task pending name='Task-94850' coro=<test_recreate_error_array() running at D:\a\distributed\distributed\distributed\tests\test_client.py:4845> wait_for=<Future pending cb=[Task.task_wakeup()]>> (most recent call last):
File "D:\a\distributed\distributed\distributed\tests\test_client.py", line 4845, in test_recreate_error_array
function, args, kwargs = await c._get_components_from_future(error_f)
args = (), kwds = {}
@wraps(func)
def inner(*args, **kwds):
with self._recreate_cm():
> return func(*args, **kwds)
C:\Miniconda3\envs\dask-distributed\lib\contextlib.py:79:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
C:\Miniconda3\envs\dask-distributed\lib\contextlib.py:79: in inner
return func(*args, **kwds)
distributed\utils_test.py:1101: in test_func
return _run_and_close_tornado(async_fn_outer)
distributed\utils_test.py:378: in _run_and_close_tornado
return asyncio_run(inner_fn(), loop_factory=get_loop_factory())
distributed\compatibility.py:236: in asyncio_run
return loop.run_until_complete(main)
C:\Miniconda3\envs\dask-distributed\lib\asyncio\base_events.py:649: in run_until_complete
return future.result()
distributed\utils_test.py:375: in inner_fn
return await async_fn(*args, **kwargs)
distributed\utils_test.py:1098: in async_fn_outer
return await utils_wait_for(async_fn(), timeout=timeout * 2)
distributed\utils.py:1920: in wait_for
return await asyncio.wait_for(fut, timeout)
C:\Miniconda3\envs\dask-distributed\lib\asyncio\tasks.py:445: in wait_for
return fut.result()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
async def async_fn():
result = None
with dask.config.set(config):
async with _cluster_factory() as (s, workers), _client_factory(
s
) as c:
args = [s] + workers
if c is not None:
args = [c] + args
try:
coro = func(*args, *outer_args, **kwargs)
task = asyncio.create_task(coro)
coro2 = utils_wait_for(
asyncio.shield(task), timeout=deadline.remaining
)
result = await coro2
validate_state(s, *workers)
except asyncio.TimeoutError:
assert task
elapsed = deadline.elapsed
buffer = io.StringIO()
# This stack indicates where the coro/test is suspended
task.print_stack(file=buffer)
if cluster_dump_directory:
await dump_cluster_state(
s=s,
ws=workers,
output_dir=cluster_dump_directory,
func_name=func.__name__,
)
task.cancel()
while not task.cancelled():
await asyncio.sleep(0.01)
# Hopefully, the hang has been caused by inconsistent
# state, which should be much more meaningful than the
# timeout
validate_state(s, *workers)
# Remove as much of the traceback as possible; it's
# uninteresting boilerplate from utils_test and asyncio
# and not from the code being tested.
> raise asyncio.TimeoutError(
f"Test timeout ({timeout}) hit after {elapsed}s.\n"
"========== Test stack trace starts here ==========\n"
f"{buffer.getvalue()}"
) from None
E asyncio.exceptions.TimeoutError: Test timeout (30) hit after 29.980392456054688s.
E ========== Test stack trace starts here ==========
E Stack for <Task pending name='Task-94850' coro=<test_recreate_error_array() running at D:\a\distributed\distributed\distributed\tests\test_client.py:4845> wait_for=<Future pending cb=[Task.task_wakeup()]>> (most recent call last):
E File "D:\a\distributed\distributed\distributed\tests\test_client.py", line 4845, in test_recreate_error_array
E function, args, kwargs = await c._get_components_from_future(error_f)
distributed\utils_test.py:1040: TimeoutError
Check warning on line 0 in distributed.tests.test_client
github-actions / Unit Test Results
All 7 runs failed: test_recreate_task_delayed (distributed.tests.test_client)
artifacts/ubuntu-latest-3.10-queue-ci1/pytest.xml [took 30s]
artifacts/ubuntu-latest-3.9-no_queue-ci1/pytest.xml [took 30s]
artifacts/ubuntu-latest-3.9-queue-ci1/pytest.xml [took 30s]
artifacts/ubuntu-latest-mindeps-numpy-ci1/pytest.xml [took 30s]
artifacts/ubuntu-latest-mindeps-queue-ci1/pytest.xml [took 30s]
artifacts/windows-latest-3.10-queue-ci1/pytest.xml [took 30s]
artifacts/windows-latest-3.9-queue-ci1/pytest.xml [took 30s]
Raw output
asyncio.exceptions.TimeoutError: Test timeout (30) hit after 29.99206781387329s.
========== Test stack trace starts here ==========
Stack for <Task pending name='Task-96792' coro=<test_recreate_task_delayed() running at D:\a\distributed\distributed\distributed\tests\test_client.py:4880> wait_for=<Future pending cb=[Task.task_wakeup()]>> (most recent call last):
File "D:\a\distributed\distributed\distributed\tests\test_client.py", line 4880, in test_recreate_task_delayed
function, args, kwargs = await c._get_components_from_future(f)
args = (), kwds = {}
@wraps(func)
def inner(*args, **kwds):
with self._recreate_cm():
> return func(*args, **kwds)
C:\Miniconda3\envs\dask-distributed\lib\contextlib.py:79:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
C:\Miniconda3\envs\dask-distributed\lib\contextlib.py:79: in inner
return func(*args, **kwds)
distributed\utils_test.py:1101: in test_func
return _run_and_close_tornado(async_fn_outer)
distributed\utils_test.py:378: in _run_and_close_tornado
return asyncio_run(inner_fn(), loop_factory=get_loop_factory())
distributed\compatibility.py:236: in asyncio_run
return loop.run_until_complete(main)
C:\Miniconda3\envs\dask-distributed\lib\asyncio\base_events.py:649: in run_until_complete
return future.result()
distributed\utils_test.py:375: in inner_fn
return await async_fn(*args, **kwargs)
distributed\utils_test.py:1098: in async_fn_outer
return await utils_wait_for(async_fn(), timeout=timeout * 2)
distributed\utils.py:1920: in wait_for
return await asyncio.wait_for(fut, timeout)
C:\Miniconda3\envs\dask-distributed\lib\asyncio\tasks.py:445: in wait_for
return fut.result()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
async def async_fn():
result = None
with dask.config.set(config):
async with _cluster_factory() as (s, workers), _client_factory(
s
) as c:
args = [s] + workers
if c is not None:
args = [c] + args
try:
coro = func(*args, *outer_args, **kwargs)
task = asyncio.create_task(coro)
coro2 = utils_wait_for(
asyncio.shield(task), timeout=deadline.remaining
)
result = await coro2
validate_state(s, *workers)
except asyncio.TimeoutError:
assert task
elapsed = deadline.elapsed
buffer = io.StringIO()
# This stack indicates where the coro/test is suspended
task.print_stack(file=buffer)
if cluster_dump_directory:
await dump_cluster_state(
s=s,
ws=workers,
output_dir=cluster_dump_directory,
func_name=func.__name__,
)
task.cancel()
while not task.cancelled():
await asyncio.sleep(0.01)
# Hopefully, the hang has been caused by inconsistent
# state, which should be much more meaningful than the
# timeout
validate_state(s, *workers)
# Remove as much of the traceback as possible; it's
# uninteresting boilerplate from utils_test and asyncio
# and not from the code being tested.
> raise asyncio.TimeoutError(
f"Test timeout ({timeout}) hit after {elapsed}s.\n"
"========== Test stack trace starts here ==========\n"
f"{buffer.getvalue()}"
) from None
E asyncio.exceptions.TimeoutError: Test timeout (30) hit after 29.99206781387329s.
E ========== Test stack trace starts here ==========
E Stack for <Task pending name='Task-96792' coro=<test_recreate_task_delayed() running at D:\a\distributed\distributed\distributed\tests\test_client.py:4880> wait_for=<Future pending cb=[Task.task_wakeup()]>> (most recent call last):
E File "D:\a\distributed\distributed\distributed\tests\test_client.py", line 4880, in test_recreate_task_delayed
E function, args, kwargs = await c._get_components_from_future(f)
distributed\utils_test.py:1040: TimeoutError
Check warning on line 0 in distributed.tests.test_client
github-actions / Unit Test Results
5 out of 7 runs failed: test_recreate_task_collection (distributed.tests.test_client)
artifacts/ubuntu-latest-3.10-queue-ci1/pytest.xml [took 31s]
artifacts/ubuntu-latest-3.9-no_queue-ci1/pytest.xml [took 31s]
artifacts/ubuntu-latest-3.9-queue-ci1/pytest.xml [took 31s]
artifacts/windows-latest-3.10-queue-ci1/pytest.xml [took 31s]
artifacts/windows-latest-3.9-queue-ci1/pytest.xml [took 31s]
Raw output
asyncio.exceptions.TimeoutError: Test timeout (30) hit after 30.011723041534424s.
========== Test stack trace starts here ==========
Stack for <Task pending name='Task-98756' coro=<test_recreate_task_collection() running at D:\a\distributed\distributed\distributed\tests\test_client.py:4934> wait_for=<Future pending cb=[Task.task_wakeup()]>> (most recent call last):
File "D:\a\distributed\distributed\distributed\tests\test_client.py", line 4934, in test_recreate_task_collection
function, args, kwargs = await c._get_components_from_future(f)
args = (), kwds = {}
@wraps(func)
def inner(*args, **kwds):
with self._recreate_cm():
> return func(*args, **kwds)
C:\Miniconda3\envs\dask-distributed\lib\contextlib.py:79:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
C:\Miniconda3\envs\dask-distributed\lib\contextlib.py:79: in inner
return func(*args, **kwds)
distributed\utils_test.py:1101: in test_func
return _run_and_close_tornado(async_fn_outer)
distributed\utils_test.py:378: in _run_and_close_tornado
return asyncio_run(inner_fn(), loop_factory=get_loop_factory())
distributed\compatibility.py:236: in asyncio_run
return loop.run_until_complete(main)
C:\Miniconda3\envs\dask-distributed\lib\asyncio\base_events.py:649: in run_until_complete
return future.result()
distributed\utils_test.py:375: in inner_fn
return await async_fn(*args, **kwargs)
distributed\utils_test.py:1098: in async_fn_outer
return await utils_wait_for(async_fn(), timeout=timeout * 2)
distributed\utils.py:1920: in wait_for
return await asyncio.wait_for(fut, timeout)
C:\Miniconda3\envs\dask-distributed\lib\asyncio\tasks.py:445: in wait_for
return fut.result()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
async def async_fn():
result = None
with dask.config.set(config):
async with _cluster_factory() as (s, workers), _client_factory(
s
) as c:
args = [s] + workers
if c is not None:
args = [c] + args
try:
coro = func(*args, *outer_args, **kwargs)
task = asyncio.create_task(coro)
coro2 = utils_wait_for(
asyncio.shield(task), timeout=deadline.remaining
)
result = await coro2
validate_state(s, *workers)
except asyncio.TimeoutError:
assert task
elapsed = deadline.elapsed
buffer = io.StringIO()
# This stack indicates where the coro/test is suspended
task.print_stack(file=buffer)
if cluster_dump_directory:
await dump_cluster_state(
s=s,
ws=workers,
output_dir=cluster_dump_directory,
func_name=func.__name__,
)
task.cancel()
while not task.cancelled():
await asyncio.sleep(0.01)
# Hopefully, the hang has been caused by inconsistent
# state, which should be much more meaningful than the
# timeout
validate_state(s, *workers)
# Remove as much of the traceback as possible; it's
# uninteresting boilerplate from utils_test and asyncio
# and not from the code being tested.
> raise asyncio.TimeoutError(
f"Test timeout ({timeout}) hit after {elapsed}s.\n"
"========== Test stack trace starts here ==========\n"
f"{buffer.getvalue()}"
) from None
E asyncio.exceptions.TimeoutError: Test timeout (30) hit after 30.011723041534424s.
E ========== Test stack trace starts here ==========
E Stack for <Task pending name='Task-98756' coro=<test_recreate_task_collection() running at D:\a\distributed\distributed\distributed\tests\test_client.py:4934> wait_for=<Future pending cb=[Task.task_wakeup()]>> (most recent call last):
E File "D:\a\distributed\distributed\distributed\tests\test_client.py", line 4934, in test_recreate_task_collection
E function, args, kwargs = await c._get_components_from_future(f)
distributed\utils_test.py:1040: TimeoutError
Check warning on line 0 in distributed.tests.test_client
github-actions / Unit Test Results
6 out of 7 runs failed: test_recreate_task_array (distributed.tests.test_client)
artifacts/ubuntu-latest-3.10-queue-ci1/pytest.xml [took 30s]
artifacts/ubuntu-latest-3.9-no_queue-ci1/pytest.xml [took 30s]
artifacts/ubuntu-latest-3.9-queue-ci1/pytest.xml [took 30s]
artifacts/ubuntu-latest-mindeps-numpy-ci1/pytest.xml [took 30s]
artifacts/windows-latest-3.10-queue-ci1/pytest.xml [took 30s]
artifacts/windows-latest-3.9-queue-ci1/pytest.xml [took 30s]
Raw output
asyncio.exceptions.TimeoutError: Test timeout (30) hit after 30.024043560028076s.
========== Test stack trace starts here ==========
Stack for <Task pending name='Task-100702' coro=<test_recreate_task_array() running at D:\a\distributed\distributed\distributed\tests\test_client.py:4954> wait_for=<Future pending cb=[Task.task_wakeup()]>> (most recent call last):
File "D:\a\distributed\distributed\distributed\tests\test_client.py", line 4954, in test_recreate_task_array
function, args, kwargs = await c._get_components_from_future(f)
args = (), kwds = {}
@wraps(func)
def inner(*args, **kwds):
with self._recreate_cm():
> return func(*args, **kwds)
C:\Miniconda3\envs\dask-distributed\lib\contextlib.py:79:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
C:\Miniconda3\envs\dask-distributed\lib\contextlib.py:79: in inner
return func(*args, **kwds)
distributed\utils_test.py:1101: in test_func
return _run_and_close_tornado(async_fn_outer)
distributed\utils_test.py:378: in _run_and_close_tornado
return asyncio_run(inner_fn(), loop_factory=get_loop_factory())
distributed\compatibility.py:236: in asyncio_run
return loop.run_until_complete(main)
C:\Miniconda3\envs\dask-distributed\lib\asyncio\base_events.py:649: in run_until_complete
return future.result()
distributed\utils_test.py:375: in inner_fn
return await async_fn(*args, **kwargs)
distributed\utils_test.py:1098: in async_fn_outer
return await utils_wait_for(async_fn(), timeout=timeout * 2)
distributed\utils.py:1920: in wait_for
return await asyncio.wait_for(fut, timeout)
C:\Miniconda3\envs\dask-distributed\lib\asyncio\tasks.py:445: in wait_for
return fut.result()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
async def async_fn():
result = None
with dask.config.set(config):
async with _cluster_factory() as (s, workers), _client_factory(
s
) as c:
args = [s] + workers
if c is not None:
args = [c] + args
try:
coro = func(*args, *outer_args, **kwargs)
task = asyncio.create_task(coro)
coro2 = utils_wait_for(
asyncio.shield(task), timeout=deadline.remaining
)
result = await coro2
validate_state(s, *workers)
except asyncio.TimeoutError:
assert task
elapsed = deadline.elapsed
buffer = io.StringIO()
# This stack indicates where the coro/test is suspended
task.print_stack(file=buffer)
if cluster_dump_directory:
await dump_cluster_state(
s=s,
ws=workers,
output_dir=cluster_dump_directory,
func_name=func.__name__,
)
task.cancel()
while not task.cancelled():
await asyncio.sleep(0.01)
# Hopefully, the hang has been caused by inconsistent
# state, which should be much more meaningful than the
# timeout
validate_state(s, *workers)
# Remove as much of the traceback as possible; it's
# uninteresting boilerplate from utils_test and asyncio
# and not from the code being tested.
> raise asyncio.TimeoutError(
f"Test timeout ({timeout}) hit after {elapsed}s.\n"
"========== Test stack trace starts here ==========\n"
f"{buffer.getvalue()}"
) from None
E asyncio.exceptions.TimeoutError: Test timeout (30) hit after 30.024043560028076s.
E ========== Test stack trace starts here ==========
E Stack for <Task pending name='Task-100702' coro=<test_recreate_task_array() running at D:\a\distributed\distributed\distributed\tests\test_client.py:4954> wait_for=<Future pending cb=[Task.task_wakeup()]>> (most recent call last):
E File "D:\a\distributed\distributed\distributed\tests\test_client.py", line 4954, in test_recreate_task_array
E function, args, kwargs = await c._get_components_from_future(f)
distributed\utils_test.py:1040: TimeoutError
Check warning on line 0 in distributed.tests.test_client
github-actions / Unit Test Results
All 7 runs failed: test_robust_undeserializable (distributed.tests.test_client)
artifacts/ubuntu-latest-3.10-queue-ci1/pytest.xml [took 30s]
artifacts/ubuntu-latest-3.9-no_queue-ci1/pytest.xml [took 30s]
artifacts/ubuntu-latest-3.9-queue-ci1/pytest.xml [took 30s]
artifacts/ubuntu-latest-mindeps-numpy-ci1/pytest.xml [took 30s]
artifacts/ubuntu-latest-mindeps-queue-ci1/pytest.xml [took 30s]
artifacts/windows-latest-3.10-queue-ci1/pytest.xml [took 30s]
artifacts/windows-latest-3.9-queue-ci1/pytest.xml [took 30s]
Raw output
asyncio.exceptions.TimeoutError: Test timeout (30) hit after 29.994189977645874s.
========== Test stack trace starts here ==========
Stack for <Task pending name='Task-105242' coro=<test_robust_undeserializable() running at D:\a\distributed\distributed\distributed\tests\test_client.py:5110> wait_for=<Future pending cb=[Task.task_wakeup()]>> (most recent call last):
File "D:\a\distributed\distributed\distributed\tests\test_client.py", line 5110, in test_robust_undeserializable
await wait(future)
args = (), kwds = {}
@wraps(func)
def inner(*args, **kwds):
with self._recreate_cm():
> return func(*args, **kwds)
C:\Miniconda3\envs\dask-distributed\lib\contextlib.py:79:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
C:\Miniconda3\envs\dask-distributed\lib\contextlib.py:79: in inner
return func(*args, **kwds)
distributed\utils_test.py:1101: in test_func
return _run_and_close_tornado(async_fn_outer)
distributed\utils_test.py:378: in _run_and_close_tornado
return asyncio_run(inner_fn(), loop_factory=get_loop_factory())
distributed\compatibility.py:236: in asyncio_run
return loop.run_until_complete(main)
C:\Miniconda3\envs\dask-distributed\lib\asyncio\base_events.py:649: in run_until_complete
return future.result()
distributed\utils_test.py:375: in inner_fn
return await async_fn(*args, **kwargs)
distributed\utils_test.py:1098: in async_fn_outer
return await utils_wait_for(async_fn(), timeout=timeout * 2)
distributed\utils.py:1920: in wait_for
return await asyncio.wait_for(fut, timeout)
C:\Miniconda3\envs\dask-distributed\lib\asyncio\tasks.py:445: in wait_for
return fut.result()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
async def async_fn():
result = None
with dask.config.set(config):
async with _cluster_factory() as (s, workers), _client_factory(
s
) as c:
args = [s] + workers
if c is not None:
args = [c] + args
try:
coro = func(*args, *outer_args, **kwargs)
task = asyncio.create_task(coro)
coro2 = utils_wait_for(
asyncio.shield(task), timeout=deadline.remaining
)
result = await coro2
validate_state(s, *workers)
except asyncio.TimeoutError:
assert task
elapsed = deadline.elapsed
buffer = io.StringIO()
# This stack indicates where the coro/test is suspended
task.print_stack(file=buffer)
if cluster_dump_directory:
await dump_cluster_state(
s=s,
ws=workers,
output_dir=cluster_dump_directory,
func_name=func.__name__,
)
task.cancel()
while not task.cancelled():
await asyncio.sleep(0.01)
# Hopefully, the hang has been caused by inconsistent
# state, which should be much more meaningful than the
# timeout
validate_state(s, *workers)
# Remove as much of the traceback as possible; it's
# uninteresting boilerplate from utils_test and asyncio
# and not from the code being tested.
> raise asyncio.TimeoutError(
f"Test timeout ({timeout}) hit after {elapsed}s.\n"
"========== Test stack trace starts here ==========\n"
f"{buffer.getvalue()}"
) from None
E asyncio.exceptions.TimeoutError: Test timeout (30) hit after 29.994189977645874s.
E ========== Test stack trace starts here ==========
E Stack for <Task pending name='Task-105242' coro=<test_robust_undeserializable() running at D:\a\distributed\distributed\distributed\tests\test_client.py:5110> wait_for=<Future pending cb=[Task.task_wakeup()]>> (most recent call last):
E File "D:\a\distributed\distributed\distributed\tests\test_client.py", line 5110, in test_robust_undeserializable
E await wait(future)
distributed\utils_test.py:1040: TimeoutError
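The test name suggests a payload that serializes fine but cannot be deserialized on the worker. A hypothetical minimal example of such an object (not the helper the test actually uses):

import pickle

def _raise_on_load():
    raise TypeError("refusing to deserialize")

class Undeserializable:
    """Pickles without error, but blows up when unpickled."""
    def __reduce__(self):
        return (_raise_on_load, ())

blob = pickle.dumps(Undeserializable())  # serialization succeeds
try:
    pickle.loads(blob)                   # deserialization raises
except TypeError as exc:
    print("load failed:", exc)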
Check warning on line 0 in distributed.tests.test_client
github-actions / Unit Test Results
All 7 runs failed: test_robust_undeserializable_function (distributed.tests.test_client)
artifacts/ubuntu-latest-3.10-queue-ci1/pytest.xml [took 30s]
artifacts/ubuntu-latest-3.9-no_queue-ci1/pytest.xml [took 30s]
artifacts/ubuntu-latest-3.9-queue-ci1/pytest.xml [took 30s]
artifacts/ubuntu-latest-mindeps-numpy-ci1/pytest.xml [took 30s]
artifacts/ubuntu-latest-mindeps-queue-ci1/pytest.xml [took 30s]
artifacts/windows-latest-3.10-queue-ci1/pytest.xml [took 30s]
artifacts/windows-latest-3.9-queue-ci1/pytest.xml [took 30s]
Raw output
asyncio.exceptions.TimeoutError: Test timeout (30) hit after 29.9792640209198s.
========== Test stack trace starts here ==========
Stack for <Task pending name='Task-107078' coro=<test_robust_undeserializable_function() running at D:\a\distributed\distributed\distributed\tests\test_client.py:5135> wait_for=<Future pending cb=[Task.task_wakeup()]>> (most recent call last):
File "D:\a\distributed\distributed\distributed\tests\test_client.py", line 5135, in test_robust_undeserializable_function
await wait(future)
args = (), kwds = {}
@wraps(func)
def inner(*args, **kwds):
with self._recreate_cm():
> return func(*args, **kwds)
C:\Miniconda3\envs\dask-distributed\lib\contextlib.py:79:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
C:\Miniconda3\envs\dask-distributed\lib\contextlib.py:79: in inner
return func(*args, **kwds)
distributed\utils_test.py:1101: in test_func
return _run_and_close_tornado(async_fn_outer)
distributed\utils_test.py:378: in _run_and_close_tornado
return asyncio_run(inner_fn(), loop_factory=get_loop_factory())
distributed\compatibility.py:236: in asyncio_run
return loop.run_until_complete(main)
C:\Miniconda3\envs\dask-distributed\lib\asyncio\base_events.py:649: in run_until_complete
return future.result()
distributed\utils_test.py:375: in inner_fn
return await async_fn(*args, **kwargs)
distributed\utils_test.py:1098: in async_fn_outer
return await utils_wait_for(async_fn(), timeout=timeout * 2)
distributed\utils.py:1920: in wait_for
return await asyncio.wait_for(fut, timeout)
C:\Miniconda3\envs\dask-distributed\lib\asyncio\tasks.py:445: in wait_for
return fut.result()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
async def async_fn():
result = None
with dask.config.set(config):
async with _cluster_factory() as (s, workers), _client_factory(
s
) as c:
args = [s] + workers
if c is not None:
args = [c] + args
try:
coro = func(*args, *outer_args, **kwargs)
task = asyncio.create_task(coro)
coro2 = utils_wait_for(
asyncio.shield(task), timeout=deadline.remaining
)
result = await coro2
validate_state(s, *workers)
except asyncio.TimeoutError:
assert task
elapsed = deadline.elapsed
buffer = io.StringIO()
# This stack indicates where the coro/test is suspended
task.print_stack(file=buffer)
if cluster_dump_directory:
await dump_cluster_state(
s=s,
ws=workers,
output_dir=cluster_dump_directory,
func_name=func.__name__,
)
task.cancel()
while not task.cancelled():
await asyncio.sleep(0.01)
# Hopefully, the hang has been caused by inconsistent
# state, which should be much more meaningful than the
# timeout
validate_state(s, *workers)
# Remove as much of the traceback as possible; it's
# uninteresting boilerplate from utils_test and asyncio
# and not from the code being tested.
> raise asyncio.TimeoutError(
f"Test timeout ({timeout}) hit after {elapsed}s.\n"
"========== Test stack trace starts here ==========\n"
f"{buffer.getvalue()}"
) from None
E asyncio.exceptions.TimeoutError: Test timeout (30) hit after 29.9792640209198s.
E ========== Test stack trace starts here ==========
E Stack for <Task pending name='Task-107078' coro=<test_robust_undeserializable_function() running at D:\a\distributed\distributed\distributed\tests\test_client.py:5135> wait_for=<Future pending cb=[Task.task_wakeup()]>> (most recent call last):
E File "D:\a\distributed\distributed\distributed\tests\test_client.py", line 5135, in test_robust_undeserializable_function
E await wait(future)
distributed\utils_test.py:1040: TimeoutError
Check warning on line 0 in distributed.tests.test_scheduler
github-actions / Unit Test Results
All 7 runs failed: test_recompute_released_results (distributed.tests.test_scheduler)
artifacts/ubuntu-latest-3.10-queue-ci1/pytest.xml [took 30s]
artifacts/ubuntu-latest-3.9-no_queue-ci1/pytest.xml [took 30s]
artifacts/ubuntu-latest-3.9-queue-ci1/pytest.xml [took 30s]
artifacts/ubuntu-latest-mindeps-numpy-ci1/pytest.xml [took 30s]
artifacts/ubuntu-latest-mindeps-queue-ci1/pytest.xml [took 30s]
artifacts/windows-latest-3.10-queue-ci1/pytest.xml [took 30s]
artifacts/windows-latest-3.9-queue-ci1/pytest.xml [took 30s]
Raw output
asyncio.exceptions.TimeoutError: Test timeout (30) hit after 29.97444796562195s.
========== Test stack trace starts here ==========
Stack for <Task pending name='Task-153294' coro=<test_recompute_released_results() running at D:\a\distributed\distributed\distributed\tests\test_scheduler.py:123> wait_for=<Future finished result=None>> (most recent call last):
File "D:\a\distributed\distributed\distributed\tests\test_scheduler.py", line 123, in test_recompute_released_results
await asyncio.sleep(0.01)
args = (), kwds = {}
@wraps(func)
def inner(*args, **kwds):
with self._recreate_cm():
> return func(*args, **kwds)
C:\Miniconda3\envs\dask-distributed\lib\contextlib.py:79:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
C:\Miniconda3\envs\dask-distributed\lib\contextlib.py:79: in inner
return func(*args, **kwds)
distributed\utils_test.py:1101: in test_func
return _run_and_close_tornado(async_fn_outer)
distributed\utils_test.py:378: in _run_and_close_tornado
return asyncio_run(inner_fn(), loop_factory=get_loop_factory())
distributed\compatibility.py:236: in asyncio_run
return loop.run_until_complete(main)
C:\Miniconda3\envs\dask-distributed\lib\asyncio\base_events.py:649: in run_until_complete
return future.result()
distributed\utils_test.py:375: in inner_fn
return await async_fn(*args, **kwargs)
distributed\utils_test.py:1098: in async_fn_outer
return await utils_wait_for(async_fn(), timeout=timeout * 2)
distributed\utils.py:1920: in wait_for
return await asyncio.wait_for(fut, timeout)
C:\Miniconda3\envs\dask-distributed\lib\asyncio\tasks.py:445: in wait_for
return fut.result()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
async def async_fn():
result = None
with dask.config.set(config):
async with _cluster_factory() as (s, workers), _client_factory(
s
) as c:
args = [s] + workers
if c is not None:
args = [c] + args
try:
coro = func(*args, *outer_args, **kwargs)
task = asyncio.create_task(coro)
coro2 = utils_wait_for(
asyncio.shield(task), timeout=deadline.remaining
)
result = await coro2
validate_state(s, *workers)
except asyncio.TimeoutError:
assert task
elapsed = deadline.elapsed
buffer = io.StringIO()
# This stack indicates where the coro/test is suspended
task.print_stack(file=buffer)
if cluster_dump_directory:
await dump_cluster_state(
s=s,
ws=workers,
output_dir=cluster_dump_directory,
func_name=func.__name__,
)
task.cancel()
while not task.cancelled():
await asyncio.sleep(0.01)
# Hopefully, the hang has been caused by inconsistent
# state, which should be much more meaningful than the
# timeout
validate_state(s, *workers)
# Remove as much of the traceback as possible; it's
# uninteresting boilerplate from utils_test and asyncio
# and not from the code being tested.
> raise asyncio.TimeoutError(
f"Test timeout ({timeout}) hit after {elapsed}s.\n"
"========== Test stack trace starts here ==========\n"
f"{buffer.getvalue()}"
) from None
E asyncio.exceptions.TimeoutError: Test timeout (30) hit after 29.97444796562195s.
E ========== Test stack trace starts here ==========
E Stack for <Task pending name='Task-153294' coro=<test_recompute_released_results() running at D:\a\distributed\distributed\distributed\tests\test_scheduler.py:123> wait_for=<Future finished result=None>> (most recent call last):
E File "D:\a\distributed\distributed\distributed\tests\test_scheduler.py", line 123, in test_recompute_released_results
E await asyncio.sleep(0.01)
distributed\utils_test.py:1040: TimeoutError
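All of the 30-second timeouts above are raised by the same harness code in utils_test.py. A standalone sketch (made-up names) of its shield-then-cancel pattern:

import asyncio

async def hanging_test():
    await asyncio.sleep(3600)  # stands in for a test coroutine that never finishes

async def run_with_timeout(timeout=0.1):
    task = asyncio.create_task(hanging_test())
    try:
        # shield() keeps the inner task alive when wait_for times out,
        # so it can be inspected and cancelled explicitly afterwards.
        await asyncio.wait_for(asyncio.shield(task), timeout)
    except asyncio.TimeoutError:
        task.cancel()
        while not task.cancelled():
            await asyncio.sleep(0.01)
        raise asyncio.TimeoutError(f"Test timeout ({timeout}) hit") from None

# asyncio.run(run_with_timeout())  # raises TimeoutError after ~0.1 s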
Check warning on line 0 in distributed.tests.test_scheduler
github-actions / Unit Test Results
All 7 runs failed: test_graph_execution_width (distributed.tests.test_scheduler)
artifacts/ubuntu-latest-3.10-queue-ci1/pytest.xml [took 6s]
artifacts/ubuntu-latest-3.9-no_queue-ci1/pytest.xml [took 9s]
artifacts/ubuntu-latest-3.9-queue-ci1/pytest.xml [took 7s]
artifacts/ubuntu-latest-mindeps-numpy-ci1/pytest.xml [took 6s]
artifacts/ubuntu-latest-mindeps-queue-ci1/pytest.xml [took 7s]
artifacts/windows-latest-3.10-queue-ci1/pytest.xml [took 9s]
artifacts/windows-latest-3.9-queue-ci1/pytest.xml [took 7s]
Raw output
assert False
+ where False = all(<generator object test_graph_execution_width.<locals>.<genexpr> at 0x0000022B3A58C580>)
c = <Client: No scheduler connected>
s = <Scheduler 'tcp://127.0.0.1:64029', workers: 0, cores: 0, tasks: 0>
workers = (<Worker 'tcp://127.0.0.1:64030', name: 0, status: closed, stored: 0, running: 0/2, ready: 0, comm: 0, waiting: 0>, <W... 0>, <Worker 'tcp://127.0.0.1:64036', name: 3, status: closed, stored: 0, running: 0/2, ready: 0, comm: 0, waiting: 0>)
roots = [Delayed('inc-44f9e909-5228-42c2-9a0c-3b81bb5e0877'), Delayed('inc-e5e65ba9-dad4-47b8-b201-63bd83037192'), Delayed('in...00407'), Delayed('inc-baa815ef-04e3-44b0-8794-059ce00052a6'), Delayed('inc-6588225a-eeeb-4dfd-9d2b-870368cd7d20'), ...]
passthrough1 = [Delayed('slowidentity-49c38166-b0c0-43bd-ac2b-de3875de7237'), Delayed('slowidentity-a74a08c4-7aaa-4a59-acee-4339bc21e...slowidentity-3765ebcd-6928-4bde-a1b2-90966f88672c'), Delayed('slowidentity-94aa1fb5-9747-4c26-b618-12ed8aba6f5d'), ...]
passthrough2 = [Delayed('slowidentity-70ad7247-1e3c-487a-ae18-3589c5676f8a'), Delayed('slowidentity-608f1803-3974-465d-843d-bf2b136e1...slowidentity-4aac96d1-1e91-453c-b3e1-d1525013a361'), Delayed('slowidentity-a804822d-6822-48e1-98fe-eb8560a2648e'), ...]
@gen_cluster(
nthreads=[("", 2)] * 4,
client=True,
config={"distributed.scheduler.worker-saturation": 1.0},
)
async def test_graph_execution_width(c, s, *workers):
"""
Test that we don't execute the graph more breadth-first than necessary.
We shouldn't start loading extra data if we're not going to use it immediately.
The number of parallel work streams matches the number of threads.
"""
roots = [delayed(inc)(ix) for ix in range(32)]
passthrough1 = [delayed(slowidentity)(r, delay=0) for r in roots]
passthrough2 = [delayed(slowidentity)(r, delay=0) for r in passthrough1]
done = [delayed(lambda r: None)(r) for r in passthrough2]
await c.register_plugin(CountData(keys=[f.key for f in roots]), name="count-roots")
fs = c.compute(done)
await wait(fs)
res = await c.run(lambda dask_worker: dask_worker.plugins["count-roots"].count)
> assert all(0 < count <= 2 for count in res.values())
E assert False
E + where False = all(<generator object test_graph_execution_width.<locals>.<genexpr> at 0x0000022B3A58C580>)
distributed\tests\test_scheduler.py:400: AssertionError
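The CountData plugin the test registers is not shown in this annotation; a hedged guess at what such a plugin could look like, built on the public WorkerPlugin hooks (class and attribute names here are assumptions, not the actual helper):

from distributed import WorkerPlugin

class CountDataSketch(WorkerPlugin):
    """Track the peak number of watched keys held in this worker's memory."""

    def __init__(self, keys):
        self.keys = set(keys)
        self.count = 0

    def setup(self, worker):
        self.worker = worker

    def transition(self, key, start, finish, **kwargs):
        in_memory = sum(k in self.worker.data for k in self.keys)
        self.count = max(self.count, in_memory)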
Check warning on line 0 in distributed.tests.test_scheduler
github-actions / Unit Test Results
All 7 runs failed: test_update_graph_culls (distributed.tests.test_scheduler)
artifacts/ubuntu-latest-3.10-queue-ci1/pytest.xml [took 0s]
artifacts/ubuntu-latest-3.9-no_queue-ci1/pytest.xml [took 0s]
artifacts/ubuntu-latest-3.9-queue-ci1/pytest.xml [took 0s]
artifacts/ubuntu-latest-mindeps-numpy-ci1/pytest.xml [took 0s]
artifacts/ubuntu-latest-mindeps-queue-ci1/pytest.xml [took 0s]
artifacts/windows-latest-3.10-queue-ci1/pytest.xml [took 0s]
artifacts/windows-latest-3.9-queue-ci1/pytest.xml [took 0s]
Raw output
TypeError: Scheduler.update_graph() got an unexpected keyword argument 'keys'
s = <Scheduler 'tcp://127.0.0.1:65006', workers: 0, cores: 0, tasks: 0>
a = <Worker 'tcp://127.0.0.1:65007', name: 0, status: closed, stored: 0, running: 0/1, ready: 0, comm: 0, waiting: 0>
b = <Worker 'tcp://127.0.0.1:65009', name: 1, status: closed, stored: 0, running: 0/2, ready: 0, comm: 0, waiting: 0>
@gen_cluster()
async def test_update_graph_culls(s, a, b):
# This is a rather low-level API, but the fact that update_graph actually
# culls is worth testing and hard to exercise through the high-level user API.
# Most, but not all, HLGs already implement culling themselves, i.e. a graph
# like the one written here will rarely exist in reality. It's worth
# considering dropping this from the scheduler iff graph materialization
# actually ensures this.
dsk = HighLevelGraph(
layers={
"foo": MaterializedLayer(
{
"x": (inc, 1),
"y": (inc, "x"),
"z": (inc, 2),
}
)
},
dependencies={"foo": set()},
)
header, frames = serialize(ToPickle(dsk), on_error="raise")
> await s.update_graph(
graph_header=header,
graph_frames=frames,
keys=["y"],
client="client",
internal_priority={k: 0 for k in "xyz"},
submitting_task=None,
)
distributed\tests\test_scheduler.py:1364:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
args = (<Scheduler 'tcp://127.0.0.1:65006', workers: 0, cores: 0, tasks: 0>,)
kwargs = {'client': 'client', 'graph_frames': [b'\x80\x05\x95?\x01\x00\x00\x00\x00\x00\x00\x8c\x1edistributed.protocol.serializ...ubsb.'], 'graph_header': {'serializer': 'pickle', 'writeable': ()}, 'internal_priority': {'x': 0, 'y': 0, 'z': 0}, ...}
async def wrapper(*args, **kwargs):
with self:
> return await func(*args, **kwargs)
E TypeError: Scheduler.update_graph() got an unexpected keyword argument 'keys'
distributed\utils.py:803: TypeError
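The TypeError indicates the test passes a keys= argument that the installed Scheduler.update_graph no longer accepts. A quick way to compare the two:

import inspect
from distributed import Scheduler

# Print the parameters the current update_graph actually accepts, to compare
# against the keyword arguments passed by the test.
print(inspect.signature(Scheduler.update_graph))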
Check warning on line 0 in distributed.tests.test_scheduler
github-actions / Unit Test Results
All 7 runs failed: test_cancel_fire_and_forget (distributed.tests.test_scheduler)
artifacts/ubuntu-latest-3.10-queue-ci1/pytest.xml [took 30s]
artifacts/ubuntu-latest-3.9-no_queue-ci1/pytest.xml [took 30s]
artifacts/ubuntu-latest-3.9-queue-ci1/pytest.xml [took 30s]
artifacts/ubuntu-latest-mindeps-numpy-ci1/pytest.xml [took 30s]
artifacts/ubuntu-latest-mindeps-queue-ci1/pytest.xml [took 30s]
artifacts/windows-latest-3.10-queue-ci1/pytest.xml [took 30s]
artifacts/windows-latest-3.9-queue-ci1/pytest.xml [took 30s]
Raw output
asyncio.exceptions.TimeoutError: Test timeout (30) hit after 30.002965927124023s.
========== Test stack trace starts here ==========
Stack for <Task pending name='Task-175966' coro=<test_cancel_fire_and_forget() running at D:\a\distributed\distributed\distributed\tests\test_scheduler.py:1996> wait_for=<Future finished result=None>> (most recent call last):
File "D:\a\distributed\distributed\distributed\tests\test_scheduler.py", line 1996, in test_cancel_fire_and_forget
await asyncio.sleep(0.01)
args = (), kwds = {}
@wraps(func)
def inner(*args, **kwds):
with self._recreate_cm():
> return func(*args, **kwds)
C:\Miniconda3\envs\dask-distributed\lib\contextlib.py:79:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
C:\Miniconda3\envs\dask-distributed\lib\contextlib.py:79: in inner
return func(*args, **kwds)
distributed\utils_test.py:1101: in test_func
return _run_and_close_tornado(async_fn_outer)
distributed\utils_test.py:378: in _run_and_close_tornado
return asyncio_run(inner_fn(), loop_factory=get_loop_factory())
distributed\compatibility.py:236: in asyncio_run
return loop.run_until_complete(main)
C:\Miniconda3\envs\dask-distributed\lib\asyncio\base_events.py:649: in run_until_complete
return future.result()
distributed\utils_test.py:375: in inner_fn
return await async_fn(*args, **kwargs)
distributed\utils_test.py:1098: in async_fn_outer
return await utils_wait_for(async_fn(), timeout=timeout * 2)
distributed\utils.py:1920: in wait_for
return await asyncio.wait_for(fut, timeout)
C:\Miniconda3\envs\dask-distributed\lib\asyncio\tasks.py:445: in wait_for
return fut.result()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
async def async_fn():
result = None
with dask.config.set(config):
async with _cluster_factory() as (s, workers), _client_factory(
s
) as c:
args = [s] + workers
if c is not None:
args = [c] + args
try:
coro = func(*args, *outer_args, **kwargs)
task = asyncio.create_task(coro)
coro2 = utils_wait_for(
asyncio.shield(task), timeout=deadline.remaining
)
result = await coro2
validate_state(s, *workers)
except asyncio.TimeoutError:
assert task
elapsed = deadline.elapsed
buffer = io.StringIO()
# This stack indicates where the coro/test is suspended
task.print_stack(file=buffer)
if cluster_dump_directory:
await dump_cluster_state(
s=s,
ws=workers,
output_dir=cluster_dump_directory,
func_name=func.__name__,
)
task.cancel()
while not task.cancelled():
await asyncio.sleep(0.01)
# Hopefully, the hang has been caused by inconsistent
# state, which should be much more meaningful than the
# timeout
validate_state(s, *workers)
# Remove as much of the traceback as possible; it's
# uninteresting boilerplate from utils_test and asyncio
# and not from the code being tested.
> raise asyncio.TimeoutError(
f"Test timeout ({timeout}) hit after {elapsed}s.\n"
"========== Test stack trace starts here ==========\n"
f"{buffer.getvalue()}"
) from None
E asyncio.exceptions.TimeoutError: Test timeout (30) hit after 30.002965927124023s.
E ========== Test stack trace starts here ==========
E Stack for <Task pending name='Task-175966' coro=<test_cancel_fire_and_forget() running at D:\a\distributed\distributed\distributed\tests\test_scheduler.py:1996> wait_for=<Future finished result=None>> (most recent call last):
E File "D:\a\distributed\distributed\distributed\tests\test_scheduler.py", line 1996, in test_cancel_fire_and_forget
E await asyncio.sleep(0.01)
distributed\utils_test.py:1040: TimeoutError
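test_cancel_fire_and_forget exercises the public fire_and_forget helper; a minimal usage sketch (in-process cluster, hypothetical task function):

from dask.distributed import Client, fire_and_forget

def side_effect():
    return "done"

client = Client(processes=False)
future = client.submit(side_effect, pure=False)
fire_and_forget(future)  # the scheduler keeps the task alive even if we drop our handle
del future
client.close()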
Check warning on line 0 in distributed.tests.test_scheduler
github-actions / Unit Test Results
All 7 runs failed: test_dont_recompute_if_persisted_4 (distributed.tests.test_scheduler)
artifacts/ubuntu-latest-3.10-queue-ci1/pytest.xml [took 30s]
artifacts/ubuntu-latest-3.9-no_queue-ci1/pytest.xml [took 30s]
artifacts/ubuntu-latest-3.9-queue-ci1/pytest.xml [took 30s]
artifacts/ubuntu-latest-mindeps-numpy-ci1/pytest.xml [took 30s]
artifacts/ubuntu-latest-mindeps-queue-ci1/pytest.xml [took 30s]
artifacts/windows-latest-3.10-queue-ci1/pytest.xml [took 30s]
artifacts/windows-latest-3.9-queue-ci1/pytest.xml [took 30s]
Raw output
asyncio.exceptions.TimeoutError: Test timeout (30) hit after 29.97756338119507s.
========== Test stack trace starts here ==========
Stack for <Task pending name='Task-179242' coro=<test_dont_recompute_if_persisted_4() running at D:\a\distributed\distributed\distributed\tests\test_scheduler.py:2139> wait_for=<Future finished result=None>> (most recent call last):
File "D:\a\distributed\distributed\distributed\tests\test_scheduler.py", line 2139, in test_dont_recompute_if_persisted_4
await asyncio.sleep(0.01)
args = (), kwds = {}
@wraps(func)
def inner(*args, **kwds):
with self._recreate_cm():
> return func(*args, **kwds)
C:\Miniconda3\envs\dask-distributed\lib\contextlib.py:79:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
C:\Miniconda3\envs\dask-distributed\lib\contextlib.py:79: in inner
return func(*args, **kwds)
distributed\utils_test.py:1101: in test_func
return _run_and_close_tornado(async_fn_outer)
distributed\utils_test.py:378: in _run_and_close_tornado
return asyncio_run(inner_fn(), loop_factory=get_loop_factory())
distributed\compatibility.py:236: in asyncio_run
return loop.run_until_complete(main)
C:\Miniconda3\envs\dask-distributed\lib\asyncio\base_events.py:649: in run_until_complete
return future.result()
distributed\utils_test.py:375: in inner_fn
return await async_fn(*args, **kwargs)
distributed\utils_test.py:1098: in async_fn_outer
return await utils_wait_for(async_fn(), timeout=timeout * 2)
distributed\utils.py:1920: in wait_for
return await asyncio.wait_for(fut, timeout)
C:\Miniconda3\envs\dask-distributed\lib\asyncio\tasks.py:445: in wait_for
return fut.result()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
async def async_fn():
result = None
with dask.config.set(config):
async with _cluster_factory() as (s, workers), _client_factory(
s
) as c:
args = [s] + workers
if c is not None:
args = [c] + args
try:
coro = func(*args, *outer_args, **kwargs)
task = asyncio.create_task(coro)
coro2 = utils_wait_for(
asyncio.shield(task), timeout=deadline.remaining
)
result = await coro2
validate_state(s, *workers)
except asyncio.TimeoutError:
assert task
elapsed = deadline.elapsed
buffer = io.StringIO()
# This stack indicates where the coro/test is suspended
task.print_stack(file=buffer)
if cluster_dump_directory:
await dump_cluster_state(
s=s,
ws=workers,
output_dir=cluster_dump_directory,
func_name=func.__name__,
)
task.cancel()
while not task.cancelled():
await asyncio.sleep(0.01)
# Hopefully, the hang has been caused by inconsistent
# state, which should be much more meaningful than the
# timeout
validate_state(s, *workers)
# Remove as much of the traceback as possible; it's
# uninteresting boilerplate from utils_test and asyncio
# and not from the code being tested.
> raise asyncio.TimeoutError(
f"Test timeout ({timeout}) hit after {elapsed}s.\n"
"========== Test stack trace starts here ==========\n"
f"{buffer.getvalue()}"
) from None
E asyncio.exceptions.TimeoutError: Test timeout (30) hit after 29.97756338119507s.
E ========== Test stack trace starts here ==========
E Stack for <Task pending name='Task-179242' coro=<test_dont_recompute_if_persisted_4() running at D:\a\distributed\distributed\distributed\tests\test_scheduler.py:2139> wait_for=<Future finished result=None>> (most recent call last):
E File "D:\a\distributed\distributed\distributed\tests\test_scheduler.py", line 2139, in test_dont_recompute_if_persisted_4
E await asyncio.sleep(0.01)
distributed\utils_test.py:1040: TimeoutError
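The "Stack for <Task pending ...>" lines in these reports come from Task.print_stack, which the harness calls on the shielded test task once its deadline expires, as the quoted async_fn above shows. A standalone sketch of the same mechanism, using illustrative names unrelated to distributed:

    import asyncio
    import io

    async def stuck_test_body():
        # Stands in for a test coroutine polling a condition that never holds.
        while True:
            await asyncio.sleep(0.01)

    async def harness():
        task = asyncio.create_task(stuck_test_body())
        try:
            await asyncio.wait_for(asyncio.shield(task), timeout=0.1)
        except asyncio.TimeoutError:
            buffer = io.StringIO()
            task.print_stack(file=buffer)  # source of the "Stack for <Task pending ...>" text
            task.cancel()
            while not task.cancelled():
                await asyncio.sleep(0.01)
            # The real harness embeds this buffer in a re-raised TimeoutError;
            # printing it here keeps the sketch self-contained.
            print(buffer.getvalue())

    asyncio.run(harness())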
Check warning on line 0 in distributed.tests.test_scheduler
github-actions / Unit Test Results
6 out of 7 runs failed: test_task_groups (distributed.tests.test_scheduler)
artifacts/ubuntu-latest-3.10-queue-ci1/pytest.xml [took 1s]
artifacts/ubuntu-latest-3.9-no_queue-ci1/pytest.xml [took 1s]
artifacts/ubuntu-latest-3.9-queue-ci1/pytest.xml [took 1s]
artifacts/ubuntu-latest-mindeps-numpy-ci1/pytest.xml [took 0s]
artifacts/windows-latest-3.10-queue-ci1/pytest.xml [took 1s]
artifacts/windows-latest-3.9-queue-ci1/pytest.xml [took 1s]
Raw output
assert 5 == 0
c = <Client: No scheduler connected>
s = <Scheduler 'tcp://127.0.0.1:49935', workers: 0, cores: 0, tasks: 0>
a = <NoSchedulerDelayWorker 'tcp://127.0.0.1:49936', name: 0, status: closed, stored: 0, running: 0/1, ready: 0, comm: 0, waiting: 0>
b = <NoSchedulerDelayWorker 'tcp://127.0.0.1:49939', name: 1, status: closed, stored: 0, running: 0/2, ready: 0, comm: 0, waiting: 0>
no_time_resync = None
@gen_cluster(client=True, Worker=NoSchedulerDelayWorker, config=NO_AMM)
async def test_task_groups(c, s, a, b, no_time_resync):
start = time()
da = pytest.importorskip("dask.array")
x = da.arange(100, chunks=(20,))
y = (x + 1).persist(optimize_graph=False)
y = await y
stop = time()
tg = s.task_groups[x.name]
tp = s.task_prefixes["arange"]
repr(tg)
repr(tp)
> assert tg.states["memory"] == 0
E assert 5 == 0
distributed\tests\test_scheduler.py:2589: AssertionError
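The failing assertion compares per-state task counts kept on the scheduler's TaskGroup and TaskPrefix objects. A minimal sketch of inspecting those counters inside a gen_cluster test, assuming dask.array is available; the workload and lookups mirror the quoted test, and the printed values are simply whatever the scheduler currently tracks, not the expected ones:

    import pytest
    from distributed.utils_test import gen_cluster

    @gen_cluster(client=True)
    async def test_inspect_task_group_counters(c, s, a, b):
        da = pytest.importorskip("dask.array")
        x = da.arange(100, chunks=(20,))
        y = (x + 1).persist(optimize_graph=False)
        y = await y
        tg = s.task_groups[x.name]        # TaskGroup for the arange layer
        tp = s.task_prefixes["arange"]    # TaskPrefix aggregates groups by prefix
        # `states` maps each task state name ("memory", "released", ...) to a count
        print(dict(tg.states))
        print(dict(tp.states))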
Check warning on line 0 in distributed.tests.test_scheduler
github-actions / Unit Test Results
6 out of 7 runs failed: test_task_group_non_tuple_key (distributed.tests.test_scheduler)
artifacts/ubuntu-latest-3.10-queue-ci1/pytest.xml [took 1s]
artifacts/ubuntu-latest-3.9-no_queue-ci1/pytest.xml [took 1s]
artifacts/ubuntu-latest-3.9-queue-ci1/pytest.xml [took 1s]
artifacts/ubuntu-latest-mindeps-numpy-ci1/pytest.xml [took 1s]
artifacts/windows-latest-3.10-queue-ci1/pytest.xml [took 1s]
artifacts/windows-latest-3.9-queue-ci1/pytest.xml [took 1s]
Raw output
assert 0 == 4
c = <Client: No scheduler connected>
s = <Scheduler 'tcp://127.0.0.1:50055', workers: 0, cores: 0, tasks: 0>
a = <Worker 'tcp://127.0.0.1:50056', name: 0, status: closed, stored: 0, running: 0/1, ready: 0, comm: 0, waiting: 0>
b = <Worker 'tcp://127.0.0.1:50058', name: 1, status: closed, stored: 0, running: 0/2, ready: 0, comm: 0, waiting: 0>
@gen_cluster(client=True)
async def test_task_group_non_tuple_key(c, s, a, b):
da = pytest.importorskip("dask.array")
np = pytest.importorskip("numpy")
x = da.arange(100, chunks=(20,))
y = (x + 1).sum().persist()
y = await y
> assert s.task_prefixes["sum"].states["released"] == 4
E assert 0 == 4
distributed\tests\test_scheduler.py:2782: AssertionError
Check warning on line 0 in distributed.tests.test_scheduler
github-actions / Unit Test Results
All 7 runs failed: test_too_many_groups (distributed.tests.test_scheduler)
artifacts/ubuntu-latest-3.10-queue-ci1/pytest.xml [took 30s]
artifacts/ubuntu-latest-3.9-no_queue-ci1/pytest.xml [took 30s]
artifacts/ubuntu-latest-3.9-queue-ci1/pytest.xml [took 30s]
artifacts/ubuntu-latest-mindeps-numpy-ci1/pytest.xml [took 30s]
artifacts/ubuntu-latest-mindeps-queue-ci1/pytest.xml [took 30s]
artifacts/windows-latest-3.10-queue-ci1/pytest.xml [took 30s]
artifacts/windows-latest-3.9-queue-ci1/pytest.xml [took 30s]
Raw output
asyncio.exceptions.TimeoutError: Test timeout (30) hit after 29.9778995513916s.
========== Test stack trace starts here ==========
Stack for <Task pending name='Task-185426' coro=<test_too_many_groups() running at D:\a\distributed\distributed\distributed\tests\test_scheduler.py:2917> wait_for=<Future finished result=None>> (most recent call last):
File "D:\a\distributed\distributed\distributed\tests\test_scheduler.py", line 2917, in test_too_many_groups
await asyncio.sleep(0.01)
args = (), kwds = {}
@wraps(func)
def inner(*args, **kwds):
with self._recreate_cm():
> return func(*args, **kwds)
C:\Miniconda3\envs\dask-distributed\lib\contextlib.py:79:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
C:\Miniconda3\envs\dask-distributed\lib\contextlib.py:79: in inner
return func(*args, **kwds)
distributed\utils_test.py:1101: in test_func
return _run_and_close_tornado(async_fn_outer)
distributed\utils_test.py:378: in _run_and_close_tornado
return asyncio_run(inner_fn(), loop_factory=get_loop_factory())
distributed\compatibility.py:236: in asyncio_run
return loop.run_until_complete(main)
C:\Miniconda3\envs\dask-distributed\lib\asyncio\base_events.py:649: in run_until_complete
return future.result()
distributed\utils_test.py:375: in inner_fn
return await async_fn(*args, **kwargs)
distributed\utils_test.py:1098: in async_fn_outer
return await utils_wait_for(async_fn(), timeout=timeout * 2)
distributed\utils.py:1920: in wait_for
return await asyncio.wait_for(fut, timeout)
C:\Miniconda3\envs\dask-distributed\lib\asyncio\tasks.py:445: in wait_for
return fut.result()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
async def async_fn():
result = None
with dask.config.set(config):
async with _cluster_factory() as (s, workers), _client_factory(
s
) as c:
args = [s] + workers
if c is not None:
args = [c] + args
try:
coro = func(*args, *outer_args, **kwargs)
task = asyncio.create_task(coro)
coro2 = utils_wait_for(
asyncio.shield(task), timeout=deadline.remaining
)
result = await coro2
validate_state(s, *workers)
except asyncio.TimeoutError:
assert task
elapsed = deadline.elapsed
buffer = io.StringIO()
# This stack indicates where the coro/test is suspended
task.print_stack(file=buffer)
if cluster_dump_directory:
await dump_cluster_state(
s=s,
ws=workers,
output_dir=cluster_dump_directory,
func_name=func.__name__,
)
task.cancel()
while not task.cancelled():
await asyncio.sleep(0.01)
# Hopefully, the hang has been caused by inconsistent
# state, which should be much more meaningful than the
# timeout
validate_state(s, *workers)
# Remove as much of the traceback as possible; it's
# uninteresting boilerplate from utils_test and asyncio
# and not from the code being tested.
> raise asyncio.TimeoutError(
f"Test timeout ({timeout}) hit after {elapsed}s.\n"
"========== Test stack trace starts here ==========\n"
f"{buffer.getvalue()}"
) from None
E asyncio.exceptions.TimeoutError: Test timeout (30) hit after 29.9778995513916s.
E ========== Test stack trace starts here ==========
E Stack for <Task pending name='Task-185426' coro=<test_too_many_groups() running at D:\a\distributed\distributed\distributed\tests\test_scheduler.py:2917> wait_for=<Future finished result=None>> (most recent call last):
E File "D:\a\distributed\distributed\distributed\tests\test_scheduler.py", line 2917, in test_too_many_groups
E await asyncio.sleep(0.01)
distributed\utils_test.py:1040: TimeoutError
Check warning on line 0 in distributed.tests.test_worker
github-actions / Unit Test Results
All 7 runs failed: test_clean_nbytes (distributed.tests.test_worker)
artifacts/ubuntu-latest-3.10-queue-ci1/pytest.xml [took 3s]
artifacts/ubuntu-latest-3.9-no_queue-ci1/pytest.xml [took 4s]
artifacts/ubuntu-latest-3.9-queue-ci1/pytest.xml [took 4s]
artifacts/ubuntu-latest-mindeps-numpy-ci1/pytest.xml [took 3s]
artifacts/ubuntu-latest-mindeps-queue-ci1/pytest.xml [took 4s]
artifacts/windows-latest-3.10-queue-ci1/pytest.xml [took 4s]
artifacts/windows-latest-3.9-queue-ci1/pytest.xml [took 3s]
Raw output
assert (27 + 36) == 1
+ where 27 = len([28, 28, 28, 28, 28, 28, ...])
+ where [28, 28, 28, 28, 28, 28, ...] = list(<filter object at 0x0000022B3F5E5270>)
+ where <filter object at 0x0000022B3F5E5270> = filter(None, [28, 28, 28, 28, 28, 28, ...])
+ and 36 = len([28, 28, 28, 28, 28, 28, ...])
+ where [28, 28, 28, 28, 28, 28, ...] = list(<filter object at 0x0000022B3F5E5DE0>)
+ where <filter object at 0x0000022B3F5E5DE0> = filter(None, [28, 28, 28, 28, 28, 28, ...])
c = <Client: No scheduler connected>
s = <Scheduler 'tcp://127.0.0.1:54373', workers: 0, cores: 0, tasks: 0>
a = <Worker 'tcp://127.0.0.1:54374', name: 0, status: closed, stored: 0, running: 0/1, ready: 0, comm: 0, waiting: 0>
b = <Worker 'tcp://127.0.0.1:54376', name: 1, status: closed, stored: 0, running: 0/2, ready: 0, comm: 0, waiting: 0>
@gen_cluster(client=True)
async def test_clean_nbytes(c, s, a, b):
L = [delayed(inc)(i) for i in range(10)]
for _ in range(5):
L = [delayed(add)(x, y) for x, y in sliding_window(2, L)]
total = delayed(sum)(L)
future = c.compute(total)
await wait(future)
await asyncio.sleep(1)
> assert (
len(list(filter(None, [ts.nbytes for ts in a.state.tasks.values()])))
+ len(list(filter(None, [ts.nbytes for ts in b.state.tasks.values()])))
== 1
)
E assert (27 + 36) == 1
E + where 27 = len([28, 28, 28, 28, 28, 28, ...])
E + where [28, 28, 28, 28, 28, 28, ...] = list(<filter object at 0x0000022B3F5E5270>)
E + where <filter object at 0x0000022B3F5E5270> = filter(None, [28, 28, 28, 28, 28, 28, ...])
E + and 36 = len([28, 28, 28, 28, 28, 28, ...])
E + where [28, 28, 28, 28, 28, 28, ...] = list(<filter object at 0x0000022B3F5E5DE0>)
E + where <filter object at 0x0000022B3F5E5DE0> = filter(None, [28, 28, 28, 28, 28, 28, ...])
distributed\tests\test_worker.py:786: AssertionError
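The assertion expects that after the tree reduction completes, only one task's data (the final sum) is still held across the two workers, so filtering the per-task nbytes should leave a single non-zero entry; here 27 + 36 intermediate results were still resident. A self-contained sketch of the same check follows, mirroring the quoted test body; `add` is defined locally and the tlz import is an assumption made to keep the sketch standalone.

    import asyncio
    from dask import delayed
    from tlz import sliding_window
    from distributed import wait
    from distributed.utils_test import gen_cluster, inc

    def add(x, y):
        return x + y

    @gen_cluster(client=True)
    async def test_only_final_result_retained(c, s, a, b):
        L = [delayed(inc)(i) for i in range(10)]
        for _ in range(5):
            L = [delayed(add)(x, y) for x, y in sliding_window(2, L)]
        future = c.compute(delayed(sum)(L))
        await wait(future)
        await asyncio.sleep(1)  # give workers time to release dependencies
        held = [
            ts.nbytes
            for w in (a, b)
            for ts in w.state.tasks.values()
            if ts.nbytes
        ]
        assert len(held) == 1  # only the final result should still hold data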
Check warning on line 0 in distributed.tests.test_worker
github-actions / Unit Test Results
All 7 runs failed: test_clean_up_dependencies (distributed.tests.test_worker)
artifacts/ubuntu-latest-3.10-queue-ci1/pytest.xml [took 30s]
artifacts/ubuntu-latest-3.9-no_queue-ci1/pytest.xml [took 30s]
artifacts/ubuntu-latest-3.9-queue-ci1/pytest.xml [took 30s]
artifacts/ubuntu-latest-mindeps-numpy-ci1/pytest.xml [took 30s]
artifacts/ubuntu-latest-mindeps-queue-ci1/pytest.xml [took 30s]
artifacts/windows-latest-3.10-queue-ci1/pytest.xml [took 30s]
artifacts/windows-latest-3.9-queue-ci1/pytest.xml [took 30s]
Raw output
asyncio.exceptions.TimeoutError: Test timeout (30) hit after 29.99039888381958s.
========== Test stack trace starts here ==========
Stack for <Task pending name='Task-318524' coro=<test_clean_up_dependencies() running at D:\a\distributed\distributed\distributed\tests\test_worker.py:935> wait_for=<Future finished result=None>> (most recent call last):
File "D:\a\distributed\distributed\distributed\tests\test_worker.py", line 935, in test_clean_up_dependencies
await asyncio.sleep(0.01)
args = (), kwds = {}
@wraps(func)
def inner(*args, **kwds):
with self._recreate_cm():
> return func(*args, **kwds)
C:\Miniconda3\envs\dask-distributed\lib\contextlib.py:79:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
C:\Miniconda3\envs\dask-distributed\lib\contextlib.py:79: in inner
return func(*args, **kwds)
distributed\utils_test.py:1101: in test_func
return _run_and_close_tornado(async_fn_outer)
distributed\utils_test.py:378: in _run_and_close_tornado
return asyncio_run(inner_fn(), loop_factory=get_loop_factory())
distributed\compatibility.py:236: in asyncio_run
return loop.run_until_complete(main)
C:\Miniconda3\envs\dask-distributed\lib\asyncio\base_events.py:649: in run_until_complete
return future.result()
distributed\utils_test.py:375: in inner_fn
return await async_fn(*args, **kwargs)
distributed\utils_test.py:1098: in async_fn_outer
return await utils_wait_for(async_fn(), timeout=timeout * 2)
distributed\utils.py:1920: in wait_for
return await asyncio.wait_for(fut, timeout)
C:\Miniconda3\envs\dask-distributed\lib\asyncio\tasks.py:445: in wait_for
return fut.result()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
async def async_fn():
result = None
with dask.config.set(config):
async with _cluster_factory() as (s, workers), _client_factory(
s
) as c:
args = [s] + workers
if c is not None:
args = [c] + args
try:
coro = func(*args, *outer_args, **kwargs)
task = asyncio.create_task(coro)
coro2 = utils_wait_for(
asyncio.shield(task), timeout=deadline.remaining
)
result = await coro2
validate_state(s, *workers)
except asyncio.TimeoutError:
assert task
elapsed = deadline.elapsed
buffer = io.StringIO()
# This stack indicates where the coro/test is suspended
task.print_stack(file=buffer)
if cluster_dump_directory:
await dump_cluster_state(
s=s,
ws=workers,
output_dir=cluster_dump_directory,
func_name=func.__name__,
)
task.cancel()
while not task.cancelled():
await asyncio.sleep(0.01)
# Hopefully, the hang has been caused by inconsistent
# state, which should be much more meaningful than the
# timeout
validate_state(s, *workers)
# Remove as much of the traceback as possible; it's
# uninteresting boilerplate from utils_test and asyncio
# and not from the code being tested.
> raise asyncio.TimeoutError(
f"Test timeout ({timeout}) hit after {elapsed}s.\n"
"========== Test stack trace starts here ==========\n"
f"{buffer.getvalue()}"
) from None
E asyncio.exceptions.TimeoutError: Test timeout (30) hit after 29.99039888381958s.
E ========== Test stack trace starts here ==========
E Stack for <Task pending name='Task-318524' coro=<test_clean_up_dependencies() running at D:\a\distributed\distributed\distributed\tests\test_worker.py:935> wait_for=<Future finished result=None>> (most recent call last):
E File "D:\a\distributed\distributed\distributed\tests\test_worker.py", line 935, in test_clean_up_dependencies
E await asyncio.sleep(0.01)
distributed\utils_test.py:1040: TimeoutError
Check warning on line 0 in distributed.dashboard.tests.test_scheduler_bokeh
github-actions / Unit Test Results
All 8 runs failed: test_memory_by_key (distributed.dashboard.tests.test_scheduler_bokeh)
artifacts/macos-latest-3.11-queue-notci1/pytest.xml [took 1s]
artifacts/ubuntu-latest-3.10-queue-notci1/pytest.xml [took 1s]
artifacts/ubuntu-latest-3.11-queue-notci1/pytest.xml [took 1s]
artifacts/ubuntu-latest-3.9-no_queue-notci1/pytest.xml [took 1s]
artifacts/ubuntu-latest-3.9-queue-notci1/pytest.xml [took 1s]
artifacts/windows-latest-3.10-queue-notci1/pytest.xml [took 0s]
artifacts/windows-latest-3.11-queue-notci1/pytest.xml [took 0s]
artifacts/windows-latest-3.9-queue-notci1/pytest.xml [took 0s]
Check warning on line 0 in distributed.diagnostics.tests.test_progress_stream
github-actions / Unit Test Results
All 8 runs failed: test_progress_stream (distributed.diagnostics.tests.test_progress_stream)
artifacts/macos-latest-3.11-queue-notci1/pytest.xml [took 1s]
artifacts/ubuntu-latest-3.10-queue-notci1/pytest.xml [took 1s]
artifacts/ubuntu-latest-3.11-queue-notci1/pytest.xml [took 0s]
artifacts/ubuntu-latest-3.9-no_queue-notci1/pytest.xml [took 1s]
artifacts/ubuntu-latest-3.9-queue-notci1/pytest.xml [took 1s]
artifacts/windows-latest-3.10-queue-notci1/pytest.xml [took 0s]
artifacts/windows-latest-3.11-queue-notci1/pytest.xml [took 0s]
artifacts/windows-latest-3.9-queue-notci1/pytest.xml [took 0s]
Check warning on line 0 in distributed.diagnostics.tests.test_scheduler_plugin
github-actions / Unit Test Results
All 10 runs failed: test_update_graph_hook_complex (distributed.diagnostics.tests.test_scheduler_plugin)
artifacts/macos-latest-3.11-queue-notci1/pytest.xml [took 1s]
artifacts/ubuntu-latest-3.10-queue-notci1/pytest.xml [took 0s]
artifacts/ubuntu-latest-3.11-queue-notci1/pytest.xml [took 0s]
artifacts/ubuntu-latest-3.9-no_queue-notci1/pytest.xml [took 0s]
artifacts/ubuntu-latest-3.9-queue-notci1/pytest.xml [took 0s]
artifacts/ubuntu-latest-mindeps-numpy-notci1/pytest.xml [took 0s]
artifacts/ubuntu-latest-mindeps-queue-notci1/pytest.xml [took 0s]
artifacts/windows-latest-3.10-queue-notci1/pytest.xml [took 0s]
artifacts/windows-latest-3.11-queue-notci1/pytest.xml [took 0s]
artifacts/windows-latest-3.9-queue-notci1/pytest.xml [took 0s]
Check warning on line 0 in distributed.diagnostics.tests.test_worker_plugin
github-actions / Unit Test Results
10 out of 17 runs failed: test_dependent_tasks (distributed.diagnostics.tests.test_worker_plugin)
artifacts/macos-latest-3.11-queue-notci1/pytest.xml [took 5m 0s]
artifacts/ubuntu-latest-3.10-queue-notci1/pytest.xml [took 5m 0s]
artifacts/ubuntu-latest-3.11-queue-notci1/pytest.xml [took 5m 0s]
artifacts/ubuntu-latest-3.9-no_queue-notci1/pytest.xml [took 5m 0s]
artifacts/ubuntu-latest-3.9-queue-notci1/pytest.xml [took 5m 0s]
artifacts/ubuntu-latest-mindeps-numpy-notci1/pytest.xml [took 5m 0s]
artifacts/ubuntu-latest-mindeps-queue-notci1/pytest.xml [took 5m 0s]
artifacts/windows-latest-3.10-queue-notci1/pytest.xml [took 0s]
artifacts/windows-latest-3.11-queue-notci1/pytest.xml [took 0s]
artifacts/windows-latest-3.9-queue-notci1/pytest.xml [took 0s]
Raw output
pytest-timeout exceeded
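Unlike the gen_cluster deadlines above, "pytest-timeout exceeded" is raised by the pytest-timeout plugin, which aborts tests that exceed a wall-clock limit; the 5 m 0 s durations suggest a 300 s limit, inferred from the run times rather than read from the repository's configuration. A sketch of how such a limit is typically attached to a single test, assuming pytest-timeout is installed:

    import pytest

    # pytest-timeout's per-test marker; the 300 s value is an illustrative
    # assumption, not the project's actual setting.
    @pytest.mark.timeout(300)
    def test_example_with_wall_clock_limit():
        ...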