-
If the GIL is enabled, then it isn't possible for two threads to run at the same time. There is also `ThreadSafeFlag` if you need additional synchronization between a thread and an async task.
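For the thread-to-asyncio direction, a minimal sketch of the `ThreadSafeFlag` pattern mentioned here might look like the following (everything except `ThreadSafeFlag` itself is illustrative):

```python
import uasyncio as asyncio
import _thread
from time import sleep_ms

tsf = asyncio.ThreadSafeFlag()
latest = 0  # Written by the thread, read by the task after the flag is set


def worker():                    # Runs in a separate thread
    global latest
    n = 0
    while True:
        n += 1
        latest = n               # Publish a value
        tsf.set()                # ThreadSafeFlag.set() is safe from a thread or ISR
        sleep_ms(500)


async def main():
    _thread.start_new_thread(worker, ())
    while True:
        await tsf.wait()         # Pauses the task until the thread sets the flag
        print("worker produced", latest)


asyncio.run(main())
```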
-
As @dlech says, on ESP32 the GIL means that threads do not run concurrently. We enable the GIL on ESP32 because it's single-core anyway. @beyonlo see the recent conversation we had about this on Discord (that I think you were part of?) https://discordapp.com/channels/574275045187125269/574275045611012097/1035373153809088512

The GIL essentially says that two threads cannot be executing bytecode at the same time, and that one bytecode instruction is atomic. So assigning to a global variable, for example, or two threads updating a shared dictionary, is atomic at that level. That said, it's still not safe to do higher-level things like concurrently modifying a data structure where the update might span multiple bytecode operations (e.g. if you wrote your own data structure in Python).

If you were on rp2 then there is no GIL, so the rules are different. It's not safe for two threads to modify the same dictionary, because the internal C code that mutates the dictionary structure could be running concurrently. That said, in this particular case, assigning to an existing global variable is safe even without the GIL. However, remember that global variables are themselves just a dictionary. So two threads that modify globals (i.e. doing a `global x` followed by `x = ...` in each thread) are really both updating a single shared dictionary: assigning to existing names just updates an existing entry, but creating a new global means inserting a new key, which falls under the unsafe dictionary-modification case above.
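A hedged illustration of that distinction (the names and functions below are mine, not from the discussion): single-bytecode operations are atomic under the GIL, while read-modify-write sequences and multi-step structure updates are not, and need a lock.

```python
import _thread

counter = 0
shared = {}
lock = _thread.allocate_lock()


def safe_under_gil():
    global counter
    counter = 123          # A single store bytecode: atomic when the GIL is enabled
    shared["key"] = 1      # Likewise a single store-subscript bytecode


def not_safe_even_with_gil():
    global counter
    counter += 1           # Load, add, store: another thread can run in between


def guarded():             # Multi-step updates need a lock, GIL or not
    global counter
    lock.acquire()
    try:
        counter += 1
        shared["last"] = counter
    finally:
        lock.release()
```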
-
I have had no issues doing this, but I used a class instead of a global variable. Note that I am using a Pico, and core 1 only reads data set by core 0.
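Presumably something along these lines (a minimal sketch of the single-writer / single-reader arrangement described, not the poster's actual code):

```python
import _thread
from time import sleep_ms


class Shared:
    def __init__(self):
        self.value = 0       # Written only by core 0, read only by core 1


shared = Shared()


def core1_reader():          # Runs on core 1: reads, never writes
    while True:
        print("core 1 sees", shared.value)
        sleep_ms(500)


_thread.start_new_thread(core1_reader, ())

while True:                  # Core 0 is the only writer
    shared.value += 1
    sleep_ms(100)
```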
-
I am currently trying to define requirements. These are my current thoughts:
Issues

I suspect it is impossible to code a Python function which will be the subject of contention from more than one GIL-free thread: the bytecode of the function call statement will be under contention before the code can acquire the lock. Unless some guru can correct me? It may be possible to relax the single-thread rule for environments with a GIL; I'm unsure on this.

A ringbuf is not thread-safe with multiple sources or sinks, and waiting on a lock in an ISR is naughty. It may be that this limitation has to stay.

One nagging question: if I manage to write such a beast, how do we test it properly?
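For context, the standard lock-guarded critical section looks like the sketch below (names are illustrative). The difficulty flagged above is that `acquire()` can block, and blocking (like allocating) is exactly what an ISR must not do, so a ring buffer fed from an ISR cannot simply be protected this way at the ISR end.

```python
import _thread

_lock = _thread.allocate_lock()
_buf = []


def put(item):
    # Fine from a thread: acquire() may block until the lock is free.
    _lock.acquire()
    try:
        _buf.append(item)    # append() also allocates, another thing ISRs must avoid
    finally:
        _lock.release()
```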
-
A question for the gurus (@jimmo ?). Assume two threads on different cores accessing a common object. One runs

`self._wi = (self._wi + 1) % self._size  # Values are small ints`

and the other runs

`if self._ri == self._wi:`

Obviously there is uncertainty as to whether, at the time of reading `self._wi`, the other core has already performed the update. The question is whether the read is guaranteed to return either the old or the new value, or whether it could ever return something invalid.
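For concreteness, here is a minimal sketch of the access pattern being asked about (class and attribute names follow the snippets above; this is illustrative, not the actual implementation). One core only ever writes `_wi`, the other only reads it, and the open question is whether that cross-core read of a small int can ever see a torn value.

```python
class RingBuf:
    def __init__(self, size):
        self._q = [None] * size
        self._size = size
        self._wi = 0  # Write index: modified only by the producer core
        self._ri = 0  # Read index: modified only by the consumer core

    def put(self, val):          # Runs on the producer core
        self._q[self._wi] = val
        self._wi = (self._wi + 1) % self._size  # Values are small ints

    def empty(self):             # Runs on the consumer core
        return self._ri == self._wi             # Reads _wi written by the other core
```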
-
Here is my first pass at this with rudimentary docs here. To test, on a Pico or Pico W, create a directory `threadsafe` and copy the module to it, matching the `from threadsafe import ThreadSafeQueue` import in the script below.

In my opinion there is no need for a mutex under one condition: the end of the queue in the non-asyncio context is accessed by only one thread.

So far this simple test script has processed >7M items without error. Note that bi-directional communication is achieved with two queues.

```python
import uasyncio as asyncio
from threadsafe import ThreadSafeQueue
import _thread
from time import sleep_ms


def core_2(getq, putq):  # Echo server on core 2
    buf = []
    while True:
        while getq.qsize():
            buf.append(getq.get_sync())
        for x in buf:
            putq.put_sync(x, block=True)
        buf.clear()
        sleep_ms(30)


async def sender(to_core2):
    x = 0
    while True:
        await to_core2.put(x := x + 1)


async def main():
    to_core2 = ThreadSafeQueue([0 for _ in range(10)])
    from_core2 = ThreadSafeQueue([0 for _ in range(10)])
    _thread.start_new_thread(core_2, (to_core2, from_core2))
    asyncio.create_task(sender(to_core2))
    n = 0
    async for x in from_core2:
        if not x % 1000:
            print(f"Received {x} queue items.")
        n += 1
        assert x == n


asyncio.run(main())
```

This is beta quality and a lot more testing is needed. Comments and bug reports are welcome!
-
Thanks for testing. There's a lot to go on there!

Firstly, your error message with one queue:
`_thread.start_new_thread(thread_consume, (queue1))`
should read
`_thread.start_new_thread(thread_consume, (queue1,))  # One element tuple`
The args must be passed as a tuple, and a one-element tuple requires the trailing comma.

Regarding multiple threads, the rules are simple: each end of the queue should be accessed by only one thread or task.

One general point. In my opinion, on MicroPython, using threaded code on anything other than RP2 achieves nothing that isn't better achieved with `uasyncio`.
This is a deliberate omission. I intended the class to be used as an asynchronous iterator with `async for`, as in the demo above. That said, if you or anyone else can suggest a use case for an asynchronous version, I will consider adding it.
-
Hi all!

I have a question about whether it is possible to write to a variable (`msg_dict`) using a `thread` and, at the same time, just read that same variable (`msg_dict`) from a `uasyncio` task, in order to dump that variable to `JSON` and send it via socket. I know that reading and writing the same variable using just `uasyncio` is not a problem, because each task runs at a different moment, but what if a `thread` is writing while a `uasyncio` task is reading at the same time?

I did a very simple example below as a proof of concept, and I get no errors. Is that just luck? So I would like to know whether this is safe to do, and why no problem occurs if a `thread` can write at the same time that a `uasyncio` task reads.

My understanding is that there is no problem writing from a `thread` at the same time that `uasyncio` is reading, because a `Python` variable accepts reads and writes at the same time. But I'm not sure whether that is true, and whether it is safe!

I'm using an `ESP32-S3` with `MicroPython 1.19.1`.
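The poster's example is not reproduced in this excerpt, but a minimal sketch of the pattern being described might look like the following (all identifiers besides `msg_dict` are invented for illustration). Note, per the GIL discussion above, that `json.dumps()` spans many bytecode operations, so it may serialise a mix of old and new values even though each individual dictionary access is atomic.

```python
import uasyncio as asyncio
import _thread
import json
from time import sleep_ms

msg_dict = {"temp": 0, "count": 0}


def writer_thread():                     # Thread: the only writer of msg_dict
    n = 0
    while True:
        n += 1
        msg_dict["temp"] = 20 + (n % 10)
        msg_dict["count"] = n
        sleep_ms(100)


async def reader_task():                 # uasyncio task: only reads msg_dict
    while True:
        payload = json.dumps(msg_dict)   # e.g. to send over a socket
        print(payload)
        await asyncio.sleep_ms(500)


_thread.start_new_thread(writer_thread, ())
asyncio.run(reader_task())
```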