Replace DirtyNIF execution model with a different mechanism #192
I wonder if you may be able to stick with the process model as long as you assume each instance of the database is backed by one process. This way you make Elixir processes your thread pool, instead of having to deal with it in C/C++/Rust. |
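For illustration, a minimal sketch of that model; the GenServer shape is hypothetical, with `Exqlite.Sqlite3.open/1` and `execute/2` standing in for the low-level NIF calls:

```elixir
defmodule MyApp.DB do
  use GenServer

  # One process per database instance. Every query funnels through
  # this process's mailbox, so SQLite only ever sees serial access
  # and the BEAM scheduler effectively becomes the thread pool.

  def start_link(path), do: GenServer.start_link(__MODULE__, path)
  def execute(pid, sql), do: GenServer.call(pid, {:execute, sql})

  @impl true
  def init(path) do
    {:ok, db} = Exqlite.Sqlite3.open(path)
    {:ok, db}
  end

  @impl true
  def handle_call({:execute, sql}, _from, db) do
    # serialized: only one NIF call runs against this handle at a time
    {:reply, Exqlite.Sqlite3.execute(db, sql), db}
  end
end
```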
@josevalim I think that's a good idea, where only one process in the supervision tree sits there waiting for another. Let me noodle on this a little more. |
Right, I think in this sense DBConnection may be getting more in the way than helping. Because you need serial access around SQLite3, and having a pool around serial access is not giving you anything. And when you don't need serial access (i.e. read operations), you don't need a pool at all.

DBConnection is necessary for Ecto but, even then, it should not be a requirement. There are only three features in Ecto.Adapters.SQL that require DBConnection (sandbox, stream, and disconnect_all), but they are all addressable.

Then there is the part of DBConnection as a convenience. I can think of two features it gives:
Of course there is a lot of work here, so I am not saying we should yank DBConnection, but it may be worth considering moving the DBConnection bits to ecto_sqlite3. In this project, based on the analysis above (which may be incomplete), you would only be missing the transaction manager (and it may be provided by sqlite3). |
There might be an additional benefit to managing transactions in a sqlite-specific way. Currently there's no information returned to changesets about which constraint was violated, but there seem to be some pragmas to figure that out, as sqlite doesn't close transactions on errors. What I'm wondering about @josevalim's suggestion is how using processes to serialize or not serialize access would work for transactions which only become write transactions mid-execution. Wouldn't the suggested approach require knowing in advance whether a transaction does writes? |
@ruslandoga the reads can go directly to the NIF. Only the writes and operations inside a transaction would go through the process. |
Ah, I think I understand now. Since the reads are "scoped" to a prepared statement, it might be possible to reuse a single database connection for multiple statements at once. I'll try it out.

UPDATE: it works: ruslandoga#3; and reads seemingly became faster.

```elixir
iex(1)> {:ok, conn} = Exqlite.start_link(database: "demo.db", debug: [:trace])
{:ok, #PID<0.245.0>}
iex(2)> Exqlite.write(conn, "create table users(name text not null) strict")
# *DBG* <0.245.0> got call checkout from <0.252.0>
# *DBG* <0.245.0> sent ok to <0.252.0>, new state {[],<0.252.0>}
# *DBG* <0.245.0> got cast checkin
{:ok, []}
# *DBG* <0.245.0> new state {[],nil}
iex(3)> Exqlite.write(conn, "insert into users(name) values(?)", ["john"])
{:ok, []}
iex(4)> Exqlite.write(conn, "insert into users(name) values(?)", ["david"])
{:ok, []}
iex(5)> Exqlite.write(conn, "insert into users(name) values(?)", ["peter"])
{:ok, []}
iex(6)> {:ok, task_sup} = Task.Supervisor.start_link
{:ok, #PID<0.254.0>}
iex(7)> Task.Supervisor.async_stream_nolink(task_sup, 1..10000, fn _ -> Exqlite.query(conn, "select * from users") end, max_concurrency: 1000) |> Enum.into([])
[
  ok: {:ok, [["john"], ["david"], ["peter"]]},
  ok: {:ok, [["john"], ["david"], ["peter"]]},
  ok: {:ok, [["john"], ["david"], ["peter"]]},
  ok: {:ok, [["john"], ["david"], ["peter"]]},
  ok: {:ok, [["john"], ["david"], ["peter"]]}
  # ...
]
```

I wonder how the statements cache would work with this approach. It'd need to be serialised somehow as well. |
I think the tricky thing would be supporting concurrent reads from within transactions that haven't yet done writes. I'm not sure how exqlite could know which statement within a transaction requires a SHARED lock (reads) vs. one of the locks required for writing. Maybe it could naively check for |
@LostKobrakai I think all transactions would need to go through the queue. Even if a read transaction never becomes a write one, it still can't be executed without syncing with the writers first. Otherwise multiple transactions can be attempted:

```sql
BEGIN; -- from writer in process 1
INSERT INTO ...
INSERT INTO ...
BEGIN; -- a reader starts a transaction in process 2
-- SQLite complains about re-entry: "cannot start a transaction within a transaction"
COMMIT; -- writer finishes the transaction from process 1
```

This is actually what I was worried about in the comments above. But since reads don't have to be wrapped in transactions, the single-connection approach works. It might be possible to do snapshots though, for read-only transactions without |
I just tried this with two TablePlus instances on the same database, and it allowed me to start a read transaction just fine while another write transaction was started. Trying to start a write transaction I got |
That means two connections. I think the approach discussed in this issue is having only a single connection open to the database.

```
$ sqlite3 demo.db
SQLite version 3.40.1 2022-12-28 14:03:47
Enter ".help" for usage hints.
sqlite> begin;
sqlite> begin;
Runtime error: cannot start a transaction within a transaction
```
|
Given DBConnection is an abstraction over connections, I'm not so sure. I'd for sure appreciate it if it actually could do concurrent reads (in a transaction), given how much code in a business application needs to run in a transaction, which usually starts by reading a bunch of things before writing. |
I think there is a danger of those transactions failing with

```
-- conn 1                                        -- conn 2
sqlite> begin;
                                                 sqlite> begin;
sqlite> select * from users;
                                                 sqlite> delete from users where name = 'patrick';
sqlite> insert into users(name) values('jacob');
Runtime error: database is locked (5)
```

If it's about performance, we can try coming up with some benchmarks to see at what write/read ratio a single-connection approach wins or loses. My hot take is that (read-)write transactions are rare and reads are common, so having un-pooled reads would end up being faster on average. Better memory usage too, probably. |
That's why I was wondering if exqlite could catch writes before they queue for "BUSY WAIT", but the queuing happens in Elixir land. In the end this wouldn't be much different from e.g. a timeout due to a too-long-running query with e.g. postgres/ecto. Another option would be possible if exqlite can time out a transaction from the outside, setting busy_timeout to a value larger than Elixir's internal timeout. |
I don't understand. How would busy_timeout help in the above example? No matter how long we wait, if we started a deferred transaction before some other connection made a write to a table we read from, we won't be able to commit it even if the other connection's writer is finished.

```
-- conn 1                                        -- conn 2
sqlite> begin;
                                                 sqlite> begin;
sqlite> select * from users;
                                                 sqlite> delete from users where name = 'patrick';
                                                 sqlite> commit;
sqlite> insert into users(name) values('jacob');
Runtime error: database is locked (5)
```

We can however remove all queueing in Elixir and rely on what SQLite returns (error code or ok), but that seems more difficult to develop for. One would need to know and understand SQLite specifics. I think |
That's unexpected; I would've expected this to work based on the documentation on the topic. If this is the case, I'd wonder why busy_timeout exists in the first place, if waiting makes no sense. |
What if we try updating a record we read but that has since been deleted? I don't think it can work at all.

```
-- conn 1                                              -- conn 2
sqlite> begin;
                                                       sqlite> begin;
sqlite> select rowid from users where name = 'john';
                                                       sqlite> delete from users where name = 'john';
                                                       sqlite> commit;
sqlite> update users set name = 'john jr.' where rowid = 1;
Runtime error: database is locked (5)
```
|
If there are conflicting changes, I think most databases will roll back one transaction. Though deleting one row and adding another shouldn't really be conflicting; but that might just be sqlite. |
I'm back to thinking that a single connection with un-pooled reads wouldn't work. If we issue reads directly to the NIF while a writer is in a transaction, we lose isolation guarantees, as we can read data from an unfinished transaction.

```elixir
iex> spawn(fn ->
  Exqlite.transaction(db, fn conn ->
    Exqlite.query(conn, "insert into users(name) values(?)", ["new"])
    :timer.sleep(:timer.seconds(1000))
  end)
end)
iex> Exqlite.query(db, "select * from users")
{:ok,
 [
   ["david"],
   ["peter"],
   ["new"]
 ]}
```

I'm thinking now about using two connections, one for writers and one for readers. The readers' connection would accept un-pooled reads. But even when un-pooled, reads would still be limited by the number of dirty schedulers.
Yeah, I think in the stock version SQLite "marks" the whole database as dirty (or rather, it compares the transaction IDs at the start and at the first write) if it has been modified (outside of the current transaction) since the transaction started. There is a |
That didn't end up working either. Reads start an implicit transaction, and that's a problem since concurrent reads can force that transaction to stay open indefinitely, and that connection would never see new data. |
Sorry for staying out of this conversation and thanks for exploring, @ruslandoga. It seems like we would need the process to work as a read-write lock. While reads are happening, we allow them to run concurrently. Once a write arrives, we start queueing events and draining the reads. Once they are all drained, we execute the write, and then move to the next in the queue. If the next is a read, we start doing concurrent work again. All queries can still run on the client.

You can find an implementation of serialized reads-writes from a NIF running on the client here: https://github.com/elixir-explorer/adbc/blob/main/lib/adbc_connection.ex

Having a concurrent-reads version should be an expansion of the API above (but hey, you can even start with a fully serialized version and add the read concurrency later). API-wise, you could keep "query" as a write operation but introduce a "read_query" where the user guarantees they are exclusively reading.

In any case, the main point is that I think this library can be cleaner and easier to maintain if it doesn't try to do NIFs, process management, and DBConnection at the same time. I think NIFs + process management would make this usable across more scenarios, and DBConnection is an Ecto concern (which perhaps we can move all the way up to Ecto itself). |
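For reference, a rough sketch of that read-write-lock process (all names here are made up, and crash handling via monitors is omitted):

```elixir
defmodule RWGate do
  use GenServer

  # Read-write lock as a process: reads run concurrently in the
  # callers; a write drains in-flight reads, runs exclusively, then
  # parked requests are released in arrival order.

  def start_link(opts \\ []), do: GenServer.start_link(__MODULE__, :ok, opts)

  # Caller-side read: check out, run the NIF call in the client, check in.
  def read(pid, fun) do
    :ok = GenServer.call(pid, :begin_read, :infinity)
    try do
      fun.()
    after
      GenServer.cast(pid, :end_read)
    end
  end

  def write(pid, fun) do
    :ok = GenServer.call(pid, :begin_write, :infinity)
    try do
      fun.()
    after
      GenServer.cast(pid, :end_write)
    end
  end

  @impl true
  def init(:ok), do: {:ok, %{readers: 0, writer: false, parked: :queue.new()}}

  @impl true
  def handle_call(:begin_read, from, s) do
    # park reads behind any pending write to avoid writer starvation
    if s.writer or not :queue.is_empty(s.parked) do
      {:noreply, %{s | parked: :queue.in({:read, from}, s.parked)}}
    else
      {:reply, :ok, %{s | readers: s.readers + 1}}
    end
  end

  def handle_call(:begin_write, from, s) do
    if s.readers == 0 and not s.writer do
      {:reply, :ok, %{s | writer: true}}
    else
      {:noreply, %{s | parked: :queue.in({:write, from}, s.parked)}}
    end
  end

  @impl true
  def handle_cast(:end_read, s), do: {:noreply, release(%{s | readers: s.readers - 1})}
  def handle_cast(:end_write, s), do: {:noreply, release(%{s | writer: false})}

  # Once idle, let the next parked request go: one writer, or a run of readers.
  defp release(%{readers: 0, writer: false} = s) do
    case :queue.out(s.parked) do
      {{:value, {:write, from}}, rest} ->
        GenServer.reply(from, :ok)
        %{s | writer: true, parked: rest}

      {{:value, {:read, from}}, rest} ->
        GenServer.reply(from, :ok)
        release_reads(%{s | readers: 1, parked: rest})

      {:empty, _} ->
        s
    end
  end

  defp release(s), do: s

  # Release consecutive parked reads so they run concurrently again.
  defp release_reads(s) do
    case :queue.out(s.parked) do
      {{:value, {:read, from}}, rest} ->
        GenServer.reply(from, :ok)
        release_reads(%{s | readers: s.readers + 1, parked: rest})

      _ ->
        s
    end
  end
end
```

Reads would then wrap the NIF call as `RWGate.read(pid, fn -> ... end)`, matching the "all queries can still run on the client" point above.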
@josevalim @warmwaffles if there are no other takers, I'm interested in exploring this approach :) |
Definitely go for it from my side - but that's @warmwaffles decision to make. :) |
Yea sure why not. @ruslandoga if you want to give it a shot I'm all for it. I'm really swamped with day job work at the moment, but I have time to do reviews and testing implementations. |
I've started a PR #255 for #192 (comment) and while working on it I had one concern. With this approach we miss out on WAL allowing multiple readers when there's a writer, but I guess it could be tackled in |
I might have misunderstood it a bit, but here's a PR #256. Right now it opens two SQLite3 DBs in the same Elixir process, one for locked
My concern with doing that is it seems a bit sneaky. I'm trying to come up with scenarios where it can bite the user:
Maybe
This would make |
@ruslandoga another interesting thought I had: ideally the database is read from more than written to. Bottlenecks will be at the two GenServers spun up, mainly the reader. We could try to pool the readers and round-robin the calls to each one, while maintaining a single write instance. Probably something to explore later; one possible shape is sketched below. |
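A possible shape for that pooling (all names here are hypothetical, not exqlite API): keep a fixed tuple of reader processes and pick one per call with an atomics counter, so checkouts don't funnel through a single process.

```elixir
defmodule ReaderPool do
  # Round-robin over a fixed set of reader connections using an
  # :atomics counter; callers index into the tuple directly instead
  # of asking one pool process for a checkout.
  def start_readers(path, n) do
    readers =
      for _ <- 1..n do
        {:ok, pid} = ReadConn.start_link(path)  # hypothetical reader worker
        pid
      end

    counter = :atomics.new(1, [])
    {List.to_tuple(readers), counter}
  end

  def query({readers, counter}, sql, args) do
    i = :atomics.add_get(counter, 1, 1)
    pid = elem(readers, rem(i, tuple_size(readers)))
    ReadConn.query(pid, sql, args)  # hypothetical reader call
  end
end
```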
The only thing the "reader GenServer" does in the "normal" state is preparing the statements. Since there is no cache, we can move that to the caller as well. Then the bottleneck could only appear during read-connection draining after a locked query finishes (assuming it was a write). Just to recap: we do this to ensure any implicit transactions opened on that read connection get closed by finishing/releasing all ongoing statements.

During this draining, all incoming reads are stored in a queue inside the GenServer. That doesn't appear to be all that different from how db_connection works, with the difference that it probably uses ETS to queue the callers. The only reason for the read queue to grow are long reads started before the locked query finished; maybe during draining we can switch into CoDel mode... I'll try to cause this scenario in the benchmarks to see how big of an issue this is. I'll also try creating a new read connection each time we start draining the old one, instead of queuing reads. |
One of the main reasons I wanted to look into replacing the DirtyNIF execution model is that I would like to be able to cancel queries that overrun the timeout. Ecto would time out requests if they took too long to run while modifying records. In one case, a developer I was talking to was updating/cleaning up records, and the query was touching thousands of records, overrunning the default timeout. Although it looked like it timed out, it was actually still running in the background and causing compounding latency issues. My hope is that with the new implementation you are working on, @ruslandoga, we are not going to time out the connection at all, but instead keep it open until the request finishes. |
Right now I'm exploring the following approach:
I'll try to finish it up by the end of the week and open new PRs. |
If we go with dropping DBConnection, then there wouldn't be any need for timeouts, I think. But if we still want to provide them, it can be done while keeping dirty schedulers but with slightly different INSERT handling: instead of building a single multi-row |
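The sentence above is cut off; one plausible reading is splitting a large insert into several smaller NIF calls, so control returns to Elixir between chunks and a deadline can be enforced. A sketch under that assumption (`insert_chunk/2` is a hypothetical stand-in for the real prepared-statement work):

```elixir
defmodule ChunkedInsert do
  @chunk_size 100

  # Insert rows in chunks; between chunks we are back in Elixir code,
  # so we can abort when the deadline passes instead of being stuck
  # inside one long-running dirty NIF call.
  def insert_all(db, rows, timeout) do
    deadline = System.monotonic_time(:millisecond) + timeout

    rows
    |> Enum.chunk_every(@chunk_size)
    |> Enum.reduce_while(:ok, fn chunk, :ok ->
      if System.monotonic_time(:millisecond) > deadline do
        {:halt, {:error, :timeout}}
      else
        case insert_chunk(db, chunk) do
          :ok -> {:cont, :ok}
          {:error, _} = err -> {:halt, err}
        end
      end
    end)
  end

  # hypothetical: one multi-row INSERT per chunk via the NIF
  defp insert_chunk(_db, _chunk), do: :ok
end
```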
This is intriguing to me. I thought it was required to implement |
This doesn't seem to improve much as I'm checking |
What do you mean, can you expand? I am not well versed on sqlite pragmas. :) |
Since ongoing selects piggyback on the same read transaction, we need to know when it's safe to issue new reads and when it's not. By safety here I mean reading your own writes. If we write and then read from the read-only connection that started its read transaction before our write, the data we wrote would be invisible to us. So to avoid that, I'm checking the data_version pragma, which, if changed since the last invocation, means other connections wrote something into the database. If it has changed, I'm blocking/queuing checkouts until the current read transaction closes (i.e. until all statements finish). Since I'm checking that pragma before each select, it negatively affects read performance. |
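Roughly, the gate described above could look like this (a sketch; `query_pragma/1` is a hypothetical helper that runs `PRAGMA data_version` on the read connection):

```elixir
defmodule ReadGate do
  # PRAGMA data_version returns a value that stays constant for a
  # connection until some *other* connection commits a change to the
  # database file; writes made on the same connection don't change it.
  def checkout_read(state) do
    version = query_pragma(state.read_conn)

    if version == state.data_version do
      # no external writes since our read transaction began: reads on
      # the shared read transaction still see up-to-date data
      {:ok, state}
    else
      # another connection committed: drain the current statements so
      # the implicit read transaction closes and a fresh one can see
      # the new data
      {:drain, %{state | data_version: version}}
    end
  end

  # stub standing in for running "PRAGMA data_version" via the NIF
  defp query_pragma(_conn), do: 1
end
```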
For Ecto, I would say transactions always go to the write-based connection. Would that solve this problem? |
Unfortunately this wouldn't help since selects open a read transaction implicitly if there isn't one already. That read transaction is closed when the select statement finishes stepping through rows. If another select is started before the previous one finishes (on the same connection), the transaction is extended. |
I see, thank you, very helpful. So I guess we have no other option than having a single process that we send everything through? |
Or perhaps, we just live with the DirtyNIF for now. 😞 |
I believe we most likely still need DirtyNIFs. It is more of a question of removing the DBConnection dependency. :) |
I am reading this once more and I am wondering how we can improve and simplify things. We have three options:
Pretty much all other designs we tried were not good enough. Have I summed up things properly? :) |
I don't think the assessment of the current state in point 2 is correct: what we currently have with ecto_sqlite3 is DBConnection with concurrent reads and writes. Concurrent writes block at the sqlite level though, and are either successfully serialized by sqlite's busy mechanism, or some writes run into busy timeouts and crash. |
Thank you @LostKobrakai. If that's the case, then we might as well have gone with Dirty NIFs all the way? What would DBConnection give us if we allow both concurrent reads and writes? |
To my knowledge that was done to integrate with ecto (hence it being in ecto_sqlite3 and not this repo). Not sure if there would've been other / simpler options to make ecto work with exqlite. |
We could strip this implementation down even further and lift the DBConnection out and into the ecto_sqlite3 repo. The DirtyNIF mechanism works decently, until it doesn't. I usually run into issues where a query operation times out via Ecto, but it is still running in the background, consuming memory, with no way to stop it without hard-killing the Erlang process. |
@warmwaffles I don't think you can replace dirty nifs though. That's the way to do C integration. Your issue with query operations may be addressed by the following:
WDYT? Could that work? We are solving similar issues in Apache ADBC. I would probably still say that the DBConnection abstraction could be moved to the ecto_sqlite3 repo, as it is not really necessary for SQLite3 itself, and then it should be debated whether ecto_sqlite3 itself needs DBConnection. :) And in case I haven't said it before, thank you for your work and everyone's feedback and inputs on the thread. ❤️ |
That may work. Handing back a handle to cancel a running query and doing that transparently.
I agree with this. I'll start looking at moving it out of this repository soon. Probably over the holiday break I'll knock it out. |
Apparently sqlite does not automatically serialize it for you; you are responsible for doing it, which pretty much means a DBConnection with a pool of 1 is the way to go. So perhaps this can be closed? More info here: https://www.sqlite.org/rescode.html#busy |
Yea sadly, at the current state of things, I don't know if I can interrupt the currently executing query to cancel it. I haven't had a whole lot of time to dedicate to exploring a way to time out / cancel executions. |
https://www.sqlite.org/c3ref/interrupt.html This seems to be an explicit feature of sqlite. Couldn't this be used? |
Yes, that's the documentation I was looking at that spurred this ticket. |
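Per those docs, sqlite3_interrupt() may safely be called from a different thread than the one running the query, which maps naturally onto one process running the query and another cancelling it. A sketch, where `SqliteNIF.query/3` and `SqliteNIF.interrupt/1` are hypothetical wrappers (not exqlite's current API):

```elixir
defmodule Cancelable do
  # Run the query in a Task; if it overruns the timeout, call the
  # (hypothetical) interrupt NIF from the waiting process. SQLite then
  # makes the in-flight sqlite3_step return SQLITE_INTERRUPT, freeing
  # the dirty scheduler instead of letting the query run forever.
  # Error handling (e.g. the task exiting) is elided.
  def query(db, sql, args, timeout) do
    task = Task.async(fn -> SqliteNIF.query(db, sql, args) end)

    case Task.yield(task, timeout) do
      {:ok, result} ->
        result

      nil ->
        :ok = SqliteNIF.interrupt(db)  # wraps sqlite3_interrupt()
        Task.await(task)               # returns quickly once interrupted
        {:error, :timeout}
    end
  end
end
```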
The DirtyNIF execution model is okay for most cases, but we have an issue where long-running writes need to time out / be interrupted mid-execution and aborted. The only way to do that is to utilize https://sqlite.org/c3ref/progress_handler.html