Investigate System.Runtime.Notify refcounting #3490
For the record: I investigated the NeoGo code and it turns out that the Notifications size is not restricted in any way. I also deployed a contract and tried to send a lot of notifications, so it's confirmed in practice that this is possible for NeoGo. However, for the C# node there's a chance that the number of notifications is restricted. The restriction comes from the way C# VM refcounting works (hi, #2500), and if emitted Notifications are refcounted properly in C#, then the behaviour of the C# and Go nodes differs. But for now I can't tell for sure what the behaviour of the C# node is; we need to check it in practice using the same contract as for the Go node and compare the resulting behaviour (but not in mainnet/testnet, please).
OK, I can't find the branch, but the contract was based on
A part of #3490. Signed-off-by: Anna Shaleva <[email protected]>
As I said earlier, the Go node does not restrict the number of notifications, as proven by #3627. Happy news: the C# node also allows emitting 2048+1 large notifications. I also tried with 2500 (the tx was HALTed), and starting from 3000 notifications the C# node stops responding (probably due to too slow
So here's the proof (I did it in NeoBench in order not to break testnet/mainnet):
And contract deploy/invocation steps:
Voilà, we can easily end up with a broken mainnet, although it would cost a small fortune. We need restrictions.
As a straightforward suggestion from my side: restrict the number of notifications directly, without refcounting:
We already have a restriction on the size of a single serialized notification: neo-go/pkg/core/interop/runtime/engine.go Lines 110 to 116 in 8e1fdd5
so the combination of these two restrictions won't allow DoSing the node. Another solution is to add notification references to the VM refcounter, but it's a bit more complicated, and I'm not sure it's correct to combine these two types of references, since the VM refcounter considers only items that are on the stack (and notifications are not on the stack). Or we may also try to add a refcounter to the interop context's notifications container. @roman-khimov, what do you think?
A part of #3490. Signed-off-by: Anna Shaleva <[email protected]>
Looking at the broader N3 picture, the VM stack is temporary in its essence; it's just some space used for computation, while storage changes and notifications are side effects. Storage changes are strictly permanent; we have no limitation on storage changes, but each write costs quite some GAS. Notifications are not permanent, not even required to be stored for correct chain operation; they're supposed to be much cheaper than storage and they are, but they're still mostly understood as stored/available at least for some time. In that sense, tying them to VM limits (reusing the same counter) is not necessary; they can have limits of their own. Even though intuitively it looks like cheating, we can have more items in notifications than the VM ever allows. I'd try going with a lower (than 2K) limit; we can analyze the chain data to see actual usage patterns.
A part of #3490. Signed-off-by: Anna Shaleva <[email protected]>
Here's a summary of N3 mainnet data up to height 6220598 for the Application execution trigger (transactions). The maximum number of events per transaction is 2K (I have the transaction hashes if needed), but I'd say that we'll be fine with MaxNotificationsCount set to 256.
Whoa. That's a huge tail. And what are those 1000-2000 notification transactions? Are they real? They look like pure tests to me.
Transactions with 2000 notifications are contract calls that transfer funds:
The transaction with 1000 notifications is the same contract call with similar transfers. Basically, all transfers have an identical sender/receiver/amount, like this:
The transaction script itself looks like this:
So the contract is an NEP-11 token.
To me it also looks like tests, which is why I suggested 256 notifications at max. But at the same time I can't say for sure whether they are really tests or the contract works as expected. It should also be noted that these transactions are old. On the one hand, the contract may be rewritten so as not to perform a lot of transfers in a single transaction (I'm starting to think about
Here's the tail of the transaction list sorted by the number of notifications:
So it's not a test, it's a token giveaway, which can legitimately have a lot of transfers. I'd suggest 512 for the limit then; it should be sufficient for events like this (4 transactions are OK here) while restrictive enough to prevent problems in evil scenarios.
Just for some reference: our event (notification) price does not depend on size; maybe that's part of the problem and we can solve it economically. Events are limited to 1024 bytes, but a regular transfer is ~50 bytes (20+20+int+overhead), so 20 regular events have the same size as one big event, yet you pay X for the big one versus 20X for the regular transfers.
Does the C# node have any problems with 2000-5000 small notifications? I'm trying to understand what matters most, the overall size or just the count.
First of all, it should be noted that it's not proven that the C# node has problems with large notifications. All I've learnt is that the C# node allows emitting more than 2048 notifications in a single transaction and that the C# RPC server has problems with the construction of a transaction with 3K+ notifications:
So I guess we first need to know how the Go/C# nodes handle a pure
So let me try a standalone
From what I see and what I guess, the overall size of all resulting notifications matters, because this problem looks to me more like a potential OOM cause. So 5K small objects may not affect the node if the overall object size is not as big as that of 2K large objects. But let's check.
I constructed the transaction manually and used the RPC client only for

```go
func NotifyLarge(count int) {
	for i := 0; i < count; i++ {
		runtime.Notify("LargeEvent", args...)
	}
}

func NotifySmall(count int) {
	for i := 0; i < count; i++ {
		runtime.Notify("SmallEvent", true)
	}
}
```

Results for the NeoGo node:
and the RPC server can't handle
And here are the logs from the client side:
But CNs continue accepting blocks normally afterwards, which is OK.
Results for the C# node:
To summarize, I still stick to the previously stated opinion: the overall notifications size matters, not the number of notifications. But restricting the number of notifications will also restrict the overall notifications size, because the size of every standalone notification is restricted by VM limits.
Close #3490. Signed-off-by: Anna Shaleva <[email protected]>
Fix the problem described in nspcc-dev/neo-go#3490. The MaxNotificationsCount constraint is chosen based on the Mainnet notifications data, ref. nspcc-dev/neo-go#3490 (comment). Signed-off-by: Anna Shaleva <[email protected]>
Fix the problem described in nspcc-dev/neo-go#3490. Port the solution from nspcc-dev/neo-go#3640. MaxNotificationsCount constraint is chosen based on the Mainnet statistics of the number of notifications per every transaction, ref. nspcc-dev/neo-go#3490 (comment). Signed-off-by: Anna Shaleva <[email protected]>
Just for the record: here's some statistics for OnPersist/PostPersist triggers:
PostPersist:
It just follows the number of transactions (burning fees + sending something to the validator).
Need to investigate how stackitem refcounting works for the System.Runtime.Notify handler for both Go and C# nodes.