[Audio/Video of the Meeting](https://www.youtube.com/watch?v=jJvS6QjhPlM)
Tim Beiko: Okay, we are live. Welcome everyone to All Core Devs call #162. So, today, to start: we commissioned an impact analysis by Dedaub around SELFDESTRUCT removal, so they'll be here to give us an overview of the analysis. There have already been some conversations in the last day or so about it, some potential issues with the proposal of EIP-6780, and then some counterpoints to those issues, so we can have all that conversation. Then there's a bunch of stuff around EIP-4844 that's in the works, so it makes sense to cover all of that. After that, thinking about how Cancun shapes up, whether there are some things we want to add to or remove from CFI; there are already some comments about that in the chat. And then Daniel has put together a proposal to align the opcodes that are used across all of the proposed EIPs for both Cancun and the next forks, and then a couple more updates on some other EIPs. So hopefully we get through all of this in 90 minutes. But yeah, to kick us off: Neville, do you want to give us a rundown of your impact analysis of SELFDESTRUCT removal? I'll post the link in the chat.
Neville (Dedaub): Sure, yeah. Can I share my screen as well? Yes? Okay, cool. Thanks for hosting. So I'm Neville Grech from Dedaub; we were commissioned to do a study for the Ethereum Foundation regarding the removal of the SELFDESTRUCT opcode, or the change in semantics of that opcode. All three of us on the team are on the call, so feel free to ask us questions. All right. So the scope of the study was basically to determine the impact of changing the semantics of SELFDESTRUCT. SELFDESTRUCT has been used, throughout its lifetime, to either safely transfer ETH or to perform upgrades, ever since CREATE2 came about. But there are also some question marks, things like: is it actually being used to burn ERC-20 tokens? So we wanted to find the affected projects and determine the impact. And all of these findings are colored by the fact that some projects have not been used recently, or they don't have a high balance, or they're not a known contract, so the impact has to be considered against all these factors. The other thing is that we have two different proposals, and we want to help determine which one to select, if any at all. So, just to give a little bit of an overview of how SELFDESTRUCT currently works, for those who are external: SELFDESTRUCT pops one operand from the stack, which is an address, and what it does is send all ETH in the current contract to that beneficiary address. But, unlike CALL, it does not actually create a call frame to execute the beneficiary's code. So it can be used in a way that works even for smart contracts that block transfers in their receive function; the send-all part is still going to work.
It also clears the runtime bytecode of the current address, resets the nonce to 0, and resets all the storage vars to 0. It does not issue a gas refund anymore; that changed, I think, a year and a half or a couple of years ago. So how is this going to change in EIP-4758? Very simply: it's going to be renamed SENDALL. It's only going to send all ETH to the beneficiary address; it's not going to clear the runtime bytecode, it's not going to reset the nonce, and it's not going to reset the storage vars. So that's the simple proposal. The way I remember which one is simple and which one is more complicated is by the first number: the simple one starts with a 4, the more complicated one starts with a 6. In EIP-6780, the semantics of SELFDESTRUCT are essentially the same as they are now, except that there's a condition: if the address, and by address I mean specifically the address executing the opcode, was created in the same transaction, then you do the same thing as SELFDESTRUCT does now: clear the runtime bytecode, reset the nonce and all storage vars. Otherwise these things will not happen. Okay. So, now that we know what each one of these proposals does, here's a little bit of a summary of what we found in the study. First of all, some of these things are subjective. These are protocols which we think are affected; for some of them we estimated the impact as low because they're more likely to be false positives. We have to clarify, for instance, in the case of Celer over here, because we've seen some weird behavior if the SELFDESTRUCT instruction changes. But essentially the impact is quite minimal, especially with the more complicated proposal, 6780. So I'll go through these one by one and give a little summary of what we saw.
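The three behaviors Neville walks through can be sketched roughly as follows. This is a simplified model, not client code: the `Account` shape, the proposal labels, and the `created_this_tx` parameter are illustrative stand-ins for real EVM state handling.

```python
from dataclasses import dataclass, field

@dataclass
class Account:
    balance: int = 0
    nonce: int = 0
    code: bytes = b""
    storage: dict = field(default_factory=dict)

def selfdestruct(state, addr, beneficiary, created_this_tx, proposal):
    acct = state[addr]
    # All variants keep the "send all" part: the full ETH balance moves to
    # the beneficiary WITHOUT a call frame (its receive code never runs).
    state.setdefault(beneficiary, Account()).balance += acct.balance
    acct.balance = 0

    if proposal == "current":
        clear = True                      # always wipe code/nonce/storage
    elif proposal == "EIP-4758":
        clear = False                     # renamed SENDALL: never wipe
    elif proposal == "EIP-6780":
        clear = addr in created_this_tx   # wipe only if created in this tx
    if clear:
        acct.code = b""
        acct.nonce = 0
        acct.storage.clear()
```

Under EIP-6780, a pre-existing contract that calls SELFDESTRUCT still loses its ETH balance, but its code, nonce, and storage survive; only a contract created earlier in the same transaction is actually deleted.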
So in the case of Axelar Network: Axelar creates contracts and then destroys them in order to do some safe Ether transfers. We think that it is upgradable, so the impact is high, but it can be fixed. And there is no impact on Axelar Network if the second EIP, 6780, is chosen; it can remain operational without any upgrades under that EIP. In the case of Sorbet, it's similar to Gelato: the part of the smart contract that uses SELFDESTRUCT is not upgradable, but it really isn't used much. In the case of Gelato, it's used in conjunction with Pine Finance, so we think Pine Finance is affected as well. Now, I've skipped Celer. The Celer case is interesting, because it seems to relay messages from another chain, but the messages ought to be unique, and the uniqueness of the message is used as part of the salt when creating a smart contract using CREATE2; those smart contracts are subsequently destroyed. So we think that if the messages are indeed unique, then Celer is not going to be affected. But it's obviously going to take a while to go through the entire Celer protocol, which is very complicated, and we'd have to look at the way it interfaces with the change as well. That's what we think at this point; we'll try to confirm with the developers. ChainHop actually works in a very similar way to Celer in the part that has this issue. JPEG'd is affected, and the 1000 Ether Homepage is also affected, in theory, though we haven't seen an instance in the past where, if we replayed these sequences of transactions, we would find the same effect. Still, it is impacted. So note that the estimated impact is subjective, but finding the potentially affected protocols is not.
We conducted this by looking at past transactions and, in some cases, by doing static analysis of the contract bytecode as well, so we did cover MEV bots even where there were no sources available. Now, if we look at this from a quantitative point of view, at the usage patterns: we looked at blocks between 15 million and 17.23 million. We measured the number of times, for instance, that CREATE or CREATE2 is used; quite a few times, as you can see over here, and CREATE2 pretty much dominates. There's an order of magnitude fewer SELFDESTRUCTs, but, interestingly, most of the SELFDESTRUCTs are used in conjunction with short-lived smart contract creations. Now, potentially, this can be impacted by EIP-4758, because a short-lived smart contract is not going to be removed at the end of the transaction, and so someone else can interact with it. But this next one is even more worrying: metamorphic patterns, where a contract is created at an address and destroyed, and then in another transaction, which could be weeks or months later, the same smart contract is recreated at the same address, and then destroyed again. There have been 22,000 instances of this. Now, this pattern will not be impacted by EIP-6780, but it will be outlawed by the other proposal, and Axelar Network is responsible for a few of these. And then, finally, this is what we were mostly worried about: long-lived metamorphic patterns. "Long-lived" refers to the fact that a contract is destroyed in one transaction and then recreated in another transaction, whereas "short-lived" means that a contract is created and destroyed within the same transaction. In this case, as you can see, the number of times this has happened is almost two orders of magnitude less than in the previous case.
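The reason a "metamorphic" contract can come back at the same address is visible in how CREATE2 derives addresses: only the deployer, the salt, and the init code hash go in, not the deployer's nonce. A rough sketch follows; note that Python's stdlib SHA3-256 is used as a stand-in for Ethereum's Keccak-256 (they pad differently), so the address values here won't match mainnet, but the structure is the point.

```python
import hashlib

def keccak256(data: bytes) -> bytes:
    # Stand-in hash: stdlib SHA3-256, NOT Ethereum's Keccak-256.
    return hashlib.sha3_256(data).digest()

def create2_address(deployer: bytes, salt: bytes, init_code: bytes) -> bytes:
    # CREATE2: keccak256(0xff ++ deployer ++ salt ++ keccak256(init_code))[12:]
    # No nonce involved, so destroy + redeploy with the same salt and init
    # code lands on the SAME address.
    assert len(deployer) == 20 and len(salt) == 32
    return keccak256(b"\xff" + deployer + salt + keccak256(init_code))[12:]

def create_address(deployer: bytes, nonce: int) -> bytes:
    # Plain CREATE mixes in the deployer's nonce (the real rule RLP-encodes
    # [deployer, nonce]; simplified here), so a destroyed-and-recreated
    # contract would land at a NEW address every time.
    return keccak256(deployer + nonce.to_bytes(8, "big"))[12:]
```

This is also why the Celer and Revest cases hinge on salt uniqueness: a unique salt per message or NFT id means each deployment targets a fresh address, so losing the ability to destroy and recreate at one address doesn't matter.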
So, just looking at this quantitatively, it seems like, especially if EIP-6780 is selected, most of these usage patterns would remain valid. And most of these 735 upgrades are not actually done in mainstream protocols; they are done by unknown contracts, or MEV bots, or things like that. Okay, so let me give you an example of a short-lived metamorphic contract: a contract which is created and destroyed at an address, and then created and destroyed again. I'll share these slides with you, so you can see; these slides are just a summary of the full document. So you can see here: this contract is created and destroyed, and in the next transaction the same contract is created and destroyed. These are two separate transactions, but, as you can see, the same smart contract is created and destroyed. This is just an example from Axelar Network. We've also looked at some protocols where this is done, or could potentially be done according to how the smart contract code operates. For instance, this is a protocol called Revest. We could look at a sample transaction, but there's no need; basically what happens is that Revest creates a smart contract, and then, I think, it transfers an NFT when you withdraw, and it actually uses the NFT id within the CREATE2 salt. Again, this is just an example; if you want to see more examples, look at the full report. But essentially, over here, this cloneDeterministic call is going to take some entropy from the NFT id, and down the line it creates a new smart contract.
And it's going to use the NFT id as part of the salt, and we think that, since the NFT id is monotonically increasing, there are not going to be clashes in the smart contracts that are created. So even though the smart contracts cannot be destroyed if the proposal goes through, there are always going to be new smart contracts created, so that's fine. It takes quite a while, by the way; this study was conducted over two weeks, and it takes quite a while to go through all of these protocols and verify them. But all we need to do, now that we have this discussion going on, is confirm that the ones we thought were false positives are indeed false positives. Okay. So, one of the other things that we tried to do is use static program analysis to find possible behaviors that we haven't seen on the blockchain so far, but that may potentially happen. For instance: someone deposits tokens to a smart contract or to some address and, for some reason, with the proposal, those tokens cannot be retrieved. Someone asked that question on one of the forums used to discuss these proposals. So what we did for this is, first of all, we found all contract factories. For this we performed static analysis, because what contract factories do is create a smart contract, using something like CREATE2, from within their own code, and they exhibit certain patterns there. Then, out of these, we found the ones that do CREATE2, and we applied further static analysis. In a nutshell, and this is pseudocode, not the real code, what it does is: it finds a SELFDESTRUCT in a contract that was created by these factories, and checks that the contract doesn't have certain other capabilities.
For instance, an ERC-20 transfer, an arbitrary call (something like contract.call that allows you to pass in any data), or a delegatecall. If none of those exist, you can potentially have funds stuck in this smart contract after the EIP goes through. So we looked at many of these examples; many of them were MEV bots, and of course MEV bots don't have verified sources, so we had to use our decompiler. It takes a while to go through these implementations, and there's some trial and error here, so it's not an exact science, but we concluded that all of them were false positives for this particular pattern; we haven't found any real examples of it. Okay. What was surprising is that there's been a lot of discussion about CREATE2 and SELFDESTRUCT being used to perform upgrades, but in mainstream protocols we haven't actually found this anywhere. If you look at the fine print, and at some of the libraries that are used to do these metamorphic contract upgrades, the UX is a problem. When you do metamorphic contract upgrades using SELFDESTRUCT and CREATE2, you cannot do it atomically; you have to do it in two separate transactions. Now imagine doing a governance proposal where in one transaction you destroy the contract, and in a second one you recreate it; in the meantime, people might go in and use the protocol with the smart contract not yet recreated, and that can cause all sorts of issues. And the other thing is the state setup: a mainstream protocol would have quite a bit of state that needs to be recreated when you do a SELFDESTRUCT and CREATE2. So the pattern is not used much; it's mainly used by MEV bots.
And there are obviously arguments to be made as to why MEV bots would use it: it's more efficient to do it that way, you don't have proxies. But maybe we'll hear about this in the rest of the meeting. So, in summary, we think the impact is moderate, especially with EIP-6780, which is mostly not going to affect mainstream protocols. Metamorphic contract upgrade rates are low. One thing that we found is that, even though there's been a lot of discussion about SELFDESTRUCT being deprecated, people are still using it at the same rate, irrespective of that discussion, though very rarely overall. And throughout the study, incidentally, there was also some evidence of SELFDESTRUCT being harmful. So, yeah, that concludes the summary. I think we'll be discussing a few things throughout the call, and if you have any questions, just ask me or my colleagues on the call. Thank you.
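The stuck-funds filter Neville describes as pseudocode might look roughly like this opcode-presence check. The opcode byte values are real EVM ones, but the filter itself is a simplification of Dedaub's analysis (which runs on decompiled code, not raw bytes), and the function names are mine.

```python
# Real EVM opcode byte values.
SELFDESTRUCT, CALL, DELEGATECALL = 0xFF, 0xF1, 0xF4
PUSH1, PUSH32 = 0x60, 0x7F

def opcodes(code: bytes):
    """Linear scan that skips PUSH immediates (a rough disassembly)."""
    i = 0
    while i < len(code):
        op = code[i]
        yield op
        # PUSH1..PUSH32 carry 1..32 immediate bytes that are data, not code.
        i += 1 + (op - PUSH1 + 1 if PUSH1 <= op <= PUSH32 else 0)

def stuck_funds_candidate(runtime_code: bytes, made_by_create2: bool) -> bool:
    ops = set(opcodes(runtime_code))
    return (made_by_create2
            and SELFDESTRUCT in ops       # relies on destruct semantics
            and CALL not in ops           # no call out (ETH/token transfer)
            and DELEGATECALL not in ops)  # no delegated escape hatch
```

A contract flagged by this check has SELFDESTRUCT as its only way to move value out, so under the new semantics deposits to it could become unretrievable; the report concluded the real-world hits were all false positives.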
Tim Beiko: Thanks a lot for the presentation. That was very good.
Neville (Dedaub): I guess before.
Tim Beiko: Oh, thanks. William, I see you have your hand up. Before we go into the SETCODE stuff, I just want to ask: does anyone, on the client teams or otherwise, have questions about the presentation or the report in general? After that, we can go over next steps for the specific proposals. But yeah, any questions, thoughts, comments on the report or presentation? Okay. If not: where we were at prior to this report was that we'd included EIP-6780, which is basically the second proposal mentioned in the presentation, and which allows SELFDESTRUCT only if it's within the same transaction as the CREATE call. William, you raised a couple of cases where this could break things; do you want to take a minute to walk through those? And then we can take it from there.
William Morriss: So I'm a user of the CREATE2 upgrade pattern, and I have a prepared statement. The report skims over MEV, but doesn't speculate why this upgrade pattern is common among MEV bots, who wrote their contracts in assembly. The reason is broader than MEV and applies to normal DEX trading: indeed, any actor trying to get competitive transactions included with any urgency must participate in priority gas auctions. Off-chain systems are being built to allow anyone to participate in these auctions. Such auctions are denominated in gas used, so any upgrade mechanism with any gas overhead whatsoever cannot be used competitively. That leaves code replacement as the only viable mechanism for traders. Suppose I want to be able to upgrade my code in order to trade on the newest protocol. It is much simpler now, when I'm able to replace the code; without this ability I would need to redeploy to a new account instead. Every NFT, every token balance, every protocol approval, every staking deposit, every registration, indeed my entire presence on the chain, would have to move over to this new account. For identities, moving all your possessions to another account is painfully expensive. If your account is a smart contract, having a secure way to upgrade it can avoid such a migration; indeed, this is the purpose of several of the mechanisms identified in the report. Such systems are inventing ways to secure the upgrade process for the masses. SELFDESTRUCT removal destroys their work; SETCODE inclusion renews it. We can preserve the ability of accounts to change their code by including SETCODE in the same fork, and systems anticipating the fork can prepare by including SETCODE. If SETCODE is not included in the same fork, we could not securely anticipate what the semantics will be. It is critical for adoption of smart contract wallets that we preserve code mutability. As an aside, I've had some discussion in the topic channel on Discord.
And it seems that it would be a significant improvement for static analysis if we also disabled SETCODE during DELEGATECALL. This would limit the scope and risk, and make it much easier to identify potentially mutable contracts, because you would just be looking for the opcode. Anyway, I'm going to read your responses now.
Tim Beiko: Thanks. Okay, a lot happened in the past day. Someone put a set of security concerns he had about SETCODE in the Discord; I just shared them here. He unfortunately couldn't make the call. I think what it boiled down to in the Discord, right before I hopped on this call, was that if we want to introduce SETCODE as a mitigation for some issues caused by EIP-6780, there are some security concerns with that. The worst of them have to do with using SETCODE within a DELEGATECALL, and maybe we could enable SETCODE without enabling it in DELEGATECALL, and that would resolve those issues, I guess.
MariusVanDerWijden: So, yeah, I have a quick question: can we add SETCODE later on? Is it possible to add this opcode later on?
Tim Beiko: Assuming we do remove SELFDESTRUCT, right? That's what you're asking.
William Morriss: So, in order to allow contracts that are currently upgrading with SELFDESTRUCT to remain upgradable through the transition, it would be better that we don't have downtime in which code mutability is not possible. It's also important to finalize that opcode, so that at the time of the upgrade the contracts that would be frozen are able to be prepared for the SETCODE upgrade. So it's better that they happen in the same hard fork.
MariusVanDerWijden: is it technically impossible for them to be separated out?
Tim Beiko: What do you mean by separate it out?
MariusVanDerWijden: Is it possible for them to be in separate hard forks?
Tim Beiko: Well, if I understand correctly, yes.
MariusVanDerWijden: Yes, but the contracts that are affected are basically bricked between those hard forks, right? This, of course... look, I think code mutability is something that should have never been done in the first place. It's very much an anti-pattern, in my opinion, and people are taking advantage of it, and I don't really think we should include this opcode. No. And I also don't really think we should do this opcode in the future, but we can have a debate about that, in order to unstuck contracts or, maybe, make it cheaper for MEV bots to extract value. I think this opcode has a huge number of security implications, and we shouldn't, you know, do it.
Tim Beiko: Thanks. Guillaume.
Guillaume: Yep. Basically, a lot of what Marius already said: yes, I think it's a huge security issue. But the biggest problem, and this would be my question, is: if any of the arguments that were just given in the statement were true, or at least significant enough to make us consider including SETCODE, would simply not removing SELFDESTRUCT not be the superior solution? Does SETCODE offer something that not removing SELFDESTRUCT would not solve in a better way?
William Morriss: Yes. The weakness of the SELFDESTRUCT upgrade pattern is that there's downtime in the upgrade itself, because the SELFDESTRUCT takes place at the end of the transaction rather than during it: you must selfdestruct in one transaction and then, in another transaction, recreate the contract. SETCODE would allow us to upgrade safely and securely in place. If you all want more time to analyze SETCODE, please just postpone SELFDESTRUCT. Thank you.
Tim Beiko: Thanks. Ben?
Ben Adams: If SETCODE and SELFDESTRUCT deprecation both happened, wouldn't SETCODE need to be in a fork before SELFDESTRUCT removal? Because otherwise you couldn't upgrade the contract to use SETCODE: if SELFDESTRUCT removal happened first, you could no longer upgrade the contract, and if SETCODE didn't exist beforehand, you can't upgrade it to use it.
William Morriss: Yes, so I think it can be in the same hard fork, because contracts can contain invalid opcodes.
Tim Beiko: Okay, so people would basically deploy the contract with an invalid opcode prior to the fork, and then the opcode would become valid after the fork. Is that right?
Ben Adams: Yeah, but they would be dead contracts during that time.
Tim Beiko: Yes. Yeah, I guess I'd be curious to hear generally from the client teams, given all of this, how people feel about 6780 plus SETCODE. I think the first question is: we agreed to have 6780 in, pending the results of the impact analysis and making sure that not too many things would break. Are teams comfortable leaving it in, regardless of what we do with SETCODE, and analyzing that separately? Or do we feel like the continued inclusion of 6780 should be dependent on SETCODE?
Andrew Ashikhmin: I think we should leave 6780 in and analyze SETCODE separately.
Tim Beiko: Okay.
MariusVanDerWijden: I feel the same way. We should have 6780 as soon as possible, and think about SETCODE separately.
William Morriss: Could we perhaps finalize the opcode reserved for SETCODE, such that bricked smart contracts might eventually be unbricked in a future upgrade?
Tim Beiko: Yeah, we have a whole section about opcodes later on; Daniel's put together a full list. So I assume we can at least informally agree to reserve one for SETCODE, and we can discuss that as part of that section. Okay. So Erigon and Geth are on board with leaving EIP-6780 in. Does anyone else disagree with that? Does anyone disagree that we leave EIP-6780 in, and that we keep discussing SETCODE, which may or may not be included in this upgrade? Just in the last 12 hours there was a ton of discussion, so there's still a lot of back and forth. But does that make sense to people? Okay, so let's do that: no actual changes to the fork inclusion list. And if anyone wants to read the full impact analysis for SELFDESTRUCT, it's linked in the agenda, so people can find it there. Okay, next up: EIP-4844. There are a ton of PRs since then that propose some potential changes, and then some RLP and SSZ changes that might affect currently CFI'd EIPs. So, to start off: lightclient, you had PR 7062, which adds data gas used to the block header. Do you want to briefly discuss it?
lightclient: Yeah, I just wanted to see if there is any interest from other EL devs. I'm not sure if Peter is on the call, but while he was implementing some things related to sync, he noticed that the interaction with the sync code was a little different than he was expecting. After digging into it, we realized that excess data gas and the base fee are not as similar as we initially would have anticipated. Ultimately, the issue is that the excess data gas number in a header is going to be used in the descendant block: the value you need to compute the cost of the data transactions is actually the excess data gas from the parent header. For base fee it's slightly different: the base fee in the header is the base fee that's used during the execution of that block's transactions, and header validation is where you check that the base fee is computed correctly from the parent header. This is just a proposal; I was sketching out what it would look like to bring those two things in line, so that they both represent the value for the currently executing block. That's what the PR is. They both do essentially the same thing, I think. Ultimately the question is just: do we care enough about having these similarities to make this sort of minor change, or are we okay with the status quo? Personally, it's just another one of these things where, if we have the formatting slightly different, it increases the overall code complexity of clients, as we saw while Peter was implementing the syncing code, whereas if we had reused the existing mechanism in mostly the same way, it would have kept things a bit simpler.
So those are my thoughts; I probably slightly favor the data-gas-used approach in the PR, but I'm curious what other people think about it. Oh, Peter's here, too.
Peter Szilagyi (karalabe): Yeah, perhaps just a slight expansion to what Matt said. I think the complexity comes from the fact that previously the fee was defined by the base fee, and essentially we had two header fields: one of them tells us how much gas we consumed, and based on that we can validate the base fee from header to header. And for the base fee itself, if I want to run the transactions, I only need to look at the current block's base fee. With the blob transactions we kind of merged these two, how much we used and how much the next one costs, into a single field. And because it is a single field, it changes things basically everywhere. Previously, when we validated the header chain, we just looked at the header fields; we didn't care about the block contents at all. Then, when we ran the block contents, we just needed to look at the current header. Since this field got somewhat convoluted, it means that when I'm running the blocks, I somehow need to look at both the current header and the parent header. Similarly when downloading via snap sync: even if I'm not executing the block, I still want to verify the block body. And the thing is, it doesn't really matter how we interpret excess data gas, whether it's pre-execution or post-execution; if I convolve these two fields together, I will have this extra complexity that all validation code needs both the parent header and the current header. Whether this is worth fixing: one solution is to go with the current status quo and make the code around this structure a bit more complicated. The other solution is to split out the two fields so that it follows a similar pattern to base fee and gas used.
And if you have the two fields, the code around it becomes a bit simpler. Yeah, it's not that complicated; it was just something that was surprising to me, a new mechanism that behaved in a way that was kind of unexpected.
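A rough sketch of the symmetry PR 7062 aims for: with a data-gas-used field in the header, excess data gas validates header-to-header, just like base fee, and execution reads only the current header. Field names follow the 4844 drafts of the time, the constants are draft values, and the price formula is a linear stand-in for the real fake-exponential; treat all of it as illustrative, not client code.

```python
from dataclasses import dataclass

# Draft-era EIP-4844 constants, used here as placeholders.
DATA_GAS_PER_BLOB = 2**17
TARGET_DATA_GAS_PER_BLOCK = 2**18

@dataclass
class Header:
    excess_data_gas: int
    data_gas_used: int = 0   # the extra field PR 7062 proposes to add

def calc_excess_data_gas(parent: Header) -> int:
    # Header-to-header rule, mirroring how base fee follows gas_used:
    # only the PARENT HEADER is needed, never a block body.
    total = parent.excess_data_gas + parent.data_gas_used
    return max(0, total - TARGET_DATA_GAS_PER_BLOCK)

def validate_header(parent: Header, child: Header) -> None:
    assert child.excess_data_gas == calc_excess_data_gas(parent)

def data_gasprice(header: Header) -> int:
    # Execution / body validation then reads only the CURRENT header.
    # (Linear placeholder; the real rule is the 4844 fake-exponential.)
    return 1 + header.excess_data_gas // DATA_GAS_PER_BLOB
```

Under the status quo that Peter describes, `data_gasprice` would instead have to take the parent header, which is exactly the extra plumbing that leaked into Geth's snap-sync downloader.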
Tim Beiko: Thanks. Andrew.
Andrew Ashikhmin: Yeah, in Erigon we have separate stages for headers and bodies, so I would be very much in favor of making the change, so that we don't need a parent's body to verify a block. So yeah, I'm totally in favor. Just to double check: with this change, we only need the parent's header, not the parent's body, correct?
Peter Szilágyi (karalabe): No, actually, what the current scheme requires is: if you want to verify the current block's body, you need the parent header. That's the weird thing: the current transactions require the parent's header. With the proposed change, you could verify the headers completely separately, and then, when you want to run or execute the block, you only need to look at the current header; you don't need the parent header at all. That's how everything else currently works, but it requires this field being split out; that's the cost. So, for example, for us the complexity was in snap sync. We essentially have this two-phase thing where we download the headers separately, and then, for every header where the transaction hash is not empty, meaning there are transactions, we download the block body and fill in the header. Up until now we just said: okay, I want to fill this header, I download all the contents, and I just match, does this transaction list match what the header committed to? If yes, great. Now, with blob transactions, after I download the list of transactions, I also need to check whether the excess data gas gets computed correctly, for which I need the parent's excess data gas. It gets a bit funky. It's doable; the way we did it is that, from now on, a download task in our downloader is not a single header, it's a header and its parent. So it's not especially complex, but, for example, if you interrupt synchronization and resume it, then you also need to dig up the parent of the first header. There are these little tiny weirdnesses all over the place where, up until now, you just had the single header, and now things get a bit wonky. But again, the question is: is it worth it to add an extra integer to the header, or is it too much? And I don't really want to make this decision, because the current design isn't that painful. So if people say that it's not really worth it to stir up the wasps' nest for this, then I can live with the current design, whatever it is.
lightclient: Yeah, I mean, I think this one simple thing is really not that big a deal, but it's more the mindset I'm worried about. What will happen if in every fork there's that one small thing that doesn't really align nicely, and we could have just fixed it? If we keep that mindset, in 5 or 10 years, how many of these special cases are we really going to have? A lot, I think.
Tim Beiko: Right. I guess, given Geth's and Erigon's comments, does anyone think we should not do this change?
stokes: So Dankrad made a point about refactoring the execution gas in a way that might make the current thing with 4844 make more sense. I don't understand what he's envisioning well enough to really defend it, but I wonder if anyone else does?
lightclient: Is it written somewhere? I don't think I've seen it.
stokes: Yeah. Well, so I guess I'm also a little confused; there are maybe 2 things here. One of them is just changing how it is computed, moving to nicer math to better approximate exponentials. It sounds like the change we're bringing up right now is unrelated to that, and is just more about how we actually
lightclient: validate these things? Is that correct? Yeah, I think it doesn't have anything to do with the exponentials; it's more like: what is the order of validation, and where do you get the data for validating? And I don't think what Dankrad is looking at doing involves getting rid of the gas used field, which is kind of what allows us to make the base fee in the header the base fee for that header's block. That's what we're missing for excess data gas.
Stokes: Yeah. So maybe if there was pushback, it was a miscommunication; I'm not sure. But does anyone think this is going to push back #4844 timelines too much?
Andrew Ashikhmin: Yeah, I just wanted to say that I think it doesn't. I misunderstood it in the beginning, because I thought it eliminated the need for parent bodies, but it's not relevant to parent bodies. So I need more time to look at the proposal; I don't have a position on it at the moment.
Tim Beiko: We have the #4844 call on Monday. Does it make sense to give people the next 2 or 3 days to look it over and make a decision on that call?
stokes: Sounds good to me.
Tim Beiko: And if people can't make that call, just leave your comments on the PR directly, and we can consider those on the call.
Tim Beiko: Okay, next up.
Peter Szilágyi (karalabe): Let me just add one more thing. George actually asked how this whole thing relates, for example, to light clients: if you want to just verify the headers but don't have the bodies, because obviously you don't download the bodies. The short answer is that a light client will not be able to verify the excess data gas in its current form. The same way that a light client cannot verify the gas used field, because you need to run the transactions, a light client cannot verify the part of the excess data gas that tracks how many blobs are included.
And the other problem is another one of these weirdnesses: a light client can verify the base fee, but it won't be able to verify the blob fee, because of this dual nature of the excess data gas. So for light clients you would just need to take it for granted that the field is correct. But then again, with light clients, if we assume that the consensus network is on a semi-good chain, a lot of validations could be omitted. I mean, you could even debate that if the consensus client tells you that this is the header, why even bother validating anything; just roll with it. So it's not really the end of the world; it's just a quirky thing, and we have to decide which quirk we want to live with.
Tim Beiko: Got it, thanks. Okay, next up: this is an old PR that recently had some movement, refactoring the validity conditions for blocks. I believe this is the one where blocks didn't actually check whether the blob cap was exceeded, only the individual transactions.
Stokes: Yeah. So Ansgar was working on this; I don't think he can make the call, so I will answer any questions on this PR. That's basically it: some things kind of got dropped, and I think the main change at this point is exactly what you said. There's currently no way in the spec to say there are only so many blobs per block. If you look at the current spec, I could send maybe only a few blobs per transaction, but I could send as many transactions as I can pay for, and then suddenly there's, you know, 30 MB of blobs. So it'd be nice to have this defined at the EL, and that's what this change does.
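The missing check being discussed can be sketched as a block-level cap on total blob data gas, on top of the existing per-transaction cap. This is an illustrative sketch only; the constants are the draft-spec values from around this time and the field layout is an assumption:

```python
# Block-level blob validation sketch: sum blob counts across all
# transactions in the block and enforce a whole-block cap, instead of
# only limiting each transaction individually.
DATA_GAS_PER_BLOB = 131072        # 2**17, per the draft spec
MAX_DATA_GAS_PER_BLOCK = 786432   # assumed draft value (6 blobs)

def validate_block_blob_count(blob_hashes_per_tx: list) -> bool:
    """blob_hashes_per_tx: one list of versioned hashes per blob transaction."""
    total_blobs = sum(len(hashes) for hashes in blob_hashes_per_tx)
    return total_blobs * DATA_GAS_PER_BLOB <= MAX_DATA_GAS_PER_BLOCK
```

Without the block-level check, each list here could individually pass a per-transaction limit while the block as a whole carries far more blob data than intended.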
Tim Beiko: Got it. Anyone opposed to this? It seems pretty straightforward. Okay, then I guess we can probably go ahead. Oh, yeah, sorry, I was trying to find the button.
Peter Szilágyi (karalabe): There was a comment yesterday, someone said that this field is already validated by the consensus client. My 2 cents, and I already wrote this in the channel: in my opinion it would be nice to have it validated by the execution client too. Simply because, one, it's simpler and, I mean, safer; and the other is that I think it would be useful to have the validation somewhat self-contained. So basically, if I just give a batch of blocks to the execution client, it can do as much validation as possible. Maybe it cannot do chain selection, but it should be able to verify everything else. And this extra check is needed to forbid blocks containing hundreds of blobs.
Stokes: Yeah, I think we all agree. It seems to make sense to me to keep this in line with execution gas, where there's a limit at the EL and that's sort of the ground truth. There is a cap at the CL, but that's more of a networking thing at this point. And again, as we've discussed, you could imagine this kind of limit varying independently at each layer, and I'd be comfortable with that.
Tim Beiko: Okay, so I guess we can go ahead and merge this whenever it's ready. Okay, next up: devnets. Barnabas, you wanted to give a quick update on devnet 5, and then Gajinder, you had 3 PRs related to devnet 6. So let's do that. Barnabas.
Barnabas Busa: Yeah, sure. To recap: in the past 2 weeks we had a long period of non-finalization on devnet 5, and it ended up requiring 900 validator accounts to be drained to force them to exit, and we managed to get into a finalized state again with Lighthouse and Nethermind. At that point we had a hundred validators running on a single node, and everyone was able to catch back up to that using checkpoint sync. I then decided to make some new deposits so we can see if any of them go through fine now, so I made a thousand deposits yesterday, and these are being processed right now. One thing I noticed is that everyone gets ejected at a balance of 31.75, which is a bit strange, because in the config I have set ejection at 31, so I'm not quite sure why that happened. We had some discussion about it in the interop channel. What is it?
Tim Beiko: Oh, Ben says it's hysteresis. Can you expand on that?
Ben Edgington: If my microphone is working, yeah. It would be based on the effective balance: when the validator's actual balance drops below 31.75 ETH, the effective balance drops to 31. So if you set the ejection balance to 31, then when the actual balance drops below 31.75 you trigger the ejection, which sounds like what you're seeing.
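The hysteresis Ben describes comes from the consensus spec's effective balance update rule; a sketch of that logic (constants are the mainnet spec values, amounts in Gwei):

```python
# Effective balance hysteresis, as in the consensus spec's
# process_effective_balance_updates. The downward threshold of 0.25 ETH
# is why the effective balance steps 32 -> 31 only once the actual
# balance falls below 31.75 ETH.
EFFECTIVE_BALANCE_INCREMENT = 10**9       # 1 ETH in Gwei
MAX_EFFECTIVE_BALANCE = 32 * 10**9
HYSTERESIS_QUOTIENT = 4
HYSTERESIS_DOWNWARD_MULTIPLIER = 1
HYSTERESIS_UPWARD_MULTIPLIER = 5

def update_effective_balance(balance: int, effective_balance: int) -> int:
    down = EFFECTIVE_BALANCE_INCREMENT * HYSTERESIS_DOWNWARD_MULTIPLIER // HYSTERESIS_QUOTIENT
    up = EFFECTIVE_BALANCE_INCREMENT * HYSTERESIS_UPWARD_MULTIPLIER // HYSTERESIS_QUOTIENT
    if balance + down < effective_balance or effective_balance + up < balance:
        return min(balance - balance % EFFECTIVE_BALANCE_INCREMENT,
                   MAX_EFFECTIVE_BALANCE)
    return effective_balance
```

With ejection configured at an effective balance of 31 ETH, a validator is therefore ejected once its actual balance drops below 31.75 ETH, matching what Barnabas observed.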
Barnabas Busa: Okay, yeah, that makes sense; I had no idea. Anyway, right now we are finalizing, and we have about 500 in the queue; it's looking quite nice. Another thing: devnet 6 specs are being collected right now, and I'm open to including or not including anything else; we have a link for that too. There are quite a few PRs that I would like to include in devnet 6, and this would still be a 4844-specific devnet. Then hopefully devnet 7 would be a Cancun devnet, which would combine the other PRs that are not related to 4844.
Tim Beiko: and that is
Barnabas Busa: Devnet 7? Months from now, maybe. But first we should focus on the devnet 6 launch.
Tim Beiko: Yeah, that sounds good. Okay, so I'm looking at your doc now, and I see some of the PRs. Basically a lot of these PRs are the same ones Gajinder had up, and I think it makes sense to take most of the call on Monday to go over this list you have. But do you want to discuss the 3 that you brought up on the agenda for today? So, first of all, #7038.
Gajinder: Yeah, hi, Tim. So, #7038. What 7038 does is basically refactor a little how the network payload is built: it's now first the transaction payload, then blobs, then commitments and proofs. Earlier it was transaction payload, then commitments, then blobs and proofs. It just feels nicer this way. The second thing is that it adds clarification that the blobs are flat; otherwise there was this interpretation, which we also discussed in the Discord, that each blob itself is a list of field elements. So this PR flattens out the blobs, in a big-endian way. And then it just cleans up some references and adds some validation conditions that were missing.
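The flattened encoding Gajinder describes can be sketched as follows: each blob is a fixed number of field elements, and each element is serialized as 32 bytes big-endian and concatenated into one opaque byte string (the function name here is illustrative, not from the PR):

```python
# Flatten a blob: 4096 field elements, each 32 bytes big-endian,
# concatenated into a single byte string rather than nested as a
# list-of-field-elements structure.
FIELD_ELEMENTS_PER_BLOB = 4096

def flatten_blob(field_elements: list) -> bytes:
    assert len(field_elements) == FIELD_ELEMENTS_PER_BLOB
    return b"".join(fe.to_bytes(32, "big") for fe in field_elements)
```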
Tim Beiko: Okay, yeah, I see there have been a couple of comments on this, but any other thoughts from anyone on the call? Okay, next up. So this follows our discussion from last time about the precompile inputs being encoded big-endian. Any thoughts?
Gajinder: I think in the consensus specs the KZG big-endian change has already been merged, so.
Tim Beiko: Okay, got it.
Gajinder: This is quite natural. Then.
Tim Beiko: Sorry, someone else was trying to say something. Okay, yeah, if there are no more issues we can move on. And then, okay, this one was on the execution side: basically adding data gas used and data gas price to receipts of #4844 transactions.
Gajinder: Yeah, it just adds those fields to the RPC response.
Tim Beiko: Yeah. And I guess, does anyone have strong opinions about this? Okay, so it probably makes sense to include it then. I think lightclient had a comment there saying we should wait until we effectively have the full set of changes for Cancun, so that we can merge them all at once, but that seems to be the only concern. And Roberto is saying that some clients already have this implemented. Okay, so I think that's what we had in terms of PRs for today. Let's go over Barnabas's list for devnet 6 in more depth on Monday's call, as well as the PR that Matt opened, which I'm now forgetting what it was about; scrolling up, sorry, I have way too many tabs open here. Oh, that, okay: adding the data gas used to the header. So we'll discuss that more thoroughly on Monday as well. Cool. Okay, and then one last thing on #4844. So we agreed to move it to RLP. That said, we had basically included the SSZ optionals EIP in Cancun, and we'd also CFI'd 6493, the SSZ transaction signature scheme. There were some comments that we should remove those, given we've moved 4844 to RLP instead. Does anyone think we should keep any of the SSZ EIPs, either CFI'd or included in Cancun? Okay, no objection. So I'll do this after the call: I'll take 6475 out of the included list, and I'll remove 6493 from the CFI list. Okay, anything else on 4844? Okay. And next up. Oh, yes.
Alexey (@flcl42): My question: can we allow contract creation in blob transactions? As far as I remember, we did not want that because of SSZ and so on, but now we could have it. Do we still need to forbid it?
Tim Beiko: Sorry, I'm not sure I quite understood; is this about contract creation?
Alexey (@flcl42): Yes.
Tim Beiko: Yeah, so banning contract creation in blob transactions, right?
Alexey (@flcl42): Yeah. What is the reason? Could someone explain?
Lightclient: So, the original reason was partially motivated by the fact that this thing was not well specified in SSZ. But I think there's still a good reason to do it, and that's that we have these 2 opcodes for creating contracts, and there's not really any particular use case I'm aware of where we have to have the ability to create a contract via a transaction. And through the hard forks we've seen that this is one of these frustrating things to test, because for every change to transactions, and often for new opcodes or new EVM functionality, we have to test in the context of normal execution and then always in the context of initcode, both via the CREATE opcodes and via the create transaction. So I would like to start moving away from create transactions in general and simply rely on the CREATE functionality within the EVM.
Alexey (@flcl42): Ah, I see. Thanks.
Tim Beiko: Yeah. Anything else on #4844? If not, okay. So, Danno, you put together this document: we're proposing a bunch of opcodes for Cancun and Prague, and it's all starting to be a mess, and you have a proposal for how we can make it cleaner. I'm not sure if you're speaking, but you're muted right now.
Danno Ferrin: I'm on now, yes; I was on mute.
Tim Beiko: And we see your screen.
Danno Ferrin: Yeah. So, like Tim said, there are a lot of opcodes coming in and a lot of space being occupied and moved around. Part of it, I have to say, was the early responsibility of EOF, because it occupies 3 key opcodes at the end of the 5-series. So here's a quick overview of the opcode blocks we currently have: the 5-block is basically filling up, holding the storage, memory, and control-flow opcodes, and that's where the initial EOF control-flow opcodes went in. The proposal starts with a couple of philosophies. One of them is to move all EOF-only opcodes, the ones that only make sense inside EOF containers, to a separate block, the E-block, and that moves those out of the 5x range. Then we move everything back to the block where it makes the most sense: that would put TLOAD and TSTORE back at 5C and 5D, probably, and MCOPY, if it passes, at 5E. This does not affect BLOBHASH or the beacon root; the beacon root might be out, that's a late-breaking change overnight. Then there are proposed changes in the F-series; that series is filling up. There's another proposal for a new series of call opcodes later on in this meeting, so there's reserved space for those, reserved space for PAY, and I guess we need reserved space for SETCODE, as mentioned earlier in the meeting. The purpose of this is to get a more sensible packing and grouping of the opcodes and to fill in the space left by EOF moving into its own block. So this is what's proposed: TSTORE and TLOAD are currently in, and everything else, BLOBHASH and the rest, is speculative, to be added at some point, if at all.
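The concrete placements named in this discussion can be summarized as a small lookup table. Only the slots actually mentioned here are included; everything else in the proposal was explicitly speculative at this point:

```python
# Placements named in the discussion for the 0x50 block after the
# EOF-only opcodes move out to their own E-block.
PROPOSED_0X50_BLOCK = {
    0x5C: "TLOAD",   # moved back from the EOF-occupied range
    0x5D: "TSTORE",
    0x5E: "MCOPY",   # only if the EIP passes
    0x5F: "PUSH0",   # already shipped; placed here for PUSH-series math
}
```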
Tim Beiko: Thank you. Anyone have thoughts, questions, comments.
William Morriss: Could we assign SETCODE in this meeting?
Danno Ferrin: Again, until it ships it could move away. But my thought is it should be in the F-series rather than the 4-series, and I think the last F-series slot available is FC, so we could put it in there as its current location.
Tim Beiko: Okay. Alex.
Alex Beregszaszi: Yeah, regarding the EOF side: we have been discussing these opcodes in the last 2 or 3 EOF breakout calls, so I think we can do this in any case for EOF. There's no question about that; we're really in favor of it from the EOF side.
Tim Beiko: Sweet
Charles C: There was a bit of discussion of putting MCOPY at 0x4F, which is slightly before the 0x50 series, but I don't know if it really makes so much sense; it was just an idea.
Danno Ferrin: The reason I wouldn't want it at 4F is that those are all focused on block data, stuff that comes in from the environment, what's in your block headers. The only reason PUSH0 got put at 5F is some fun math related to the PUSH-series operations; otherwise I would have put it somewhere else. So putting MCOPY at 4F doesn't make quite the same sense; I think a better home for it would be 5E.
Tim Beiko: Is this something we should try and keep somewhere better than a standalone doc? I know we have something similar for transaction types somewhere.
Danno Ferrin: That's exactly what I'm thinking. I wasn't sure if the right place for this is the execution-specs repo or an informative EIP. So, to get the discussion rolling, I just did my own doc and put it in the link, and we can move it somewhere else. If it's kept, it should be kept a living document: as proposals become non-viable they should be removed from the list, and as they become viable, maybe some speculative opcode placements get added. But again, shipping opcodes get priority over discussed opcodes.
Tim Beiko: Yeah. Where are the transaction types in the execution specs? Right, they're in a folder called lists/signature-types, and there's a readme in there that also keeps track of tentative signature types.
Lightclient: This is a pretty similar thing. I don't know if it would go in the lists folder, I mean, maybe. But yeah, it makes sense in execution-specs in general.
Tim Beiko: Yeah, I kind of like lists, because I guess it is a list of opcodes.
Lightclient: Yeah, I mean, I think it fits well with the other stuff, like having CFI'd EIPs and having an idea of proposed changes for forks. This is just a different way of looking at the proposed changes for forks.
Danno Ferrin: Okay, I'll make a PR for execution-specs that includes all this, and I'll try to follow the signature-types format. But are there any objections if I open up PRs against the EIPs to move the opcodes for Cancun? All right, I'll do that today too.
Tim Beiko: Cool, yeah, thank you; this is a really great doc. Okay, a couple more things. On the last call we briefly discussed trying to make some decisions on this call about any other EIPs we might want to include in Cancun. A couple of notes: 4788 has had some updates since the last discussion, and I believe there's a new proposal for revamped call instructions, plus a whole bunch of other proposed EIPs. So I guess, quickly: maybe Alex Stokes, you want to give a quick overview of the changes to 4788, then Alex (axic) can give a quick overview of the call EIPs, and then we can hear from client teams about what they feel might make sense to include in Cancun.
stokes: Sure. This kind of continues from ACDC last week. The way 4788 was before the current updates, it would add a new opcode, something like BEACON_ROOT, and that would basically read from some storage in the execution state where these roots would be. That was kind of merging 2 different mechanisms: a precompile-like thing and an opcode-like thing. So we decided it would just be cleaner to bite the bullet on a stateful precompile, and that's what the current update does. Basically, it's just a precompile like we're all familiar with; it happens to have access to execution state, and that's where this data is stored. This is a bit different from, say, how BLOCKHASH works today, where rather than reading the execution state there's this history buffer that just happens to also be there. The idea is that, in an ideal world, the state transition function for Ethereum would be a pure function of the state and not rely on this other little history thing; this has implications for stateless clients and things like that. So I think, at least among the people I've engaged with so far, the stateful precompile direction is generally liked. Marius has an implementation in Geth, I think, for both things, but definitely the stateful precompile, and the change has been merged already, so I think that's not too controversial. The other big thing, and probably the last question on this, was discussing how we actually want to key these roots. Assume we have the stateful precompile; we basically need some input data to figure out: at this slot, or maybe this timestamp, what was the beacon root? From here, the 2 big contenders are either using the EL timestamp, which is sort of a proxy for slots,
or introducing some way to map into CL slots while we're writing this thing into the EL state. The catch there is that we don't want to violate the barriers of abstraction between the EL and the CL, so it's a little tricky. I guess I'll just pause there; does anyone have any questions so far? Okay. The one thing from here, then, is maybe I can just ask Marius directly, because he had some feedback via the prototype. Is Marius on the call?
MariusVanDerWijden: Yes, I'm here. So the prototype is very much a prototype, and it has both the stateful precompile and the opcode implemented; in the beginning it was specified as an opcode. The idea is that this data needs to be in the state, because otherwise we would have a separate storage segment that nodes would need to maintain, which would complicate the state transition function. Basically, the state transition function would then be the state, the transactions, the header chain for the last 256 headers for the BLOCKHASH opcode, plus this additional storage segment that keeps the beacon roots. So what we decided on, well, what Alex proposed, was to move this into the state. It's kind of weird to put something from outside into the state, but I think it's the best we can get; it needs some getting used to, but in the end I think it's fine. And the reason we decided against the opcode is that this opcode would read storage slots from a specific address, but that would mean we have this kind of address that has storage slots and is not really a precompile, so it would introduce a new paradigm somehow. I think the best way to do it is to just add a precompile that returns this data. And now the only open question for me is: how are we going to key this data?
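One way to picture the design under discussion is a timestamp-keyed ring buffer living in the precompile's storage: the protocol writes a root at the start of each block, and callers query by timestamp. This is a hypothetical sketch only; the buffer length, layout, and function names are assumptions, not the spec, and the keying question being debated here was still open:

```python
# Hypothetical beacon-roots stateful precompile: a ring buffer in EL
# state keyed by timestamp. Storing the timestamp alongside the root
# lets a lookup distinguish "skipped slot" / "evicted entry" from a hit.
HISTORY_BUFFER_LENGTH = 98304  # assumed buffer size

buffer = {}  # stand-in for the precompile's storage slots

def write_beacon_root(timestamp, root):
    """Done by the protocol at the start of each block."""
    buffer[timestamp % HISTORY_BUFFER_LENGTH] = (timestamp, root)

def get_beacon_root(timestamp):
    """The precompile call: None models a failed lookup."""
    entry = buffer.get(timestamp % HISTORY_BUFFER_LENGTH)
    if entry is None or entry[0] != timestamp:
        return None  # skipped slot, evicted entry, or never written
    return entry[1]
```

This also illustrates Stokes's skipped-slot concern: a caller querying the timestamp of a missed slot gets nothing back and has no direct way to find the next written entry.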
Stokes: Yeah. And in the prototype I think you just used the timestamp from the header, which, at least to me, is half of the problem. The problem with this is that if there are skipped slots, it's very unclear to the caller how to find the actual next root.
MariusVanDerWijden: Why?
Stokes: Because there would be gaps, right? Basically, you're saying: at timestamp T0 I write the root. Let's say there are 4 missed slots, and now there's some timestamp much greater than just 12 seconds later. If I want to find the block root for the next thing, the way it's written, I would read back basically 0 as the root, and then I would not really know what to do. I would need to jump forward some amount that I just don't know, and then have to search through the contract, basically.
MariusVanDerWijden: Yeah. I'm not really sure how this would be used by contracts. I mean, yeah, that's a good question. Like, one example would be: I have a slot.
Stokes: And, you know, where I'm coming from is: I don't know if it's missed or not; I just have some slot, because I know there are 64 bits and I can just pick one out of that space. So I have some slot, and I want to know what the root is; that would be the API I would like. The thing you're suggesting is more like: I, as the caller, also need to know ahead of time that there was already a block there, which, you know, might be an okay relaxation.
MariusVanDerWijden: Oh, we could even turn it around and say that it's keyed by beacon root, and the value is the timestamp. So the caller could say: I have this beacon root, what is the timestamp for it? If it's only about needing to know that a specific beacon root existed at some point.
Stokes: Right. So let's zoom out a bit, just to respect everyone's time here. Has anyone else looked at this, and/or does anyone feel strongly about including it? Okay.
MariusVanDerWijden: Oh, I don't feel strongly about including it, by the way. I think it's nice and it should be included, but I'm not sure if we should include it in Cancun.
Stokes: Okay. Does anyone else have any input? I think we can resolve this key question one way or another, and then from there it's just a question of whether we want to include it in Cancun. It would help a lot of applications people want to build; for example, in the chat George has called out all sorts of things around accessing beacon state from the execution layer. The other thing is this does kind of tee up other things in future forks, like EL-triggered validator exits. So there are ways in which this tightens up the staking model that are really valuable, and this change lays the groundwork for that. In that sense, I think the sooner the better, just because it unblocks other stuff we want to do.
Tim Beiko: And I guess, if no one has strong opinions on this: do client teams have strong opinions on whether they want to include anything else at all, or not for now? And are there things beyond 4788 that they want to include?
Andrew Ashikhmin: We would like to include EIP-5920, the PAY opcode. It's a simple usability improvement.
Tim Beiko: Anyone else have thoughts, proposals.
Stokes: I mean, it might be helpful just to hear where client teams are on what we currently have, which is 4844, transient storage, and the SELFDESTRUCT one. Do we feel like there's room for another EIP at all? Could we discuss MCOPY and PAY?
Tim Beiko: And, okay, 2537 as well, which is the BLS precompile. So I guess we have 4 minutes. Maybe, if anyone has thoughts, we can do MCOPY and PAY, and then, Alex, if you want to do the calls, of course. But yeah, with MCOPY, Charles, do you have an update on it, or do you just want to get a feeling from people?
Charles C: There's no update on it. I brought it up a couple of calls ago, and I think you said you wanted to give everybody time to review it, and I think people have had time to review and think about it. So I guess we should discuss if there are any reservations. I think Marius said that maybe, along with transient storage, it starts to be too complicated. My personal feeling is I don't know about that, because they affect completely separate regions of the EVM. But if anybody else has similar or other concerns, I think we should discuss those.
Tim Beiko: Danno?
Danno Ferrin: So, MCOPY is relatively easy; it's kind of like the return data copy, and there's a lot of well-worn testing path on that. So I want to hear Marius's opinion. But for PAY, I think we need to discuss it in the context of the new call series, and as to why PAY is needed: if it's just to make things cheaper, I think the new call instructions will handle that. But I don't think we have time on this call to discuss the pros and cons of PAY versus the new calls.
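For context, MCOPY (EIP-5656) is a single memory-to-memory copy instruction; its semantics are essentially a memmove within EVM memory. A minimal Python stand-in (memory expansion and gas are omitted):

```python
# MCOPY semantics sketch: copy `length` bytes within memory from `src`
# to `dst`. Materializing the source bytes first makes overlapping
# ranges behave memmove-style, as the EIP requires.
def mcopy(memory: bytearray, dst: int, src: int, length: int) -> None:
    memory[dst:dst + length] = bytes(memory[src:src + length])
```

Danno's comparison to the return-data copy holds in that both are straight byte copies with the same word-based copy cost shape, which is why the testing surface is considered well understood.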
Charles C: PAY is not just for gas. There are a number of high-profile reentrancy attacks involving sending ether that would have been easily prevented if people had a way to transfer ether without transferring execution context.
MariusVanDerWijden: So I think the PAY opcode is kind of good; it's something we should do, but it warrants a whole new look. We really need to examine its implications. Basically, it enables a new way for one contract to touch another, and this is usually where most of the bugs are. I personally don't think we have enough time in Cancun to test this with all the implications it has on everything else.
Tim Beiko: William
William Morriss: Regarding the PAY opcode: we already know that contracts can receive ether in other contexts, such as from block rewards and from SELFDESTRUCT. So the PAY opcode shouldn't introduce any new security considerations, because anything it allows is already possible.
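The property both Charles and William are pointing at is that PAY (EIP-5920) moves value without creating a call frame, so the recipient's code never runs. A hedged sketch of that semantics; the balance map and names are illustrative stand-ins for account state, not the EIP's wording:

```python
# PAY semantics sketch: transfer value with NO execution context handed
# to the recipient -- like the balance credit from SELFDESTRUCT or a
# block reward, and unlike CALL, which runs the recipient's code.
def pay(balances: dict, caller: str, to: str, value: int) -> None:
    if balances.get(caller, 0) < value:
        raise ValueError("insufficient balance")
    balances[caller] -= value
    balances[to] = balances.get(to, 0) + value  # no code runs at `to`
```

Because no code runs at the recipient, the reentrancy window that plain value-bearing CALLs open simply does not exist here, which is Charles's security argument.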
MariusVanDerWijden: It's not about security considerations on the smart contract side; it's more about how it is implemented. Basically, we would need to test this opcode in combination with everything ever, and that makes it very complex.
Tim Beiko: Okay, and we're already at time. Alex, do you want to give a quick overview of your EIPs, just so people have the context?
Alex Beregszaszi: Hmm, yeah. So it is called revamped call instructions, and it was pushed to the EIP repo only today, as EIP-7069, but the work on this actually started in January. Trying to get EOF ready, the requirement was brought up to also try to eliminate gas observability, and we designed the replacement call instructions at the time with that in mind. But then, maybe 3 EOF breakout calls ago, we realized that these instructions are actually not dependent on EOF and could just be introduced in the current EVM. There are 2 benefits to doing so. One: if you read the specification, it actually simplifies a number of rules regarding gas, and there are a number of cases where it would be much cheaper and better to use these new call instructions instead of the current ones, so existing legacy contracts that choose to do so would benefit. The second reason is that if this were introduced in an earlier hard fork, the EOF changes would be much smaller, because the only change left there would be to reject the current call instructions; the new proposed call instructions would already be there. As for what these instructions actually do: gas observability was one big thing we wanted to get rid of, so we remove the gas limit input and just rely on the 63/64 rule. We also changed the way the stipend works; it is much more simplified. There is no output buffer address, because RETURNDATACOPY and RETURNDATASIZE can be used instead. There have been a number of discussions with Solidity about how it actually uses the call instructions, and some discussions with Vyper. And the last change is the return value: it actually returns more; it returns success, revert, and failure.
When the revert feature was introduced, there was a plan to add that status to the calls, but it couldn't have been done at the time in the legacy call instructions, because contracts were depending on the behavior, and introducing it would have been a breaking change. I guess I don't really have time to go through everything, but there's one more comment I wanted to make. There's this version of the EIP which is in draft mode, but as we went through it we realized that there would be another option as well, which would mean that these call instructions would check whether the target is an actual contract. So it would do an EXTCODESIZE check, and the call would fail if there isn't a contract on the other side. Doing so would simplify the rules even further, because these call instructions would only be usable to interact with contracts. What this would mean is that something like a separate transfer or PAY instruction would be needed, in order to transfer value to EOAs without executing code. Yeah, I think that's it, in short. And ideally, something like this would be included in Cancun, and then EOF would be much more simplified for a fork afterwards.
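The stricter variant sketched above splits today's CALL into two primitives: a call that is only valid against contracts, and a plain value transfer that never executes recipient code. A minimal model of that split, with all names and the failure code chosen for illustration only:

```python
# Hedged sketch of the stricter draft variant: the new call instructions
# reject codeless targets (via an EXTCODESIZE-style check), so moving ETH
# to an EOA needs a separate PAY-style primitive. Illustrative names only.

FAILURE = 2  # assumed status code for a rejected call

def strict_call(target_code_size: int, do_call) -> int:
    """Fail immediately unless the target is an actual contract;
    otherwise execute the callee as usual."""
    if target_code_size == 0:
        return FAILURE       # codeless account: call is rejected outright
    return do_call()

def pay(balances: dict, frm: str, to: str, value: int) -> None:
    """Separate value-transfer primitive: moves ETH without ever
    executing code on the recipient's side."""
    assert balances[frm] >= value
    balances[frm] -= value
    balances[to] += value

balances = {"contract": 10, "eoa": 0}
print(strict_call(0, lambda: 0))     # 2: calling an EOA fails
pay(balances, "contract", "eoa", 4)  # but PAY still transfers value
print(balances["eoa"])               # 4
```

The design trade-off is that call rules get simpler (callees always have code), at the cost of a new instruction for the plain-transfer case.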
Tim Beiko: Thank you. Okay, we're already right past time. I don't know if people have comments on this specifically. If not, I guess what I'd suggest is: it seems like we're probably not in a spot to make final decisions about the smaller EIPs today, and we're not even in a spot where 6780 and 4844 are fully implemented in clients. So I would suggest, I don't know if we want to CFI them, but it seems like MCOPY, PAY, these call opcodes, as well as the existing precompile and 4788 EIPs, are sort of the ones we're considering. So should we move those three other ones, so 5920, 5656, and then the call opcodes, to CFI, and sort of restrict the discussion to those? And yeah, we can see in the next couple of weeks how things progress. Anyone object to that?
Stokes: That sounds fine. I don't think we should add more, and if anything, we should probably bias towards just freezing the current set. But right? Yeah.
Tim Beiko: So that's what I'm saying: we freeze the current set with those things, and everything else is de facto sort of excluded. Obviously, if there's some last-minute issue, we can always change things. But that gives us three EIPs currently in the fork, and we'd be up to, I believe, five CFI'd ones, because we have three now, adding three and removing one.
Stokes: Right. And 4844 is a big one, just so we're all aware.
Tim Beiko: Yes. Okay, we will let the client teams know. Sweet, let's wrap up here. We're already over time. Appreciate everyone sticking around, and talk to you all on the next one of these.
Pooja: Thank you.
Guillaume: Thanks, Tim. Bye.
Péter Szilágyi (karalabe): Thank you.
Ahmad Bitar: Bye.
- Guillaume
- Tim Beiko
- Péter Szilágyi (karalabe)
- Stokes
- Alex Beregszaszi
- MariusVanDerWijden
- Ben Adams
- Danno Ferrin
- Andrew Ashikhmin
- Lightclient
- Barnabas Busa
- Ben Edgington
- Gajinder
- Alexey (@flcl42)
- Ahmad Bitar
- Neville (Dedaub)