-
**Benchmarking Contract operations at the SwingSet VM Level**

One somewhat mature set of tools for benchmarking smart contract operations runs within a SwingSet VM, but not using the cosmos-sdk consensus layer. There's a somewhat extended write-up:
-
**end-to-end testing with transactions and queries**

The ultimate test is to run an actual blockchain, submit transactions, and run queries. For example, we have a shell script that tests minting a KREAd character:

```sh
# write the generated offer to a temp file
KREAD_ITEM_OFFER=$(mktemp -t kreadItem.XXX)
node ./generate-kread-item-request.mjs > $KREAD_ITEM_OFFER

# submit the offer and wait for it to be satisfied
agops perf satisfaction --from $GOV1ADDR --executeOffer $KREAD_ITEM_OFFER --keyring-backend=test

# query the wallet's published state and check that the purse holds the new character
agd query vstorage data published.wallet.$GOV1ADDR.current -o json >& gov1.out
name=`jq '.value | fromjson | .values[2] | fromjson | .body[1:] | fromjson | .purses[1].balance.value.payload[0][0].name ' gov1.out`
test_val $name \"ephemeral_Ace\" "found KREAd character"
```
Before KREAd went into production, we generated various levels of load using scripts like that and looked at various metrics. I think we ran them not just in an a3p context, but on an actual multi-validator-node test network in a Kubernetes cluster, and we looked at the output using tools such as Datadog. @toliaqat is there any more detail to share in this context?
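To generate load with a script like the one above, even a plain loop that re-submits the same offer and times each round trip can be informative. Here's a minimal sketch (assuming the offer file and address come in via the `KREAD_ITEM_OFFER` and `GOV1ADDR` environment variables; none of this is from the thread):

```js
// Hypothetical load-generation sketch; not from the discussion above.
// Submits the same KREAd offer repeatedly via agops and records wall-clock latency.
import { execFile } from 'node:child_process';
import { promisify } from 'node:util';

const run = promisify(execFile);
const { KREAD_ITEM_OFFER, GOV1ADDR } = process.env;

const latencies = [];
for (let i = 0; i < 50; i += 1) {
  const t0 = Date.now();
  await run('agops', [
    'perf',
    'satisfaction',
    '--from',
    GOV1ADDR,
    '--executeOffer',
    KREAD_ITEM_OFFER,
    '--keyring-backend=test',
  ]);
  latencies.push(Date.now() - t0);
}

latencies.sort((a, b) => a - b);
console.log('median offer latency (ms):', latencies[Math.floor(latencies.length / 2)]);
```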
-
**end-to-end testing APIs**

There are a few other tools for submitting transactions and making queries in the @agoric/synthetic-chain package. See also: ... including recent

```js
const wdUser1 = await provisionSmartWallet(agoricAddr, {
  BLD: 100_000n,
  IST: 100_000n,
});
t.log(`provisioning agoric smart wallet for ${agoricAddr}`);

const doOffer = makeDoOffer(wdUser1);

const brands = await vstorageClient.queryData(
  'published.agoricNames.brand',
);
```

-- https://github.com/Agoric/agoric-sdk/blob/master/multichain-testing/test/send-anywhere.test.ts
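For context on how `doOffer` is then used: the offer spec follows the same `invitationSpec` pattern shown later in this thread and in the linked send-anywhere test. A rough sketch, with the contract name, offer args, and amount as made-up placeholders:

```js
// Sketch only: instancePath, callPipe, offerArgs, and the amount are illustrative placeholders.
const { IST } = Object.fromEntries(brands); // queryData returns [name, brand] entries
await doOffer({
  id: `offer-${Date.now()}`,
  invitationSpec: {
    source: 'agoricContract',
    instancePath: ['sendAnywhere'],
    callPipe: [['makeSendInvitation']],
  },
  offerArgs: { destAddr: 'cosmos1...', chainName: 'cosmoshub' },
  proposal: {
    give: { Send: { brand: IST, value: 10_000_000n } },
  },
});
```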
-
Attn @gibson042
-
Here's a goal from internal discussion. I don't expect that this is self-explanatory, but I'll go ahead and share it...
> we should learn how many computrons are spent to do one of those offers, to guess what our 65Mc limit will do to the scheduling. Once we deploy to a multi-node net, we should see how it compares against the actual wallclock time needed by those validators.

p.s. 65Mc = 65,000,000 computrons. SwingSet has a run policy... cosmic-swingset uses a limit on computrons per block to make such a policy.

(screenshot of swingset params omitted)
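For reference, a SwingSet run policy is an object the kernel consults after each crank to decide whether to keep running work. Below is my approximation of a computron-limited policy in the spirit of what cosmic-swingset does per block; the method names follow SwingSet's run-policy interface, but treat the details as a sketch, not the actual cosmic-swingset source.

```js
// Approximate sketch of a computron-limited run policy (illustrative, not the real cosmic-swingset code).
const computronLimitPolicy = limit => {
  let total = 0n;
  const shouldContinue = () => total < limit;
  // harden is the SES global used throughout agoric-sdk code
  return harden({
    vatCreated: () => shouldContinue(),
    crankComplete: details => {
      // each delivery reports how many computrons it consumed
      total += details.computrons || 0n;
      return shouldContinue();
    },
    crankFailed: () => shouldContinue(),
    emptyCrank: () => shouldContinue(),
  });
};

// e.g. stop scheduling more deliveries in a block after ~65M computrons
const policy = computronLimitPolicy(65_000_000n);
// await controller.run(policy);
```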
-
Note @tgrecojs reports some interesting contract performance testing results for the Tribbles Airdrop contract on a local chain.
-
As discussed in today's office hours...

```js
await agdWalletUtils.broadcastBridgeAction(GOV1ADDR, {
  method: 'executeOffer',
  offer: {
    id: 'request-stake',
    invitationSpec: {
      source: 'agoricContract',
      instancePath: ['stakeBld'],
      callPipe: [['makeStakeBldInvitation']],
    },
    proposal: {
      give: {
        In: { brand: BLDBrand, value: 10n },
      },
    },
  },
});
```

then something like...

```js
const istBalanceAfter = await retryUntilCondition(
  async () => getBalances([GOV1ADDR], 'uist'),
  // We only check that gov1's IST balance has changed.
  // Because the reclaimed amount (0.015 IST) is less than the execution fee (0.2 IST),
  // we might end up with less IST than before reclaiming the stuck payment.
  istBalance => istBalance !== istBalanceBefore,
  'tryExitOffer did not end up changing the IST balance',
  { log: t.log, setTimeout, retryIntervalMs: 5000, maxRetries: 15 },
);
```

see also
-
In qualifying Fast USDC for release, we (@turadg, @mhofman, and co) came up with some guidelines to recommend as well as some techniques for addressing them. We distinguished between "resource consumption" testing and "divergence" testing. The latter (e.g. #10940 for Fast USDC) needs coverage, whereas the former (#10890) needs realistic production loads.

**Resource consumption testing**

While this often goes by "perf" testing, MN2 qualification is less about user-perceived latency and throughput and more about impact on the long-term health of the chain, including leaked resources as well as consuming too much compute.

**Structure**

**Per iteration**

**Divergence testing**

Just needs coverage to exercise all the code paths and confirm that no divergence happens (with >1 validator, so a3p and the current multichain-testing are insufficient). The report should describe the coverage and show that no divergence occurred.
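Purely as an illustration of the "per iteration" idea (this is not the actual checklist from the Fast USDC qualification, and every function name below is a placeholder), a resource-consumption run typically repeats the same user flow many times and records chain-health indicators after each pass, looking for growth that doesn't plateau:

```js
// Hypothetical shape of a resource-consumption loop; the flow driver and metric
// collectors are placeholders, not real APIs.
const ITERATIONS = 500;
const samples = [];

for (let i = 0; i < ITERATIONS; i += 1) {
  await driveOneUserFlow(); // placeholder: submit and settle one offer
  samples.push({
    iteration: i,
    vstorageKeys: await countVstorageKeys(), // placeholder collector
    heapBytes: await sampleVatHeapSize(), // placeholder collector
  });
}

// A leak shows up as roughly monotonic growth; a healthy contract plateaus.
const mid = samples[Math.floor(ITERATIONS / 2)];
const last = samples[ITERATIONS - 1];
console.log('vstorage growth over 2nd half:', last.vstorageKeys - mid.vstorageKeys);
console.log('heap growth over 2nd half:', last.heapBytes - mid.heapBytes);
```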
-
It's required by the checklist:
Any suggestions on how to do it? Are there any available tools?
cc @tgrecojs