Chainflow has operated in crypto testnets since first joining the Cosmos Validator working group in late 2017. We've operated on many testnets through the years, starting with the original incentivized testnet, Game of Stakes. We also helped design a number of these testnets in the early years, when the concept was new and teams were trying to determine the characteristics of an effective incentivized testnet. We did this for projects like NuCypher and Akash, and helped design Solana's Tour de Sol.
We'd like to now share that experience with the Story Protocol team, as a participant in the current testnet and aspiring mainnet validator. While most of the feedback applies to incentivized testnets in general, we've made an effort to adapt specifically to the ongoing and planned Story Protocol testnets, based on our current understanding of them.
What are the goals of the testnet?
This is the first question we suggest teams ask themselves when planning a testnet program. The goals often fall along a spectrum from assumed to loosely understood. We suggest a team start by defining the goals, writing them down, agreeing on them as a team, then communicating the goals clearly to the validator participants.
Goal examples include identifying the validators that will make it into the active set and/or generating visibility for the project. Regardless of the goal details, being synchronized as a team internally and with the validator set externally is critical to the successful outcome of a testnet program. Furthermore, goals can be weighted based on their importance and related priority.
Suggestion - The Story Protocol testnet program is underway and moving fast. We suggest that the team synchronize on a clear set of goals internally and communicate them to validator participants. One important goal would be to identify exactly how many validator spots Story is looking to fill to support mainnet launch.
Identify specific tasks once the goals are established. These tasks should be linked directly to accomplishing the foundational goals. It's often helpful to break these tasks into stages or phases. Points can be assigned based on weights given to each of the goals.
There are two ways to do this -
1 - Each stage or phase can have a specific set of goals. In this case tasks are discrete, finite and confined to a specific stage or phase.
2 - Goals span multiple stages or phases. In this case tasks build upon themselves over stages or phases.
Starting with the goals and working backwards gives the team confidence that what they're asking validators to do is directly linked to the testnet's desired goals. Taking pauses between phases is also very beneficial, as it helps keep the team and validators aligned throughout the testnet program.
Suggestion - We suggest the Story Protocol team choose one of the two approaches and build a spreadsheet that lists tasks by phase, identifies which end goal each task supports, and assigns points based on the associated goal's priority.
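As a sketch of the scoring logic such a spreadsheet would encode, here is a minimal, hypothetical Python example. All goal names, task names, weights, and point values are invented for illustration; they are not Story Protocol's actual goals or scores.

```python
# Hypothetical sketch: weighting testnet tasks by the priority of the
# goal each task supports. All names and numbers are illustrative only.

# Goal priorities, expressed as weights the team agrees on internally.
GOAL_WEIGHTS = {
    "identify-mainnet-validators": 3,
    "generate-visibility": 1,
}

# Each task links directly to one goal and carries a base score.
TASKS = [
    {"name": "maintain uptime through phase 1",
     "goal": "identify-mainnet-validators", "base_points": 10},
    {"name": "publish a validator setup guide",
     "goal": "generate-visibility", "base_points": 5},
]

def task_points(task):
    """Final points = base points scaled by the supporting goal's weight."""
    return task["base_points"] * GOAL_WEIGHTS[task["goal"]]

for task in TASKS:
    print(f"{task['name']}: {task_points(task)} points")
```

The value of this structure is that every point a validator can earn traces back to a weighted goal, which makes the scoring system easy to explain and audit.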
There are two sets of expectations to establish.
1 - What does Story expect from validators?
Establish the level of commitment Story expects to see here. Should validators be treating these testnets like they would a mainnet? How fast does Story expect validators to respond to announcements? How should submissions be made? Can validators confirm receipt of submissions and if yes, how?
2 - What can validators expect from Story?
Will Story be sharing a scoring or points system? What is the preferred way to ask questions? What level of responsiveness can validators expect from Story? How many validator slots are to be competed for? What will happen to validators who don't make the cut, i.e. are there token awards for participants who don't make the mainnet set?
Suggestion - Share the answers to the above questions with the validators in a blog post or a Discord announcement BEFORE the next phase of the testnet starts.
We've seen testnet programs operate under varying degrees of transparency. The most successful have maintained a consistent level of transparency throughout the program. The least successful have promised to be very transparent and ended up not being transparent at all. The middle scenario is when the level of transparency shifts throughout the testnet.
Many validators have been around for a while and have participated in many testnets. Their BS antennas are up, and they can usually identify quite quickly when they're being led on or otherwise gamed in service of the protocol. Some, or even many, may tolerate this to avoid losing out on potential rewards.
However, the establishment of goodwill goes a long way toward building a healthy network. How a testnet runs is often an indicator of a network's future success. Issues that go unresolved during testnets, be they technical or social, can do damage down the road and jeopardize the long-term success of the network.
Suggestion - Determine the level of transparency now, prior to the start of the next testnet phase. Establishing transparency around the scoring system is probably the most important suggestion we can make, namely, at the risk of being repetitive for emphasis -
1 - How are scores calculated?
2 - How are submissions made?
3 - How can a validator confirm a submission has been received and considered?
4 - Will scores be shared publicly and/or privately (individually)?
In addition to being transparent about this, consistency in adhering to the scoring system is critical, as exceptions and deviations from it can undermine trust very fast. Validators are pretty adept at identifying these exceptions, deviations and preferential treatment.
This point goes hand-in-hand with the transparency recommendation. In the early days, we encouraged teams to use a single announcements channel. Fortunately this has become common practice today.
Choose a single channel for messages you want to be sure validators see, particularly critical updates. Don't switch between a Discord announcements channel and an @all role in a separate Discord channel for announcements.
Like most people in crypto, validators are in more chat channels than they can realistically follow 24x7 and respond to immediately. However, validators do have processes in place that allow them to monitor and respond quickly to a smaller number of channels.
Use a consistent messaging format when possible, e.g. for upgrade announcements include the code link, tag/hash, target upgrade time, any instructions that may be new or different from past upgrades, as well as the window validators are expected to respond within.
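One way to keep such announcements consistent is to treat them as structured data and render them from a template. The sketch below is a hypothetical Python dataclass whose fields mirror the items listed above; the field names and format are our invention, not a Story Protocol convention.

```python
from dataclasses import dataclass

# Hypothetical structure mirroring the upgrade-announcement fields
# discussed above; names and layout are illustrative only.
@dataclass
class UpgradeAnnouncement:
    code_link: str              # link to the release or commit
    tag_or_hash: str            # exact tag/commit validators should build
    target_upgrade_time: str    # when the upgrade activates (e.g. UTC time)
    response_window_hours: int  # window validators are expected to respond within
    special_instructions: str = ""  # anything new or different from past upgrades

    def render(self) -> str:
        """Render the announcement in one consistent, predictable format."""
        lines = [
            f"Code: {self.code_link}",
            f"Tag/hash: {self.tag_or_hash}",
            f"Target upgrade time: {self.target_upgrade_time}",
            f"Respond within: {self.response_window_hours}h",
        ]
        if self.special_instructions:
            lines.append(f"Note: {self.special_instructions}")
        return "\n".join(lines)
```

Because every announcement carries the same fields in the same order, validators can scan (or even machine-parse) them instead of hunting for the tag and deadline in free-form text.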
More generally, keep the level of communication consistent when at all possible. It becomes difficult when validators have a hard time understanding -
1 - How to contact a core team member and who to contact
What does the communication flow from validator to core team look like? Are core team members open to receiving DMs? Should validators tag them? Should a moderator be contacted, who can then route the request to the right team member?
2 - What the expected response time from the core team member is
If core team members can't be expected to be available consistently, that's fine. Knowing this makes it clearer to validators that we should be getting and giving help among ourselves, rather than waiting for a response from a core team member.
Suggestion - We suggest the core team answer the above questions and post them in a comprehensive blog post announcing the next phase of testnet launch (preferred) or in a Discord #validator-announcements post.
There's a lot that goes into planning and executing a successful testnet program. The level of effort is often underestimated. Planning at the beginning goes a long way toward setting a foundation, keeping all parties synchronized and ultimately achieving the testnet program's goals.
Suggestion - The Story Protocol testnet program is in flight. We suggest the team think through and execute on the suggestions listed above, prior to initiating the next phase of the testnet.
This is my github and discord username. I am an individual validator who has participated in many testnet and mainnet validator programs, including Avalanche Subnet, Gnosis, Eigenlayer AVSs, Forta, Aptos, Sui, Celestia, Manta, Avail, Kroma, Dymension, Streamr, 0g, Powerloom, Ar.io and many more, for more than three years. I have been running a Story validator since the beginning, but I am not in the active set, as the team declared there was no need to abuse the faucet. As an experienced validator, I have some suggestions for the team. Many thanks to the Chainflow team for this opportunity.
Suggestion - As a simulation of the mainnet, what a testnet most needs to pay attention to are uptime, sustainability and decentralization. Story Protocol would therefore do well to select validators according to the time they spent on the testnet (not only those in the active set), willingness to contribute to the project, and uniqueness (which could be verified through KYC or sybil-detection techniques - this can be a key to enabling decentralization by onboarding both companies/teams and individual validators). These main features could ensure a fair and successful testnet.
- I really appreciate the initiative to involve validators like myself in the testnet. It shows a strong commitment to decentralization.
- It's great to see the community so active and interested in the project.
Clarify Testnet Goals
- From my experience, the current emphasis on accumulating test tokens to climb the rankings creates a competitive atmosphere that values quantity over quality. I feel this might not be the best approach for the network's long-term health.
Prevent Token Misuse
- I've noticed that test tokens are sometimes being misused, which seems to undermine the integrity of the testnet. Implementing measures to prevent this could help promote fairness among participants.
Promote Quality Contributions
- I'd love to see more recognition for meaningful contributions, like identifying bugs, creating detailed content, or improving documentation. Focusing solely on stake accumulation means that some validators with high stakes but low uptime end up ranking higher, which doesn't seem ideal.
Enhance Transparency
- Clear communication about the rewards for both active and inactive validators, as well as the selection criteria, would really help build trust and align everyone's expectations.
Implement a Hybrid Validator Selection Model
For Mainnet: Stake-Based Validation
I recommend that the mainnet operates on a clear stake-based model. This approach ensures that validators have a significant investment in the network's success, aligning their incentives with the health and security of the blockchain.
For Testnet: Contribution and Performance-Based Selection
During the testnet phase, it would be beneficial to focus on validators' contributions and performance rather than their stake. By evaluating validators based on their technical contributions—such as bug reports, code improvements, documentation, and maintaining high uptime—we can promote organic growth and meaningful participation.
If measuring contributions becomes challenging, a lottery-based selection could be a fair alternative to ensure all interested participants have an opportunity to contribute.
General Feedback:
I'm excited to be part of the Story Protocol testnet and believe in the project's potential. By refining the testnet's objectives and fostering an environment that values quality contributions, I think we can work together to build a robust and successful future. Thank you.