Commit

resolve merge conflicts

shyam-patel-kira committed Jul 30, 2024
2 parents 499e617 + ceeb221 commit 0e3ecf0
Showing 18 changed files with 1,116 additions and 134 deletions.
2 changes: 1 addition & 1 deletion .gitignore
@@ -1,6 +1,6 @@
**/.DS_Store
**/.vscode

.idea/
node_modules/

package-lock.json
218 changes: 109 additions & 109 deletions development-updates.md

Large diffs are not rendered by default.

6 changes: 5 additions & 1 deletion notes/AdityaGupta.md
@@ -7,10 +7,14 @@ These are my twitter and github handles:
- [Twitter](https://x.com/darex_1010)
- [Github](https://github.com/1010adigupta)

# Project Proposal
This is my project proposal for the EPF: [reth-verkle-poc](../projects/reth-verkle-poc.md)
# Weekly Updates
These are my weekly EPF updates:
- [Week 1](https://hackmd.io/G3wd3b9YT8mApG_BoH87TQ?viewR)
- [Week 2](https://hackmd.io/f45sFCcLQ32bxdKGRSCGAw?view)
- [Week 3](https://hackmd.io/@adigupta/S1_Lq4-wR)
- [Week 4](https://hackmd.io/@adigupta/rJ2y2koDR)
- [Week 5](https://hackmd.io/@adigupta/rym-4nXdR)
- [Week 6](https://hackmd.io/@adigupta/H139c34KA)
- [Week 7](https://hackmd.io/@adigupta/S1m6RhVFC)
6 changes: 5 additions & 1 deletion notes/Bastin.md
@@ -18,7 +18,11 @@ For this project the goal is to implement the needed functions for light clients

Here you can see my weekly updates as well in the [`development-updates.md`](https://github.com/eth-protocol-fellows/cohort-five/blob/main/development-updates.md):

- [Project Proposal](https://github.com/eth-protocol-fellows/cohort-five/blob/main/projects/light-client-support-in-prysm.md)
- [Week 0](https://hackmd.io/@Bastin/HJ6hOLQHC)
- [Week 1](https://hackmd.io/@Bastin/HyM3AmnrA)
- [Week 2](https://hackmd.io/@Bastin/H1JgDZLU0)
- [Week 3](https://hackmd.io/@Bastin/By8UVwlPA)
- [Week 5](https://hackmd.io/@Bastin/HyqHfO9OR)
- [Week 6](https://hackmd.io/@Bastin/Hke55_9dR)
39 changes: 37 additions & 2 deletions notes/ChiragMahaveerParmar.md
@@ -209,8 +209,43 @@ There are more modifications, but the coolest one is the modification on the loo
* metadata connections are aptly represented by the dialogues "I have seen a particular message", "I want that particular message". The actual message is not included in these.
* and the actual message is only transferred through a full message connection
* While the gossip (metadata) flooding is controlled by the peering degree, the heavy messages are controlled by the grafting and pruning strategy. The two strategies provide different controls.
* there is an additional control - fan out - publishing messages to topics you are not subscribed to - unidirectional connections to peers who are subscribed to that topic - these fanout peers are remembered on top of the existing peering degree (see the sketch below).
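
A minimal Python sketch of how I picture these three kinds of connections - mesh peers for subscribed topics, fanout peers for topics you only publish to, and metadata-only gossip for everyone else. The names are made up for illustration and are not libp2p's actual API:

```python
# Toy model of the controls described above; names are illustrative, not libp2p's API.
class ToyGossipNode:
    def __init__(self, peering_degree: int = 8):
        self.peering_degree = peering_degree   # target mesh size per subscribed topic
        self.mesh: dict[str, set[str]] = {}    # topic -> grafted peers (receive full messages)
        self.fanout: dict[str, set[str]] = {}  # topic -> peers for topics we publish to without subscribing
        self.seen: set[str] = set()            # message ids we could serve in response to an IWANT

    def publish(self, topic: str, msg_id: str, known_topic_peers: set[str]) -> dict:
        self.seen.add(msg_id)
        if topic in self.mesh:
            targets = self.mesh[topic]         # subscribed: push the full message to grafted mesh peers
        else:
            # fanout: unidirectional targets for a topic we are not subscribed to,
            # remembered on top of the normal peering degree
            self.fanout.setdefault(topic, set()).update(known_topic_peers)
            targets = self.fanout[topic]
        return {peer: ("FULL_MESSAGE", msg_id) for peer in targets}

    def gossip(self, metadata_peers: set[str]) -> dict:
        # metadata-only flooding: advertise ids ("I have seen X"); a peer that wants the
        # body answers with an IWANT and only then gets the full message
        return {peer: ("IHAVE", sorted(self.seen)) for peer in metadata_peers}
```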

### peerDAS and other DAS DHT protocols

[peerDAS](https://ethresear.ch/t/peerdas-a-simpler-das-approach-using-battle-tested-p2p-components/16541) -

* peerDAS Summary: proposes deterministic selection of samples to be custodied (a pseudo-random function of node_id, epoch, custody_size) - custody_size = CUSTODY_REQUIREMENT - presents various other parameters for the system, just like CUSTODY_REQUIREMENT - NUMBER_OF_PEERS as the minimum peering degree of the DAS overlay - deterministic selection allows other nodes to be certain which peers have which sample; if they don't have the samples they must custody, they are negatively scored - while selecting samples it is the node's responsibility to select a good mix of "honest custody nodes", "super nodes" and "high capacity nodes" - sampling is done through DO_YOU_HAVE messages on the gossipsub domain - hence even the samples are provided on the gossipsub domain - how? - after using the deterministic selection the node subscribes to the appropriate gossip channels (see the sketch below)
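
A rough sketch of what that deterministic selection could look like (toy Python; not the actual consensus-specs function, and NUMBER_OF_COLUMNS plus the hashing scheme are my assumptions). The point is that any peer who knows a node's node_id can recompute the same set and negatively score the node if it cannot serve those samples:

```python
import hashlib

NUMBER_OF_COLUMNS = 128  # assumed width of the extended data; the real parameter may differ

def custody_columns(node_id: int, epoch: int, custody_size: int) -> list[int]:
    """Toy deterministic custody selection: a pseudo-random function of
    (node_id, epoch, custody_size) that anyone can recompute."""
    columns: list[int] = []
    counter = 0
    while len(columns) < custody_size:
        digest = hashlib.sha256(
            node_id.to_bytes(32, "little")
            + epoch.to_bytes(8, "little")
            + counter.to_bytes(8, "little")
        ).digest()
        column = int.from_bytes(digest[:8], "little") % NUMBER_OF_COLUMNS
        if column not in columns:
            columns.append(column)
        counter += 1
    return sorted(columns)

# e.g. custody_columns(node_id=0x1234, epoch=42, custody_size=4) always yields the same 4 columns,
# so honest peers and scorers agree on what this node should be holding
```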

### General P2P related stuff

* [this](https://github.com/libp2p/js-libp2p/blob/main/doc/PEER_DISCOVERY.md#discovery-mechanisms) link specifies two methods of peer discovery: active and ambient peer discovery. I guess they are trying to differentiate between "actively" looking for other peers offering the same service and "passively" discovering them. I guess when an overlay network triggers the underlying DHT lookup for protocol negotiation it is "actively" discovering, whereas when the underlying DHT events happen without a trigger and the overlay network just happens to "piggyback" (not payload piggybacking), i.e. uses the untriggered connection to negotiate the protocol, then it is "passively" discovering.

### Data Availability

[Why do you require data availability checks](https://dankradfeist.de/ethereum/2019/12/20/data-availability-checks.html) - [legendary paper](https://eprint.iacr.org/2023/1079.pdf)

1. The first article is easy to read and explains what fraud proofs are, how they improve security assumptions for light nodes and why data availability checks are needed for the fraud proofs to actually deliver value to the light nodes - note: the fraud proof in the article is constructed only using state roots and transactions; however, in actuality you would require some amount of state data too, especially the data that the particular malformed transaction touches.
2. The probabilities within the first article surpass my knowledge - but I think 2^(-100) comes from a random picking event "with replacement" (see the sketch after this list)
3. The second paper is "perfetto". It is an unambiguous source of info on the topic. The below points summarize the paper
4. a DAS scheme must satisfy three properties - completeness, soundness and consistency
* retrospective note: section 3.1 describes these three really well for the context of DAS
* completeness - verifiers holding a commitment `com` and probing/sampling an encoding `C` must be able to conclude that the data is fully available.
* soundness - enough successful samples should allow for recovering some data bit string
* consistency - for a fixed but possibly malformed commitment `com`, one can recover AT MOST one unique data bit string.
* usually, completeness is a proof that the said scheme works and soundness is the statement that the scheme is "sound", as in secure and achieves what we are trying to do. for example, for an encryption scheme, proving Decrypt(k, Encrypt(k, m)) = m proves it is complete, and proving that the encrypted message cannot be decrypted without k is the soundness.
* here is what chatgpt has to say about consistency: if `com` is malformed, it should not map to multiple different datasets. Ideally, it should be clear that `com` is invalid, or it should map to only a single possible dataset if recovery is attempted.
5. sampling an encoding of the data to verify its availability existed before DAS - Proofs of Retrievability
* But PoRs assume an honest encoder, which DAS does not
* PoR constructions do not provide strong consistency (apparently, I haven't checked this myself because it is a detour not worth taking)
* PoR constructions usually have to perform a computation over the entire dataset to respond to queries from the verifier (even though the verifier is only querying samples of the data). This stops PoRs from being distributed in nature.
* There are other differences but the point is that they are different
6. Verifiable Information Dispersal (VID) is another related work of sorts.
* compared to VID, DAS doesn't require all servers holding a part of the data to interact with each other to ascertain that the encoding is right or that the data is available. DAS schemes aspire to achieve this "completeness" non-interactively
* In a DAS scheme adversaries can be adaptive, that is they can adapt and respond based on the queries made and the metadata within them (worse if they can link queries from the past as well). VID inherently considers a non-adaptive adversary.
7. There are other similar works but I'm not going to bother with them XD
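
Regarding the 2^(-100) figure from point 2, a back-of-the-envelope check (my assumption: the usual 2x erasure-coded setup where an adversary has to withhold more than half of the extended data to prevent reconstruction, so each uniform, independent sample - i.e. "with replacement" - hits an available piece with probability at most 1/2):

```python
def fooled_probability(available_fraction: float, samples: int) -> float:
    """Chance that every one of `samples` independent uniform queries happens to hit
    an available piece even though too much data is withheld to reconstruct."""
    return available_fraction ** samples

# Adversary keeps exactly half of the extended data available (just below the reconstruction threshold):
print(fooled_probability(0.5, 100))  # ~7.9e-31, i.e. 2**-100
```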

### Project Specific

24 changes: 5 additions & 19 deletions notes/Hamid.md
@@ -11,25 +11,11 @@

🚀 Privacy and Censorship resistance are my areas of interest

### 💡 Brainstorming Table

| 📌 Title | 📝 Short Description | 🔗 Related Link |
<<<<<<< HEAD
|----------|----------------------|-----------------|
| | | |

### 📚 Interesting Resource Table

| 📌 Title | 📝 Short Description | 🔗 Link |
|----------|----------------------|---------|
| | | |
=======
| ------- | ------------------- | -------------- |
| | | |

### 📚 Interesting Resource Table

| 📌 Title | 📝 Short Description | 🔗 Link |
|---------------|----------------------------------------------------------------------|------------------------|
| The Eth2 Book | A technical handbook on Ethereum’s move to proof of stake and beyond | https://eth2book.info/ |
>>>>>>> 9d6e90201dad156eb5b52aed2ec9bb4eafc642a4

## EPF Project

- [ Project Proposal: Inclusion List with Plausible Deniability ](../projects/attestation-based-inclusion-list.md)
- [ Presentation Slides](https://github.com/irnb/board/blob/main/content/Inclusion%20List%20with%20Plausible%20Deniability%20(1).pdf)

## Insightful Links:
https://eth2book.info/
3 changes: 2 additions & 1 deletion notes/RupamDey.md
@@ -15,4 +15,5 @@ I'll be posting my weekly updates on my [hackmd](https://hackmd.io/@rupam-04)
* [Week 3](https://hackmd.io/@rupam-04/Week3)
* [Week 4](https://hackmd.io/@rupam-04/Week4)
* [Week 5](https://hackmd.io/@rupam-04/Week5)
* [Week 6](https://hackmd.io/@rupam-04/Week6)
* [Week 7](https://hackmd.io/@rupam-04/Week7)
79 changes: 79 additions & 0 deletions projects/Grandine-Support-Documentation.md
@@ -0,0 +1,79 @@
# Grandine Support and Documentation

This project is focused on building out a more comprehensive support and documentation book for Grandine through research and trial and error.

## Motivation

Grandine is the newest Consensus Layer client and its documentation is on the lighter side. I think it's important to have more comprehensive documentation around Grandine and its capabilities. I also think this will help attract new users to the client, which would increase client diversity among CL clients.

## Project description

I am proposing to add more support and documentation to [Grandine's book](https://github.com/grandinetech/grandine/tree/develop/book) which will require me to become intimately familiar with Grandine through research and trial and error.

The following areas are ones I think should be addressed but this is likely to change as I begin making my way deeper into Grandine. I am taking inspiration from the Lighthouse and Prysm documentation books, so whilst not everything that they have will be applicable to Grandine, I think it is a good base to work from.
- ### Installation
- ### Eth-docker integration
- ### Validator Client
- ### Contributing/Testing

## Specification

I have already begun to make myself familiar with Grandine and the CL in general. The Grandine Discord channel is becoming active with queries around errors from other fellows as well. This will be a good way to keep track of issues others are facing, and it also fits in nicely with everybody working more collaboratively.

I believe that these solutions can be implemented through research and trial and error. Since this is all new to me, I need to personally perform these tasks so that I can learn and also understand what pain points there are for new users. For more technical topics that other fellows and users are working on, I will have to rely on them to relay to me where there are errors in the current documentation and what solutions they found to get through their blockers.

## Roadmap

**By end of July:**

- Have Installation section complete, including support for non-Linux systems

**By end of August:**

- Have Eth-Docker integration section complete

**By end of September:**

- Have Validator Client section complete

**By end of October:**

- Have Contributing/Testing section complete

**By Devcon VII:**

- Complete other sections that were identified for updates

## Possible challenges

I am still learning how to code (focusing on Rust) and building up my technical chops by learning more about the Consensus Layer and the Ethereum protocol in general. Working on this documentation for Grandine, while not as technical as other projects, will still require a great deal of technical understanding through trial and error.

One important aspect of documentation and support is going through the steps myself, so I will be doing much testing in that sense and could face challenges and roadblocks there. However, it is better for me to discover those than others who want to use Grandine!

There's the possibility of being sidetracked with other areas of the CL that impact Grandine and I may lose focus or get distracted. However, I anticipate that as the fellowship continues to progress, my technical understanding will ramp up and technical areas will become clearer and easier to understand.

## Goal of the project

As stated in Grandine's first [blog post](https://medium.com/@grandine/grandine-is-open-sourced-b1815cf0ae39), their long-term goal is to help diversify Ethereum CL clients "where every client has less than 1/3 of mainnet validators share." One solution to this problem is having the support and documentation available for anybody to digest. It should be fairly easy and simple for anybody to download Grandine and use the client.
According to [clientdiversity.org](https://clientdiversity.org/), and more specifically [Miga Labs](https://migalabs.io/), Grandine has a 0.92% market share. It is difficult to say what percentage this could increase to or how much of that could be attributed to this project, but as long as Grandine's share continues to increase, I would view it as a success.

Each of the sections I mentioned above should be fully fleshed out. Currently available sections should also contain more detailed information. I also imagine that as Grandine continues pushing new releases, more sections will likely need to be published.

## Collaborators

### Fellows

Several fellows have attended the weekly Grandine calls with Saulius and may indirectly contribute to this project as they progress with their own projects.

### Mentors

- [Saulius](https://github.com/sauliusgrigaitis)

## Resources

- [Grandine Repository](https://github.com/grandinetech/grandine)
- [Grandine Book](https://docs.grandine.io/)
- [Prysm Documentation](https://docs.prylabs.network/docs/getting-started)
- [Lighthouse Book](https://lighthouse-book.sigmaprime.io/intro.html)
- [Rust Developer Roadmap](https://roadmap.sh/rust)
- [The Rust Programming Language](https://doc.rust-lang.org/book/title-page.html)
2 changes: 2 additions & 0 deletions projects/attestation-based-inclusion-list.md
@@ -89,6 +89,8 @@ Attesters for slot n+2 check whether the transactions in the inclusion list were

## Specification 📋

- https://ethresear.ch/t/one-bit-per-attester-inclusion-lists/19797

I am working on specs in the `consensus-specs` repo, and I plan to add the `fork choice rule` and a new method in the `Engine API` to support the inclusion list.

I'll share the PR links here in the coming weeks.
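
To make the attester-side check concrete, here is a purely illustrative sketch - hypothetical names and types, not the actual consensus-specs objects or the planned Engine API method - of the "one bit per attester" idea: each attester contributes a single bit saying whether the inclusion-list transactions it saw were actually included.

```python
def inclusion_list_satisfied(il_tx_hashes: set[bytes], payload_tx_hashes: set[bytes]) -> bool:
    """One bit per attester: True iff every transaction from the inclusion list
    appears among the payload transactions observed in the allowed window."""
    return il_tx_hashes.issubset(payload_tx_hashes)

# Example: the attestation bit is set only when all inclusion-list transactions made it in.
il = {b"\x01" * 32, b"\x02" * 32}
seen = {b"\x01" * 32, b"\x02" * 32, b"\x03" * 32}
assert inclusion_list_satisfied(il, seen)
```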