Near-match to my sandbox + proposed use of "Trust Is Risk" for Sybil resilience #1

Open
veox opened this issue Nov 10, 2017 · 11 comments

Comments

veox commented Nov 10, 2017

Hi! I've held an interest in on-chain UBI systems for a while, including Circles when it first came up.

I've got a rough sketch that matches the system described in the overview quite closely. At a glance, the biggest difference is the minting unit-duration: a second instead of a minute. In other words, not much difference. :)

You can find it at gitlab.com/veox/oobiqoo. It's placeholders all over. Probably not of much interest code-wise, since I see you've already got some form of prototype.


Regarding the "fake accounts" section, I urge you to look at "Trust Is Risk" by Orfeas Stefanos Thyfronitis Litos (@OrfeasLitos) and Dionysis Zindros (@dionyziz).

The claim made in that section requires a much more rigorous demonstration than one example and one diagram. Reliability of the system at the protocol level (as opposed to the implementation level) depends on the claim being justifiable for all possible cases, and on the limitations being clearly laid out.

A stub issue in oobiqoo has a couple links (feel free to dump more there if it's deemed irrelevant here):

The GitHub repo for TIR is https://github.com/decrypto-org/TrustIsRisk.

veox changed the title from “Near-match to my sandbox + proposed use of "Trust Is Risk" for Sybil protection” to “Near-match to my sandbox + proposed use of "Trust Is Risk" for Sybil resilience” on Nov 10, 2017
@edzillion
Contributor

Thanks for the suggestions, Noel.

> The claim made in that section requires a much more rigorous demonstration than one example and one diagram. Reliability of the system at the protocol level (as opposed to the implementation level) depends on the claim being justifiable for all possible cases, and on the limitations being clearly laid out.

Agreed, we definitely need to expand this section or start a paper devoted to this issue, with various use cases and observations on each. Perhaps we can get @koeppelmann to weigh in?

apmilen (Contributor) commented Nov 14, 2017

Thanks for the TIR links, I'll review this paper soon and start to think about how it relates to Circles. Totally agree that more rigor is needed in general. This overview was just a quick thing I put together in preparation for a UBI conference last month. We're planning a longer-term effort to flesh out these ideas and validate them from multiple perspectives (i.e. both theoretical and experimental).

The current prototype indeed mints on a per-second basis. I just made it per-minute in that example for ease of explanation to newcomers. In a later revision I'll definitely fix this and add more detail.
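Just to illustrate the equivalence, here is a throwaway sketch (not the actual prototype code; the rate and names are invented for illustration):

```python
# Illustrative only -- not the Circles prototype. Rate and names are invented.
MINT_RATE_PER_SECOND = 1  # one token unit minted per account per second

def minted_since(last_claim: int, now: int) -> int:
    """Tokens accrued between the last claim and `now` (Unix timestamps)."""
    return (now - last_claim) * MINT_RATE_PER_SECOND

# The per-minute figure used in the overview is just the same rate scaled by 60:
MINT_RATE_PER_MINUTE = 60 * MINT_RATE_PER_SECOND

print(minted_since(0, 3600))   # one hour of accrual -> 3600 units
print(MINT_RATE_PER_MINUTE)    # 60 units
```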

@dionyziz

As believers in crypto-based UBIs and authors of the TIR paper, @OrfeasLitos and I would be happy to help with any questions there may be in regards to our paper and its applications to your scheme.

TomTem commented Nov 19, 2017

About the Trust Is Risk scheme: what if I only have one friend, and many links down the line there are several shops that sell the same things for the same price? The Trust Is Risk network gives me a number that tells me which shop is more trustworthy, so I buy there. Now I never receive the product, or the product is bad. No problem, I now know never to buy there again.

What I don't understand is how the 'rating' of the bad shop is affected. I could cut the trust with my only friend, because I linked to the shop through him, but then I would have no links to connect to the other good shops. And even if I had other trusted links to other shops, I would still only hurt the trust of my friend, whereas I would prefer to hurt the trust of the bad shop many links down the line. Did I misunderstand the concept? Or are these issues that still need to be solved?
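If I understand the paper correctly, that number is the maximum flow from me to the shop over the graph of direct-trust deposits. A toy example of what I mean by being stuck with my single friend (made-up graph and capacities, using networkx rather than the TIR implementation):

```python
# Toy example, invented numbers; indirect trust modelled as max flow per my
# reading of the Trust Is Risk paper. Uses networkx, not the TIR code.
import networkx as nx

G = nx.DiGraph()
G.add_edge("Alice", "Friend", capacity=10)   # Alice's only outgoing trust
G.add_edge("Friend", "ShopA", capacity=8)
G.add_edge("Friend", "ShopB", capacity=8)

for shop in ("ShopA", "ShopB"):
    value, _ = nx.maximum_flow(G, "Alice", shop)
    print(shop, "indirect trust:", value)    # 8 and 8

# Cutting my single friend edge zeroes my indirect trust to *every* shop:
G.remove_edge("Alice", "Friend")
for shop in ("ShopA", "ShopB"):
    value, _ = nx.maximum_flow(G, "Alice", shop)
    print(shop, "after the cut:", value)     # 0 and 0
```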

OrfeasLitos commented Nov 19, 2017 via email

TomTem commented Nov 19, 2017

Thx for your answer!

I used the example of having only 1 connection to show that Alice has no way to convey her bad shopping experience. But you are right, in such a case Alice should try to have more connections.

Maybe a better example to clarify my problem would be as follows: 1000 happy customers are connected to the shop; they have all made good purchases in the past. Each of these 1000 has 10 trust connections to their friends, and each of those friends has 10 connections as well. If you go down three levels, you have a million users. Alice has a direct trust connection to a few of these million users. Let's say there are 1000 users like Alice who have trust connections to some of these million users that are three hops away from the shop.

Now let's say all of these 1000 new shoppers did not receive their product. What should these 1000 unhappy users do to warn the network that the shop is no longer trustworthy? They can cut some of their friends' trust connections, but that way they also cut their indirect connections to good shops. And you only have so many friends to whom you would trust your money. If you have to cut trust for every bad shopping experience, you would quickly run out of friends…

So in the second part of your answer you say that these 1000 unhappy users should make more direct connections to merchants. That way, if they have an unhappy shopping experience, they can cut the trust with the merchant and don't need to cut trust with one of their friends.

But then I think the system might become impractical. I think it could work if you only have to risk some of your money with a few close friends and family, but if you have to start trusting every business you interact with, you would need a lot of money to put into the shared accounts.
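To make the trade-off concrete, here is the same kind of toy calculation (all capacities invented): cutting the friend also cuts my trust towards the good shop, while cutting a direct edge to the bad merchant leaves the rest untouched, but only removes the coins I had to put at risk myself in the first place:

```python
# Toy comparison of the two strategies; all numbers are made up.
import networkx as nx

def trust(G, src, dst):
    value, _ = nx.maximum_flow(G, src, dst)
    return value

G = nx.DiGraph()
G.add_edge("Alice", "Friend", capacity=10)    # indirect route via my friend
G.add_edge("Friend", "GoodShop", capacity=10)
G.add_edge("Friend", "BadShop", capacity=10)
G.add_edge("Alice", "BadShop", capacity=5)    # direct edge I funded myself

print(trust(G, "Alice", "BadShop"), trust(G, "Alice", "GoodShop"))      # 15, 10

# Option 1: cut the friend -> the good shop suffers too.
G1 = G.copy(); G1.remove_edge("Alice", "Friend")
print(trust(G1, "Alice", "BadShop"), trust(G1, "Alice", "GoodShop"))    # 5, 0

# Option 2: cut only my direct edge to the bad merchant -> good shop unaffected,
# but I only remove the 5 coins I put at risk myself.
G2 = G.copy(); G2.remove_edge("Alice", "BadShop")
print(trust(G2, "Alice", "BadShop"), trust(G2, "Alice", "GoodShop"))    # 10, 10
```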

I really like the idea, and I’m just trying to figure out how it could work in practice.

OrfeasLitos commented Nov 19, 2017 via email

TomTem commented Nov 20, 2017

It's clearer now, thanks.

Maybe you could bring the out-of-band feedback in-band as well. For example, each user like Alice could record feedback plus a rating per transaction (inside her own node/wallet). Everyone who connects through her can then use that feedback and rating.

For example, when the algorithm finds several paths to the shop to calculate the TIR numbers, it might as well collect the user ratings on those paths to provide a second number to the buyer. Because trust and satisfaction are not necessarily the same thing, having this second, separate number could be interesting. The downside is that you will only have the ratings on the paths that connect you to the shop, but the upside is that you can be certain these few ratings are genuine.

(Or maybe you could even go and collect all the ratings for the shop in the network, but that might be too expensive to calculate …)
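Roughly what I have in mind (purely a sketch; the per-edge "rating" attribute and the numbers are my own invention, not something defined in TIR):

```python
# Sketch of collecting ratings along my own trust paths; the "rating"
# edge attribute is invented for this example.
import networkx as nx

G = nx.DiGraph()
G.add_edge("Alice", "Bob", capacity=10)
G.add_edge("Bob", "Shop", capacity=8, rating=2)    # Bob had a bad purchase (1-5 scale)
G.add_edge("Alice", "Carol", capacity=6)
G.add_edge("Carol", "Shop", capacity=6, rating=5)  # Carol was happy

def ratings_along_paths(G, src, dst):
    """Collect every rating found on any simple path from src to dst."""
    found = []
    for path in nx.all_simple_paths(G, src, dst):
        for u, v in zip(path, path[1:]):
            rating = G[u][v].get("rating")
            if rating is not None:
                found.append(rating)
    return found

ratings = ratings_along_paths(G, "Alice", "Shop")
print(ratings, "average:", sum(ratings) / len(ratings))   # [2, 5] average: 3.5
```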

OrfeasLitos commented Nov 20, 2017 via email

TomTem commented Nov 21, 2017

Ah I see, the shop owner could make a fake account, get his friends to trust it, and then create a long chain of fake accounts to the shop with good ratings.

Maybe you could use weighted ratings (ratings closer to Alice count for more than ratings close to the shop)… I'll think about it some more…
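Something like this, maybe (again just a sketch; the decay factor and the idea of discounting by hop distance from Alice are my own guesses):

```python
# Sketch: discount each rater's score by their hop distance from Alice,
# so ratings near me outweigh a chain of fake accounts near the shop.
import networkx as nx

G = nx.DiGraph()
G.add_edge("Alice", "Bob", capacity=10)
G.add_edge("Bob", "Shop", capacity=8, rating=2)    # rater 1 hop from Alice
G.add_edge("Alice", "Carol", capacity=6)
G.add_edge("Carol", "Dave", capacity=6)
G.add_edge("Dave", "Shop", capacity=6, rating=5)   # rater 2 hops from Alice

def weighted_shop_rating(G, me, shop, decay=0.5):
    """Weight each rating on an edge into `shop` by decay**distance(me, rater)."""
    dist = nx.single_source_shortest_path_length(G, me)
    num = den = 0.0
    for rater in G.predecessors(shop):
        rating = G[rater][shop].get("rating")
        if rating is None or rater not in dist:
            continue
        weight = decay ** dist[rater]
        num += weight * rating
        den += weight
    return num / den if den else None

print(weighted_shop_rating(G, "Alice", "Shop"))   # 3.0 -- pulled toward Bob's nearby bad experience
```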

OrfeasLitos commented Nov 21, 2017 via email
