
Sybil Resistance in situation where user has a different network for each account. #4

Open
edzillion opened this issue Nov 27, 2017 · 20 comments


@edzillion

Here's a scenario:

I have two different accounts, each with a different trust network. The networks share no users at the 0th position. I would be able to spend my full issuance from each account, giving me twice the UBI.

Caveats:

  • No use of Validators on both networks, or on just one (depending on the validation design).
  • No shared connections at any position in the graph (depending on the user-search functionality).
  • No ability to pool the balances of the accounts.

Which basically means that each network must be small and little-connected (islands?).

What say you @apmilen ?
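As a toy illustration of this scenario (the account names and the `MIN_TRUSTERS` threshold are invented, not Circles' actual issuance rule), a validator that checks each account in isolation has nothing linking the two networks:

```python
# Hypothetical sketch: one person, two accounts, disjoint trust networks.
# A naive validator that only looks at each account's own trusters
# grants issuance to both, doubling the UBI.

trust = {
    "alice_work":   {"richard", "priya", "tom"},   # colleagues
    "alice_family": {"bob", "carol", "dana"},      # relatives
}

MIN_TRUSTERS = 3  # invented threshold for this sketch

def eligible_for_issuance(account: str) -> bool:
    return len(trust[account]) >= MIN_TRUSTERS

# The truster sets share no users ("0th position"), so nothing
# connects the two accounts from the validator's point of view.
assert trust["alice_work"].isdisjoint(trust["alice_family"])

total_issuance = sum(1 for a in trust if eligible_for_issuance(a))
print(total_issuance)  # 2 -> Alice collects issuance twice
```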

@nkoren

nkoren commented Dec 11, 2017

Yes, I'd love to see an actually decentralised UBI coin work, but I feel like this attack vector might undo the concept.

  • Alice creates AliceCoin01 which is trusted by her colleagues at work.
  • Alice creates AliceCoin02 which is trusted by her family.
  • Alice creates AliceCoin03 which is trusted by friends at the community centre.
  • Etc...

None of these parties would have any reason to believe that Alice is being less than genuine. Even if these networks connect at a small distance from Alice, how could the attack be detected? Presumably the coinage ID is just a hash, rather than something that is biometrically or legally identifiable. So if Alice's colleague Richard trusts AliceCoin01, and Alice's uncle Bob trusts AliceCoin02, and Richard and Bob trade with each other, what indication would they have that AliceCoin01 and AliceCoin02 are being generated by the same person?

I'd love it if there were a purely-decentralised, web-of-trust way to prevent something like this, but so far I've not been able to think of one. I suspect that the validation process may need to be considerably more stringent, and bundled with the coin generation process. Eg:

  • Alice provides proof of individual identity to Validator service, showing proof of legal name, DoB, passport/driver's license/national identity number, etc.
  • The Validator issues a signed certificate consisting of a non-reversible hash of the personally identifiable data. This is attached to every coin that is issued, making them spendable.
  • The Validator agrees to trust other validation services so long as their processes are sufficiently rigorous that, if the same person approached them for validation, they would require the same documentation, producing an identical hash. In this way, validation can be federated, but not fully decentralised.
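A minimal sketch of the certificate idea, assuming validators normalize identity fields the same way (the field names and `|`-join normalization here are invented): the same person then always produces the same non-reversible hash, so federated validators can spot duplicates without exchanging raw personal data. Note that a plain hash of low-entropy fields like these is brute-forceable in practice; this only shows the mechanics:

```python
import hashlib

def identity_hash(legal_name: str, dob: str, id_number: str) -> str:
    """Non-reversible digest of normalized identity fields.

    Two validators that normalize the same way produce the same hash
    for the same person, without sharing the underlying data.
    """
    normalized = "|".join(s.strip().lower() for s in (legal_name, dob, id_number))
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# The same person approaching two federated validators:
h1 = identity_hash("Alice Jones", "1985-02-11", "P1234567")
h2 = identity_hash("  alice jones ", "1985-02-11", "P1234567")
assert h1 == h2  # identical hash despite formatting differences
```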

But like I say -- if there's a more decentralised way to solve this problem, then let's do it!

@adamstallard

You will need a good validator, preferably one that's transparent, decentralized, encompasses all the different facets of Alice (AliceCoin01, AliceCoin02, AliceCoin03), and is good at protecting against sybils.

Alice provides proof of individual identity to Validator service, showing proof of legal name, DoB, passport/driver's license/national identity number, etc.

This type of centralized validator service relying on IDs or other legal info is going to be pretty weak compared to a good blockchain solution. Sure, it might work that one time for Alice, but what if Bob from Acme Validation Service stays late at work one night and creates 10,000 sybils for himself?

What about a decentralized system that verifies personal uniqueness using a social graph? Combine it with a social network. I think social network users would be the most likely to catch Alice--"wait a minute, isn't this the same Alice that Bob knows?" There are also automated sybil detection systems running on the decentralized nodes that host the graph.

My goal is to work on the development of both brightside and circles.

If any of this makes sense to you, I encourage you to come by the decstack projects team and join the brightside channel. Or post an issue somewhere on Brightside so I can find you again.

@adamstallard

Which basically means that each network must be small and little-connected (islands?).

By the way, little-connected islands is exactly what the automated anti-sybil routines are looking for when they analyze the social graph.
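One concrete signal such routines can look for is a low-conductance cut: a region with few edges leaving it relative to its internal edge volume. A toy version (plain adjacency sets; real sybil-detection systems such as SybilInfer use much more machinery than this single metric):

```python
def conductance(adj: dict, region: set) -> float:
    """Edges leaving `region` divided by the region's edge volume.
    Low values indicate a 'little-connected island'."""
    boundary = sum(1 for u in region for v in adj[u] if v not in region)
    volume = sum(len(adj[u]) for u in region)
    return boundary / volume if volume else 0.0

# A triangle joined to a second triangle by a single edge:
adj = {
    "a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "x"},
    "x": {"c", "y", "z"}, "y": {"x", "z"}, "z": {"x", "y"},
}
island = {"x", "y", "z"}
print(round(conductance(adj, island), 3))  # 0.143 -> only 1 of 7 edge-ends leaves
```

The more weakly a region is attached to the rest of the graph, the closer this ratio gets to zero, which is exactly the "island" shape described above.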

@nkoren

nkoren commented Dec 30, 2017

What about a decentralized system that verifies personal uniqueness using a social graph? Combine it with a social network. I think social network users would be the most likely to catch Alice--"wait a minute, isn't this the same Alice that Bob knows?"

If the coin is visibly and necessarily tied to a social network identity, then yes I could see that helping quite a bit. But not everybody uses or wants to use social networks, so I don't think that's a sufficiently universal approach.

By the way, little-connected islands is exactly what the automated anti-sybil routines are looking for when they analyze the social graph.

In the scenario where somebody is creating fake accounts that validate each other, then sure. It's important to note that in my scenario above, each account is connected to an entirely legitimate network, with people Alice knows face-to-face. If Alice lives in a big city or does a lot of international business or some such, then -- purely in terms of network topology -- each of her coins could easily be better-connected than some of her rural relatives could manage.

Coins tied to a compulsory social network could indeed help with this; Alice's work colleagues could then start to query why none of her family members have validated AliceCoin01, etc. This would make it much easier to get caught. But from a privacy perspective, would that not be rather worse than a state-ID-backed Validator?

@adamstallard

adamstallard commented Dec 30, 2017

Coins tied to a compulsory social network could indeed help with this; Alice's work colleagues could then start to query why none of her family members have validated AliceCoin01, etc. This would make it much easier to get caught. But from a privacy perspective, would that not be rather worse than a state-ID-backed Validator?

I was thinking of a decentralized social graph where the nodes that form consensus only have a social graph connected by public keys, and no identifying information.

If the coin is visibly and necessarily tied to a social network identity, then yes I could see that helping quite a bit. But not everybody uses or wants to use social networks, so I don't think that's a sufficiently universal approach.

What I'm thinking of here is not something like facebook, but rather a utility that cooperates with applications--one of them being circles--to manage user connections based on interactions that occur in those apps. By cooperating with other apps: rating systems, voting systems, meetup-type apps, gift economies, etc., the connections made by each could be shared and build a more extensive (and therefore valuable) graph. This is what I want to do with Brightside.

I see your point about not everyone wanting to use social network XYZ, but if it was more of a transparent connection manager that interfaced with circles, and it provided an increased value for your currency as you made more connections, it could be very useful. Indeed, a large, decentralized social graph provider that protected against sybils would probably be an ideal validator for people who wanted a fairly easy way to ramp up the value of their personal currency.

@wkampmann

wkampmann commented Dec 31, 2017

Something like this, @adamstallard ? Compare the network topology of multiple apps and calculate connections overlap?

[image: network-compare]

I guess you could take the transaction dynamics into account, not just the static topology. E.g. verify that (x, a) activity on the payment network coincided with or consistently preceded or followed (z, a) activity in the meet4beers network.
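A toy version of the cross-app comparison (app names and data invented): compare a user's neighbour sets in two apps with Jaccard similarity. A genuine person tends to show some overlap, while sybil accounts kept on separate networks show none:

```python
def jaccard(s1: set, s2: set) -> float:
    """Overlap between two neighbour sets (1.0 = identical, 0.0 = disjoint)."""
    if not s1 and not s2:
        return 0.0
    return len(s1 & s2) / len(s1 | s2)

# Neighbour sets of the same public key in two cooperating apps:
payments   = {"alice": {"bob", "carol", "dave"}}
meet4beers = {"alice": {"bob", "carol", "erin"}}

overlap = jaccard(payments["alice"], meet4beers["alice"])
print(round(overlap, 2))  # 0.5 -> shared {bob, carol} out of four neighbours
```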

I would be interested to see how something like this would work in the real world.

The amount of data necessary to make this reliable would still be a major privacy concern, however, because the data ultimately has to be traceable back to the app accounts for this to be useful. The fact that the data is stored in a decentralized fashion, out in the open, does not exactly help in this case, or does it?

@adamstallard

@wkampmann That's an interesting approach I hadn't thought of. I was thinking that each application would use a common social graph provider. You could share your ID (public key) with an application to allow it to query the common social graph to check your uniqueness score.

You still have to be careful about what information you share with an application (or personal contact) once you've shared your ID (public key) with it, because that information could be leaked and then permanently associated with your ID. You could remain anonymous and/or keep your info private, but that would mean never sharing these with an app or contact that knows your ID.

@wkampmann

What exactly does the uniqueness score represent, and why is it a good metric for "non-sybilness"?

@adamstallard

@wkampmann It's the likelihood that a vertex in the social graph represents a unique person; it's obtained by analyzing the social graph, e.g. using SybilInfer.

@wkampmann

wkampmann commented Jan 1, 2018

@adamstallard: If I understand correctly, SybilInfer is ultimately based on the assumption that sybil accounts have a harder time creating new connections to honest nodes than honest nodes do, on average. They are therefore "slow(er)-mixing" with the rest of the network, and this statistical difference can be detected and used as a sybil flag.

However, wouldn't this leave the algorithm quite vulnerable to mistaking other forms of connection friction for a sybil attack? The algorithm cannot know the difference between friction due to being a suspicious sybil, vs. being a dislikable individual having a hard time connecting with a network, vs. being an individual who is discriminated against by the network.
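The slow-mixing intuition can be sketched with short random walks (toy graph, not SybilInfer's actual Bayesian inference): walks started inside a region attached to the rest of the graph by a single edge mostly fail to escape it within a few steps:

```python
import random

def escape_fraction(adj, region, steps=5, trials=2000, seed=42):
    """Fraction of random walks started in `region` that end outside it."""
    rng = random.Random(seed)
    escapes = 0
    for _ in range(trials):
        node = rng.choice(sorted(region))
        for _ in range(steps):
            node = rng.choice(sorted(adj[node]))
        if node not in region:
            escapes += 1
    return escapes / trials

# Honest triangle {a,b,c} attached to a second region {x,y,z} by one edge:
adj = {
    "a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "x"},
    "x": {"c", "y", "z"}, "y": {"x", "z"}, "z": {"x", "y"},
}
sybils = {"x", "y", "z"}
print(escape_fraction(adj, sybils) < 0.5)  # True: walks mostly stay trapped
```

In a fast-mixing graph a short walk forgets its starting region; a walk trapped behind a narrow cut is the statistical fingerprint being described here.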

The authors themselves note that the use case described in this thread will either be flagged as a false positive, or the tolerance threshold must be raised to the point where the network is again vulnerable to sybil attacks:

While in theory fast mixing networks should not exhibit any small cuts, or regions of abnormally low conductance, in practice they do. This is especially true for regions with new users that have not had the chance to connect to many others, as well as social networks that only contain users with particular characteristics (like interest, locality, or administrative groups.) Those regions yield, even in the honest case, sample cuts that have the potential to be mistaken as attacks. This effect forces us to consider a threshold EXX under which we consider cuts to be simply false positives. In turn this makes the guarantees of schemes weaker in practice than in theory, since the adversary can introduce Sybils into a region undetected, as long as the set threshold EXX is not exceeded.

They also note that their algorithm will not detect compromised nodes, for the same reason: a compromised node grew up as a normal human's account, resulting in a statistically natural network. Only later did it become infected with a robot virus.

I see your point now, that a more complete network would give better results.

@adamstallard

The algorithm cannot know the difference between friction due to being a suspicious sybil, vs. being a dislikable individual having a hard time connecting with a network

Yes. In the case of something important like UBI, I would want to make an effort to reach these legitimate people who are having trouble making the right kinds of connections to increase their uniqueness score and get the full value for their personal currency.

They also note that their algorithm will not detect compromised nodes, for the same reason: a compromised node grew up as a normal human's account, resulting in a statistically natural network. Only later did it become infected with a robot virus.

If your account is compromised and neither you nor any of your contacts notice, that's a problem. To help with this, we're encouraging people to review their contacts periodically and remove those with whom they're no longer in contact. If you lose your key or feel it's been compromised, you can create a new one and start reconnecting with the same people you had as contacts before. If enough of them validate you, you can replace your old ID (key).

But I'm not sure I would ultimately put more trust in

One really nice thing about circles is you will have your choice of validators--and no one says you are limited to just one. I find the idea of a decentralized, transparent validator appealing, which is why I'm working on one. The sybil detection won't be perfect--it may be better to err on the side of being strict and having some false positives, with the understanding that people would have other validation options. If our system isn't working well for some people--e.g. people in "islands" (geographic or otherwise) where the social graph isn't well connected--a good strategy for them would be to use other validators while continuing to help grow our social graph.

@wkampmann

One really nice thing about circles is you will have your choice of validators--and no one says you are limited to just one. I find the idea of a decentralized, transparent validator appealing, which is why I'm working on one. The sybil detection won't be perfect--it may be better to err on the side of being strict and having some false positives, with the understanding that people would have other validation options. If our system isn't working well for some people--e.g. people in "islands" (geographic or otherwise) where the social graph isn't well connected--a good strategy for them would be to use other validators while continuing to help grow our social graph.

Yeah. But that would mean that you give up on detecting sybils across islands, or not? If the solution to my being banned as a sybil simply is to switch validators, does that mean that I can create one account per validator, respecting each validator's rules, and receive a UBI per validator? I can use a separate validator for each of my unconnected networks: one for my colleagues, one for my friends, one for my family. I receive three times UBI.

@adamstallard

Yeah. But that would mean that you give up on detecting sybils across islands, or not? If the solution to my being banned as a sybil simply is to switch validators, does that mean that I can create one account per validator, respecting each validator's rules, and receive a UBI per validator? I can use a separate validator for each of my unconnected networks: one for my colleagues, one for my friends, one for my family. I receive three times UBI.

You have a good point. If there's no overlap between validators (i.e. they represent true "islands") then losing access to one validator is a complete loss; you can't make up the lost trust by connecting to any number of other validators. But if there is overlap then someone could create a different account to use with each validator and get more total "trust" than they would by using the same account for each.

So saying "let's be strict about identifying sybils because it's not a problem if there's a false positive--they'll just use another validator" doesn't sound like a good solution after all.

It's actually hard to see how multiple validators could coexist without being abused unless they share information. I think a federation like @nkoren was talking about--where validators can cross-validate their users--might be the inevitable result.
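A toy sketch of the cross-validation idea (the digest-exchange mechanism is an assumption, not anything specified in this thread): if federated validators publish non-reversible digests of normalized identity records, intersecting their registries reveals a person enrolled with more than one validator without exposing raw data:

```python
import hashlib

def digest(identity: str) -> str:
    # Non-reversible fingerprint of a normalized identity record.
    return hashlib.sha256(identity.strip().lower().encode("utf-8")).hexdigest()

# Each validator publishes only the digests of the people it verified:
validator_a = {digest("Alice Jones|1985-02-11"), digest("Bob Ng|1990-07-03")}
validator_b = {digest("alice jones|1985-02-11"), digest("Carol Wu|1978-12-30")}

# Cross-validation: intersecting the registries shows that one person
# enrolled with both validators, without exposing anyone's raw data.
duplicates = validator_a & validator_b
print(len(duplicates))  # 1 -> Alice appears in both registries
```

This is the sense in which validators "share information" while still keeping personal data out of the shared layer.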

@fermmm

fermmm commented Apr 26, 2018

Stop wasting your energy. The only way I know to stop a sybil attack is a system where everybody has to meet with random people at the same time once a month; because nobody can be in two different places at the same time, accounts get validated. A system like this is difficult to use, but it could work.

@cyphunk

cyphunk commented Apr 26, 2018

@fermmm could you edit your post? Just take a bit more time to include us in on some insights you may have. It's just brash is all.

@fermmm

fermmm commented Apr 27, 2018

Another solution to the multiple-accounts problem, which could serve as an additional security layer, is the one mentioned in this paper:
https://github.com/UBIC-repo/Whitepaper

@adamstallard

adamstallard commented Apr 29, 2018

@fermmm

There's merit to the idea of using physical proximity to ensure that each individual exists only once in a system.

If you think about your second example, UBIC, which uses ePassports: ultimately, proof-of-citizenship is traced back to witnesses of your birth. There are shortcomings, of course: each ePassport is issued by a single country; governments could abuse the system (agents could create duplicate IDs for themselves); it requires you to let the government gather information about you; and it's still possible to hold multiple ePassports under some circumstances. ePassports are a pretty good solution to keeping out sybils, but they aren't very widely used yet, and even for those who can get them, they are expensive to obtain. Verification, though, is very cheap, so there's that advantage.

I invite you to check out BrightID (formerly Brightside)--if you haven't already--which uses people you know to verify your uniqueness.

@fermmm

fermmm commented Apr 30, 2018

@adamstallard

From BrightID white paper:

These requirements have the effect of forcing sybils, sybil creators, and their collaborators into groups with each other (since honest individuals won’t connect with sybils).

Why? Reading the entire BrightID white paper, I failed to understand why honest individuals won't connect with sybils. Circles also assumes that honest and sybil users won't connect with each other, and I don't understand why. Because validators are going to do a good job? Will they be invulnerable? How? This is not specified in the white paper.

@fermmm

fermmm commented May 3, 2018

I edited my issue with everything I could gather related to the validators; see it here: #8

@andrewzhurov

There's recent work by E. Shapiro in which he describes a way to build a Grassroots Digital Economy backed by Grassroots Cryptocurrencies. It's the same idea of people having their own currency, with the broader scope of allowing any economic activity with it (providing services and goods priced in your coins).

He faced the same issue of how to deal with sybils.
Interestingly, the answer is that sybils are repelled by people, since people in that system invest in each other when they establish trust by opening a mutual credit line.
"Money is the most universal and effective system of mutual trust ever devised."
A would-be sybil would think twice, as doing so would hurt those who trust them.
In our example, Alice would be incentivized to show her family and colleagues the same Alice ID; otherwise she would hurt them.

More on that in the paper's section "Mutual Credit Lines are a Sybil-Repellent".
Also, there's a great presentation, where Udi gave a nice long answer to the question at 51:00.

Keen to hear your thoughts on whether this strategy could be applied to CirclesUBI.

Kudos for all your effort in making UBI a reality, guys. 💪 Thanks to it we're getting there.
