Hello,
Thanks for making your repository publicly available.
I am a bit confused: even after reading the whitepaper and the codebase, I am still at a loss.
I tried this example:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Initialize model and tokenizer
model_name = "nesaorg/distilbert-sentiment-encrypted"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

print("Test input 1:")
inputs = tokenizer("I feel much safer using the app now that two-factor authentication has been added", return_tensors="pt")
print(inputs)

print("Test input 2:")
inputs = tokenizer("I do not feel much safer now", return_tensors="pt")
print(inputs)
```
I am not sure I fully understand where the encryption comes in. My understanding is that the inputs to the model should be protected from the model provider. However, the tokenization above shows a one-to-one mapping between plaintext tokens and their ids (e.g. 'I' -> 21666, 'feel' -> 7721). That makes sense given how the HF tokenizer is implemented, but it implies the model provider can trivially recover the plaintext that is supposed to be private. Even if the tokenizer is kept secret from the server, a simple frequency-analysis approach can recover the token mapping.
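To make the concern concrete, here is a toy sketch of the frequency-analysis attack I have in mind. The observed sequences and reference counts are made up for illustration; only the two ids come from my example above:

```python
from collections import Counter

# Toy attack sketch: the server observes many "encrypted" token-id
# sequences and knows typical English token frequencies. Aligning the
# two frequency rankings recovers much of the secret mapping.

def recover_mapping(observed_sequences, reference_counts):
    """Guess id -> token by matching frequency ranks."""
    id_counts = Counter(tid for seq in observed_sequences for tid in seq)
    ids_by_rank = [tid for tid, _ in id_counts.most_common()]
    tokens_by_rank = [tok for tok, _ in reference_counts.most_common()]
    return dict(zip(ids_by_rank, tokens_by_rank))

# Made-up traffic: 21666 occurs four times, 7721 twice.
observed = [[21666, 7721, 21666], [21666, 7721], [21666]]
# Made-up reference frequencies for an English corpus.
reference = Counter({"i": 40, "feel": 25, "safe": 10})
print(recover_mapping(observed, reference))  # {21666: 'i', 7721: 'feel'}
```

With a realistic volume of traffic and a large reference corpus, I would expect the same rank-matching idea to scale to a large fraction of the vocabulary.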
Moreover, the model itself seems to be a standard DistilBERT architecture, just with different weights. So I am a bit confused by this example: where is the encryption actually being applied?
Hi SamG,
Thanks for your comment. Yes, this is just a demo version of the encryption. In practice the key should be rotated frequently to prevent frequency-analysis attacks. The encryption on the model side is produced by our proprietary code, which lets the model ingest the remapped tokens so that the data is not recoverable from within the model (again, provided that key rotation is managed properly); hence the different weights. I would also add that with over 30k tokens in the vocabulary, recovering the mapping by statistical analysis is not easy without observing a large volume of traffic. In addition to tokenizer rotation, we are researching other layers of security around this.
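To illustrate the rotation idea only (this is a toy sketch, not our proprietary implementation; the vocabulary size and key handling are placeholders):

```python
import random

VOCAB_SIZE = 30522  # DistilBERT's vocabulary size

def permutation_for_key(key: int) -> list[int]:
    """Derive a vocabulary permutation from the current rotation key."""
    rng = random.Random(key)
    perm = list(range(VOCAB_SIZE))
    rng.shuffle(perm)
    return perm

def remap_ids(ids: list[int], key: int) -> list[int]:
    """Map plaintext token ids to their remapped ids under this key."""
    perm = permutation_for_key(key)
    return [perm[i] for i in ids]

# The same tokens get different ids once the key rotates, so frequencies
# collected under one key do not carry over to the next.
example_ids = [1045, 2514]  # two arbitrary example ids
print(remap_ids(example_ids, key=1))
print(remap_ids(example_ids, key=2))
```

In a sketch like this, a matching remapping would also have to be applied on the model side, e.g. by permuting the embedding rows the same way, which is consistent with the published weights differing from stock DistilBERT.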