I am trying to fix an issue with my published keys getting out of sync, and I noticed that once I mark them as published, they are no longer returned via olm_account_one_time_keys (as per the documentation).
I am trying to understand: is there a reason why they cannot be queried via olm::Account::get_one_time_keys_json?
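A minimal sketch of the behaviour I'm describing, against the public C API (illustrative only; the fixed bytes stand in for a proper RNG and error handling is omitted):

```cpp
#include <olm/olm.h>

#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

int main() {
    // Set up an account (fixed bytes stand in for a secure RNG; demo only).
    std::vector<std::uint8_t> account_buf(olm_account_size());
    OlmAccount *account = olm_account(account_buf.data());
    std::vector<std::uint8_t> rnd(olm_create_account_random_length(account), 0x55);
    olm_create_account(account, rnd.data(), rnd.size());

    // Generate a handful of one-time keys.
    std::vector<std::uint8_t> otk_rnd(
        olm_account_generate_one_time_keys_random_length(account, 5), 0xAA);
    olm_account_generate_one_time_keys(account, 5, otk_rnd.data(), otk_rnd.size());

    // Before publishing: the JSON lists all 5 keys.
    std::vector<char> json(olm_account_one_time_keys_length(account));
    std::size_t n = olm_account_one_time_keys(account, json.data(), json.size());
    std::cout << std::string(json.data(), n) << "\n";

    // After marking them as published: the same call returns an empty key
    // set, even though the keys are still held (and usable) by the account.
    olm_account_mark_keys_as_published(account);
    json.resize(olm_account_one_time_keys_length(account));
    n = olm_account_one_time_keys(account, json.data(), json.size());
    std::cout << std::string(json.data(), n) << "\n";
}
```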
For context: I have an entirely different setup than Matrix does, so I have no analogue to a Matrix homeserver that manages the claim-keys process for me. I therefore have to trust the other party to tell me which OTPK they supposedly used. If they report the wrong key (maliciously or not), olm would mark the actually used key as used, but I would not replace the correct key in my public registry, so I would keep advertising a stale, unusable OTPK. That leads to a bad user experience when someone encrypts a message to such a key, and it even opens up a denial of service where someone could purposefully invalidate half of a user's OTPKs and have them replace the wrong keys in their public registry.
My concern now is to fix this issue for my existing users, some of whom are already out of sync, which is why I think exposing all valid OTPKs would be welcome.
Perhaps another approach to my issue would be to either bubble up the used OTPK from olm::session::new_inbound_session, or to expose decode_one_time_key_message() and give callers better control over what has actually been claimed. Thoughts?
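To make the first option concrete, this is the rough shape I have in mind; these names and signatures are hypothetical and do not exist in libolm today:

```cpp
/* Hypothetical additions to olm/olm.h (not part of libolm today): after a
 * successful olm_create_inbound_session(), expose the public part of the
 * one-time key the pre-key message actually claimed, so the receiver can
 * retire exactly that key from its public registry instead of trusting
 * what the sender reports out of band. */
size_t olm_session_get_claimed_one_time_key_length(OlmSession *session);
size_t olm_session_get_claimed_one_time_key(
    OlmSession *session,
    void *key,            /* out: base64 Curve25519 public key */
    size_t key_length);
```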
It may not be possible to do what you want to do with libolm as-is. I'm not sure what kind of API you need, but if you have a suggestion that won't break the existing API, then we will consider including it.
Thanks for the response. Indeed it is currently not possible with the existing API.
I had something like this in mind for querying all the local OTPKs: alinradut/olm@b0de208
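Roughly, the shape of that change (illustrative signatures only; the linked commit has the real details):

```cpp
/* Illustrative only; see the commit above for the actual change. The idea
 * is a variant of olm_account_one_time_keys() that does not filter on the
 * "published" flag, so a caller can see every one-time key the account
 * still holds and reconcile it against an external public registry. */
size_t olm_account_all_one_time_keys_length(OlmAccount *account);
size_t olm_account_all_one_time_keys(
    OlmAccount *account,
    void *one_time_keys_json,   /* same JSON shape as olm_account_one_time_keys */
    size_t one_time_keys_json_length);
```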
As for the second part, I am not sure how best to approach it either, but the fact is that, unless I am misunderstanding something, as an external user of the library I cannot tell which one-time key was actually used to set up the new incoming session; I have to trust what the sender claims they picked from my list.
This would be partly mitigated by my commit above, because I can compare my local OTPK list with my public registry list and figure out which ones I need to replace.
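For illustration, the reconciliation that would enable looks something like this (plain C++, nothing olm-specific; both sets hold base64 public keys):

```cpp
#include <set>
#include <string>
#include <vector>

struct Reconciliation {
    std::vector<std::string> stale;       // advertised but no longer held locally
    std::vector<std::string> unpublished; // held locally but not advertised
};

// Given every key the account still holds and every key currently advertised
// in the public registry, find which advertised keys must be withdrawn and
// which local keys still need to be published.
Reconciliation reconcile(const std::set<std::string> &local_keys,
                         const std::set<std::string> &registry_keys) {
    Reconciliation r;
    for (const auto &k : registry_keys)
        if (!local_keys.count(k)) r.stale.push_back(k);
    for (const auto &k : local_keys)
        if (!registry_keys.count(k)) r.unpublished.push_back(k);
    return r;
}
```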