
Add position to OwnedPositionIdsResponse #4837

Closed · wants to merge 1 commit

Conversation

@grod220 (Contributor) commented Aug 27, 2024

Describe your changes

To support limit orders in the web, we need the ability for the view service to return position info to a client. At the moment, it only returns an id. Augmenting the response with the full position will give users a better view and also provide the necessary inputs to take action on a position.

Checklist before requesting a review

  • If this code contains consensus-breaking changes, I have added the "consensus-breaking" label. Otherwise, I declare my belief that there are not consensus-breaking changes, for the following reason:

Only changes the view RPC return response

let position_id_bytes: Vec<u8> = row.get("position_id")?;
let id = position::Id(position_id_bytes.as_slice().try_into()?);

let position_state_str: String = row.get("position_state")?;
@grod220 (Contributor, Author):

Getting errors that position_state isn't in the database 🤔, but it's here:

position_state TEXT NOT NULL,

Contributor:

did you do a view reset or otherwise regenerate the DB schema? 🤔
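For reference, a hedged sketch of the reset flow being suggested, assuming the standard pcli view reset subcommand and the same --home directory used elsewhere in this thread:

# Wipe the local view database so the schema is regenerated on the next sync,
# then retry the failing subcommand against the fresh schema.
cargo run --release --bin pcli -- --home ~/.local/share/pcli-localhost/ view reset
cargo run --release --bin pcli -- --home ~/.local/share/pcli-localhost/ tx position close-all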

Member:

Weirdly, this seems to only fail in smoke tests.

Contributor:

Weirdly, this seems to only fail in smoke tests.

Not just in smoke tests: I can replicate the smoke test failure locally, yes, but I can also run the same cmd locally against a devnet and observe the same failure:

❯ cargo run --release --bin pcli -- --home ~/.local/share/pcli-localhost/ tx position close-all
warning: /home/conor/src/penumbra/crates/bin/pindexer/Cargo.toml: `default-features` is ignored for penumbra-app, since `default-features` was not specified for `workspace.dependencies.penumbra-app`, this could become a hard error in the future
    Finished release [optimized] target(s) in 0.28s
     Running `target/release/pcli --home /home/conor/.local/share/pcli-localhost/ tx position close-all`
Scanning blocks from last sync height 628 to latest height 628
[0s] ██████████████████████████████████████████████████       0/0       0/s ETA: 0s
Error: status: Unavailable, message: "error getting positions: Invalid column name: position_state", details: [], metadata: MetadataMap { headers: {"content-type": "application/grpc"} }

Contributor:

It's at least the close-all subcommand that's affected. Currently there are a lot of calls to both owned_positions and owned_position_ids, and we'd have to sort out all the different ones to get the tests passing again. The smoke test failures are correct, though, in that they accurately indicate breakage of the tested subcommands.

Comment on lines +1724 to +1725
// TODO: DB currently does not save full position. Hence, default backups are used below.
// We should save the full position in the db to be able to provide the full correct data.
@grod220 (Contributor, Author) commented Aug 27, 2024:

I understand that relying on defaults may not be an acceptable change, in which case we would want to update the database schema + sync logic (hard to tell how much work that is, though).

Contributor:

Thinking through the requirements here: the existing Position data is recorded in record_position in storage.rs, which takes a Position but only stores certain fields from it. We could refactor the table to store the whole Position object, either as a blob or decomposed into new columns. However, this would not be a complete solution, since the reserves wouldn't be updated in update_position, and the refactor would need to ensure that the stored position data is correctly updated with any changes. Seems doable, but with some care and thoroughness to account for all the places where position reserves may change.
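A minimal sketch of what the blob approach could look like, assuming the view storage keeps using rusqlite and that the domain Position type can be round-tripped through its protobuf encoding; the position_blob column, table layout, and helper names are illustrative, not the actual schema:

use anyhow::Result;
use penumbra_dex::lp::position::Position;
use penumbra_proto::DomainType;
use rusqlite::{params, Connection};

// Hypothetical sketch: persist the full Position as an encoded blob so the
// view service can later return it alongside the id. The position_blob
// column (and a unique index on position_id) are assumptions, not the
// current schema.
fn record_full_position(conn: &Connection, position: &Position) -> Result<()> {
    let id_bytes = position.id().0.to_vec();
    let blob = position.encode_to_vec(); // protobuf encoding of the domain type
    conn.execute(
        "INSERT INTO positions (position_id, position_blob) VALUES (?1, ?2)
         ON CONFLICT (position_id) DO UPDATE SET position_blob = excluded.position_blob",
        params![id_bytes, blob],
    )?;
    Ok(())
}

// Every code path that mutates reserves (update_position and friends) would
// need to re-encode and overwrite the blob, not just the scalar columns.
fn update_full_position(conn: &Connection, position: &Position) -> Result<()> {
    record_full_position(conn, position)
}

The same caveat applies if the Position were decomposed into columns instead of a blob: every reserve change would have to rewrite those columns as well.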

@zbuc (Member) commented Aug 29, 2024:

The tricky part here is that there's currently nothing that comes across in sync that would tell you when a position's reserves changed.

The three approaches that come to mind are:

  1. adding a list of all touched positions during a block to the compact block

This seems like it runs the danger of bloating the compact block with data that is not relevant to nearly every client syncing, so I think it should be excluded from consideration.

  2. adding an additional step to the sync to call against APIs to fetch current position data

This is a little bad because we can't ensure that the block height of the position data is perfectly consistent with the sync block height, but I think it might be fine.

However, this represents a significant shift in view server data, which, to my understanding, is currently built entirely on compact block data. Do we want to head down the path of enriching it with data from other RPCs? It looks like this line has already been crossed: https://github.com/penumbra-zone/penumbra/blob/main/crates/view/src/storage.rs#L1403

  3. leave the table as-is and have the web client fetch the complete, up-to-date position data from the existing RPCs based on the position ID retrieved from the view server

Contributor:

leave the table as-is and have the web client fetch the complete, up-to-date position data from the existing RPCs based on the position ID retrieved from the view server

Late to the party, but that feels like a very appropriate solution to the problem at hand. We already have penumbra.core.component.dex.v1.QueryService.LiquidityPositionById, which exists specifically for this use case: accept position IDs, return full positions. In the OP, @grod220 did say

To support limit orders in the web, we need the ability for the view service to return position info to a client.

but I lack context on why that might be a hard requirement. To my eye, we already have "the ability for the view service to return position info to a client."

@grod220 (Contributor, Author):

Helpful discussion!

@zbuc ah, I see what you mean. For point 2, it is not clear to me what frequency or signal would trigger re-fetching a position's data 🤔 (or how to do this in a privacy-preserving way).

@conorsch thanks for sharing that. I did run into this RPC method. However, before I realized the reserves limitation, I thought it would be simple to just serve the position data that is already in the database. I was gravitating toward updating the OwnedPositionIds view service RPC, as this does not reveal my positions to the node. However, if there isn't a reasonable way to keep up-to-date positions in the db without specifically querying the node with position IDs, it sounds like a fresh query is all the same.

@zbuc (Member) commented Aug 29, 2024:

@grod220 the naive solution would be to fetch all the positions during handle_block, alongside the check for new app parameters. And yes, this does introduce privacy concerns, as you are essentially linking your IP address to all those position IDs. The same problem exists if we do it in the web, too. Splitting the queries apart over time doesn't really fix the issue either.

For privacy's sake, either adding the positions to the compact block or doing something like having the client fetch additional randomly selected positions as "noise" to reduce linkability seems good, but there still seems to be danger of linkability over time based on the union of positions between sets of queries.

Additionally there might be other heuristics that could be applied and lead to linkability, for example, "this IP address queries for this list of positions regularly, the majority of which are for trading pair XY, so it seems like this IP address is providing liquidity for trading pair XY".
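To make the "noise" idea concrete, a rough sketch (purely illustrative; the decoy pool, ratio, and helper name are assumptions, and as noted above this does not eliminate linkability across repeated queries):

use rand::seq::SliceRandom;
use rand::thread_rng;

// Hypothetical decoy-query helper: mix the position IDs we actually care
// about with randomly chosen decoys drawn from some public pool (e.g. all
// known position IDs), so that a single request does not directly reveal the
// owned set. Intersections across repeated queries still leak over time.
fn build_query_set<T: Clone>(owned: &[T], decoy_pool: &[T], decoys_per_real: usize) -> Vec<T> {
    let mut rng = thread_rng();
    let mut query: Vec<T> = owned.to_vec();
    let decoy_count = owned.len() * decoys_per_real;
    query.extend(decoy_pool.choose_multiple(&mut rng, decoy_count).cloned());
    query.shuffle(&mut rng);
    query
}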

@grod220 (Contributor, Author):

Given there doesn't seem to be a practical privacy-preserving solution, it sounds like the roundtrip OwnedPositionIds -> LiquidityPositionById query is the way to go for now. Closing this PR.

Thanks everyone! 🙏

Member:

@hdevalence tagging you for posterity re: the accepted LP linkability risk for the present moment. IMO it is worth investigating what a better privacy-preserving design here would look like.

@aubrika (Contributor) left a comment:

Good start for this, I think; just needs some extra attention as noted in the comment.


@conorsch (Contributor) left a comment:

@grod220 In the OP you say that "we need the ability for the view service to return position info to a client", but to my eye that capability already exists in penumbra.core.component.dex.v1.QueryService.LiquidityPositionById. Can you explain why LPById is insufficient for the problem at hand? It seems to my naive eye that a two-pass approach in web code, which 1) queries for the relevant (i.e. owned) position ids, then 2) hydrates those positions by requesting the full position by id, is the right fit here. Happy to read up on discussions elsewhere if this design requirement has already been covered and I missed it!
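For concreteness, a rough sketch of that two-pass flow in Rust, assuming tonic-generated clients from penumbra_proto; the module paths and request/response field names here are assumptions based on the protos quoted in this thread and should be checked against the generated code:

use penumbra_proto::core::component::dex::v1::{
    query_service_client::QueryServiceClient, LiquidityPositionByIdRequest, Position,
};
use penumbra_proto::view::v1::{view_service_client::ViewServiceClient, OwnedPositionIdsRequest};
use tonic::transport::Channel;

// Hypothetical two-pass flow: (1) ask the view service for owned position IDs,
// then (2) hydrate each ID into a full Position via the public dex query
// service. Only step (2) talks to the node, and only with position IDs.
async fn owned_positions_full(
    view: &mut ViewServiceClient<Channel>,
    dex: &mut QueryServiceClient<Channel>,
) -> anyhow::Result<Vec<Position>> {
    // Step 1: stream the owned position IDs from the local view service.
    let mut ids = view
        .owned_position_ids(OwnedPositionIdsRequest::default())
        .await?
        .into_inner();

    // Step 2: hydrate each ID against the dex query service.
    let mut positions = Vec::new();
    while let Some(resp) = ids.message().await? {
        if let Some(position_id) = resp.position_id {
            let full = dex
                .liquidity_position_by_id(LiquidityPositionByIdRequest {
                    position_id: Some(position_id),
                    ..Default::default()
                })
                .await?
                .into_inner();
            positions.extend(full.data);
        }
    }
    Ok(positions)
}

The same shape translates directly to the web stack: one streaming call to the view service for IDs, then per-ID unary calls to LiquidityPositionById.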

@@ -1177,9 +1177,9 @@ impl TxCmd {
.set_gas_prices(gas_prices)
.set_fee_tier((*fee_tier).into());

- for position_id in positions_to_close_now {
+ for (id, _) in positions_to_close_now {
Contributor:

Nit: would prefer to keep position_id var name for comprehension. Also makes the diff smaller and easier to review.

@@ -689,6 +689,7 @@ message OwnedPositionIdsRequest {

message OwnedPositionIdsResponse {
core.component.dex.v1.PositionId position_id = 1;
+ core.component.dex.v1.Position position = 2;
}
Contributor:

Not convinced that OwnedPositionIdsResponse should return OwnedPositions; as the method name indicates, it should return OwnedPositionIds.


@grod220 closed this Aug 30, 2024