Add position to OwnedPositionIdsResponse #4837

Status: Closed · wants to merge 1 commit
10 changes: 5 additions & 5 deletions crates/bin/pcli/src/command/tx.rs
@@ -1177,9 +1177,9 @@ impl TxCmd {
.set_gas_prices(gas_prices)
.set_fee_tier((*fee_tier).into());

for position_id in positions_to_close_now {
for (id, _) in positions_to_close_now {
Contributor: Nit: would prefer to keep the position_id var name for comprehension. Also makes the diff smaller and easier to review.

// Close the position
planner.position_close(*position_id);
planner.position_close(*id);
}

let final_plan = planner
@@ -1229,13 +1229,13 @@ impl TxCmd {
.set_gas_prices(gas_prices)
.set_fee_tier((*fee_tier).into());

for position_id in positions_to_withdraw_now {
for (id, _) in positions_to_withdraw_now {
// Withdraw the position

// Fetch the information regarding the position from the view service.
let position = client
.liquidity_position_by_id(LiquidityPositionByIdRequest {
position_id: Some((*position_id).into()),
position_id: Some((*id).into()),
})
.await?
.into_inner();
@@ -1254,7 +1254,7 @@ impl TxCmd {
.pair
.expect("missing trading function pair");
planner.position_withdraw(
*position_id,
*id,
reserves.try_into().expect("invalid reserves"),
pair.try_into().expect("invalid pair"),
);
Binary file modified crates/cnidarium/src/gen/proto_descriptor.bin.no_lfs
Binary file not shown.
4 changes: 4 additions & 0 deletions crates/proto/src/gen/penumbra.view.v1.rs
@@ -1529,6 +1529,10 @@ pub struct OwnedPositionIdsResponse {
pub position_id: ::core::option::Option<
super::super::core::component::dex::v1::PositionId,
>,
#[prost(message, optional, tag = "2")]
pub position: ::core::option::Option<
super::super::core::component::dex::v1::Position,
>,
}
impl ::prost::Name for OwnedPositionIdsResponse {
const NAME: &'static str = "OwnedPositionIdsResponse";
17 changes: 17 additions & 0 deletions crates/proto/src/gen/penumbra.view.v1.serde.rs
@@ -4307,10 +4307,16 @@ impl serde::Serialize for OwnedPositionIdsResponse {
if self.position_id.is_some() {
len += 1;
}
if self.position.is_some() {
len += 1;
}
let mut struct_ser = serializer.serialize_struct("penumbra.view.v1.OwnedPositionIdsResponse", len)?;
if let Some(v) = self.position_id.as_ref() {
struct_ser.serialize_field("positionId", v)?;
}
if let Some(v) = self.position.as_ref() {
struct_ser.serialize_field("position", v)?;
}
struct_ser.end()
}
}
@@ -4323,11 +4329,13 @@ impl<'de> serde::Deserialize<'de> for OwnedPositionIdsResponse {
const FIELDS: &[&str] = &[
"position_id",
"positionId",
"position",
];

#[allow(clippy::enum_variant_names)]
enum GeneratedField {
PositionId,
Position,
__SkipField__,
}
impl<'de> serde::Deserialize<'de> for GeneratedField {
@@ -4351,6 +4359,7 @@ impl<'de> serde::Deserialize<'de> for OwnedPositionIdsResponse {
{
match value {
"positionId" | "position_id" => Ok(GeneratedField::PositionId),
"position" => Ok(GeneratedField::Position),
_ => Ok(GeneratedField::__SkipField__),
}
}
@@ -4371,6 +4380,7 @@ impl<'de> serde::Deserialize<'de> for OwnedPositionIdsResponse {
V: serde::de::MapAccess<'de>,
{
let mut position_id__ = None;
let mut position__ = None;
while let Some(k) = map_.next_key()? {
match k {
GeneratedField::PositionId => {
@@ -4379,13 +4389,20 @@ impl<'de> serde::Deserialize<'de> for OwnedPositionIdsResponse {
}
position_id__ = map_.next_value()?;
}
GeneratedField::Position => {
if position__.is_some() {
return Err(serde::de::Error::duplicate_field("position"));
}
position__ = map_.next_value()?;
}
GeneratedField::__SkipField__ => {
let _ = map_.next_value::<serde::de::IgnoredAny>()?;
}
}
}
Ok(OwnedPositionIdsResponse {
position_id: position_id__,
position: position__,
})
}
}
Binary file modified crates/proto/src/gen/proto_descriptor.bin.no_lfs
Binary file not shown.
18 changes: 11 additions & 7 deletions crates/view/src/client.rs
@@ -185,7 +185,7 @@ pub trait ViewClient {
&mut self,
position_state: Option<position::State>,
trading_pair: Option<TradingPair>,
) -> Pin<Box<dyn Future<Output = Result<Vec<position::Id>>> + Send + 'static>>;
) -> Pin<Box<dyn Future<Output = Result<Vec<(position::Id, Position)>>> + Send + 'static>>;

/// Generates a full perspective for a selected transaction using a full viewing key
fn transaction_info_by_hash(
@@ -697,7 +697,7 @@ where
&mut self,
position_state: Option<position::State>,
trading_pair: Option<TradingPair>,
) -> Pin<Box<dyn Future<Output = Result<Vec<position::Id>>> + Send + 'static>> {
) -> Pin<Box<dyn Future<Output = Result<Vec<(position::Id, Position)>>> + Send + 'static>> {
// should the return be streamed here? none of the other viewclient responses are, probably fine for now
// but might be an issue eventually
let mut self2 = self.clone();
@@ -717,11 +717,15 @@ where
let position_ids = pb_position_ids
.into_iter()
.map(|p| {
position::Id::try_from(p.position_id.ok_or_else(|| {
anyhow::anyhow!("empty OwnedPositionsIdsResponse message")
})?)
let id = position::Id::try_from(p.position_id.ok_or_else(|| {
anyhow::anyhow!("OwnedPositionsIdsResponse missing position_id")
})?)?;
let position = Position::try_from(p.position.ok_or_else(|| {
anyhow::anyhow!("OwnedPositionsIdsResponse missing position")
})?)?;
Ok((id, position))
})
.collect::<anyhow::Result<Vec<position::Id>>>()?;
.collect::<anyhow::Result<Vec<(position::Id, Position)>>>()?;

Ok(position_ids)
}
@@ -1048,6 +1052,6 @@ where

Ok(resp)
}
.boxed()
.boxed()
}
}
9 changes: 5 additions & 4 deletions crates/view/src/service.rs
@@ -1725,16 +1725,17 @@ impl ViewService for ViewServer {
.map_err(|e: anyhow::Error| e.context("could not decode trading pair"))
.map_err(|e| tonic::Status::invalid_argument(format!("{:#}", e)))?;

let ids = self
let positions = self
.storage
.owned_position_ids(position_state, trading_pair)
.owned_positions(position_state, trading_pair)
.await
.map_err(|e| tonic::Status::unavailable(format!("error getting position ids: {e}")))?;
.map_err(|e| tonic::Status::unavailable(format!("error getting positions: {e}")))?;

let stream = try_stream! {
for id in ids {
for (id, position) in positions {
yield pb::OwnedPositionIdsResponse{
position_id: Some(id.into()),
position: Some(position.into())
}
}
};
32 changes: 27 additions & 5 deletions crates/view/src/storage.rs
@@ -21,6 +21,7 @@ use url::Url;

use penumbra_app::params::AppParameters;
use penumbra_asset::{asset, asset::Id, asset::Metadata, Value};
use penumbra_dex::lp::{BareTradingFunction, TradingFunction};
use penumbra_dex::{
lp::position::{self, Position, State},
TradingPair,
@@ -1097,7 +1098,7 @@ impl Storage {

tx.commit()?;
Ok::<(), anyhow::Error>(())
})
})
.await??;

Ok(())
@@ -1679,11 +1680,11 @@ impl Storage {
Ok(())
}

pub async fn owned_position_ids(
pub async fn owned_positions(
&self,
position_state: Option<State>,
trading_pair: Option<TradingPair>,
) -> anyhow::Result<Vec<position::Id>> {
) -> anyhow::Result<Vec<(position::Id, Position)>> {
let pool = self.pool.clone();

let state_clause = match position_state {
@@ -1714,8 +1715,29 @@ impl Storage {
pool.get()?
.prepare_cached(&q)?
.query_and_then([], |row| {
let position_id: Vec<u8> = row.get("position_id")?;
Ok(position::Id(position_id.as_slice().try_into()?))
let position_id_bytes: Vec<u8> = row.get("position_id")?;
let id = position::Id(position_id_bytes.as_slice().try_into()?);

let position_state_str: String = row.get("position_state")?;
Contributor Author: Getting errors that position_state isn't in the database 🤔, but it's here:

position_state TEXT NOT NULL,

Contributor: did you do a view reset or otherwise regenerate the DB schema? 🤔

Member: Weirdly, this seems to only fail in smoke tests.

Contributor:

> Weirdly, this seems to only fail in smoke tests.

Not just in smoke tests: I can replicate the smoke test failure locally, yes, but I can also run the same cmd locally against a devnet and observe the same failure:

❯ cargo run --release --bin pcli -- --home ~/.local/share/pcli-localhost/ tx position close-all
warning: /home/conor/src/penumbra/crates/bin/pindexer/Cargo.toml: `default-features` is ignored for penumbra-app, since `default-features` was not specified for `workspace.dependencies.penumbra-app`, this could become a hard error in the future
    Finished release [optimized] target(s) in 0.28s
     Running `target/release/pcli --home /home/conor/.local/share/pcli-localhost/ tx position close-all`
Scanning blocks from last sync height 628 to latest height 628
[0s] ██████████████████████████████████████████████████       0/0       0/s ETA: 0s
Error: status: Unavailable, message: "error getting positions: Invalid column name: position_state", details: [], metadata: MetadataMap { headers: {"content-type": "application/grpc"} }

Contributor: It's at least the close-all subcommand that's affected. Currently there are a lot of calls to both owned_positions and owned_position_ids, and we'd have to sort out all the different ones to get the tests passing again. The smoke test failures are correct, though, in that they accurately indicate breakage of the tested subcommands.

let trading_pair_str: String = row.get("trading_pair")?;

// TODO: DB currently does not save full position. Hence, default backups are used below.
// We should save the full position in the db to be able to provide the full correct data.
Comment on lines +1724 to +1725. Contributor Author (@grod220, Aug 27, 2024): I understand if relying on defaults may not be an acceptable change and we will want to update the database schema + sync logic (hard to tell how much work this is though).

Contributor: Thinking through the requirements here: the existing Position data is recorded in record_position in storage.rs, which takes a Position but only stores certain fields from it. We could refactor the table to store the whole Position object, either as a blob or decomposed into new columns. However, this would not be a complete solution, since the reserves wouldn't be updated in update_position, and the refactor would need to ensure that the stored position data is correctly updated with any changes. Seems doable, but with some care & thoroughness to account for all places where position reserves may change.
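For illustration only (not something this PR implements): a rough sketch of what persisting the full Position could look like, assuming a hypothetical position_blob column on the existing positions table and assuming the domain Position type exposes protobuf encode/decode via penumbra_proto::DomainType. Names and schema details are assumptions, not the actual Penumbra storage layout.

```rust
// Hedged sketch, not actual Penumbra code: store/load the whole Position as a
// serialized blob. The `position_blob` column and the table layout are assumptions.
use anyhow::Result;
use penumbra_dex::lp::position::{self, Position};
use penumbra_proto::DomainType; // assumed to provide encode_to_vec/decode for Position
use rusqlite::{params, Connection};

fn record_full_position(conn: &Connection, id: &position::Id, position: &Position) -> Result<()> {
    // Protobuf-encode the full domain Position (state, trading function, reserves, nonce, ...).
    let blob = position.encode_to_vec();
    // Upsert so that reserve changes (e.g. from update_position) overwrite the stored copy;
    // assumes position_id is a unique key on the table.
    conn.execute(
        "INSERT INTO positions (position_id, position_blob) VALUES (?1, ?2)
         ON CONFLICT(position_id) DO UPDATE SET position_blob = excluded.position_blob",
        params![id.0.to_vec(), blob],
    )?;
    Ok(())
}

fn load_full_position(conn: &Connection, id: &position::Id) -> Result<Position> {
    let blob: Vec<u8> = conn.query_row(
        "SELECT position_blob FROM positions WHERE position_id = ?1",
        params![id.0.to_vec()],
        |row| row.get(0),
    )?;
    // Decode back into the domain type.
    Position::decode(blob.as_slice())
}
```

As the comment above notes, this only stays correct if every code path that mutates reserves also rewrites the blob, which is exactly the thoroughness concern raised here.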

Member (@zbuc, Aug 29, 2024): The tricky part here is that there's currently nothing that comes across in sync that would tell you when a position's reserves changed.

The three approaches that come to mind are:

1. Adding a list of all touched positions during a block to the compact block. This seems like it runs the danger of bloating the compact block with data that is not relevant to nearly every client syncing, so I think it should be excluded from consideration.

2. Adding an additional step to the sync to call against APIs to fetch current position data. This is a little bad because we can't ensure that the block height of the position data is perfectly consistent with the sync block height, but I think it might be fine. However, this represents a significant shift in view server data, which to my understanding is currently built entirely on compact block data. Do we want to head down the path of enriching it with data from other RPCs? It looks like this line has already been crossed: https://github.com/penumbra-zone/penumbra/blob/main/crates/view/src/storage.rs#L1403

3. Leave the table as-is and have the web client fetch the complete, up-to-date position data from the existing RPCs based on the position ID retrieved from the view server.

Contributor:

> leave the table as-is and have the web client fetch the complete, up-to-date position data from the existing RPCs based on the position ID retrieved from the view server

Late to the party, but that feels like a very appropriate solution to the problem at hand. We already have penumbra.core.component.dex.v1.QueryService.LiquidityPositionById, which exists specifically for this use case: accept position IDs, return full positions. In OP, @grod220 did say

> To support limit orders in the web, we need the ability for the view service to return position info to a client.

but I lack context on why that might be a hard requirement. To my eye, we already have "the ability for the view service to return position info to a client."

Contributor Author: Helpful discussion!

@zbuc ah, I see what you mean. For point 2, it is not clear to me what frequency or signal would trigger re-fetching a position's data 🤔 (or how to do this in a privacy-preserving way).

@conorsch thanks for sharing that. I did run into this rpc method. However, before I realized the reserves limitation, I thought it would be simple to just serve the position data that is already in the database. I was gravitating toward updating the OwnedPositionIds view service RPC, as this does not reveal my positions to the node. However, if there isn't a reasonable way to have up-to-date positions in the db without specifically querying the node with position IDs, it sounds like a fresh query is all the same.

Member (@zbuc, Aug 29, 2024): @grod220 the naive solution would be to fetch all the positions during handle_block alongside the check for new app parameters. And yes, this does introduce privacy concerns, as you are essentially linking your IP address to all those position IDs. The same problem seems to exist if we do it in the web too. Splitting them apart over time doesn't really fix the issue either.

For privacy's sake, either adding the positions to the compact block or doing something like having the client fetch additional randomly selected positions as "noise" to reduce linkability seems good, but there still seems to be danger of linkability over time based on the union of positions between sets of queries.

Additionally, there might be other heuristics that could be applied and lead to linkability, for example: "this IP address queries for this list of positions regularly, the majority of which are for trading pair XY, so it seems like this IP address is providing liquidity for trading pair XY".

Contributor Author: Given there doesn't seem to be a practical privacy-preserving solution, it sounds like the roundtrip OwnedPositionIds -> LiquidityPositionById query is the way to go for now. Closing this PR.

Thanks everyone! 🙏
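For reference, a minimal sketch of that round trip as a client might perform it, using the unchanged ViewClient::owned_position_ids (returning only IDs, as on main) plus the dex QueryService call this diff already uses in pcli. The client setup, the generated module paths, and the response field name are assumptions for illustration, not something this PR prescribes.

```rust
// Hedged sketch of the agreed approach: list owned position IDs privately from
// the local view service, then hydrate each one via the node's dex QueryService.
use anyhow::Result;
use penumbra_dex::lp::position::{self, Position};
use penumbra_proto::core::component::dex::v1::{
    query_service_client::QueryServiceClient as DexQueryServiceClient,
    LiquidityPositionByIdRequest,
};
use penumbra_view::ViewClient;

async fn fetch_owned_positions<V: ViewClient>(
    view: &mut V,
    dex: &mut DexQueryServiceClient<tonic::transport::Channel>,
) -> Result<Vec<(position::Id, Position)>> {
    // 1. Private query to the local view service: which position IDs do I own?
    let ids = view.owned_position_ids(None, None).await?;

    // 2. Public query per ID against the node for the current position state.
    let mut out = Vec::with_capacity(ids.len());
    for id in ids {
        let rsp = dex
            .liquidity_position_by_id(LiquidityPositionByIdRequest {
                position_id: Some(id.into()),
            })
            .await?
            .into_inner();
        // Field name `data` is assumed from the generated proto response type.
        let position: Position = rsp
            .data
            .ok_or_else(|| anyhow::anyhow!("missing position in response"))?
            .try_into()?;
        out.push((id, position));
    }
    Ok(out)
}
```

As discussed above, this keeps the view database unchanged at the cost of revealing the queried position IDs to the node.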

Member: @hdevalence tagging you for posterity re: the accepted LP linkability risk for the present moment. IMO it is worth investigating what a better privacy-preserving design here would look like.

let position = Position {
state: State::from_str(&position_state_str)?,
phi: TradingFunction {
component: BareTradingFunction {
fee: 0,
p: Default::default(),
q: Default::default(),
},
pair: TradingPair::from_str(&trading_pair_str)?,
},
reserves: Default::default(),
nonce: [0u8; 32],
close_on_fill: false,
};
Ok((id, position))
})?
.collect()
})
1 change: 1 addition & 0 deletions proto/penumbra/penumbra/view/v1/view.proto
@@ -689,6 +689,7 @@ message OwnedPositionIdsRequest {

message OwnedPositionIdsResponse {
core.component.dex.v1.PositionId position_id = 1;
core.component.dex.v1.Position position = 2;
}
Contributor: Not convinced that OwnedPositionIdsResponse should return OwnedPositions; as the method name indicates, it should return OwnedPositionIds.


// Requests information on an asset by asset id