Follow-ups to #2688 (#2791)

Merged
82 changes: 37 additions & 45 deletions lightning/src/ln/channel.rs
@@ -2543,26 +2543,24 @@ impl FailHTLCContents for msgs::OnionErrorPacket {
 		HTLCUpdateAwaitingACK::FailHTLC { htlc_id, err_packet: self }
 	}
 }
-impl FailHTLCContents for (u16, [u8; 32]) {
-	type Message = msgs::UpdateFailMalformedHTLC; // (failure_code, sha256_of_onion)
+impl FailHTLCContents for ([u8; 32], u16) {
+	type Message = msgs::UpdateFailMalformedHTLC;
 	fn to_message(self, htlc_id: u64, channel_id: ChannelId) -> Self::Message {
 		msgs::UpdateFailMalformedHTLC {
 			htlc_id,
 			channel_id,
-			failure_code: self.0,
-			sha256_of_onion: self.1
+			sha256_of_onion: self.0,
+			failure_code: self.1
Comment on lines +2552 to +2553
Contributor:
I think we should revert this change.

In the TLV reading and writing, failure_code comes before sha256_of_onion:

write_tlv_fields!(w, {
					(0, htlc_id, required),
					(1, failure_code, required),
					(2, dummy_err_packet, required),
					(3, sha256_of_onion, required),
				});

And if we want to avoid tuple destructuring, I think we should follow the TLV order in the rest of the codebase.

Collaborator:
I think in principle I agree with you: we should have gone with (failure_code, sha256) rather than the opposite. Sadly, though, we can't (easily) change the InboundHTLCRemovalReason serialization at this point, and IMO we should prefer to match our existing order everywhere rather than have a different order here. It's not all that critical in any direction, of course; we're only talking about moving around 32+2 bytes.
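
To make the serialization constraint concrete, here is a minimal, self-contained sketch of why the (sha256_of_onion, failure_code) order is effectively pinned. The types and byte layout below are hypothetical stand-ins (LDK actually writes InboundHTLCRemovalReason through its TLV macros), but they illustrate that the variant already persists the hash ahead of the code, so the tuple can be passed straight through:

// Hypothetical stand-ins, not LDK's real types or serialization macros.
enum RemovalReason {
	// The existing variant stores (sha256_of_onion, failure_code) in this order.
	FailMalformed(([u8; 32], u16)),
}

// Sketch of a writer that persists the fields in the order they are stored.
fn write_reason(buf: &mut Vec<u8>, reason: &RemovalReason) {
	match reason {
		RemovalReason::FailMalformed((sha256_of_onion, failure_code)) => {
			buf.extend_from_slice(sha256_of_onion);              // 32-byte hash first
			buf.extend_from_slice(&failure_code.to_be_bytes());  // then the 2-byte code
		}
	}
}

fn main() {
	let reason = RemovalReason::FailMalformed(([0x42; 32], 0x4005));
	let mut buf = Vec::new();
	write_reason(&mut buf, &reason);
	// The 32+2 bytes mentioned in the review; reordering the tuple would either
	// change this stored layout or reintroduce the (self.1, self.0) shuffle removed above.
	assert_eq!(buf.len(), 34);
}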

 		}
 	}
 	fn to_inbound_htlc_state(self) -> InboundHTLCState {
-		InboundHTLCState::LocalRemoved(
-			InboundHTLCRemovalReason::FailMalformed((self.1, self.0))
-		)
+		InboundHTLCState::LocalRemoved(InboundHTLCRemovalReason::FailMalformed(self))
 	}
 	fn to_htlc_update_awaiting_ack(self, htlc_id: u64) -> HTLCUpdateAwaitingACK {
 		HTLCUpdateAwaitingACK::FailMalformedHTLC {
 			htlc_id,
-			failure_code: self.0,
-			sha256_of_onion: self.1
+			sha256_of_onion: self.0,
+			failure_code: self.1
 		}
 	}
 }
@@ -2888,7 +2886,7 @@ impl<SP: Deref> Channel<SP> where
 	pub fn queue_fail_malformed_htlc<L: Deref>(
 		&mut self, htlc_id_arg: u64, failure_code: u16, sha256_of_onion: [u8; 32], logger: &L
 	) -> Result<(), ChannelError> where L::Target: Logger {
-		self.fail_htlc(htlc_id_arg, (failure_code, sha256_of_onion), true, logger)
+		self.fail_htlc(htlc_id_arg, (sha256_of_onion, failure_code), true, logger)
 			.map(|msg_opt| assert!(msg_opt.is_none(), "We forced holding cell?"))
 	}

@@ -2901,7 +2899,7 @@ impl<SP: Deref> Channel<SP> where
 	/// return `Ok(_)` if preconditions are met. In any case, `Err`s will only be
 	/// [`ChannelError::Ignore`].
 	fn fail_htlc<L: Deref, E: FailHTLCContents + Clone>(
-		&mut self, htlc_id_arg: u64, err_packet: E, mut force_holding_cell: bool,
+		&mut self, htlc_id_arg: u64, err_contents: E, mut force_holding_cell: bool,
 		logger: &L
 	) -> Result<Option<E::Message>, ChannelError> where L::Target: Logger {
 		if !matches!(self.context.channel_state, ChannelState::ChannelReady(_)) {
@@ -2968,18 +2966,18 @@ impl<SP: Deref> Channel<SP> where
 			}
 		}
 			log_trace!(logger, "Placing failure for HTLC ID {} in holding cell in channel {}.", htlc_id_arg, &self.context.channel_id());
-			self.context.holding_cell_htlc_updates.push(err_packet.to_htlc_update_awaiting_ack(htlc_id_arg));
+			self.context.holding_cell_htlc_updates.push(err_contents.to_htlc_update_awaiting_ack(htlc_id_arg));
 			return Ok(None);
 		}
 
 		log_trace!(logger, "Failing HTLC ID {} back with {} message in channel {}.", htlc_id_arg,
 			E::Message::name(), &self.context.channel_id());
 		{
 			let htlc = &mut self.context.pending_inbound_htlcs[pending_idx];
-			htlc.state = err_packet.clone().to_inbound_htlc_state();
+			htlc.state = err_contents.clone().to_inbound_htlc_state();
 		}
 
-		Ok(Some(err_packet.to_message(htlc_id_arg, self.context.channel_id())))
+		Ok(Some(err_contents.to_message(htlc_id_arg, self.context.channel_id())))
 	}
 
 	// Message handlers:
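
For readers skimming the diff: fail_htlc can serve both failure kinds because of the FailHTLCContents trait touched in the first hunk. Each error-contents type knows how to turn itself into its own update message, so one generic function handles both paths. A stripped-down sketch of that shape, using hypothetical simplified types rather than LDK's real ones:

// Hypothetical, stripped-down stand-ins for the LDK types; illustration only.
struct UpdateFailHTLC { htlc_id: u64, err_packet: Vec<u8> }
struct UpdateFailMalformedHTLC { htlc_id: u64, sha256_of_onion: [u8; 32], failure_code: u16 }

// One trait, one generic entry point: each error-contents type builds its own message.
trait FailHTLCContents {
	type Message;
	fn to_message(self, htlc_id: u64) -> Self::Message;
}

impl FailHTLCContents for Vec<u8> {
	type Message = UpdateFailHTLC;
	fn to_message(self, htlc_id: u64) -> UpdateFailHTLC {
		UpdateFailHTLC { htlc_id, err_packet: self }
	}
}

// Tuple order mirrors the (sha256_of_onion, failure_code) order discussed above.
impl FailHTLCContents for ([u8; 32], u16) {
	type Message = UpdateFailMalformedHTLC;
	fn to_message(self, htlc_id: u64) -> UpdateFailMalformedHTLC {
		UpdateFailMalformedHTLC { htlc_id, sha256_of_onion: self.0, failure_code: self.1 }
	}
}

// Analogous to Channel::fail_htlc in the diff: generic over the error contents.
fn fail_htlc<E: FailHTLCContents>(htlc_id: u64, err_contents: E) -> E::Message {
	err_contents.to_message(htlc_id)
}

fn main() {
	let normal = fail_htlc(1, vec![0u8; 32]);
	let malformed = fail_htlc(2, ([0u8; 32], 0x4005u16));
	println!("{} {}", normal.htlc_id, malformed.failure_code);
}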
@@ -3576,7 +3574,7 @@ impl<SP: Deref> Channel<SP> where
 			// the limit. In case it's less rare than I anticipate, we may want to revisit
 			// handling this case better and maybe fulfilling some of the HTLCs while attempting
 			// to rebalance channels.
-			match &htlc_update {
+			let fail_htlc_res = match &htlc_update {
 				&HTLCUpdateAwaitingACK::AddHTLC {
 					amount_msat, cltv_expiry, ref payment_hash, ref source, ref onion_routing_packet,
 					skimmed_fee_msat, blinding_point, ..
@@ -3604,6 +3602,7 @@ impl<SP: Deref> Channel<SP> where
 						}
 					}
 				}
+					None
 				},
 				&HTLCUpdateAwaitingACK::ClaimHTLC { ref payment_preimage, htlc_id, .. } => {
 					// If an HTLC claim was previously added to the holding cell (via
@@ -3617,40 +3616,33 @@ impl<SP: Deref> Channel<SP> where
 						{ monitor_update } else { unreachable!() };
 					update_fulfill_count += 1;
 					monitor_update.updates.append(&mut additional_monitor_update.updates);
+					None
 				},
 				&HTLCUpdateAwaitingACK::FailHTLC { htlc_id, ref err_packet } => {
-					match self.fail_htlc(htlc_id, err_packet.clone(), false, logger) {
-						Ok(update_fail_msg_option) => {
-							// If an HTLC failure was previously added to the holding cell (via
-							// `queue_fail_htlc`) then generating the fail message itself must
-							// not fail - we should never end up in a state where we double-fail
-							// an HTLC or fail-then-claim an HTLC as it indicates we didn't wait
-							// for a full revocation before failing.
-							debug_assert!(update_fail_msg_option.is_some());
-							update_fail_count += 1;
-						},
-						Err(e) => {
-							if let ChannelError::Ignore(_) = e {}
-							else {
-								panic!("Got a non-IgnoreError action trying to fail holding cell HTLC");
-							}
-						}
-					}
+					Some(self.fail_htlc(htlc_id, err_packet.clone(), false, logger)
+						.map(|fail_msg_opt| fail_msg_opt.map(|_| ())))
 				},
 				&HTLCUpdateAwaitingACK::FailMalformedHTLC { htlc_id, failure_code, sha256_of_onion } => {
-					match self.fail_htlc(htlc_id, (failure_code, sha256_of_onion), false, logger) {
-						Ok(update_fail_malformed_opt) => {
-							debug_assert!(update_fail_malformed_opt.is_some()); // See above comment
-							update_fail_count += 1;
-						},
-						Err(e) => {
-							if let ChannelError::Ignore(_) = e {}
-							else {
-								panic!("Got a non-IgnoreError action trying to fail holding cell HTLC");
-							}
-						}
-					}
-				},
+					Some(self.fail_htlc(htlc_id, (sha256_of_onion, failure_code), false, logger)
+						.map(|fail_msg_opt| fail_msg_opt.map(|_| ())))
+				}
+			};
+			if let Some(res) = fail_htlc_res {
+				match res {
+					Ok(fail_msg_opt) => {
+						// If an HTLC failure was previously added to the holding cell (via
+						// `queue_fail_{malformed_}htlc`) then generating the fail message itself must
+						// not fail - we should never end up in a state where we double-fail
+						// an HTLC or fail-then-claim an HTLC as it indicates we didn't wait
+						// for a full revocation before failing.
+						debug_assert!(fail_msg_opt.is_some());
+						update_fail_count += 1;
+					},
+					Err(ChannelError::Ignore(_)) => {},
+					Err(_) => {
+						panic!("Got a non-IgnoreError action trying to fail holding cell HTLC");
+					},
+				}
 			}
 		}
 		if update_add_count == 0 && update_fulfill_count == 0 && update_fail_count == 0 && self.context.holding_cell_update_fee.is_none() {
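
The net effect of the hunk above is that every arm of the holding-cell match now evaluates to an Option<Result<...>>, and the Ok/Err handling that used to be duplicated in the FailHTLC and FailMalformedHTLC arms runs exactly once after the match. A minimal sketch of the pattern, with hypothetical stand-in types rather than the real HTLCUpdateAwaitingACK machinery:

// Hypothetical stand-ins, not the real holding-cell machinery.
enum ChannelError { Ignore(String), Close(String) }

enum HoldingCellUpdate { Add, Claim, Fail { htlc_id: u64 }, FailMalformed { htlc_id: u64 } }

// Stand-in for Channel::fail_htlc: Ok(Some(())) means a fail message was generated.
fn fail_htlc(_htlc_id: u64) -> Result<Option<()>, ChannelError> {
	Ok(Some(()))
}

fn free_holding_cell(updates: Vec<HoldingCellUpdate>) -> usize {
	let mut update_fail_count = 0;
	for update in updates {
		// Every arm evaluates to Option<Result<..>>; only the failing arms are Some.
		let fail_htlc_res = match update {
			HoldingCellUpdate::Add | HoldingCellUpdate::Claim => None,
			HoldingCellUpdate::Fail { htlc_id } | HoldingCellUpdate::FailMalformed { htlc_id } => {
				Some(fail_htlc(htlc_id).map(|fail_msg_opt| fail_msg_opt.map(|_| ())))
			},
		};
		// The Ok/Err handling that used to be copied into both arms now runs once.
		if let Some(res) = fail_htlc_res {
			match res {
				Ok(fail_msg_opt) => {
					debug_assert!(fail_msg_opt.is_some());
					update_fail_count += 1;
				},
				Err(ChannelError::Ignore(_)) => {},
				Err(_) => panic!("Got a non-IgnoreError action trying to fail holding cell HTLC"),
			}
		}
	}
	update_fail_count
}

fn main() {
	let updates = vec![
		HoldingCellUpdate::Add,
		HoldingCellUpdate::Claim,
		HoldingCellUpdate::Fail { htlc_id: 1 },
		HoldingCellUpdate::FailMalformed { htlc_id: 2 },
	];
	assert_eq!(free_holding_cell(updates), 2);
}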
47 changes: 21 additions & 26 deletions lightning/src/ln/channelmanager.rs
@@ -4344,7 +4344,7 @@
 				if let Some(ChannelPhase::Funded(ref mut chan)) = peer_state.channel_by_id.get_mut(&forward_chan_id) {
 					let logger = WithChannelContext::from(&self.logger, &chan.context);
 					for forward_info in pending_forwards.drain(..) {
-						match forward_info {
+						let queue_fail_htlc_res = match forward_info {
 							HTLCForwardInfo::AddHTLC(PendingAddHTLCInfo {
 								prev_short_channel_id, prev_htlc_id, prev_funding_outpoint, prev_user_channel_id,
 								forward_info: PendingHTLCInfo {
@@ -4390,40 +4390,35 @@
 								));
 								continue;
 							}
+							None
 						},
 						HTLCForwardInfo::AddHTLC { .. } => {
 							panic!("short_channel_id != 0 should imply any pending_forward entries are of type Forward");
 						},
 						HTLCForwardInfo::FailHTLC { htlc_id, err_packet } => {
 							log_trace!(logger, "Failing HTLC back to channel with short id {} (backward HTLC ID {}) after delay", short_chan_id, htlc_id);
-							if let Err(e) = chan.queue_fail_htlc(
-								htlc_id, err_packet, &&logger
-							) {
-								if let ChannelError::Ignore(msg) = e {
-									log_trace!(logger, "Failed to fail HTLC with ID {} backwards to short_id {}: {}", htlc_id, short_chan_id, msg);
-								} else {
-									panic!("Stated return value requirements in queue_fail_htlc() were not met");
-								}
-								// fail-backs are best-effort, we probably already have one
-								// pending, and if not that's OK, if not, the channel is on
-								// the chain and sending the HTLC-Timeout is their problem.
-								continue;
-							}
+							Some((chan.queue_fail_htlc(htlc_id, err_packet, &&logger), htlc_id))
 						},
 						HTLCForwardInfo::FailMalformedHTLC { htlc_id, failure_code, sha256_of_onion } => {
-							log_trace!(self.logger, "Failing malformed HTLC back to channel with short id {} (backward HTLC ID {}) after delay", short_chan_id, htlc_id);
-							if let Err(e) = chan.queue_fail_malformed_htlc(htlc_id, failure_code, sha256_of_onion, &self.logger) {
-								if let ChannelError::Ignore(msg) = e {
-									log_trace!(self.logger, "Failed to fail HTLC with ID {} backwards to short_id {}: {}", htlc_id, short_chan_id, msg);
-								} else {
-									panic!("Stated return value requirements in queue_fail_malformed_htlc() were not met");
-								}
-								// fail-backs are best-effort, we probably already have one
-								// pending, and if not that's OK, if not, the channel is on
-								// the chain and sending the HTLC-Timeout is their problem.
-								continue;
-							}
+							log_trace!(logger, "Failing malformed HTLC back to channel with short id {} (backward HTLC ID {}) after delay", short_chan_id, htlc_id);
+							let res = chan.queue_fail_malformed_htlc(
+								htlc_id, failure_code, sha256_of_onion, &&logger
+							);
+							Some((res, htlc_id))
Review comment (Contributor):
Is there a reason for storing the result of the function call in a variable res instead of putting the function call in the tuple as done on L4400?

Reply (Contributor, author):
I thought it was more readable this way, since it wouldn't all fit on one line like L4400 does.

 						},
+					};
+					if let Some((queue_fail_htlc_res, htlc_id)) = queue_fail_htlc_res {
+						if let Err(e) = queue_fail_htlc_res {
+							if let ChannelError::Ignore(msg) = e {
+								log_trace!(logger, "Failed to fail HTLC with ID {} backwards to short_id {}: {}", htlc_id, short_chan_id, msg);
+							} else {
+								panic!("Stated return value requirements in queue_fail_{{malformed_}}htlc() were not met");
Review comment (Contributor):
We're achieving DRY code here, and that's great!

But we're also combining the panic messages that can be generated from two different sources. Is that an optimal thing to do?

Reply (Collaborator):
That's a good point (interpreting this as "the stack trace from the panic won't be quite as useful, since we won't be able to differentiate between the two cases"); however, in this case I'm not sure we care much about being able to differentiate, so the DRYing is likely worth more. Further, we should generally be able to differentiate the cases anyway, as the logs should show different messages being received.

+							}
+							// fail-backs are best-effort, we probably already have one
+							// pending, and if not that's OK, if not, the channel is on
+							// the chain and sending the HTLC-Timeout is their problem.
+							continue;
+						}
 					}
 				}
 			} else {
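
The channelmanager.rs change applies the same consolidation, but pairs each queueing Result with its htlc_id so the shared handler can log which HTLC could not be failed back and then skip it, since fail-backs are best-effort. A rough sketch under the same kind of simplified, hypothetical types:

// Hypothetical simplified types; not LDK's ChannelManager internals.
enum ChannelError { Ignore(String), Close(String) }

// Stand-in for Channel::queue_fail_htlc / queue_fail_malformed_htlc: pretend the
// HTLC has already been resolved, which callers must treat as a soft error.
fn queue_fail_htlc(htlc_id: u64) -> Result<(), ChannelError> {
	Err(ChannelError::Ignore(format!("HTLC {} already resolved", htlc_id)))
}

fn main() {
	let short_chan_id = 42u64;
	for htlc_id in [1u64, 2] {
		// Each failing arm yields Some((Result, htlc_id)); shown here directly.
		let queue_fail_htlc_res = Some((queue_fail_htlc(htlc_id), htlc_id));
		if let Some((res, htlc_id)) = queue_fail_htlc_res {
			if let Err(e) = res {
				match e {
					ChannelError::Ignore(msg) => println!(
						"Failed to fail HTLC with ID {} backwards to short_id {}: {}",
						htlc_id, short_chan_id, msg
					),
					ChannelError::Close(_) =>
						panic!("Stated return value requirements in queue_fail_htlc() were not met"),
				}
				// Fail-backs are best-effort: log and move on to the next HTLC.
				continue;
			}
		}
		println!("HTLC {} queued for fail-back", htlc_id);
	}
}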