[doc][dpapp] Initial bmv2 dpapp HLD #606
Conversation
Looks like this check is failing @jimmyzhai - let us know (or attend the Community call tomorrow at 9am Pacific) if you need help
- vpp workers, which serve as an exception path for packet processing, running on multiple CPUs. A worker creates a flow in the local flow table and notifies the dash-sai server to offload it to the BMv2 flow table. The packet is temporarily queued; once the worker knows the flow has been offloaded to BMv2 successfully, it dequeues the packet and sends it back to the P4 pipeline via the VPP port. The workers also run the flow age-out task with proper scheduling (a rough sketch follows this list).
- flow table, a local cache of the BMv2 flow table.
- DASH SAI, the single interface for DASH object CRUD on the DASH pipeline, implemented by DASH BMv2.
- VPP port, a veth interface connected to BMv2 via a veth pair. It serves as the datapath channel to receive/send all packets between the data-plane app and BMv2. Generally the port supports multiple RSS queues, each bound to one vpp worker.
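To make the exception path concrete, here is a minimal C sketch of what one vpp worker does for a flow-miss packet, following the steps listed above. Every type and function name in it (local_flow_table_insert, dash_sai_offload_flow, vpp_port_reinject, ...) is a hypothetical placeholder for illustration, not an actual VPP or dash-sai API.

```c
/* Hypothetical sketch of the dpapp exception path; none of these names
 * come from the actual VPP plugin or the dash-sai API. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    unsigned src_ip, dst_ip;
    unsigned short src_port, dst_port;
    unsigned char proto;
} flow_key_t;

typedef struct {
    flow_key_t key;
    const char *pkt;        /* the temporarily queued packet */
} parked_pkt_t;

/* Stubs standing in for the local flow table, the dash-sai server and the VPP port. */
static bool local_flow_table_insert(const flow_key_t *k) { (void)k; return true; }
static bool dash_sai_offload_flow(const flow_key_t *k)   { (void)k; return true; }   /* offload to BMv2 flow table */
static void vpp_port_reinject(const char *pkt)           { printf("re-inject: %s\n", pkt); }

/* One iteration of a worker handling a flow-miss packet. */
static void worker_handle_flow_miss(parked_pkt_t *p)
{
    if (!local_flow_table_insert(&p->key))    /* 1. create the flow in the local table       */
        return;
    if (dash_sai_offload_flow(&p->key))       /* 2. offload the flow to BMv2                 */
        vpp_port_reinject(p->pkt);            /* 3. dequeue and send back to the P4 pipeline */
}

int main(void)
{
    parked_pkt_t p = { { 0x0a000001, 0x0a000002, 1234, 80, 6 }, "customer packet" };
    worker_handle_flow_miss(&p);
    return 0;
}
```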
So this impacts the current bmv2 model and requires adding two new veth ports to it? Can you update the diagrams to show the veth names?
Yes, veth4 on bmv2 and veth5 on dpapp. I think the veth port names are implementation details. I'll later update them in dash-pipeline README instead of this HLD.
@r12f for review
Fixed
.wordlist.txt
@@ -100,6 +100,8 @@ configs
Conntrack
Containerlab
CP
cpus
it might be better to get this one removed : D
Removed
## 3. Project scenario

### 3.1. Stateful packet process - flow
- Flow Creation<br>
we can use 2 enters + indent instead of `<br>`. It is preferred to not have html tags in the markdown docs.
fixed
### 3.1. Stateful packet process - flow
- Flow Creation<br>
In the DASH pipeline, after the 5-tuple flow keys are extracted, the packet goes to the flow lookup stage. If a flow is matched, the packet is marked with a flow-hit flag, otherwise with a flow-miss flag. The packet then continues to the next stages, such as ACL, (destination) NAT, routing, etc. After the routing stage, if a route is found and the packet is flow-miss, it bypasses the remaining stages and is forwarded to the data-plane app. The data-plane app uses dash-sai APIs to create the flow in the flow table and then re-injects the packet back into the pipeline.
"after 5-tuple flow keys" -> after flow keys....
this is because flow keys might not be 5 tuples, e.g. 3 tuples based on business needs.
fixed
### 3.1. Stateful packet process - flow
- Flow Creation<br>
In the DASH pipeline, after the 5-tuple flow keys are extracted, the packet goes to the flow lookup stage. If a flow is matched, the packet is marked with a flow-hit flag, otherwise with a flow-miss flag. The packet then continues to the next stages, such as ACL, (destination) NAT, routing, etc. After the routing stage, if a route is found and the packet is flow-miss, it bypasses the remaining stages and is forwarded to the data-plane app. The data-plane app uses dash-sai APIs to create the flow in the flow table and then re-injects the packet back into the pipeline.
When a flow is hit, the packet should bypass the policy matching stages, such as ACL, routing, etc.
Fixed
- Flow Creation<br>
In the DASH pipeline, after the 5-tuple flow keys are extracted, the packet goes to the flow lookup stage. If a flow is matched, the packet is marked with a flow-hit flag, otherwise with a flow-miss flag. The packet then continues to the next stages, such as ACL, (destination) NAT, routing, etc. After the routing stage, if a route is found and the packet is flow-miss, it bypasses the remaining stages and is forwarded to the data-plane app. The data-plane app uses dash-sai APIs to create the flow in the flow table and then re-injects the packet back into the pipeline.
- Flow Deletion<br>
In the flow lookup stage, a TCP FIN/RST packet is always marked flow-miss and later forwarded to the data-plane app.
TCP FIN and RST packets should not be marked as flow-miss, as they should not go through the policy matching stages. However, besides applying the SDN transformation, we should notify the DPAPP (by forwarding the packet) to get the flow removed.
Fixed
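Putting the two review points above together (a flow-hit packet bypasses the policy stages and only gets the cached SDN transformation, a flow-miss packet runs the policy stages and, when a route is found, is handed to the data-plane app, and a TCP FIN/RST on a hit additionally notifies DPAPP to remove the flow), a rough C sketch of the lookup-stage decision could look like this. All names are hypothetical; the real logic lives in the BMv2 P4 pipeline.

```c
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    bool flow_hit;          /* set by the flow lookup stage            */
    bool tcp_fin_or_rst;    /* TCP FIN/RST seen on this packet         */
    bool route_found;       /* set by the routing stage on a flow miss */
} pkt_meta_t;

/* Hypothetical stage helpers; the real pipeline is P4 running on BMv2. */
static void apply_sdn_transformation(pkt_meta_t *m) { (void)m; }
static void run_policy_stages(pkt_meta_t *m)        { (void)m; }   /* ACL, (D)NAT, routing, ... */
static void forward_to_dpapp(const char *why)       { printf("to dpapp: %s\n", why); }

static void lookup_stage_decision(pkt_meta_t *m)
{
    if (m->flow_hit) {
        /* Flow hit: skip policy matching, just apply the cached transformation. */
        apply_sdn_transformation(m);
        if (m->tcp_fin_or_rst)
            forward_to_dpapp("flow deletion");    /* ask DPAPP to remove the flow */
        return;
    }

    /* Flow miss: run the policy stages; when a route is found, hand the packet
     * to DPAPP so it can create the flow and re-inject the packet. */
    run_policy_stages(m);
    if (m->route_found)
        forward_to_dpapp("flow creation");
}

int main(void)
{
    pkt_meta_t miss = { .flow_hit = false, .route_found = true };
    pkt_meta_t fin  = { .flow_hit = true,  .tcp_fin_or_rst = true };

    lookup_stage_decision(&miss);
    lookup_stage_decision(&fin);
    return 0;
}
```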
### 3.2. HA
- Inline flow replication<br>
In HA context, Active data-plane app creates flow, replicates the flow in
"replicates the flow in metadata"
will be better to rephrase this part.
Fixed
- Inline flow replication<br>
In HA context, Active data-plane app creates flow, replicates the flow in
metadata, glues it with original packet, and sends the packet to Standby
data-plane app via DPU data-plane channel. Standby data-plane app recreates
"sends the packet to Standby data-plane app" -> "sends the packet to Standby"
this is because the p4 can also be involved.
fixed
the flow, and acknowledges the Active data-plane app to finish flow creation. The
same logic can apply to flow deletion and flow age-out.
- Flow bulk sync<br>
Flow bulk sync replicates batch flows from one DPU to another to make flow table consistency on Active and Standby DPUs. When the HA agent starts a bulk sync via DASH SAI, the Active data-plane app will walk the flow table based on the sync method (perfect/range), generate batch flows and send them to the Standby data-plane app with gRPC via the control-plane channel. The Standby data-plane app will create the flows in order.
nit: "flow table consistency" -> "flow table consistent"
fixed
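As a rough illustration of the bulk-sync walk described in the quoted paragraph above, the sketch below batches flow entries and pushes each batch to the Standby in order. The flow-table walk, the batch size and the gRPC send helper are hypothetical stand-ins; the real control-plane channel and message format are not defined here.

```c
#include <stdio.h>
#include <stddef.h>

#define BULK_SYNC_BATCH 128
#define TOTAL_FLOWS     300           /* pretend flow-table size for this sketch */

typedef struct { unsigned id; } flow_entry_t;   /* stand-in for a flattened flow entry */

/* Stub: copy up to 'max' flows starting at 'cursor' (a "perfect" walk over the table). */
static size_t flow_table_walk(size_t cursor, flow_entry_t *out, size_t max)
{
    size_t n = 0;
    while (cursor + n < TOTAL_FLOWS && n < max) {
        out[n].id = (unsigned)(cursor + n);
        n++;
    }
    return n;
}

/* Stub standing in for the gRPC send over the control-plane channel. */
static void grpc_send_flow_batch(const flow_entry_t *batch, size_t n)
{
    printf("sync batch of %zu flows starting at id %u\n", n, batch[0].id);
}

int main(void)
{
    flow_entry_t batch[BULK_SYNC_BATCH];
    size_t cursor = 0, n;

    /* Walk the whole flow table in batches; the Standby recreates the flows in this order. */
    while ((n = flow_table_walk(cursor, batch, BULK_SYNC_BATCH)) > 0) {
        grpc_send_flow_batch(batch, n);
        cursor += n;
    }
    return 0;
}
```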
Flow bulk sync replicates batch flows from one DPU to another to make flow table consistency on Active and Standby DPUs. When the HA agent starts a bulk sync via DASH SAI, the Active data-plane app will walk the flow table based on the sync method (perfect/range), generate batch flows and send them to the Standby data-plane app with gRPC via the control-plane channel. The Standby data-plane app will create the flows in order.

### 3.3. Flow re-simulation
When SONiC changes policies via DASH SAI, flow could be impacted. The data-plane app is triggered to re-simulate the flow. In HA context, the Active data-plane app also needs to sync the updated flows to Standby.
"flow could be impacted"
better to be more specific. we can say "some flows might need to be updated to get the latest policy applied" or so.
fixed
## 6. Detailed design

The figure below, taken from [HA API HLD], outlines the whole packet flow in the data plane for both standalone and HA contexts.
"[HA API HLD]" missing link.
fixed
When the DASH pipeline requests DPAPP for flow creation, it encapsulates DASH metadata in an Ethernet frame with EtherType DASH_METADATA and appends the original customer packet. The packet sent to DPAPP looks like:

<div style="text-align: center;">
Ethernet HEADER|DASH metadata|customer packet
this style doesn't seem to work.
Removed this style.
Ethernet HEADER|DASH metadata|customer packet
</div>

The EtherType number of DASH_METADATA is 0x876D, which reuses the number of EtherType SECURE_DATA (vpp/src/vnet/ethernet/types.def at master · FDio/vpp · GitHub).
nit: initial space.
fixed
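For illustration only, the encapsulation above can be pictured as the C layout sketch below. Only the outer Ethernet header and the 0x876D EtherType come from the text; the DASH metadata fields are not spelled out in this excerpt, so they stay an opaque payload.

```c
#include <stdint.h>
#include <stdio.h>

#define ETHERTYPE_DASH_METADATA 0x876D   /* reuses the EtherType number of SECURE_DATA */

/* Outer Ethernet header of the frame the pipeline sends to DPAPP. */
struct dash_ether_header {
    uint8_t  dst_mac[6];
    uint8_t  src_mac[6];
    uint16_t ether_type;                 /* ETHERTYPE_DASH_METADATA, in network byte order */
} __attribute__((packed));

/* The header is followed by the DASH metadata and then the original customer
 * packet; the metadata layout is defined elsewhere, so it is opaque here. */
struct dash_to_dpapp_frame {
    struct dash_ether_header eth;
    uint8_t payload[];                   /* DASH metadata | customer packet */
};

int main(void)
{
    printf("outer header: %zu bytes, ethertype 0x%04X\n",
           sizeof(struct dash_ether_header), ETHERTYPE_DASH_METADATA);
    return 0;
}
```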
Flow data: as in the next figure
It will be better to call out what the following packet is.
claimed in Ethernet HEADER|DASH metadata|customer packet
participant S as DASH SAI Server
autonumber
C->>+S: ENI_ATTR_FULL_FLOW_RESIMULATION_REQUESTED
S->>+R: Call DASH SAI set_eni_attr (epoch)
I believe we don't have set eni attr to update epoch. we will need to call p4 runtime api directly or something to make it work.
also I believe epoch is named as flow version.
Add epoch as an internal attribute of eni and flow, which is invisible to public SAI.
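A hedged sketch of how an internal epoch (flow version) could drive re-simulation, as discussed in this thread: bumping the ENI's epoch marks existing flows stale, and a flow whose epoch lags behind gets re-simulated on its next hit so the latest policy is applied. All names below are hypothetical placeholders, not SAI or P4 symbols.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical internal state; "epoch" is the internal attribute discussed
 * above (also referred to as flow version), not a public SAI field. */
typedef struct { unsigned epoch; } eni_t;
typedef struct { unsigned epoch; } flow_t;

/* What ENI_ATTR_FULL_FLOW_RESIMULATION_REQUESTED could translate to internally. */
static void request_full_flow_resimulation(eni_t *eni) { eni->epoch++; }

/* On a flow hit, a stale epoch means the flow must be re-simulated before reuse. */
static bool flow_needs_resimulation(const eni_t *eni, const flow_t *flow)
{
    return flow->epoch != eni->epoch;
}

int main(void)
{
    eni_t  eni  = { .epoch = 1 };
    flow_t flow = { .epoch = 1 };

    request_full_flow_resimulation(&eni);
    printf("re-simulate: %s\n", flow_needs_resimulation(&eni, &flow) ? "yes" : "no");
    return 0;
}
```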
### 6.4. HA flow
Based on the basic flow, HA flow adds an extra FLOW_SYNCED state, which involves
extra sync for request/response ping-pang between DPAPP and PEER DPAPP.
nit: ping-pong
fixed
Following dpapp HLD #606, this is the 1st part of the implementation: it adds the dpapp target to the Makefile, implemented as a vpp plugin.
- Flow Age-out

In the flow lookup stage, if a packet hits a flow, it refreshes the flow
timestamp. The data-plane app periodically scans the flow table and checks if the flow is
The data-plane app can help the data plane implement the flow age-out mechanism by bridging the gap in the current data plane engine.
"e.g. ......"
Updated
Just curious, is this related to PR464?
```
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
```
we can use the mermaid graph to simplify this graph now, which is just available on github.
The mermaid packet diagram is still in beta and doesn't support varbit representation. I'd prefer to switch to it after it has this feature.
i see. sounds good!
ping @jimmyzhai and @r12f
lgtm now! thanks Junhua!
No. The BYO (Bring-Your-Own) data plane app HLD is a design guideline for a customer-owned data plane app.
Following dpapp HLD #606, this is the 2nd part of the implementation. It updates the bmv2 pipeline to support stateful packet processing.
Following dpapp HLD #606, this is the 3rd part of the implementation. It implements the basic logic of inline flow creation, deletion and age-out.
- Build with command `make dpapp`
- Run with command `make run-dpapp`
This document is the high-level design doc of the BMv2 data plane app. As a proof of concept, it describes the design of a data-plane app example and how it cooperates with the DASH pipeline BMv2 to implement the DASH data plane. The app will be based on VPP.
DPAPP is another packet processing engine running on CPUs. It adds extra capabilities onto a DASH-capable ASIC: