
External Collaboration with University of Science and Technology of China


Areas

  1. **Mizar XDP Hardware Offload**
  2. **Mizar XDP Statistics Enhancement**
  3. **Mizar Transit Agent Footprint Optimization**
  4. **Performance comparison: XDP vs. DPDK**

Meeting Notes

11/10/2022

  1. eBPF monitoring
    • routine status update
    • serverless discussion
  2. P4 group
    • routine status update
    • potential use case discussion

11/3/2022

  1. P4 group
    • routine status update
  2. eBPF monitoring
    • routine status update

10/27/2022

  1. eBPF monitoring
    • tech deep dive
    • routine status update
  2. P4 group
    • routine status update

10/20/2022

  1. eBPF monitoring
    • routine status update
      • eBPF map access tech deep dive (see the sketch after this list)
  2. P4 group
    • routine status update
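
The deep dive above centered on how XDP programs read and update eBPF maps from the datapath. As a reference point, here is a minimal, hypothetical sketch of that pattern; the map name, key layout and counter semantics are illustrative, not Mizar's actual maps:

```c
// Illustrative only: an XDP program reading/updating a counter in a hash map.
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 1024);
    __type(key, __u32);   /* e.g. a destination IPv4 address */
    __type(value, __u64); /* e.g. packets seen for that key */
} demo_counters SEC(".maps");

SEC("xdp")
int count_packets(struct xdp_md *ctx)
{
    __u32 key = 0; /* placeholder key */
    __u64 *val = bpf_map_lookup_elem(&demo_counters, &key);

    if (val)
        __sync_fetch_and_add(val, 1); /* atomic update from the datapath */
    return XDP_PASS;
}

char _license[] SEC("license") = "GPL";
```

User space would read the same map through the map fd (or `bpftool map dump`) to export the counters.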

10/13/2022

  1. P4 group status update
    • need more study on use case scenarios
  2. Mizar PR update
  3. eBPF monitoring use case discussion
    • TCP three-way handshake packet loss/delay scenario analysis (see the sketch below)
    • more suitable for the container/host scenario.
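
One common way to measure three-way handshake delay with eBPF, similar in spirit to BCC's tcpconnlat, is to timestamp the outgoing SYN at tcp_v4_connect and compute the delta when tcp_finish_connect runs on the client. The sketch below is an illustration of that approach under those assumptions, not the USTC demo code:

```c
// Hedged sketch of measuring TCP three-way handshake (connect) delay with
// kprobes, similar in spirit to BCC's tcpconnlat; this is NOT the USTC demo
// code. tcp_v4_connect/tcp_finish_connect are real kernel symbols; everything
// else (map name, reporting via bpf_printk) is illustrative.
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 4096);
    __type(key, u64);   /* struct sock * used as an opaque id */
    __type(value, u64); /* timestamp when the SYN was sent, ns */
} syn_ts SEC(".maps");

SEC("kprobe/tcp_v4_connect")
int BPF_KPROBE(on_connect, struct sock *sk)
{
    u64 key = (u64)sk;
    u64 ts = bpf_ktime_get_ns();

    bpf_map_update_elem(&syn_ts, &key, &ts, BPF_ANY); /* SYN about to go out */
    return 0;
}

SEC("kprobe/tcp_finish_connect")
int BPF_KPROBE(on_established, struct sock *sk)
{
    u64 key = (u64)sk;
    u64 *start = bpf_map_lookup_elem(&syn_ts, &key);

    if (start) {
        /* Handshake delay as seen by the client (SYN sent -> SYN-ACK handled). */
        u64 delta_us = (bpf_ktime_get_ns() - *start) / 1000;
        bpf_printk("tcp connect latency: %llu us", delta_us);
        bpf_map_delete_elem(&syn_ts, &key);
    }
    return 0;
}

char LICENSE[] SEC("license") = "GPL";
```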

09/29/2022

  1. Fang Jin shared a P4 study PPT.
  2. Deep dive into the eBPF ring buffer mechanism (https://nakryiko.com/posts/bpf-ringbuf/); see the sketch below.
  3. eBPF monitoring demo discussion
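
A minimal sketch of the ring buffer pattern from the linked post, assuming a libbpf-style program: reserve space, fill the event, submit; user space consumes it with `ring_buffer__poll()`. The event layout and the tracepoint attach point are illustrative choices, not taken from the demo:

```c
// Minimal BPF ring buffer sketch (kernel side), per the linked post.
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

struct event {
    u32 pid;
    u64 ts_ns;
};

struct {
    __uint(type, BPF_MAP_TYPE_RINGBUF);
    __uint(max_entries, 256 * 1024); /* ring size in bytes, power of two */
} events SEC(".maps");

SEC("tracepoint/sched/sched_process_exit")
int handle_exit(void *ctx)
{
    struct event *e = bpf_ringbuf_reserve(&events, sizeof(*e), 0);

    if (!e)
        return 0; /* ring is full: drop rather than block the kernel */

    e->pid = bpf_get_current_pid_tgid() >> 32;
    e->ts_ns = bpf_ktime_get_ns();
    bpf_ringbuf_submit(e, 0); /* user space consumes via ring_buffer__poll() */
    return 0;
}

char LICENSE[] SEC("license") = "GPL";
```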

09/22/2022

  1. Info sharing about the Intel P4 China Hackathon 2022 that the USTC team attended.
  2. Discussed the P4 compiler, P4-to-eBPF limitations, etc.

09/15/2022

  1. USTC eBPF monitor use case update
    • TCP three-way handshake delay demo
  2. Bpftrace exploration update
  3. XRP (https://www.usenix.org/conference/osdi22/presentation/zhong) study
  4. P4-related update and discussion (Jin)
  5. Mizar PR updated.

09/08/2022

  1. Mizar offloading PR discussion (Peng)
    • clarified what needs to be changed after the discussion.
  2. Bpftrace demo and discussion (Chunyu)
    • suspect it is limited for more customized work
    • will study more
  3. eBPF monitor use case demo - TCP connection delay detection (Sunzong)
  4. Connection tracking discussion
  5. P4/eBPF related update
    • P4C eBPF back-end accepts only P4_16 (PSA) code
    • Need better understanding of the gap between PSA and PISA
  6. Paper (https://ieeexplore.ieee.org/document/9522193) study
  7. Future action list discussion.

09/01/2022

  1. Mizar offloading PR update.
  2. USTC eBPF use case update
    • TCP three-way handshake delay demo close to ready
    • discussed other use case scenarios
  3. Expect to have an XRP (OSDI '22) paper sharing session in two weeks.

08/25/2022

  1. Mizar offloading PR update
    • CI/CD related issue fixed.
    • Requested using the existing CLI instead of adding new CLI commands
    • other minor change requests.
  2. USTC eBPF trial and demo.
  3. NetSeer (https://dl.acm.org/doi/abs/10.1145/3387514.3406214) paper study.
  4. T4P4S (https://ieeexplore.ieee.org/document/8850752) paper sharing.
  5. Follow up focus and plan discussion.

08/18/2022

  1. Dapper (https://dl.acm.org/doi/pdf/10.1145/3050220.3050228) study.
  2. USTC eBPF demo (process exit monitoring).
  3. Mizar offload PR status update.
  4. P4 switch study status discussion

08/11/2022

  1. Trumpet (https://dl.acm.org/doi/pdf/10.1145/2934872.2934879) study.
  2. eBPF-based observability exploration and plan update.
  3. FlexGate (https://dl.acm.org/doi/pdf/10.1145/3343180.3343182) study.
  4. Arion DP project introduction and discussion.

08/04/2022

  1. Flow table compression paper (https://ieeexplore.ieee.org/document/9063447) study.
  2. SpiderMon paper (https://www.usenix.org/conference/nsdi22/presentation/wang-weitao-spidermon) study.
  3. Mizar offload project Q&A.

07/28/2022

  1. ElasticSketch (https://yangtonghome.github.io/uploads/SigcommElastic.pdf) paper study.
  2. Gandalf (https://www.usenix.org/system/files/nsdi20-paper-li.pdf) paper study.
  3. Community feedback on eBPF-based monitoring/observability.
    • moving eBPF monitoring from static toward dynamic monitoring capability
    • current stage and scale of eBPF monitoring;
    • focus, roadmap, etc. for building eBPF monitoring capability.

06/23/2022

  1. Qianyu updated his recent study of papers and open source work related to eBPF observability.
    • no high-quality papers found;
  2. Yang Peng's recent study of related open source projects.
    • netdata study.
      • netdata uses tracepoints, kprobes and trampolines to collect data.
      • uprobes are currently not supported.
  3. Community discussion on project proposals.
    • distributed eBPF based tracing on hypervisor?
    • dpdk+eBPF, anything new?
    • self-defined observability objective, dynamic task defining framework?
    • industrial leading cloud monitoring framework study?

06/16/2022

  1. USTC P4 group presented a P4 intro and their experience using P4 in their project
    • very informative session.
  2. Qianyu (re)presented the Arion health framework to a bigger audience.
    • discussion focused on innovation points.
    • expect more feedback after further digestion.
  3. Jin presented a potential P4 project under the Arion project.
    • need more sync-up with the Arion team about what has been done and where the areas to collaborate could be.
    • need to narrow down to a specific spot for innovation.

06/09/2022

  1. USTC reported on their related open source project survey.
    • spotted various interesting ones, including netdata, foniod, beats and packet-agent.
    • Wei suggested looking into Pixie with some use cases for a deep dive.
    • Liguang mentioned FW used netdata in the Alcor project; could check for more first-hand details.
  2. Qianyu presented a revised Chinese-language version of the Arion health framework PPT.
  3. Scheduled a USTC P4 work sync-up meeting for next week.

06/02/2022

  1. Wei proposed the Arion Health framework and had a detailed discussion with USTC.
  2. USTC to follow up with a related open source project survey and a USTC P4 group sync-up.

05/12/2022

  1. Paper draft walkthrough and discussion.
  2. General technical discussion about cloud networking and exploration of potential pain points.

04/28/2022

  1. Latest test results update and discussion.
  2. Short follow-up discussion about the observability proposal presented last week.

04/21/2022

  1. Latest offloading perf test updates.
  2. FW explained relevant observability requirements in the current project and potential innovation points for collaboration.
    • seems to need more discussion.

04/14/2022

  1. Latest offloading test result and paper status update.
  2. Detailed discussion of four papers:
    • ViperProbe;
    • eZTrust;
    • From XDP to Socket: Routing of packets beyond XDP with BPF;
    • Zero Downtime Release.
  3. Proposed focusing on XDP/eBPF-based observability.
  4. Project scope and requirements to be discussed next week.

04/04/2022

Ad-hoc paper discussion session.

  1. Xinfeng presented three eBPF probing-related papers/open source projects: ViperProbe, eZTrust and SHOWAR.
    • plan to deep dive more into ViperProbe and eZTrust;
  2. Feng studied several eBPF-related services:
    • eBPF iptables;
    • eBPF in ovs;
    • DDoS mitigation in Cloudflare;
    • Katran in FB;
    • eBPF based observability in Netflix;
    • Cilium.
  3. Peng presented his current study of eBPF and distributed DNN training.
  4. Qianyu discussed his study of various topics related to XDP/eBPF.
    • several papers/talks were discussed; plan to deep dive into the following two papers:
      • From XDP to Socket: Routing of packets beyond XDP with BPF;
      • Zero Downtime Release.

03/31/2022

  1. Latest perf test results discussion.
  2. Xinfeng's eBPF probe cost demo with revised test code.
    • probing cost was minimal in the demo;
  3. USTC did an XDP/eBPF-related survey study; agreed to have an extra ad-hoc meeting next week for detailed discussion.

03/17/2022

  1. Feng presented latest perf test progress.
    • suggested making results more presentable;
    • suggested more tests of transit_agent XDP cost;
  2. Xinfeng's eBPF probe cost experiment.
    • suggested doing it in a more accurate way;
  3. Study of XDP/eBPF for probing (AliCloud) and idea discussion.
    • discussed potential use case scenarios.
  4. XDP/eBPF on overlay network R&D discussion.

03/03/2022

  1. Feng presented latest perf test progress.
    • existing test results;
    • follow-up test plan;
    • paper content and structure discussion.
  2. Xinfeng demonstrated examples in using eBPF in monitoring.
  3. Peng updated his latest study of an eBPF/Prometheus use case and an eBPF-on-MPTCP paper.
  4. eBPF on overlay network R&D discussion.

02/24/2022

  1. USTC offloading status updates.
    • Not much progress due to a lab environment issue (machine was hacked).
  2. Discussed bcc tools and Prometheus with eBPF
    • will look more into AliCloud use cases with eBPF/Prometheus and discuss next week.
  3. Discussed meeting frequency.
    • will be bi-weekly after next week's meeting.

02/17/2022

  1. USTC offloading status updates.
    • Latest perf test setups and results discussion.
    • FW suggested setting up IRQ affinity first in the perf test to take advantage of multiple cores;
    • For issues in the xdpgeneric mode test (low packet rate), had several suggestions; need to try pushing up the rate.
  2. Discussion about the deliverables of the collaboration work.
    • Code to Mizar branch;
    • Test scripts/results to perf repo;
    • Discussed potential paper write up.
  3. Discussed follow up R&D ideas.

01/13/2022

  1. USTC offloading status sync up.
    • discussed current benchmark test issues.
      • XDP offload vs. XDP generic in various tests; have good results.
      • xdpgeneric mode packet processing rate is low; needs further investigation.
      • need to try increasing the packet generation rate.

01/06/2022

  1. USTC offloading status sync up.
    • demonstrated and discussed current test results.
    • should USTC continue working on statistics as planned?
      • probably not, given that the NIC's capability in offload mode is limited.
    • project output discussion.
    • misc issues.
  2. next phase R&D ideas and timeline discussion.

12/30/2021

  1. USTC offloading status sync up.
    • had initial tests of XDP offload mode vs generic mode for xdp1 function.
    • had a packet loss issue during testing; isolated to a NIC hardware issue.
    • continue with more perf tests, focusing on the effects of table size, load and CPU cores.
  2. Next phase R&D idea discussion and sync up.

12/23/2021

Canceled due to holiday.

12/16/2021

  1. USTC Mizar offloading status sync up.
    • discussed possible cross-VPC communication support; decided to leave it to the upper layer.
    • discussed potential data structure optimization.
  2. Next phase R&D idea discussion and sync up.
    • FW suggested some related papers to study.
    • Continue discussion next time.

12/09/2021

  1. USTC Mizar offloading status sync up.
    • current status check.
    • offloading demo of the existing code (with manual eBPF map changes).
  2. XDP/eBPF-based observability study update.
  3. INT study update.
  4. XDP/eBPF-related R&D idea discussion.

12/02/2021

  1. USTC Mizar offloading status sync up.
    • latest code check-up.
    • discussed current status and follow up plan.
    • discussed potential test items.
  2. XDP/eBPF-related R&D idea discussion.

11/25/2021

Canceled due to holiday.

11/18/2021

  1. USTC Mizar offloading status sync up.
    • walkthrough of the last two weeks' progress.
    • discussion of the plan for the following two weeks.
  2. Follow up R&D idea discussion.

11/11/2021

  1. USTC Mizar offloading status sync up.
    • bpf_set_link_xdp_fd issue solved.
      • it works in offload mode; the previous issue was due to a library mismatch (see the attach sketch after this list).
    • discussed existing development status and limitations.
      • options for potentially moving more logic from xdp2 to xdp1.
      • options for converting BPF_MAP_TYPE_LPM_TRIE in offloaded XDP code.
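
For reference, a hedged user-space sketch of the attach path discussed above, assuming the (older) libbpf API: the program is marked for offload via its ifindex before loading, then attached with bpf_set_link_xdp_fd() and XDP_FLAGS_HW_MODE. The object path, program name and error handling are placeholders, not Mizar's actual loader code:

```c
// Hedged sketch: loading and attaching an XDP program in HW offload mode.
#include <net/if.h>
#include <linux/if_link.h>
#include <bpf/libbpf.h>
#include <bpf/bpf.h>

int attach_offloaded(const char *ifname, const char *obj_path)
{
    int ifindex = if_nametoindex(ifname);
    struct bpf_object *obj = bpf_object__open_file(obj_path, NULL);

    if (!ifindex || !obj)
        return -1;

    /* "xdp_prog" is a placeholder program name, not Mizar's. */
    struct bpf_program *prog = bpf_object__find_program_by_name(obj, "xdp_prog");
    if (!prog)
        return -1;

    /* Mark the program for offload so the verifier/JIT target the NIC. */
    bpf_program__set_ifindex(prog, ifindex);

    if (bpf_object__load(obj))
        return -1;

    /* The attach call discussed in the notes; XDP_FLAGS_HW_MODE requests offload. */
    return bpf_set_link_xdp_fd(ifindex, bpf_program__fd(prog), XDP_FLAGS_HW_MODE);
}
```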

11/04/2021

  1. USTC Mizar offloading status sync up.
    • discussed the bpf_set_link_xdp_fd function not working in offload mode in the tested examples; USTC suspected this could be a potential road blocker.
      • FW and USTC to tackle this and get it solved soon.
    • discussed the test scenarios to demonstrate the potential benefit.
    • discussed existing development status and limitations.
      • Need to add more functionality into the offloaded XDP code if possible.
      • Focusing on this effort in the coming weeks.

10/28/2021

Canceled due to USTC schedule conflict.

10/21/2021

  1. USTC Mizar offloading status sync up.
    • discussed current status; have a POC with minimal XDP offloading working
    • discussed the follow-up roadmap
  2. Mizar QOS feature demo and discussion.

10/14/2021

  1. USTC Mizar offloading status sync up.
    • discussed the hybrid environment (physical + VM) issue USTC had.
      • FW suggested using the other branch with the "eth0" fixes for now.
      • FW will try the hybrid environment locally.
      • discussed other minor issues.
    • USTC targets having the first PoC in two weeks.
  2. Discussed future collaboration approaches.

10/07/2021

  1. USTC Mizar offloading status sync up.
    • discussed issues USTC had when working with the latest Mizar 0.9.
    • FW suggested USTC work with a local image when trying out their code.

09/30/2021

  1. USTC Mizar offloading status update.
    • offloading part: discussed the limitations we face and the transit_xdp code logic in question.
      • will continue the deep dive into the code logic.
    • non-offloading part: discussed the current approach and status.
      • FW made several suggestions.
    • test environment setup status update and discussion
      • USTC faces the predictable naming issue; FW provided the current solution.
  2. Had discussion about technology trend in SmartNic.

09/23/2021

  1. Discussed USTC's "new" Mizar offloading approach and POC plan.
    • discussed the "new" approach:
      • "rewrite/reconstruct" the offload/non-offload logic for similar functionality instead of directly "refactoring" the existing code.
    • discussed current POC plan and progress.
  2. Had discussion about XDP related research.
    • USTC explored latest XDP research papers.
    • Discussed XDP challenges, limitations and research ideas.
    • Will continue discussion.

09/16/2021

  1. Had discussion about the latest USTC Mizar offloading refactoring progress.
    • discussed current status and issue.
    • FW suggested prioritizing the end-to-end POC along with optimization.
    • USTC to come up with a POC plan.
  2. Discussed several recent SIGCOMM papers USTC read through.
    • USTC looked to see if there is something that could be applied to the project.
    • FW suggested looking into open source code related to these papers.

09/09/2021

Canceled due to USTC taking one week's summer break.

09/02/2021

  1. USTC continues work on Mizar offloading-related code refactoring.
    • discussed current status and issues.
    • USTC pushing to have a POC soon.
  2. USTC did a firmware atomic-write performance test with ping traffic.
    • FW suggested using netperf and iperf as well.
  3. Discussed various topics following study of recent conference talks (e.g., eBPF Summit).
    • FW suggested a deep dive into several talks/areas of interest.
  4. FW suggested USTC document the detailed tryout steps and findings of the various experiments.

08/26/2021

  1. USTC tried out XDP_REDIRECT in various scenarios (see the sketch after this list).
    • works in xdpgeneric.
    • works in xdpdrv for the Intel NIC (igb driver, but on kernel 5.11).
    • not working in xdpdrv for the Netronome NIC.
      • confirms what FW verified before.
  2. USTC continues deep diving into HW offloading code refactoring.
    • discussed current status and issues.
      • aiming to get the refactoring POC working end-to-end first, then analyze and optimize.
    • discussed current Mizar status in the related area.
      • FW to further confirm existing Mizar limitations.
  3. Discussed various issues in k8s, etc.; will recheck next week.
  4. Misc.
    • FW confirms per-packet map updates were recently added for statistics-related logic;
    • FW listed the eBPF Summit 2021 videos.
    • FW suggested checking on various topics mentioned previously, including open source firmware and other research areas besides HW offloading.
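
As context for item 1, a minimal XDP_REDIRECT sketch using a device map; whether the redirect succeeds depends on the attach mode and NIC driver, which is exactly what the xdpgeneric/xdpdrv/offload tests above compare. The map name and slot index are placeholders:

```c
// Illustrative XDP_REDIRECT via a device map (BPF_MAP_TYPE_DEVMAP).
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
    __uint(type, BPF_MAP_TYPE_DEVMAP);
    __uint(max_entries, 64);
    __type(key, __u32);   /* slot index */
    __type(value, __u32); /* target ifindex, filled in from user space */
} tx_ports SEC(".maps");

SEC("xdp")
int redirect_all(struct xdp_md *ctx)
{
    /* Redirect every packet out through the device stored in slot 0;
     * returns XDP_REDIRECT on success, XDP_ABORTED if the slot is empty. */
    return bpf_redirect_map(&tx_ports, 0, 0);
}

char _license[] SEC("license") = "GPL";
```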

08/19/2021

  1. USTC tested various combinations of loading multiple XDP programs on one interface.
  2. USTC presented a refactoring approach for HW offloading.
    • USTC did the needed POC function tests last week and verified this approach is potentially applicable;
    • discussed the customized firmware's limitation on eBPF map updates and how we could deal with it;
      • USTC will test and verify the atomic update function and its performance;
      • FW will investigate the existing logic for per-packet map updates;
      • FW suggested USTC check the open source nic-firmware;
  3. Discussed tail call-related questions raised in the Slack channel.
  4. Discussed related potential research areas.

08/12/2021

  1. USTC tested multiple XDP programs on a single NIC.
    • verified that multiple XDP programs in "hw" mode are not supported on the 5.11 kernel.
  2. Discussed various follow-up options.
    • refactor the existing logic (item 4).
    • check the open source nic-firmware.
    • try out multiple XDP programs in different modes (just to verify).
  3. Discussion of the customized firmware's ebpf_map_update usage limitation.
    • confirmed it works as expected; last week's failure was due to another code logic issue.
  4. USTC continues the HW offloading-related code deep dive.
    • Discussed the refactoring proposal USTC presented
      • USTC will continue with the POC tryout and potentially have some results next week.
    • Some questions from the code deep dive
      • posted in the Slack channel.
  5. Discussed potential R&D options.

08/05/2021

  1. USTC's SmartNIC firmware test "random" crash issue solved.
    • caused by a software version incompatibility in the system.
  2. USTC tested multiple XDP programs on a single NIC.
    • xdpgeneric mode works; hw (offload) mode fails so far.
    • [USTC] will continue investigating the root cause.
  3. Discussion of the customized firmware's ebpf_map_update usage limitation.
    • [USTC/FW] will continue the discussion on Slack with more details.
  4. USTC continues the bouncer/divider code deep dive.
    • [USTC] will collect questions for further discussion.
  5. Discussed future work items.
    • [USTC/FW] will collect more detailed items for further discussion.
  6. [USTC] DPDK/XDP comparison postponed for now.

07/29/2021

Canceled due to USTC busy on paper submission.

07/22/2021

  1. USTC kubeadm cluster install succeeded;
  2. USTC looked into a master's thesis on XDP/container perf testing and presented a report; will continue looking at DPDK and other tests; suggested looking at VPP as well;
  3. USTC testing various eBPF functions for offloading on the SmartNIC;
    • "random" crashes; FW suggested locating the minimal working set and finding the root causes;
    • FW recommended paying attention to the extra constraints on writing offloaded code;
  4. USTC continues testing multiple XDP programs on a single interface in offload mode;
  5. FW expects to present the QoS enhancement design next week.

07/15/2021

  1. USTC got Netronome custom firmware up and running.
    • will continue exploring its usability.
  2. Refactoring bouncer/divider code:
    • FW verified the latest XDP feature (support for multiple XDP programs on one interface) on Ubuntu 21.04 (5.11.0);
      • will continue exploring its capability
    • USTC continues exploring various approaches:
      • try creating an environment which utilizes bouncer/divider and verify the traffic flows as expected;
      • rewrite bouncer/divider under the current SmartNIC constraints;
      • refactor the existing code;
  3. Group discussion of various multi-cluster use case scenarios (HQ, Edge and Mizar).

07/08/2021

  1. HQ discussion about multi-level QoS. It would be helpful to have a 1-pager doc of use cases.
    • Mada is writing doc for this, he'll sync up with us next week.
  2. USTC got firmware from Netronome and ran into issues installing it.
    • No progress this week due to finals; will continue next week.
  3. Refactoring bouncer/divider code status:
    • FW investigating the new XDP feature (support for multiple XDP programs on one interface); creating an environment which utilizes bouncer/divider;
    • USTC exploring various approaches:
      • try creating an environment which utilizes bouncer/divider and verify the traffic flows as expected;
      • rewrite bouncer/divider under the current SmartNIC constraints;
      • refactor the existing code;
  4. [FW] Deployment documentation updates.

07/01/2021

  1. HQ discussion about multi-level QoS. It would be helpful to have a 1-pager doc of use cases.
    • Mada will give us a doc of use cases. We need to follow up with them about the doc.
  2. USTC got firmware from Netronome and ran into issues installing it.
    • Suggested a clean install. Follow up on Slack.
  3. Refactoring bouncer/divider code status:
    • Needs further investigation.

06/23/2021

  1. HQ discussion about multi-level QoS. It would be helpful to have a 1-pager doc of use cases.
    • Mada will give us a doc of use-cases.
  2. USTC question on Slack regarding limitations of NN XDP support.
    • Workaround: implementing required BPF functionality not supported by NN will increase the XDP program size and may hit the size limit. Needs investigation.
    • USTC/Wei is investigating this. Can we refactor bouncer/divider code?

06/17/2021

  1. Questions about kind-setup.
  2. Can XDP program on transit-agent on pod veth execute a tail-call to jump and execute an EDT program running on Netronome NIC?
    • No, this is not possible. We cannot add the ID of an offloaded XDP program to the jump table (see the sketch after this list).
  3. Discussed questions from slack channel.
  4. DPDK discussion to continue over slack channel.
  5. USTC received the Netronome cards and switch.
    • Phu: We cannot update offloaded BPF maps from the host without special firmware.
    • USTC: Need to request custom firmware from the NN support team (open a support ticket)
  6. Does Netronome have any unknown limitations that prevent bouncer/divider offload?
    • Owner Wei: Needs investigation.
      • Check if bouncer / divider functionality does the following:
        • Uses any XDP actions not supported by NN.
        • Uses any BPF helper functions that are not supported by NN.
  7. FW (Wei) working on determining where egress XDP support is currently in linux kernel.
    • XDP is an actively developed feature in the Linux kernel; need to better understand where they are with egress XDP support before asking for this feature.
  8. USTC trying to understand why the IP address for host-ep is the same as the eth0 IP/32. Why do we do this?
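
To make item 2's conclusion concrete, here is a hedged sketch of the tail-call pattern in question: the host-side XDP entry program jumps through a BPF_MAP_TYPE_PROG_ARRAY, and per the discussion the ID/fd of an offloaded program cannot be placed in that jump table, so the EDT program would have to stay on the host side. Program and map names, and the slot index, are illustrative:

```c
// Tail-call through a program array from a host-side XDP program (sketch).
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

#define EDT_PROG_SLOT 0 /* hypothetical index in the jump table */

struct {
    __uint(type, BPF_MAP_TYPE_PROG_ARRAY);
    __uint(max_entries, 8);
    __uint(key_size, sizeof(__u32));
    __uint(value_size, sizeof(__u32)); /* program fds, populated from user space */
} jmp_table SEC(".maps");

SEC("xdp")
int transit_agent_entry(struct xdp_md *ctx)
{
    /* Jump to the EDT program if its slot is populated; falls through otherwise. */
    bpf_tail_call(ctx, &jmp_table, EDT_PROG_SLOT);
    return XDP_PASS;
}

char _license[] SEC("license") = "GPL";
```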

06/10/2021

  1. Discussed limitations of tail-call for statistics
    • Vinay: We should have a separate XDP program that does statistics gathering (separation of concerns design principle); see the sketch after this list.
    • Phu: The statistics gathering program should not have to deal with packet forwarding decisions.
    • Discuss this offline.
  2. DPDK performance test update - USTC is working on it.
  3. Can Netronome or another solution be used to offload transit-switch (bouncer) and transit-router (divider) functionality?
    • FW/USTC. USTC has not received the card yet - ETA next week.
  4. Questions about kind-setup.
    • Owner Vinay: Create a "Dev tips and tricks" document.
    • Owner Vinay: Cleanup existing docs on how to run Mizar in a) kind-setup , b) AWS Ubuntu 18.04.05.
  5. Can XDP program on transit-agent on pod veth execute a tail-call to jump and execute an EDT program running on Netronome NIC?
    • Phu is working on it; laptop broke down. ETA next week.
  6. Does Netronome have any unknown limitations that prevent bouncer/divider offload?
    • Phu: We cannot offload the ingress transit XDP program right now to test offload-ability of the bouncer/divider-only functionality.
    • The code needs refactoring because transit XDP does both ingress and bouncer/divider functionality.
      • XDP_REDIRECT / bpf_tail_call usage in the code.
    • Owner Wei: Needs investigation.
      • Check if bouncer / divider functionality does the following:
        • Uses any XDP actions not supported by NN.
        • Uses any BPF helper functions that are not supported by NN.
  7. FW (Wei) working on determining where egress XDP support is currently in linux kernel.
    • Investigating. Ask kernel community where they are with adding support.
    • Should we get involved / drive this?
  8. Qian: Feature request for Mizar in the scope of the next release.
    • Qian will create an issue for it in the Mizar repo.
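
A hedged sketch of the "separate statistics XDP program" idea from item 1: a program that only counts and never makes forwarding decisions, using a per-CPU array to avoid cross-CPU contention. The counter layout is hypothetical:

```c
// Statistics-only XDP program sketch (separation of concerns).
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
    __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
    __uint(max_entries, 1);
    __type(key, __u32);
    __type(value, __u64); /* packet count; real stats would use a struct */
} stats SEC(".maps");

SEC("xdp")
int stats_only(struct xdp_md *ctx)
{
    __u32 key = 0;
    __u64 *count = bpf_map_lookup_elem(&stats, &key);

    if (count)
        (*count)++; /* per-CPU slot, so a plain increment is race-free */

    /* Always pass: forwarding decisions belong to a different program. */
    return XDP_PASS;
}

char _license[] SEC("license") = "GPL";
```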

06/03/2021

  1. Netronome docs say host does not have access to offloaded maps. Does this affect network policy?
    • Vinay: It should not as long as policy code is able to autonomously make accept/deny decision, and delegate to host if it cannot decide.
    • Resolved. Wei tried it and we can make changes from a usermode program. Phu responded in detail.
  2. Issue in slack: https://mizar-group.slack.com/archives/CMNECC8JF/p1620553802093900
    • USTC will retry after Phu's fix.
    • Resolved. USTC and FW tried it and works.
  3. Can XDP program on transit-agent on pod veth execute a tail-call to jump and execute an EDT program running on Netronome NIC?
    • Phu is working on it. We may need 3 XDP programs to make this work. No update yet because heads down on 5/30 release work. ETA next week.
  4. Can Netronome or another solution be used to offload transit-switch (bouncer) and transit-router (divider) functionality?
    • FW/USTC. USTC waiting on card - ETA next week.
  5. Does Netronome have any unknown limitations that prevent bouncer/divider offload?
    • FW. No obvious limitations found; needs a more thorough comb-through.
  6. FW (Wei) working on determining where egress XDP support is currently in linux kernel.
  7. What is the concern YZ has with TC solution?

05/27/2021

  1. Netronome docs say host does not have access to offloaded maps. Does this affect network policy?
    • Vinay: It should not as long as policy code is able to autonomously make accept/deny decision, and delegate to host if it cannot decide.
  2. Issue in slack: https://mizar-group.slack.com/archives/CMNECC8JF/p1620553802093900
    • USTC will retry after Phu's fix.
  3. Best way to get started with XDP (outside of Mizar)
  4. Can XDP program on transit-agent on pod veth execute a tail-call to jump and execute an EDT program running on Netronome NIC?
    • Phu is working on it. We may need 3 XDP programs to make this work. No update yet because heads down on 5/30 release work.
  5. Can Netronome or another solution be used to offload transit-switch (bouncer) and transit-router (divider) functionality?
    • FW/USTC.
  6. Does Netronome have any unknown limitations that prevent bouncer/divider offload?
    • FW. No obvious limitations found; needs a more thorough comb-through.

05/20/2021

  1. Which Netronome (25G / 40G / another alternative) NICs should USTC buy? How many?
  2. Can XDP program on transit-agent on pod veth execute a tail-call to jump and execute an EDT program running on Netronome NIC?
    • Phu is working on it. We may need 3 XDP programs to make this work... investigating.
  3. Can Netronome or another solution be used to offload transit-switch (bouncer) and transit-router (divider) functionality?
    • FW/USTC.
  4. Does Netronome have any unknown limitations that prevent bouncer/divider offload?
    • FW. No obvious limitations found; needs a more thorough comb-through.

05/13/2021

  1. Can XDP program on transit-agent on pod veth execute a tail-call to jump and execute an EDT program running on Netronome NIC?
    • USTC: They think this is not possible.
    • Phu will try it out. As of now, Netronome NICs don't support tail-call or XDP_REDIRECT.
  2. Can Netronome or another solution be used to offload transit-switch (bouncer) and transit-router (divider) functionality?
    • FW/USTC investigating
  3. Does Netronome have any unknown limitations that prevent bouncer/divider offload?
    • FW
  4. Create a wiki to track collaboration discussions and action items (AIs).
    • Vinay will do this.

04/29/2021

  1. What exactly does it mean when Netronome does not support XDP_REDIRECT? - Owner: Vinay/Phu
    • Answer: Phu determined that it does not work in either direction:
      • Egress traffic cannot be XDP_REDIRECT'd to Netronome
      • XDP program offloaded to Netronome cannot XDP_REDIRECT traffic to a Pod interface veth
  2. What exactly does Netronome offload support on egress path?
    • Nothing.
  3. Can we redirect to the Rx of the H/W NIC, run XDP encapsulation during Rx, and then actually put such a packet on the Tx path?
    • Answer: USTC investigated and the answer is this does not work.
  4. Understand in detail how exactly XDP_REDIRECT works. - Vinay / USTC
  5. Do two endpoints on same droplet (node) communicate directly without going to bouncer?
    • Answer: Phu investigated - the first packet goes to the bouncer; subsequent packets go directly to the host that contains the destination Pod (endpoint)