External Collaboration with University of Science and Technology China
w-yue edited this page Nov 11, 2022
- **Mizar XDP Hardware Offload**
- **Mizar XDP Statistics Enhancement**
- **Mizar Transit Agent Footprint Optimization**
- **Performance Comparison: XDP vs DPDK**
- eBPF monitoring
- routine status update
- serverless discussion
- P4 group
- routine status update
- potential use case discussion
- P4 group
- routine status update
- eBPF monitoring
- routine status update
- eBPF monitoring
- tech deep dive
- routine status update
- P4 group
- routine status update
- eBPF monitoring
- routine status update
- eBPF map access tech deep dive
- routine status update
- P4 group
- routine status update
- P4 group status update
- need more study on use case scenarios
- Mizar PR update
- offloading function path walkthrough;
- Netronome offloading XDP/eBPF coding limitation discussion
- PR approved(https://github.com/CentaurusInfra/mizar/pull/658)
- eBPF monitoring use case discussion
- tcp three-way handshake packet loss/delay scenario analysis
- more suitable for container/host scenario.
- Fang Jin P4 study ppt sharing
- Deep dive eBPF ring buffer mechanism(https://nakryiko.com/posts/bpf-ringbuf/)
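For context on the ring buffer deep dive: the BPF ringbuf exposes a reserve/submit API where a record becomes visible to the consumer only after it is committed, which preserves ordering across producers. A toy Python model of that flow (illustrative only; the real implementation is a lock-free byte buffer in the kernel, see the linked post):

```python
# Toy model of the BPF ring buffer's reserve/submit API. Illustration only:
# the real kernel ringbuf is a lock-free MPSC byte buffer.
class ToyRingBuf:
    def __init__(self, size):
        self.size = size          # capacity in bytes
        self.buf = {}             # offset -> [committed_flag, payload]
        self.prod = 0             # producer position (monotonic byte offset)
        self.cons = 0             # consumer position

    def reserve(self, nbytes):
        # Fail (like bpf_ringbuf_reserve returning NULL) when full.
        # Returns a record offset (may be 0), or None on failure.
        if self.prod + nbytes - self.cons > self.size:
            return None
        off = self.prod
        self.buf[off] = [False, bytearray(nbytes)]
        self.prod += nbytes
        return off

    def submit(self, off):
        # Mark the record visible to the consumer (bpf_ringbuf_submit).
        self.buf[off][0] = True

    def consume(self):
        # Consumer reads records in order, stopping at the first record
        # that is reserved but not yet committed -- this is how ordering
        # is preserved across concurrent producers.
        out = []
        for off in sorted(self.buf):
            committed, payload = self.buf[off]
            if not committed:
                break
            out.append(bytes(payload))
            self.cons = off + len(payload)
            del self.buf[off]
        return out
```

Note how an uncommitted record blocks later (already committed) records from being consumed, mirroring the ringbuf semantics described in the post.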
- eBPF monitoring demo discussion
- Info sharing about the Intel P4 China Hackathon 2022, which the USTC team attended.
- Discussed the P4 compiler, P4-to-eBPF limitations, etc.
- USTC eBPF monitor use case update
- TCP three-way handshake delay demo
- Bpftrace explore update
- Chunyu discussed with the bpftrace developers/community and confirmed its limitations in user-space communication (https://github.com/iovisor/bpftrace/discussions/2354)
- XRP(https://www.usenix.org/conference/osdi22/presentation/zhong) study
- P4 related update and discussion(Jin)
- Mizar PR updated.
- Mizar offloading PR discussion(Peng)
- clarified what needs to be changed after discussion.
- Bpftrace demo and discussion(Chunyu)
- suspect it is limited for more customized work
- will study more
- eBPF monitor use case demo - tcp connection delay detection(Sunzong)
- connection tracking discussion
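The TCP connection delay detection demo boils down to timestamping each handshake step per connection and reporting the gaps; the actual demo hooks kernel TCP events with eBPF, but the aggregation logic can be sketched in plain Python over pre-collected events (all names here are illustrative):

```python
# Sketch of the 3-way-handshake delay idea: record the timestamp of each
# handshake step per connection and report the gaps once all three steps
# are seen. Events here are pre-collected (ts_ms, step, conn_id) tuples;
# in the real demo they come from eBPF probes on kernel TCP events.
def handshake_delays(events):
    """Return {conn_id: (syn_to_synack_ms, synack_to_ack_ms)} for every
    connection whose SYN, SYN-ACK and ACK have all been observed."""
    seen = {}      # conn_id -> {step: ts}
    delays = {}
    for ts, step, conn in events:
        seen.setdefault(conn, {})[step] = ts
        s = seen[conn]
        if {'SYN', 'SYN-ACK', 'ACK'} <= s.keys():
            delays[conn] = (s['SYN-ACK'] - s['SYN'], s['ACK'] - s['SYN-ACK'])
    return delays
```

An abnormally large first gap points at server-side delay or SYN loss/retransmit; a large second gap points at the client or return path, which is the kind of scenario analysis mentioned above.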
- P4/eBPF related update
- The P4C eBPF back end accepts only code written in P4_16 (PSA)
- Need a better understanding of the gap between PSA and PISA
- Paper(https://ieeexplore.ieee.org/document/9522193) study
- Future action list discussion.
- Mizar offloading PR update.
- USTC eBPF use case update
- TCP three-way handshake delay demo is close to ready
- discussed other use case scenarios
- Expect to have XRP(OSDI'22) paper sharing session in two weeks.
- Mizar offloading PR update
- CI/CD related issue fixed.
- Request for using existing CLI instead of adding new CLI commands
- other minor change requests.
- USTC eBPF trial and demo.
- NetSeer(https://dl.acm.org/doi/abs/10.1145/3387514.3406214) paper study.
- T4P4S(https://ieeexplore.ieee.org/document/8850752) paper sharing.
- Follow up focus and plan discussion.
- Dapper(https://dl.acm.org/doi/pdf/10.1145/3050220.3050228) study.
- USTC eBPF demo(process exit monitoring).
- Mizar offload PR status update.
- P4 switch study status discussion
- Trumpet(https://dl.acm.org/doi/pdf/10.1145/2934872.2934879) study.
- eBPF based observability exploring and plan update.
- FlexGate(https://dl.acm.org/doi/pdf/10.1145/3343180.3343182) study.
- Arion DP project introduction and discussion.
- Flow table compression paper(https://ieeexplore.ieee.org/document/9063447) study.
- SpiderMon paper(https://www.usenix.org/conference/nsdi22/presentation/wang-weitao-spidermon) study.
- Mizar offload project Q&A.
- ElasticSketch(https://yangtonghome.github.io/uploads/SigcommElastic.pdf) paper study.
- Gandalf(https://www.usenix.org/system/files/nsdi20-paper-li.pdf) paper study.
- Community feedback in terms of eBPF based monitoring/observability.
- eBPF static monitoring toward dynamic monitoring capability
- eBPF monitoring current stage, scale;
- eBPF monitoring capability construct focus, roadmap, etc.
- Qianyu updated recent study on paper and open source related eBPF observability.
- no high quality paper found;
- Yang Peng's recent study on related open source projects.
- netdata study.
- netdata uses tracepoints, kprobes and trampolines to collect data.
- uprobes are currently not supported.
- Community discussion on project proposals.
- distributed eBPF based tracing on hypervisor?
- dpdk+eBPF, anything new?
- self-defined observability objective, dynamic task defining framework?
- industrial leading cloud monitoring framework study?
- USTC P4 group presented P4 intro and their experience in using P4 in their project
- very informative session.
- Qianyu (re)presented Arion health framework to bigger audience.
- discussion focus on innovation points.
- expect to have more feedback with more digests.
- Jin presented potential P4 project under Arion project.
- need more sync-up with the Arion team about what has been done and where the areas to collaborate could be.
- need to narrow down further to a specific spot for innovation.
- USTC reported related open source project survey.
- spotted various interesting ones, including netdata, foniod, beats and packet-agent.
- Wei suggested looking into Pixie with some use cases for deep diving.
- Liguang mentioned FW used netdata in the Alcor project; could check for more first-hand details.
- Qianyu presented revised Chinese version Arion health framework ppt.
- scheduled USTC P4 work sync up meeting next week.
- Wei proposed Arion Health framework and had detailed discussion with USTC.
- USTC to follow up with related open source project survey and USTC P4 group sync up.
- Paper draft walk through and discussion.
- Generic technical discussion about cloud networking, potential pain points exploration.
- Latest test results update and discussion.
- Short follow up discussion about observability proposal presented last week.
- Latest offloading perf test updates.
- FW explained relevant observability requirement in current project and potential innovation points for collaboration.
- seems to need more discussion.
- Latest offloading test result and paper status update.
- Detailed discussion of four papers:
- ViperProbe;
- eZTrust;
- From XDP to Socket: Routing of packets beyond XDP with BPF;
- Zero Downtime Release.
- Propose to focus on XDP/eBPF based observability.
- Project scope and requirement to be discussed next week.
Ad-hoc paper discussion session.
- Xinfeng presented three eBPF probing related paper/open source: ViperProbe, eZTrust and SHOWAR.
- plan to deep dive more into ViperProbe and eZTrust;
- Feng studied several eBPF related services:
- eBPF iptables;
- eBPF in ovs;
- DDoS mitigation in Cloudflare;
- Katran in FB;
- eBPF based observability in Netflix;
- Cilium.
- Peng presented current study in eBPF and distributed DNN training.
- Qianyu discussed his study in various topics related to XDP/eBPF.
- several papers/talks are discussed, plan to deep dive more into following two papers:
- From XDP to Socket: Routing of packets beyond XDP with BPF;
- Zero Downtime Release.
- Latest perf test results discussion.
- Xinfeng's eBPF probe cost demo with revised test code.
- probing cost is minimal in demo;
- USTC had XDP/eBPF related survey study, agreed to have an extra ad-hoc meeting next week for detailed discussion.
- Feng presented latest perf test progress.
- suggested making results more presentable;
- suggested more tests are needed for transit_agent XDP cost;
- Xinfeng eBPF probe cost experiment.
- suggested doing it in a more accurate way;
- XDP/eBPF on probing study(Ali cloud) and idea discussion.
- discussed potential use case scenarios.
- XDP/eBPF on overlay network R&D discussion.
- Feng presented latest perf test progress.
- existing test results;
- follow-up test plan;
- paper content and structure discussion.
- Xinfeng demonstrated examples in using eBPF in monitoring.
- Peng updated latest study in eBPF/Prometheus use case and eBPF on MPTCP paper.
- eBPF on overlay network R&D discussion.
- USTC offloading status updates.
- Not much progress due to a lab environment issue (hacked).
- Discussed bcc tools and Prometheus with eBPF
- will look more into use cases in AliCloud with eBPF/Prometheus and discuss next week.
- Discussed meeting frequency.
- will be bi-weekly after next week's meeting.
- USTC offloading status updates.
- Latest perf test setups and results discussion.
- FW suggested setting up IRQ affinity first in the perf test to take advantage of multiple cores;
- For issues in the xdpgeneric mode test (low packet rate), had several suggestions; need to try pushing up the rate.
- Discussion about the deliverables of the collaboration work.
- Code to Mizar branch;
- Test scripts/results to perf repo;
- Discussed potential paper write up.
- Discussed follow up R&D ideas.
- USTC offloading status sync up.
- discussed current benchmark test issues.
- xdp offload vs xdp generic in various tests, have good results.
- xdpgeneric mode pkt processing rate is low, needs further investigation.
- need to try increasing the packet generation rate.
- USTC offloading status sync up.
- demonstrated and discussed current test results.
- should USTC continue working on statistics as planned?
- probably not, given the NIC's limited capability in offload mode.
- project output discussion.
- misc issues.
- next phase R&D ideas and timeline discussion.
- USTC offloading status sync up.
- had initial tests of XDP offload mode vs generic mode for xdp1 function.
- had a packet loss issue during testing; isolated it to a NIC hardware issue.
- continue with more perf tests, focusing on effect on table size, load and CPU cores.
- Next phase R&D idea discussion and sync up.
Canceled due to holiday.
- USTC Mizar offloading status sync up.
- discussed possible cross-VPC communication support; decided to leave it to the upper layer.
- discussed potential data structure optimization.
- Next phase R&D idea discussion and sync up.
- FW suggested some related papers to study.
- Continue discussion next time.
- USTC Mizar offloading status sync up.
- current status check.
- offloading demo of existing code(with manual eBPF map changes).
- xDP/eBPF based Observability study update.
- INT study update.
- xDP/eBPF related R&D idea discussion.
- USTC Mizar offloading status sync up.
- latest code check-up.
- discussed current status and follow up plan.
- discussed potential test items.
- xdp/eBPF related R&D idea discussion.
Canceled due to holiday.
- USTC Mizar offloading status sync up.
- last two week's progress walkthrough.
- following two week plan discussion.
- Follow up R&D idea discussion.
- USTC Mizar offloading status sync up.
- bpf_set_link_xdp_fd issue solved.
- it works in offload mode, previous issue was due to lib mismatch.
- discussed existing development status and limitation.
- options on how we could potentially move more logic from xdp2 to xdp1.
- BPF_MAP_TYPE_LPM_TRIE convert options in offloaded xdp code.
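On the BPF_MAP_TYPE_LPM_TRIE conversion options above: one generic fallback when a trie map can't be used (a sketch of the idea only; these notes don't say which option was chosen for the offloaded code) is to keep one hash entry per (prefix length, masked address) and probe the longest prefix first:

```python
# Generic way to emulate longest-prefix match with plain hash lookups, a
# common workaround when BPF_MAP_TYPE_LPM_TRIE isn't available on a target.
# IPv4-only sketch; illustrative, not the Mizar code.
import ipaddress

def build_table(routes):
    """routes: list of ('10.0.0.0/8', value) pairs.
    Returns (hash table keyed by (prefixlen, network), prefix lens longest-first)."""
    table = {}
    lens = set()
    for cidr, value in routes:
        net = ipaddress.ip_network(cidr)
        table[(net.prefixlen, int(net.network_address))] = value
        lens.add(net.prefixlen)
    return table, sorted(lens, reverse=True)

def lpm_lookup(table, lens, addr):
    ip = int(ipaddress.ip_address(addr))
    for plen in lens:                                    # longest prefix first
        key = (plen, ip >> (32 - plen) << (32 - plen))   # mask addr to prefix
        if key in table:
            return table[key]
    return None
```

The trade-off is one hash probe per distinct prefix length instead of one trie walk, which is why it matters how many prefix lengths the routing data actually uses.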
- USTC Mizar offloading status sync up.
- discussed bpf_set_link_xdp_fd function not working in offload mode in tested examples; USTC suspected this could be a potential roadblock.
- FW and USTC to tackle this and get this solved soon.
- discussed the test scenarios to demonstrate the potential benefit.
- discussed existing development status and limitation.
- Need to add more functionality into offload xdp code if possible.
- Focusing on this effort in the coming weeks.
Canceled due to USTC schedule conflict.
- USTC Mizar offloading status sync up.
- discussed current status, had POC with minimal xdp offloading working
- discussed follow up roadmap
- Mizar QOS feature demo and discussion.
- USTC Mizar offloading status sync up.
- discussed USTC hybrid environment(physical + vm) issue USTC had.
- FW suggested using another branch with the "eth0" fixes for now.
- FW will try hybrid environment locally.
- discussed other minor issues.
- USTC target to have first PoC in two weeks.
- Discussed future collaboration approaches.
- USTC Mizar offloading status sync up.
- discussed issues USTC had when working with latest Mizar 0.9.
- FW suggested USTC to work with local image when trying out their code.
- USTC Mizar offloading status update.
- offloading part, discussed limitations we face and code logic inside transit_xdp in question.
- will continue deep dive into code logic.
- non-offloading part, discussed current approach and status.
- FW made several suggestions.
- test environment setup status update and discussion
- USTC faced the predictable naming issue; FW provided the current solution.
- Had discussion about technology trend in SmartNic.
- Had discussion about USTC Mizar offloading "new" approach and POC plan.
- discussed "new" approach:
- "rewrite/reconstruct" offload/non-offload logic for similar functionality instead of direct "refactoring" existing code.
- discussed current POC plan and progress.
- Had discussion about XDP related research.
- USTC explored latest XDP research papers.
- Discussed XDP challenges, limitations and research ideas.
- Will continue discussion.
- Had discussion about the latest USTC Mizar offloading refactoring progress.
- discussed current status and issue.
- FW suggested to put end-to-end POC as higher priority along with optimization.
- USTC to come up with a POC plan.
- Discussed several latest Sigcomm papers USTC read through.
- USTC looked to see if there's something that could be applied to the project.
- FW suggested to look into open source code related to these papers.
Canceled due to USTC taking one week's summer break.
- USTC continues work on Mizar offloading related code refactoring.
- discussed current status and issues.
- USTC push to have POC soon.
- USTC did firmware atomic write performance test with ping traffic.
- FW suggested to use netperf, iperf as well.
- Discussed various topics following studies of recent conference(e.g. eBPF summit) talks.
- FW suggested deep dive into several interested talks/areas.
- FW suggested USTC to document the detailed tryout steps and findings in various experiments.
- USTC tried out XDP_REDIRECT in various scenarios.
- works in xdpgeneric.
- works in xdpdrv for Intel NIC (igb driver, but on kernel 5.11).
- not working in xdpdrv for Netronome NIC.
- confirms what FW verified before.
- USTC continues deep diving in hw offloading code refactoring.
- discussed current status and issues.
- aiming to get POC refactoring e2e working first, then analyzing and optimizing.
- discussed current Mizar status in related area.
- FW to further confirm existing Mizar limitation.
- Discussed various issues in k8s, etc., will recheck next week.
- Misc.
- FW confirms per packet map update was recently added for statistic related logic;
- FW listed eBPF summit 2021 video.
- FW suggested check on various topics mentioned previously, including open source firmware, other research area besides hw offloading.
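On the per-packet map update for statistics mentioned above: XDP code commonly uses per-CPU maps (e.g. BPF_MAP_TYPE_PERCPU_ARRAY) so each CPU bumps its own slot without atomics, and user space sums the slots on read. A toy Python model of that pattern (illustrative, not Mizar's code):

```python
# Toy model of the per-CPU counter pattern for per-packet statistics in XDP.
# Each CPU increments its own slot lock-free; the reader aggregates across
# CPUs. Illustrative only, not Mizar's actual statistics logic.
NCPU = 4

def make_percpu_map(nslots):
    # One row per counter, one column per CPU (like BPF_MAP_TYPE_PERCPU_ARRAY).
    return [[0] * NCPU for _ in range(nslots)]

def on_packet(counters, slot, cpu):
    counters[slot][cpu] += 1        # no atomics needed: the slot is CPU-private

def read_counter(counters, slot):
    return sum(counters[slot])      # user space sums the per-CPU values on read
```

The design point is that the hot per-packet path never contends across CPUs; the cost is a slightly stale, aggregate-on-read view in user space.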
- USTC tested various combinations of loading multiple XDP programs on one interface.
- USTC presented refactoring approach for HW offloading.
- USTC did the needed POC function tests last week and verified this approach is potentially applicable;
- discussed the customized firmware's limitation on eBPF map update and how we could deal with it;
- USTC will test and verify the atomic update function and performance;
- FW will investigate the existing logic for per-packet map update;
- FW suggested USTC check the open source nic-firmware;
- Discussed tail-call-related questions raised in the Slack channel.
- Had discussion about related potential research areas.
- USTC tested multiple XDP programs on a single NIC.
- verified that multiple XDP programs in "hw" mode are not supported in the 5.11 kernel.
- Discussed various follow up options.
- refactoring existing logic(item 4).
- check open source nic-firmware.
- try out multiple xdp programs in different mode(just to verify).
- customized firmware ebpf_map_update usage limitation discussion.
- confirmed it works as expected; last week's failure was due to another code logic issue.
- USTC continues HW offloading related code deep dive.
- Discussed the refactoring proposal USTC presented.
- USTC will continue with POC tryout and potentially have some result next week.
- Have some questions in code deep dive
- posted on slack channel.
- Discussed potential R&D options.
- USTC's SmartNic firmware test "random" crashes issue solved.
- caused by software version incompatibility in the system.
- USTC tested multiple XDP programs on a single NIC.
- xdpgeneric mode works; hw (offload) mode fails so far.
- [USTC] will continue investigating for root cause.
- customized firmware ebpf_map_update usage limitation discussion.
- [USTC/FW] will continue to discuss on slack with more details.
- USTC continues bouncer/divider code deep dive.
- [USTC] will collect questions for further discussion.
- Discussed future work items.
- [USTC/FW] will collect more detailed items for further discussion.
- [USTC] DPDK/XDP comparison postponed for now.
Canceled due to USTC busy on paper submission.
- USTC kubeadm cluster install succeeded;
- USTC looked into a master's thesis on XDP/container perf testing and presented a report; will continue looking at DPDK and other tests; suggested looking at VPP as well;
- USTC testing on various eBPF functions for offloading on SmartNIC;
- "random" crashes, FW suggested to locate the minimal working set and find the root causes;
- FW recommended paying attention to the extra constraints on writing offloading code;
- USTC continues testing multiple XDP programs on a single interface in offload mode;
- FW expects to present QoS enhancement design next week.
- USTC got Netronome custom firmware up and running.
- will continue exploring its usability.
- Refactoring bouncer/divider code:
- FW verified latest XDP feature(support multiple XDP programs in one interface) on Ubuntu 21.04(5.11.0);
- will continue exploring its capability
- USTC continues exploring various approaches:
- try to create an environment which utilizes bouncer/divider and verify the traffic flows as expected;
- rewrite bouncer/divider under current SmartNic constraints;
- refactoring existing code;
- Group discussion for various multi cluster use case scenario(HQ, Edge and Mizar).
- HQ discussion about multi-level QoS. It would be helpful to have a 1-pager doc of use cases.
- Mada is writing doc for this, he'll sync up with us next week.
- USTC got firmware from Netronome and ran into issues installing it.
- No progress this week due to finals; will continue next week.
- Refactoring bouncer/divider code status:
- FW investigating new XDP feature (support for multiple XDP programs on one interface); creating an environment which utilizes bouncer/divider;
- USTC exploring various approaches:
- try to create an environment which utilizes bouncer/divider and verify the traffic flows as expected;
- rewrite bouncer/divider under current SmartNic constraints;
- refactoring existing code;
- [FW]Deployment documentation updates;
- HQ discussion about multi-level QoS. It would be helpful to have a 1-pager doc of use cases.
- Mada will give us a doc of use-cases. We need to follow up with them about doc.
- USTC got firmware from Netronome and ran into issues installing it.
- Suggested clean-install. Followup on slack.
- Refactoring bouncer/divider code status:
- Needs further investigation.
- HQ discussion about multi-level QoS. It would be helpful to have a 1-pager doc of use cases.
- Mada will give us a doc of use-cases.
- USTC question on slack regarding limitations of NN XDP support.
- Workaround: Implementing required BPF functionality not supported by NN will increase XDP program size and may hit the size limit. Needs investigation.
- USTC/Wei is investigating this. Can we refactor bouncer/divider code?
- Questions about kind-setup.
- Owner Vinay: Create a "Dev tips and tricks" document.
- Owner Vinay: Cleanup existing docs on how to run Mizar in a) kind-setup , b) AWS Ubuntu 18.04.05.
- Can XDP program on transit-agent on pod veth execute a tail-call to jump and execute an EDT program running on Netronome NIC?
- No, this is not possible. We cannot add ID of offloaded XDP program to the jump table.
- Discussed questions from slack channel.
- DPDK discussion to continue over slack channel.
- USTC received the Netronome cards and switch.
- Phu: We cannot update offloaded BPF maps from the host without special firmware.
- USTC: Need to request custom firmware from the NN support team (open a support ticket)
- Does Netronome have any unknown limitations that prevent bouncer/divider offload?
- Owner Wei: Needs investigation.
- Check if bouncer / divider functionality does the following:
- Uses any XDP actions not supported by NN.
- Uses any BPF helper functions that are not supported by NN.
- FW (Wei) working on determining where egress XDP support is currently in linux kernel.
- XDP is an actively developed feature in the Linux kernel; need to better understand where egress XDP support stands before asking for this feature.
- USTC trying to understand the IP address for host-ep being same as eth0 IP/32. Why do we do this?
- Discussed limitations of tail-call for statistics
- Vinay: We should have separate XDP program that does statistics gathering. (Separation of concerns design principle)
- Phu: Statistics gathering program should not have to deal with packet forwarding decisions.
- Discuss this offline.
- DPDK performance test update - USTC is working on it.
- Can Netronome or another solution be used to offload transit-switch (bouncer) and transit-router (divider) functionality?
- FW/USTC. USTC has not received the card yet - ETA next week.
- Questions about kind-setup.
- Owner Vinay: Create a "Dev tips and tricks" document.
- Owner Vinay: Cleanup existing docs on how to run Mizar in a) kind-setup , b) AWS Ubuntu 18.04.05.
- Can XDP program on transit-agent on pod veth execute a tail-call to jump and execute an EDT program running on Netronome NIC?
- Phu is working on it, laptop broke down. ETA next week.
- Does Netronome have any unknown limitations that prevent bouncer/divider offload?
- Phu: We cannot offload ingress transit XDP program right now to test offload-ability of bouncer/divider only functionality.
- Code needs refactoring because transit XDP does both ingress and bouncer/divider functionality.
- XDP_REDIRECT / bpf_tail_call usage in code.
- Owner Wei: Needs investigation.
- Check if bouncer / divider functionality does the following:
- Uses any XDP actions not supported by NN.
- Uses any BPF helper functions that are not supported by NN.
- FW (Wei) working on determining where egress XDP support is currently in linux kernel.
- Investigating. Ask kernel community where they are with adding support.
- Should we get involved / drive this?
- Qian: Feature request for Mizar in the scope of next release.
- Qian will create issue for it in mizar repo.
- Netronome docs say host does not have access to offloaded maps. Does this affect network policy?
- Vinay: It should not as long as policy code is able to autonomously make accept/deny decision, and delegate to host if it cannot decide.
- Resolved. Wei tried it and we can make changes from usermode program. Phu responded in detail.
- Issue in slack: https://mizar-group.slack.com/archives/CMNECC8JF/p1620553802093900
- USTC will retry after Phu's fix.
- Resolved. USTC and FW tried it and works.
- Can XDP program on transit-agent on pod veth execute a tail-call to jump and execute an EDT program running on Netronome NIC?
- Phu is working on it. We may need 3 XDP programs to make this work. No update yet because heads down on 5/30 release work. ETA next week.
- Can Netronome or another solution be used to offload transit-switch (bouncer) and transit-router (divider) functionality?
- FW/USTC. USTC waiting on card - ETA next week.
- Does Netronome have any unknown limitations that prevent bouncer/divider offload?
- FW. No obvious limitation found, needs a more thorough comb through.
- FW (Wei) working on determining where egress XDP support is currently in linux kernel.
- What is the concern YZ has with TC solution?
- Netronome docs say host does not have access to offloaded maps. Does this affect network policy?
- Vinay: It should not as long as policy code is able to autonomously make accept/deny decision, and delegate to host if it cannot decide.
- Issue in slack: https://mizar-group.slack.com/archives/CMNECC8JF/p1620553802093900
- USTC will retry after Phu's fix.
- Best way to get started with XDP (outside of Mizar)
- See tutorial https://github.com/xdp-project/xdp-tutorial
- Can XDP program on transit-agent on pod veth execute a tail-call to jump and execute an EDT program running on Netronome NIC?
- Phu is working on it. We may need 3 XDP programs to make this work. No update yet because heads down on 5/30 release work.
- Can Netronome or another solution be used to offload transit-switch (bouncer) and transit-router (divider) functionality?
- FW/USTC.
- Does Netronome have any unknown limitations that prevent bouncer/divider offload?
- FW. No obvious limitation found, needs a more thorough comb through.
- Which Netronome (25G / 40G / another alternative) NICs should USTC buy? How many?
- Buy 3 x 40G Netronome Agilio CX 1/2x40GbE NICs.
- Buy recommended QSFP cables: https://www.netronome.com/products/cable-matrix/
- Can XDP program on transit-agent on pod veth execute a tail-call to jump and execute an EDT program running on Netronome NIC?
- Phu is working on it. We may need 3 XDP programs to make this work... investigating.
- Can Netronome or another solution be used to offload transit-switch (bouncer) and transit-router (divider) functionality?
- FW/USTC.
- Does Netronome have any unknown limitations that prevent bouncer/divider offload?
- FW. No obvious limitation found, needs a more thorough comb through.
- Can XDP program on transit-agent on pod veth execute a tail-call to jump and execute an EDT program running on Netronome NIC?
- USTC: They think this is not possible.
- Phu will try it out. As of now, Netronome NICs don't support tail-call or XDP_REDIRECT.
- Can Netronome or another solution be used to offload transit-switch (bouncer) and transit-router (divider) functionality?
- FW/USTC investigating
- Does Netronome have any unknown limitations that prevent bouncer/divider offload?
- FW
- Create a wiki to track collaboration discussion and AI's.
- Vinay will do this.
- What exactly does it mean when Netronome does not support XDP_REDIRECT? - Owner: Vinay/Phu
- Answer: Phu determined that it does not work both ways
- Egress traffic cannot be XDP_REDIRECT'd to Netronome
- XDP program offloaded to Netronome cannot XDP_REDIRECT traffic to a Pod interface veth
- What exactly does Netronome offload support on egress path?
- Nothing.
- Can we redirect to the Rx path of the H/W NIC, run XDP encapsulation during Rx, and then put such a packet on the Tx path?
- Answer: USTC investigated and the answer is this does not work.
- Understand in detail how exactly XDP_REDIRECT works. - Vinay / USTC
- Do two endpoints on same droplet (node) communicate directly without going to bouncer?
- Answer: Phu investigated - the first packet goes to bouncer, subsequent packets go directly to the host that contains the destination Pod (endpoint)
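The first-packet-via-bouncer behavior Phu describes can be modeled as a simple learning cache on the sender side (names here are hypothetical, not Mizar identifiers):

```python
# Sketch of the direct-path behavior above: the sender has no entry for the
# destination endpoint at first, so packet 1 goes via the bouncer; once the
# destination host is learned and cached, later packets go direct.
# Illustrative model only, not Mizar code.
class Sender:
    def __init__(self, bouncer_host_of):
        self.cache = {}                   # dest endpoint -> host
        self.lookup = bouncer_host_of     # what the bouncer would resolve

    def next_hop(self, dest):
        if dest in self.cache:
            return self.cache[dest]       # fast path: direct to the host
        host = self.lookup(dest)          # slow path: packet goes via bouncer,
        self.cache[dest] = host           # which teaches the sender the host
        return 'bouncer'
```

Running two sends to the same endpoint shows the first hop is the bouncer and the second is the learned host, matching the observed behavior.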