
project question #2

Open
wangyazhao001 opened this issue Oct 9, 2021 · 3 comments

Comments

@wangyazhao001

Hello! I read your article, "blk-switch: Rearchitecting Linux Storage Stack for μs Latency and High Throughput." To be honest, I think this research could have fairly broad applications on mobile and PC.
I have tried to follow the paper to understand the project, but it is difficult. Kernel changes are usually distributed as patches, which are easy to read, so it is hard for me to tell which parts of the kernel you modified in your project. Can you tell me how to understand your project more easily?
@jaehyun-hwang
Collaborator

Hi, thanks for your interest in our work!
It's a bit hard for us to completely understand the question. Could you rephrase your question for us?

@wangyazhao001
Author

Hello Hwang, to be more specific, I mainly have two questions:
(1) When using fio to simulate an L-app and a T-app, the distinction seems to be made only through the bs and iodepth parameters. Is there a solid theoretical basis for this kind of simulation?
(2) When testing with SPDK, in theory only one fio instance should be running at a time, because SPDK claims exclusive access to the SSD; how, then, does the SPDK test in the paper run two fio instances (an L-app and a T-app) at the same time?

Please answer when you are free. Thank you very much!

@webglider
Collaborator

(1) The blk-switch design is general and allows users to specify which applications are L-apps and which are T-apps through the Linux ionice interface. In our paper's evaluation, we use fio with small bs and iodepth as representative of L-apps, and fio with large bs as representative of T-apps. There is no "theoretical" basis for this choice. The reasoning is as follows:

  • Using small I/Os for the L-apps enables us to stress the impact of latency inflation coming from the host storage stack (if we were to use large bs and iodepth for L-apps, then queueing at the storage device would likely dominate the end-to-end latency). So in a sense, our setup is allowing us to measure the "worst-case" impact of the host storage stack on the I/O latency.
  • In practice it is common for latency-sensitive applications to perform small I/Os (e.g. RocksDB style applications), and for throughput-bound applications to perform large I/Os (e.g. MapReduce style applications).

Of course, as with any experimental evaluation, our analysis is not perfect. In theory (as well as in practice), there could very well be L-apps which have large bs and iodepth, and T-apps with small bs. This is why we try to perform sensitivity analysis to whatever extent possible. For example, we do show sensitivity with varying iodepth of T-apps. Similarly, one could very well do sensitivity analysis with varying bs and iodepth for L-apps, and varying bs for T-apps. There is no fundamental issue with doing this. Our sensitivity analysis in the paper is limited simply due to space constraints.
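To make this concrete, here is a rough sketch of how such an fio-based L-app/T-app pair could be launched on a single device. The device path, runtimes, and in particular the ionice class/level used to mark the L-app are placeholders and assumptions for illustration; please refer to the evaluation scripts in this repository for the exact settings we used.

```sh
# Hypothetical sketch: one latency-sensitive fio instance (small bs, iodepth=1)
# and one throughput-bound fio instance (large bs, deeper queue) on the same device.
# The ionice class/level marking the L-app is an assumption; check the repo scripts
# for the mapping blk-switch actually expects.

DEV=/dev/nvme0n1   # placeholder device

# L-app: small random reads, shallow queue, higher I/O priority via ionice
ionice -c 2 -n 0 fio --name=lapp --filename=$DEV --ioengine=libaio --direct=1 \
    --rw=randread --bs=4k --iodepth=1 --numjobs=1 --time_based --runtime=30 &

# T-app: large sequential reads, deeper queue, lower I/O priority
ionice -c 2 -n 7 fio --name=tapp --filename=$DEV --ioengine=libaio --direct=1 \
    --rw=read --bs=128k --iodepth=32 --numjobs=1 --time_based --runtime=30 &

wait
```

As noted above, the small bs and iodepth=1 for the L-app are what let the host-side stack, rather than device queueing, dominate the measured latency.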

(2) In the experiments in our paper, multiple SPDK apps run on the host side, not on the target side. Hence, these apps do not access the SSD directly; instead, they transfer their requests to the target side over TCP sockets using SPDK's NVMe-over-TCP layer. As a result, there is no problem with running multiple SPDK apps on the host side. On the target side, all applications' requests are processed by the SPDK nvmf target, which runs as a single process.
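To illustrate the shape of such a setup, here is a rough sketch of a single SPDK nvmf target exporting an SSD over NVMe-over-TCP, with two independent SPDK apps on the host side issuing I/O to it. The binary paths, RPC option spellings, IP address, NQN, and PCIe address are placeholders and may differ across SPDK versions; this is not the exact configuration used in the paper.

```sh
# --- Target side: a single SPDK nvmf target process owns the SSD ---
./build/bin/nvmf_tgt &                          # binary path depends on SPDK version/build
./scripts/rpc.py nvmf_create_transport -t TCP
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:04:00.0
./scripts/rpc.py nvmf_create_subsystem nqn.2021-10.io.example:ssd1 -a
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-10.io.example:ssd1 Nvme0n1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-10.io.example:ssd1 \
    -t tcp -a 10.0.0.1 -s 4420

# --- Host side: two SPDK apps connect over TCP; neither touches the SSD directly ---
# L-app-like workload: small I/Os, shallow queue
./build/examples/perf -q 1 -o 4096 -w randread -t 30 \
    -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' &
# T-app-like workload: large I/Os, deeper queue
./build/examples/perf -q 32 -o 131072 -w read -t 30 \
    -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' &
wait
```

Because only the nvmf target process opens the SSD, the host-side apps contend at the NVMe-over-TCP layer rather than on the device handle, which is why running several of them concurrently is not an issue.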
