
Add zone reset function and plane-level parallelism in ZNS SSD #155

Open · wants to merge 1 commit into master

Conversation

DingcuiYu (Contributor)

Hello! I fixed some bugs in zns_ftl. My revisions are as follows:

  1. Added plane-level parallelism and modified the LPN-to-PPN mapping mechanism so that each zone write exploits the SSD's parallelism while respecting the programming unit. For example, assuming the media is QLC, the programming unit is 4 flash pages, and each flash page is 16 KiB and holds four LPNs, the layout of zone 0 is as follows:
    LPN 0  --> ch[0] fc[0] pl[0] blk[0] flash_pg[0] pg[0]
    ...
    LPN 3  --> ch[0] fc[0] pl[0] blk[0] flash_pg[0] pg[3]
    LPN 4  --> ch[0] fc[0] pl[0] blk[0] flash_pg[1] pg[0]
    ...
    LPN 15 --> ch[0] fc[0] pl[0] blk[0] flash_pg[3] pg[3]
    ---------------- a programming unit ----------------
    LPN 16 --> ch[1] fc[0] pl[0] blk[0] flash_pg[0] pg[0]
    ...

Note that I am not sure whether the flash pages of different planes within a flash chip must be programmed at the same time; if so, the logic above needs to be modified. Assuming there are two planes, you would then set planes_per_lun to 1 and ZNS_PAGE_SIZE to 16 KiB x 2 = 32 KiB. A sketch of this mapping appears after the list.

  2. Made the zone reset command trigger both the erasure of the zone's blocks and the corresponding update of the L2P mapping table (see the second sketch after this list).

  3. Renamed some variables to make them more readable.

  4. Corrected the read logic to aggregate reads to the same flash page (see the third sketch after this list).
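
To make the striping in item 1 concrete, here is a minimal C sketch of the mapping under the stated assumptions (four LPNs per 16 KiB flash page, a four-flash-page QLC programming unit). The geometry constants and every name here (`struct zns_ppa`, `lpn_to_ppa`, `NR_CH`, ...) are hypothetical placeholders, not the actual zns_ftl code:

```c
#include <stdint.h>

#define LPNS_PER_FLASH_PG  4   /* four LPNs fill one 16 KiB flash page    */
#define FLASH_PGS_PER_UNIT 4   /* QLC one-shot programs 4 flash pages     */
#define LPNS_PER_UNIT      (LPNS_PER_FLASH_PG * FLASH_PGS_PER_UNIT)
#define NR_CH 8                /* channels (made-up geometry)             */
#define NR_FC 4                /* flash chips per channel                 */
#define NR_PL 2                /* planes per chip                         */

struct zns_ppa {
	uint32_t ch, fc, pl, blk, flash_pg, pg;
};

/* Map a zone-relative LPN to a physical address in the zone's block. */
static struct zns_ppa lpn_to_ppa(uint64_t lpn, uint32_t zone_blk)
{
	struct zns_ppa p;
	uint64_t unit  = lpn / LPNS_PER_UNIT;            /* programming-unit index */
	uint64_t off   = lpn % LPNS_PER_UNIT;            /* offset within the unit */
	uint64_t round = unit / (NR_CH * NR_FC * NR_PL); /* full parallel sweeps   */

	/* Stripe successive programming units across ch -> fc -> pl,
	 * so LPNs 0..15 land on ch[0] and LPN 16 starts ch[1]. */
	p.ch  = unit % NR_CH;
	p.fc  = (unit / NR_CH) % NR_FC;
	p.pl  = (unit / (NR_CH * NR_FC)) % NR_PL;
	p.blk = zone_blk;

	/* After one sweep over every parallel unit, continue at the next
	 * programming unit within the same block. */
	p.flash_pg = round * FLASH_PGS_PER_UNIT + off / LPNS_PER_FLASH_PG;
	p.pg       = off % LPNS_PER_FLASH_PG;
	return p;
}
```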
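
Likewise, a sketch of the reset path from item 2, reusing the constants above; `maptbl`, `INVALID_PPA`, and `erase_block` are placeholders, not the real zns_ftl API:

```c
#define INVALID_PPA UINT64_MAX

extern uint64_t maptbl[];   /* L2P table indexed by LPN (placeholder) */
extern void erase_block(uint32_t ch, uint32_t fc, uint32_t pl, uint32_t blk);

/* Reset a zone: erase its block on every parallel unit and invalidate
 * the zone's slice of the L2P table in the same pass. */
static void zns_reset_zone(uint64_t zone_slpn, uint64_t zone_nlpn,
			   uint32_t zone_blk)
{
	for (uint32_t ch = 0; ch < NR_CH; ch++)
		for (uint32_t fc = 0; fc < NR_FC; fc++)
			for (uint32_t pl = 0; pl < NR_PL; pl++)
				erase_block(ch, fc, pl, zone_blk);

	for (uint64_t lpn = zone_slpn; lpn < zone_slpn + zone_nlpn; lpn++)
		maptbl[lpn] = INVALID_PPA;

	/* The zone's write pointer would also be rewound to zone_slpn here. */
}
```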
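
Finally, a sketch of the read aggregation from item 4: consecutive LPNs whose PPAs fall in the same flash page are coalesced into a single flash read. `issue_flash_read` is again a placeholder, and `lpn_to_ppa` is the function from the first sketch:

```c
extern void issue_flash_read(struct zns_ppa ppa);

static int same_flash_page(struct zns_ppa a, struct zns_ppa b)
{
	return a.ch == b.ch && a.fc == b.fc && a.pl == b.pl &&
	       a.blk == b.blk && a.flash_pg == b.flash_pg;
}

/* Read nlpn LPNs (nlpn >= 1) starting at zone-relative slpn, issuing one
 * flash read per distinct flash page instead of one per LPN. */
static void zns_read(uint64_t slpn, uint64_t nlpn, uint32_t zone_blk)
{
	struct zns_ppa cur = lpn_to_ppa(slpn, zone_blk);

	for (uint64_t lpn = slpn + 1; lpn < slpn + nlpn; lpn++) {
		struct zns_ppa next = lpn_to_ppa(lpn, zone_blk);
		if (same_flash_page(cur, next))
			continue;            /* coalesce into the pending read */
		issue_flash_read(cur);       /* flush the previous flash page  */
		cur = next;
	}
	issue_flash_read(cur);
}
```

Under the assumed geometry, this turns the 16 per-LPN reads of one programming unit into 4 flash-page reads.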
