
Placement of data section in wrong memory for STM32H7 #1045

Closed
chris-durand opened this issue Jul 5, 2023 · 1 comment · Fixed by #1048

@chris-durand (Member):
On STM32H7 devices the data section is placed into the DTCM memory, which is not accessible by peripheral DMA transfers (unlike on STM32F7, where that works).

The linker script selects the memory with the lowest address for data, but I suspect the intention was to pick the largest contiguous region, so this is a bug.

The STM32 linker script templates have:

{{ linker.section_ram(cont_ram_regions[0].cont_name|upper, "FLASH", table_copy, table_zero,
                      sections_data=["fastdata", "data_" + cont_ram_regions[0].contains[0].name],
                      sections_bss=["bss_" + cont_ram_regions[0].contains[0].name],
                      sections_noinit=["faststack"]) }}

cont_ram_regions[0] is the region with the lowest address. Probably cont_ram, which is the largest contiguous region, should be used instead.

@salkinium Was it the original intention to pick the largest region or is the current behaviour intended? If it should be the largest I can open a PR and fix it.

@salkinium (Member):

I'm not sure what exactly the original intention was, to be honest. I think STM32H7 simply reused the F7 linker script and was never validated in detail the way you're doing now.
I didn't know about the DMA limitations back then, so this is definitely a bug.

Labels
Development
