Memory summary in Arduino IDE is not correct #374
The flash size in boards.txt will be off. Maybe it doesn't have the EEPROM memory removed from the total available size. |
I think it is correct in the boards.txt file:
So that's not the problem. It's like we're not properly counting up all of the sections from the map file that we need to. We're missing some. |
By the way, 122880 is 0x1E000. The linker script for the MX150 and MX250 both have 0x1D000 for the kseg0_program_mem, so the boards.txt file for boards that use those chips should have 118784 for the flash size. That's 0x20000 minus 0x1000 for the EEPROM and 0x1000 for the split bootloader space, and 0x1000 for the exception memory. |
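The arithmetic above can be sanity-checked with a few lines of Python (the values are the ones quoted in this thread for the MX150/MX250 linker scripts):

```python
# Flash accounting for the MX150/MX250 parts discussed above.
total_flash = 0x20000   # 128 KB of flash in total
eeprom      = 0x1000    # emulated EEPROM area
bootloader  = 0x1000    # split-bootloader space
exception   = 0x1000    # exception memory

kseg0_program_mem = total_flash - eeprom - bootloader - exception
print(hex(kseg0_program_mem), kseg0_program_mem)  # 0x1d000 118784
```

That confirms 118784 is the base-10 value that belongs in upload.maximum_size for these chips.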
Ahh - I guess we weren't taking into account the exception memory. Maybe that's where the missing space is. |
It needs to be the base 10 representation of whatever the LENGTH= of kseg0_program_mem is in the linker script. Maybe a small perl script could check them all out for us :) |
Hmmm. No, that gets us closer, but the numbers still don't match. If I reduce the Fubarino Mini's upload.maximum_size to 118784, I then get:
when I compile, even though the map file says
So, now it looks like we're counting too many segments to get to the 101% number. |
Ahh, I see it now. We're counting in the exception vector sections, but we shouldn't be (if we're not including the exception vector flash in our upload.maximum_size, then we shouldn't count it when we add up all of the sections). I'll see if I can fix that. |
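The fix described above amounts to something like the following Python sketch. The section names and sizes here are invented for illustration; the real ones come out of the pic32 map file:

```python
# Hypothetical section sizes, as they might appear in a map-file summary.
# The exception-vector sections live in flash that is NOT included in
# upload.maximum_size, so they must not be counted against it either.
sections = {
    ".reset":    0x200,
    ".text":     0x1A000,
    ".rodata":   0x2800,
    ".vector_0": 0x20,
    ".vector_1": 0x20,
}

def flash_used(sections):
    # Skip the vector sections when totalling up flash usage.
    return sum(size for name, size in sections.items()
               if not name.startswith(".vector"))

print(hex(flash_used(sections)))  # 0x1ca00
```

The key point is that the numerator and the denominator of the percentage must cover the same set of sections, otherwise you get the 101% report seen above.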
Agreed - that is the right number to put in the boards file for each board.
…On Wed, Jan 10, 2018 at 7:08 AM, Majenko Technologies wrote:
It needs to be the base 10 representation of whatever the LENGTH= of
kseg0_program_mem is in the linker script. Maybe a small perl script
could check them all out for us :)
|
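The "small perl script" suggested in the quoted email could look something like this in Python. The LENGTH= syntax matched here is the usual GNU ld MEMORY form; feeding it each board's linker script is left to the caller:

```python
import re

# Matches e.g.: kseg0_program_mem (rx) : ORIGIN = 0x9D000000, LENGTH = 0x1D000
PATTERN = re.compile(r"kseg0_program_mem.*?LENGTH\s*=\s*(0x[0-9A-Fa-f]+|\d+)")

def max_upload_size(ldscript_text):
    """Return the base-10 value to put in boards.txt, or None if not found."""
    m = PATTERN.search(ldscript_text)
    return int(m.group(1), 0) if m else None

example = "kseg0_program_mem (rx) : ORIGIN = 0x9D000000, LENGTH = 0x1D000"
print(max_upload_size(example))  # 118784
```

Run over every .ld file in the core, this would catch any board whose upload.maximum_size disagrees with its linker script.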
OK, I have the flash memory matching perfectly between the IDE output and the map file summary. Now to check the RAM values . . . |
Well, I learned something interesting. You can increase the size of a const array (FLASH) and a volatile array (RAM) until the link fails. Then back off by 1 byte. The link will then succeed, and the sketch will download. However, that's no guarantee that it will run! We must have something wrong with our linker scripts, because if I max out both flash and RAM in a sketch, it will not run properly. Here's an example. This is for a 256K Flash/64K RAM part (the MX270).
Now, when you run this, you will get lots of stuff printing out, but there will be a lot of random binary garbage in there, and after printing out the 23003rd index of bbb[], it crashes and doesn't print anything else out. Ideally, if the link succeeds, the sketch should not crash due to out-of-memory problems (which is what I'm assuming is happening here). That is not the case, currently. |
Maxing out RAM and it crashing and printing garbage I can quite understand. There's not a lot you can do about that - it's called "stack smashing". Yes, we have a finite and fixed amount of RAM in the chip - however the quantity of that RAM that is available to your program at any one time is not something that can be easily predicted. Your global variables share the memory with both the heap and the stack. As the stack grows it will crunch into the heap (smash) and nasty things may happen. The bigger your global variables, the smaller the gap between the stack and the heap, until it reaches a point where the two are side by side in memory - the slightest increase in stack usage over its allocated minimum and the heap gets it in the rear end. Maxing out the RAM is never a good idea.
We have an initial allocation of 0x800 bytes for the stack (which it can grow past) and a hard limit of 0x800 bytes for the heap (which it's never allowed to grow over - I have been campaigning for years to have this hard limit removed, because it makes dynamic allocation of memory useless, but it falls on deaf ears, so I provide my own patched version of the pic32 compiler on UECIDE). That tries to mitigate this kind of thing, but still - maxing out the RAM usage with global variables is, basically, asking for trouble with a capital T.
As for getting garbage from flash, that's probably because you're not actually specifying anything in your flash array, so you're just seeing what was present in the chip from the last program you ran. |
Matt- so you're assuming that my little sketch above blows past 2KB of stack? I don't think that's true, but I don't have proof of it yet. If such a small little sketch can use more than 2KB of stack, then we should be making a bigger reservation for the stack. The reserved heap is 0x800 bytes long, and the stack is 0x930 long, at least that's what my map file shows. |
Sure it can blow past 2K. There's so much more than just your sketch. Everything that's happening in the background is playing with the stack. Every interrupt that fires dumps a pile of registers on the stack. Any local variables in functions (including ISRs) are on the stack. On the PIC32 we have nested interrupts, so you can have the serial ISR running and the CT interrupt interrupting it, which means twice as much on the stack. The point is, if you want to fill the memory you will have to minimise your stack usage to compensate. Yes, we could increase the reserved amount, but then we'd be wasting space when you're not needing that stack space. I guess we could check that the bare minimum (an empty sketch) isn't above the reserved quantity and increase it to that level plus a little more, but much more than that could just be wasteful. |
For core v2.0.0, I'm not going to spend more time trying to figure this out exactly. I'm putting in something that gets us pretty close for the RGB_Station board, and I'm leaving the rest of them. Later, when somebody has some more time, it would be really good to try and get a more accurate idea of whether we really are blowing through 2KB worth of stack with very simple sketches, and to understand the memory usage better. Then further refinements can be made so that you can still have a running sketch even with all of your memory 'allocated'. (I don't find this such an absurd goal - people run mission-critical code with 100% memory usage. Now, they've done the analysis to show that the allocated stack area is 5x bigger than whatever gets used, but it's still a system with all available memory allocated at link time, and the thing runs just fine.) |
With chipKIT-core 1.4.3: If you declare a very large static array, such that it uses up all available flash (in other words, if you make the array one byte bigger, the link fails), the IDE does NOT show 100% program memory utilization.
For example, this sketch
compiles, but if you make the array one byte larger, it fails to link. However, the output to the user says
So what's the deal? The linker's map file output shows the correct value:
Total kseg0_program_mem used : 0x1d000 118784 100.0% of 0x1d000
This calculation is performed in the "compute size" section of platform.txt:
But I'm not sure what we're missing in that pattern matching code to make the memory allocation appear correct. My guess is that the RAM usage is also off.