Support read exec-file #69
Conversation
Do we need something like `ExecFileOutput`? Then we may also need `MemoryMapOutput`...
If you read the PR that added MemoryMap, there is an interesting long-form discussion on why we landed on that particular API.

For this feature, I believe it's sufficient to leave the API as-is, and have the target return the filename as a `&[u8]`.

That said, upon re-reviewing the qXfer docs, I realized that these methods do support returning an error condition via an `Exx` response. As such, this API should be using the `TargetResult` infrastructure.

Moreover, we should also change `MemoryMap` to use `TargetResult` as well, but that's probably something best left to a follow-up fixup commit / PR.
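To make the `Exx` idea concrete, here's a minimal, self-contained sketch of how an error-carrying result lets a handler surface an errno to the client. Note that `Errno`, `SketchResult`, and `Emu` are stand-ins invented for this example, not gdbstub's actual `TargetResult` / `TargetError` definitions:

```rust
// Stand-in types for illustration only; gdbstub's real API differs.
type Errno = u8;
type SketchResult<T> = Result<T, Errno>;

struct Emu {
    exec_file: Option<&'static [u8]>,
}

impl Emu {
    // An `Err(errno)` here would be reported to the client as an `Exx`
    // response, instead of being silently swallowed as an empty reply.
    fn get_exec_file(&self) -> SketchResult<&[u8]> {
        self.exec_file.ok_or(0x02) // 0x02 is an arbitrary example errno
    }
}

fn main() {
    let emu = Emu { exec_file: None };
    match emu.get_exec_file() {
        Ok(name) => println!("filename: {}", String::from_utf8_lossy(name)),
        Err(e) => println!("would send E{:02x} to the client", e),
    }
}
```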
> Then we can't construct the result dynamically on [...]? I think we can also change [...]

With the current signature, you can still construct the result dynamically, no problem. The lifetime of the returned `&[u8]` is tied to the borrow of `self`:

```rust
// desugars into something akin to
// fn get_exec_file<'a>(&'a mut self, _pid: Option<Pid>) -> TargetResult<&'a [u8], Self> {
fn get_exec_file(&mut self, _pid: Option<Pid>) -> TargetResult<&[u8], Self> {
    // assume self.exec_file_buf: [u8; BUF_SIZE]
    let s: &[u8] = if something { b"option/a" } else { b"option/b" };
    self.exec_file_buf[..s.len()].copy_from_slice(s);
    Ok(&self.exec_file_buf[..s.len()])
}
```

What you can't do with the current API is stream a stack-allocated buffer back to the client:

```rust
fn get_exec_file(&mut self, _pid: Option<Pid>) -> TargetResult<&[u8], Self> {
    let mut buf = [0; 32];
    let s = b"exec/path";
    buf[..s.len()].copy_from_slice(s);
    Ok(&buf) // ERROR: cannot return a reference to a stack-allocated buffer
}
```

I would consider this to be a reasonable tradeoff in the case of [...]
I'd prefer to keep PRs focused on a single feature. If you'd like, I'd more than welcome a follow-up fixup PR. If not, I'll just pop open a tracking issue after this PR is merged to remind myself to do this cleanup at some point before releasing `0.6`.
Oh no, this means that if I want to construct the result dynamically, I must have a fixed-size buf defined in the extension? How long a buf will be enough? Must we define such a buf in every extension that needs to construct its result dynamically? And if we can do it this way, why don't we use it in Host I/O?

This reminds me of a variable: `PacketSize`. Is that true? In the GDB docs:

Oh sad, the docs don't guarantee the response won't exceed `PacketSize`. And there is an interesting sentence in the docs:

> If this stub feature is not supported, GDB guesses based on the size of the 'g' packet response.

Okay, let's assume the length of all responses won't exceed `PacketSize`. And I find we already use the buf in this way:

gdbstub/src/protocol/packet.rs Lines 31 to 35 in 9227dfd

gdbstub/src/protocol/commands/_m.rs Lines 13 to 29 in 9227dfd

(Oh wait, shouldn't [...]?)

We use only part of the buf to prepare the response, because we also need the buf to hold the decoded packet data:

gdbstub/src/gdbstub_impl/ext/base.rs Lines 201 to 204 in 9227dfd

But after this decode, this part of the buf will never be used. So we can use the full buf to prepare the response afterwards? But this may not be easy to implement? It would be easier if we could know the type [...].

This reminds me of another thing: is [...]? And there is another situation like Host I/O.
Lots of good thoughts here, many of which I've had myself in the past!

To kick off the discussion, consider the following observation about the GDB RSP: while incoming data to the target will only have a max length of `PacketSize`, outgoing data from the target is not bounded the same way.

In other words: your assumption that "the [PacketBuf] is always enough to be used to prepare the response to be sent" is incorrect. The GDB RSP makes no assumptions on how response data is/isn't buffered on the client. Instead, almost every packet with a non-trivial, dynamically sized response (vFile, qXfer, etc...) will include a provision whereby the target is allowed to return less data than requested, which typically results in the GDB RSP issuing a subsequent request to read more of the file from the appropriate offset + len.

With this in mind, yes, you're absolutely right: we could offer an API that allows the user to write response data into the unused parts of the packet buffer. The point I'm trying to make is that while this would technically be more flexible, and allow certain targets to entirely bypass allocating their own buffers, I'd wager that most implementations would end up looking something like this:

```rust
fn get_exec_file(&self, pid: Option<Pid>, buf: &mut [u8], offset: usize) -> TargetResult<usize, Self> {
    let mut stack_buf = [0; MAX_FILENAME_LEN]; // packet buffer might not be big enough for file name
    let file_name = self.get_exec_file_impl(&stack_buf, pid); // performs syscall / whatever
    copy_dst_src_with_truncation_logic(buf, file_name, offset) // force user to implement offset logic
}
```

In more complex cases - such as MemoryMap - that middle step of actually generating the output data isn't something the implementer would want to repeat multiple times, as it'd be wasting computation, so it seems very likely that they'd just compute it once, and then stash it somewhere (e.g., in a buffer on `self`).

And in fact, this is pretty much what the GDB RSP expects! Consider the following qXfer docs (emphasis mine):
The idea being that this data is in fact stored in some kind of dedicated "special data area" from which arbitrary data can be read. For this reason, I would prefer to keep the qXfer APIs as they currently are, without any of the complex callback / streaming logic other types of commands support. The cost/benefit of API complexity, maintenance, and downstream implementation effort just isn't worth it for the additional flexibility a callback/streaming based API would enable.
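For illustration, here's a minimal sketch of the pattern being described: the target computes its metadata once, stashes it in an owned buffer (playing the role of the "special data area"), and the qXfer handler just hands out a borrow. All names and the XML contents are invented for this example:

```rust
struct Emu {
    memory_map_xml: Vec<u8>, // the target's "special data area"
}

impl Emu {
    fn new() -> Self {
        // Computed once (e.g. at startup), not on every qXfer request.
        let xml: &[u8] = br#"<memory-map>
    <memory type="ram" start="0x0" length="0x10000"/>
</memory-map>"#;
        Emu { memory_map_xml: xml.to_vec() }
    }

    // The handler itself is trivial: no streaming, no offset bookkeeping.
    fn memory_map_xml(&self) -> &[u8] {
        &self.memory_map_xml
    }
}

fn main() {
    let emu = Emu::new();
    println!("{}", String::from_utf8_lossy(emu.memory_map_xml()));
}
```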
Aside from that, there's also the more "philosophical" aspect of "use the most appropriate type for the situation". When we're reading data from the target system's host / special area (such as in the case of qXfer), [...].

That said, I think you're right that there are a few places where it might make more sense to use [...]
How did you reach this conclusion? But, at least in practice, after we set [...]
This goes back to our previous discussion in #66 about [...].

Moreover, you'll find that many of the "barebones" GDB stubs will implement their output routines by writing data directly to UART registers, without using an intermediate buffer of any kind.

Of course, there's always the chance that I misread the spec, so if you can find the line that disputes my claim, I would be more than happy to reevaluate my approach.
The evidence that [...]:

At least, the GDB client could not have "unlimited" storage on its end.

And yes, I know that's just another implementation rather than the spec, but at least that means [...].

This evidence may not be so persuasive. But at least, if we just use [...]

And could you tell me how to get an arbitrary-length buffer on `no_std`?
```rust
fn get_exec_file(&self, pid: Option<Pid>, buf: &mut [u8], offset: usize) -> TargetResult<usize, Self> {
    let mut stack_buf = [0; MAX_FILENAME_LEN]; // packet buffer might not be big enough for file name
    let file_name = self.get_exec_file_impl(&stack_buf, pid); // performs syscall / whatever
    copy_dst_src_with_truncation_logic(buf, file_name, offset) // force user to implement offset logic
}

fn pread(&self, buf: &mut [u8], fd: usize, count: usize, offset: usize) -> TargetResult<usize, Self> {
    let mut stack_buf = [0; MAX_FILE_LEN];
    let file_data = self.read_full_file(&stack_buf, fd);
    copy_dst_src_with_truncation_logic(buf, file_data, offset, count)
}
```

No. Because we have [...]. And even if we must implement it this way, I think there is no problem. A dynamically allocated stack-based buffer could be better than a fixed-size heap buffer. And if we want to use a heap buffer, we can still use it:

```rust
fn get_exec_file(&self, pid: Option<Pid>, buf: &mut [u8], offset: usize) -> TargetResult<usize, Self> {
    self.get_exec_file_impl(&self.exec_file_buf, pid); // performs syscall / whatever
    copy_dst_src_with_truncation_logic(buf, &self.exec_file_buf, offset) // force user to implement offset logic
}
```
But the current implementation doesn't solve this problem. If we want to construct the result of [...]
Yes, but that doesn't mean "unlimited". We can send data of arbitrary length, but we only need to send the size requested. And you didn't reply to my questions previously:
Ah, I think you misunderstood what I meant here. I never said that we would be allowed to send more data than the client requested. What I was trying to say is that the possible length of data the client requests can exceed the size of `PacketBuf`.

i.e: based on the spec, it's totally fair game for the client to request data of length 1000 even if the packet buffer is only 100 long. In that case, the target is allowed to return less than 1000 bytes (as pointed out in point 1), but at the same time, it's totally within spec to stream the 1000 bytes back to the client without first copying them into an output buffer (e.g: by banging them directly over the UART).

Just because the reference GDB client implementation chooses to clamp response size to a number close to the PacketBuf size, doesn't mean that's part of the spec.
Many options! You could unsafely use [...].

That said, if you're implementing this extension on a super barebones no_std system, you'd probably end up using a fixed-size buffer + panicking if you exceed its length. One of the main motivators behind me pushing for this simpler-but-less-flexible API is that I don't believe an AVR microcontroller - where every byte is precious - will end up implementing these metadata-heavy qXfer extensions. For pretty much every other target, I think it's more than reasonable to expect they can afford to allocate some kind of buffer for these kinds of extensions. (If they're really worried about space, they could always reuse the same memory buffer for multiple protocol extensions, such as MemoryMap and ExecFile - see the sketch below.) Hopefully this answers your question wrt. "How long a buf will be enough?"

Let me ask you this: say we go with the "we give you a `&mut [u8]`" API. On the first call, we'd fill the buffer as best we could. But there's still data to be read, so we'd have to call the function again to fetch the rest of the data. i.e: we'd need to add an offset parameter.

On this subsequent call, what happens? Well, if the underlying syscall supports an offset (such as pread), sure, no problem, we can just plumb that through and things will Just Work™️. Unfortunately, that is not going to be the case for ExecFile nor MemoryMap. In these cases, the "black box" implementation I suggested - which may be syscall backed, or whatever - will need to write its data into some kind of large-enough buffer. In the exec-file example I gave, that's the `stack_buf`.
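A sketch of that "reuse one buffer across extensions" idea (sizes, names, and contents invented for illustration); this assumes the target is fine regenerating the data whenever the client switches which object it's reading:

```rust
struct Emu {
    scratch: [u8; 256], // one buffer shared by multiple metadata extensions
}

impl Emu {
    // Copy `data` into the shared scratch space and return the filled slice.
    fn fill_scratch(&mut self, data: &[u8]) -> &[u8] {
        let n = data.len().min(self.scratch.len());
        self.scratch[..n].copy_from_slice(&data[..n]);
        &self.scratch[..n]
    }

    fn exec_file(&mut self) -> &[u8] {
        self.fill_scratch(b"/bin/init")
    }

    fn memory_map_xml(&mut self) -> &[u8] {
        self.fill_scratch(b"<memory-map>...</memory-map>")
    }
}

fn main() {
    let mut emu = Emu { scratch: [0; 256] };
    println!("{}", String::from_utf8_lossy(emu.exec_file()));
}
```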
What I'm saying is that instead of recalculating the MemoryMap / ExecFile each time partial data is requested, it's faaaaaar more likely that an implementation would simply pre-calculate / cache / hardcode the return value of the MemoryMap / ExecFile when the extension is first called (if not before that, during other initialization steps), and then read data back from that pre-allocated buffer when the client requests data. And indeed, you point this out yourself:
These cache buffers are what the GDB RSP considers to be the "target's special data area" - i.e: a part of memory that is used by the GDB stub itself to store metadata about the target being debugged.

Long story short, the only benefit I see of plumbing through a PacketBuf-backed [...]
Oh, but what about what you said before in #66 (comment):

If you think returning [...] is fine, then [...]. If you allow me to do this, then the other questions don't need to be discussed.
As a point of clarification (because I'm not sure how well you understand Rust's no_std / std split), my comment from #66 and my last comment are both correct, and don't contradict each other in any way. On certain no_std platforms (e.g: some beefier ARM Cortex SoC), there may be enough resources available to provide a global dynamic allocator. On other no_std platforms (e.g: on tiny AVRs), there likely aren't enough resources for a global dynamic allocator.

That said, I agree that the wording of "it might not even be possible to implement this method" was misleading, and incorrect. You could implement the method by allocating a buffer tied to the lifetime of [...].

The biggest driving forces for why I prefer a simpler "return &[u8]" API for qXfer methods vs. a "write callback + offset" API for vFile methods are the following two points:
Shoot. As I finished writing those two points, I realized that my logic is flawed, and that qXfer APIs could absolutely be file-backed, and the idea that they'd be in memory as part of the "target's special data area" is totally bogus. If so, then it means that I'll need to change the APIs of TargetXML and MemoryMap to support a callback + offset API.

So, for action items:
That latter point is entirely orthogonal to the whole "return &[u8] vs. callback" question, but nonetheless, you've convinced me that providing this option would be nice - both in these qXfer APIs, and in the vFile pread and readlink APIs. If targets want to use this buffer, they can, and if not, they can always allocate their own.

And as a reminder, to keep PRs focused, please don't go changing any other APIs as part of this PR. If we want to start tweaking other stuff, let's do it in a follow-up.

sigh, I feel bad for stretching out this discussion for so long. sometimes it takes me a lot of back and forth and composing my own thoughts to get everything straightened out...
Oh, you finally agree with me. In fact, if you take a look at other commands of [...]

And I must remind you: after you chose the two-branch implementation in #64, all of them must be called and recalculated at least twice, even if the result can fit in one packet. I think we really should pass through a PacketBuf
That's quite easy:

```rust
let handler_status = match command {
    HostIo::vFilePread(cmd) => {
        // clamp the requested count to what the buffer can actually hold
        ops.pread(buf, cmd.fd, cmd.count.min(buf.len()), cmd.offset);
        // ......
        HandlerStatus::Handled
    }
};
```

The client won't know about it; it will just send another request with a new offset. And the implementation won't notice it; it just needs to fill the buffer with data of the requested size.
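To illustrate the round-trip flow being described, here's a self-contained sketch (helper name and sizes are hypothetical) of clamp-and-re-request: the handler copies what fits, and the "client" keeps asking with a growing offset until it gets an empty read:

```rust
// Copy as much of `cache[offset..]` as fits in `buf` (clamped to `count`).
fn pread_from_cache(cache: &[u8], buf: &mut [u8], offset: usize, count: usize) -> usize {
    let remaining = cache.get(offset..).unwrap_or(&[]);
    let n = remaining.len().min(count).min(buf.len());
    buf[..n].copy_from_slice(&remaining[..n]);
    n
}

fn main() {
    let file = b"this file is longer than one packet buffer";
    let mut buf = [0u8; 8]; // pretend the packet buffer only holds 8 bytes
    let mut offset = 0;
    loop {
        // the client may ask for far more than the buffer fits
        let n = pread_from_cache(file, &mut buf, offset, 1000);
        if n == 0 {
            break; // EOF: the stub would answer with an empty `l` response
        }
        println!("chunk: {:?}", String::from_utf8_lossy(&buf[..n]));
        offset += n; // the client simply re-requests at a new offset
    }
}
```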
Slight? No, it's the most common situation. Registers data can be filled in the buffer, the [...].

I think we could just delay this PR until we implement passing the buf as an argument. I really don't want to write another [...]
Sorry about this massive wall of text. I really tried to address every point + answer all your questions + make my opinion as clear as possible.
So, I'm not sure about the "recalculated" bit... More specifically, the "trigger" for recalculating the values (at least in a well-architected application) shouldn't be the actual qXfer-triggered handler invocation. Rather, some other lifecycle event (e.g: startup, on-library-load, etc...) would cause the underlying value to get recalculated + cached (in memory / on disk), such that the actual handler itself simply reads the pre-calculated data. Of course, you could also generate the data on the fly for these qXfer packets, but you run into the very real problem of redoing the same work [...].

If this implementation pattern isn't immediately obvious, we should work on improving the documentation around these qXfer-backed extensions to explain how the method is intended to be implemented.
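A sketch of that lifecycle-driven pattern (all names invented): the cache is rebuilt by the lifecycle event, and the handler only ever reads it:

```rust
struct Emu {
    exec_file_cache: Option<Vec<u8>>,
}

impl Emu {
    // Lifecycle event (startup, exec(), library load, ...) refreshes the cache.
    fn on_exec(&mut self, path: &[u8]) {
        self.exec_file_cache = Some(path.to_vec());
    }

    // The qXfer-backed handler does no work beyond reading pre-calculated data.
    fn get_exec_file(&self) -> Option<&[u8]> {
        self.exec_file_cache.as_deref()
    }
}

fn main() {
    let mut emu = Emu { exec_file_cache: None };
    emu.on_exec(b"/bin/init");
    assert_eq!(emu.get_exec_file(), Some(&b"/bin/init"[..]));
}
```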
Yeah, we could probably re-introduce the third branch as an optimization. The reason I removed it in the first place is because I wanted the "minimal" [...].

Of course, this is orthogonal to the rest of the discussion. Whether we add it or not doesn't change the fact that there will be plenty of times where the handlers will get called multiple times regardless.
A few things here:
Also, which problem are you referring to in #53?
This seems to be a philosophical difference between us, and unfortunately, this isn't something I'm willing to budge on 😅 Whether or not the spec should be updated to be more restrictive is a different discussion, but at the time being, the fact that the spec allows the client to request more data than the size of `PacketBuf` [...].

For my money, I'm tempted to send a patch upstream to GDB, whereby the client would try and "eagerly" request the entire file from the target before falling back to the PacketBuf-clamped, multi-invocation approach. If that were the case, then a callback-based implementation could stream the entire file back in a single [...].

The point being, the callback API is definitely going to stay. Plumbing through the [...]
Counterargument: consider a Msp430 target that wants to return a MemoryMap XML. If PacketBuf is assumed to be just big enough to hold the target's registers, that would mean that PacketBuf is paltry [...].

The point being, just because "many" commands are small enough to be sent in one packet, doesn't mean all of them are. There are plenty of commands that will require multiple invocations, and the API shouldn't artificially hamper their ability to send data back in one go, if a future, smarter GDB client comes along that doesn't artificially limit the amount of data sent back per invocation.
This is something I'm aware of, and is something I've tried my hand at implementing on numerous occasions. Each time I tried, I made some decent progress, but as time went on the code required to do so got gnarlier and messier, and I ended up throwing my efforts away...

That said, I've come to realize that this isn't actually something I'd like to implement for other reasons: it would break the use case of debugging multiple [...].

Admittedly, this is a feature that isn't likely to get implemented in the near future. It's firmly in the realm of "long-term" plans. Nonetheless, this is a use case that [...].

So, in a nutshell, the implementation effort + future compat hazard of plumbing [...]
Could you point out where the spec says the client can request more data than the size of `PacketBuf`? I think the whole foundation of your viewpoint is based on this sentence:
Then you draw the conclusion:
I think that's really not proper. Why do you think this sentence only applies to incoming packets? Because the [...]? But what if:
Is this possible? And:
Still possible? Oh, then we've encountered the problem of how to understand the sentence - what does it want to express? How could we know? We could ask the person who wrote this sentence, but it was written fifteen years ago, so that's not possible. But we have another way: dig into the implementation. Oh, implementation again. I know you said we should focus on the spec rather than the implementation, but the GDB docs are not really a spec. This "spec" was not created before the code; it's more like a doc alongside the code. The code at the time the doc was created represents the real meaning of the doc - do you agree?
I think I don't need to explain the code. You can read the code yourself and try to find whether [...]. And you can try [...]
Are you crazy? Firstly, requesting the entire file is really easy to implement. We could just omit the [...]. If things are like what you said - the server can reply with data of any length - why would the [...]?

Do you think about the practicality? Yes, with the callback, you could send data of any length. But is it possible for software to receive data of arbitrary length? Where does it hold it? You may say that we could receive and process at the same time, but we need to check the checksum before using the data. That's also the reason why we need [...]
I think you use the callback API for two reasons:
But have you thought about the price of it? To achieve it, you use [...]
So you really do care about efficiency? I thought you didn't care about it - I was surprised when I first discovered that each packet only sent one byte of data. To solve that, we may need something like [...]. And because of the existence of this buffer, the size of the outgoing packet is limited. If we must copy all data to a buffer before sending, isn't the callback API meaningless?
I mean the former. For example, if a server wants to implement [...]
No, I don't mean the minimum possible size of [...]
What I said was in reply to:
Yes, many commands are small and many commands are big. You can say any length of data is possible according to the spec. But if you use "slight" to discuss the possibility, I must retort. You could try to find it. If you can't find any situation where the commands are bigger than [...].

And the whole sentence I want to reply to is:
So, if a file is bigger than [...]
And there it is! That's the missing line that I didn't see in the spec! Thank you for pointing it out. Now that I see it clearly spelled out in the docs, rather than empirically observed via the GDB client implementation, I can safely say: you're right, I was wrong. While I still think the callback API has merits wrt. reducing copies + enabling streaming, it's undeniably a "weirder" API to work with for the end user, and copying data into a library-provided [...].

I'll touch on a few other points you made along the way, but most of those will simply be clarification + philosophy things, and won't be related to the actual work that'll need to get done in this PR. I'll go over action items at the tail end of this response.
Yes, I'm completely aware that the spec is descriptive rather than prescriptive. Nonetheless, that doesn't mean that the code is somehow "more correct" than the spec - in fact, the very fact that they go through the trouble of maintaining a spec doc implies that the spec is more important than the underlying implementation, as there are many people who'll end up implementing a [...].

This is most clear wrt. something like the vFile spec, where, as you've observed, the code doesn't use all the fields / flags, while the spec does require the target to send / respect all the fields / flags. So, to be clear, [...].

Of course, this is all my opinion / personal philosophy, and you are more than welcome to disagree with it. Hopefully, we can shelve this particular aspect of our discussion for now, and leave it at "agree to disagree" 😄
Could you elaborate on this point? This is over TCP, correct? If so, it might be related to #28. If this is behavior you're not a fan of, you can easily write a custom [...] that buffers writes before they hit the wire (see the sketch below).
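For example, a minimal sketch of wrapping the stream in a write buffer using only std APIs. The address is made up, and this is not gdbstub's `Connection` trait - it only illustrates the underlying idea of coalescing byte-at-a-time writes:

```rust
use std::io::{BufWriter, Write};
use std::net::TcpStream;

fn main() -> std::io::Result<()> {
    let stream = TcpStream::connect("127.0.0.1:9001")?; // hypothetical address
    let mut conn = BufWriter::new(stream);
    // Even if the stub emits the response one byte at a time...
    for &b in b"$OK#9a" {
        conn.write_all(&[b])?;
    }
    // ...the bytes leave as a single TCP segment on flush.
    conn.flush()?;
    Ok(())
}
```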
My comments wrt. efficiency and overhead come from the perspective of a UART-backed bare-metal case, where the perf/memory cost of extra copies and buffering could be a bit more noticeable. Obviously, in a hosted implementation using TCP, there are the TCP stack's overheads and outgoing packet buffers, etc...

To reiterate:
If I were writing my own GDB client from scratch, where I knew that it'd be running on my beefy 2021 computer with RAM to spare: I'd just use a dynamically sized vector as my incoming data buffer lol. I consider it an unfortunate detail that the GDB client uses a fixed-size incoming packet buffer, when it really ought to have split the [...].

Anyways, this ties back into the whole "target - client asymmetry" I've touched upon in the past, where for all intents and purposes, we can assume that the GDB client has "unlimited" resources, while the GDB target is usually far more resource constrained. This is an important property to keep in mind, as it explains many of the seemingly quirky design decisions in the GDB RSP.

Anyways, like I said earlier, those other responses are more so about clarifying my viewpoint, and aren't related to the work we need to do in this PR. I am now on board with shedding the callback API, and updating the packet parsing code + API to plumb through the unused space from the PacketBuf.

Unfortunately, as we discussed earlier, there's the whole problem of the packet parsing code having to preserve target-specific data (e.g: addresses/offsets) as hex-decoded [...].

Now, I am tempted to play with the code a bit, and see if there might be some way to get a [...].

As such, for the time being, you'd probably want to do something similar to the [...].

Once you've gotten this working for [...]
Also, just a heads up - I'll be on vacation from Thursday 'till Monday for the labor day long weekend. I might have time to review code, but if I don't get the chance, I'll be back next Tuesday.
After I removed [...], the situation is quite reasonable. We send data byte by byte, so it's impossible for TCP to know where the end of the data is; the only thing it can do is wait for a timeout, which can't be too long. As I said before, if we want to solve this problem, we must collect all the data to be sent into a buffer and use something like [...]. Just noticed you mentioned it before:

gdbstub/src/protocol/response_writer.rs Lines 16 to 17 in 9227dfd

If there are no [...]:

```rust
pub fn write_all(&mut self, data: &[u8]) -> Result<(), Error<C::Error>> {
    for b in data.iter() {
        self.write(*b)?;
    }
    Ok(())
}
```
Considering that most users will use TCP, we should use [...]. That also means that we must do the escaping and RLE in the buffer before sending:

```rust
pub fn write_binary(&mut self, data: &mut [u8]) -> Result<(), Error<C::Error>> {
    self.escape_data(data);
    self.rle_data(data);
    self.write_all(data)?;
    Ok(())
}
```

Now that we've decided to prepare the outgoing data in a buffer, the cost of the above should be very low.

P.S. I do all my testing on loopback.
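For reference, a small sketch of what escaping into a send buffer ahead of transmission can look like. Per the RSP's binary-data convention, `#`, `$`, `}` (and `*`, when RLE could misread it) are sent as `0x7d` followed by the original byte XOR `0x20`; the function name here is invented:

```rust
// Escape RSP-sensitive bytes into an output buffer before transmission.
fn escape_into(out: &mut Vec<u8>, data: &[u8]) {
    for &b in data {
        match b {
            b'#' | b'$' | b'}' | b'*' => {
                out.push(0x7d);     // escape marker '}'
                out.push(b ^ 0x20); // escaped payload byte
            }
            _ => out.push(b),
        }
    }
}

fn main() {
    let mut out = Vec::new();
    escape_into(&mut out, b"a}b");
    assert_eq!(out, [b'a', 0x7d, b'}' ^ 0x20, b'b']);
}
```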
But what if the target sends a huge amount of data? That could lead to a DoS attack. And most implementations won't use the output callback; they will use a limited buffer to hold the data to be sent. (So they won't run into the problem mentioned above.)
It would be quite easy if we were using C instead of Rust? But it's annoying with lifetimes. It seems to be a [...]. We could store the raw pointer of [...], or we could allocate a new buffer to hold the target-specific data? It won't be too big.

So you will do it, right? Hope it can be a bit sooner, because I want to release my project based on gdbstub; otherwise I must use my fork of gdbstub with dirty hacks 😅
(I'm writing this the morning as I leave for my trip, so pardon the brevity.)

Yes, I've set a reminder to open a tracking issue to use 100% of the packet buffer when formatting responses, and it'll be yet another thing to look into when I finally find time to release 0.6.

As for timing... I truly feel bad about sitting on so many unreleased changes, but the reality is I've been quite busy with work and life, and I haven't had much time to sit down and focus on [...]. I continue to say "sometime in the next few weeks", and yet the right time never seems to come. As such, as icky as it sounds, I wouldn't hold off on publishing your project just because [...]
Happy holiday~ 😄
Alright, I'm back, and it looks like there hasn't been much movement here... If I'm reading through the thread correctly, it seems this PR is blocked on changing the API to use a [...].

At some point in the future, I'll look into reworking the internal packet parsing architecture to use 100% of the packet buffer (rather than just a trailing "free" bit), but that shouldn't block this PR, as the end-user API will stay the same.
Not sure why you switched the PR back to a draft, since it's pretty much done 😅
I left a bit more feedback, but after you apply those changes, we should be good to merge.
Co-authored-by: Daniel Prilik <[email protected]>
Looks like you'll need to manually go in and add that missing semicolon.
Some questions:
Off the top of my head, I have no strong opinion one way or the other. The logic is pretty straightforward, so whether it's encapsulated or duplicated might not matter too much. If you have a stronger opinion + rationale for one over the other, I'm all ears.
Ah, what an interesting observation! One thing to consider is that if we were to add this check here, I would also want to add similar checks to other instances of this sort of thing in the codebase. I think we can hold off on this for now, and potentially add these checks behind something like a [...].
This is inside the [...].

If you'd like to explicitly use something like [...]
Same as 3., this is in the [...].

Let me know if there's anything else you'd like to do in this PR before merging. Otherwise, feel free to undraft, I'll do one last review, and we can merge it in.

EDIT: looks like the rustfmt CI is failing... you might need to manually fix that as well.
If we don't use [...]
The issue with the tri-branch implementation is that the current API doesn't have a way to signal "data has been read, AND it's EOF". We could tweak the API to have a "tri-state result" type, but IMO, it'd be more ergonomic to leave the API as-is + accept the fact that there'll be an extra packet-response to signal EOF. The current API is intuitive, as it follows the same pattern as other similar Rust APIs (e.g: `std::io::Read`).

If you feel strongly about the inefficiency, I guess we could change the signature to something like:

```rust
// naming things is hard
enum ReadStatus {
    DataRead(usize),
    DataReadEof(usize),
    Eof,
}

fn get_exec_file(
    &self,
    _pid: Option<Pid>,
    offset: usize,
    length: usize,
    buf: &mut [u8],
) -> TargetResult<ReadStatus, Self>;
```

...but this is clearly harder to grok, so I'd rather not.
We may check whether the returned size is smaller than the requested `length`.
Can you elaborate on that?
```rust
let ret = ops.get_exec_file(cmd.pid, cmd.offset, cmd.length, cmd.buf).handle_error()?;
if ret == 0 {
    res.write_str("l")?;
} else if ret < cmd.length {
    res.write_str("l")?;
    res.write_binary(cmd.buf.get(..ret).ok_or(Error::PacketBufferOverflow)?)?;
} else {
    res.write_str("m")?;
    res.write_binary(cmd.buf.get(..ret).ok_or(Error::PacketBufferOverflow)?)?;
}
```
Ahh, I see. My concern here (which may or may not be valid) is that as an implementer, it would be "surprising" to me if a partial read into the buffer would also signal "End of Data". This wouldn't match the typical semantics I'd expect of such a method in Rust, such as `std::io::Read::read` (see the sketch below).

As such, instead of having a "surprising" API contract, and leaning on the docs to explain this behavior, I think we should stick with the simpler approach that matches Rust semantics, or take the more verbose approach I sketched out above to make this behavior incredibly obvious.

As before, I personally lean towards "keep things simple, and do an extra roundtrip over the wire" over "complicate the API, but make it possible to save the roundtrip", but if you have strong opinions about this extra inefficiency, we could go for the latter approach.
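To spell out the `Read`-style contract being referenced: a short read does not signal EOF; EOF is only indicated by a later `Ok(0)`. A quick runnable illustration using std's `Read` impl for byte slices:

```rust
use std::io::Read;

fn main() -> std::io::Result<()> {
    let mut reader: &[u8] = b"hello world"; // &[u8] implements Read
    let mut buf = [0u8; 4];
    loop {
        match reader.read(&mut buf)? {
            0 => break, // only *this* means "End of Data"
            n => println!("read {} bytes: {:?}", n, &buf[..n]), // partial reads are normal
        }
    }
    Ok(())
}
```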
Then we can leave this PR as it is. This PR can be merged! 🎉
Hooray, it's in 🎉 To summarize what we'd want to do in a follow-up PR:
Let me know if I've missed anything, and I'll update this comment for posterity.
In hindsight, I don't think [...].

That is to say, please feel free to change the API signature of [...]
Description

This PR implements the `qXfer:exec-file:read` command, based off the GDB documentation here.

API Stability

Checklist

- `cargo build` compiles without errors or warnings
- `cargo clippy` runs without errors or warnings
- `cargo fmt` was run
- `examples/armv4t` with `RUST_LOG=trace` + any relevant GDB output under the "Validation" section below
- (`./scripts/test_dead_code_elim.sh` and/or `./example_no_std/check_size.sh`)
- `Arch` implementation

Validation

GDB output

armv4t output