The embedded ecosystem is full of different protocols, hardware components and vendor-specific things that use their own terms and abbreviations. This Glossary attempts to list them with pointers for understanding them better.
A Board Support Crate provides a high level interface configured for a specific board. It usually depends on a HAL crate. There is a more detailed description on the memory-mapped registers page or for a broader overview see this video.
Floating-point Unit. A 'math processor' running only operations on floating-point numbers.
A Hardware Abstraction Layer crate provides a developer friendly interface to a microcontroller's features and peripherals. It is usually implemented on top of a Peripheral Access Crate (PAC). It may also implement traits from the embedded-hal crate. There is a more detailed description on the memory-mapped registers page or for a broader overview see this video.
Sometimes referred to as I²C or Inter-IC. It is a protocol meant for hardware communication between integrated circuits, typically chips on the same board. See here for more details.
A Peripheral Access Crate provides access to a microcontroller's peripherals. It is one of the lower level crates and is usually generated directly from the provided SVD, often using svd2rust. The Hardware Abstraction Layer would usually depend on this crate. There is a more detailed description on the memory-mapped registers page or for a broader overview see this video.
Serial Peripheral Interface. See here for more information.
System View Description is an XML file format used to describe the programmer's view of a microcontroller device. You can read more about it on the ARM CMSIS documentation site.
Universal asynchronous receiver-transmitter. See here for more information.
Universal synchronous and asynchronous receiver-transmitter. See here for more information.
This chapter collects a variety of tips that might be useful to experienced embedded C developers looking to start writing Rust. It will especially highlight how things you might already be used to in C are different in Rust.
In embedded C it is very common to use the preprocessor for a variety of purposes, such as:

- compile-time selection of code blocks with #ifdef
- compile-time array sizes and computations
- macros to simplify common patterns (to avoid function call overhead)

In Rust there is no preprocessor, and so many of these use cases are addressed differently. In the rest of this section we cover various alternatives to using the preprocessor.
The closest match to #ifdef ... #endif in Rust are Cargo features. These are a little more formal than the C preprocessor: all possible features are explicitly listed per crate, and can only be either on or off. Features are turned on when you list a crate as a dependency, and are additive: if any crate in your dependency tree enables a feature for another crate, that feature will be enabled for all users of that crate.
For example, you might have a crate which provides a library of signal processing primitives. Each one might take some extra time to compile or declare some large table of constants which you'd like to avoid. You could declare a Cargo feature for each component in your Cargo.toml:
[features]
FIR = []
IIR = []
Then, in your code, use #[cfg(feature="FIR")] to control what is included.
// In your top-level lib.rs

#[cfg(feature="FIR")]
pub mod fir;

#[cfg(feature="IIR")]
pub mod iir;
You can similarly include code blocks only if a feature is not enabled, or if +any combination of features are or are not enabled.
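As a sketch of those combinations, cfg accepts not, all and any predicates; the feature names here are illustrative and would need to be declared in Cargo.toml first:

```rust
// Hypothetical features "FIR" and "IIR" for illustration only.

// Compiled only when the `IIR` feature is *not* enabled.
#[cfg(not(feature = "IIR"))]
fn describe() -> &'static str {
    "IIR disabled"
}

// Compiled only when `IIR` *is* enabled.
#[cfg(feature = "IIR")]
fn describe() -> &'static str {
    "IIR enabled"
}

// Present only when *both* features are enabled at once.
#[cfg(all(feature = "FIR", feature = "IIR"))]
fn fir_then_iir() {}

fn main() {
    // Built without `--features IIR`, this prints "IIR disabled".
    println!("{}", describe());
}
```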
Additionally, Rust provides a number of automatically-set conditions you can use, such as target_arch to select different code based on architecture. For full details of the conditional compilation support, refer to the conditional compilation chapter of the Rust reference.
The conditional compilation will only apply to the next statement or block. If a block cannot be used in the current scope then the cfg attribute will need to be used multiple times. It's worth noting that most of the time it is better to simply include all the code and allow the compiler to remove dead code when optimising: it's simpler for you and your users, and in general the compiler will do a good job of removing unused code.
Rust supports const fn, functions which are guaranteed to be evaluable at compile-time and can therefore be used where constants are required, such as in the size of arrays. This can be used alongside features mentioned above, for example:
const fn array_size() -> usize {
    #[cfg(feature="use_more_ram")]
    { 1024 }
    #[cfg(not(feature="use_more_ram"))]
    { 128 }
}

static BUF: [u32; array_size()] = [0u32; array_size()];
These are new to stable Rust as of 1.31, so documentation is still sparse. The functionality available to const fn is also very limited at the time of writing; in future Rust releases it is expected to expand on what is permitted in a const fn.
Rust provides an extremely powerful macro system. While the C preprocessor +operates almost directly on the text of your source code, the Rust macro system +operates at a higher level. There are two varieties of Rust macro: macros by +example and procedural macros. The former are simpler and most common; they +look like function calls and can expand to a complete expression, statement, +item, or pattern. Procedural macros are more complex but permit extremely +powerful additions to the Rust language: they can transform arbitrary Rust +syntax into new Rust syntax.
In general, where you might have used a C preprocessor macro, you probably want to see if a macro-by-example can do the job instead. They can be defined in your crate and easily used by your own crate or exported for other users. Be aware that since they must expand to complete expressions, statements, items, or patterns, some use cases of C preprocessor macros will not work, for example a macro that expands to part of a variable name or an incomplete set of items in a list.
As with Cargo features, it is worth considering if you even need the macro. In many cases a regular function is easier to understand and will be inlined to the same code as a macro. The #[inline] and #[inline(always)] attributes give you further control over this process, although care should be taken here as well — the compiler will automatically inline functions from the same crate where appropriate, so forcing it to do so inappropriately might actually lead to decreased performance.
Explaining the entire Rust macro system is out of scope for this tips page, so +you are encouraged to consult the Rust documentation for full details.
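To give a flavour, here is a minimal macro-by-example in the spirit of a C bit-mask macro; the reg_mask name and its use are invented for illustration:

```rust
// A macro-by-example expanding to a complete expression: build a u32
// bit mask from a list of bit positions. `reg_mask` is a made-up name.
macro_rules! reg_mask {
    ($($bit:expr),+) => {
        0u32 $(| (1 << $bit))+
    };
}

fn main() {
    // Bits 0, 4 and 7 set: 0b1001_0001.
    let mask = reg_mask!(0, 4, 7);
    assert_eq!(mask, 0b1001_0001);
}
```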
Most Rust crates are built using Cargo (although it is not required). This takes care of many difficult problems with traditional build systems. However, you may wish to customise the build process. Cargo provides build.rs scripts for this purpose. They are Rust scripts which can interact with the Cargo build system as required.
Common use cases for build scripts include:

- provide build-time information, for example statically embedding the build date or Git commit hash into your executable
- generate linker scripts at build time depending on selected features or other logic
- change the Cargo build configuration
- add extra static libraries to link against

At present there is no support for post-build scripts, which you might traditionally have used for tasks like automatic generation of binaries from the build objects or printing build information.
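As one sketch of the build-time-information use case, a build.rs can export an environment variable that the crate then reads with env!; the variable name BUILD_TS is an invented example:

```rust
// build.rs (sketch). Cargo runs this before compiling the crate.
use std::time::{SystemTime, UNIX_EPOCH};

// Seconds since the Unix epoch at build time.
fn build_timestamp() -> u64 {
    SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap()
        .as_secs()
}

fn main() {
    // `cargo:rustc-env` makes BUILD_TS readable in the crate as
    // env!("BUILD_TS").
    println!("cargo:rustc-env=BUILD_TS={}", build_timestamp());
    // Re-run this script only when it itself changes.
    println!("cargo:rerun-if-changed=build.rs");
}
```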
Using Cargo for your build system also simplifies cross-compiling. In most cases it suffices to tell Cargo --target thumbv6m-none-eabi and find a suitable executable in target/thumbv6m-none-eabi/debug/myapp.
For platforms not natively supported by Rust, you will need to build libcore for that target yourself. On such platforms, Xargo can be used as a stand-in for Cargo which automatically builds libcore for you.
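If you cross-compile often, the target can instead be set once in a Cargo configuration file so that a plain cargo build does the right thing; this fragment is a sketch, and the target triple should match your chip:

```toml
# .cargo/config.toml (or .cargo/config on older toolchains)
[build]
# Default target for every `cargo build` in this project.
target = "thumbv6m-none-eabi"
```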
In C you are probably used to accessing arrays directly by their index:
int16_t arr[16];
int i;
for(i=0; i<sizeof(arr)/sizeof(arr[0]); i++) {
    process(arr[i]);
}
In Rust this is an anti-pattern: indexed access can be slower (as it needs to be bounds checked) and may prevent various compiler optimisations. This is an important distinction and worth repeating: Rust will check for out-of-bounds access on manual array indexing to guarantee memory safety, while C will happily index outside the array.
Instead, use iterators:
let arr = [0u16; 16];
for element in arr.iter() {
    process(*element);
}
Iterators provide a wealth of functionality you would have to implement manually in C, such as chaining, zipping, enumerating, finding the min or max, summing, and more. Iterator methods can also be chained, giving very readable data processing code.

See the Iterators in the Book and Iterator documentation for more details.
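As a small illustration (the sample values are arbitrary), several of those combinators can replace typical indexed C loops:

```rust
// Iterator combinators replacing hand-written index loops.
fn main() {
    let samples = [4u16, 7, 1, 9, 3];
    let weights = [1u16, 2, 3, 2, 1];

    // Weighted sum via zip + map + sum: no index variable, no manual bounds.
    let acc: u32 = samples
        .iter()
        .zip(weights.iter())
        .map(|(&s, &w)| u32::from(s) * u32::from(w))
        .sum();
    assert_eq!(acc, 42); // 4 + 14 + 3 + 18 + 3

    // Min and max come for free.
    assert_eq!(samples.iter().max(), Some(&9));
    assert_eq!(samples.iter().min(), Some(&1));
}
```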
In Rust, pointers (called raw pointers) exist but are only used in specific circumstances, as dereferencing them is always considered unsafe -- Rust cannot provide its usual guarantees about what might be behind the pointer.
In most cases, we instead use references, indicated by the & symbol, or mutable references, indicated by &mut. References behave similarly to pointers, in that they can be dereferenced to access the underlying values, but they are a key part of Rust's ownership system: Rust will strictly enforce that you may only have one mutable reference or multiple non-mutable references to the same value at any given time.
In practice this means you have to be more careful about whether you need mutable access to data: where in C the default is mutable and you must be explicit about const, in Rust the opposite is true.
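A small sketch of these rules (the buffer contents are arbitrary):

```rust
// One &mut XOR any number of & references to the same value.
fn main() {
    let mut buf = [0u8; 4];

    {
        // Mutability must be requested explicitly with `&mut`.
        let first: &mut u8 = &mut buf[0];
        *first = 42;
        // While `first` is live, no other reference to `buf` may exist.
    }

    // Afterwards, any number of shared (immutable) references may coexist.
    let a = &buf;
    let b = &buf;
    assert_eq!(a[0], 42);
    assert_eq!(b[0], 42);
}
```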
One situation where you might still use raw pointers is interacting directly +with hardware (for example, writing a pointer to a buffer into a DMA peripheral +register), and they are also used under the hood for all peripheral access +crates to allow you to read and write memory-mapped registers.
In C, individual variables may be marked volatile, indicating to the compiler that the value in the variable may change between accesses. Volatile variables are commonly used in an embedded context for memory-mapped registers.
In Rust, instead of marking a variable as volatile, we use specific methods to perform volatile access: core::ptr::read_volatile and core::ptr::write_volatile. These methods take a *const T or a *mut T (raw pointers, as discussed above) and perform a volatile read or write.
For example, in C you might write:
volatile bool signalled = false;

void ISR() {
    // Signal that the interrupt has occurred
    signalled = true;
}

void driver() {
    while(true) {
        // Sleep until signalled
        while(!signalled) { WFI(); }
        // Reset signalled indicator
        signalled = false;
        // Perform some task that was waiting for the interrupt
        run_task();
    }
}
The equivalent in Rust would use volatile methods on each access:
static mut SIGNALLED: bool = false;

#[interrupt]
fn ISR() {
    // Signal that the interrupt has occurred
    // (In real code, you should consider a higher level primitive,
    // such as an atomic type).
    unsafe { core::ptr::write_volatile(&mut SIGNALLED, true) };
}

fn driver() {
    loop {
        // Sleep until signalled
        while unsafe { !core::ptr::read_volatile(&SIGNALLED) } {}
        // Reset signalled indicator
        unsafe { core::ptr::write_volatile(&mut SIGNALLED, false) };
        // Perform some task that was waiting for the interrupt
        run_task();
    }
}
A few things are worth noting in the code sample:

- We can pass &mut SIGNALLED into the function requiring *mut T, since &mut T automatically converts to a *mut T (and the same for *const T)
- We need unsafe blocks for the read_volatile/write_volatile methods, since they are unsafe functions. It is the programmer's responsibility to ensure safe use: see the methods' documentation for further details.

It is rare to require these functions directly in your code, as they will usually be taken care of for you by higher-level libraries. For memory mapped peripherals, the peripheral access crates will implement volatile access automatically, while for concurrency primitives there are better abstractions available (see the Concurrency chapter).
In embedded C it is common to tell the compiler a variable must have a certain alignment or a struct must be packed rather than aligned, usually to meet specific hardware or protocol requirements.
In Rust this is controlled by the repr attribute on a struct or union. The default representation provides no guarantees of layout, so should not be used for code that interoperates with hardware or C. The compiler may re-order struct members or insert padding and the behaviour may change with future versions of Rust.
struct Foo {
    x: u16,
    y: u8,
    z: u16,
}

fn main() {
    let v = Foo { x: 0, y: 0, z: 0 };
    println!("{:p} {:p} {:p}", &v.x, &v.y, &v.z);
}

// 0x7ffecb3511d0 0x7ffecb3511d4 0x7ffecb3511d2
// Note ordering has been changed to x, z, y to improve packing.
To ensure layouts that are interoperable with C, use repr(C):
#[repr(C)]
struct Foo {
    x: u16,
    y: u8,
    z: u16,
}

fn main() {
    let v = Foo { x: 0, y: 0, z: 0 };
    println!("{:p} {:p} {:p}", &v.x, &v.y, &v.z);
}

// 0x7fffd0d84c60 0x7fffd0d84c62 0x7fffd0d84c64
// Ordering is preserved and the layout will not change over time.
// `z` is two-byte aligned so a byte of padding exists between `y` and `z`.
To ensure a packed representation, use repr(packed):
#[repr(packed)]
struct Foo {
    x: u16,
    y: u8,
    z: u16,
}

fn main() {
    let v = Foo { x: 0, y: 0, z: 0 };
    // References must always be aligned, so to check the addresses of the
    // struct's fields, we use `std::ptr::addr_of!()` to get a raw pointer
    // instead of just printing `&v.x`.
    let px = std::ptr::addr_of!(v.x);
    let py = std::ptr::addr_of!(v.y);
    let pz = std::ptr::addr_of!(v.z);
    println!("{:p} {:p} {:p}", px, py, pz);
}

// 0x7ffd33598490 0x7ffd33598492 0x7ffd33598493
// No padding has been inserted between `y` and `z`, so now `z` is unaligned.
Note that using repr(packed) also sets the alignment of the type to 1.
Finally, to specify a specific alignment, use repr(align(n)), where n is the number of bytes to align to (and must be a power of two):
#[repr(C)]
#[repr(align(4096))]
struct Foo {
    x: u16,
    y: u8,
    z: u16,
}

fn main() {
    let v = Foo { x: 0, y: 0, z: 0 };
    let u = Foo { x: 0, y: 0, z: 0 };
    println!("{:p} {:p} {:p}", &v.x, &v.y, &v.z);
    println!("{:p} {:p} {:p}", &u.x, &u.y, &u.z);
}

// 0x7ffec909a000 0x7ffec909a002 0x7ffec909a004
// 0x7ffec909b000 0x7ffec909b002 0x7ffec909b004
// The two instances `u` and `v` have been placed on 4096-byte alignments,
// evidenced by the `000` at the end of their addresses.
Note we can combine repr(C) with repr(align(n)) to obtain an aligned and C-compatible layout. It is not permissible to combine repr(align(n)) with repr(packed), since repr(packed) sets the alignment to 1. It is also not permissible for a repr(packed) type to contain a repr(align(n)) type.
For further details on type layouts, refer to the type layout chapter of the Rust Reference.
Eventually you'll want to use dynamic data structures (AKA collections) in your program. std provides a set of common collections: Vec, String, HashMap, etc. All the collections implemented in std use a global dynamic memory allocator (AKA the heap).
As core is, by definition, free of memory allocations these implementations are not available there, but they can be found in the alloc crate that's shipped with the compiler.
If you need collections, a heap allocated implementation is not your only option. You can also use fixed capacity collections; one such implementation can be found in the heapless crate.
In this section, we'll explore and compare these two implementations.
alloc

The alloc crate is shipped with the standard Rust distribution. To import the crate you can directly use it without declaring it as a dependency in your Cargo.toml file.
#![feature(alloc)]

extern crate alloc;

use alloc::vec::Vec;
To be able to use any collection you'll first need to use the global_allocator attribute to declare the global allocator your program will use. It's required that the allocator you select implements the GlobalAlloc trait.
For completeness and to keep this section as self-contained as possible we'll +implement a simple bump pointer allocator and use that as the global allocator. +However, we strongly suggest you use a battle tested allocator from crates.io +in your program instead of this allocator.
// Bump pointer allocator implementation

use core::alloc::{GlobalAlloc, Layout};
use core::cell::UnsafeCell;
use core::ptr;

use cortex_m::interrupt;

// Bump pointer allocator for *single* core systems
struct BumpPointerAlloc {
    head: UnsafeCell<usize>,
    end: usize,
}

unsafe impl Sync for BumpPointerAlloc {}

unsafe impl GlobalAlloc for BumpPointerAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        // `interrupt::free` is a critical section that makes our allocator safe
        // to use from within interrupts
        interrupt::free(|_| {
            let head = self.head.get();
            let size = layout.size();
            let align = layout.align();
            let align_mask = !(align - 1);

            // move start up to the next alignment boundary
            let start = (*head + align - 1) & align_mask;

            if start + size > self.end {
                // a null pointer signals an Out Of Memory condition
                ptr::null_mut()
            } else {
                *head = start + size;
                start as *mut u8
            }
        })
    }

    unsafe fn dealloc(&self, _: *mut u8, _: Layout) {
        // this allocator never deallocates memory
    }
}

// Declaration of the global memory allocator
// NOTE the user must ensure that the memory region `[0x2000_0100, 0x2000_0200]`
// is not used by other parts of the program
#[global_allocator]
static HEAP: BumpPointerAlloc = BumpPointerAlloc {
    head: UnsafeCell::new(0x2000_0100),
    end: 0x2000_0200,
};
Apart from selecting a global allocator the user will also have to define how Out Of Memory (OOM) errors are handled using the unstable alloc_error_handler attribute.
#![feature(alloc_error_handler)]

use cortex_m::asm;

#[alloc_error_handler]
fn on_oom(_layout: Layout) -> ! {
    asm::bkpt();

    loop {}
}
Once all that is in place, the user can finally use the collections in alloc.
#[entry]
fn main() -> ! {
    let mut xs = Vec::new();

    xs.push(42);
    assert_eq!(xs.pop(), Some(42));

    loop {
        // ..
    }
}
If you have used the collections in the std crate then these will be familiar as they are the exact same implementation.
heapless

heapless requires no setup as its collections don't depend on a global memory allocator. Just use its collections and proceed to instantiate them:
// heapless version: v0.4.x
use heapless::Vec;
use heapless::consts::*;

#[entry]
fn main() -> ! {
    let mut xs: Vec<_, U8> = Vec::new();

    xs.push(42).unwrap();
    assert_eq!(xs.pop(), Some(42));
    loop {}
}
You'll note two differences between these collections and the ones in alloc.
First, you have to declare upfront the capacity of the collection. heapless collections never reallocate and have fixed capacities; this capacity is part of the type signature of the collection. In this case we have declared that xs has a capacity of 8 elements; that is, the vector can, at most, hold 8 elements. This is indicated by the U8 (see typenum) in the type signature.
Second, the push method, and many other methods, return a Result. Since the heapless collections have fixed capacity all operations that insert elements into the collection can potentially fail. The API reflects this problem by returning a Result indicating whether the operation succeeded or not. In contrast, alloc collections will reallocate themselves on the heap to increase their capacity.
As of version v0.4.x all heapless collections store all their elements inline. This means that an operation like let x = heapless::Vec::new(); will allocate the collection on the stack, but it's also possible to allocate the collection on a static variable, or even on the heap (Box<Vec<_, _>>).
Keep these in mind when choosing between heap allocated, relocatable collections and fixed capacity collections.
With heap allocations Out Of Memory is always a possibility and can occur in any place where a collection may need to grow: for example, all alloc::Vec.push invocations can potentially generate an OOM condition. Thus some operations can implicitly fail. Some alloc collections expose try_reserve methods that let you check for potential OOM conditions when growing the collection, but you need to be proactive about using them.
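For illustration, try_reserve can be checked before a burst of pushes; this sketch uses std's Vec for brevity, on the assumption that the same method is available on the Vec in alloc:

```rust
fn main() {
    let mut xs: Vec<u32> = Vec::new();

    // Ask for capacity up front and handle failure, instead of letting a
    // later push abort the program on OOM.
    match xs.try_reserve(1024) {
        Ok(()) => {
            for i in 0..1024 {
                xs.push(i); // guaranteed not to reallocate now
            }
        }
        Err(e) => {
            // Degrade gracefully: log, shed load, or use a smaller buffer.
            eprintln!("allocation failed: {:?}", e);
        }
    }

    assert!(xs.capacity() >= 1024);
}
```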
If you exclusively use heapless collections and you don't use a memory allocator for anything else then an OOM condition is impossible. Instead, you'll have to deal with collections running out of capacity on a case by case basis. That is, you'll have to deal with all the Results returned by methods like Vec.push.
OOM failures can be harder to debug than, say, unwrap-ing on all Results returned by heapless::Vec.push because the observed location of failure may not match with the location of the cause of the problem. For example, even vec.reserve(1) can trigger an OOM if the allocator is nearly exhausted because some other collection was leaking memory (memory leaks are possible in safe Rust).
Reasoning about memory usage of heap allocated collections is hard because the capacity of long lived collections can change at runtime. Some operations may implicitly reallocate the collection increasing its memory usage, and some collections expose methods like shrink_to_fit that can potentially reduce the memory used by the collection -- ultimately, it's up to the allocator to decide whether to actually shrink the memory allocation or not. Additionally, the allocator may have to deal with memory fragmentation which can increase the apparent memory usage.
On the other hand if you exclusively use fixed capacity collections, store most of them in static variables and set a maximum size for the call stack then the linker will detect if you try to use more memory than what's physically available.
Furthermore, fixed capacity collections allocated on the stack will be reported by the -Z emit-stack-sizes flag, which means that tools that analyze stack usage (like stack-sizes) will include them in their analysis.
However, fixed capacity collections cannot be shrunk, which can result in lower load factors (the ratio between the size of the collection and its capacity) than what relocatable collections can achieve.
If you are building time sensitive applications or hard real time applications then you care, maybe a lot, about the worst case execution time (WCET) of the different parts of your program.
The alloc collections can reallocate so the WCET of operations that may grow the collection will also include the time it takes to reallocate the collection, which itself depends on the runtime capacity of the collection. This makes it hard to determine the WCET of, for example, the alloc::Vec.push operation as it depends on both the allocator being used and its runtime capacity.
On the other hand fixed capacity collections never reallocate so all operations have a predictable execution time. For example, heapless::Vec.push executes in constant time.
alloc requires setting up a global allocator whereas heapless does not. However, heapless requires you to pick the capacity of each collection that you instantiate.
The alloc API will be familiar to virtually every Rust developer. The heapless API tries to closely mimic the alloc API but it will never be exactly the same due to its explicit error handling -- some developers may feel the explicit error handling is excessive or too cumbersome.
Concurrency happens whenever different parts of your program might execute at different times or out of order. In an embedded context, this includes:

- interrupt handlers, which run whenever the associated interrupt happens
- various forms of multithreading, where your microprocessor regularly swaps between parts of your program
- in some systems, multiple-core microprocessors, where each core can be independently running a different part of your program at the same time
Since many embedded programs need to deal with interrupts, concurrency will usually come up sooner or later, and it's also where many subtle and difficult bugs can occur. Luckily, Rust provides a number of abstractions and safety guarantees to help us write correct code.

The simplest concurrency for an embedded program is no concurrency: your software consists of a single main loop which just keeps running, and there are no interrupts at all. Sometimes this is perfectly suited to the problem at hand! Typically your loop will read some inputs, perform some processing, and write some outputs.
#[entry]
fn main() {
    let peripherals = setup_peripherals();
    loop {
        let inputs = read_inputs(&peripherals);
        let outputs = process(inputs);
        write_outputs(&peripherals, outputs);
    }
}
Since there's no concurrency, there's no need to worry about sharing data between parts of your program or synchronising access to peripherals. If you can get away with such a simple approach this can be a great solution.

Unlike non-embedded Rust, we will not usually have the luxury of creating heap allocations and passing references to that data into a newly-created thread. Instead, our interrupt handlers might be called at any time and must know how to access whatever shared memory we are using. At the lowest level, this means we must have statically allocated mutable memory, which both the interrupt handler and the main code can refer to.
In Rust, such static mut variables are always unsafe to read or write, because without taking special care, you might trigger a race condition, where your access to the variable is interrupted halfway through by an interrupt which also accesses that variable.
For an example of how this behaviour can cause subtle errors in your code, consider an embedded program which counts rising edges of some input signal in each one-second period (a frequency counter):
static mut COUNTER: u32 = 0;

#[entry]
fn main() -> ! {
    set_timer_1hz();
    let mut last_state = false;
    loop {
        let state = read_signal_level();
        if state && !last_state {
            // DANGER - Not actually safe! Could cause data races.
            unsafe { COUNTER += 1 };
        }
        last_state = state;
    }
}

#[interrupt]
fn timer() {
    unsafe { COUNTER = 0; }
}
Each second, the timer interrupt sets the counter back to 0. Meanwhile, the main loop continually measures the signal, and increments the counter when it sees a change from low to high. We've had to use unsafe to access COUNTER, as it's static mut, and that means we're promising the compiler we won't cause any undefined behaviour. Can you spot the race condition? The increment on COUNTER is not guaranteed to be atomic — in fact, on most embedded platforms, it will be split into a load, then the increment, then a store. If the interrupt fired after the load but before the store, the reset back to 0 would be ignored after the interrupt returns — and we would count twice as many transitions for that period.
So, what can we do about data races? A simple approach is to use critical sections, a context where interrupts are disabled. By wrapping the access to COUNTER in main in a critical section, we can be sure the timer interrupt will not fire until we're finished incrementing COUNTER:
static mut COUNTER: u32 = 0;

#[entry]
fn main() -> ! {
    set_timer_1hz();
    let mut last_state = false;
    loop {
        let state = read_signal_level();
        if state && !last_state {
            // New critical section ensures synchronised access to COUNTER
            cortex_m::interrupt::free(|_| {
                unsafe { COUNTER += 1 };
            });
        }
        last_state = state;
    }
}

#[interrupt]
fn timer() {
    unsafe { COUNTER = 0; }
}
In this example, we use cortex_m::interrupt::free, but other platforms will have similar mechanisms for executing code in a critical section. This is also the same as disabling interrupts, running some code, and then re-enabling interrupts.
Note we didn't need to put a critical section inside the timer interrupt, for two reasons:

- Writing 0 to COUNTER can't be affected by a race since we don't read it
- It will never be interrupted by the main thread anyway

If COUNTER was being shared by multiple interrupt handlers that might preempt each other, then each one might require a critical section as well.
This solves our immediate problem, but we're still left writing a lot of unsafe code which we need to carefully reason about, and we might be using critical sections needlessly. Since each critical section temporarily pauses interrupt processing, there is an associated cost of some extra code size and higher interrupt latency and jitter (interrupts may take longer to be processed, and the time until they are processed will be more variable). Whether this is a problem depends on your system, but in general, we'd like to avoid it.
It's worth noting that while a critical section guarantees no interrupts will fire, it does not provide an exclusivity guarantee on multi-core systems! The other core could be happily accessing the same memory as your core, even without interrupts. You will need stronger synchronisation primitives if you are using multiple cores.
On some platforms, special atomic instructions are available, which provide guarantees about read-modify-write operations. Specifically for Cortex-M: thumbv6 (Cortex-M0, Cortex-M0+) only provide atomic load and store instructions, while thumbv7 (Cortex-M3 and above) provide full Compare and Swap (CAS) instructions. These CAS instructions give an alternative to the heavy-handed disabling of all interrupts: we can attempt the increment, it will succeed most of the time, but if it was interrupted it will automatically retry the entire increment operation. These atomic operations are safe even across multiple cores.
use core::sync::atomic::{AtomicUsize, Ordering};

static COUNTER: AtomicUsize = AtomicUsize::new(0);

#[entry]
fn main() -> ! {
    set_timer_1hz();
    let mut last_state = false;
    loop {
        let state = read_signal_level();
        if state && !last_state {
            // Use `fetch_add` to atomically add 1 to COUNTER
            COUNTER.fetch_add(1, Ordering::Relaxed);
        }
        last_state = state;
    }
}

#[interrupt]
fn timer() {
    // Use `store` to write 0 directly to COUNTER
    COUNTER.store(0, Ordering::Relaxed)
}
This time COUNTER is a safe static variable. Thanks to the AtomicUsize type COUNTER can be safely modified from both the interrupt handler and the main thread without disabling interrupts. When possible, this is a better solution — but it may not be supported on your platform.
A note on Ordering: this affects how the compiler and hardware may reorder instructions, and also has consequences on cache visibility. Assuming that the target is a single core platform Relaxed is sufficient and the most efficient choice in this particular case. Stricter ordering will cause the compiler to emit memory barriers around the atomic operations; depending on what you're using atomics for you may or may not need this! The precise details of the atomic model are complicated and best described elsewhere.
For more details on atomics and ordering, see the nomicon.
None of the above solutions are especially satisfactory. They require unsafe blocks which must be very carefully checked and are not ergonomic. Surely we can do better in Rust!
We can abstract our counter into a safe interface which can be safely used anywhere else in our code. For this example, we'll use the critical-section counter, but you could do something very similar with atomics.
use core::cell::UnsafeCell;
use cortex_m::interrupt;

// Our counter is just a wrapper around UnsafeCell<u32>, which is the heart
// of interior mutability in Rust. By using interior mutability, we can have
// COUNTER be `static` instead of `static mut`, but still able to mutate
// its counter value.
struct CSCounter(UnsafeCell<u32>);

const CS_COUNTER_INIT: CSCounter = CSCounter(UnsafeCell::new(0));

impl CSCounter {
    pub fn reset(&self, _cs: &interrupt::CriticalSection) {
        // By requiring a CriticalSection be passed in, we know we must
        // be operating inside a CriticalSection, and so can confidently
        // use this unsafe block (required to call UnsafeCell::get).
        unsafe { *self.0.get() = 0 };
    }

    pub fn increment(&self, _cs: &interrupt::CriticalSection) {
        unsafe { *self.0.get() += 1 };
    }
}

// Required to allow static CSCounter. See explanation below.
unsafe impl Sync for CSCounter {}

// COUNTER is no longer `mut` as it uses interior mutability;
// therefore it also no longer requires unsafe blocks to access.
static COUNTER: CSCounter = CS_COUNTER_INIT;

#[entry]
fn main() -> ! {
    set_timer_1hz();
    let mut last_state = false;
    loop {
        let state = read_signal_level();
        if state && !last_state {
            // No unsafe here!
            interrupt::free(|cs| COUNTER.increment(cs));
        }
        last_state = state;
    }
}

#[interrupt]
fn timer() {
    // We do need to enter a critical section here just to obtain a valid
    // cs token, even though we know no other interrupt could pre-empt
    // this one.
    interrupt::free(|cs| COUNTER.reset(cs));

    // We could use unsafe code to generate a fake CriticalSection if we
    // really wanted to, avoiding the overhead:
    // let cs = unsafe { interrupt::CriticalSection::new() };
}
+We've moved our unsafe
code to inside our carefully-planned abstraction,
+and now our application code does not contain any unsafe
blocks.
This design requires that the application pass a CriticalSection
token in:
+these tokens are only safely generated by interrupt::free
, so by requiring
+one be passed in, we ensure we are operating inside a critical section, without
+having to actually do the lock ourselves. This guarantee is provided statically
+by the compiler: there won't be any runtime overhead associated with cs
.
+If we had multiple counters, they could all be given the same cs
, without
+requiring multiple nested critical sections.
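The "same cs token for several resources" point can be demonstrated on a hosted build with stand-in types (our CriticalSection and free here only mimic the shapes of the cortex_m items; on hardware, free would actually mask interrupts):

```rust
use std::cell::UnsafeCell;

// Stand-in token type, mirroring cortex_m::interrupt::CriticalSection.
struct CriticalSection(());

// Stand-in for interrupt::free: on real hardware this would disable
// interrupts for the duration of the closure.
fn free<R>(f: impl FnOnce(&CriticalSection) -> R) -> R {
    f(&CriticalSection(()))
}

struct CSCounter(UnsafeCell<u32>);

// Sound here only because this sketch is single threaded; on hardware the
// safety argument is the critical section, as discussed above.
unsafe impl Sync for CSCounter {}

impl CSCounter {
    pub fn increment(&self, _cs: &CriticalSection) {
        unsafe { *self.0.get() += 1 };
    }
    pub fn get(&self, _cs: &CriticalSection) -> u32 {
        unsafe { *self.0.get() }
    }
}

static EDGES: CSCounter = CSCounter(UnsafeCell::new(0));
static ERRORS: CSCounter = CSCounter(UnsafeCell::new(0));

fn main() {
    // One critical section covers both counters: the same `cs` token is
    // handed to each, with no nested locking required.
    free(|cs| {
        EDGES.increment(cs);
        ERRORS.increment(cs);
    });
    assert_eq!(free(|cs| EDGES.get(cs)), 1);
    assert_eq!(free(|cs| ERRORS.get(cs)), 1);
}
```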
This also brings up an important topic for concurrency in Rust: the
+Send
and Sync
traits. To summarise the Rust book, a type is Send
+when it can safely be moved to another thread, while it is Sync when
+it can be safely shared between multiple threads. In an embedded context,
+we consider interrupts to be executing in a separate thread to the application
+code, so variables accessed by both an interrupt and the main code must be
+Sync.
For most types in Rust, both of these traits are automatically derived for you
+by the compiler. However, because CSCounter
contains an UnsafeCell
, it is
+not Sync, and therefore we could not make a static CSCounter
: static
+variables must be Sync, since they can be accessed by multiple threads.
To tell the compiler we have taken care that the CSCounter
is in fact safe
+to share between threads, we implement the Sync trait explicitly. As with the
+previous use of critical sections, this is only safe on single-core platforms:
+with multiple cores, you would need to go to greater lengths to ensure safety.
We've created a useful abstraction specific to our counter problem, but +there are many common abstractions used for concurrency.
+One such synchronisation primitive is a mutex, short for mutual exclusion.
+These constructs ensure exclusive access to a variable, such as our counter. A
+thread can attempt to lock (or acquire) the mutex, and either succeeds
+immediately, or blocks waiting for the lock to be acquired, or returns an error
+that the mutex could not be locked. While that thread holds the lock, it is
+granted access to the protected data. When the thread is done, it unlocks (or
+releases) the mutex, allowing another thread to lock it. In Rust, we would
+usually implement the unlock using the Drop
trait to ensure it is always
+released when the mutex goes out of scope.
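On hosted platforms, std's own Mutex shows the Drop-based unlock described above (a blocking lock like this is exactly what we cannot use from an interrupt handler):

```rust
use std::sync::Mutex;

fn main() {
    let counter = Mutex::new(0u32);
    {
        // lock() blocks until the mutex is available and returns a guard.
        let mut guard = counter.lock().unwrap();
        *guard += 1;
        // The guard's Drop impl releases the lock at the end of this scope.
    }
    // The mutex can now be locked again.
    assert_eq!(*counter.lock().unwrap(), 1);
}
```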
Using a mutex with interrupt handlers can be tricky: it is not normally +acceptable for the interrupt handler to block, and it would be especially +disastrous for it to block waiting for the main thread to release a lock, +since we would then deadlock (the main thread will never release the lock +because execution stays in the interrupt handler). Deadlocking is not +considered unsafe: it is possible even in safe Rust.
+To avoid this behaviour entirely, we could implement a mutex which requires +a critical section to lock, just like our counter example. So long as the +critical section must last as long as the lock, we can be sure we have +exclusive access to the wrapped variable without even needing to track +the lock/unlock state of the mutex.
+This is in fact done for us in the cortex_m
crate! We could have written
+our counter using it:
use core::cell::Cell;
+use cortex_m::interrupt::Mutex;
+
+static COUNTER: Mutex<Cell<u32>> = Mutex::new(Cell::new(0));
+
+#[entry]
+fn main() -> ! {
+ set_timer_1hz();
+ let mut last_state = false;
+ loop {
+ let state = read_signal_level();
+ if state && !last_state {
+ interrupt::free(|cs|
+ COUNTER.borrow(cs).set(COUNTER.borrow(cs).get() + 1));
+ }
+ last_state = state;
+ }
+}
+
+#[interrupt]
+fn timer() {
+ // We still need to enter a critical section here to satisfy the Mutex.
+ interrupt::free(|cs| COUNTER.borrow(cs).set(0));
+}
+We're now using Cell
, which along with its sibling RefCell
is used to
+provide safe interior mutability. We've already seen UnsafeCell
which is
+the bottom layer of interior mutability in Rust: it allows you to obtain
+multiple mutable references to its value, but only with unsafe code. A Cell
+is like an UnsafeCell
but it provides a safe interface: it only permits
+taking a copy of the current value or replacing it, not taking a reference,
+and since it is not Sync, it cannot be shared between threads. These
+constraints mean it's safe to use, but we couldn't use it directly in a
+static
variable as a static
must be Sync.
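The difference between the two is easy to see in a small hosted example:

```rust
use std::cell::{Cell, RefCell};

fn main() {
    // Cell: values are copied in and out; no references to the contents.
    let c = Cell::new(5u32);
    c.set(c.get() + 1);
    assert_eq!(c.get(), 6);

    // RefCell: hands out references, with exclusivity checked at runtime.
    let r = RefCell::new(vec![1, 2, 3]);
    r.borrow_mut().push(4);
    assert_eq!(r.borrow().len(), 4);

    // Taking a second mutable borrow while one is live would panic:
    // let a = r.borrow_mut();
    // let b = r.borrow_mut(); // panic: already mutably borrowed
}
```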
So why does the example above work? The Mutex<T>
implements Sync for any
+T
which is Send — such as a Cell
. It can do this safely because it only
+gives access to its contents during a critical section. We're therefore able
+to get a safe counter with no unsafe code at all!
This is great for simple types like the u32
of our counter, but what about
+more complex types which are not Copy? An extremely common example in an
+embedded context is a peripheral struct, which generally is not Copy.
+For that, we can turn to RefCell
.
Device crates generated using svd2rust
and similar abstractions provide
+safe access to peripherals by enforcing that only one instance of the
+peripheral struct can exist at a time. This ensures safety, but makes it
+difficult to access a peripheral from both the main thread and an interrupt
+handler.
To safely share peripheral access, we can use the Mutex
we saw before. We'll
+also need to use RefCell
, which uses a runtime check to ensure only one
+reference to a peripheral is given out at a time. This has more overhead than
+the plain Cell
, but since we are giving out references rather than copies,
+we must be sure only one exists at a time.
Finally, we'll also have to account for somehow moving the peripheral into
+the shared variable after it has been initialised in the main code. To do
+this we can use the Option
type, initialised to None
and later set to
+the instance of the peripheral.
use core::cell::RefCell;
+use cortex_m::interrupt::{self, Mutex};
+use stm32f4::stm32f405;
+
+static MY_GPIO: Mutex<RefCell<Option<stm32f405::GPIOA>>> =
+ Mutex::new(RefCell::new(None));
+
+#[entry]
+fn main() -> ! {
+ // Obtain the device peripherals and configure the GPIO.
+ // This example is from an svd2rust-generated crate, but
+ // most embedded device crates will be similar.
+ let dp = stm32f405::Peripherals::take().unwrap();
+ let gpioa = &dp.GPIOA;
+
+ // Some sort of configuration function.
+ // Assume it sets PA0 to an input and PA1 to an output.
+ configure_gpio(gpioa);
+
+ // Store the GPIOA in the mutex, moving it.
+ interrupt::free(|cs| MY_GPIO.borrow(cs).replace(Some(dp.GPIOA)));
+ // We can no longer use `gpioa` or `dp.GPIOA`, and instead have to
+ // access it via the mutex.
+
+ // Be careful to enable the interrupt only after setting MY_GPIO:
+ // otherwise the interrupt might fire while it still contains None,
+ // and as-written (with `unwrap()`), it would panic.
+ set_timer_1hz();
+ let mut last_state = false;
+ loop {
+ // We'll now read state as a digital input, via the mutex
+ let state = interrupt::free(|cs| {
+ let gpioa = MY_GPIO.borrow(cs).borrow();
+ gpioa.as_ref().unwrap().idr.read().idr0().bit_is_set()
+ });
+
+ if state && !last_state {
+ // Set PA1 high if we've seen a rising edge on PA0.
+ interrupt::free(|cs| {
+ let gpioa = MY_GPIO.borrow(cs).borrow();
+ gpioa.as_ref().unwrap().odr.modify(|_, w| w.odr1().set_bit());
+ });
+ }
+ last_state = state;
+ }
+}
+
+#[interrupt]
+fn timer() {
+ // This time in the interrupt we'll just clear PA1.
+ interrupt::free(|cs| {
+ // We can use `unwrap()` because we know the interrupt wasn't enabled
+ // until after MY_GPIO was set; otherwise we should handle the potential
+ // for a None value.
+ let gpioa = MY_GPIO.borrow(cs).borrow();
+ gpioa.as_ref().unwrap().odr.modify(|_, w| w.odr1().clear_bit());
+ });
+}
+That's quite a lot to take in, so let's break down the important lines.
+static MY_GPIO: Mutex<RefCell<Option<stm32f405::GPIOA>>> =
+ Mutex::new(RefCell::new(None));
+Our shared variable is now a Mutex
around a RefCell
which contains an
+Option
. The Mutex
ensures we only have access during a critical section,
+and therefore makes the variable Sync, even though a plain RefCell
would not
+be Sync. The RefCell
gives us interior mutability with references, which
+we'll need to use our GPIOA
. The Option
lets us initialise this variable
+to something empty, and only later actually move the variable in. We cannot
+access the peripheral singleton statically, only at runtime, so this is
+required.
interrupt::free(|cs| MY_GPIO.borrow(cs).replace(Some(dp.GPIOA)));
+Inside a critical section we can call borrow()
on the mutex, which gives us
+a reference to the RefCell
. We then call replace()
to move our new value
+into the RefCell
.
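This use of replace() is plain RefCell behaviour, which can be tried in isolation on a hosted build:

```rust
use std::cell::RefCell;

fn main() {
    // Stands in for the MY_GPIO pattern, minus the Mutex and peripheral.
    let shared: RefCell<Option<String>> = RefCell::new(None);

    // replace() moves the new value in and returns the previous contents.
    let old = shared.replace(Some(String::from("GPIOA")));
    assert!(old.is_none());
    assert!(shared.borrow().is_some());
}
```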
interrupt::free(|cs| {
+ let gpioa = MY_GPIO.borrow(cs).borrow();
+ gpioa.as_ref().unwrap().odr.modify(|_, w| w.odr1().set_bit());
+});
+Finally, we use MY_GPIO
in a safe and concurrent fashion. The critical section
+prevents the interrupt firing as usual, and lets us borrow the mutex. The
+RefCell
then gives us an &Option<GPIOA>
, and tracks how long it remains
+borrowed - once that reference goes out of scope, the RefCell
will be updated
+to indicate it is no longer borrowed.
Since we can't move the GPIOA
out of the &Option
, we need to convert it to
+an &Option<&GPIOA>
with as_ref()
, which we can finally unwrap()
to obtain
+the &GPIOA
which lets us modify the peripheral.
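The as_ref() and unwrap() steps are ordinary Option behaviour, shown here without the peripheral types:

```rust
fn main() {
    let maybe: Option<String> = Some(String::from("GPIOA"));

    // We can't move the String out of a shared &Option, but as_ref()
    // converts &Option<String> into Option<&String>...
    let inner: &String = maybe.as_ref().unwrap();
    // ...which unwrap() turns into a plain reference to the contents.
    assert_eq!(inner, "GPIOA");

    // Nothing was moved: `maybe` is still intact.
    assert!(maybe.is_some());
}
```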
If we need a mutable reference to a shared resource, then borrow_mut
and deref_mut
+should be used instead. The following code shows an example using the TIM2 timer.
use core::cell::RefCell;
+use core::ops::DerefMut;
+use cortex_m::interrupt::{self, Mutex};
+use cortex_m::asm::wfi;
+use stm32f4::stm32f405;
+
+static G_TIM: Mutex<RefCell<Option<Timer<stm32f405::TIM2>>>> =
+ Mutex::new(RefCell::new(None));
+
+#[entry]
+fn main() -> ! {
+ let mut cp = cortex_m::Peripherals::take().unwrap();
+ let dp = stm32f405::Peripherals::take().unwrap();
+
+ // Some sort of timer configuration function.
+ // Assume it configures the TIM2 timer, its NVIC interrupt,
+ // and finally starts the timer.
+ let tim = configure_timer_interrupt(&mut cp, dp);
+
+ interrupt::free(|cs| {
+ G_TIM.borrow(cs).replace(Some(tim));
+ });
+
+ loop {
+ wfi();
+ }
+}
+
+#[interrupt]
+fn timer() {
+ interrupt::free(|cs| {
+ if let Some(ref mut tim) = G_TIM.borrow(cs).borrow_mut().deref_mut() {
+ tim.start(1.hz());
+ }
+ });
+}
+
+Whew! This is safe, but it is also a little unwieldy. Is there anything else +we can do?
+One alternative is the RTIC framework, short for Real-Time Interrupt-driven Concurrency. It
+enforces static priorities and tracks accesses to static mut
variables
+("resources") to statically ensure that shared resources are always accessed
+safely, without requiring the overhead of always entering critical sections and
+using reference counting (as in RefCell
). This has a number of advantages such
+as guaranteeing no deadlocks and giving extremely low time and memory overhead.
The framework also includes other features like message passing, which reduces +the need for explicit shared state, and the ability to schedule tasks to run at +a given time, which can be used to implement periodic tasks. Check out the +documentation for more information!
+Another common model for embedded concurrency is the real-time operating system (RTOS). While currently less well explored in Rust, RTOSes are widely used in traditional embedded development. Open source examples include FreeRTOS and ChibiOS. These RTOSes support running multiple application threads which the CPU swaps between, either when the threads yield control (called cooperative multitasking) or based on a regular timer or interrupts (preemptive multitasking). An RTOS typically provides mutexes and other synchronisation primitives, and often interoperates with hardware features such as DMA engines.
+At the time of writing, there are not many Rust RTOS examples to point to, +but it's an interesting area so watch this space!
+It is becoming more common to have two or more cores in embedded processors,
+which adds an extra layer of complexity to concurrency. All the examples using
+a critical section (including the cortex_m::interrupt::Mutex
) assume the only
+other execution thread is the interrupt thread, but on a multi-core system
+that's no longer true. Instead, we'll need synchronisation primitives designed
+for multiple cores (also called SMP, for symmetric multi-processing).
These typically use the atomic instructions we saw earlier, since the +processing system will ensure that atomicity is maintained over all cores.
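A hosted sketch with OS threads standing in for cores shows why: the atomic read-modify-write stays correct however the increments interleave:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::thread;

static COUNTER: AtomicUsize = AtomicUsize::new(0);

fn main() {
    // Four threads (think: four cores) hammer the same counter.
    let handles: Vec<_> = (0..4)
        .map(|_| {
            thread::spawn(|| {
                for _ in 0..1000 {
                    COUNTER.fetch_add(1, Ordering::Relaxed);
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    // No increments are lost: each fetch_add is atomic across all cores.
    assert_eq!(COUNTER.load(Ordering::Relaxed), 4000);
}
```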
+Covering these topics in detail is currently beyond the scope of this book, +but the general patterns are the same as for the single-core case.
+ +embedded-hal
traits (C-HAL-TRAITS)GPIO Interfaces exposed by the HAL should provide dedicated zero-sized types for +each pin on every interface or port, resulting in a zero-cost GPIO abstraction +when all pin assignments are statically known.
+Each GPIO Interface or Port should implement a split
method returning a
+struct with every pin.
Example:
++ +#![allow(unused)] +fn main() { +pub struct PA0; +pub struct PA1; +// ... + +pub struct PortA; + +impl PortA { + pub fn split(self) -> PortAPins { + PortAPins { + pa0: PA0, + pa1: PA1, + // ... + } + } +} + +pub struct PortAPins { + pub pa0: PA0, + pub pa1: PA1, + // ... +} +}
Pins should provide type erasure methods that move their properties from +compile time to runtime, and allow more flexibility in applications.
+Example:
++ +#![allow(unused)] +fn main() { +/// Port A, pin 0. +pub struct PA0; + +impl PA0 { + pub fn erase_pin(self) -> PA { + PA { pin: 0 } + } +} + +/// A pin on port A. +pub struct PA { + /// The pin number. + pin: u8, +} + +impl PA { + pub fn erase_port(self) -> Pin { + Pin { + port: Port::A, + pin: self.pin, + } + } +} + +pub struct Pin { + port: Port, + pin: u8, + // (these fields can be packed to reduce the memory footprint) +} + +enum Port { + A, + B, + C, + D, +} +}
Pins may be configured as input or output with different characteristics +depending on the chip or family. This state should be encoded in the type system +to prevent use of pins in incorrect states.
+Additional, chip-specific state (eg. drive strength) may also be encoded in this +way, using additional type parameters.
+Methods for changing the pin state should be provided as into_input
and
+into_output
methods.
Additionally, with_{input,output}_state
methods should be provided that
+temporarily reconfigure a pin in a different state without moving it.
The following methods should be provided for every pin type (that is, both +erased and non-erased pin types should provide the same API):
+pub fn into_input<N: InputState>(self, input: N) -> Pin<N>
pub fn into_output<N: OutputState>(self, output: N) -> Pin<N>
pub fn with_input_state<N: InputState, R>(
+ &mut self,
+ input: N,
+ f: impl FnOnce(&mut PA1<N>) -> R,
+) -> R
+
+pub fn with_output_state<N: OutputState, R>(
+ &mut self,
+ output: N,
+ f: impl FnOnce(&mut PA1<N>) -> R,
+) -> R
+
+Pin state should be bounded by sealed traits. Users of the HAL should have no +need to add their own state. The traits can provide HAL-specific methods +required to implement the pin state API.
+Example:
++ +#![allow(unused)] +fn main() { +use std::marker::PhantomData; +mod sealed { + pub trait Sealed {} +} + +pub trait PinState: sealed::Sealed {} +pub trait OutputState: sealed::Sealed {} +pub trait InputState: sealed::Sealed { + // ... +} + +pub struct Output<S: OutputState> { + _p: PhantomData<S>, +} + +impl<S: OutputState> PinState for Output<S> {} +impl<S: OutputState> sealed::Sealed for Output<S> {} + +pub struct PushPull; +pub struct OpenDrain; + +impl OutputState for PushPull {} +impl OutputState for OpenDrain {} +impl sealed::Sealed for PushPull {} +impl sealed::Sealed for OpenDrain {} + +pub struct Input<S: InputState> { + _p: PhantomData<S>, +} + +impl<S: InputState> PinState for Input<S> {} +impl<S: InputState> sealed::Sealed for Input<S> {} + +pub struct Floating; +pub struct PullUp; +pub struct PullDown; + +impl InputState for Floating {} +impl InputState for PullUp {} +impl InputState for PullDown {} +impl sealed::Sealed for Floating {} +impl sealed::Sealed for PullUp {} +impl sealed::Sealed for PullDown {} + +pub struct PA1<S: PinState> { + _p: PhantomData<S>, +} + +impl<S: PinState> PA1<S> { + pub fn into_input<N: InputState>(self, input: N) -> PA1<Input<N>> { + todo!() + } + + pub fn into_output<N: OutputState>(self, output: N) -> PA1<Output<N>> { + todo!() + } + + pub fn with_input_state<N: InputState, R>( + &mut self, + input: N, + f: impl FnOnce(&mut PA1<N>) -> R, + ) -> R { + todo!() + } + + pub fn with_output_state<N: OutputState, R>( + &mut self, + output: N, + f: impl FnOnce(&mut PA1<N>) -> R, + ) -> R { + todo!() + } +} + +// Same for `PA` and `Pin`, and other pin types. +}
This is a set of common and recommended patterns for writing hardware +abstraction layers (HALs) for microcontrollers in Rust. These patterns are +intended to be used in addition to the existing Rust API Guidelines when +writing HALs for microcontrollers.
+ + + +Any non-Copy
wrapper type provided by the HAL should provide a free
method
+that consumes the wrapper and returns back the raw peripheral (and possibly
+other objects) it was created from.
The method should shut down and reset the peripheral if necessary. Calling new
+with the raw peripheral returned by free
should not fail due to an unexpected
+state of the peripheral.
If the HAL type requires other non-Copy
objects to be constructed (for example
+I/O pins), any such object should be released and returned by free
as well.
+free
should return a tuple in that case.
For example:
++ +#![allow(unused)] +fn main() { +pub struct TIMER0; +pub struct Timer(TIMER0); + +impl Timer { + pub fn new(periph: TIMER0) -> Self { + Self(periph) + } + + pub fn free(self) -> TIMER0 { + self.0 + } +} +}
HALs can be written on top of svd2rust-generated PACs, or on top of other +crates that provide raw register access. HALs should always reexport the +register access crate they are based on in their crate root.
+A PAC should be reexported under the name pac
, regardless of the actual name
+of the crate, as the name of the HAL should already make it clear what PAC is
+being accessed.
embedded-hal
traits (C-HAL-TRAITS)Types provided by the HAL should implement all applicable traits provided by the
+embedded-hal
crate.
Multiple traits may be implemented for the same type.
+ +HAL crates should be named after the chip or family of chips they aim to
+support. Their name should end with -hal
to distinguish them from register
+access crates. The name should not contain underscores (use dashes instead).
All peripherals to which the HAL adds functionality should be wrapped in a new +type, even if no additional fields are required for that functionality.
+Extension traits implemented for the raw peripheral should be avoided.
+ +#[inline]
where appropriate (C-INLINE)The Rust compiler does not by default perform full inlining across crate
+boundaries. As embedded applications are sensitive to unexpected code size
+increases, #[inline]
should be used to guide the compiler as follows:
#[inline]
. What qualifies as "small"
+is subjective, but generally all functions that are expected to compile down
+to single-digit instruction sequences qualify as small.#[inline]
. This enables the compiler to compute even complicated
+initialization logic at compile time, provided the function inputs are known.This chapter aims to collect various useful design patterns for embedded Rust.
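Both cases might look like this (bit_is_set and baud_divisor are invented examples, not from any particular HAL):

```rust
// A "small" function: expected to compile to a couple of instructions,
// so cross-crate inlining is worth requesting.
#[inline]
pub fn bit_is_set(reg: u32, bit: u32) -> bool {
    (reg >> bit) & 1 != 0
}

// Likely to be constant-folded: with a known clock and baud rate the
// whole computation can happen at compile time at the call site.
#[inline]
pub fn baud_divisor(clock_hz: u32, baud: u32) -> u32 {
    (clock_hz + baud / 2) / baud
}

fn main() {
    assert!(bit_is_set(0b100, 2));
    assert_eq!(baud_divisor(16_000_000, 115_200), 139);
}
```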
+ +Welcome to The Embedded Rust Book: An introductory book about using the Rust +Programming Language on "Bare Metal" embedded systems, such as Microcontrollers.
+Embedded Rust is for everyone who wants to do embedded programming while taking advantage of the higher-level concepts and safety guarantees the Rust language provides. +(See also Who Rust Is For)
+The goals of this book are:
+Get developers up to speed with embedded Rust development. i.e. How to set +up a development environment.
+Share current best practices about using Rust for embedded development. i.e. +How to best use Rust language features to write more correct embedded +software.
+Serve as a cookbook in some cases. e.g. How do I mix C and Rust in a single +project?
+This book tries to be as general as possible but to make things easier for both +the readers and the writers it uses the ARM Cortex-M architecture in all its +examples. However, the book doesn't assume that the reader is familiar with this +particular architecture and explains details particular to this architecture +where required.
+This book caters towards people with either some embedded background or some Rust background, however we believe +everybody curious about embedded Rust programming can get something out of this book. For those without any prior knowledge +we suggest you read the "Assumptions and Prerequisites" section and catch up on missing knowledge to get more out of the book +and improve your reading experience. You can check out the "Other Resources" section to find resources on topics +you might want to catch up on.
+If you are unfamiliar with anything mentioned above or if you want more information about a specific topic mentioned in this book you might find some of these resources helpful.
+Topic | Resource | Description |
---|---|---|
Rust | Rust Book | If you are not yet comfortable with Rust, we highly suggest reading this book. |
Rust, Embedded | Discovery Book | If you have never done any embedded programming, this book might be a better start |
Rust, Embedded | Embedded Rust Bookshelf | Here you can find several other resources provided by Rust's Embedded Working Group. |
Rust, Embedded | Embedonomicon | The nitty gritty details when doing embedded programming in Rust. |
Rust, Embedded | embedded FAQ | Frequently asked questions about Rust in an embedded context. |
Rust, Embedded | Comprehensive Rust 🦀: Bare Metal | Teaching material for a 1-day class on bare-metal Rust development |
Interrupts | Interrupt | - |
Memory-mapped IO/Peripherals | Memory-mapped I/O | - |
SPI, UART, RS232, USB, I2C, TTL | Stack Exchange about SPI, UART, and other interfaces | - |
This book has been translated by generous volunteers. If you would like your +translation listed here, please open a PR to add it.
+This book generally assumes that you’re reading it front-to-back. Later +chapters build on concepts in earlier chapters, and earlier chapters may +not dig into details on a topic, revisiting the topic in a later chapter.
+This book will be using the STM32F3DISCOVERY development board from +STMicroelectronics for the majority of the examples contained within. This board +is based on the ARM Cortex-M architecture, and while basic functionality is +the same across most CPUs based on this architecture, peripherals and other +implementation details of Microcontrollers are different between different +vendors, and often even different between Microcontroller families from the same +vendor.
+For this reason, we suggest purchasing the STM32F3DISCOVERY development board +for the purpose of following the examples in this book.
+The work on this book is coordinated in this repository and is mainly +developed by the resources team.
+If you have trouble following the instructions in this book or find that some +section of the book is not clear enough or hard to follow then that's a bug and +it should be reported in the issue tracker of this book.
+Pull requests fixing typos and adding new content are very welcome!
+This book is distributed under the following licenses:
+TL;DR: If you want to use our text or images in your work, you need to:
+Also, please do let us know if you find this book useful!
+ +