This is a port of the uIP network stack from SICS to the Nanode. It
presents itself as an Arduino library.

There's an example sketch, "uiptest", which you can access from the
Arduino IDE. Choose File -> Examples -> NanodeUIP -> uiptest to open
it. Note that it depends on the NanodeUNIO library (available at
https://github.com/sde1000/NanodeUNIO) being installed to access the
MAC address chip on the back of the Nanode.

Notes on porting decisions:

clock_time_t has to be unsigned long (4 bytes) if we're going to use
the output of millis() as the clock. This overflows roughly every 50
days - acceptable.

If we only used an unsigned int (2 bytes), the time would overflow
every 65 seconds, and half of that would be the maximum possible wait -
some protocols demand more. The DHCP client would enter an infinite
loop when its retransmission time grew beyond 32 seconds.

We might get away with using millis()/10 as the clock if we define
clock_time_t to be unsigned int. It would overflow every 10 minutes
or so, allowing for a maximum wait of about 5 minutes.
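
As a rough sketch, using millis() directly as the clock corresponds to
something like the following in the port's clock-arch files (the names
follow the usual uIP conventions; the exact code in this library may
differ):

  /* clock-arch.h -- sketch only */
  typedef unsigned long clock_time_t;  /* 4 bytes: wraps after ~50 days */
  #define CLOCK_CONF_SECOND 1000       /* millis() ticks 1000 times a second */

  /* clock-arch.c -- sketch only */
  #include <Arduino.h>                 /* WProgram.h on pre-1.0 IDEs */
  #include "clock-arch.h"

  clock_time_t clock_time(void)
  {
    return millis();                   /* the Arduino millisecond counter */
  }
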
RAM is very valuable. The packet buffer takes up most of it (and
applications should construct the data they send directly within this
buffer when possible). Constant strings in the user's program also
take up RAM; if your "Serial.println" is just outputting a newline
when you expect it to do something else, your program has probably
outgrown the 2k of RAM. If you can find the build directory, you can
run avr-objdump -h on the .elf file and add up the sizes of the .data
and .bss sections - if the total is approaching 2k you're in trouble.
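
One common way to keep a constant string out of RAM entirely (a general
Arduino facility, not something specific to this library) is the F()
macro, which stores the string in flash and reads it out at print time:

  // Sketch: the string literal stays in program memory instead of RAM.
  // Assumes Arduino 1.0 or later, where the F() macro is available.
  Serial.println(F("dhcp: lease obtained"));
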
How are we going to manage memory allocation for connection state?
The size is application-dependent, and I think most people are going
to want at least two applications running at once (even if one of them
is just the DHCP client).

uIP as distributed assumes that there's going to be a single TCP
application and a single UDP application. State for each connection
to these applications is allocated at compile time as a member of the
uIP connection structure.
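
In stock uIP that single TCP application is wired in at compile time
via uip-conf.h, roughly like this (a sketch of the standard mechanism,
with a hypothetical "myapp" application):

  /* uip-conf.h -- one TCP application, chosen at build time */
  typedef struct myapp_state uip_tcp_appstate_t;  /* per-connection state */
  #define UIP_APPCALL myapp_appcall               /* called for every TCP event */
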
How much space is an application going to need, per connection?

TCP applications (note: struct psock is 20 bytes):
  hello-world:  70 bytes
  webclient:   359 bytes
  webserver:   128 bytes
  telnetd:      75 bytes
  smtp:         17 bytes (but implementation looks buggy!)

UDP applications:
  resolv:  2 bytes per-connection, plus 41 bytes per cache entry (default 4)
  dhcpc:  43 bytes (possibly reducible to 41 bytes)

(On further reflection, per-connection state for UDP is really
pointless. I expect apps using UDP will have a single copy of their
state allocated in .data and .bss - we just need to store their
appcall function pointer in uIP's UDP connection structure so we can
demultiplex based on destination port number.)
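
A minimal sketch of that idea (the field and macro shapes follow uIP,
but the appcall member is the proposed addition and its name is just
illustrative):

  /* uIP's UDP connection structure, with the per-connection appstate
     member replaced by a handler pointer: */
  struct uip_udp_conn {
    uip_ipaddr_t ripaddr;     /* remote IP address */
    u16_t lport;              /* local port, network byte order */
    u16_t rport;              /* remote port, network byte order */
    u8_t  ttl;
    void (*appcall)(void);    /* handler for this connection */
  };

  /* uIP invokes UIP_UDP_APPCALL() for UDP events; indirect through the
     current connection's handler so each app gets its own callback. */
  #define UIP_UDP_APPCALL() do {                          \
      if(uip_udp_conn != NULL && uip_udp_conn->appcall) { \
        uip_udp_conn->appcall();                          \
      }                                                   \
    } while(0)
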
With no applications beyond the DHCP client, uIP currently takes about
1k of RAM. That leaves 1k for all the application state, the heap and
the stack. I think it's reasonable to be able to serve three TCP
connections at once by default; if we allow 150 bytes of state per
connection, that leaves at least 512 bytes for the heap and stack.
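
As a back-of-the-envelope budget using those numbers (not a measured
figure):

     2048 bytes  total RAM
  -  1024 bytes  uIP (packet buffer, connection tables, DHCP client)
  -   450 bytes  three TCP connections x 150 bytes of application state
  =  ~574 bytes  left over for heap and stack
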
To allow for multiple applications we need to store a function pointer
per connection so the uIP code can call the appropriate handler. I
propose to insert this into struct uip_conn and define the
UIP_APPCALL() macro to indirect through it.
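
Something along these lines (a sketch; the real field name and macro
body may end up different):

  /* In struct uip_conn, alongside the existing per-connection fields: */
  void (*appcall)(void);        /* handler for this TCP connection */

  /* And in the configuration header: */
  #define UIP_APPCALL() do {                      \
      if(uip_conn != NULL && uip_conn->appcall) { \
        uip_conn->appcall();                      \
      }                                           \
    } while(0)
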
How does this function pointer get into the uip_conn structure? We
need to modify the code that opens a connection to put it there.
Store the pointer for each listening port, and insert it into the
newly-formed connection.
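
For example (purely illustrative: uip_listen(), UIP_LISTENPORTS and
conn->lport come from uIP, but the listeners table, uip_listen_app()
and attach_appcall() are made-up names for the idea described above):

  #include "uip.h"

  /* Remember which handler goes with each listening port. */
  struct listen_entry {
    u16_t port;                 /* network byte order, as uip_listen() takes */
    void (*appcall)(void);
  };
  static struct listen_entry listeners[UIP_LISTENPORTS];

  void uip_listen_app(u16_t port, void (*appcall)(void))
  {
    u8_t i;
    for(i = 0; i < UIP_LISTENPORTS; i++) {
      if(listeners[i].port == 0) {        /* free slot */
        listeners[i].port = port;
        listeners[i].appcall = appcall;
        break;
      }
    }
    uip_listen(port);
  }

  /* When uIP accepts an incoming connection, copy the handler across
     (this would be called from the connection-setup path in uip.c): */
  static void attach_appcall(struct uip_conn *conn)
  {
    u8_t i;
    for(i = 0; i < UIP_LISTENPORTS; i++) {
      if(listeners[i].port == conn->lport) {
        conn->appcall = listeners[i].appcall;
        break;
      }
    }
  }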