= TODO
== Refactoring
=== Small stuff
* remote_id => network_id
* incremental_dump? => dump_reference?
=== Big stuff
* fix the Roby singleton methods/Roby::Control/Roby::Propagation mess. Maybe move
  all the execution-related stuff into a subclass of Plan, MainPlan. That would allow
  per-MainPlan execution management and, in the end, having multiple executable
  plans in the same plan manager (I'm thinking bi-simulation). See the sketch at the
  end of this list.
* have separate relation graphs in each Plan object. That would be possible
  if we no longer allow changing any event or task relation while the task
  is not included in a plan, which would greatly simplify logging-related code
  (for instance, rebuilding a representation of plans from logged data).
  Moreover, we would no longer have tasks hanging around just because they have
  never been removed from any plan.
* in dRoby, separate the implementation of the RX/TX and
connection/disconnection protocol, including the management of the TX queue
(callbacks, synchronous/asynchronous).
* have thread-safe handling of transactions. This requires a few things:
  - separate relation graphs: have one relation graph per plan object (see the
    previous point).
  - stop using method forwarding, which is not the right way to do it.
    Instead, do as for distributed: update the task status in the transaction
    thread. This requires defining a notion of "transaction thread" and how
    messages get propagated there, though.
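
A minimal sketch of the MainPlan idea above, assuming a purely hypothetical API
(none of the names below exist in Roby today); the only point is to show execution
state owned by the plan itself instead of the Roby::Control / Roby::Propagation
singletons:

    class MainPlan < Roby::Plan
      def initialize
        super
        @event_queue = []   # emissions queued for the next propagation cycle
        @cycle_index = 0    # per-plan cycle counter
      end

      # Queue an event emission; it only affects this plan's next cycle
      def queue_emission(generator, context = nil)
        @event_queue << [generator, context]
      end

      # One execution cycle, scoped to this plan. Two MainPlan instances could
      # then run side by side in the same plan manager (e.g. bi-simulation).
      def process_events
        pending, @event_queue = @event_queue, []
        pending.each { |generator, context| generator.emit(context) }
        @cycle_index += 1
      end
    end
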
== Core
* dynamic parameters. State prediction in both plan and transaction context:
  allow predicting the effect of changing the value of dynamic parameters in
  a transaction context.
* when predicates like happened? are called in a propagation context and
  the result is not known yet (happened? returns false), register a continuation
  and its dependency on the event. Include this continuation in the event set and
  sort it accordingly.
* in #replace, use Hierarchy#fullfilled_events to check that
all needed events are provided. If some are missing, use define_event to try
to add a new one dynamically
* when replacing an event generator, what should be done with its handlers?
  - either we discard them, because some handlers are used in a specific way,
  - or we apply them to the new task.
  Unfortunately there is no good answer: both are useful and it is
  difficult (if not impossible) to know what to do. Having no solution for
  this problem greatly reduces the usefulness of event handlers.
* Need to tag relations with a name so that we can find what is what (a kind of
  'role'). For instance, in PlanningLoop, we would tag the seeds (i.e. the
  planned tasks) with a certain name, which would allow adding any child without
  the loop code noticing.
* add a multiplicity parameter to hierarchy relations which tells how many
  child tasks of a given kind are expected by the parent task. Add a
  'realized_by' transition for that. For instance, in the case of
  Pom::Localization, we can tell that the task expects at least one
  Pom::Estimator. If the last estimator breaks, we can repair the plan online
  by adding a transition. See the sketch after this list.
* we NEED plan merging: if we reuse a task which is already running, it should
  be transparent for the new process: the new task tree will call start!, but
  the task is already running. Moreover, if it is synchronizing on the start
  event, it should appear "as if" the event had been emitted.
* Check whether we have the capability to put every application block in
  transaction context. This would, for instance, allow discarding **all** plan
  modifications done by an exception handler (or plan command, event handler,
  you get my drift) if an exception is raised. I think it would be a worthy
  feature. [This can't be done because of the transaction creation cost]
* Kill local tasks that are in force_gc even if they have parents, if their
parents are remote tasks
* rename Roby::TaskModelTag::argument into 'arguments' (beware of name clash
with the accessor)
* Fix transaction proxying. Better to use the same kind of model as in dRoby:
  create slightly modified tasks of the same model as the proxied task, and do
  not proxy task events.
* Find a way to remove the "return unless proxying?" at the top of proxy_code.
  It sucks.
* fix handling of execution agents: we should re-spawn the execution agents
  only if there is an execution agent model on the task model. We currently
  respawn the execution agent instance. The problem lies in, for instance,
  ConnectionTasks, which are respawned for all pending remote tasks.
  ConnectionTask should never be created because of the executed_by relation.
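
A sketch of what the multiplicity parameter mentioned above could look like; the
:multiplicity option is purely hypothetical, only the existing realized_by relation
and the Pom models come from the item itself:

    localization = Pom::Localization.new
    estimator    = Pom::Estimator.new

    # hypothetical option: the parent expects at least one Pom::Estimator child
    # while it runs; losing the last one would trigger an online plan repair
    # instead of failing the parent task
    localization.realized_by estimator,
        :model => Pom::Estimator,
        :multiplicity => (1..Float::INFINITY)
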
== Planning
* fix a memory leak in planning loops: for now, make_loop creates a new
  planning method each time it is called. The problem is that the created
  method should actually be local to the planner object, not the planner
  model (we currently do the latter). See the sketch below.
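
A sketch of the intended fix, assuming a make_loop-like method on the planner;
the only point is to define the generated method on the planner instance's
singleton class, so that it is collected together with the planner instead of
accumulating on the planner model:

    def make_loop(&block)
      @loop_counter ||= 0
      method_name = "planning_loop_#{@loop_counter += 1}"
      # defined on this planner object only, not on the planner model
      (class << self; self; end).send(:define_method, method_name, &block)
      method_name
    end
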
== Distributed
* extend communication to really get a multi-robot system. It is currently
  too peer-to-peer.
  - a pDB should always send the messages about objects it owns. For now, we
    check two things: that the involved objects are owned and that the message
    comes from the inside (we never forward a message coming from outside).
    There is a synchronization problem here:
      A sends a message M about an object jointly owned by A and B to B,
      B does not forward the message to C,
      but B can send other messages to C, including the /consequences/ of M.
    Result: C knows about the consequences of M, but not about M itself.
    The solution is to mark all messages with a UID and forward new messages
    when we decode them (in PeerServer#demux); see the sketch after this list.
  - make sure unknown models are properly handled in a 3-peer scenario. In
    particular, the following situation should work:
      peers 1 and 2 know about a SpecificModel task model,
      peer 3 does not.
    If 1 sees tasks of 2 *through* the plan of 3, and if these tasks are of the
    SpecificModel model, then they should be instances of the SpecificModel
    class.
* There is a race condition when removing objects: if a peer A is building a
  transaction involving tasks from a peer B, and peer B removes those tasks,
  then there is a race condition between the time B sends the transaction and
  the time it processes the remove_object update from A. The net effect is
  that, when the transaction sibling is created, some tasks it includes do not
  exist anymore. The problem is more general, since the same issue exists when
  wrapping tasks in a transaction, for instance.
  I don't think we should really fix the race condition per se. We should, in
  general, check whether the object has been removed when A processes updates
  from B, and have some way for A to report the problem back to B.
* Better handling of errors during transactions, better management of new
  peers, ... DecisionControl should really be the central component handling
  all that. Moreover, management policies for distributed transactions should
  also be defined.
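
A sketch of the UID-based forwarding mentioned above, in a PeerServer#demux-like
method; msg.uid, msg.source, each_sibling and transmit are all assumptions used
for illustration only:

    def demux(messages)
      @seen_uids ||= []
      messages.each do |msg|
        next if @seen_uids.include?(msg.uid)  # already processed and forwarded
        @seen_uids << msg.uid

        process(msg)

        # forward messages about objects we own to the other peers, so that the
        # consequences of M can never reach a peer that has not seen M itself
        each_sibling do |peer|
          peer.transmit(msg) unless peer == msg.source
        end
      end
    end
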
== UI
* log snapshot: build a set of logging messages which represents the current
  state of the plan. Useful for display (it would allow seeking a lot faster)
  and for runtime display of the plan state (send the snapshot on connection).
  See the sketch below.
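
A sketch of what a snapshot could contain, assuming hypothetical message names;
the idea is simply to re-emit the current plan state as a compact sequence of
log messages:

    def snapshot(plan)
      messages = []
      plan.known_tasks.each do |task|
        messages << [:added_task, task.model.name, task.arguments]
      end
      # ... plus task/event relations and the emission history of each event
      messages
    end
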
vim: tw=100