The previous diagram mainly teaches us that event messages are stored in queues and that each queue has a target thread and a set of event handlers. Each event message carries a code that is used to retrieve the proper handler when the queue is processed, but also to decide which queue is the target. The sender only tells the system which process should receive the event. We have two complementary modes to select one of the process's event queues, both of them based on the event code.
- Thread-based events: in this mode, the code holds the identifier of the target thread. This usually means that the event code has been forged by that thread and then given to an event server such as a timer system or a GUI server. When the event is raised, we look in the process's set of threads to find the thread with the matching identifier, then we pick its "attached event queue" to deliver the event.
- Queue-based events: in this second mode, the code does not designate a specific thread. Instead, it defines an event class (a small number, usually between 0 and 15) that is used to group events by sender type. One class is devoted to kernel-raised events, another one could be used for GUI-raised events, etc.
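To make the two modes concrete, here is a minimal sketch of how such an event code could be laid out; the bit widths and names below are assumptions for illustration, not Clicker's actual encoding:

    #include <stdint.h>

    /* Hypothetical 32-bit event code layout (illustration only):
     *   bit 31       - 1 = thread-based event, 0 = queue-based event
     *   bits 30..16  - target thread identifier (thread-based mode)
     *   bits 19..16  - event class, 0..15 (queue-based mode)
     *   bits 15..0   - sub-code
     */
    #define EVT_THREAD_BASED (1u << 31)

    static inline uint32_t evt_make_thread_code(uint16_t thread_id, uint16_t subcode)
    {
        return EVT_THREAD_BASED | ((uint32_t)(thread_id & 0x7fffu) << 16) | subcode;
    }

    static inline uint32_t evt_make_class_code(uint8_t event_class, uint16_t subcode)
    {
        return ((uint32_t)(event_class & 0x0fu) << 16) | subcode;
    }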
Having two event-coding schemes allows us to define the semantics of an event either on the client side (the process that receives the event) or on the server side. It makes no sense to let the user program select which event is raised when CTRL-C is pressed: it will be SIG_INT_EVENT, nothing else! (Catch it if you can :) The semantics here are defined by the server side (the kernel). But if you want your thread to execute my_clock_handler(), you clearly want your event to be unique, so you'll probably use a dynamically-allocated event code that carries the thread_id of the requesting thread and a counter as sub-code.
All the "event classes" will have a symbolic name, and their corresponding codes will be stored in "system.events.class.*".
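As a sketch of that dynamic allocation (again with an assumed bit layout, not the actual Clicker one):

    #include <stdint.h>

    #define EVT_THREAD_BASED (1u << 31)

    /* Hypothetical allocator: the requesting thread's id goes in bits 30..16
     * and a per-thread counter provides the sub-code, so two allocations by
     * the same thread never yield the same event code. */
    static inline uint32_t evt_alloc_thread_code(uint16_t thread_id, uint16_t *counter)
    {
        return EVT_THREAD_BASED | ((uint32_t)(thread_id & 0x7fffu) << 16) | (*counter)++;
    }

The requesting thread would register its handler for the returned code locally, then hand the code to the timer server, which raises it when the clock fires.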
1. Delivering an event to its thread
[Diagrams: preparation step for thread-based events; preparation step for queue-based events]
You'll always need two objects in order to deliver events: a target thread and a target queue. The target queue will receive the event message before it gets processed by the target thread (which will run the handler). And the target thread must be the owner of the target queue.
What would probably happen within the kernel is that queueing a message in an event queue acts on its owner kThread to make it runnable (and possibly put it in a high-priority queue of the scheduler) after setting the requested priority according to the queue's priority.
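A minimal sketch of that delivery path; the kThread/eQueue fields and helper functions are assumptions about the design, not the actual Clicker code:

    #include <stdint.h>

    struct kThread;

    struct eQueue {
        struct kThread *owner;     /* target thread: must own this queue */
        int             priority;  /* queue priority */
        /* ... message storage ... */
    };

    /* Assumed kernel helpers (declarations only, for illustration): */
    void equeue_store(struct eQueue *q, uint32_t code, void *payload);
    void thread_set_event_pending(struct kThread *t);
    void thread_raise_priority(struct kThread *t, int prio);
    void scheduler_make_runnable(struct kThread *t);

    /* Deliver an event: store the message, then act on the owner kThread
     * to make it runnable at (at least) the queue's priority. */
    void equeue_post(struct eQueue *q, uint32_t code, void *payload)
    {
        equeue_store(q, code, payload);
        thread_set_event_pending(q->owner);
        thread_raise_priority(q->owner, q->priority);
        scheduler_make_runnable(q->owner);
    }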
[Diagram: the queue stores the event and wakes the thread up]
2. Handler-Thread activation
We must handle the re-activation of the handler thread with care. First, because we will usually have to call a function that runs at user level (the event handler) from a function that runs at kernel level (the event engine). The i386 CPU wasn't designed to make this feasible; however, we can still achieve it by "cheating" on the stack content properly (making the CPU believe it is gently returning to user mode after an interrupt).
Second, the handler thread can be in any state when the event occurs (except the zombie or killed state). While interrupting a running thread is not really dangerous, interrupting a sleeping thread is more tedious because we must keep it in its linked list of sleeping threads.
Therefore, the status of a thread will be split between two separate state machines: one for "regular" processing, with states like "running", "ready" and "sleeping", and another for event processing (with states like "no-events", "pending-events", "processing-events", etc.), the complete state being a superposition of both.
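Such a split could be expressed as two independent enumerations combined in one status record; the state names follow the text, the representation itself is only a guess:

    /* Two independent state machines, combined into one thread status. */
    enum run_state {            /* "regular" processing */
        RUN_RUNNING,
        RUN_READY,
        RUN_SLEEPING,
    };

    enum event_state {          /* event processing */
        EVT_NO_EVENTS,
        EVT_PENDING_EVENTS,
        EVT_PROCESSING_EVENTS,
    };

    /* The complete state is a superposition of both machines, e.g. a
     * thread can be RUN_SLEEPING and EVT_PENDING_EVENTS at once. */
    struct thread_status {
        enum run_state   run;
        enum event_state evt;
    };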
In the current Clicker version, there is a set of pointers devoted to a second list related to event processing. The initial idea was to use these pointers to put the thread in another list (distinct from its normal "active" list): the urgent queue of the dispatcher. In a future version, we could consider having events pending but still keeping the thread in the normal queue (so that it does not get a priority boost if it's a low-priority user-defined event :)
3. Processing events
Once the target thread is activated by the scheduler, we still need to detect that there are pending events. This should be quite easy to do, because the thread state will have its "events-pending" bit set. Then, we'll browse each active event queue related to that thread (in order of increasing priority) and make them decode their events. This decoding translates the event code (something like CHILD_PROCESS_DIED) into a handler context (stack position, code pointer and some other context registers like IA-32's segment registers). Note that only events that have a registered handler lead to some code execution, so we may leave this step with no handlers to call!
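A rough sketch of that loop, with eQueue.decode() rendered as a plain C function and all types and helpers assumed for illustration:

    #include <stdint.h>

    struct kThread;
    struct eQueue;

    struct handler_ctx {        /* stack position, code pointer, etc. */
        void *stack;
        void (*entry)(void);
    };

    /* Assumed helpers, declarations only. equeue_decode() pops and decodes
     * messages until one matches a registered handler (returns 1) or the
     * queue runs empty (returns 0). */
    int  equeue_decode(struct eQueue *q, struct handler_ctx *out);
    struct eQueue *thread_next_queue(struct kThread *t, struct eQueue *prev);
    void run_handler(struct kThread *t, struct handler_ctx *ctx);

    /* Walk the thread's active queues by increasing priority and execute
     * every event that has a registered handler. */
    void process_pending_events(struct kThread *t)
    {
        struct handler_ctx ctx;
        struct eQueue *q;

        for (q = thread_next_queue(t, NULL); q; q = thread_next_queue(t, q))
            while (equeue_decode(q, &ctx))
                run_handler(t, &ctx);  /* via the stack trick described below */
    }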
Every time a queue has been completely decoded, we start to execute the event handlers. Each handler call uses the same "stack manipulation" to be run: we first store the complete processor state on the kernel stack, then we push a fake return context (what we would have found on the stack if an interrupt had just occurred before the handler was executed :) and then iretd to that code...
Of course, these manipulations are machine-specific, but I believe they are possible on all common architectures.
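For IA-32, the fake return context would look roughly like the sketch below. The selector and flag values are placeholders that depend on the kernel's GDT, and the whole routine is an assumption-laden illustration (32-bit build), not the actual Clicker code:

    #include <stdint.h>

    /* The five words an interrupt from user mode would have pushed, in
     * the order iret pops them back (lowest address first). */
    struct fake_iret_frame {
        uint32_t eip;     /* entry point of the event handler   */
        uint32_t cs;      /* user code segment selector         */
        uint32_t eflags;  /* flags to restore (typically IF set)*/
        uint32_t esp;     /* user stack pointer for the handler */
        uint32_t ss;      /* user stack segment selector        */
    };

    /* Hypothetical selector values; the real ones depend on the GDT. */
    #define USER_CS   0x1b
    #define USER_SS   0x23
    #define EFLAGS_IF (1u << 9)

    /* Build the fake frame on the kernel stack, point esp at it and
     * iret: the CPU believes it is gently returning to user mode after
     * an interrupt and starts executing the handler. Never returns. */
    static void enter_handler(void (*handler)(void), uint32_t user_esp)
    {
        struct fake_iret_frame f = {
            .eip    = (uint32_t)handler,
            .cs     = USER_CS,
            .eflags = EFLAGS_IF,
            .esp    = user_esp,
            .ss     = USER_SS,
        };
        asm volatile("movl %0, %%esp\n\t"
                     "iret"             /* "iretd" in NASM syntax */
                     : : "r"(&f) : "memory");
    }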
Remarks:
- On the above UML diagram, we mention that each eQueue has a target kThread. We should add that the kThread also has a default eQueue (for receiving thread-based events) and a list of active eQueues, sorted by priority.
- Every time the eQueue.decode() method is called, we pop messages out of it and match them against the set of event handlers until we find a handler to execute. If no match is found and the queue runs empty, then we simply move on to the next active queue.
- We should try to decode events until we have a handler to execute before checking whether there are higher-priority events to be decoded in other threads. This will reduce the task switches needed to process all the events. It also means that the scheduler will have to provide a method for highest-priority checking.
- On IA-32, we'll have to take care about which stack pointer we use. It's usually kept at the bottom of the kernel stack (ss:esp0), but we could find a stack pointer for another privilege level than the one we need.