
Inter-Process Communication: Events Processing Engine

released by Pype on 12.01.02
status: coding guideline - still cooking.



Introduction

What are Events?

An event, in Clicker terminology, is a small message you can send to a process to make it run a given function (the event handler). The receiver chooses which messages it agrees to receive and which ones it simply discards. It is also up to the receiver to define what code is run when a given event occurs.

The event message is transmitted from one process to another by the microkernel. It usually consists of a code (telling what kind of event this is), a sender pid and a target pid, plus some optional arguments (one or two words, max).
The receiver process sets up event queues to receive those events and gives each queue a priority level and a target thread (the one that will finally run the handler code). The scheduler can use that priority to speed up execution of threads that execute event handlers.
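
As a concrete illustration, here is a minimal C sketch of what such an event message and event queue could look like (all names are hypothetical and not taken from the actual Clicker source):

    #include <stdint.h>

    struct kthread;                       /* forward declaration, defined by the kernel */

    /* One event message: a code, sender/target pids and up to two words of payload. */
    struct ev_msg {
        uint32_t code;                    /* what kind of event this is                 */
        uint32_t sender_pid;              /* who raised it                              */
        uint32_t target_pid;              /* who should receive it                      */
        uint32_t arg[2];                  /* optional arguments (one or two words, max) */
    };

    /* One event queue owned by the receiver process. */
    struct ev_queue {
        int              priority;        /* hint for the scheduler                     */
        struct kthread  *target;          /* thread that will run the handlers          */
        struct ev_queue *next;            /* link in the thread's active-queue list     */
        struct ev_msg   *slots;           /* ring buffer of pending messages            */
        unsigned         head, tail, capacity;
    };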

Roughly, once an event handler has been defined for a given event code, any other process that knows that code (and that is allowed to send it to the receiver process) can make the receiver process run the handler code at any time. The receiver nonetheless keeps the ability to temporarily block or drop some handlers (by deactivating them) in order to synchronize reception with its own needs.
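
On the receiver side, the API could boil down to a few calls along these lines (a hypothetical sketch building on the structures above, not the actual Clicker interface):

    /* An event handler receives the code and the optional argument words. */
    typedef void (*ev_handler_t)(uint32_t code, uint32_t arg0, uint32_t arg1);

    /* Attach handler code to an event code on a given queue. */
    int ev_set_handler(struct ev_queue *q, uint32_t code, ev_handler_t fn);

    /* Temporarily block/drop a handler to synchronize reception, then re-enable it. */
    int ev_deactivate(struct ev_queue *q, uint32_t code);
    int ev_activate(struct ev_queue *q, uint32_t code);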

What are events good for?

signals
Kernel signals can be implemented with events. Unexpected conditions like protection faults, or user interrupt / sleep / continue / kill requests, can result in the kernel raising an event. Event handlers then act like Unix signal handlers, intercepting execution before the process gets killed (or anything else happens).
timers
You can easily program delayed or repeated tasks based on some clock with events. Every time the clock server meets a deadline, it raises an event for a process (the event code and target process were defined when the timer was programmed).
If you had to implement this with just threads and a sleep() function, you would need a separate thread for each delayed task. With events, you just need one thread that will be the target of all timer events. You can even use that thread to do background processing independently when no handler is active, just as you would have done with interrupts in real mode!
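
For instance, with a hypothetical clock-server call such as timer_arm() below and the sketch API above, a single thread can collect any number of delayed tasks (all names are illustrative only):

    /* Hypothetical clock-server and worker calls, for illustration only. */
    #define TIMER_EVENT 0x0100
    void timer_arm(uint32_t code, uint32_t cookie, uint32_t delay_ticks);
    void do_delayed_work(uint32_t cookie);

    /* Handler run by the single timer thread for every expired deadline. */
    static void my_clock_handler(uint32_t code, uint32_t arg0, uint32_t arg1)
    {
        (void)code; (void)arg1;
        do_delayed_work(arg0);            /* arg0 identifies which timer fired */
    }

    void setup_timers(struct ev_queue *timer_queue)
    {
        ev_set_handler(timer_queue, TIMER_EVENT, my_clock_handler);
        /* Ask the clock server to raise TIMER_EVENT on us in 100 and 250 ticks. */
        timer_arm(TIMER_EVENT, /*cookie*/ 1, /*ticks*/ 100);
        timer_arm(TIMER_EVENT, /*cookie*/ 2, /*ticks*/ 250);
    }
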
GUI
Of course, events are a gift to the user-interface programmer. Plenty of existing GUIs have proven that event processing is the way to make a GUI server and applications communicate. With Clicker's kernel events it becomes even easier, because you don't have to worry about how to implement those events: they are natively available.
The few words of parameters carried with the event should be enough to hold a mouse coordinate or an object identifier (or both :). And if you still need more space, you can use events to notify the other side that some shared-memory content has changed.
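
Packing a mouse position into one of those parameter words is trivial, as this little sketch shows:

    #include <stdint.h>

    /* Pack a 16-bit x/y mouse position into a single 32-bit event argument. */
    static inline uint32_t pack_mouse(uint16_t x, uint16_t y)
    {
        return ((uint32_t)x << 16) | y;
    }

    static inline void unpack_mouse(uint32_t arg, uint16_t *x, uint16_t *y)
    {
        *x = arg >> 16;
        *y = arg & 0xFFFF;
    }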


If we do the job properly, it might become one of the most used inter-process communication techniques in Clicker OS.


Design of Events Processing Engine


The diagram above mainly teaches us that event messages are stored in queues and that each queue has a target thread and a set of event handlers.
Each event message has a code that will be used to retrieve the proper handler when the queue is processed, but also to decide which queue is the target. The sender tells the system only which process should receive the event. We have two complementary modes for selecting one of the process's event queues, both of them based on the event code.
Thread-based events: in this mode, the code holds the identifier of the target thread. This usually means that the event code has been forged by that thread and then handed to an event server like a timer system or a GUI server. When the event is raised, we look in the process's set of threads for the thread with the right identifier, then we pick up its "attached event queue" to deliver the event.
Queue-based events: in this second mode, the code does not designate a specific thread. Instead, it defines an event class (a small number, usually between 0 and 15 at most) that is used to group events by sender type. One class is devoted to kernel-raised events, another one could be used for GUI-raised events, etc.
Having two event-coding schemes allows us to define the semantics of the event either on the client side (the process that receives the event) or on the server side. It makes no sense to let the user program select which event is raised when CTRL-C is pressed: it will be SIG_INT_EVENT, nothing else! (Catch it if you can :) The semantics here are defined by the server side (the kernel). But if you want your thread to execute my_clock_handler(), you clearly want your event to be unique, so you'll probably use a dynamically allocated event code that carries the thread_id of the requesting thread and a counter as a sub-code.
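
One possible encoding of those two schemes in a 32-bit event code could look like this (purely illustrative, not the real Clicker layout):

    #include <stdint.h>

    /* Illustrative layout only: the top bit selects the coding scheme.        */
    #define EV_THREAD_BASED 0x80000000u

    /* Queue-based: a small class number (0..15) groups events by sender type. */
    static inline uint32_t ev_make_class(uint32_t class_id, uint32_t sub)
    {
        return ((class_id & 0xF) << 24) | (sub & 0xFFFFFF);
    }

    /* Thread-based: the code carries the requesting thread id plus a counter. */
    static inline uint32_t ev_make_thread(uint32_t thread_id, uint32_t counter)
    {
        return EV_THREAD_BASED | ((thread_id & 0x7FFF) << 16) | (counter & 0xFFFF);
    }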

All the "event classes" will have a symbolic name, and their corresponding codes will be stored in "system.events.class.*".

1. Delivering an event to its thread

[diagrams: selection of the queue (thread-based / queue-based events), preparation step (thread-based / queue-based events)]

You will always need two objects in order to deliver an event: a target thread and a target queue. The target queue receives the event message before it gets processed by the target thread (which will run the handler), and the target thread must be the owner of the target queue.
What will probably happen within the kernel is that queueing a message in an event queue acts on its owner kThread to make it runnable (and possibly puts it in a high-priority queue of the scheduler) after setting the requested priority from the queue's priority.

[diagram: the event engine stores the event and wakes the thread up]
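
In code, that delivery path could be summarized as follows; enqueue(), set_priority() and wake_up() are stand-ins for whatever primitives the queue and dispatcher really offer:

    /* Hypothetical queue and dispatcher primitives. */
    int  enqueue(struct ev_queue *q, const struct ev_msg *msg);
    void set_priority(struct kthread *t, int prio);
    void wake_up(struct kthread *t);

    /* Deliver one event: store it in the selected queue, then make the owner  */
    /* thread runnable at the queue's priority.                                */
    int ev_deliver(struct ev_queue *q, const struct ev_msg *msg)
    {
        if (enqueue(q, msg) < 0)
            return -1;                    /* queue full: the event is dropped   */

        set_priority(q->target, q->priority);
        wake_up(q->target);               /* possibly into a high-priority      */
                                          /* queue of the scheduler             */
        return 0;
    }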

2. Handler-Thread activation

We must handle the re-activation of the handler thread with care. First, because we usually have to call a function that runs at user level (the event handler) from a function that runs at kernel level (the event engine). The i386 CPU wasn't designed to make this feasible; however, we can still achieve it by "cheating" with the stack content (and making the CPU believe it is gently returning to user mode after an interrupt).

Second, the handler thread can be in any state when the event occurs (except the zombie or killed states). While interrupting a running thread is not really dangerous, interrupting a sleeping thread is more tedious because we must keep it in its linked list of sleeping threads.
Therefore, the status of a thread will be split between two separate state machines: one for "regular" processing, with states like "running", "ready" and "sleeping", and another for event processing (with states like "no-events", "pending-events", "processing-events", etc.). The complete state is a superposition of the two.
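
One simple way to represent that superposition is to keep the two machines in separate bit-fields of the thread status word, e.g. (hypothetical names and values):

    /* "Regular" machine state in the low bits, event machine state in the high bits. */
    enum run_state   { TS_RUNNING = 0x01, TS_READY = 0x02, TS_SLEEPING = 0x04 };
    enum event_state { TS_NO_EVENTS = 0x10, TS_PENDING_EVENTS = 0x20,
                       TS_PROCESSING_EVENTS = 0x40 };

    #define RUN_MASK   0x0F
    #define EVENT_MASK 0xF0

    /* The complete state is the superposition of both machines, e.g.:       */
    /*   status = TS_SLEEPING | TS_PENDING_EVENTS                            */
    static inline int has_pending_events(unsigned status)
    {
        return (status & TS_PENDING_EVENTS) != 0;
    }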

In the current Clicker version, there is a set of pointers devoted to a second list related to event processing. The initial idea was to use these pointers to put the thread in another list (distinct from its normal "active" list): the urgent queue of the dispatcher. In a future version, we could consider having events pending while still keeping the thread in the normal queue (so that it does not get a priority boost if it's a low-priority user-defined event :)

3. Processing events

Once the target thread is activated by the scheduler, we still need to detect that there are pending events. This should be quite easy, because the thread state will have its "events-pending" bit set. Then we browse each active event queue related to that thread (in order of growing priority) and make them decode their events. This decoding translates the event code (something like CHILD_PROCESS_DIED) into a handler context (stack position, code pointer and some other context registers like IA-32's segment registers). Note that only events which have a registered handler lead to some code execution, so we can leave this step with no handlers to call!
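
Put together, the decode-then-execute step for one queue could look roughly like this (dequeue(), handler_lookup(), run_handler() and the handler-context layout are assumptions of this sketch):

    #define MAX_PENDING 16

    /* A decoded handler: stack position, code pointer and segment context. */
    struct handler_ctx {
        void    *entry;                   /* handler code pointer            */
        void    *stack;                   /* user stack position             */
        uint16_t cs, ss;                  /* IA-32 segment selectors         */
    };

    /* Hypothetical helpers. */
    int  dequeue(struct ev_queue *q, struct ev_msg *out);
    int  handler_lookup(struct ev_queue *q, uint32_t code, struct handler_ctx *out);
    void run_handler(struct kthread *t, struct handler_ctx *ctx);

    /* First pass: decode every pending message of one queue into handler     */
    /* contexts; second pass: run them with the fake-interrupt-return trick.  */
    void ev_process_queue(struct kthread *t, struct ev_queue *q)
    {
        struct handler_ctx ctx[MAX_PENDING];
        struct ev_msg      msg;
        int                n = 0;

        while (n < MAX_PENDING && dequeue(q, &msg) == 0) {
            /* Only codes with a registered handler produce a context.        */
            if (handler_lookup(q, msg.code, &ctx[n]) == 0)
                n++;
        }
        for (int i = 0; i < n; i++)
            run_handler(t, &ctx[i]);
    }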

Every time a queue has been completely decoded, we start to execute the event handlers. Each handler call uses the same "stack manipulation" to be run: we first store the complete processor state on the kernel stack, then we push a fake return context (what we would have found on the stack if an interrupt had just occurred before the handler was executed :) and then iretd to that code...
Of course, these manipulations are machine-specific, but I believe they are possible on all common architectures.
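
On IA-32, the "fake return context" amounts to building a frame like the one below on the kernel stack before executing iretd. This is only a sketch of the idea: the real code would be assembly and must match the actual thread context, and the selector values are just examples.

    #include <stdint.h>

    #define USER_CS 0x1B                  /* example ring-3 code selector     */
    #define USER_SS 0x23                  /* example ring-3 data selector     */

    /* Layout the CPU expects on the kernel stack for an inter-privilege      */
    /* iretd: it pops eip/cs/eflags, then esp/ss, and "returns" to user mode  */
    /* right at the event handler.                                            */
    struct fake_iret_frame {
        uint32_t eip;                     /* entry point of the event handler */
        uint32_t cs;                      /* user code segment selector       */
        uint32_t eflags;                  /* saved flags (interrupts enabled) */
        uint32_t esp;                     /* user stack to run the handler on */
        uint32_t ss;                      /* user stack segment selector      */
    };

    static void build_fake_frame(struct fake_iret_frame *f,
                                 uint32_t handler, uint32_t user_stack)
    {
        f->eip    = handler;
        f->cs     = USER_CS;
        f->eflags = 0x202;                /* IF set                           */
        f->esp    = user_stack;
        f->ss     = USER_SS;
    }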

remarks
  • On the above UML diagram, we mention that each eQueue has a target kThread. We should add that the kThread also has a default eQueue (for receiving thread-based events) and a list of active eQueues, sorted by priority (see the sketch after this list).
  • Every time the eQueue.decode() method is called, we pop messages out of it and match them against the set of event handlers until we find a handler to execute. If no match is found and the queue runs empty, we simply move on to the next active queue.
  • We should try to decode events until we have a handler to execute before checking whether there are higher-priority events to be decoded in other threads. This reduces the number of task switches needed to process all the events. It also means that the scheduler will have to provide a method for highest-priority checking.
  • On IA-32, we'll have to pay attention to which stack pointer we use. It's usually kept at the bottom of the kernel stack (ss:esp0), but we could find a stack pointer for a privilege level other than the one we need.
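
To make the first remark concrete, the event-related side of a kThread could carry something like this (a sketch with hypothetical names, consistent with the earlier structures):

    /* Event-related fields a kThread would need, per the remarks above.      */
    struct kthread {
        struct ev_queue *default_queue;   /* receives thread-based events     */
        struct ev_queue *active_queues;   /* active eQueues, sorted by prio   */
        unsigned         status;          /* run-state | event-state bits     */
        /* ... regular scheduling fields (stacks, lists, ...) ...             */
    };

    /* Keep the active list sorted so the processing step can walk it by      */
    /* growing priority.                                                      */
    void kthread_add_queue(struct kthread *t, struct ev_queue *q)
    {
        struct ev_queue **p = &t->active_queues;
        while (*p != NULL && (*p)->priority < q->priority)
            p = &(*p)->next;
        q->next = *p;
        *p = q;
    }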

