Thursday, January 26, 2006

CPU Inheritance Scheduling

Recently, I remembered a topic to which I dedicated a lot of attention at the end of my undergraduate course: CPU Inheritance Scheduling.

The main motivation for Inheritance (Hierarchical or Loadable, if you prefer) Scheduling is the assumption that it is hard for a single scheduling policy to fulfill the requirements posed by several different target applications.

Therefore, the idea is to allow general-purpose systems to easily implement multiple scheduling policies. Furthermore, the schedulers are organized in a hierarchy, so that the scheduling-policy logic of each level can be reused by the levels below it.

As an example of different application requirements in the same system, consider that interactive applications (e.g. a text editor, image editing) naturally demand responsiveness, while batch applications (e.g. compiling a kernel, running simulations) demand throughput.
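To make the idea concrete, here is a minimal sketch of a two-level scheduler hierarchy, not the implementation from either paper: a fixed-priority root donates the CPU to child schedulers, each of which applies its own policy (round-robin for interactive work, FIFO for batch work) to its own threads. All class and method names here are hypothetical.

```python
# Sketch of hierarchical (multi-policy) scheduling: a fixed-priority
# root scheduler donates CPU time to child schedulers, each running
# its own policy over its own threads.

from collections import deque


class RoundRobin:
    """Cycles through its runnable threads, one per CPU donation."""

    def __init__(self):
        self.queue = deque()

    def add(self, thread):
        self.queue.append(thread)

    def pick_next(self):
        if not self.queue:
            return None
        thread = self.queue.popleft()
        self.queue.append(thread)  # rotate to the back of the line
        return thread


class Fifo:
    """Runs threads to completion in arrival order."""

    def __init__(self):
        self.queue = deque()

    def add(self, thread):
        self.queue.append(thread)

    def pick_next(self):
        return self.queue[0] if self.queue else None


class PriorityRoot:
    """Donates the CPU to the highest-priority child with runnable work."""

    def __init__(self, children):
        self.children = children  # ordered highest priority first

    def pick_next(self):
        for child in self.children:
            thread = child.pick_next()
            if thread is not None:
                return thread
        return None


interactive = RoundRobin()  # responsiveness: e.g. a text editor
batch = Fifo()              # throughput: e.g. a kernel build
interactive.add("editor")
interactive.add("image-tool")
batch.add("kernel-build")

root = PriorityRoot([interactive, batch])
# Interactive threads alternate; the batch thread only runs once the
# interactive scheduler has nothing runnable.
print([root.pick_next() for _ in range(4)])
```

The point of the hierarchy is that the root knows nothing about round-robin or FIFO: each child encapsulates its own policy, so policies can be composed or swapped without touching the rest of the tree.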

Going back to the late 90's, the first reference that I found on the topic, and the one that captivated me, was a paper written by Bryan Ford and Sai Susarla (CPU Inheritance Scheduling). In this article, the authors describe the design and implementation of a thread-scheduling framework that supports multi-policy scheduling in the FreeBSD system.

Later, an approach that provides even more flexibility was presented by George Candea and Michael B. Jones (Vassal: Loadable Scheduler Support for Multi-Policy Scheduling). In this case, they provide the ability to dynamically load scheduling policies. In contrast with Bryan Ford's paper, the Vassal strategy is better in that it is not necessary to rely on the scheduling policies made available by the operating system: one could load one's own scheduling policy instead. Obviously, this requires the necessary privileges, which limits the flexibility in practice.

So, what about a system based on virtual machines, where potentially harmful user activity would not affect other users? My first impression is that it might be interesting to a certain degree, but it remains an open question to me.

Maybe more posts soon, if I get a break from course projects. :-)

Tuesday, January 03, 2006

In the last Computer Architecture class, it was mentioned that it is easier to find information about the evolution of processors (e.g. number of transistors, performance, etc.) than to find the equivalent information about disks and/or memory. This is very interesting, because I was wondering about something similar approximately a month ago.

The question in my head was: "Why is there no TOP500 ranking of high-performance storage infrastructures?"

The www.top500.org list has highlighted processing power for years, but few details are included about the storage systems that come together with these powerful machines.

Recently, I have found that the IEEE Computer Society Mass Storage Systems Technical Committee is sponsoring an initiative to develop the TOP100io (http://top100io.org/) which is still in the "Call for Contributors" phase.

In particular, considering that several distributed computational architectures are built today to tackle huge data-intensive problems, such a ranking might be relevant in guiding future research on the design and performance analysis of high-performance storage systems.

Monday, January 02, 2006

I used to post some comments about articles that I read in my CiteULike library.
The purpose of this blog is to be a place where preliminary ideas and opinions about research will be published.

I would appreciate receiving comments about the information posted here.

Cheers,
Eli