Tuesday, June 23, 2009

The role of the scientific method

Here is an interesting article by John Polanyi, published in The Globe and Mail.

"Hope lies in the scientific method" by John Polanyi.

The article discusses the role of science and, more importantly in my opinion, the critical perspective that scientists can bring to the impact political decisions may have on the lives of many.

Food for thought, indeed.

Wednesday, June 03, 2009

Data Reliability Tradeoffs

Abdullah Gharaibeh (my colleague at NetSysLab) has recent work that explores combining heterogeneous storage components to balance cost, reliability, and throughput.

The work proposes a storage architecture that leverages idle storage resources located on volatile nodes (e.g., desktops) to provide high-throughput access at low cost. Moreover, the system is designed to provide durability through a low-throughput durable storage component (e.g., tape).

What I like about the solution is that it nicely shows how to decouple the components that provide two important features for data-intensive applications: availability and durability. Moreover, this separation (together with the evidence shown by the experiments) helps system administrators reason about deployment cost.
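To make that cost reasoning concrete, here is a back-of-the-envelope sketch of my own (not code from the paper; every function name and parameter value below is an illustrative assumption). It models n replicas on volatile desktop nodes, each independently online with some probability, plus a durable back-end that restores replicas lost to permanent node departures:

# Back-of-the-envelope availability/cost model -- my own illustration,
# not from the paper. Assumes n replicas on volatile desktop nodes, each
# independently online with probability `node_availability`, and a durable
# back-end that re-creates replicas lost to permanent departures.

def unavailability(replicas: int, node_availability: float) -> float:
    """P(no volatile replica is online) = (1 - a) ** n."""
    return (1.0 - node_availability) ** replicas


def repair_bandwidth_mb_s(objects: int, object_size_mb: float,
                          replicas: int, daily_departure_rate: float) -> float:
    """Average bandwidth the durable back-end must sustain to restore
    replicas lost to permanent node departures (uniform random placement)."""
    lost_replicas_per_day = objects * replicas * daily_departure_rate
    return lost_replicas_per_day * object_size_mb / 86_400.0


if __name__ == "__main__":
    # Illustrative numbers only: 70% node availability, 10,000 objects of
    # 100 MB each, and 5% of replicas lost to permanent departures per day.
    for n in (1, 2, 3, 4):
        print(f"replicas={n}: "
              f"unavailability={unavailability(n, 0.7):.4f}, "
              f"repair={repair_bandwidth_mb_s(10_000, 100.0, n, 0.05):.2f} MB/s")

Even this crude model shows the decoupling at work: adding volatile replicas improves availability exponentially, while the durable back-end's bandwidth requirement (and hence its cost) grows only linearly.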

Here is the reference, followed by the abstract:

Abdullah Gharaibeh and Matei Ripeanu. Exploring Data Reliability Tradeoffs in Replicated Storage Systems. ACM/IEEE International Symposium on High Performance Distributed Computing (HPDC 2009), Munich, Germany, June 2009.


This paper explores the feasibility of a cost-efficient storage architecture that offers the reliability and access performance characteristics of a high-end system. This architecture exploits two opportunities: First, scavenging idle storage from LAN connected desktops not only offers a low-cost storage space, but also high I/O throughput by aggregating the I/O channels of the participating nodes. Second, the two components of data reliability – durability and availability – can be decoupled to control overall system cost. To capitalize on these opportunities, we integrate two types of components: volatile, scavenged storage and dedicated, yet low-bandwidth durable storage. On the one hand, the durable storage forms a low-cost back-end that enables the system to restore the data the volatile nodes may lose. On the other hand, the volatile nodes provide a high-throughput front-end. While integrating these components has the potential to offer a unique combination of high throughput, low cost, and durability, a number of concerns need to be addressed to architect and correctly provision the system. To this end, we develop analytical- and simulation-based tools to evaluate the impact of system characteristics (e.g., bandwidth limitations on the durable and the volatile nodes) and design choices (e.g., replica placement scheme) on data availability and the associated system costs (e.g., maintenance traffic). Further, we implement and evaluate a prototype of the proposed architecture: namely a GridFTP server that aggregates volatile resources. Our evaluation demonstrates an impressive, up to 800MBps transfer throughput for the new GridFTP service.
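The abstract mentions analytical- and simulation-based provisioning tools. To get a feel for what even a crude simulation of such a system captures, here is a toy Monte Carlo sketch of my own (heavily simplified; the placement scheme, time-step churn model, and all parameter values are my assumptions, not the paper's):

import random

# Toy Monte Carlo sketch -- my own simplification, not the paper's
# simulator. Volatile nodes toggle online/offline each time step; when no
# volatile replica of an object is reachable, the read is served from the
# durable back-end, which counts toward back-end traffic.

def simulate(nodes=100, objects=1000, replicas=3, p_online=0.7,
             steps=1000, object_size_mb=100.0, seed=42):
    rng = random.Random(seed)
    # Random placement: each object gets `replicas` distinct volatile nodes.
    placement = [rng.sample(range(nodes), replicas) for _ in range(objects)]
    unavailable, backend_mb = 0, 0.0
    for _ in range(steps):
        online = [rng.random() < p_online for _ in range(nodes)]
        for obj in placement:
            if not any(online[n] for n in obj):
                unavailable += 1             # no volatile copy reachable
                backend_mb += object_size_mb # fall back to durable storage
    print(f"object-step unavailability: {unavailable / (objects * steps):.4%}")
    print(f"traffic served by durable back-end: {backend_mb:.0f} MB")

if __name__ == "__main__":
    simulate()

Raising the replica count or the nodes' online probability drives down both the unavailability and the back-end traffic, which is exactly the kind of provisioning tradeoff the paper quantifies rigorously.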