Distributed Cache Architecture
Based on our analyses of current web caching hierarchies, we derived
four basic design principles for distributed caches: (1) minimize
the number of hops to locate and access data, (2) minimize access
latencies even on cache misses, (3) share data among many caches,
and (4) cache data close to clients. Although these principles may
seem obvious in retrospect, current cache architectures routinely
violate them. Using these principles, we have proposed a novel
architecture that:
- Separates data paths from meta-data paths and maintains a
hierarchy of meta-data to track where copies of data are stored;
- Maintains hints to locate nearby copies of data without suffering
network latencies;
- Uses direct cache-to-cache data transfers to avoid the
store-and-forward delays inherent in conventional web caching
hierarchies; and
- Pushes data near clients that have not yet referenced the data but
are likely to do so in the future.
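The hint-based lookup described above can be sketched in a few lines. This is an illustrative model only, not code from the Squid-based prototype; the class and method names (`CacheNode`, `lookup`) and the dictionary-based hint table are assumptions made for the sketch:

```python
# Hypothetical sketch of hint-directed, cache-to-cache lookup.
# A node first checks its local store, then consults its meta-data
# hints to fetch directly from a nearby peer, and only falls back to
# the origin server when both fail.

class CacheNode:
    def __init__(self, name):
        self.name = name
        self.store = {}   # locally cached objects: url -> data
        self.hints = {}   # meta-data hints: url -> nearby CacheNode

    def lookup(self, url, origin_fetch):
        # Local hit: data is already close to the client.
        if url in self.store:
            return self.store[url]
        # Hint hit: transfer directly from a nearby peer, avoiding
        # store-and-forward hops through a cache hierarchy.
        peer = self.hints.get(url)
        if peer is not None and url in peer.store:
            data = peer.store[url]
        else:
            # Miss everywhere: go to the origin server, keeping the
            # miss path as short as the hit path.
            data = origin_fetch(url)
        # Cache the result locally for future requests.
        self.store[url] = data
        return data
```

In this sketch the hints act as the meta-data path: they tell a node where a copy lives without forwarding the request (and the data) through intermediate caches.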
Our architecture reduces access latencies by factors of 1.27 to
2.43 compared to conventional cache hierarchies. We have
implemented a distributed cache prototype by augmenting the
widely-deployed Squid proxy cache.
Representative Publications:
- R. Tewari, M. Dahlin, H.M. Vin, and J. Kay, Design Considerations for
Distributed Caching on the Internet,
In Proceedings of the International Conference on Distributed
Computing Systems (ICDCS), pages 273-284, May 1999.