Adaptive Caching

This project is no longer active. Information is still available below.

Caching reduces the data access latency caused by the gap between CPU and I/O speeds and by overloaded networks. By responding to client requests locally, caches also reduce upstream network bandwidth usage and the load on servers. Many cache replacement policies have been invented, and each performs best only under certain workload and network-topology conditions. When workloads and topologies change continuously, manually choosing a replacement policy for each is impractical and sub-optimal. We use machine learning algorithms to automatically select the best current policy, or a mixture of policies, from a policy (a.k.a. expert) pool, providing an "adaptive caching" service. The weight assigned to each policy is increased or decreased based on its performance. Each adaptive caching node is autonomous (i.e., self-tuning), enabling the construction of scalable networks of caches.


We have implemented a cache simulator in C++ and used web proxy and file system traces to show the existence of "switching"—that the best cache replacement policy changes continuously over time—and the benefits of adaptive methods for caching. We have also proposed several techniques for dynamically changing caching policies in response to changes in the workload and the network topology.


Last modified 23 May 2019