The LRU (Least Recently Used) cache algorithm maintains a fixed-capacity cache by evicting the least recently used item when a new item must be added to a full cache. This keeps recently accessed data readily available while discarding entries that have gone unused the longest, balancing memory usage and access performance.
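Before introducing the data structures, here is a minimal sketch of just the eviction rule, using only a list of keys. The names recency and touch and the linear-scan removal are illustrative simplifications, not part of the full implementation shown later.

#include <cstddef>
#include <iostream>
#include <list>

int main() {
    const std::size_t capacity = 2;
    std::list<int> recency;             // front = most recently used, back = least recently used

    auto touch = [&](int key) {
        recency.remove(key);            // drop any previous position (linear scan here;
                                        // the full implementation below avoids this with a hash map)
        recency.push_front(key);        // the touched key becomes the most recently used
        if (recency.size() > capacity)
            recency.pop_back();         // evict the least recently used key
    };

    touch(1); touch(2); touch(1); touch(3);  // key 2 is evicted, key 1 survives

    for (int k : recency) std::cout << k << " ";  // prints: 3 1
    std::cout << "\n";
    return 0;
}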
The cache is built from two data structures:

Hash Map (Unordered Map)
Maps each key to the iterator of its node in the list, giving O(1) lookup of any cached entry.

Doubly-Linked List
Keeps the key-value pairs ordered from most recently used (front) to least recently used (back). The splice() operation can move a node to the front in O(1) time by simply rewiring pointers (a standalone sketch of this is shown below).

Put(key, value) proceeds as follows:

Check if the key already exists in the hash map:
If it does, update its value and splice its node to the front of the list.

If the key is new:
Evict the entry at the back of the list when the cache already holds capacity items, then insert the new node at the front and store its iterator in the hash map.

The combination of a hash map and doubly-linked list provides O(1) average-time Get and Put operations.
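As promised above, here is a small standalone sketch of the splice() move, separate from the full program further down; the list contents and key/value numbers are made up for illustration.

#include <iostream>
#include <iterator>
#include <list>
#include <utility>

int main() {
    std::list<std::pair<int,int>> lru = { {1, 10}, {2, 20}, {3, 30} };

    // Iterator to the node holding key 3, currently at the back of the list.
    auto it = std::prev(lru.end());

    // Move that node to the front in O(1): splice only rewires pointers;
    // nothing is copied or reallocated, and 'it' stays valid.
    lru.splice(lru.begin(), lru, it);

    std::cout << "front key: " << lru.front().first << "\n";          // 3
    std::cout << "same node via iterator: " << it->second << "\n";    // 30

    for (const auto& kv : lru)
        std::cout << "[" << kv.first << "," << kv.second << "] ";
    std::cout << "\n";  // [3,30] [1,10] [2,20]
    return 0;
}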
Below is a detailed walkthrough of an LRU cache with capacity = 2.
Capacity: 2
LRU List: []
Cache: {}
cache.Put(1, 1)
Operation: Insert key 1 with value 1
Action: Insert [1,1] at the front
State after operation:
LRU List: [1,1]
^MRU/LRU
Cache: {1 → iter[1,1]}
cache.Put(2, 2)
Operation: Insert key 2 with value 2
Action: Insert [2,2] at the front
State after operation:
LRU List: [2,2] → [1,1]
^MRU ^LRU
Cache: {1 → iter[1,1], 2 → iter[2,2]}
cache.Get(1)
Operation: Access key 1
Action: Move [1,1] to the front (most recently used)
Returns: 1
State after operation:
LRU List: [1,1] → [2,2]
^MRU ^LRU
Cache: {1 → iter[1,1], 2 → iter[2,2]}
cache.Put(3, 3)
Operation: Insert key 3 with value 3
Action: Cache is full; lru.back() is [2,2] → Remove key 2, then insert [3,3] at the front
State after operation:
LRU List: [3,3] → [1,1]
^MRU ^LRU
Cache: {1 → iter[1,1], 3 → iter[3,3]}
🗑️ Key 2 was evicted!
cache.Get(2)
Operation: Access key 2
Returns: -1 (key 2 was evicted)
State after operation:
LRU List: [3,3] → [1,1]
^MRU ^LRU
Cache: {1 → iter[1,1], 3 → iter[3,3]}
cache.Put(4, 4)
Operation: Insert key 4 with value 4
Action: Cache is full; lru.back() is [1,1] → Remove key 1, then insert [4,4] at the front
State after operation:
LRU List: [4,4] → [3,3]
^MRU ^LRU
Cache: {3 → iter[3,3], 4 → iter[4,4]}
🗑️ Key 1 was evicted!
cache.Get(1)
Operation: Access key 1
Returns: -1 (key 1 was evicted)
State after operation:
LRU List: [4,4] → [3,3]
^MRU ^LRU
Cache: {3 → iter[3,3], 4 → iter[4,4]}
cache.Get(3)
Operation: Access key 3
Action: Move [3,3] to the front
Returns: 3
State after operation:
LRU List: [3,3] → [4,4]
^MRU ^LRU
Cache: {3 → iter[3,3], 4 → iter[4,4]}
cache.Get(4)
Operation: Access key 4
Action: Move [4,4] to the front
Returns: 4
State after operation:
LRU List: [4,4] → [3,3]
^MRU ^LRU
Cache: {3 → iter[3,3], 4 → iter[4,4]}
Values returned by the Get calls: 1 -1 -1 3 4
C++ program implementing the LRU cache.
#include <unordered_map>
#include <list>
#include <iostream>
using namespace std;
class LRUCache {
    int capacity;
    // Stores (key, value) pairs ordered from most recently used (front)
    // to least recently used (back).
    list<pair<int,int>> lru;
    // Maps each key to the iterator of its node in the list.
    unordered_map<int, list<pair<int,int>>::iterator> cache;

public:
    LRUCache(int cap) : capacity(cap) {}

    int Get(int key) {
        auto it = cache.find(key);
        if (it == cache.end())
            return -1;
        // Move the accessed node to the front (MRU position).
        // list.splice(destination_position, source_list, element_to_move)
        // moves (not copies) the element in O(1) time: it simply detaches the
        // node and reattaches it at the front by rewiring pointers.
        // No traversal, no copying.
        lru.splice(lru.begin(), lru, it->second);
        return it->second->second;
    }

    void Put(int key, int value) {
        if (capacity == 0) return;
        auto it = cache.find(key);
        if (it != cache.end()) {
            // Key already exists: update the value and move the node to the
            // front, as it is now the most recently used.
            it->second->second = value;
            lru.splice(lru.begin(), lru, it->second);
            return;
        }
        // Cache is full: evict the least recently used entry.
        if (cache.size() == static_cast<size_t>(capacity)) {
            int lru_key = lru.back().first;
            // Remove the last node of the list (the least recently used)
            // and erase the corresponding key from the hash map.
            lru.pop_back();
            cache.erase(lru_key);
        }
        // Insert the new (key, value) pair at the front of the list and
        // store the iterator to it in the hash map.
        lru.emplace_front(key, value);
        cache[key] = lru.begin();
    }
};

int main() {
    LRUCache cache(2);
    cache.Put(1, 1);
    cache.Put(2, 2);
    cout << cache.Get(1) << " ";
    cache.Put(3, 3);
    cout << cache.Get(2) << " ";
    cache.Put(4, 4);
    cout << cache.Get(1) << " ";
    cout << cache.Get(3) << " ";
    cout << cache.Get(4) << " ";
    return 0;
}
Output
1 -1 -1 3 4
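One path neither the walkthrough nor main() exercises is Put on a key that is already present: the value is updated in place and the node is spliced to the front, so nothing is evicted. Below is a small check of that behavior, written as a replacement main() for the program above; the value 100 is an arbitrary example, and the LRUCache class is assumed to be in scope.

// Replacement main() for the program above.
int main() {
    LRUCache cache(2);
    cache.Put(1, 1);
    cache.Put(2, 2);
    cache.Put(1, 100);             // key 1 already exists: value updated, node moved to front,
                                   // nothing is evicted
    cache.Put(3, 3);               // cache is full: key 2 is now the LRU entry and is evicted
    cout << cache.Get(1) << " ";   // 100
    cout << cache.Get(2) << " ";   // -1 (evicted)
    cout << cache.Get(3) << " ";   // 3
    cout << endl;
    return 0;
}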