STL::Cache - in-memory cache for C++ applications
Introduction:
STL::Cache is a simple wrapper over the standard map that implements
several cache algorithms, allowing you to limit the storage size and
automatically remove unused items from it. (Right now the std::map
interface is only partially implemented, so it can't be used as a
drop-in replacement for std::map.)
It is intended for keeping any key/value data, especially when the data
is too big to simply put into a map and keep in full. With STL::Cache
you can feed in an enormous (effectively unlimited) amount of data, but
it will store only a small part of it. Frequently reused data is kept
close to your code, while unpopular data does not occupy expensive
memory.
STL::Cache uses configurable policies to decide whether data should be
kept in the cache or thrown away. It ships with 8 policies and you are
free to implement your own.
Getting and installing:
The STL::Cache source code is hosted on GitHub, so you can download it
using wget or any other download tool:
* https://github.com/akashihi/stlcache/tarball/master (tar.gz format)
* https://github.com/akashihi/stlcache/zipball/master (zip format)
You could also clone the repository:
git clone https://github.com/akashihi/stlcache.git
Current development code lives on the 'master' branch; stable releases are tagged.
The STL::Cache is header only and does not require any building. Recommended way of using it is by CMake integration:
Include(FetchContent)
FetchContent_Declare(
STLCache
GIT_REPOSITORY https://github.com/akashihi/stlcache.git
GIT_TAG v0.4
)
FetchContent_MakeAvailable(STLCache)
add_executable(test test.cpp)
target_link_libraries(test PRIVATE STLCache)
Alternatively, just get the sources and add the 'include' directory to your project's include directories.
As mentioned above, STL::Cache itself is a header-only library and
doesn't require building. It is, however, shipped with tests,
documentation and other material that can be built and used. For
building them you need:
* Recent CMake (required)
* Boost library (optional)
* Doxygen (optional, only for documentation processing)
After getting this stuff up and running, select a directory for
building and issue the following commands:
cmake /path/to/the/stlcache/sources
make
make test
make doc
make install
CMake uses an out-of-source build approach, so it is recommended to
create a temporary build directory somewhere and remove it after
installation.
The generated documentation is placed in the <build>/doc directory, and
tests are built and run directly in the <build> directory. They are NOT
installed by the 'make install' command, so you have to copy them
manually if you really need them.
Usage:
Select a cache expiration policy, configure the cache with it,
construct the cache specifying its size, and start putting data into
the cache:
typedef stlcache::cache<int,string,stlcache::policy_lru> CacheLRU;
CacheLRU myCache(10);
The policy, key data type and value data type are passed as parameters
to the stlcache::cache class template. In the example above we
constructed a cache that keeps data of type std::string under keys of
type int, and uses the LRU policy for removing excessive items.
The cache object 'myCache' can keep only 10 items. The cache size is
configured at construction time and cannot be changed later (with
exceptions for the assignment and swap operations). Additionally, you
can specify a comparison object and an allocator type; see the cache
class documentation for details.
You can put objects into the cache using the insert call:
myCache.insert(1,"One");
Note that the insert call may throw a cache-full exception or an
invalid-key exception. As keys have to be unique, insert does nothing
when it meets a duplicate key, so don't forget to check its return
value.
If you are not sure whether an element is in the cache or not, you can
use the safe insert_or_assign call, which either adds a new element to
the cache or updates the existing one with the new value.
Now that you have some data in the cache, you may want to retrieve it
back:
string myOne = myCache.fetch(1);
The fetch call returns the data value associated with the supplied key,
or throws the invalid-key exception when the key doesn't exist in the
cache. For safer operation, you can check in advance whether the key is
in the cache or not:
string myOne;
if (myCache.check(1)) {
myOne = myCache.fetch(1);
}
The check is exception-safe.
Both the check and fetch calls increase the internal reference count
for the key, so, depending on the policy in use, they raise or lower
the entry's chance of becoming an expiration victim. Under some
circumstances you may need to change this reference counter manually,
without actually accessing the entry, something like touching it:
myCache.touch(1);
The cache::touch call is exception-safe and can be used even on
non-existent keys.
When you are done, you can delete an entry manually:
myCache.erase(1);
Check its return value to find out whether anything was actually
deleted.
Thread safety:
The cache can be configured with different locking implementations, thus providing thread safety. By default a non-locking approach is used and the cache is not thread-safe, allowing the user to implement external locking. STL::Cache ships with the following locking implementations:
* non-locking - no locking is done, leaving the cache non-thread-safe. Class name is lock_none
* exclusive locking - the cache is locked exclusively on almost every call, thus limiting parallel usage to a single thread. Class name is lock_exclusive
* shared locking - some calls are allowed to run in parallel with this policy. But due to the nature of the cache, even operations that seem to be non-modifying require an exclusive lock to update access tracking data. Class name is lock_shared
The locking implementation is specified as the last parameter of the cache type and is optional.
Boost integration
Since version 0.3 stlcache includes a Boost-specific multi-index based LFU policy: lfu_multi_index
lfu_multi_index implements the LFU algorithm using a Boost MultiIndex map, which is slower but uses less RAM than the typical LFU implementation. You have to define the USE_BOOST macro to access this policy.
Policies:
A policy is a pluggable implementation of a cache algorithm, used for
finding and removing excessive cache entries. STL::Cache ships with the
following policies:
* None - a random expiration policy. Removes some random entry on
request
* LRU - 'Least recently used' policy
* MRU - 'Most recently used' policy
* LFU - 'Least frequently used' policy
* LFU (Multi-index) - 'Least frequently used' policy implemented on top of the multi-index map. Requires Boost.
* LFU* - 'Least frequently used' policy that expires only items
with a refcount of 1, as proposed by M. Arlitt
* LFU-Aging - 'Least frequently used' policy with time-based
decrease of usage counts
* LFU*-Aging - combination of the LFU* and LFU-Aging policies
* Adaptive Replacement - 'Adaptive Replacement' policy
The cache expiration policy is specified as the third parameter of the
cache type and is mandatory.
Your own policy:
A policy implementation should keep track of entries in the cache and
must be able to tell the cache which item should be expired at the
moment. There are no limitations on a policy's internal structure, but
it is expected to conform to some predefined interfaces.
First of all, every policy is built from two classes: one class is the
policy itself, and the other is a 'bind wrapper':
Note:
All code examples in this section are from the none policy.
struct policy_none {
    template <typename Key, template <typename T> class Allocator>
    struct bind : _policy_none_type<Key,Allocator> {
        bind(const bind& x) : _policy_none_type<Key,Allocator>(x) { }
        bind(const size_t& size) : _policy_none_type<Key,Allocator>(size) { }
    };
};
As you can see, the policy is automatically configured with the cache's
Key and Allocator types. Of course, you can also pass your own template
parameters and partially instantiate the policy implementation
template:
template <class R> struct policy_none {
    template <typename Key, template <typename T> class Allocator>
    struct bind : _policy_none_type<R,Key,Allocator> {
        bind(const bind& x) : _policy_none_type<R,Key,Allocator>(x) { }
        bind(const size_t& size) : _policy_none_type<R,Key,Allocator>(size) { }
    };
};
You can then pass a concrete implementation of R when defining the
cache type:
stlcache::cache<int,int,stlcache::policy_none<Randomizer> > c;
The actual implementation must implement the policy interface:
template <class Key, template <typename T> class Allocator> class policy {
public:
    virtual void insert(const Key& _k) throw(exception_invalid_key) =0;
    virtual void remove(const Key& _k) throw() =0;
    virtual void touch(const Key& _k) throw() =0;
    virtual void clear() throw() =0;
    virtual void swap(policy<Key,Allocator>& _p) throw(exception_invalid_policy) =0;
    virtual const _victim<Key> victim() throw() =0;
};
So the policy can be asked for a victim, and entries can be inserted,
removed and touched. Its contents can also be cleared or swapped with
another policy. A concrete policy implementation should be
CopyConstructible and Assignable, and must provide a constructor for
specifying the policy size:
template <class Key, template <typename T> class Allocator> class _policy_none_type : public policy<Key,Allocator> {
public:
    _policy_none_type<Key,Allocator>& operator= (const _policy_none_type<Key,Allocator>& x) throw() { return *this; }
    _policy_none_type(const _policy_none_type<Key,Allocator>& x) throw() { }
    _policy_none_type(const size_t& size) throw() { }
    virtual void insert(const Key& _k) throw(exception_invalid_key) { }
    virtual void remove(const Key& _k) throw() { }
    virtual void touch(const Key& _k) throw() { }
    virtual void clear() throw() { }
    virtual void swap(policy<Key,Allocator>& _p) throw(exception_invalid_policy) { }
    virtual const _victim<Key> victim() throw() { return _victim<Key>(); }
};
It is up to you how you implement those methods. The only important
thing we haven't mentioned yet is the victim class. It is a way to
return an optional value from a function: when your policy
implementation cannot find any entry to remove, it returns an empty
victim object.
Authors and Licensing
Copyright (C) 2011-2023 Denis V Chapligin, Martin Hrabovsky, Vojtech Ondruj, Tom Anderson
Distributed under the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt
or copy at http://www.boost.org/LICENSE_1_0.txt)