Commit 6114d55

Merge branch 'master' into rocksdict

2 parents 306b0ab + 734cf99

20 files changed: +649 / -139 lines

.github/workflows/rust.yml

Lines changed: 3 additions & 1 deletion

@@ -2,7 +2,7 @@ name: RocksDB CI
 
 on: [push, pull_request]
 env:
-  RUST_VERSION: 1.60.0
+  RUST_VERSION: 1.63.0
 
 jobs:
   fmt:
@@ -104,6 +104,8 @@ jobs:
         run: |
           cargo test --all
           cargo test --all --features multi-threaded-cf
+      - name: Free disk space
+        run: cargo clean
       - name: Run rocksdb tests (jemalloc)
         if: runner.os != 'Windows'
         run: cargo test --all --features jemalloc

CHANGELOG.md

Lines changed: 11 additions & 7 deletions

@@ -2,6 +2,10 @@
 
 ## [Unreleased]
 
+* Bump MSRV to 1.63.0 (mina86)
+* Convert properties to `&PropName` which can be converted at no cost to `&CStr`
+  and `&str` (mina86)
+
 ## 0.21.0 (2023-05-09)
 
 * Add doc-check to CI with fix warnings in docs (YuraKotov)
@@ -109,10 +113,10 @@
 * Bump `librocksdb-sys` up to 6.20.3 (olegnn, akrylysov)
 * Add `DB::key_may_exist_cf_opt` method (stanislav-tkach)
 * Add `Options::set_zstd_max_train_bytes` method (stanislav-tkach)
-* Mark Cache and Env as Send and Sync (akrylysov)
+* Mark Cache and Env as Send and Sync (akrylysov)
 * Allow cloning the Cache and Env (duarten)
-* Make SSE inclusion conditional for target features (mbargull)
-* Use Self where possible (adamnemecek)
+* Make SSE inclusion conditional for target features (mbargull)
+* Use Self where possible (adamnemecek)
 * Don't leak dropped column families (ryoqun)
 
 ## 0.16.0 (2021-04-18)
@@ -169,23 +173,23 @@
 * Add `set_max_total_wal_size` to the `Options` (wqfish)
 * Simplify conversion on iterator item (zhangsoledad)
 * Add `flush_cf` method to the `DB` (wqfish)
-* Fix potential segfault when calling `next` on the `DBIterator` that is at the end of the range (wqfish)
+* Fix potential segfault when calling `next` on the `DBIterator` that is at the end of the range (wqfish)
 * Move to Rust 2018 (wqfish)
 * Fix doc for `WriteBatch::delete` (wqfish)
 * Bump `uuid` and `bindgen` dependencies (jonhoo)
 * Change APIs that never return error to not return `Result` (wqfish)
 * Fix lifetime parameter for iterators (wqfish)
-* Add a doc for `optimize_level_style_compaction` method (NikVolf)
+* Add a doc for `optimize_level_style_compaction` method (NikVolf)
 * Make `DBPath` use `tempfile` (jder)
 * Refactor `db.rs` and `lib.rs` into smaller pieces (jder)
 * Check if we're on a big endian system and act upon it (knarz)
 * Bump internal snappy version up to 1.1.8 (aleksuss)
 * Bump rocksdb version up to 6.7.3 (aleksuss)
-* Atomic flush option (mappum)
+* Atomic flush option (mappum)
 * Make `set_iterate_upper_bound` method safe (wqfish)
 * Add support for data block hash index (dvdplm)
 * Add some extra config options (casualjim)
-* Add support for range delete APIs (wqfish)
+* Add support for range delete APIs (wqfish)
 * Improve building `librocksdb-sys` with system libraries (basvandijk)
 * Add support for `open_for_read_only` APIs (wqfish)
 * Fix doc for `DBRawIterator::prev` and `next` methods (wqfish)
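
The first [Unreleased] entry above replaces the property-name constants with `&PropName`. A rough sketch of what that buys callers follows; the constant and accessor names used here (`properties::ESTIMATE_NUM_KEYS`, `as_str`, `as_c_str`) are assumptions for illustration and are not shown in this diff:

```rust
use std::ffi::CStr;

use rocksdb::properties::{self, PropName};

fn property_name_conversions() {
    // Assumed: property constants are exposed as `&'static PropName` values.
    let prop: &PropName = properties::ESTIMATE_NUM_KEYS;

    // Per the changelog entry, both conversions are free borrows, with no
    // re-validation or allocation of the property name.
    let as_str: &str = prop.as_str(); // assumed accessor name
    let as_cstr: &CStr = prop.as_c_str(); // assumed accessor name

    assert_eq!(as_str.as_bytes(), as_cstr.to_bytes());
}
```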

Cargo.toml

Lines changed: 2 additions & 2 deletions

@@ -3,7 +3,7 @@ name = "rocksdb"
 description = "Rust wrapper for Facebook's RocksDB embeddable database"
 version = "0.21.0"
 edition = "2018"
-rust-version = "1.60"
+rust-version = "1.63"
 authors = ["Tyler Neely <t@jujit.su>", "David Greenberg <dsg123456789@gmail.com>"]
 repository = "https://github.com/rust-rocksdb/rust-rocksdb"
 license = "Apache-2.0"
@@ -36,7 +36,7 @@ serde1 = ["serde"]
 
 [dependencies]
 libc = "0.2"
-librocksdb-sys = { path = "librocksdb-sys", version = "0.11.0" }
+librocksdb-sys = { path = "librocksdb-sys", version = "0.12.0" }
 serde = { version = "1", features = [ "derive" ], optional = true }
 
 [dev-dependencies]

README.md

Lines changed: 11 additions & 11 deletions

@@ -5,7 +5,7 @@ rust-rocksdb
 [![documentation](https://docs.rs/rocksdb/badge.svg)](https://docs.rs/rocksdb)
 [![license](https://img.shields.io/crates/l/rocksdb.svg)](https://github.com/rust-rocksdb/rust-rocksdb/blob/master/LICENSE)
 [![Gitter chat](https://badges.gitter.im/rust-rocksdb/gitter.png)](https://gitter.im/rust-rocksdb/lobby)
-![rust 1.60.0 required](https://img.shields.io/badge/rust-1.60.0-blue.svg?label=MSRV)
+![rust 1.63.0 required](https://img.shields.io/badge/rust-1.63.0-blue.svg?label=MSRV)
 
 
 ![GitHub commits (since latest release)](https://img.shields.io/github/commits-since/rust-rocksdb/rust-rocksdb/latest.svg)
@@ -16,25 +16,25 @@ rust-rocksdb
 
 ## Contributing
 
-Feedback and pull requests welcome! If a particular feature of RocksDB is
-important to you, please let me know by opening an issue, and I'll
+Feedback and pull requests welcome! If a particular feature of RocksDB is
+important to you, please let me know by opening an issue, and I'll
 prioritize it.
 
 ## Usage
 
-This binding is statically linked with a specific version of RocksDB. If you
-want to build it yourself, make sure you've also cloned the RocksDB and
+This binding is statically linked with a specific version of RocksDB. If you
+want to build it yourself, make sure you've also cloned the RocksDB and
 compression submodules:
 
     git submodule update --init --recursive
 
 ## Compression Support
-By default, support for the [Snappy](https://github.com/google/snappy),
-[LZ4](https://github.com/lz4/lz4), [Zstd](https://github.com/facebook/zstd),
-[Zlib](https://zlib.net), and [Bzip2](http://www.bzip.org) compression
-is enabled through crate features. If support for all of these compression
-algorithms is not needed, default features can be disabled and specific
-compression algorithms can be enabled. For example, to enable only LZ4
+By default, support for the [Snappy](https://github.com/google/snappy),
+[LZ4](https://github.com/lz4/lz4), [Zstd](https://github.com/facebook/zstd),
+[Zlib](https://zlib.net), and [Bzip2](http://www.bzip.org) compression
+is enabled through crate features. If support for all of these compression
+algorithms is not needed, default features can be disabled and specific
+compression algorithms can be enabled. For example, to enable only LZ4
 compression support, make these changes to your Cargo.toml:
 
 ```

librocksdb-sys/Cargo.toml

Lines changed: 2 additions & 2 deletions

@@ -1,8 +1,8 @@
 [package]
 name = "librocksdb-sys"
-version = "0.11.0+8.1.1"
+version = "0.12.0+8.5.3"
 edition = "2018"
-rust-version = "1.60"
+rust-version = "1.63"
 authors = ["Karl Hobley <karlhobley10@gmail.com>", "Arkadiy Paronyan <arkadiy@ethcore.io>"]
 license = "MIT/Apache-2.0/BSD-3-Clause"
 description = "Native bindings to librocksdb"

librocksdb-sys/build.rs

Lines changed: 0 additions & 5 deletions

@@ -118,24 +118,19 @@ fn build_rocksdb() {
     }
     if target_features.contains(&"sse4.2") {
         config.flag_if_supported("-msse4.2");
-        config.define("HAVE_SSE42", Some("1"));
     }
     // Pass along additional target features as defined in
     // build_tools/build_detect_platform.
     if target_features.contains(&"avx2") {
         config.flag_if_supported("-mavx2");
-        config.define("HAVE_AVX2", Some("1"));
     }
     if target_features.contains(&"bmi1") {
         config.flag_if_supported("-mbmi");
-        config.define("HAVE_BMI", Some("1"));
     }
     if target_features.contains(&"lzcnt") {
         config.flag_if_supported("-mlzcnt");
-        config.define("HAVE_LZCNT", Some("1"));
     }
     if !target.contains("android") && target_features.contains(&"pclmulqdq") {
-        config.define("HAVE_PCLMUL", Some("1"));
         config.flag_if_supported("-mpclmul");
     }
 }

librocksdb-sys/build_version.cc

Lines changed: 4 additions & 4 deletions

@@ -8,17 +8,17 @@
 
 // The build script may replace these values with real values based
 // on whether or not GIT is available and the platform settings
-static const std::string rocksdb_build_git_sha = "6a436150417120a3f9732d65a2a5c2b8d19b60fc";
-static const std::string rocksdb_build_git_tag = "rocksdb_build_git_tag:v8.1.1";
+static const std::string rocksdb_build_git_sha = "f32521662acf3352397d438b732144c7813bbbec";
+static const std::string rocksdb_build_git_tag = "rocksdb_build_git_tag:v8.5.3";
 #define HAS_GIT_CHANGES 0
 #if HAS_GIT_CHANGES == 0
 // If HAS_GIT_CHANGES is 0, the GIT date is used.
 // Use the time the branch/tag was last modified
-static const std::string rocksdb_build_date = "rocksdb_build_date:2023-04-06 16:38:52";
+static const std::string rocksdb_build_date = "rocksdb_build_date:2023-09-01 20:58:39";
 #else
 // If HAS_GIT_CHANGES is > 0, the branch/tag has modifications.
 // Use the time the build was created.
-static const std::string rocksdb_build_date = "rocksdb_build_date:2023-04-06 16:38:52";
+static const std::string rocksdb_build_date = "rocksdb_build_date:2023-09-01 20:58:39";
 #endif
 
 std::unordered_map<std::string, ROCKSDB_NAMESPACE::RegistrarFunc> ROCKSDB_NAMESPACE::ObjectRegistry::builtins_ = {};

librocksdb-sys/rocksdb

Submodule rocksdb updated 580 files

librocksdb-sys/rocksdb_lib_sources.txt

Lines changed: 2 additions & 0 deletions

@@ -249,6 +249,8 @@ util/stderr_logger.cc
 util/string_util.cc
 util/thread_local.cc
 util/threadpool_imp.cc
+util/udt_util.cc
+util/write_batch_util.cc
 util/xxhash.cc
 utilities/agg_merge/agg_merge.cc
 utilities/backup/backup_engine.cc

librocksdb-sys/tests/ffi.rs

Lines changed: 1 addition & 1 deletion

@@ -1072,7 +1072,7 @@ fn ffi() {
             rocksdb_slicetransform_create_fixed_prefix(3),
         );
         rocksdb_options_set_hash_skip_list_rep(options, 5000, 4, 4);
-        rocksdb_options_set_plain_table_factory(options, 4, 10, 0.75, 16);
+        rocksdb_options_set_plain_table_factory(options, 4, 10, 0.75, 16, 0, 0, 0, 0);
         rocksdb_options_set_allow_concurrent_memtable_write(options, 0);
 
         db = rocksdb_open(options, dbname, &mut err);

src/db.rs

Lines changed: 88 additions & 11 deletions

@@ -921,6 +921,28 @@ impl<T: ThreadMode, D: DBInner> DBCommon<T, D> {
         Ok(())
     }
 
+    /// Flushes multiple column families.
+    ///
+    /// If atomic flush is not enabled, it is equivalent to calling flush_cf multiple times.
+    /// If atomic flush is enabled, it will flush all column families specified in `cfs` up to the latest sequence
+    /// number at the time when flush is requested.
+    pub fn flush_cfs_opt(
+        &self,
+        cfs: &[&impl AsColumnFamilyRef],
+        opts: &FlushOptions,
+    ) -> Result<(), Error> {
+        let mut cfs = cfs.iter().map(|cf| cf.inner()).collect::<Vec<_>>();
+        unsafe {
+            ffi_try!(ffi::rocksdb_flush_cfs(
+                self.inner.inner(),
+                opts.inner,
+                cfs.as_mut_ptr(),
+                cfs.len() as libc::c_int,
+            ));
+        }
+        Ok(())
+    }
+
     /// Flushes database memtables to SST files on the disk for a given column family using default
     /// options.
     pub fn flush_cf(&self, cf: &impl AsColumnFamilyRef) -> Result<(), Error> {
@@ -1157,38 +1179,40 @@ impl<T: ThreadMode, D: DBInner> DBCommon<T, D> {
     /// Return the values associated with the given keys and the specified column family
     /// where internally the read requests are processed in batch if block-based table
     /// SST format is used. It is a more optimized version of multi_get_cf.
-    pub fn batched_multi_get_cf<K, I>(
+    pub fn batched_multi_get_cf<'a, K, I>(
         &self,
         cf: &impl AsColumnFamilyRef,
         keys: I,
         sorted_input: bool,
     ) -> Vec<Result<Option<DBPinnableSlice>, Error>>
     where
-        K: AsRef<[u8]>,
-        I: IntoIterator<Item = K>,
+        K: AsRef<[u8]> + 'a + ?Sized,
+        I: IntoIterator<Item = &'a K>,
     {
         self.batched_multi_get_cf_opt(cf, keys, sorted_input, &ReadOptions::default())
     }
 
     /// Return the values associated with the given keys and the specified column family
     /// where internally the read requests are processed in batch if block-based table
-    /// SST format is used. It is a more optimized version of multi_get_cf_opt.
-    pub fn batched_multi_get_cf_opt<K, I>(
+    /// SST format is used. It is a more optimized version of multi_get_cf_opt.
+    pub fn batched_multi_get_cf_opt<'a, K, I>(
         &self,
         cf: &impl AsColumnFamilyRef,
         keys: I,
         sorted_input: bool,
         readopts: &ReadOptions,
     ) -> Vec<Result<Option<DBPinnableSlice>, Error>>
     where
-        K: AsRef<[u8]>,
-        I: IntoIterator<Item = K>,
+        K: AsRef<[u8]> + 'a + ?Sized,
+        I: IntoIterator<Item = &'a K>,
     {
-        let (keys, keys_sizes): (Vec<Box<[u8]>>, Vec<_>) = keys
+        let (ptr_keys, keys_sizes): (Vec<_>, Vec<_>) = keys
             .into_iter()
-            .map(|k| (Box::from(k.as_ref()), k.as_ref().len()))
+            .map(|k| {
+                let k = k.as_ref();
+                (k.as_ptr() as *const c_char, k.len())
+            })
             .unzip();
-        let ptr_keys: Vec<_> = keys.iter().map(|k| k.as_ptr() as *const c_char).collect();
 
         let mut pinned_values = vec![ptr::null_mut(); ptr_keys.len()];
         let mut errors = vec![ptr::null_mut(); ptr_keys.len()];
@@ -1796,7 +1820,7 @@ impl<T: ThreadMode, D: DBInner> DBCommon<T, D> {
             ))),
         };
         unsafe {
-            libc::free(value as *mut c_void);
+            ffi::rocksdb_free(value as *mut c_void);
         }
         result
     }
@@ -1989,6 +2013,47 @@ impl<T: ThreadMode, D: DBInner> DBCommon<T, D> {
         }
     }
 
+    /// Obtains the LSM-tree meta data of the default column family of the DB
+    pub fn get_column_family_metadata(&self) -> ColumnFamilyMetaData {
+        unsafe {
+            let ptr = ffi::rocksdb_get_column_family_metadata(self.inner.inner());
+
+            let metadata = ColumnFamilyMetaData {
+                size: ffi::rocksdb_column_family_metadata_get_size(ptr),
+                name: from_cstr(ffi::rocksdb_column_family_metadata_get_name(ptr)),
+                file_count: ffi::rocksdb_column_family_metadata_get_file_count(ptr),
+            };
+
+            // destroy
+            ffi::rocksdb_column_family_metadata_destroy(ptr);
+
+            // return
+            metadata
+        }
+    }
+
+    /// Obtains the LSM-tree meta data of the specified column family of the DB
+    pub fn get_column_family_metadata_cf(
+        &self,
+        cf: &impl AsColumnFamilyRef,
+    ) -> ColumnFamilyMetaData {
+        unsafe {
+            let ptr = ffi::rocksdb_get_column_family_metadata_cf(self.inner.inner(), cf.inner());
+
+            let metadata = ColumnFamilyMetaData {
+                size: ffi::rocksdb_column_family_metadata_get_size(ptr),
+                name: from_cstr(ffi::rocksdb_column_family_metadata_get_name(ptr)),
+                file_count: ffi::rocksdb_column_family_metadata_get_file_count(ptr),
+            };
+
+            // destroy
+            ffi::rocksdb_column_family_metadata_destroy(ptr);
+
+            // return
+            metadata
+        }
+    }
+
     /// Returns a list of all table files with their level, start key
     /// and end key
     pub fn live_files(&self) -> Result<Vec<LiveFile>, Error> {
@@ -2179,6 +2244,18 @@ impl<T: ThreadMode, I: DBInner> fmt::Debug for DBCommon<T, I> {
     }
 }
 
+/// The metadata that describes a column family.
+#[derive(Debug, Clone)]
+pub struct ColumnFamilyMetaData {
+    // The size of this column family in bytes, which is equal to the sum of
+    // the file size of its "levels".
+    pub size: u64,
+    // The name of the column family.
+    pub name: String,
+    // The number of files in this column family.
+    pub file_count: usize,
+}
+
 /// The metadata that describes a SST file
 #[derive(Debug, Clone)]
 pub struct LiveFile {
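
The src/db.rs changes above add `flush_cfs_opt`, switch `batched_multi_get_cf` to borrowed keys, and expose column-family metadata. The sketch below is not part of the commit; it is based only on the signatures shown in this diff, and the database path and column family name are placeholders. It will not compile against the released 0.21.0 crate, since these methods exist only as of this merge.

```rust
use rocksdb::{FlushOptions, Options, DB};

fn demo() -> Result<(), rocksdb::Error> {
    let mut opts = Options::default();
    opts.create_if_missing(true);
    opts.create_missing_column_families(true);
    let db = DB::open_cf(&opts, "/tmp/rocksdict-demo", ["cf1"])?;
    let cf1 = db.cf_handle("cf1").expect("cf1 was just created");

    db.put_cf(cf1, b"k1", b"v1")?;

    // flush_cfs_opt: flush several column families in one call; with atomic
    // flush enabled in the DB options this becomes a single atomic flush.
    let mut flush_opts = FlushOptions::default();
    flush_opts.set_wait(true);
    db.flush_cfs_opt(&[cf1], &flush_opts)?;

    // batched_multi_get_cf now borrows its keys (IntoIterator<Item = &K>), so
    // string or byte-slice references can be passed without boxing each key.
    for value in db.batched_multi_get_cf(cf1, ["k1", "missing"], false) {
        println!("{:?}", value?.as_deref()); // Option<&[u8]>
    }

    // get_column_family_metadata_cf: LSM-tree metadata for one column family.
    let meta = db.get_column_family_metadata_cf(cf1);
    println!("{}: {} files, {} bytes", meta.name, meta.file_count, meta.size);
    Ok(())
}
```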
