add prophet-rocksdb
attack204 committed Apr 20, 2024
1 parent 7a9ecda commit dc78f33
Showing 55 changed files with 2,899 additions and 255 deletions.
3 changes: 3 additions & 0 deletions .gitignore
@@ -1,6 +1,9 @@
make_config.mk
rocksdb.pc

out.txt


*.a
*.arc
*.d
1 change: 1 addition & 0 deletions Makefile
@@ -10,6 +10,7 @@ BASH_EXISTS := $(shell which bash)
SHELL := $(shell which bash)
include common.mk

USE_RTTI = 1
CLEAN_FILES = # deliberately empty, so we can append below.
CFLAGS += ${EXTRA_CFLAGS}
CXXFLAGS += ${EXTRA_CXXFLAGS}
51 changes: 29 additions & 22 deletions README.md
@@ -1,31 +1,38 @@
## RocksDB: A Persistent Key-Value Store for Flash and RAM Storage
## Prophet

[![CircleCI Status](https://circleci.com/gh/facebook/rocksdb.svg?style=svg)](https://circleci.com/gh/facebook/rocksdb)
[![Appveyor Build status](https://ci.appveyor.com/api/projects/status/fbgfu0so3afcno78/branch/main?svg=true)](https://ci.appveyor.com/project/Facebook/rocksdb/branch/main)
[![PPC64le Build Status](http://140-211-168-68-openstack.osuosl.org:8080/buildStatus/icon?job=rocksdb&style=plastic)](http://140-211-168-68-openstack.osuosl.org:8080/job/rocksdb)
Build Prophet:

RocksDB is developed and maintained by Facebook Database Engineering Team.
It is built on earlier work on [LevelDB](https://github.com/google/leveldb) by Sanjay Ghemawat (sanjay@google.com)
and Jeff Dean (jeff@google.com)
Please make sure you have installed the required dependencies listed in the [RocksDB INSTALL guide](https://github.com/facebook/rocksdb/blob/main/INSTALL.md), and replace `<zoned block device>` with the name of your real ZNS SSD device.

This code is a library that forms the core building block for a fast
key-value server, especially suited for storing data on flash drives.
It has a Log-Structured-Merge-Database (LSM) design with flexible tradeoffs
between Write-Amplification-Factor (WAF), Read-Amplification-Factor (RAF)
and Space-Amplification-Factor (SAF). It has multi-threaded compactions,
making it especially suitable for storing multiple terabytes of data in a
single database.
```bash
sudo git clone https://github.com/Flappybird11101001/prophet-rocksdb.git rocksdb
cd rocksdb
sudo git clone https://github.com/Flappybird11101001/prophet-zenfs.git plugin/zenfs
sudo DISABLE_WARNING_AS_ERROR=1 ROCKSDB_PLUGINS=zenfs make -j db_bench install DEBUG_LEVEL=0
pushd .
cd plugin/zenfs/util
sudo make
popd
```

Start with example usage here: https://github.com/facebook/rocksdb/tree/main/examples
Initialize the ZNS SSD device:

See the [github wiki](https://github.com/facebook/rocksdb/wiki) for more explanation.
```bash
echo deadline | sudo tee /sys/class/block/<zoned block device>/queue/scheduler
sudo ./plugin/zenfs/util/zenfs mkfs --zbd=<zoned block device> --aux_path=./temp --force
```

The public interface is in `include/`. Callers should not include or
rely on the details of any other header files in this package. Those
internal APIs may be changed without warning.
## Benchmark

Questions and discussions are welcome on the [RocksDB Developers Public](https://www.facebook.com/groups/rocksdb.dev/) Facebook group and [email list](https://groups.google.com/g/rocksdb) on Google Groups.
Run `db_bench` to test (the same configuration as in the paper, with a 64 MB SST file size):

## License
```bash
sudo ./db_bench -num=400000000 -key_size=8 -value_size=256 -statistics=true -max_bytes_for_level_base=268435456 -target_file_size_base=67108864 -write_buffer_size=134217728 -writable_file_max_buffer_size=134217728 -max_bytes_for_level_multiplier=4 -max_background_compactions=1 -max_background_flushes=1 -max_background_jobs=1 -soft_pending_compaction_bytes_limit=67108864 -hard_pending_compaction_bytes_limit=67108864 -level0_stop_writes_trigger=12 -level0_slowdown_writes_trigger=8 -level0_file_num_compaction_trigger=4 -max_write_buffer_number=1 -threads=1 -compaction_pri=4 -open_files=1000 -target_file_size_multiplier=1 --fs_uri=zenfs://dev:<zoned block device> --benchmarks='fillrandom,stats' --use_direct_io_for_flush_and_compaction
```
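The size-related flags in the command above are raw byte counts; a quick Python sketch decoding them into mebibytes (flag names copied from the command, the `to_mib` helper is ours):

```python
# Decode the byte-valued db_bench flags used above into MiB for readability.
flags = {
    "max_bytes_for_level_base": 268435456,
    "target_file_size_base": 67108864,
    "write_buffer_size": 134217728,
    "soft_pending_compaction_bytes_limit": 67108864,
}

def to_mib(n_bytes):
    """Convert a byte count to whole mebibytes (MiB)."""
    return n_bytes // (1 << 20)

for name, value in flags.items():
    print(f"{name} = {to_mib(value)} MiB")
```

In particular, `target_file_size_base` is 64 MiB, matching the 64 MB SST file size the benchmark description mentions.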

RocksDB is dual-licensed under both the GPLv2 (found in the COPYING file in the root directory) and Apache 2.0 License (found in the LICENSE.Apache file in the root directory). You may select, at your option, one of the above-listed licenses.

![allocation_migrated_data](./allocation_migrated_data.jpg)

![allocation_wa](./allocation_wa.jpg)

![allocation_zone_number_page-0001](./allocation_zone_number.jpg)
Binary file added allocation_migrated_data.jpg
Binary file added allocation_wa.jpg
Binary file added allocation_zone_number.jpg
7 changes: 7 additions & 0 deletions clear.sh
@@ -0,0 +1,7 @@
rm -f level.out
rm -f lifetime.out
rm -f number_life.out
rm -f factor.out
rm -f last_compact.out
rm -f rank.out
rm -rf clock.out
31 changes: 31 additions & 0 deletions clock_pic.py
@@ -0,0 +1,31 @@
import matplotlib.pyplot as plt

# Each line of clock.out (after the header line) holds three integers:
# the previous-event interval, the previous-flush interval, and the event type.
prev_list = []
tmp_prev_flush_list = []
type_list = []

with open("clock.out") as f:
    next(f)  # skip the header line
    for line in f:
        prev, prev_flush, event_type = (int(x) for x in line.split()[:3])
        prev_list.append(prev)
        tmp_prev_flush_list.append(prev_flush)
        type_list.append(event_type)

# Histogram of the intervals between consecutive events.
plt.hist(prev_list, bins=100, color="brown")
plt.show()

# Keep only the flush intervals where a type-2 event is immediately
# followed by a type-1 event.
prev_flush_list = [
    tmp_prev_flush_list[i]
    for i in range(len(type_list) - 1)
    if type_list[i] == 2 and type_list[i + 1] == 1
]

plt.hist(prev_flush_list, bins=100, color="brown")
plt.show()
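The second histogram in the script above keeps a flush interval only when a type-2 event is immediately followed by a type-1 event. A minimal sketch of that filter on synthetic data (the function name is ours, and the meaning of the type codes is an assumption drawn from the script):

```python
def flush_intervals(types, intervals):
    """Keep intervals at positions where a type-2 event is immediately
    followed by a type-1 event, mirroring the filter in clock_pic.py."""
    return [
        intervals[i]
        for i in range(len(types) - 1)
        if types[i] == 2 and types[i + 1] == 1
    ]

# Synthetic event stream; only positions 1 and 4 satisfy the 2-then-1 pattern.
types = [1, 2, 1, 2, 2, 1]
intervals = [10, 20, 30, 40, 50, 60]
print(flush_intervals(types, intervals))  # → [20, 50]
```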
31 changes: 29 additions & 2 deletions db/builder.cc
@@ -43,6 +43,12 @@

namespace ROCKSDB_NAMESPACE {

extern void get_predict(int level, const FileMetaData &file, Version *v, const Compaction* compaction_, int &predict_, int &predict_type_, int &tmp_rank);
extern void set_deleted_time(int fnumber, int clock);
extern void update_fname(uint64_t id, std::string name);
extern std::string get_fname(uint64_t id);
extern int get_clock();

class TableFactory;

TableBuilder* NewTableBuilder(const TableBuilderOptions& tboptions,
@@ -147,10 +153,31 @@ Status BuildTable(
bool use_direct_writes = file_options.use_direct_writes;
TEST_SYNC_POINT_CALLBACK("BuildTable:create_file", &use_direct_writes);
#endif // !NDEBUG
IOStatus io_s = NewWritableFile(fs, fname, &file, file_options);
//file_options.lifetime = 1000;
FileOptions tmp_file_options = file_options;
tmp_file_options.lifetime = 100;

update_fname(meta->fd.GetNumber(), fname);
// The writable file is created here
IOStatus io_s = NewWritableFile(fs, fname, &file, tmp_file_options);


int predict;
int predict_type;
int rank;
const int output_level = 0;

get_predict(output_level, *meta, versions->GetColumnFamilySet()->GetDefault()->current(), nullptr, predict, predict_type, rank);
set_deleted_time(meta->fnumber, predict + get_clock());
printf("meta->fname=%s get_clock=%d predicted_deleted_time=%d\n", fname.c_str(), get_clock(), predict + get_clock());
fs->SetFileLifetime(fname, predict + get_clock(), get_clock(), 0, output_level, std::vector<std::string> {});

assert(s.ok());
s = io_s;
if (io_status->ok()) {
*io_status = io_s;
}
if (!s.ok()) {
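The `BuildTable` hunk above predicts each new file's lifetime at creation and records its expected deletion time as `predict + get_clock()`. A minimal Python sketch of that bookkeeping (the class and method names are hypothetical, not the fork's C++ API):

```python
# Sketch of the predicted-deletion-time bookkeeping done in BuildTable:
# when a file is created, its predicted lifetime is added to the current
# logical clock and stored so the filesystem can place the file accordingly.
class LifetimeTracker:
    def __init__(self):
        self.clock = 0          # logical clock (e.g. a flush/compaction counter)
        self.deleted_time = {}  # file number -> predicted deletion time

    def tick(self):
        self.clock += 1

    def on_file_created(self, file_number, predicted_lifetime):
        # Mirrors set_deleted_time(fnumber, predict + get_clock())
        self.deleted_time[file_number] = self.clock + predicted_lifetime
        return self.deleted_time[file_number]

tracker = LifetimeTracker()
tracker.tick()
print(tracker.on_file_created(42, predicted_lifetime=100))  # → 101
```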
2 changes: 1 addition & 1 deletion db/column_family.cc
@@ -1118,7 +1118,7 @@ Compaction* ColumnFamilyData::PickCompaction(
imm_.current()->GetEarliestSequenceNumber(false));
auto* result = compaction_picker_->PickCompaction(
GetName(), mutable_options, mutable_db_options, current_->storage_info(),
log_buffer, earliest_mem_seqno);
log_buffer, earliest_mem_seqno);  // PickCompaction selects the files to be compacted
if (result != nullptr) {
result->SetInputVersion(current_);
}
2 changes: 2 additions & 0 deletions db/compaction/compaction.h
@@ -49,6 +49,7 @@ struct AtomicCompactionUnitBoundary {
const InternalKey* largest = nullptr;
};

// This structure maintains all SST files belonging to the same level
// The structure that manages compaction input files associated
// with the same physical level.
struct CompactionInputFiles {
@@ -438,6 +439,7 @@ class Compaction {
bool l0_files_might_overlap_;

// Compaction input files organized by level. Constant after construction
// The input files of this compaction
const std::vector<CompactionInputFiles> inputs_;

// A copy of inputs_, organized more closely in memory