Change fastcdc to a better and simpler algorithm. (#79)
This CL changes the chunking algorithm from "normalized chunking" to simple "regression chunking", and changes the hash criterion from 'hash & mask' to 'hash <= threshold'. These ideas are taken from the testing and analysis done at https://github.com/dbaarda/rollsum-chunking/blob/master/RESULTS.rst. Regression chunking was introduced in https://www.usenix.org/system/files/conference/atc12/atc12-final293.pdf.

The algorithm uses an arbitrary number of regressions with power-of-2 regression target lengths, which means a simple bitmask can be used for the regression hash criteria. Regression chunking yields high deduplication rates even for lower max chunk sizes, so the cdc_stream max chunk size can be reduced from 1024K to 512K. This fixes potential latency spikes caused by large chunks.
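As a rough illustration of the scheme described above, here is a minimal, self-contained C++ sketch of regression chunking, assuming a uniformly distributed 32-bit rolling hash. This is not the code from this CL: RollingHash, FindBoundary, and all parameter names are hypothetical, and the toy hash stands in for a real sliding-window rolling hash.

#include <cstddef>
#include <cstdint>
#include <vector>

// Toy rolling hash, for illustration only; a real chunker rolls a hash over
// a fixed-size sliding window.
struct RollingHash {
  uint32_t h = 0;
  void Update(uint8_t b) { h = h * 0x08088405u + b; }
  uint32_t Hash() const { return h; }
};

// Returns the offset of the next chunk boundary in data[0..size).
// Cuts at the first position in [min_size, max_size] whose hash satisfies
// 'hash <= threshold'; if none exists, regresses to the most recent position
// that satisfied the strongest weaker (power-of-2 bitmask) criterion.
size_t FindBoundary(const uint8_t* data, size_t size, size_t min_size,
                    size_t avg_size, size_t max_size, int num_regressions) {
  if (size <= min_size) return size;
  if (max_size > size) max_size = size;

  // Main criterion: hash <= threshold fires on average once every avg_size
  // bytes for a uniform 32-bit hash.
  const uint32_t threshold = static_cast<uint32_t>(UINT32_MAX / avg_size);

  // avg_bits = ceil(log2(avg_size)); regression level r requires only the
  // top (avg_bits - r) hash bits to be zero. Each level doubles the firing
  // probability (halves the target length), so it is a strictly weaker,
  // power-of-2 criterion checkable with a simple bitmask.
  int avg_bits = 0;
  while ((size_t{1} << avg_bits) < avg_size) ++avg_bits;
  auto mask_for = [](int zero_bits) -> uint32_t {
    if (zero_bits <= 0) return 0;  // criterion always satisfied
    return ~((uint32_t{1} << (32 - zero_bits)) - 1);  // top zero_bits bits
  };

  // regression_pos[r] = last offset whose hash met regression level r + 1.
  std::vector<size_t> regression_pos(num_regressions, 0);

  RollingHash rh;
  for (size_t i = 0; i < max_size; ++i) {
    rh.Update(data[i]);
    if (i + 1 < min_size) continue;  // never cut below min_size
    const uint32_t h = rh.Hash();
    if (h <= threshold) return i + 1;  // main criterion hit: cut here
    for (int r = 1; r <= num_regressions; ++r) {
      // A hash meeting a strong level also meets all weaker ones, so record
      // this position for every regression level it satisfies.
      if ((h & mask_for(avg_bits - r)) == 0) regression_pos[r - 1] = i + 1;
    }
  }

  // No boundary found by max_size: regress to the newest position that met
  // the strongest available criterion instead of cutting blindly at max_size.
  for (int r = 0; r < num_regressions; ++r) {
    if (regression_pos[r] != 0) return regression_pos[r];
  }
  return max_size;  // last resort; rare with enough regression levels
}

Called in a loop, advancing by the returned offset each time, this splits a buffer into chunks that concentrate around avg_size and never exceed max_size. The regression lists matter precisely in the max_size case: a hard cut at max_size is not content-defined, so a single insertion can shift every later boundary, whereas regressing to the last position that met a weaker power-of-2 criterion keeps the cut content-defined and lets subsequent chunks realign.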
@@ -158,7 +158,7 @@ class MultiSessionTest : public ManifestTestBase {
     EXPECT_EQ(data->file_count, file_count);
     EXPECT_EQ(data->min_chunk_size, 128 << 10);
     EXPECT_EQ(data->avg_chunk_size, 256 << 10);
-    EXPECT_EQ(data->max_chunk_size, 1024 << 10);
+    EXPECT_EQ(data->max_chunk_size, 512 << 10);
   }

   metrics::ManifestUpdateData GetManifestUpdateData(