An experimental high-performance, real-time, in-memory, distributed time-series database.
At present, it is just a demo.
- kindly inspired by InfluxDB IOx, Google Monarch, ScyllaDB, and many other open-source projects
- use new experimental features to make insertion and query as fast as possible
- OLAP / OLTP fusion
- support massive get / set operations on recent mutable data
- keep old data immutable & use the Apache Arrow / Parquet ecosystem
- boost analytical queries (push down more pipelinable calculators with SIMD)
- support zero-copy transport to integrate easily with other analytical projects
- massive distributed queries
- load-on-demand components & easy scaling at the component level
- optional WAL / distributed backup
- columnar format & rich-typed columns
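The items above boil down to a hot / cold split: recent data stays mutable for fast get / set, and is later frozen into immutable columnar chunks for analytics. A minimal sketch of that split, with hypothetical types (not the project's actual layout):

```rust
// Hypothetical sketch of the hot / cold split: recent data stays in a
// mutable, write-optimized chunk, while older data is frozen into immutable
// columnar chunks (Apache Arrow / Parquet in the real design).
#[derive(Default)]
struct MutableChunk {
    timestamps: Vec<i64>,
    values: Vec<f64>,
}

struct ImmutableChunk {
    // Stand-in for a frozen columnar batch (e.g. an Arrow RecordBatch).
    timestamps: Box<[i64]>,
    values: Box<[f64]>,
}

#[derive(Default)]
struct Table {
    mutable: MutableChunk,          // serves massive get / set on recent data
    immutable: Vec<ImmutableChunk>, // serves analytical scans over old data
}

impl Table {
    fn insert(&mut self, timestamp: i64, value: f64) {
        self.mutable.timestamps.push(timestamp);
        self.mutable.values.push(value);
    }

    /// Freeze the current mutable chunk into an immutable one.
    fn archive(&mut self) {
        let hot = std::mem::take(&mut self.mutable);
        self.immutable.push(ImmutableChunk {
            timestamps: hot.timestamps.into_boxed_slice(),
            values: hot.values.into_boxed_slice(),
        });
    }
}

fn main() {
    let mut table = Table::default();
    table.insert(1_650_000_000_000, 1027.0);
    table.archive();
    assert_eq!(table.immutable[0].timestamps.len(), 1);
    assert_eq!(table.immutable[0].values.len(), 1);
    assert!(table.mutable.timestamps.is_empty());
}
```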
- Data must have a timestamp
- Data tracks the changes of a unique object over time (a time series)
- Each unique object has a set of labels that identify it across timestamps
- Each unique object has a set of scalar data values that are computable
- Data distribution is related to time
- There is always a minimal time interval (unit) at which the data fragments of a single unique object are continuous
- There is always a large enough time interval at which both unique objects and their data are sparse
- Insertions always happen on recent data rather than old data
- OLTP / OLAP fusion
  - Recent data for single-point queries: alerting, monitoring
  - Historical data for analysis: attribution analysis, machine learning
- Data aggregations are always grouped by timestamp
- Data can be merged with neighboring data in time
- No transactions required
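A minimal sketch of this data model in Rust, with hypothetical names (not the project's actual schema): a unique object is identified by its label set and carries computable scalar samples ordered by timestamp.

```rust
// Hypothetical data model sketch: label set identifies a series, samples are
// timestamped scalars kept sorted within a time-partitioned chunk.
use std::collections::BTreeMap;

/// Label set that identifies one unique object (one series).
type Labels = BTreeMap<String, String>;

/// One scalar observation of a series at a point in time.
struct Sample {
    timestamp_ms: i64,
    value: f64,
}

/// A series: its labels plus its samples within one time partition (chunk).
struct Series {
    labels: Labels,
    samples: Vec<Sample>, // continuous within the minimal time unit
}

fn main() {
    let mut labels = Labels::new();
    labels.insert("__name__".into(), "http_requests_total".into());
    labels.insert("instance".into(), "10.0.0.1:9100".into());

    let series = Series {
        labels,
        samples: vec![
            Sample { timestamp_ms: 1_650_000_000_000, value: 1027.0 },
            Sample { timestamp_ms: 1_650_000_015_000, value: 1031.0 },
        ],
    };
    // Aggregations are grouped by timestamp, so samples stay sorted by time.
    assert!(series
        .samples
        .windows(2)
        .all(|w| w[0].timestamp_ms <= w[1].timestamp_ms));
}
```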
- rustc (1.60.0-nightly+)
- clang (13.0.0+)
```shell
git clone https://github.com/Homebrew-TSDB-Club/t0.git
cargo build --release
./target/release/t0 --address=0.0.0.0:1108 --server-cores=24 --storage-cores=16
```
- core
- coroutine runtime
- CPU core-affinity coroutine runtime (see the sketch below)
- epoll I/O
- Linux aio / io_uring API
- query language
- uniform logical expression
- PromQL parser
- custom query language syntax & parser
- asynchronous & multiplexing server
- Tokio(work-stealing coroutine) based HTTP/2(gRPC) server
- core-affinity coroutine based HTTP/2(gRPC) server
- FlatBuffers over QUIC
- function level tracing
- load-on-demand components: insertion / storage / query / config
- decentralized federation deployment
- self metrics
- coroutine runtime
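A minimal sketch of the core-affinity coroutine runtime idea, assuming the `core_affinity` and `futures` crates (not necessarily the project's actual dependencies): one single-threaded executor is pinned to each CPU core, so tasks never migrate between cores. The epoll / io_uring I/O driver is omitted.

```rust
// Hypothetical sketch: one core-pinned, single-threaded executor per CPU core.
use futures::executor::LocalPool;
use futures::task::LocalSpawnExt;

fn main() {
    let cores = core_affinity::get_core_ids().expect("failed to enumerate CPU cores");
    let workers: Vec<_> = cores
        .into_iter()
        .map(|core| {
            std::thread::spawn(move || {
                // Pin this worker thread to exactly one core.
                core_affinity::set_for_current(core);
                // A single-threaded (non work-stealing) executor for this core.
                let mut pool = LocalPool::new();
                let spawner = pool.spawner();
                spawner
                    .spawn_local(async move {
                        // Per-core work, e.g. owning one shard of mutable chunks.
                    })
                    .expect("failed to spawn task");
                pool.run();
            })
        })
        .collect();
    for worker in workers {
        worker.join().unwrap();
    }
}
```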
- insertion
- Prometheus remote write protocol
- custom protocol over FlatBuffers
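Whichever protocol a sample arrives over, a shared-nothing design has to route it to the single core that owns its series. A minimal, hypothetical sketch of such routing by hashing the label set (names and hashing scheme are illustrative assumptions):

```rust
// Hypothetical sketch: hash the series' label set to pick the owning shard,
// so each sample is handled by exactly one core-pinned coroutine.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn shard_for(labels: &[(&str, &str)], num_shards: usize) -> usize {
    let mut hasher = DefaultHasher::new();
    labels.hash(&mut hasher);
    (hasher.finish() as usize) % num_shards
}

fn main() {
    let labels = [("__name__", "http_requests_total"), ("instance", "10.0.0.1:9100")];
    let num_shards = 16; // e.g. one shard per storage core
    let shard = shard_for(&labels, num_shards);
    // The sample would then be sent over a per-core channel to that shard's
    // core-pinned coroutine, so no locks are needed on the write path.
    println!("series routed to shard {shard}");
}
```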
- storage
- column-oriented
- rich-type column
- chunking data
- data format
- mutable in-memory chunk with custom format
- immutable in-memory Apache Arrow format
- immutable Apache Parquet format file storage
- shared-nothing insertion based on CPU core-affinity coroutine
- query calculator push-down
- projection
- filter
- time range
- limit
- pipeline compute
- data archive pipeline: mutable -> immutable -> file
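A minimal sketch of the last two stages of the archive pipeline above, using the `arrow` and `parquet` crates (column names and the exact conversion path are illustrative assumptions): an immutable in-memory Arrow batch is persisted as a Parquet file.

```rust
// Hypothetical sketch: immutable in-memory (Arrow) -> file (Parquet).
use std::fs::File;
use std::sync::Arc;

use arrow::array::{Float64Array, TimestampMillisecondArray};
use arrow::datatypes::{DataType, Field, Schema, TimeUnit};
use arrow::record_batch::RecordBatch;
use parquet::arrow::ArrowWriter;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Immutable in-memory representation of one archived chunk.
    let schema = Arc::new(Schema::new(vec![
        Field::new("timestamp", DataType::Timestamp(TimeUnit::Millisecond, None), false),
        Field::new("value", DataType::Float64, false),
    ]));
    let batch = RecordBatch::try_new(
        schema.clone(),
        vec![
            Arc::new(TimestampMillisecondArray::from(vec![
                1_650_000_000_000,
                1_650_000_015_000,
            ])),
            Arc::new(Float64Array::from(vec![1027.0, 1031.0])),
        ],
    )?;

    // Final stage of the archive pipeline: persist the batch as Parquet.
    let file = File::create("chunk.parquet")?;
    let mut writer = ArrowWriter::try_new(file, schema, None)?;
    writer.write(&batch)?;
    writer.close()?;
    Ok(())
}
```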
- query
- basic PromQL support
- transport
- Apache Arrow Flight over HTTP/2(gRPC)
- DPDK / RDMA
- shared-nothing mutable chunk query
- inverted index
- sparse index
There is over 30% overhead in the croaring bitmap, and there are still many ways to optimize it.
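A minimal, hypothetical sketch of such an inverted label index, using the `roaring` crate in place of the croaring binding: each (label, value) pair maps to a bitmap of series IDs, and a query intersects the bitmaps of its matchers.

```rust
// Hypothetical sketch of an inverted label index backed by roaring bitmaps.
use std::collections::HashMap;

use roaring::RoaringBitmap;

fn main() {
    // (label name, label value) -> set of series IDs containing that pair.
    let mut index: HashMap<(&str, &str), RoaringBitmap> = HashMap::new();
    for (series_id, job, instance) in [
        (0u32, "api", "10.0.0.1:9100"),
        (1, "api", "10.0.0.2:9100"),
        (2, "db", "10.0.0.3:9100"),
    ] {
        index.entry(("job", job)).or_default().insert(series_id);
        index.entry(("instance", instance)).or_default().insert(series_id);
    }

    // Matchers {job="api", instance="10.0.0.2:9100"} -> intersect the bitmaps.
    let hits = &index[&("job", "api")] & &index[&("instance", "10.0.0.2:9100")];
    assert_eq!(hits.iter().collect::<Vec<u32>>(), vec![1]);
}
```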