PolarDB for PostgreSQL (PolarDB for short) is an open source database system based on PostgreSQL. It extends PostgreSQL to become a shared-nothing distributed database that supports global data consistency and ACID across database nodes, distributed SQL processing, and data redundancy and high availability through Paxos-based replication. PolarDB is designed to add value and new features to PostgreSQL in the dimensions of high performance, scalability, high availability, and elasticity. At the same time, PolarDB maintains SQL compatibility with single-node PostgreSQL on a best-effort basis.
PolarDB will evolve and offer its functions and features in two major parts: an extension and a patch to PostgreSQL. The extension part includes components implemented outside the PostgreSQL kernel, such as distributed transaction management, a global or distributed time service, distributed SQL processing, additional metadata and internal functions, and tools to manage database clusters and conduct fault tolerance or recovery. By keeping most of its functions in a PostgreSQL extension, PolarDB targets easy upgrades, easy migration, and fast adoption. The patch part includes the changes necessary to the kernel itself, such as distributed MVCC for different isolation levels. We expect the functions and code in the patch part to be limited. As a result, PolarDB can be easily upgraded to newer PostgreSQL versions while remaining fully compatible with PostgreSQL.
Three approaches are offered to quickly try out PolarDB: Alibaba Cloud service, deployment using Docker images, and deployment from source code.
TBD
TBD
onekey.sh can be used to build, configure, deploy, start, and initialize a Paxos HA environment with a single command. For more details, please refer to the "Deployment from Source Code" section.
- prepare: set up environment variables (LD_LIBRARY_PATH and PATH) and install dependency packages
- run the onekey.sh script:
./onekey.sh all
- check that the processes (master, slave, learner) are running, and check the replica roles and status:
ps -ef | grep polardb
psql -p 10001 -d postgres -c "select * from pg_stat_replication;"
psql -p 10001 -d postgres -c "select * from polar_dma_cluster_status;"
We extended a tool named pgxc_ctl from the PG-XC/PG-XL open source project to support cluster management tasks such as configuration generation, configuration modification, cluster initialization, starting/stopping nodes, and switchover. Its detailed usage can be found in the deployment documentation.
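All of the pgxc_ctl commands shown below follow a common pattern: the -c option points to the cluster configuration file, followed by a command and its target (all or a specific node name). Schematically:
pgxc_ctl -c $HOME/polardb/polardb_paxos.conf <command> [target ...]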
- download source code
- install dependency packages (using CentOS as an example)
sudo yum install libzstd-devel libzstd zstd cmake openssl-devel protobuf-devel readline-devel libxml2-devel libxslt-devel zlib-devel bzip2-devel lz4-devel snappy-devel
- build and install the binaries
./configure --prefix=/home/postgres/polardb/polardbhome
make
make install
cd contrib
make
Or you can simply run the build script:
./build.sh
- set up environment variables
vi ~/.bashrc
export PGUSER=postgres
export PGHOME=/home/postgres/polardb/polardbhome
export LD_LIBRARY_PATH=$PGHOME/lib
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export PATH=$PGHOME/bin:$PATH
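After editing ~/.bashrc, reload it so the variables take effect in the current shell session:
source ~/.bashrc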
- generate the default configuration file
pgxc_ctl -c $HOME/polardb/polardb_paxos.conf prepare standalone
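Before deploying, review the generated file and adjust host names, ports, and data directories as needed. For example, the following lists a few of the pgxc_ctl configuration variables (datanodeNames also appears in the log var example further below; consult the generated file itself for the authoritative variable list):
grep -E 'datanodeNames|datanodePorts|datanodeMasterServers' $HOME/polardb/polardb_paxos.conf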
- deploy the binary files
pgxc_ctl -c $HOME/polardb/polardb_paxos.conf deploy all
- clean up any residual installation and initialize the cluster
pgxc_ctl -c $HOME/polardb/polardb_paxos.conf clean all
pgxc_ctl -c $HOME/polardb/polardb_paxos.conf init all
pgxc_ctl -c $HOME/polardb/polardb_paxos.conf monitor all
- install dependency packages for cluster management
pgxc_ctl -c $HOME/polardb/polardb_paxos.conf deploy cm
- start the cluster or a node
pgxc_ctl -c $HOME/polardb/polardb_paxos.conf start all
- stop the cluster or a node
pgxc_ctl -c $HOME/polardb/polardb_paxos.conf stop all
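The two steps above can presumably also target a single node instead of the whole cluster, mirroring the failover example below; the exact per-node syntax is an assumption here, so check pgxc_ctl's help output before relying on it:
pgxc_ctl -c $HOME/polardb/polardb_paxos.conf stop datanode datanode_1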
- fail over a datanode
datanode_1 is the node name configured in polardb_paxos.conf.
pgxc_ctl -c $HOME/polardb/polardb_paxos.conf failover datanode datanode_1
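After a failover, you can verify that a new leader has been elected by rechecking the replica roles, reusing the polar_dma_cluster_status query from the quick-start checks above (adjust the port to your configuration):
psql -p 10001 -d postgres -c "select * from polar_dma_cluster_status;"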
- cluster health check
check the cluster status and start any failed nodes.
pgxc_ctl -c $HOME/polardb/polardb_paxos.conf healthcheck all
- examples of other commands
pgxc_ctl -c $HOME/polardb/polardb_paxos.conf kill all
pgxc_ctl -c $HOME/polardb/polardb_paxos.conf log var datanodeNames
pgxc_ctl -c $HOME/polardb/polardb_paxos.conf show configuration all
- check and test
ps -ef | grep postgres
psql -p 10001 -d postgres -c "create table t1(a int primary key, b int);"
createdb test -p 10001
psql -p 10001 -d test -c "select version();"
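As a further smoke test, insert a row into the table created above and read it back through the same endpoint:
psql -p 10001 -d postgres -c "insert into t1 values (1, 1);"
psql -p 10001 -d postgres -c "select * from t1;"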
Refer to the deployment documentation for detailed instructions.
Regression and other test details can be found here. Some benchmarking examples are here.
PolarDB uses a shared-nothing architecture. Each node stores data and also executes queries, and the nodes coordinate with one another through message passing. This architecture allows the database to be scaled out by adding more nodes to the cluster.
PolarDB slices a table into shards by hashing its primary key. The number of shards is configurable. Shards are stored across PolarDB nodes. When a query accesses shards on multiple nodes, a distributed transaction and a transaction coordinator are used to maintain ACID properties across those nodes.
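For an intuition of how hash-based sharding assigns rows to shards, the following illustration (not PolarDB's internal code path; the 16-shard count is hypothetical) uses PostgreSQL's built-in hashint4 function to show how integer primary-key values would map onto shards:
psql -p 10001 -d postgres -c "select a, abs(hashint4(a)) % 16 as shard from generate_series(1, 5) as a;"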
Each shard is replicated to three nodes, with each replica stored on a different node. To save costs, two of the replicas can be deployed to store complete data, while the third replica stores only the write-ahead log (WAL); it participates in leader election but cannot be chosen as the leader.
See the architecture design for more information.
- architecture design
- roadmap
- Features and their design in PolarDB for PG Version 1.0
PolarDB is built on and of open source, and extends open source PostgreSQL. Your contributions are welcome. How to start developing PolarDB is summarized in the coding style guide, which also introduces our coding style and quality guidelines.
PolarDB code is released under the Apache License, Version 2.0, along with the licenses carried over from the PostgreSQL code.
The relevant licenses can be found in the comments at the top of each file.
Refer to the License and NOTICE files for details.
Some code and design ideas were drawn from other open source projects, such as PG-XC/XL (pgxc_ctl), TBase (timestamp-based vacuum and MVCC), and CitusDB (pg_cron). We thank them for their contributions.
Copyright © Alibaba Group, Inc.