# TiUP Changelog
- Add support of new component `tikv-cdc` for `tiup-cluster` and `tiup-playground` (#2000, #2022, @pingyu)
- Add support of dedicated `tidb-dashboard` in `tiup-cluster` (#2017, @nexustar)
- Add support of TiCDC rolling upgrade for `tiup-cluster` (#1996, #2005, #2036, @3AceShowHand)
- Add support to configure TiCDC cluster-id for `tiup-cluster` (#2042, @nexustar)
- Add support to set CPUAffinity in `tiup-cluster` (#2007, @YaozhengWang)
- Allow displaying memory usage in `tiup-cluster` (#1994, @nexustar)
- Fix tmp file not deleted when uploading a package in `tiup-server` (#2021, @nexustar)
- Fix redundant log when starting a TiDB cluster with `tiup-playground` (#2032, @nexustar)
- Fix panic when failing to start a component in `tiup-playground` (#1933, @dveeden)
- Fix scale-out cdc command in `tiup-playground` (#1935, @lonng)
- Fix ineffectiveness of `ticdc.config` in `tiup-playground` (#1978, @pingyu)
- Fix timezone check and remove duplicate cleanTasks in `tiup-cluster` (#2045, @nexustar)
- Use `test-cluster` as dashboard name in `tiup-playground` (#1920, @breezewish)
- Add `pd.port` argument in `tiup-playground` (#1931, @pingyu)
- Allow the `--tag` argument at any position in `tiup-playground` (#1998, @pingyu)
- Add new versions of node_exporter (https://github.com/prometheus/node_exporter/releases/tag/v1.3.1) and blackbox_exporter (https://github.com/prometheus/blackbox_exporter/releases/tag/v0.21.1) to the tiup repository. All new TiDB clusters and instances deployed by `tiup cluster` will use the new versions by default.
- Fix related TiDB topology not cleaned after scale-in in `tiup-cluster` (#2011, @nexustar)
- Fix failure to push if server name contains "-" in `tiup-cluster` (#2008, @nexustar)
- Fix inability to configure TiFlash LearnerConfig in `tiup-cluster` (#1991, @srstack)
- Improve the THP check rule in `tiup-cluster` (#2014, @nexustar)
- Add an example for multiple versions in `-h` of `tiup mirror clone` (#2009, @nexustar)
- Fix failure to get drainer status from PD in `tiup-cluster` (#1922, @srstack)
- Fix error when checking time zone in `tiup-cluster` (#1925, @nexustar)
- Fix wrong parameter value of `--peer-urls` in `tiup-dm` (#1926, @nexustar)
- Fix SSH login error when identity file is specified for non-root user in `tiup-cluster` (#1914, @srstack)
- Add support of backing up and restoring the cluster metadata for `tiup-cluster` and `tiup-dm` (#1801, @nexustar)
- Add `history` command for `tiup` to display component execution records (#1808, @srstack)
- Add support of trying to disable swap during `check --apply` in `tiup-cluster` (#1803, @AstroProfundis)
- Add Grafana URL in `display` output of `tiup-cluster` (#1819, @Smityz)
- Add a `latest` alias for component versions when cloning a repo with the `tiup mirror clone` command (#1835, @srstack)
- Add Kylin Linux 10+ as supported in `check` result of `tiup-cluster` (#1886, @srstack)
- Add support of completing cluster names with the Tab key for `tiup-cluster` (#1891, @nexustar)
- Add support of checking timezone consistency among servers in `check` command of `tiup-cluster` (#1890, @nexustar)
- Add support of deploying on RHEL 8 in `tiup-cluster` (#1896, @nexustar)
- Add support of specifying a custom key directory when rotating `root.json` with the `tiup mirror` command (#1848, @AstroProfundis)
- Fix typo in error message of `tiup-bench` (#1824, @Mini256)
- Fix duplicated component path printed in `tiup` (#1832, @nexustar)
- Fix outdated URL in topology example for `tiup-cluster` (#1840, @srstack)
- Fix DM startup scripts to bind `0.0.0.0` instead of host IP (#1845, @nexustar)
- Fix incorrect blackbox_exporter, node_exporter and Grafana status monitoring for TLS-enabled clusters (#1853, @srstack)
- Fix priority of tag argument for `tiup-playground` (#1869, @nexustar)
- Fix `TIUP_HOME` not loaded correctly on initializing metadata for some components (#1885, @srstack)
- Fix concurrency error in `display` command of `tiup-cluster` (#1895, @srstack)
- Fix incorrect workload loading in `tiup-bench` (#1827, @Smityz)
- Fix OS type detection for hybrid platform deployment in `tiup-cluster` (#1753, @srstack)
- Add notes about default workload values in help message of `tiup-bench` (#1807, @Smityz)
- Refactor `-h/--help` handling to avoid conflicts with component arguments (#1831, @nexustar)
- Refactor version-specific handling of TiDB clusters into a dedicated Go package (#1873, @nexustar)
- Improve integration tests for `tiup-cluster` (#1882, @nexustar)
- Adjust help information of `edit-cluster` command for `tiup-cluster` and `tiup-dm` (#1900, @nexustar)
- Update configuration example of monitoring components (#1818, @glkappe; #1843, @nexustar)
- Improve cluster shutdown process in `playground` (#1893, @nexustar)
- Fix `prune` incorrectly destroying pump/drainer nodes before they become `Tombstone` in `tiup-cluster` (#1851, @srstack)
- Report an error when multiple pump nodes with the same `ip:port` are found in `tiup-cluster` (#1856, @srstack)
- Get node status of pump/drainer from PD in `tiup-cluster` (#1862, @srstack)
- Check node status concurrently and support custom timeout for `display` in `tiup-cluster` (#1867, @srstack)
- Support `tidb-lightning` in `tiup-ctl` (#1863, @nexustar)
- Fix copy error when file is read-only in `tiup-playground` (#1816, @breeswish)
- Fix `data-dir` not properly handled for TiCDC v6.0.0 in `tiup-cluster` (#1838, @overvenus)
- Fix error running `exec` subcommand of `tiup-cluster` when hostname contains '-' (#1794, @nexustar)
- Fix port conflict check for TiFlash instances in `tiup-cluster` (#1805, @AstroProfundis)
- Fix next-generation monitor (`ng-monitor`) not available in Prometheus (#1806, @nexustar)
- Fix node_exporter metrics not collected if the host has only Prometheus deployed (#1806, @nexustar)
- Fix `--host 0.0.0.0` not working in `tiup-playground` (#1811, @nexustar)
- Support cleaning up audit log files for `tiup-cluster` and `tiup-dm` (#1780, @srstack)
- Add anonymous login example to Grafana configuration templates (#1785, @sunzhaoyang)
- Fix next-generation monitor (`ng-monitor`) not started by default for nightly versions in `tiup-cluster` (#1760, @nexustar)
- Fix the `--ignore-config-check` argument not working during deploy process in `tiup-cluster` (#1774, @AstroProfundis)
- Fix incorrect `initial-commit-ts` config for drainer in `tiup-cluster` (#1776, @nexustar)
- Fix symbolic link handling when decompressing packages (#1784, @nexustar)
- Check for inactive Prometheus service before `reload` in `tiup-cluster` (#1775, @nexustar)
- Mark Oracle Linux as supported OS in `check` result of `tiup-cluster` (#1786, @srstack)
- Fix panic running TPCC with `tiup-bench` (#1755, @nexustar)
- Fix blackbox_exporter and node_exporter not restarted during upgrade in `tiup-cluster` and `tiup-dm` (#1758, @srstack)
- Fix messed-up `stdout` and `stderr` handling for SSH commands in `tiup-cluster` and `tiup-dm` (#1763, @tongtongyin)
- Fix Grafana datasource config handling in `tiup-cluster` and `tiup-dm` (#1768, @srstack)
- Enable next-generation monitor (`ng-monitor`) by default for TiDB versions v5.4.0 or later in `tiup-cluster` (#1699 #1743, @nexustar)
- Add support of enabling and disabling TLS encryption for a deployed TiDB cluster in `tiup-cluster` (#1657, @srstack)
- Add support of deploying TLS-enabled DM clusters in `tiup-dm` (#1745, @nexustar)
- Add support of changing the owner of a component in `tiup mirror` and `tiup-server` (#1676, @AstroProfundis)
- Add support of specifying the IP address to bind for AlertManager in `tiup-cluster` (#1665 #1669, @srstack)
- Add support of initializing a random root password for TiDB in `tiup-cluster` (#1700, @AstroProfundis)
- Add support of `check` before scaling out a cluster in `tiup-cluster` (#1659, @srstack)
- Add support of customizing Grafana configurations in the `server_configs` section in `tiup-cluster` and `tiup-dm` (#1703, @nexustar)
- Add support of Chrony as a valid NTP daemon for `check` in `tiup-cluster` (#1714, @srstack)
- Add Amazon Linux 2 as supported OS for `check` in `tiup-cluster` (#1740, @dveeden)
- Add significant warning when destroying a cluster in `tiup-cluster` and `tiup-dm` (#1723, @AstroProfundis)
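For the `server_configs` Grafana customization above, a topology fragment might look like the sketch below. The keys under `grafana:` are illustrative assumptions, not a confirmed schema; check the output of `tiup cluster template` for the fields your version actually supports.

```shell
# Sketch only: shows where Grafana settings would live in a topology file.
# The key under `grafana:` is assumed for illustration and may differ.
cat > topology-fragment.yaml <<'EOF'
server_configs:
  grafana:
    # hypothetical example field
    auth.anonymous.enabled: true
EOF
# Applied via a normal edit/reload cycle (needs an existing cluster):
#   tiup cluster edit-config <cluster-name>
#   tiup cluster reload <cluster-name> -R grafana
cat topology-fragment.yaml
```

The point of the feature is that such settings live in the shared topology file instead of hand-edited Grafana config on each host.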
- Fix DM hosts not added to node_exporter list of Prometheus configuration in `tiup-dm` (#1654, @AstroProfundis)
- Adjust command arguments of `tiup` to work around conflicts with some components (#1698, @nexustar)
- Fix global configs not correctly set for new instances during scaling out in `tiup-cluster` (#1701, @srstack)
- Fix incorrect `initial_commit_ts` set in startup script of Drainer in `tiup-cluster` (#1706, @nexustar)
- Fix JSON output for `check` results in `tiup-cluster` (#1720, @AstroProfundis)
- Fix incorrect instance status for `display` in `tiup-cluster` (#1742, @nexustar)
- Fix malformed commands in local executor in `tiup-cluster` (#1734, @AstroProfundis)
- Fix incorrect exit code for `tiup` (#1738, @nexustar)
- Remove duplicate `check` results in `tiup-cluster` (#1737, @srstack)
- Fix version check of TiFlash nightly builds for TLS-enabled clusters in `tiup-cluster` (#1735, @srstack)
- Adjust configuration template for TiFlash to support new versions in `tiup-cluster` (#1673, @hehechen)
- Adjust configuration sample for DM in `tiup-dm` (#1692, @lance6716)
- Render cluster name for custom Prometheus alert rules in `tiup-cluster` (#1674, @srstack)
- Improve shell auto-completion to support CLIs of components (#1678, @nexustar)
- Add checks for `tiup` installed with a 3rd-party package manager when running `tiup update --self` (#1693, @srstack)
- Check for component updates before actually running a component (#1718, @nexustar)
- Use latest nightly build for each component in `tiup-playground` (#1727, @nexustar)
- Fix global configuration not inherited correctly in `scale-out` command of `tiup-cluster` (#1701, @srstack)
- Fix errors starting `tiup-playground` in some circumstances (#1712 #1715, @nexustar)
- Fix error comparing nightly versions in `tiup-cluster` (#1702, @srstack)
- Fix port conflict not checked for TiDB clusters imported from `tidb-ansible` on `scale-out` in `tiup-cluster` (#1656, @srstack)
- Fix SSH commands going stale in some circumstances (#1664, @nexustar)
- Fix default value of `initial-commit-ts` for drainer in `tiup-cluster` (#1678, @nexustar)
- Add `data-dir` support for TiCDC in `tiup-playground` (#1631, @nexustar)
- Add support of using custom files as input of `edit-config`, and support dumping the current full config to a file with the `show-config` command in `tiup-cluster` (#1637, @haiboumich)
- Add support of next-generation monitor (`ng-monitor`) in `tiup-playground` (#1648, @nexustar)
- Add support of inserting custom `scrape_configs` into Prometheus configs in `tiup-cluster` (#1641, @nexustar)
- [experimental] Support 2-staged scaling out for `tiup-cluster` (#1638 #1642, @srstack)
  - Scaling out of a TiDB cluster can be divided with the `--stage1` and `--stage2` arguments: stage 1 deploys files and configs without starting the new instances, and stage 2 actually starts the new instances and reloads necessary configs
  - This could be useful if you want to modify the config of the new instances or use a custom binary with `patch` before the first start of the new instances
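The two-stage flow can be sketched as a shell session; the cluster name, topology file, and patch tarball are placeholders, while `--stage1`/`--stage2` are the flags introduced here. The `tiup` commands are shown as comments since they need a live cluster.

```shell
# Minimal scale-out topology; the host is a placeholder.
cat > scale.yaml <<'EOF'
tidb_servers:
  - host: 10.0.1.30
EOF
# Stage 1: deploy files and configs, but do not start the new instances:
#   tiup cluster scale-out prod-cluster scale.yaml --stage1
# Optionally adjust configs or patch a custom binary before first start:
#   tiup cluster patch prod-cluster tidb-hotfix.tar.gz -R tidb
# Stage 2: start the new instances and reload necessary configs:
#   tiup cluster scale-out prod-cluster scale.yaml --stage2
cat scale.yaml
```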
- [experimental] Implement plain text output and support custom output writer for logs (#1646, @AstroProfundis)
- Fix incorrect progress bar displaying in some tasks (#1624, @nexustar)
- Fix incorrect argument flags in `tiup-playground` (#1635, @srstack)
- Fix files of monitoring agents and TiDB audit logs not cleaned by the `clean` command of `tiup-cluster` (#1643 #1644, @srstack)
- Fix confirmation prompt in `scale-out` not skippable with the `--yes` argument in `tiup-cluster` (#1645, @srstack)
- Fix directory conflict error in some circumstances even when the node is marked as `ignore_exporter` (#1649, @AstroProfundis)
- Fix DM nodes not added to node_exporter target list in Prometheus config in `tiup-dm` (#1654, @AstroProfundis)
- Add significant warning when the `--force` argument is set for the `scale-in` command in `tiup-cluster` (#1629, @AstroProfundis)
- Add environment variables to skip topology sanity check in `scale-in` command in `tiup-cluster` (#1627, @AstroProfundis)
- Update examples to use `--without-monitor` instead of `--monitor` for `tiup-playground` (#1639, @dveeden)
- Support deploying and managing TLS enabled TiDB cluster with TiFlash nodes (#1594, @nexustar)
- Support rendering templates for local deployment with variables in `tiup-cluster` and `tiup-dm` (#1596, @makocchi-git)
- [experimental] Support optionally enabling next-generation monitor (`ng-monitor`) for latest TiDB releases (#1601, @nexustar)
- [experimental] Support JSON output format for `tiup-cluster` and `tiup-dm` (#1617, @AstroProfundis)
- Remove warning about tag argument for `tiup-playground` (#1606, @nexustar)
- Set `--external-url` for AlertManager in `tiup-cluster` (#1608, @reAsOn2010)
- Fix auto-detection of system arch failing in certain circumstances (#1610, @AstroProfundis)
- Support getting cluster ID from PD in `pdapi` package (#1573 #1574, @nexustar; #1580, @AstroProfundis)
- Accurately get status of TiFlash nodes during operations (#1600, @AstroProfundis)
- Fix `tiup-bench` reporting wrong latency for TPCC workloads (#1577, @lobshunter)
- Fix test cases for `tiup-bench` and `tiup-client` (#1579, @AstroProfundis)
- Fix fetching component manifest error in certain circumstances (#1581, @nexustar)
- Add support of using `ssh-agent` auth socket in `tiup-cluster` (#1416, @9547)
- Add parallel task concurrency control in `tiup-cluster` and `tiup-dm` with the `-c/--concurrency` argument (#1420, @AstroProfundis)
  - The default maximum number of parallel tasks is 5; this feature helps users managing very large clusters avoid connection errors during operations
- Add the ability to automatically detect CPU arch of deployment servers in `tiup-cluster` and `tiup-dm` if not set by the user (#1423, @9547)
- Add `renew` subcommand for `tiup mirror` to extend the expiration date of component manifests (#1479, @AstroProfundis)
- Add the ability to ignore monitor agents for specific instances in `tiup-cluster` (#1492, @AstroProfundis)
- Add `--force` argument for `prune` subcommand in `tiup-cluster` (#1552, @AstroProfundis)
- Add more configuration fields for Grafana in `tiup-cluster` and `tiup-dm` (#1566, @haiboumich)
- [Experimental] Add support of SSH connections via proxy in `tiup-cluster` (#1438, @9547)
- Deprecate the `--monitor` argument and introduce a new `--without-monitor` argument to disable monitoring components in `tiup-playground` (#1512, @LittleFall)
- Deprecate the `TIUP_WORK_DIR` environment variable as it's not actually used, and make it possible for `tiup-playground` to run without `tiup` (#1553 #1556 #1558, @nexustar)
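For the "ignore monitor agents for specific instances" item above, the per-instance switch is sketched in a topology fragment below. The `ignore_exporter` field name matches the one referenced elsewhere in these notes; hosts are placeholders.

```shell
# Sketch: skip deploying monitoring agents on one instance only.
cat > topo-fragment.yaml <<'EOF'
tikv_servers:
  - host: 10.0.1.40
  - host: 10.0.1.41
    ignore_exporter: true   # no node_exporter/blackbox_exporter on this host
EOF
cat topo-fragment.yaml
```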
- Fix `blackbox_exporter` configs for TLS-enabled clusters in `tiup-cluster` (#1443, @9547)
- Only try to apply THP fix if it's available on the deployment server in `tiup-cluster` (#1458, @9547)
- Fix sudo errors in `tiup-cluster` when devtoolset is enabled on the deployment server (#1516, @nexustar)
- Fix test cases for `tiup-dm` (#1540, @nexustar)
- Fix downloading of unneeded component packages when `--binpath` is specified in `tiup-playground` (#1495, @AstroProfundis; #1545, @nexustar)
- Fix panic when `tiup-bench` fails to connect to the database (#1557, @nexustar)
- Fix `numa_node` configs not rendered into PD startup script in `tiup-cluster` (#1565, @onlyacat)
- Correctly handle `--` in command line arguments passed to `tiup` (#1569, @dveeden)
- Reduce network usage of various operations and speed up the process
- Use the value of the `--wait-timeout` argument as the timeout of SSH command operations with the `builtin` executor (#1445, @AstroProfundis)
- Refuse to `clone` a local mirror to the same location it is stored (#1464, @dveeden)
- Set terminal title to show session tag in `tiup-playground` (#1506, @dveeden)
- Show TiDB port when scaling out in `tiup-playground` (#1520, @nexustar)
- Clean up files if a component fails to install (#1562, @nexustar)
- Update docs and examples (#1484, @ichn-hu; #1502, @AstroProfundis)
- Use auto-completion from `cobra` itself (#1544, @AstroProfundis; #1549, @nexustar)
- Fix OS version check rules for `tiup-cluster check` (#1535, @AstroProfundis)
- Fix component upgrade order for `tiup-cluster` to make sure TiCDC nodes work correctly (#1542, @overvenus)
- Adjust warning message of `tiup-cluster restart` to make it clear that the cluster will be unavailable during the process (#1523, @glkappe)
- Reverse the order of audit log listing to show the latest records at the bottom (#1538, @AstroProfundis)
- Fix error when reloading a stopped cluster with the `--skip-restart` argument (#1513, @AstroProfundis)
- Use absolute path for the `sudo` command to work around errors on systems where `devtoolset` is enabled (#1516, @nexustar)
- Fix custom TiDB port not correctly set in playground (#1511, @hecomlilong)
- Allow editing of the `learner_config` field in TiFlash spec (#1494, @AstroProfundis)
- Fix incorrect timeout for telemetry requests (#1500, @AstroProfundis)
- Ignore `data_dir` of monitor agents when checking for directory overlaps (#1510, @AstroProfundis)
- Distinguish cookie names of multiple grafana instances on the same host (#1491, @AstroProfundis)
- Fix incorrect alert rules for TiDB version 3.x (#1463, @9547)
- Fix TiKV config check to correctly handle the `data_dir` value (#1471, @tabokie)
- Update dependencies and adjust error message of `ctl` (#1459, @AstroProfundis)
- Use the `$SHELL` environment variable for completion (#1455, @dveeden)
- Allow listing components from local cached manifests without network access (#1466, @c4pt0r)
- Adjust error message of SELinux check failure (#1476, @AstroProfundis)
- Adjust warning message of `scale-in` with the `--force` argument to make potential risks more clear (#1477, @AstroProfundis)
- Fix native SSH not working with custom SSH port (#1424, @9547)
- Fix dashboard address display issue for `tikv-slim` clusters (#1428, @iosmanthus)
- Fix a typo in help message of `tiup-playground` (#1429, @ekexium)
- Fix TiFlash nodes not handled correctly in some commands (#1431, @lucklove)
- Fix jemalloc config for TiKV nodes (#1435, @9547)
- Fix the issue that slow log is not placed under `log_dir` (#1441, @lucklove)
- Update default alertmanager config template to avoid confusion (#1425 #1426, @lucklove)
- Increase default timeout of transferring leader in upgrade progress (#1434, @AstroProfundis)
- Update dependencies (#1433, @AstroProfundis)
- Fix the issue that some versions of TiCDC nodes may fail to start in `tiup-cluster` (#1421, @JinLingChristopher)
- Show more information in `display` subcommand of `tiup-cluster`
  - Add an `--uptime` argument to show time since the last state change of a process (#1231, @9547)
  - Show deploy user in `display` output and adjust formats (#1390 #1409, @AstroProfundis)
- Add JSON output for `display` subcommand of `tiup-cluster` (#1358, @dveeden)
- Add double confirmation for `scale-out` subcommand in `tiup-cluster` to let users be aware of global configs being used (#1309, @AstroProfundis)
- Support deploying a pure TiKV cluster with `--mode tikv-slim` in `playground` (#1333, @iosmanthus; #1365, @tisonkun)
- Support data dir settings for TiCDC in `tiup-cluster` (#1372, @JinLingChristopher)
- Support changing `GCTTL` and `TZ` configs for TiCDC in `tiup-cluster` (#1380, @amyangfei)
- Add a local deployment template for `tiup-cluster` (#1404, @kolbe)
- Support using dot (`.`) in cluster names (#1412, @9547)
- Fix a variety of typos (#1306, @kolbe)
- Fix non-common speed units shown in downloading progress (#1312, @dveeden)
- Fix the issue that it may panic when a user tries to list expired components (#1391, @lucklove)
- Fix the issue that TiKV is not upgraded when an error occurs while increasing the schedule limit (#1401, @AstroProfundis)
- Support specifying node counts in tests (#1251, @9547)
- Add double confirmation for `reload`, `patch` and `rename` subcommands in `tiup-cluster` (#1263, @9547)
- Add ability to list available make targets for developers (#1277, @rkazak)
- Update links in doc/dev/README.md file (#1296, @mjonss)
- Improve handling of latest versions in `mirror clone` subcommand (#1313, @dveeden)
- Add check for dependencies before downloading package in installation script (#1348, @AstroProfundis)
- Simplify the handling of configs imported from TiDB-Ansible (#1350, @lucklove)
- Implement native scp downloading (#1382, @AstroProfundis)
- Update and fix dependencies (#1362, @AstroProfundis; #1407, @dveeden)
- Fix the issue that upgrade process may fail if the PD node is not available for longer than normal after restart (#1359, @AstroProfundis)
- Fix incorrect `MALLOC_CONF` value for TiKV nodes, set `prof_active` to `false` (#1361 #1369, @YangKeao)
  - Risk of this issue: generating prof data for TiKV nodes with `prof_active=true` may cause high CPU systime usage in some circumstances; users need to regenerate startup scripts for TiKV nodes with `tiup cluster reload <cluster-name> -R tikv` to make the update apply
- Fix the issue that the global `log_dir` is not generated correctly for absolute paths (#1376, @lucklove)
- Fix the issue that the `display` command may report a label mismatch warning if `placement-rule` is enabled (#1378, @lucklove)
- Fix the issue that the SELinux setting is incorrect when `tiup-cluster` tries to disable it with `check --apply` (#1383, @AstroProfundis)
- Fix the issue that when scaling out an instance on a host imported from `tidb-ansible`, the process may report an error about monitor directory conflict (#1386, @lucklove)
- Allow scaling in a cluster when there is no TiSpark master node but there are worker nodes in the topology (#1363, @AstroProfundis)
- Make port check error message more clear to users (#1367, @JinLingChristopher)
- Fix OS check for RHEL in `tiup-cluster` (#1336, @AstroProfundis)
- Check for command dependencies before downloading packages in install script (#1348, @AstroProfundis)
- Fix the issue that the install script downloads an old TiUP package (#1349, @lucklove)
- Fix the issue that a drainer node imported from TiDB-Ansible may have an incorrect `data_dir` (#1346, @AstroProfundis)
- Optimize some subcommands of `tiup mirror` (#1331, @AstroProfundis)
- Set proper User-Agent for requests downloading manifests and files from remote (#1342, @AstroProfundis)
- Add basic telemetry report for `tiup` and `playground` (#1341 #1353, @AstroProfundis)
- Send meta output from `tiup` to `stderr` so it does not mix with output of components (#1298, @dveeden)
- Update confusing version selection examples in help message of `playground` (#1318, @AstroProfundis)
- Fix the issue that the `tiup mirror clone` command does not exclude yanked components correctly (#1321, @lucklove)
- Adjust output messages and operation processes of the `tiup mirror` command (#1302, @AstroProfundis)
- Add `tiup mirror show` subcommand to display the current mirror address in use (#1317, @baurine)
- Optimize error handling if `root.json` fails to load (#1303, @AstroProfundis)
- Update MySQL client connection example in `playground` (#1323, @tangenta)
- Adjust data and fields reported via telemetry (#1327, @AstroProfundis)
- Fix pprof failing for TiKV in playground (#1272, @hicqu)
- Fix the issue that TiFlash nodes may fail to restart in playground (#1280, @lucklove)
- Fix the issue that `binlog_enable` is not imported from tidb-ansible correctly (#1261, @lucklove)
- Fix directory conflict check error for TiDB and DM clusters imported from ansible deployment (#1273, @lucklove)
- Fix compatibility issue during upgrade for PD v3.x (#1274, @lucklove)
- Fix failure of parsing very long audit log in replay for tiup-cluster (#1259, @lucklove)
- Fix log dir path of Grafana for tiup-cluster (#1276, @rkazak)
- Fix config check error when the cluster was deployed with a legacy nightly version in tiup-cluster (#1281, @AstroProfundis)
- Fix error when using nightly version while the actual component is not available in repo (#1294, @lucklove)
- Refine PD scaling script rendering to optimize the code (#1253, @9547)
- Start PD and DM master nodes sequentially (#1262, @9547)
- Properly follow the ignore config check argument in reload for tiup-cluster (#1265, @9547)
- EXPERIMENTAL: Add support of Apple M1 devices (#1122, @terasum @AstroProfundis @sunxiaoguang)
  - Playground may not fully work as some components don't yet have packages released for `darwin-arm64`
- Not displaying dashboard address if it's "none" or "auto" (#1054, @9547)
- Support filtering nodes and roles in `check` subcommand of tiup-cluster (#1030, @AstroProfundis)
- Support retrying failed operations from where they broke with the `replay` command of tiup-cluster and tiup-dm (#1069 #1157, @lucklove)
- Support upgrading and patching a stopped TiDB / DM cluster (#1096, @lucklove)
- Support setting global custom values for topology of tiup-cluster (#1098, @lucklove)
- Support custom `root_url` and anonymous login for Grafana in tiup-cluster (#1085, @mianhk)
- Support remote read and remote write for Prometheus node in tiup-cluster (#1070, @XSHui)
- Support custom external AlertManager target for Prometheus node in tiup-cluster (#1149, @lucklove)
- Support force reinstallation of already installed components (#1145, @9547)
- Add `--force` and retain-data options to tiup-dm (#1080, @9547)
- Add `enable`/`disable` subcommands to tiup-dm (#1114, @9547)
- Add `template` subcommand to tiup-cluster to print pre-defined topology templates (#1156, @lucklove)
- Add `--version` option to `display` subcommand of tiup-cluster to print the cluster version (#1207, @AstroProfundis)
- Allow value type change when editing topology with `edit-config` subcommand of tiup-cluster (#1050, @AstroProfundis)
- Not allowing deployment if the input topology file is empty (#994, @AstroProfundis)
- Fix data dir setting for Prometheus (#1040, @9547)
- Fix the issue that pre-defined Prometheus rules may be missing if a custom `rule_dir` is set (#1073, @9547)
- Fix the issue that config files of Prometheus and Grafana are not checked before start (#1074, @9547)
- Fix the issue that cluster name is not validated for some operations (#1177, @AstroProfundis)
- Fix the issue that tiup-cluster reloads a cluster even if the config may contain errors (#1183, @9547)
- Fix the issue that `publish` command may fail when uploading files without retry (#1174 #1202, @AstroProfundis; #1167, @lucklove)
- Fix the issue that newly added TiFlash nodes may fail to start during `scale-out` in tiup-cluster (#1227, @9547)
- Fix incorrect cluster name in alert messages (#1238, @9547)
- Fix the issue that blackbox_exporter may not collect ping metrics correctly (#1250, @STRRL)
- Reduce jitter during upgrade process of TiDB cluster
  - Make sure a PD node is online and serving before upgrading the next one (#1032, @HunDunDM)
  - Upgrade the PD leader node after upgrading other PD nodes (#1086, @AstroProfundis)
  - Increase schedule limit during upgrade of TiKV nodes (#1661, @AstroProfundis)
  - Add check to validate if all regions are healthy (#1126, @AstroProfundis)
- Only reload Prometheus configs when needed (#989, @9547)
- Show default option on prompted input messages (#1132 #1134, @wangbinhe3db)
- Include user's input in error message if prompted challenge didn't pass (#1104, @AstroProfundis)
- Check for `data_dir` and `log_dir` overlap before deploying a cluster (#1093, @9547)
- Improve checking rules in `tiup cluster check` command (#1099 #1107, @AstroProfundis; #1118 #1124, @9547)
- Refine `list` and `display` commands for tiup-cluster (#1139, @baurine)
- Mark patched nodes in `display` output of tiup-cluster and tiup-dm (#1125, @AstroProfundis)
- Ignore `users.*` settings for TiFlash if the cluster version is later than v4.0.12 and v5.0.0-rc (#1211, @JaySon-Huang)
- Cache `timestamp` manifest in memory to reduce network requests (#1212, @lucklove)
- Upgrade toolchain to Go 1.16 (#1151 #1153 #1130, @AstroProfundis)
- Use GitHub Actions to build and release TiUP components (#1158, @AstroProfundis)
- Remove deprecated `v0manifest` support; TiUP versions before v1.0.0 may not be able to download latest packages anymore (#906)
- Fix the issue that metrics of tiflash-server instances may not be collected correctly (#1083, @yuzhibotao)
- Fix the issue that tiup-cluster disables monitoring services unexpectedly (#1088, @lucklove)
- Fix wrong dashboard name for lightning in Grafana after renaming a cluster with tiup-cluster (#1196, @9547)
- Fix the issue that the tiup-cluster `prune` command may try to generate config for removed nodes (#1237, @lucklove)
- Fix the issue that tiup-cluster can't generate prometheus config (#1185, @lucklove)
- Fix the issue that tiup may choose yanked version if it's already installed (#1191, @lucklove)
- Fix the issue that tiup will hang forever when reloading a stopped cluster (#1044, @9547)
- Fix the issue that `tiup mirror merge` does not work on official offline packages (#1121, @lucklove)
- Fix the issue that there may be no retry when downloading a component fails (#1137, @lucklove)
- Fix the issue that PD dashboard does not report grafana address in playground (#1142, @9547)
- Fix the issue that the default selected version may be a prerelease version (#1128, @lucklove)
- Fix the issue that the error message is confusing when the patched tar is not correct (#1175, @lucklove)
- Add a hint in the install script that darwin-arm64 is not supported (#1123, @terasum)
- Improve playground welcome information for connecting TiDB (#1133, @dveeden)
- Bind latest stable grafana and prometheus when deploying DM (#1129, @lucklove)
- Use the advertised host instead of 0.0.0.0 for tiup-playground (#1152, @9547)
- Check tarball checksum on tiup-server when publishing components (#1163, @lucklove)
- Fix the issue that the grafana and alertmanager target not set in prometheus.yaml (#1041, @9547)
- Fix the issue that grafana deployed by tiup-dm missing home.json (#1056, @lucklove)
- Fix the issue that the expiration of a cloned mirror is shortened after publishing a component to it (#1051, @lucklove)
- Fix the issue that tiup-cluster may remove wrong paths for imported cluster on scale-in (#1068, @AstroProfundis)
  - Risk of this issue: if an imported cluster has a deploy dir ending with `/` and sub dirs like `<deploy-dir>//sub`, it could result in deleting wrong paths on scale-in
- Fix the issue that imported `*_exporter` has wrong binary path (#1101, @AstroProfundis)
- Apply more strict checks on tar.gz files for the `patch` command: check if the entry is an executable file (#1091, @lucklove)
- Workaround the issue that store IDs in PDs may not monotonically assigned (#1011, @AstroProfundis)
  - Currently, the ID allocator is guaranteed not to allocate duplicated IDs, but when the PD leader changes multiple times, the IDs may not be monotonic
  - For tiup < v1.2.1, the command `tiup cluster display` may delete stores (without confirmation) by mistake due to this issue (high risk)
  - For tiup >= v1.2.1 and <= v1.3.0, the command `tiup cluster display` may display `up` stores as `tombstone`, and encourage the user to delete them with the command `tiup cluster prune` (medium risk)
- Fix the issue that `cluster check` always fails the THP check even though THP is disabled (#1005, @lucklove)
- Fix the issue that the command `tiup mirror merge -h` outputs wrong usage (#1008, @lucklove)
  - The syntax of this command should be `tiup mirror merge <mirror-dir-1> [mirror-dir-N]` but it outputs `tiup mirror merge <base> <mirror-dir-1> [mirror-dir-N]`
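The corrected syntax means the base mirror is the one currently in use, so only the mirrors to be merged in are passed as arguments. Directory names below are placeholders, and the `tiup` commands are comments since they require existing mirrors on disk.

```shell
# Select the base mirror, then merge others into it:
#   tiup mirror set ./base-mirror
#   tiup mirror merge ./extra-mirror-1 ./extra-mirror-2
# The fixed usage string, as documented above:
echo 'usage: tiup mirror merge <mirror-dir-1> [mirror-dir-N]'
```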
- Fix the issue that prometheus doesn't collect drainer metrics (#1012, @SE-Bin)
- Reduce display duration when PD nodes encounter network problems and drop packets (#986, @9547)
- cluster, dm: support version input without leading 'v' (#1009, @AstroProfundis)
- Add a warning to explain that we will stop the cluster before cleaning logs (#1029, @lucklove)
  - When a user tries to clean logs with the command `tiup cluster clean --logs`, they may expect the cluster to still be running during the clean operation
  - The actual situation is not what they expect, which may surprise the user (risk)
- Modify TiFlash's query memory limit from 10GB to 0 (unlimited) in playground cluster (#907, @LittleFall)
- Import configuration into topology meta when migrating a cluster from Ansible (#766, @yuzhibotao)
  - Before, we stored imported ansible config in ansible-imported-configs, which is hidden from users; in this release, we merge the configs into meta.yaml so that the user can see the config with the command `tiup cluster edit`
- Enhance the `tiup mirror` command (#860, @lucklove)
  - Support merging two or more mirrors into one
  - Support publishing components to a local mirror besides the remote mirror
  - Support adding component owners to a local mirror
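The merge workflow added here can be sketched as follows (the directory names are placeholders; `tiup mirror set` switches the active mirror):

```shell
# Use a local mirror as the base that will receive the merged components
tiup mirror set ./base-mirror

# Merge one or more other local mirrors into it
tiup mirror merge ./extra-mirror-1 ./extra-mirror-2
```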
- Partially support deploying a cluster with hostnames instead of IP addresses (EXPERIMENTAL) (#948, #949, @fln)
  - Not usable for production, as there would be issues if a hostname resolves to a new IP address after deployment
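A minimal topology sketch using a hostname rather than an IP (the hostname `tidb-host-01` is a placeholder; note the caveat above about the hostname resolving to a new IP after deployment):

```yaml
# topology.yaml -- EXPERIMENTAL: hosts given as resolvable hostnames
pd_servers:
  - host: tidb-host-01
tidb_servers:
  - host: tidb-host-01
tikv_servers:
  - host: tidb-host-01
```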
- Support setting custom timeout for waiting instances up in playground-cluster (#968, @unbyte)
- Support checking and disabling THP in `tiup cluster check` (#964, @anywhy)
- Support signing remote manifests and rotating root.json (#967, @lucklove)
- Fix the issue that the public key created by TiUP was not removed after the cluster was destroyed (#910, @9547)
- Fix the issue that user-defined grafana username and password were not imported from tidb-ansible clusters correctly (#937, @AstroProfundis)
- Fix the issue that playground cluster does not quit components in the correct order: TiDB -> TiKV -> PD (#933, @unbyte)
- Fix the issue that TiKV reports a wrong advertise address when `--status-addr` is set to a wildcard address like `0.0.0.0` (#951, @lucklove)
- Fix the issue that Prometheus doesn't reload targets after a scale-in action (#958, @9547)
- Fix the issue that the config file for TiFlash is missing in playground cluster (#969, @unbyte)
- Fix TiFlash startup failing without stderr output when NUMA is enabled but numactl cannot be found (#984, @lucklove)
- Fix the issue that the deployment environment fails to copy config files when zsh is configured (#982, @9547)
- Enable memory buddyinfo monitoring on node_exporter to collect statistics of memory fragmentation (#904, @9547)
- Move error logs dumped by tiup-dm and tiup-cluster to `${TIUP_HOME}/logs` (#908, @9547)
- Allow running a pure TiKV (without TiDB) cluster in playground cluster (#926, @sticnarf)
- Add confirm stage for upgrade action (#963, @Win-Man)
- Omit debug log from console output in tiup-cluster (#977, @AstroProfundis)
- Prompt list of paths to be deleted before processing in the clean action of tiup-cluster (#981, #993, @AstroProfundis)
- Make error message of monitor port conflict more readable (#966, @JaySon-Huang)
- Fix the issue that a cluster with tispark workers but no tispark master can't be operated (#924, @AstroProfundis)
  - Root cause: once the tispark master has been removed from the cluster, any later action will be rejected by TiUP
  - Fix: make it possible for broken clusters to fix the no tispark master error by scaling out a new tispark master node
- Fix the issue that it reports `pump node id not found` when it is actually the drainer node id that is not found (#925, @lucklove)
- Support deploy TiFlash on multi-disks with "storage" configurations since v4.0.9 (#931, #938, @JaySon-Huang)
- Check for duplicated pd_servers.name in the topology before actually deploying the cluster (#922, @anywhy)
- Fix the issue that Pump & Drainer has different node id between tidb-ansible and TiUP (#903, @lucklove)
  - For clusters imported from tidb-ansible, if the pump or drainer is restarted, it will start with a new node id
  - Risk of this issue: binlog may not work correctly after restarting pump or drainer
- Fix the issue that audit log may get lost in some special case (#879, #882, @9547)
  - If the user executes two commands one right after the other, and the second one quits within 1 second, the audit log of the first command will be overwritten by the second one
  - Risk caused by this issue: some audit logs may get lost in the above case
- Fix the issue that new components deployed with `tiup cluster scale-out` don't auto-start when rebooting (#905, @9547)
  - Risk caused by this issue: the cluster may be unavailable after rebooting
- Fix the issue that data directory of TiFlash is not deleted if multiple data directories are specified (#871, @9547)
- Fix the issue that `node_exporter` and `blackbox_exporter` are not cleaned up after scaling in all instances on the specified host (#857, @9547)
- Fix the issue that the patch command fails when trying to patch a dm cluster (#884, @lucklove)
- Fix the issue that the bench component reports `Error 1105: client has multi-statement capability disabled` (#887, @mahjonp)
- Fix the issue that the TiSpark node can't be upgraded (#901, @lucklove)
- Fix the issue that playground cluster can't start TiFlash with newest nightly PD (#902, @lucklove)
- Ignore the no tispark master error when listing clusters since the master node may be removed by `scale-in --force` (#920, @AstroProfundis)
- Introduce a safer way to clean up tombstone nodes (#858, @lucklove)
  - When a user scales in a TiKV server, its data is not deleted until the user executes a `display` command; this is risky because there is no chance for the user to confirm
  - We have added a `prune` command for the cleanup stage; the `display` command will not clean up tombstone instances any more
- Skip auto-starting the cluster before the scale-out action because there may be damaged instances that can't be started (#848, @lucklove)
  - In this version, users should make sure the cluster is working correctly by themselves before executing `scale-out`
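Since the cluster is no longer auto-started before scaling out, a manual health check along these lines is advisable (cluster name and topology file name are placeholders):

```shell
# Confirm every instance reports "Up" before adding new nodes
tiup cluster display <cluster-name>

# Then proceed with the scale-out using the new nodes' topology file
tiup cluster scale-out <cluster-name> scale-out.yaml
```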
- Introduce a more graceful way to check TiKV labels (#843, @lucklove)
  - Before this change, we checked TiKV labels from the config files of TiKV and PD servers; however, servers imported from a tidb-ansible deployment don't store the latest labels in local config, which caused inaccurate label information
  - After this change, we fetch PD and TiKV labels via the PD API in the display command
- Fix the issue that there is a data race when concurrently saving the same file (#836, @9547)
  - We found that when a cluster was deployed with TLS enabled, the ca.crt file was saved multiple times in parallel, which could leave the ca.crt file empty
  - The impact of this issue is that the tiup client may not be able to communicate with the cluster
- Fix the issue that files copied by TiUP may have different modes from the original files (#844, @lucklove)
- Fix the issue that the tiup script is not updated after scaling in PD (#824, @9547)
- Support tiup env sub command (#788, @lucklove)
- Support TiCDC for playground (#777, @leoppro)
- Support limiting core dump size (#817, @lucklove)
- Support using latest Spark and TiSpark release (#779, @lucklove)
- Support new cdc arguments `gc-ttl` and `tz` (#770, @lichunzhu)
- Support specifying custom ssh and scp paths (#734, @9547)
- Fix the issue that `tiup update --self` results in tiup's binary file being deleted (#816, @lucklove)
- Fix per-host custom ports for drainer not being handled correctly on importing (#806, @AstroProfundis)
- Fix the issue that help message is inconsistent (#758, @9547)
- Fix the issue that dm not applying config files correctly (#810, @lucklove)
- Fix the issue that playground display wrong TiDB number in error message (#821, @SwanSpouse)
- Automatically check if TiKV's label is set (#800, @lucklove)
- Download component with stream mode to avoid memory explosion (#755, @9547)
- Save and display absolute paths for the deploy directory, data directory and log directory to avoid confusion (#822, @lucklove)
- Redirect DM stdout to log files (#815, @csuzhangxc)
- Skip download nightly package when it exists (#793, @lucklove)
- Fix the issue that TiKV store leader count is not correct (#762)
- Fix the issue that TiFlash's data is not cleaned up (#768)
- Fix the issue that `tiup cluster deploy --help` displays the wrong help message (#758)
- Fix the issue that tiup-playground can't display and scale (#749)
- Remove the username `root` in the sudo command #731
- Transfer the default alertmanager.yml if the local config file is not specified #735
- Only remove corresponding config files in InitConfig for the monitor service in case it's a shared directory #736
- [experimental] Support specifying customized configuration files for monitor components (#712, @lucklove)
- Support specifying user group or skipping creating a user in the deploy and scale-out stage (#678, @lucklove)
- to specify the group: https://github.com/pingcap/tiup/blob/master/examples/topology.example.yaml#L7
- to skip creating the user:
tiup cluster deploy/scale-out --skip-create-user xxx
- [experimental] Support renaming a cluster with the command `tiup cluster rename <old-name> <new-name>` (#671, @lucklove)
  - Grafana stores some data related to the cluster name in its grafana.db. The rename action will NOT delete them, so some useless panels may need to be deleted manually.
- [experimental] Introduce the `tiup cluster clean` command (#644, @lucklove):
  - Clean up all data in the specified cluster: `tiup cluster clean ${cluster-name} --data`
  - Clean up all logs in the specified cluster: `tiup cluster clean ${cluster-name} --log`
  - Clean up all logs and data in the specified cluster: `tiup cluster clean ${cluster-name} --all`
  - Clean up all logs and data in the specified cluster, except for the Prometheus service: `tiup cluster clean ${cluster-name} --all --ignore-role Prometheus`
  - Clean up all logs and data in the specified cluster, except for the node `172.16.13.11:9000`: `tiup cluster clean ${cluster-name} --all --ignore-node 172.16.13.11:9000`
  - Clean up all logs and data in the specified cluster, except for the host `172.16.13.12`: `tiup cluster clean ${cluster-name} --all --ignore-node 172.16.13.12`
- Support skipping evicting store when there is only 1 TiKV (#662, @lucklove)
- Support importing clusters with binlog enabled (#652, @AstroProfundis)
- Support yml source format with tiup-dm (#655, @july2993)
- Support detecting port conflict of monitoring agents between different clusters (#623, @AstroProfundis)
- Set correct `deploy_dir` of monitoring agents when importing ansible-deployed clusters (#704, @AstroProfundis)
- Fix the issue that `tiup update --self` may make root.json invalid with an offline mirror (#659, @lucklove)
- Add `advertise-status-addr` for TiFlash to support host names (#676, @birdstorm)
- Clone with yanked version #602
- Support yank a single version on client side #602
- Support bash and zsh completion #606
- Handle yanked version when update components #635
- Validate topology changes after edit-config #609
- Allow continue editing when new topology has errors #624
- Fix wrongly set data_dir of TiFlash when importing from ansible #612
- Support native ssh client #615
- Support refresh configuration only when reload #625
- Apply config file on scaled pd server #627
- Refresh monitor configs on reload #630
- Support POSIX-style argument for the user flag #631
- Fix PD config incompatible when retrieving dashboard address #638
- Integrate tispark #531 #621