From bf353dcfaea11fbe0805c3f7533428cf5271ec37 Mon Sep 17 00:00:00 2001
From: RingsC
Date: Fri, 30 Jun 2023 18:41:18 +0800
Subject: [PATCH] feat(tianmu): merge to Stonedb 5.7 stable (#1919)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

* docs(download): update the download docs (#1453)

* fix(mtr): resolve the nightly run error (#1458)

* fix(workflow): fix lcov not running in the workflow

* feat(tianmu): new configuration parameters "tianmu_mandatory" and "tianmu_no_key_error" (#1462)

  In version 1.0.4 we discard "MANDATORY_TIANMU" and "NO_KEY_ERROR" in sql_mode.
  tianmu_mandatory specifies whether the Tianmu engine is mandatory for tables; if yes, set it to ON, otherwise set it to OFF.
  tianmu_no_key_error specifies whether DDL statements not supported by the SQL layer are skipped directly instead of reporting errors; if yes, set it to ON, otherwise set it to OFF.

* feat(tianmu): discard "MANDATORY_TIANMU" and "NO_KEY_ERROR" in sql_mode

* feat(tianmu): resolve warning messages when setting the new parameters

* fix(tianmu): (primary/secondary) error 1032 occasionally occurs during primary/secondary synchronization if UUIDs are used as the primary key (#1464)

  Cause: during a primary-key scan under master/slave replication, "ha_tianmu::position()" is called first to obtain the primary key value from the "record" buffer. In this scenario, however, the call to "key_copy()" clears "record", so the subsequent "GetKeys()" reads a null primary key value.

  Solution: because "handler->ref" is not used afterwards, the call to "key_copy()" can simply be deleted.
* test(mtr): add integer/unsigned/in subquery/create temporary testcases and update escape.test (#1196)

* docs(v1.0.3): update the docs for v1.0.3

* fix(docker): fix Docker deployment commands (#1499)

* feat: add Baidu statistics script (#1498)

* fix(website): fix Baidu statistics script (#1502)

* test(mtr): optimize parallel scheduling to execute more innodb engine testcases; add date type and std func testcases (#1196)

* fix crash when the aggregated element was decimal (#1402)

  1. Fix the crash first.
  2. Then redesign the entire aggregated data stream.

* build(deps): bump nth-check and unist-util-select in /website

  Bumps [nth-check](https://github.com/fb55/nth-check) to 2.1.1 and updates ancestor dependency [unist-util-select](https://github.com/syntax-tree/unist-util-select). These dependencies need to be updated together.
  Updates `nth-check` from 1.0.2 to 2.1.1
  - [Release notes](https://github.com/fb55/nth-check/releases)
  - [Commits](https://github.com/fb55/nth-check/compare/v1.0.2...v2.1.1)
  Updates `unist-util-select` from 2.0.2 to 4.0.1
  - [Release notes](https://github.com/syntax-tree/unist-util-select/releases)
  - [Commits](https://github.com/syntax-tree/unist-util-select/compare/2.0.2...4.0.1)
  ---
  updated-dependencies:
  - dependency-name: nth-check
    dependency-type: indirect
  - dependency-name: unist-util-select
    dependency-type: direct:production
  ...
  Signed-off-by: dependabot[bot]

* docs(deploy): update the document and fix the link

* update the copyright

* update the readme (#1324)

* Create FUNDING.yml: update the Sponsor button

* Update FUNDING.yml: fix the link of opencollective

* remove dup lines in git workflow

* feat(tianmu): mv func from public to protected (#1501)

* style change

* strip spaces

* remove dup lines

* change to uint

* fix(tianmu): resolve possible space inflation issues in DDL and insert into select (#366)

  Cause: for multi-versioning, Tianmu first copies the original pack when performing a DML operation, modifies the copy, and writes it back with either an overwrite (if the DATA file has reusable invalid space for the pack) or an append. After the newest pack is written to the file, the version chain points to the last-written address. The problem is in the current TianmuAttr::LoadData logic: every call writes data to disk, so a transaction that writes multiple rows produces multiple copies of the data. Because the transaction has not yet committed, the space of the earlier writes is never released, the overwrite path is never reached, and every write is an append, which is the root cause of the skyrocketing space usage. A transaction that writes a particularly large number of rows therefore causes a space explosion, and one disk IO per loaded row also degrades insert performance.

  Solution: optimize TianmuAttr::LoadData to check whether the pack is full (i.e. has reached 65536 rows) before saving changes; if it is full, write it out, otherwise defer the write to the commit phase.

* feat(tianmu): reconstruct direct insert into parallel execution, improving direct insert performance

* feat(tianmu): add code comments for easy understanding

* fix(tianmu): myloader cannot work if autocommit is disabled (#1510)

  Currently, Tianmu does not support manual transactions and only supports automatic commit, yet it decides whether to commit a transaction based on MySQL's autocommit setting. If autocommit is turned off, the transaction is never committed.

* fix(core): fix bug: the instance occasionally crashes if both fields specified for an equi-join are of string data types (#1476)

* fix: page hover font style

* feat(mtr): fix the mtr usage

  Disabled mtr cases should follow the mtr conventional rules; add a disabled.def to indicate which cases are disabled.

* fix(tianmu): mysqld crashes when starting replication (#1523)

  A null value may be updated with a null value in the delta layer, for example:
  update t set name="xiaohua" where id=1;
  update t set name=null where id=1;
  When this situation is encountered, return directly.

* feat(tianmu): add delta layer mtr

* bug 1538: the instance occasionally crashes when parallelism is enabled for the right table (#1538)

* feat(tianmu): support volcano framework

  The support for the volcano framework spans a series of PRs. Part 1: refine the code framework to make it clean and clear to read.

* docs(quickstart): add stonedb-8.0 compiling guide (#1449)

* fix(website): fix website error (#1449)

* feat(tianmu): support volcano framework (#1546)

  Part 2: remove the global variable `ha_kvstore`, which belongs inside an engine. `ha_kvstore_` is moved into the engine, following the innodb conventional rules.

* fix(tianmu): revert code, mv ret value from try block back to catch block

  [summary] Previously, the assignment of ret was moved into the try block, which is not efficient because ret was set to 1 on every successful truncate. This change moves the assignment back to the catch block, so it is triggered only when truncate fails.
* feat(tianmu): hard code in defs.h (#1481)

  Change magic numbers to readable consts.

* docs: update the compile guides (#1562)

* test(mtr): add more innodb testcases and tianmu range testcase (#1196)

* fix(tianmu): remove excess log printing and add some code comments (#1545)

* fix(tianmu): fix mysqld crash when exec query with AggregateRough, assert failed on i < m_idx.size() at tianmu_attr.h:387, msg: [bad dpn index 0/0] (#1580)

* fix(website): fix the download link of 5.7 (#1518)

* feat(website): update the latest content (#1587)

* feat(tianmu): support volcano framework (#1554)

  Part 3: remove `ha_tianmu_engine` and get it from hton's data. This makes it behave just like innodb, where MySQL gets the innodb handler instance from table->s->file, and it makes the code logic more concise.

* fix: max-width navbar search style

* feat(website): upgrade the docusaurus version (#1604)

  fix #1604

* fix(website): fix Roadmap module location (#1597)

* website(community): update the content

* feat(website): update the logo of XinChuang (#1590)

* fix(tianmu): the instance occasionally crashes when memory leaks (#1549)

* fix(tianmu): modify the assignment method of merge_id, delaying the assignment to ensure that the final value is correct

* fix(tianmu): fix bug in delta layer initialization

* fix(tianmu): resolve the assertion failure caused by memory allocation bugs in pack_int

* fix(tianmu): code format adjustment

* fix(website): fix the wrong QR code (#1624)

* feat(tianmu): add delta layer information output and table name output

* fix(tianmu): perfect atomic operations for delta_table

* feat(tianmu): optimize delta layer merge operations to remove useless logic

* fix(tianmu): assert failed on ptr == buff.get() + data_.sum_len at pack_str.cpp:584 (#1620)

* fix(tianmu): assert failed on oldv <= dpn_->max_i at pack_int.cpp:337 (#1610)

* feat(tianmu): increase assertion printing information and optimize code logic (#1617)

* fix(tianmu): fix mysqld crash in JOIN::propagate_dependencies during a query (#1628)

* fix(tianmu): fix "MySQL server has gone away" when exec query (#1641 #1640)

* fix(tianmu): support insert ignore syntax (#1637)

* fix(tianmu): fix wrong result for a query with input variables (#1647)

* fix(tianmu): fix incorrect result of a query using a subquery derived table (#1662)

* fix(tianmu): fix incorrect results of two queries using a derived table and a custom variable (#1696)

* feat(tianmu): test cases that supplement custom variables (#1703)

* fix(tianmu): fix mysqld crash when assigning return values using both custom variables and derived tables (#1707)

* fix(tianmu): insert ignore can insert duplicate values (#1699)

* fix(tianmu): fix error in the union all query result (#1599)

* remove unused code block

* fix bug and change test case expected result

* add stonedb-8.0 compiling guide for CentOS 7.x

* docs(quickstart): add stonedb-8.0 compiling guide (Chinese) for CentOS 7.x

* fix(tianmu): even if a primary key is defined, duplicate data may be imported (#1648)

* add delete/drop into tianmu log stat

* open log for all cmds

* fit format

* fix(tianmu): fix error result set of the IN subquery with semi join (#1764)

* doc(develop-guide): modify the method for compiling stonedb using docker

  1. Change the version of stonedb from 5.6 to 5.7 in the docs.
  2. List both the manual install and the automatic install in the docs.
  3. Update the reference in the zh-doc to a valid one:
     See: [StoneDB Quick Deployment Guide](https://stonedb.io/zh/docs/getting-started/quick-deployment)
     See: [StoneDB Quick Deployment Guide](https://stonedb.io/zh/docs/quick-deployment)

* docs: add docker compile guide of stonedb 8.0 (#1780)

* feature: remove DBUG_OFF and replace DEBUG_ASSERT with assert

* automatic formatting

* fix: fix storage of DT type

* fix incorrect result of TIME type by distinguishing the processing of TIME type from other time types

* fix(tianmu): fix tianmu crash when setting varchar to num in order by

* docs(website): update the documentation for Compile StoneDB 8.0 in Docker (#1823)

* fix(tianmu): fix up the incompatible type

  1) In the result value setup phase only num is handled, but in some cases non-num types are involved, so these types must be handled as well.
  2) Fix up the boundary of the error codes.

* fix(tianmu): fix up the unknown exception after the instance is killed randomly (#1841)

  If the instance is killed by `kill -9 'pid'` at random, newly inserted data is not written into the `DATA` file when `tianmu_insert_delayed` is set to 0; under this configuration the data is written to memory, not to a `DATA` file. When the instance is killed and restarted, the data is lost, the `DATA` file cannot be found, and a TianmuError exception is thrown. This change makes the instance write data into the `DATA` file immediately, as with `tianmu_insert_delayed=1`, instead of writing it into memory.

* fix(tianmu): fix up the incorrect meta-info that leads to unexpected behavior (#1840)

  In `ColumnShare::scan_dpn`, an exception is thrown to flag inconsistent meta-data whose offset violates the rules. Now the deleted DPNs are no longer added to `segs`, and some auxiliary functions are added to help identify the status of files.

* fix(workflow): nightly build failed (#1830)

  [summary] Currently we disable the tar pkg in ci/cd.

* feat(tianmu): revert assert() --> debug_assert() (#1551)

  [summary] Avoid some assert failures in release mode.

* feat(tianmu): fix up the default delimiter for load data (#1843)

  Tianmu's default delimiter is ';', not '\t'. To follow the mysql convention, we change the default delimiter to `\t`.

* fix(tianmu): revert PR #1841 (#1850)

  This PR reverts PR #1841 due to disk space inflation. It will, however, lead to data inconsistency after the instance is killed at random, because newly inserted data goes to memory and is lost. For the root cause, see the discussion in #1621.

* fix(tianmu): fix up mtr test case for delim of load data command (#1854)

  Fix some MTR test cases because the delimiter of load data changed from ; to \t.

* fix(tianmu): fix up the `group_concat` function in tianmu (#1852)

  Allow the `group_concat` function to execute in tianmu and change `SI` to `SpecialInstruction`. Some exceptions are caught.

* fix(tianmu): fix the instance crash when the result of an aggregate function goes out of bounds (#1856)

  Calculating `sum(length())` leads to a corruption in the destructor of `ValueOrNull`, which frees an array of char. The validity of the array should be checked before it is deleted, and afterwards its pointer should be set to nullptr, which makes this a safe piece of code.

* docs(developer-guide): update the compiling guide of stonedb 8.0 for centos7.x (#1817)

* fix(tianmu): fix the mem leakage of the aggregation function

  1. Fix the memory leakage of the aggregation function, which may lead to malloc failures.
  2. Re-impl the operator= of `ValueOrNull`.
  3. Fix the assertion of `down_cast` in `Query::ClearSubselectTransformation`, `Item_func_trig_cond`.
* fix(tianmu): fix UNION of non-matching columns (column no 0)

* test(tianmu): add order by sentence in the mtr case various_join.test

* test(mtr): add more test cases for tianmu (#1196)

  [summary]
  case_when.test
  drop_restric.test
  empty_string_not_null.test
  left_right_func.test
  like_not_like.test
  multi_join.test
  order_by.test
  ssb_small.test
  union_case.test

* test(mtr): add order by sentence in the mtr case various_join.test

* ci(codecov): update the config

* fix(tianmu): support the ignore option for the update statement

  Support the `update ignore` statement; the logic of the uniqueness check is re-implemented.

* ci(codecov): update the codecov config

* docs(intro): update the support for 8.0

* workflow(codecov): filter out excess code files

* workflow(coverage): update the lcov running logic

* fix(tianmu): default value of the field takes no effect in load (#1865)

  Cause: in ParsingStrategy::ParseResult ParsingStrategy::GetOneRow, field->val_str(str) cannot distinguish 0 from a NULL value.
  Solution: check whether the field's default value is NULL.

* fix(tianmu): support union (all) in a statement without a from clause

  1. Fix unsupported union or union all in a SQL statement without a from clause.
  2. Re-format some code and functions.

* fix(tianmu): remove unnecessary optimization in tianmu

  1. Remove the unnecessary optimization in the compilation stage of tianmu; it does not help us and may introduce unexpected behaviors.
  2. Refine MTR: issue848, issue1865, alter_table1, issue1523.

* fix(tianmu): hotfix corruption in ValueOrNull under multi-thread

  In multi-threaded aggregation, ExpressionColumn can double free because ValueOrNull is not protected: thread A executes ValueOrNull::operator ==, while thread B tries to free it, which crashes the instance.

* fix(tianmu): incorrect result when using a where expr with args > bigint_max (#1564)

  [summary]
  1. static_cast<int64_t>(18446744073709551601) = -15
  2. Item sets 18446744073709551601 with the unsigned flag, but when tianmu transforms it to ValueOrNull, the value is set to `-15`.
  3. Add the `unsigned flag` in value_or_null & TianmuNum & tianmu expr.

* fix(tianmu): add TIME_to_ulonglong_time_round process and fix up the precision loss problem (#1173)

  When converting TIME/DATETIME to a ulonglong numeric, the tianmu engine does not take the TIME_to_ulonglong_time_round process, which makes the results differ from innodb. Furthermore, when the tianmu_insert_delayed parameter is off and an insert SQL is executed, TIME/DATETIME/TIMESTAMP data loses precision due to incomplete attribute copying.

  PR Close #1173

* fix(tianmu): fix format using clang-format #792

* feat: rm files after rebase leftover #1217

  files deleted:
  storage/tianmu/core/rc_attr_typeinfo.h
  storage/tianmu/handler/tianmu_handler.cpp
  storage/tianmu/handler/tianmu_handler_com.cpp
  storage/tianmu/types/rc_data_types.cpp
  storage/tianmu/types/rc_num.cpp
  storage/tianmu/types/rc_num.h
  storage/tianmu/types/rc_value_object.cpp

* fix(sql,tianmu): fix: when the binlog format is row, the load data statement cannot be recorded (#1876)

  1. Tianmu uses its own code to handle load, which lacks support for the row format of binlog.
  2. When Tianmu parses rows, write the table map event first.
  3.
Once Tianmu constructs a row, add it to the rows log event; when parsing is done, the rows log event is also ready, then write it to the binlog.

---------

Signed-off-by: dependabot[bot]
Co-authored-by: LiMK
Co-authored-by: lihongjian
Co-authored-by: shizhao
Co-authored-by: adofsauron
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: StoneAtom <98451811+stoneatomadmin@users.noreply.github.com>
Co-authored-by: zzzz-vincent
Co-authored-by: wisehead
Co-authored-by: Agility6
Co-authored-by: Double0101
Co-authored-by: dysprosium0626 <1119493091@qq.com>
Co-authored-by: augety
Co-authored-by: xuxinqiang
Co-authored-by: Jinrong Duan
Co-authored-by: unknown
Co-authored-by: hustjieke
---
 mysql-test/suite/tianmu/r/issue1876.result | 57 ++++++++++++++++
 .../suite/tianmu/t/issue1876-master.opt    |  5 ++
 mysql-test/suite/tianmu/t/issue1876.test   | 66 +++++++++++++++++++
 sql/binlog.cc                              |  6 +-
 sql/sql_class.h                            |  2 +
 storage/tianmu/core/tianmu_table.cpp       | 42 ++++++++++--
 storage/tianmu/core/tianmu_table.h         |  1 +
 storage/tianmu/loader/load_parser.cpp      | 15 +++++
 8 files changed, 188 insertions(+), 6 deletions(-)
 create mode 100644 mysql-test/suite/tianmu/r/issue1876.result
 create mode 100644 mysql-test/suite/tianmu/t/issue1876-master.opt
 create mode 100644 mysql-test/suite/tianmu/t/issue1876.test

diff --git a/mysql-test/suite/tianmu/r/issue1876.result b/mysql-test/suite/tianmu/r/issue1876.result
new file mode 100644
index 000000000..2ca5ecba6
--- /dev/null
+++ b/mysql-test/suite/tianmu/r/issue1876.result
@@ -0,0 +1,57 @@
+include/master-slave.inc
+Warnings:
+Note #### Sending passwords in plain text without SSL/TLS is extremely insecure.
+Note #### Storing MySQL user name or password information in the master info repository is not secure and is therefore not recommended. Please consider using the USER and PASSWORD connection options for START SLAVE; see the 'START SLAVE Syntax' in the MySQL Manual for more information.
+[connection master]
+create table t1 (b int not null default 1, c varchar(60) default '\\')engine=tianmu;
+insert into t1 values(1, 'AAAAAAAA');
+insert into t1 values(2, 'BBBBBBBB');
+SELECT * from t1 INTO OUTFILE '1876_tmp_dat';
+create table t2 like t1;
+load data infile '1876_tmp_dat' into table t2;
+CREATE TABLE `column_type_test1` (
+`c_tinyint` tinyint(4) DEFAULT NULL COMMENT 'tinyint',
+`c_smallint` smallint(6) DEFAULT NULL COMMENT 'smallint',
+`c_mediumint` mediumint(9) DEFAULT NULL COMMENT 'mediumint',
+`c_int` int(11) DEFAULT NULL COMMENT 'int',
+`c_bigint` bigint(20) DEFAULT NULL COMMENT 'bigint',
+`c_float` float DEFAULT NULL COMMENT 'float',
+`c_double` double DEFAULT NULL COMMENT 'double',
+`c_decimal` decimal(10,5) DEFAULT NULL COMMENT 'decimal',
+`c_date` date DEFAULT NULL COMMENT 'date',
+`c_datetime` datetime DEFAULT NULL COMMENT 'datetime',
+`c_timestamp` timestamp NULL DEFAULT NULL COMMENT 'timestamp',
+`c_time` time DEFAULT NULL COMMENT 'time',
+`c_char` char(10) DEFAULT NULL COMMENT 'char',
+`c_varchar` varchar(10) DEFAULT NULL COMMENT 'varchar',
+`c_blob` blob COMMENT 'blob',
+`c_text` text COMMENT 'text',
+`c_longblob` longblob COMMENT 'longblob'
+) engine=tianmu;
+insert into column_type_test1 values(100, 100, 100, 100, 100, 5.2, 10.88, 100.08300, '2016-02-25', '2016-02-25 10:20:01', '2007-04-23 08:12:49', '10:20:01', 'stonedb', 'hello', null, 'bcdefghijklmn', null);
+insert into column_type_test1 values(101, 101, 101, 101, 101, 5.2, 10.88, 101.08300, '2016-02-25', '2016-02-25 10:20:01', '1985-08-11 09:10:25', '10:20:01', 'stoneatom', 'hello', null, 'bcdefghijklmn', null);
+SELECT * from column_type_test1 INTO OUTFILE '1876_tmp1_dat';
+create table column_type_test2 like column_type_test1;
+load data infile '1876_tmp1_dat' into table column_type_test2;
+create table user_t1(id int, department varchar(10)) engine=tianmu;
+SELECT * from user_t1 INTO OUTFILE '1876_tmp2_dat';
+create table user_t2 like user_t1;
+load data infile '1876_tmp2_dat' into table user_t2;
+SHOW STATUS LIKE 'Slave_running';
+Variable_name	Value
+Slave_running	ON
+select * from t2;
+b	c
+1	AAAAAAAA
+2	BBBBBBBB
+select * from column_type_test2;
+c_tinyint	c_smallint	c_mediumint	c_int	c_bigint	c_float	c_double	c_decimal	c_date	c_datetime	c_timestamp	c_time	c_char	c_varchar	c_blob	c_text	c_longblob
+100	100	100	100	100	5.2	10.88	100.08300	2016-02-25	2016-02-25 10:20:01	2007-04-23 08:12:49	10:20:01	stonedb	hello	NULL	bcdefghijklmn	NULL
+101	101	101	101	101	5.2	10.88	101.08300	2016-02-25	2016-02-25 10:20:01	1985-08-11 09:10:25	10:20:01	stoneatom	hello	NULL	bcdefghijklmn	NULL
+checksum table user_t2;
+Table	Checksum
+test.user_t2	536836232
+drop table t1, t2;
+drop table column_type_test1, column_type_test2;
+drop table user_t1, user_t2;
+include/rpl_end.inc
diff --git a/mysql-test/suite/tianmu/t/issue1876-master.opt b/mysql-test/suite/tianmu/t/issue1876-master.opt
new file mode 100644
index 000000000..55fa0a48d
--- /dev/null
+++ b/mysql-test/suite/tianmu/t/issue1876-master.opt
@@ -0,0 +1,5 @@
+--testcase-timeout=40
+--secure-file-priv=""
+--tianmu_insert_delayed=off
+--log-bin=bin
+--binlog_format=row
diff --git a/mysql-test/suite/tianmu/t/issue1876.test b/mysql-test/suite/tianmu/t/issue1876.test
new file mode 100644
index 000000000..0e756c3c6
--- /dev/null
+++ b/mysql-test/suite/tianmu/t/issue1876.test
@@ -0,0 +1,66 @@
+--source include/have_tianmu.inc
+--source include/master-slave.inc
+
+connection master;
+create table t1 (b int not null default 1, c varchar(60) default '\\')engine=tianmu;
+insert into t1 values(1, 'AAAAAAAA');
+insert into t1 values(2, 'BBBBBBBB');
+SELECT * from t1 INTO OUTFILE '1876_tmp_dat';
+create table t2 like t1;
+load data infile '1876_tmp_dat' into table t2;
+
+CREATE TABLE `column_type_test1` (
+ `c_tinyint` tinyint(4) DEFAULT NULL COMMENT 'tinyint',
+ `c_smallint` smallint(6) DEFAULT NULL COMMENT 'smallint',
+ `c_mediumint` mediumint(9) DEFAULT NULL COMMENT 'mediumint',
+ `c_int` int(11) DEFAULT NULL COMMENT 'int',
+ `c_bigint` bigint(20) DEFAULT NULL COMMENT 'bigint',
+ `c_float` float DEFAULT NULL COMMENT 'float',
+ `c_double` double DEFAULT NULL COMMENT 'double',
+ `c_decimal` decimal(10,5) DEFAULT NULL COMMENT 'decimal',
+ `c_date` date DEFAULT NULL COMMENT 'date',
+ `c_datetime` datetime DEFAULT NULL COMMENT 'datetime',
+ `c_timestamp` timestamp NULL DEFAULT NULL COMMENT 'timestamp',
+ `c_time` time DEFAULT NULL COMMENT 'time',
+ `c_char` char(10) DEFAULT NULL COMMENT 'char',
+ `c_varchar` varchar(10) DEFAULT NULL COMMENT 'varchar',
+ `c_blob` blob COMMENT 'blob',
+ `c_text` text COMMENT 'text',
+ `c_longblob` longblob COMMENT 'longblob'
+) engine=tianmu;
+insert into column_type_test1 values(100, 100, 100, 100, 100, 5.2, 10.88, 100.08300, '2016-02-25', '2016-02-25 10:20:01', '2007-04-23 08:12:49', '10:20:01', 'stonedb', 'hello', null, 'bcdefghijklmn', null);
+insert into column_type_test1 values(101, 101, 101, 101, 101, 5.2, 10.88, 101.08300, '2016-02-25', '2016-02-25 10:20:01', '1985-08-11 09:10:25', '10:20:01', 'stoneatom', 'hello', null, 'bcdefghijklmn', null);
+SELECT * from column_type_test1 INTO OUTFILE '1876_tmp1_dat';
+create table column_type_test2 like column_type_test1;
+load data infile '1876_tmp1_dat' into table column_type_test2;
+
+create table user_t1(id int, department varchar(10)) engine=tianmu;
+--disable_query_log
+let $i = 0;
+while($i < 70000)
+{
+  eval insert into user_t1 values($i, 'stonedb');
+  inc $i;
+}
+--enable_query_log
+SELECT * from user_t1 INTO OUTFILE '1876_tmp2_dat';
+create table user_t2 like user_t1;
+load data infile '1876_tmp2_dat' into table user_t2;
+
+--sync_slave_with_master
+
+connection slave;
+# check the rpl is running normally
+SHOW STATUS LIKE 'Slave_running';
+
+# the data in table t2 in slave is the same as that in master, meaning the binlog is written correctly
+select * from t2;
+select * from column_type_test2;
+checksum table user_t2;
+
+connection master;
+drop table t1, t2;
+drop table column_type_test1, column_type_test2;
+drop table user_t1, user_t2;
+--sync_slave_with_master
+--source include/rpl_end.inc
diff --git a/sql/binlog.cc b/sql/binlog.cc
index 5017b573d..6d8a4550e 100644
--- a/sql/binlog.cc
+++ b/sql/binlog.cc
@@ -8802,12 +8802,14 @@ TC_LOG::enum_result MYSQL_BIN_LOG::commit(THD *thd, bool all)
       DBUG_RETURN(RESULT_ABORTED);
     }
   }
-  else if (real_trans && xid && trn_ctx->rw_ha_count(trx_scope) > 1 &&
-           !trn_ctx->no_2pc(trx_scope))
+  else if (thd->tianmu_need_xid || (real_trans && xid && trn_ctx->rw_ha_count(trx_scope) > 1 &&
+           !trn_ctx->no_2pc(trx_scope)))
   {
     Xid_log_event end_evt(thd, xid);
     if (cache_mngr->trx_cache.finalize(thd, &end_evt))
       DBUG_RETURN(RESULT_ABORTED);
+    // used for tianmu only
+    thd->tianmu_need_xid= false;
   }
   else
   {
diff --git a/sql/sql_class.h b/sql/sql_class.h
index bb9204de0..8eb6a1a48 100644
--- a/sql/sql_class.h
+++ b/sql/sql_class.h
@@ -1492,6 +1492,8 @@ class THD :public MDL_context_owner,
   { assert(0); return Query_arena::is_conventional(); }
 
 public:
+  /* Used only for tianmu to write xid log event */
+  bool tianmu_need_xid;
   MDL_context mdl_context;
 
   /*
diff --git a/storage/tianmu/core/tianmu_table.cpp b/storage/tianmu/core/tianmu_table.cpp
index 386f0c536..8e95efa29 100644
--- a/storage/tianmu/core/tianmu_table.cpp
+++ b/storage/tianmu/core/tianmu_table.cpp
@@ -1118,11 +1118,20 @@ uint64_t TianmuTable::ProceedNormal(system::IOParameters &iop) {
   auto no_loaded_rows = parser.GetNoRow();
 
-  if (no_loaded_rows > 0 && mysql_bin_log.is_open())
-    if (binlog_load_query_log_event(iop) != 0) {
-      TIANMU_LOG(LogCtl_Level::ERROR, "Write load binlog fail!");
-      throw common::FormatException("Write load binlog fail!");
+  if (no_loaded_rows > 0 && mysql_bin_log.is_open()) {
+    LOAD_FILE_INFO *lf_info = (LOAD_FILE_INFO *)iop.GetLogInfo();
+    THD *thd = lf_info->thd;
+    if (thd->is_current_stmt_binlog_format_row()) {  // if binlog format is row
+      if (binlog_flush_pending_rows_event(iop, true, iop.GetTable()->file->has_transactions()) != 0) {
+        TIANMU_LOG(LogCtl_Level::ERROR, "Write row binlog fail!");
+        throw common::FormatException("Write row binlog fail!");
+      }
+    } else if (binlog_load_query_log_event(iop) != 0) {
+      TIANMU_LOG(LogCtl_Level::ERROR, "Write statement binlog fail!");
+      throw common::FormatException("Write statement binlog fail!");
     }
+  }
+
   timer.Print(__PRETTY_FUNCTION__);
 
   no_rejected_rows = parser.GetNumOfRejectedRows();
@@ -1137,6 +1146,31 @@ uint64_t TianmuTable::ProceedNormal(system::IOParameters &iop) {
   return no_loaded_rows;
 }
 
+int TianmuTable::binlog_flush_pending_rows_event(system::IOParameters &iop, bool stmt_end, bool is_transactional) {
+  DBUG_ENTER(__PRETTY_FUNCTION__);
+  /*
+    We shall flush the pending event even if we are not in row-based
+    mode: it might be the case that we left row-based mode before
+    flushing anything (e.g., if we have explicitly locked tables).
+  */
+  if (!mysql_bin_log.is_open())
+    DBUG_RETURN(0);
+
+  LOAD_FILE_INFO *lf_info = (LOAD_FILE_INFO *)iop.GetLogInfo();
+  THD *thd = lf_info->thd;
+  thd->tianmu_need_xid = true;
+  int error = 0;
+
+  if (Rows_log_event *pending = thd->binlog_get_pending_rows_event(is_transactional)) {
+    if (stmt_end) {
+      pending->set_flags(Rows_log_event::STMT_END_F);
+      thd->clear_binlog_table_maps();
+    }
+    error = mysql_bin_log.flush_and_set_pending_rows_event(thd, 0, is_transactional);
+  }
+  DBUG_RETURN(error);
+}
+
 int TianmuTable::binlog_load_query_log_event(system::IOParameters &iop) {
   char *load_data_query, *end, *fname_start, *fname_end, *p = nullptr;
   size_t pl = 0;
diff --git a/storage/tianmu/core/tianmu_table.h b/storage/tianmu/core/tianmu_table.h
index 368532dc9..34b37f472 100644
--- a/storage/tianmu/core/tianmu_table.h
+++ b/storage/tianmu/core/tianmu_table.h
@@ -159,6 +159,7 @@ class TianmuTable final : public JustATable {
   uint64_t ProceedNormal(system::IOParameters &iop);
   uint64_t ProcessDelayed(system::IOParameters &iop);
   void Field2VC(Field *f, loader::ValueCache &vc, size_t col);
+  int binlog_flush_pending_rows_event(system::IOParameters &iop, bool stmt_end, bool is_transactional);
   int binlog_load_query_log_event(system::IOParameters &iop);
   int binlog_insert2load_log_event(system::IOParameters &iop);
   int binlog_insert2load_block(std::vector &vcs, uint load_obj, system::IOParameters &iop);
diff --git a/storage/tianmu/loader/load_parser.cpp b/storage/tianmu/loader/load_parser.cpp
index e83c11889..7222e703c 100644
--- a/storage/tianmu/loader/load_parser.cpp
+++ b/storage/tianmu/loader/load_parser.cpp
@@ -73,10 +73,25 @@ uint LoadParser::GetPackrow(uint no_of_rows, std::vector &value_buff
     value_buffers.emplace_back(pack_size_, init_capacity);
   }
 
+  THD *thd = io_param_.GetTHD();
+  TABLE *table = io_param_.GetTable();
+  bool is_transactional = table->file->has_transactions();
+  bool need_rows_binlog = false;
+  if (mysql_bin_log.is_open() && thd->is_current_stmt_binlog_format_row()) {
+    need_rows_binlog = true;
+    const bool has_trans = thd->lex->sql_command == SQLCOM_CREATE_TABLE || is_transactional;
+    bool need_binlog_rows_query = thd->variables.binlog_rows_query_log_events;
+    /* write table map event */
+    [[maybe_unused]] int err = thd->binlog_write_table_map(table, has_trans, need_binlog_rows_query);
+  }
+
   uint no_of_rows_returned;
   for (no_of_rows_returned = 0; no_of_rows_returned < no_of_rows; no_of_rows_returned++) {
     if (!MakeRow(value_buffers))
       break;
+    /* write row after one row is ready */
+    if (need_rows_binlog)
+      [[maybe_unused]] int err = thd->binlog_write_row(table, is_transactional, table->record[0], NULL);
   }
 
   last_pack_size_.clear();