This repository has been archived by the owner on Nov 24, 2023. It is now read-only.
Releases · pingcap/dm
DM v2.0.0-RC.2
[2.0.0-rc.2] 2020-09-01
Improvements
- Support more AWS Aurora-specific privileges when pre-checking the data migration task #950
- Check whether GTID is enabled for the upstream MySQL/MariaDB when configuring `enable-gtid: true` and creating a data source #957
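For reference, a minimal sketch of a source configuration with GTID enabled and the step where the new check applies; the `source-id`, connection details, file names, and DM-master address are placeholders, and fields may differ between DM versions.

```bash
# A minimal sketch of a source configuration with GTID enabled; the
# source-id, host, credentials, and file names are placeholders.
cat > source1.yaml <<'EOF'
source-id: "mysql-replica-01"
enable-gtid: true        # the new pre-check verifies GTID is enabled on the upstream
from:
  host: "127.0.0.1"
  port: 3306
  user: "root"
  password: ""
EOF

# Creating the data source is the point where DM applies the GTID check.
dmctl --master-addr 127.0.0.1:8261 operate-source create source1.yaml
```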
Bug fixes
- Fix the `Column count doesn't match value count` error that occurs in the running migration task after automatically upgrading the DM cluster from v1.0.x to v2.0.0-rc #952
- Fix the issue that the DM-worker or DM-master component might not correctly exit #963
- Fix the issue that the `--no-locks` argument does not take effect on the dump processing unit in DM v2.0 #961
- Fix the `field remove-meta not found in type config.TaskConfig` error that occurs when using the task configuration file of the v1.0.x cluster to start the task of a v2.0 cluster #965
- Fix the issue that when the domain name is used as the connection address of each component, the component might not be correctly started #955
- Fix the issue that the connection between the upstream and downstream might not be released after the migration task is stopped #943
- Fix the issue that in the optimistic sharding DDL mode, concurrently executing the DDL statement on multiple sharded tables might block the sharding DDL coordination #944
- Fix the issue that the newly started DM-master might cause `list-member` to panic #970
Action required
- When upgrading from a previous version, note that you must upgrade all DM components (dmctl/DM-master/DM-worker) together
DM v2.0.0-RC
Improvements
- Support high availability for data migration tasks
- Add an optimistic mode for sharding DDL statements
- Add the `handle-error` command to handle errors during DDL incremental replication
- Add a `workaround` field in the error returned by `query-status` to suggest the error handling method
- Improve the monitoring dashboards and alert rules
- Replace Mydumper with Dumpling as the full export unit
- Support the GTID mode when performing incremental replication to the downstream
- Support TLS connections between upstream and downstream databases, and between DM components
- Support the incremental replication scenarios where the table of the downstream has more columns than that of the upstream
- Add a `--remove-meta` option to the `start-task` command to clean up metadata related to data migration tasks (see the sketch after this list)
- Support dropping columns with single-column indices
- Support automatically cleaning up temporary files after a successful full import
- Support checking whether the table to be migrated has a primary key or a unique key before starting a migration task
- Support connectivity check between dmctl and DM-master while starting dmctl
- Support connectivity check for downstream TiDB during the execution of `start-task`/`check-task`
- Support replacing task names with task configuration files for some commands such as `pause-task`
- Support logs in `json` format for DM-master and DM-worker components
- Remove the call stack information and redundant fields in the error message returned by `query-status`
- Improve the binlog position information of the upstream database returned by `query-status`
- Improve the processing of `auto resume` when an error is encountered during the full export
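As referenced above, a hedged sketch of how the new `--remove-meta` option might be used together with `query-status`; the DM-master address, task file, and task name are placeholders.

```bash
# Start a task and remove metadata left over from a previous run of the
# same task; the DM-master address and task file name are placeholders.
dmctl --master-addr 127.0.0.1:8261 start-task --remove-meta task.yaml

# Inspect the task; errors returned here no longer carry call stacks or
# redundant fields, and may include a "workaround" suggestion.
dmctl --master-addr 127.0.0.1:8261 query-status my-task
```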
Bug fixes
- Fix the issue of goroutine leak after executing `stop-task`
- Fix the issue that the task might not be paused after executing `pause-task`
- Fix the issue that the checkpoint might not be saved correctly in the initial stage of incremental replication
- Fix the issue that the `BIT` data type is incorrectly handled during incremental replication
Action required
- When upgrading from a previous version, note that you must upgrade all DM components (dmctl/DM-master/DM-worker) together
Detailed Bug Fixes and Changes
- Support high availability for data migration tasks #473
- Add an optimistic mode for sharding DDL statements #568
- Add the `handle-error` command to handle errors during DDL incremental replication #850
- Add a `workaround` field in the error returned by `query-status` to suggest the error handling method #753
- Improve the monitoring dashboards and alert rules #853
- Replace Mydumper with Dumpling as the full export unit #540
- Support the GTID mode when performing incremental replication to the downstream #521
- Support TLS connections between upstream and downstream databases, and between DM components #569
- Support the incremental replication scenarios where the table of the downstream has more columns than that of the upstream #379
- Add a `--remove-meta` option to the `start-task` command to clean up metadata related to data migration tasks #651
- Support dropping columns with single-column indices #801
- Support automatically cleaning up temporary files after a successful full import #770
- Support checking whether the table to be migrated has a primary key or a unique key before starting a migration task #870
- Support connectivity check between dmctl and DM-master while starting dmctl #786
- Support connectivity check for downstream TiDB during the execution of `start-task`/`check-task` #769
- Support replacing task names with task configuration files for some commands such as `pause-task` #854
- Support logs in `json` format for DM-master and DM-worker components #808
- Remove the call stack information in the error message returned by `query-status` #746
- Remove the redundant fields in the error message returned by `query-status` #771
- Improve the binlog position information of the upstream database returned by `query-status` #830
- Improve the processing of `auto resume` when an error is encountered during the full export #872
- Fix the issue of goroutine leak after executing `stop-task` #731
- Fix the issue that the task might not be paused after executing `pause-task` #644
- Fix the issue that the checkpoint might not be saved correctly in the initial stage of incremental replication #758
- Fix the issue that the `BIT` data type is incorrectly handled during incremental replication #876
DM v2.0.0-beta.2
fix(shardddl): fix init schema fetch before applied DDL to downstream…
DM v1.0.6
Improvements
- Support the original plaintext passwords for upstream and downstream databases
- Support configuring session variables for DM’s connections to upstream and downstream databases (see the sketch after this list)
- Remove the call stack information in some error messages returned by the `query-status` command when the data migration task encounters an exception
- Filter out the items that pass the precheck from the message returned when the precheck of the data migration task fails
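As referenced above, a rough sketch of how session variables for the downstream connection might be declared in a task configuration; the placement of the `session` block and the variable names shown here are assumptions for illustration and may differ by DM version.

```bash
# Fragment of a task configuration (printed via heredoc purely for
# illustration); the session block placement and variables are assumptions.
cat <<'EOF'
target-database:
  host: "127.0.0.1"
  port: 4000
  user: "root"
  password: ""
  session:
    sql_mode: "ANSI_QUOTES,NO_ZERO_IN_DATE,NO_ZERO_DATE"
    time_zone: "+00:00"
EOF
```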
Bug fixes
- Fix the issue that the data migration task is not automatically paused and the error cannot be identified by executing the `query-status` command if an error occurs when the load unit creates a table
- Fix possible DM-worker panics when data migration tasks run simultaneously
- Fix the issue that the existing data migration task cannot be automatically restarted when the DM-worker process is restarted if the `enable-heartbeat` parameter of the task is set to `true`
- Fix the issue that the shard DDL conflict error may not be returned after the task is resumed
- Fix the issue that the `replicate lag` information is displayed incorrectly for an initial period of time when the `enable-heartbeat` parameter of the data migration task is set to `true`
- Fix the issue that `replicate lag` cannot be calculated using the heartbeat information when `lower_case_table_names` is set to `1` in the upstream database
- Disable the meaningless auto-resume tasks triggered by the `unsupported collation` error during data migration
Action required
- When upgrading from a previous version, note that you must upgrade all DM components (dmctl/DM-master/DM-worker) together
Detailed Bug Fixes and Changes
- Support the original plaintext passwords for upstream and downstream databases #676
- Support configuring session variables for DM’s connections to upstream and downstream databases #692
- Remove the call stack information in some error messages returned by the `query-status` command when the data migration task encounters an exception #733 #747
- Filter out the items that pass the precheck from the message returned when the precheck of the data migration task fails #730
- Fix the issue that the data migration task is not automatically paused and the error cannot be identified by executing the `query-status` command if an error occurs when the load unit creates a table #747
- Fix possible DM-worker panics when data migration tasks run simultaneously #710
- Fix the issue that the existing data migration task cannot be automatically restarted when the DM-worker process is restarted if the `enable-heartbeat` parameter of the task is set to `true` #739
- Fix the issue that the shard DDL conflict error may not be returned after the task is resumed #739 #742
- Fix the issue that the `replicate lag` information is displayed incorrectly for an initial period of time when the `enable-heartbeat` parameter of the data migration task is set to `true` #704
- Fix the issue that `replicate lag` cannot be calculated using the heartbeat information when `lower_case_table_names` is set to `1` in the upstream database #704
- Disable the meaningless auto-resume tasks triggered by the `unsupported collation` error during data migration #735
- Optimize some logs #660 #724 #738
DM v2.0.0-beta.1
.*: fix bug that after execute pause-task the task may still running …
DM v1.0.5
Improvements
- Improve the incremental replication speed when the `UNIQUE KEY` column has the `NULL` value
- Add retry for the `Write conflict` (9007 and 8005) error returned by TiDB
Bug fixes
- Fix the issue that the `Duplicate entry` error might occur during the full data import
- Fix the issue that the replication task cannot be stopped or paused when the full data import is completed and the upstream has no written data
- Fix the issue that the monitoring metrics still display data after the replication task is stopped
Action required
- When upgrading from a previous version, note that you must upgrade all DM components (dmctl/DM-master/DM-worker) together
Detailed Bug Fixes and Changes
- Improve the incremental replication speed when the `UNIQUE KEY` column has the `NULL` value #588 #597
- Add retry for the `Write conflict` (9007 and 8005) error returned by TiDB #632
- Fix the issue that the `Duplicate entry` error might occur during the full data import #554
- Fix the issue that the replication task cannot be stopped or paused when the full data import is completed and the upstream has no written data #622
- Fix the issue that the monitoring metrics still display data after the replication task is stopped #616
- Fix the issue that the `Column count doesn't match value count` error might be returned during the sharding DDL replication #624
- Fix the issue that some metrics such as `data file size` are incorrectly displayed when the paused task of full data import is resumed #570
- Add and fix multiple monitoring metrics #590 #594
DM v1.0.4-hotfix
Bug fixes
- Fix the issue that `Duplicate entry` might be reported in the `load` stage
DM v1.0.4
Improvements
- Add English UI for DM-portal
- Add the `--more` parameter in the `query-status` command to show complete replication status information
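A hedged example of requesting the complete status with the new parameter; the DM-master address and task name are placeholders.

```bash
# Query the complete replication status of a task; the address and task
# name are placeholders.
dmctl --master-addr 127.0.0.1:8261 query-status --more my-task
```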
Bug fixes
- Fix the issue that `resume-task` might fail to resume the replication task which is interrupted by the abnormal connection to the downstream TiDB server
- Fix the issue that the online DDL operation cannot be properly replicated after a failed replication task is restarted because the online DDL meta information has been cleared after the DDL operation failure
- Fix the issue that `query-error` might cause the DM-worker to panic after `start-task` goes into error
- Fix the issue that the relay log file and `relay.meta` cannot be correctly recovered when restarting an abnormally stopped DM-worker process before `relay.meta` is successfully written
Action required
- When upgrading from a previous version, note that you must upgrade all DM components (dmctl/DM-master/DM-worker) together
Detailed Bug Fixes and Changes
- Add English UI for DM-portal #480
- Add the `--more` parameter in the `query-status` command to show complete replication status information #533
- Fix the issue that `resume-task` might fail to resume the replication task which is interrupted by the abnormal connection to the downstream TiDB server #436
- Fix the issue that the online DDL operation cannot be properly replicated after a failed replication task is restarted because the online DDL meta information is cleared after the DDL operation failure #465
- Fix the issue that `query-error` might cause the DM-worker to panic after `start-task` goes into error #519
- Fix the issue that the relay log file and `relay.meta` cannot be correctly recovered when restarting an abnormally stopped DM-worker process before `relay.meta` is successfully written #534
- Fix the issue that the `value out of range` error might be reported when getting `server-id` from the upstream #538
- Fix the issue that when Prometheus is not configured, DM-Ansible prints the wrong error message that DM-master is not configured #438
DM v1.0.3
Improvements
- Add the command mode in dmctl (see the sketch after this list)
- Support replicating the `ALTER DATABASE` DDL statement
- Optimize the error message output
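As referenced above, a sketch contrasting the interactive mode with the new command mode; the DM-master address and task name are placeholders.

```bash
# Interactive mode: open a dmctl shell connected to DM-master and type
# commands at the prompt.
dmctl --master-addr 127.0.0.1:8261

# Command mode: pass a single command on the command line; dmctl runs it
# and exits.
dmctl --master-addr 127.0.0.1:8261 query-status my-task
```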
Bug fixes
- Fix the panic-causing data race issue that occurs when the full import unit pauses or exits
- Fix the issue that `stop-task` and `pause-task` might not take effect when retrying SQL operations to the downstream
Action required
- When upgrading from a previous version, note that you must upgrade all DM components (dmctl/DM-master/DM-worker) together
Detailed Bug Fixes and Changes
- Add the command mode in dmctl #364
- Optimize the error message output #351
- Optimize the output of the `query-status` command #357
- Optimize the privilege check for different task modes #374
- Support checking the duplicate quoted route-rules or filter-rules in task config #385
- Support replicating the `ALTER DATABASE` DDL statement #389
- Optimize the retry mechanism for anomalies #391
- Fix the panic issue caused by the data race when the import unit pauses or exits #353
- Fix the issue that `stop-task` and `pause-task` might not take effect when retrying SQL operations to the downstream #400
- Upgrade golang to v1.13 and upgrade the version of other dependencies #362
- Filter the error that the context is canceled when a SQL statement is being executed #382
- Fix the issue that an error occurring during a rolling update to DM monitor using DM-Ansible causes the update to fail #408
DM v1.0.2
v1.0.2 What's New
Improvements
- Generate some config items for DM-worker automatically
- Generate some config items for replication task automatically
- Simplify the output of `query-status` without arguments
- Manage DB connections directly for downstream
Bug fixes
- Fix some panics that occur when starting up or executing SQL statements
- Fix abnormal sharding DDL replication on DDL execution timeout
- Fix starting task failure caused by the checking timeout or any inaccessible DM-worker
- Fix SQL execution retry for some errors
Action required
- When upgrading from a previous version, note that you must upgrade all DM components (dmctl/DM-master/DM-worker) together
Detailed Bug Fixes and Changes
- Generate random `server-id` for DM-worker config automatically #337
- Generate `flavor` for DM-worker config automatically #328
- Generate `relay-binlog-name` and `relay-binlog-gtid` for DM-worker config automatically #318
- Generate table name list for dumping in task config from black & white table lists automatically #326
- Add concurrency items (`mydumper-thread`, `loader-thread`, and `syncer-thread`) for task config #314 (see the sketch at the end of these notes)
- Simplify the output of `query-status` without arguments #340
- Fix abnormal sharding DDL replication on DDL execution timeout #338
- Fix potential DM-worker panic when restoring subtask from local meta #311
- Fix DM-worker panic when committing a DML transaction failed #313
- Fix DM-worker or DM-master panic when the listening port is being used #301
- Fix retry for error code 1105 #321, #332
- Fix retry for `Duplicate entry` and `Data too long for column` #313
- Fix task check timeout when the upstream has a large number of tables #327
- Fix starting task failure when any DM-worker is not accessible #319
- Fix potential DM-worker startup failure in GTID mode after being recovered from corrupt relay log #339
- Fix in-memory TPS count for sync unit #294
- Manage DB connections directly for downstream #325
- Improve error system by refining error information passed between components #320
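As referenced in the concurrency item above, a rough sketch of how `mydumper-thread`, `loader-thread`, and `syncer-thread` might appear in a task configuration; the values are placeholders and the exact placement (top level or per `mysql-instances` entry) may differ by version.

```bash
# Fragment of a task configuration (printed via heredoc purely for
# illustration); values and placement are assumptions.
cat <<'EOF'
mysql-instances:
  - source-id: "mysql-replica-01"
    mydumper-thread: 4   # threads used by the dump unit
    loader-thread: 16    # threads used by the full import unit
    syncer-thread: 16    # threads used by the incremental replication unit
EOF
```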