MySQL Shell is an advanced client tool with many features, and it is easy to use. The recent MySQL release (8.0.22) adds the utility “exportTable()”, which supports exporting data from a table into a file using MySQL Shell. The exported data file can be imported with the utility “importTable()”, which was released in MySQL 8.0.17.
With “exportTable()”, you can export the data to a local server or to any S3-compatible object storage provider. In this blog, I am going to explain how the exportTable() and importTable() utilities work on a local server, and I also ran some tests with MySQL 5.7 and MySQL 5.6.
Overview of exportTable() & importTable()
exportTable():
- Introduced in MySQL 8.0.22.
- The utility is used to export the data from the MySQL table into a data file.
- It can be used to export the table to a local server or to any S3-compatible object storage provider.
- By default, the data will be exported in the default format for MySQL Shell’s table import utility.
- It supports CSV and TSV files.
- Compression is supported with “gzip” or “zstd”.
- JSON is not supported.
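To show the call shape, here is a hedged sketch of util.exportTable() in MySQL Shell’s JS mode; the schema, table, and paths are illustrative, not from this post:

```js
// Default export: tab-separated file in the format importTable() expects.
util.exportTable("sakila.actor", "file:///backups/actor.tsv")

// Compressed export: the compression option accepts "gzip" or "zstd".
util.exportTable("sakila.actor", "file:///backups/actor.tsv.zst",
                 { compression: "zstd" })
```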
Below is the format of the query that will be used to export the data.
| 43 | root | localhost | NULL | Query | 44 | Sending to client | SELECT SQL_NO_CACHE `<column1>`,`<column2>`,`<column3>`,`<column4>` ... FROM `<db name>`.`<table name>` ORDER BY `<column_pri>` /* mysqlsh exportTable, dumping table `<db name>`.`<table name>`, chunk ID: 1 */ | 39142633 | 0 | |
importTable():
- Introduced in MySQL 8.0.17.
- The utility is used to import the data into a MySQL table from a data file.
- It supports parallel execution, providing a very fast way to import a large data file into a table.
- It is similar to LOAD DATA INFILE.
- By default, the data is expected in the default format of MySQL Shell’s table export utility.
- The data file will be chunked and processed with different threads.
- We can control the number of threads for parallel execution.
- The system variable “local_infile” must be enabled (ON) on the server.
MySQL localhost JS > \sql set global local_infile=on;
Query OK, 0 rows affected (0.0001 sec)

MySQL localhost JS > \sql show global variables like 'local_infile';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| local_infile  | ON    |
+---------------+-------+
1 row in set (0.0016 sec)
Below is the format of the query that will be used to import the data.
| 90 | root | localhost | NULL | Query | 6 | executing | LOAD DATA LOCAL INFILE '/<folder>/<filename>.<extension>' INTO TABLE `<db name>`.`<table name>` FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"' ESCAPED BY '\\' LINES STARTING BY '' TERMINATED BY '\n' | |
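MySQL Shell handles the chunking and threading internally; as a rough illustration of the idea only (not the actual implementation), a data file’s lines can be split into fixed-size chunks and handed to a pool of worker threads, each of which would issue one LOAD DATA call:

```python
from concurrent.futures import ThreadPoolExecutor

def load_chunk(lines):
    """Stand-in for one LOAD DATA LOCAL INFILE call; here we just count rows."""
    return len(lines)

def parallel_import(lines, threads=4, chunk_size=2):
    # Split the file's lines into fixed-size chunks, one unit of work each,
    # then process the chunks concurrently and total the imported rows.
    chunks = [lines[i:i + chunk_size] for i in range(0, len(lines), chunk_size)]
    with ThreadPoolExecutor(max_workers=threads) as pool:
        return sum(pool.map(load_chunk, chunks))

rows = ["1\tsakthi", "2\tram", "3\tjc", "4\tvinodh", "5\tlalit"]
print(parallel_import(rows, threads=2))  # 5
```

In the real utility, the chunk size and thread count are controlled with the bytesPerChunk and threads options.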
How to Export the Table Into a File Using util.exportTable()
For testing, I have installed the latest Percona Server for MySQL 8.0.22 and created the table “percona.herc7”. The size of the table is around 2.3G.
mysql> select @@version, @@version_comment\G
*************************** 1. row ***************************
        @@version: 8.0.22-13
@@version_comment: Percona Server (GPL), Release '13', Revision '6f7822f'
1 row in set (0.00 sec)

[root@mysqlshelldump8 percona]# ls -lrth | grep -i herc7
-rw-r-----. 1 mysql mysql 2.5G Nov 18 21:25 herc7.ibd
In this section, I am going to address three topics:
- Default output file structure using util.exportTable()
- Modified output file structure using util.exportTable()
- Compressed output using util.exportTable()
Default Output File Structure Using util.exportTable()
I am going to export the table “percona.herc7” with the default option so that the utility will create the output file with default structure.
MySQL localhost JS > util.exportTable("percona.herc7", "file:///root/herc7.txt")
Preparing data dump for table `percona`.`herc7`
Data dump for table `percona`.`herc7` will use column `id` as an index
Running data dump using 1 thread.
NOTE: Progress information uses estimated values and may not be accurate.
Data dump for table `percona`.`herc7` will be written to 1 file
17% (8.25M rows / ~46.39M rows), 905.70K rows/s, 45.74 MB/s
29% (13.64M rows / ~46.39M rows), 924.65K rows/s, 47.23 MB/s
100% (46.54M rows / ~46.39M rows), 869.14K rows/s, 44.76 MB/s
Duration: 00:00:52s
Data size: 2.39 GB
Rows written: 46535194
Bytes written: 2.39 GB
Average throughput: 45.18 MB/s

The dump can be loaded using:
util.importTable("file:///root/herc7.txt", {
    "characterSet": "utf8mb4",
    "schema": "percona",
    "table": "herc7"
})
The table export is completed and the output seems pretty good. During the execution, the utility prints the progress, data size, execution time, rows written, throughput, etc.
The contents of the data file are tab-separated:
[root@mysqlshelldump8 ~]# cd /root/mysqlshellexport8/
[root@mysqlshelldump8 mysqlshellexport8]# ls -lrth
total 2.3G
-rw-r-----. 1 root root 2.3G Nov 18 22:42 herc7.txt
[root@mysqlshelldump8 mysqlshellexport8]# cat herc7.txt | tail -n2
47399358	sakthi	2020-11-18 21:21:28	Everything is MySQL
47399359	ram	2020-11-18 21:21:28	I love Python
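This default dialect maps directly onto Python’s tab-delimited csv handling. As a hedged sanity check (reusing the two rows shown above), the file parses cleanly with csv.reader:

```python
import csv
import io

# Two rows shaped like the tail of herc7.txt above: tab-separated fields,
# newline-terminated lines.
sample = ("47399358\tsakthi\t2020-11-18 21:21:28\tEverything is MySQL\n"
          "47399359\tram\t2020-11-18 21:21:28\tI love Python\n")

reader = csv.reader(io.StringIO(sample), delimiter="\t")
rows = list(reader)
print(rows[0][1], "/", rows[1][3])  # sakthi / I love Python
```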
Query from processlist:
| 43 | root | localhost | NULL | Query | 44 | Sending to client | SELECT SQL_NO_CACHE `id`,`name`,`sent_time`,`message` FROM `percona`.`herc7` ORDER BY `id` /* mysqlsh exportTable, dumping table `percona`.`herc7`, chunk ID: 1 */ | 39142633 | 0 | |
Note: Make sure to create the needed directories before executing the command. Otherwise, the command will fail.
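For example, creating the target directory (the path here is illustrative) up front avoids that failure:

```shell
# Create the export directory, including parents, if it does not already exist.
mkdir -p /tmp/mysqlshellexport8
ls -d /tmp/mysqlshellexport8
```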
Modified Output File Structure Using util.exportTable()
In this section, I am going to play with the options and create the output as a comma-separated CSV file. I will modify the following options to produce it.
- fieldsOptionallyEnclosed: true
- fieldsTerminatedBy: ","
- linesTerminatedBy: "\n"
- fieldsEnclosedBy: '"'
- defaultCharacterSet: "utf8"
- showProgress: true
- dialect: "csv"
MySQL localhost JS > util.exportTable("percona.herc7", "file:///root/mysqlshellexport8/herc7.csv", {dialect: "csv", fieldsTerminatedBy: ",", fieldsOptionallyEnclosed: true, fieldsEnclosedBy: '"', linesTerminatedBy: "\n", defaultCharacterSet: "utf8", showProgress: true})
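Those options correspond closely to Python’s csv defaults, which is a convenient way to preview the file shape they produce. This is a hedged sketch with invented sample rows, not output from the utility; QUOTE_MINIMAL mirrors "optionally enclosed", quoting only fields that need it:

```python
import csv
import io

buf = io.StringIO()
# Comma-terminated fields, optionally enclosed by double quotes,
# lines terminated by "\n" -- matching the exportTable() options above.
writer = csv.writer(buf, delimiter=",", quotechar='"',
                    quoting=csv.QUOTE_MINIMAL, lineterminator="\n")
writer.writerow([1, "sakthi", "Everything, is MySQL"])  # embedded comma forces quoting
writer.writerow([2, "ram", "I love Python"])            # no quoting needed
print(buf.getvalue())
```

Only the row containing a comma gets its field enclosed in double quotes, which is exactly what fieldsOptionallyEnclosed: true means.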