ndb_import - Import CSV data into NDB
ndb_import imports CSV-formatted data, such as that produced by mysqldump --tab, directly into NDB using the NDB API. ndb_import requires a connection to an NDB management server (ndb_mgmd) to function; it does not require a connection to a MySQL Server.

Usage

ndb_import db_name file_name options
ndb_import requires two arguments. db_name is the name of the database where the table into which to import the data is found; file_name is the name of the CSV file from which to read the data; this must include the path to this file if it is not in the current directory. The name of the file must match that of the table; the file's extension, if any, is not taken into consideration. Options supported by ndb_import include those for specifying field separators, escapes, and line terminators, and are described later in this section.
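The file-name-to-table-name rule can be sketched as follows. This is an illustrative model of the documented behavior, not code from the tool itself; ndb_import performs the equivalent internally:

```python
import os

def target_table(csv_path: str) -> str:
    """Derive the target table name the way ndb_import is documented to:
    take the file's base name and drop any extension."""
    base = os.path.basename(csv_path)    # strip any leading path
    name, _ext = os.path.splitext(base)  # the extension, if any, is ignored
    return name

# e.g. "/tmp/myndb_table.csv" names the target table myndb_table
```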
Prior to NDB 8.0.30, ndb_import rejects any empty lines which it reads from the CSV file. Beginning with NDB 8.0.30, when importing into a table having a single column, an empty line is accepted whenever an empty value can be used as the column value; in such cases, ndb_import handles the empty line in the same manner as a LOAD DATA statement does.
ndb_import must be able to connect to an NDB Cluster management server; for this reason, there must be an unused [api] slot in the cluster config.ini file.
To duplicate an existing table that uses a different storage engine, such as InnoDB, as an NDB table, use the mysql client to perform a SELECT INTO OUTFILE statement to export the existing table to a CSV file, then execute a CREATE TABLE LIKE statement to create a new table having the same structure as the existing table, followed by ALTER TABLE ... ENGINE=NDB on the new table. After this, from the system shell, invoke ndb_import to load the data into the new NDB table. For example, an existing InnoDB table named myinnodb_table in a database named myinnodb can be exported into an NDB table named myndb_table in a database named myndb as shown here, assuming that you are already logged in as a MySQL user with the appropriate privileges:
1. In the mysql client:
mysql> SELECT * INTO OUTFILE '/tmp/myndb_table.csv'
    -> FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' ESCAPED BY '\\'
    -> LINES TERMINATED BY '\n'
    -> FROM myinnodb.myinnodb_table;
mysql> CREATE DATABASE myndb;
mysql> USE myndb;
mysql> CREATE TABLE myndb_table LIKE myinnodb.myinnodb_table;
mysql> ALTER TABLE myndb_table ENGINE=NDB;
Once the target database and table have been created, a running mysqld is no longer required. You can stop it using mysqladmin shutdown or another method before proceeding, if you wish.
2. In the system shell:
# if you are not already in the MySQL bin directory:
$> cd path-to-mysql-bin-dir
$> ndb_import myndb /tmp/myndb_table.csv --fields-optionally-enclosed-by='"' \
       --fields-terminated-by="," --fields-escaped-by='\\' \
       --lines-terminated-by="\n"
The output should resemble what is shown here:
import myndb.myndb_table from /tmp/myndb_table.csv
job−1 [running] import myndb.myndb_table from /tmp/myndb_table.csv
job−1 [success] import myndb.myndb_table from /tmp/myndb_table.csv
job−1 imported 19984 rows in 0h0m9s at 2277 rows/s
jobs summary: defined: 1 run: 1 with success: 1 with failure: 0
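If you produce the CSV file with a program rather than with SELECT ... INTO OUTFILE, its field and line formatting must match the options you pass to ndb_import. As a sketch, Python's standard csv module can be configured to match the format used in this example (comma-separated fields, optionally enclosed by double quotes, backslash escapes, \n line endings); the rows shown are illustrative:

```python
import csv

# Illustrative rows; the file name matches the target table, myndb_table.
rows = [(1, "plain value"), (2, 'needs "quoting", has a comma')]

with open("/tmp/myndb_table.csv", "w", newline="") as f:
    writer = csv.writer(
        f,
        delimiter=",",             # FIELDS TERMINATED BY ','
        quotechar='"',             # OPTIONALLY ENCLOSED BY '"'
        escapechar="\\",           # ESCAPED BY '\\'
        doublequote=False,         # escape quotes as \" rather than ""
        quoting=csv.QUOTE_MINIMAL, # quote only fields that need it
        lineterminator="\n",       # LINES TERMINATED BY '\n'
    )
    writer.writerows(rows)
```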
All options that can be used with ndb_import are shown in the following table. Additional descriptions follow the table.
Table 23.35. Command−line options used with the program ndb_import
--abort-on-error: Dump core on any fatal error; used for debugging only.
--ai-increment: For a table with a hidden primary key, specify the autoincrement increment, like the auto_increment_increment system variable does in the MySQL Server.
--ai-offset: For a table with a hidden primary key, specify the autoincrement offset, like the auto_increment_offset system variable.
--ai-prefetch-sz: For a table with a hidden primary key, specify the number of autoincrement values that are prefetched, like the ndb_autoincrement_prefetch_sz system variable does in the MySQL Server.
--character-sets-dir: Directory containing character sets.
--connections: Number of cluster connections to create.
--connect-retries: Number of times to retry connection before giving up.
--connect-retry-delay: Number of seconds to wait between attempts to contact the management server.
--connect-string: Same as --ndb-connectstring.
--continue: When a job fails, continue to the next job.
--core-file: Write core file on error; used in debugging.
--csvopt: Provides a shortcut method for setting typical CSV import options. The argument to this option is a string consisting of one or more of the following parameters:
• c: Fields terminated by comma
• d: Use defaults, except where overridden by another parameter
• n: Lines terminated by \n
• q: Fields optionally enclosed by double quote characters (")
• r: Line terminated by \r
In NDB 8.0.28 and later, the order of parameters used in the argument to this option is handled such that the rightmost parameter always takes precedence over any potentially conflicting parameters which have already been used in the same argument value. This also applies to any duplicate instances of a given parameter. Prior to NDB 8.0.28, the order of the parameters made no difference, other than that, when both n and r were specified, the one occurring last (rightmost) was the parameter which actually took effect.
This option is intended for use in testing under conditions in which it is difficult to transmit escapes or quotation marks.
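The precedence rule can be modeled as a left-to-right scan in which each parameter overwrites earlier ones, so the rightmost occurrence wins (the NDB 8.0.28 and later behavior). The default values and the reset behavior of d shown here are assumptions for illustration only:

```python
# Assumed defaults for the settings that csvopt parameters can affect.
DEFAULTS = {
    "fields-terminated-by": "\t",
    "fields-optionally-enclosed-by": None,
    "lines-terminated-by": "\n",
}

def expand_csvopt(arg: str) -> dict:
    """Model csvopt expansion: scan left to right, each parameter
    overriding earlier ones, so the rightmost conflicting one wins."""
    opts = dict(DEFAULTS)
    for p in arg:
        if p == "c":
            opts["fields-terminated-by"] = ","
        elif p == "d":
            opts = dict(DEFAULTS)            # back to defaults (assumed)
        elif p == "n":
            opts["lines-terminated-by"] = "\n"
        elif p == "q":
            opts["fields-optionally-enclosed-by"] = '"'
        elif p == "r":
            opts["lines-terminated-by"] = "\r"
        else:
            raise ValueError(f"unknown csvopt parameter: {p!r}")
    return opts

# "cqn": comma-separated, optionally double-quoted, \n-terminated lines.
# "nr" ends with \r, while "rn" ends with \n: the rightmost wins.
```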
--db-workers: Number of threads, per data node, executing database operations.
--defaults-file: Read default options from the given file only.
--defaults-extra-file: Read the given file after global files are read.
--defaults-group-suffix: Also read groups with concat(group, suffix).
--errins-type: Error insert type; use list as the name value to obtain all possible values. This option is used for testing purposes only.
--errins-delay: Error insert delay in milliseconds; random variation is added. This option is used for testing purposes only.
--fields-enclosed-by: This works in the same way as the FIELDS ENCLOSED BY option does for the LOAD DATA statement, specifying a character to be interpreted as quoting field values. For CSV input, this is the same as --fields-optionally-enclosed-by.
--fields-escaped-by: Specify an escape character in the same way as the FIELDS ESCAPED BY option does for the SQL LOAD DATA statement.
--fields-optionally-enclosed-by: This works in the same way as the FIELDS OPTIONALLY ENCLOSED BY option does for the LOAD DATA statement, specifying a character to be interpreted as optionally quoting field values. For CSV input, this is the same as --fields-enclosed-by.
--fields-terminated-by: This works in the same way as the FIELDS TERMINATED BY option does for the LOAD DATA statement, specifying a character to be interpreted as the field separator.
--help, -?: Display help text and exit.
--idlesleep: Number of milliseconds to sleep waiting for more work to perform.
--idlespin: Number of times to retry before sleeping.
--ignore-lines: Cause ndb_import to ignore the first # lines of the input file. This can be employed to skip a file header that does not contain any data.
--input-type: Set the input type. The default is csv; random is intended for testing purposes only.
--input-workers: Set the number of threads processing input.
--keep-state: By default, ndb_import removes all state files (except non-empty *.rej files) when it completes a job. Specify this option (no argument is required) to force the program to retain all state files instead.
--lines-terminated-by: This works in the same way as the LINES TERMINATED BY option does for the LOAD DATA statement, specifying a character to be interpreted as end-of-line.
--log-level: Performs internal logging at the given level. This option is intended primarily for internal and development use. In debug builds of NDB only, the logging level can be set using this option to a maximum of 4.
--login-path: Read the given path from the login file.
--max-rows: Import only this number of input data rows; the default is 0, which imports all rows.
--missing-ai-column: This option can be employed when importing a single table or multiple tables. When used, it indicates that the CSV file being imported does not contain any values for an AUTO_INCREMENT column, and that ndb_import should supply them; if the option is used and the AUTO_INCREMENT column contains any values, the import operation cannot proceed.
--monitor: Periodically print the status of a running job if something has changed (status, rejected rows, temporary errors). Set to 0 to disable this reporting. Setting it to 1 prints any change that is seen; higher values reduce the frequency of this status reporting.
--ndb-connectstring: Set the connect string for connecting to ndb_mgmd. Syntax: "[nodeid=id;][host=]hostname[:port]". Overrides entries in NDB_CONNECTSTRING and my.cnf.
--ndb-mgmd-host: Same as --ndb-connectstring.
--ndb-nodeid: Set the node ID for this node, overriding any ID set by --ndb-connectstring.
--ndb-optimized-node-selection: Enable optimizations for selection of nodes for transactions. Enabled by default; use --skip-ndb-optimized-node-selection to disable.
--no-asynch: Run database operations as batches, in single transactions.
--no-defaults: Do not read default options from any option file other than the login file.
--no-hint: Do not use distribution key hinting to select a data node.
--opbatch: Set a limit on the number of operations (including blob operations), and thus the number of asynchronous transactions, per execution batch.
--opbytes: Set a limit on the number of bytes per execution batch. Use 0 for no limit.
--output-type: Set the output type. ndb is the default; null is used only for testing.
--output-workers: Set the number of threads processing output or relaying database operations.
--pagesize: Align I/O buffers to the given size.
--pagecnt: Set the size of I/O buffers as a multiple of the page size. The CSV input worker allocates a buffer of twice this size.
--polltimeout: Set a timeout per poll for completed asynchronous transactions; polling continues until all polls are completed, or until an error occurs.
--print-defaults: Print the program argument list and exit.
--rejects: Limit the number of rejected rows (rows with permanent errors) in the data load. The default is 0, which means that any rejected row causes a fatal error. Any rows causing the limit to be exceeded are added to the .rej file. The limit imposed by this option is effective for the duration of the current run; a run restarted using --resume is considered a “new” run for this purpose.
--resume: If a job is aborted (due to a temporary database error, or when interrupted by the user), resume with any rows not yet processed.
--rowbatch: Set a limit on the number of rows per row queue. Use 0 for no limit.
--rowbytes: Set a limit on the number of bytes per row queue. Use 0 for no limit.
--stats: Save information about options related to performance and other internal statistics in files named *.sto and *.stt. These files are always kept on successful completion (even if --keep-state is not also specified).
--state-dir: Where to write the state files (tbl_name.map, tbl_name.rej, tbl_name.res, and tbl_name.stt) produced by a run of the program; the default is the current directory.
--table, -t: By default, ndb_import attempts to import data into a table whose name is the base name of the CSV file from which the data is being read. Beginning with NDB 8.0.28, you can override the choice of table name by specifying it using the --table option (short form -t).
--tempdelay: Number of milliseconds to sleep between temporary errors.
--temperrors: Number of times a transaction can fail due to a temporary error, per execution batch. The default is 0, which means that any temporary error is fatal. Temporary errors do not cause any rows to be added to the .rej file.
--verbose, -v: Enable verbose output.
--usage: Display help text and exit; same as --help.
--version, -V: Display version information and exit.
As with LOAD DATA, options for field and line formatting must match those used to create the CSV file, whether this was done using SELECT ... INTO OUTFILE or by some other means. There is no equivalent to the LOAD DATA statement's STARTING WITH option.
Copyright © 1997, 2023, Oracle and/or its affiliates.
This documentation is free software; you can redistribute it and/or modify it only under the terms of the GNU General Public License as published by the Free Software Foundation; version 2 of the License.
This documentation is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with the program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA or see http://www.gnu.org/licenses/.
For more information, please refer to the MySQL Reference Manual, which may already be installed locally and which is also available online at http://dev.mysql.com/doc/.
Oracle Corporation (http://dev.mysql.com/).