"Fossies" - the Fresh Open Source Software Archive

Member "refman-8.0-en.man-gpl/ndb_import.1" (20 Oct 2021, 43445 Bytes) of package /linux/misc/mysql-refman/mysql-refman-8.0-en.man-gpl.tar.gz:


NDB_IMPORT

NAME
SYNOPSIS
DESCRIPTION
COPYRIGHT
SEE ALSO
AUTHOR

NAME

ndb_import − Import CSV data into NDB

SYNOPSIS

ndb_import options

DESCRIPTION

ndb_import imports CSV−formatted data, such as that produced by mysqldump −−tab, directly into NDB using the NDB API. ndb_import requires a connection to an NDB management server (ndb_mgmd) to function; it does not require a connection to a MySQL Server.

Usage

ndb_import db_name file_name options

ndb_import requires two arguments: db_name is the name of the database containing the table into which to import the data, and file_name is the name of the CSV file from which to read the data; this must include the path to the file if it is not in the current directory. The name of the file must match that of the table; the file's extension, if any, is not taken into consideration. Options supported by ndb_import include those for specifying field separators, escapes, and line terminators; these are described later in this section. ndb_import must be able to connect to an NDB Cluster management server; for this reason, there must be an unused [api] slot in the cluster's config.ini file.
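
The file-name rule can be sketched as follows; the database name mydb and the file path are hypothetical, and the ndb_import line itself is commented out because it requires a running ndb_mgmd and an unused [api] slot:

```shell
# The basename of the CSV file, minus any extension, selects the target table,
# so this hypothetical invocation loads into table t1 of database mydb:
#   ndb_import mydb /var/tmp/t1.csv --fields-terminated-by=','
basename /var/tmp/t1.csv .csv    # prints the table name derived from the file name
```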

To duplicate an existing table that uses a different storage engine, such as InnoDB, as an NDB table, use the mysql client to perform a SELECT INTO OUTFILE statement to export the existing table to a CSV file, then execute a CREATE TABLE LIKE statement to create a new table having the same structure as the existing table, and finally perform ALTER TABLE ... ENGINE=NDB on the new table; after this, from the system shell, invoke ndb_import to load the data into the new NDB table. For example, an existing InnoDB table named myinnodb_table in a database named myinnodb can be exported into an NDB table named myndb_table in a database named myndb as shown here, assuming that you are already logged in as a MySQL user with the appropriate privileges:

1. In the mysql client:

mysql> USE myinnodb;
mysql> SELECT * INTO OUTFILE '/tmp/myndb_table.csv'
> FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' ESCAPED BY '\\'
> LINES TERMINATED BY '\n'
> FROM myinnodb_table;
mysql> CREATE DATABASE myndb;
mysql> USE myndb;
mysql> CREATE TABLE myndb_table LIKE myinnodb.myinnodb_table;
mysql> ALTER TABLE myndb_table ENGINE=NDB;
mysql> EXIT;
Bye
shell>

Once the target database and table have been created, a running mysqld is no longer required. You can stop it using mysqladmin shutdown or another method before proceeding, if you wish.

2. In the system shell:

# if you are not already in the MySQL bin directory:
shell> cd path-to-mysql-bin-dir
shell> ndb_import myndb /tmp/myndb_table.csv --fields-optionally-enclosed-by='"' \
    --fields-terminated-by="," --fields-escaped-by='\\'

The output should resemble what is shown here:

job-1 import myndb.myndb_table from /tmp/myndb_table.csv
job-1 [running] import myndb.myndb_table from /tmp/myndb_table.csv
job-1 [success] import myndb.myndb_table from /tmp/myndb_table.csv
job-1 imported 19984 rows in 0h0m9s at 2277 rows/s
jobs summary: defined: 1 run: 1 with success: 1 with failure: 0
shell>

All options that can be used with ndb_import are shown in the following table. Additional descriptions follow the table.

Table 23.35. Command−line options used with the program ndb_import

−−abort−on−error

Dump core on any fatal error; used for debugging only.

−−ai−increment=#

For a table with a hidden primary key, specify the autoincrement increment. Behaves like the auto_increment_increment system variable does in the MySQL Server.

−−ai−offset=#

For a table with a hidden primary key, specify the autoincrement offset. Behaves like the auto_increment_offset system variable does in the MySQL Server.

−−ai−prefetch−sz=#

For a table with a hidden primary key, specify the number of autoincrement values that are prefetched. Behaves like the ndb_autoincrement_prefetch_sz system variable does in the MySQL Server.

−−connections=#

Number of cluster connections to create.

−−continue

When a job fails, continue to the next job.

−−db−workers=#

Number of threads, per data node, executing database operations.

−−errins−type=name

Error insert type; use list as the name value to obtain all possible values. This option is used for testing purposes only.

−−errins−delay=#

Error insert delay in milliseconds; random variation is added. This option is used for testing purposes only.

−−fields−enclosed−by=char

This works in the same way as the FIELDS ENCLOSED BY option does for the LOAD DATA statement, specifying a character to be interpreted as quoting field values. For CSV input, this is the same as −−fields−optionally−enclosed−by.

−−fields−escaped−by=name

Specify an escape character in the same way as the FIELDS ESCAPED BY option does for the SQL LOAD DATA statement.

−−fields−optionally−enclosed−by=char

This works in the same way as the FIELDS OPTIONALLY ENCLOSED BY option does for the LOAD DATA statement, specifying a character to be interpreted as optionally quoting field values. For CSV input, this is the same as −−fields−enclosed−by.

−−fields−terminated−by=char

This works in the same way as the FIELDS TERMINATED BY option does for the LOAD DATA statement, specifying a character to be interpreted as the field separator.

−−idlesleep=#

Number of milliseconds to sleep waiting for more work to perform.

−−idlespin=#

Number of times to retry before sleeping.

−−ignore−lines=#

Cause ndb_import to ignore the first # lines of the input file. This can be employed to skip a file header that does not contain any data.
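
For instance, a dump file whose first line is a column header can be skipped at import time. In this sketch the database name mydb is hypothetical, and the ndb_import line is commented out because it requires a running cluster:

```shell
# Build a small tab-separated CSV whose first line is a header rather than data:
printf 'id\tname\n1\talpha\n2\tbeta\n' > /tmp/t1.csv
# Skip that header when importing; without --ignore-lines=1 the header line
# would be treated as a data row:
#   ndb_import mydb /tmp/t1.csv --ignore-lines=1
tail -n +2 /tmp/t1.csv    # the rows that would actually be loaded
```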

−−input−type=name

Set the input type. The default is csv; random is intended for testing purposes only.

−−input−workers=#

Set the number of threads processing input.

−−keep−state

By default, ndb_import removes all state files (except non−empty *.rej files) when it completes a job. Specify this option (no argument is required) to force the program to retain all state files instead.

−−lines−terminated−by=name

This works in the same way as the LINES TERMINATED BY option does for the LOAD DATA statement, specifying a character to be interpreted as end−of−line.

−−log−level=#

Performs internal logging at the given level. This option is intended primarily for internal and development use.

In debug builds of NDB only, the logging level can be set using this option to a maximum of 4.

−−max−rows=#

Import only this number of input data rows; the default is 0, which imports all rows.

−−monitor=#

Periodically print the status of a running job if something has changed (status, rejected rows, temporary errors). Set to 0 to disable this reporting. Setting to 1 prints any change that is seen. Higher values reduce the frequency of this status reporting.

−−no−asynch

Run database operations as batches, in single transactions.

−−no−hint

Do not use distribution key hinting to select a data node.

−−opbatch=#

Set a limit on the number of operations (including blob operations), and thus the number of asynchronous transactions, per execution batch.

−−opbytes=#

Set a limit on the number of bytes per execution batch. Use 0 for no limit.

−−output−type=name

Set the output type. ndb is the default. null is used only for testing.

−−output−workers=#

Set the number of threads processing output or relaying database operations.

−−pagesize=#

Align I/O buffers to the given size.

−−pagecnt=#

Set the size of I/O buffers as a multiple of the page size. The CSV input worker allocates a buffer of twice this size.

−−polltimeout=#

Set a timeout per poll for completed asynchronous transactions; polling continues until all polls are completed, or until an error occurs.

−−rejects=#

Limit the number of rejected rows (rows with permanent errors) in the data load. The default is 0, which means that any rejected row causes a fatal error. Any rows causing the limit to be exceeded are added to the .rej file.

The limit imposed by this option is effective for the duration of the current run. A run restarted using −−resume is considered a “new” run for this purpose.
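
A sketch of how this option and the .rej file interact; the table name t1 and file paths are hypothetical, and the import command itself is commented out because it requires a running cluster:

```shell
# Tolerate up to 100 permanently failing rows instead of aborting on the first one:
#   ndb_import mydb /tmp/t1.csv --rejects=100
# Rows with permanent errors land in t1.rej in the state directory;
# simulate one such rejected row here for illustration:
printf 'offending,row,1\n' > /tmp/t1.rej
grep -c '' /tmp/t1.rej    # count of rejected rows so far
```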

−−resume

If a job is aborted (due to a temporary db error or when interrupted by the user), resume with any rows not yet processed.

−−rowbatch=#

Set a limit on the number of rows per row queue. Use 0 for no limit.

−−rowbytes=#

Set a limit on the number of bytes per row queue. Use 0 for no limit.

−−stats

Save information about options related to performance and other internal statistics in files named *.sto and *.stt. These files are always kept on successful completion (even if −−keep−state is not also specified).

−−state−dir=name

Where to write the state files (tbl_name.map, tbl_name.rej, tbl_name.res, and tbl_name.stt) produced by a run of the program; the default is the current directory.

−−tempdelay=#

Number of milliseconds to sleep between temporary errors.

−−temperrors=#

Number of times a transaction can fail due to a temporary error, per execution batch. The default is 0, which means that any temporary error is fatal. Temporary errors do not cause any rows to be added to the .rej file.
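
This option is typically paired with −−tempdelay. A hypothetical combination (database, file, and values chosen for illustration; shown commented out because it requires a running cluster) might look like this:

```shell
# Retry each execution batch up to 5 times on temporary errors,
# sleeping 500 ms between attempts:
#   ndb_import mydb /tmp/t1.csv --temperrors=5 --tempdelay=500
```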

−−verbose, −v

Enable verbose output.

−−print−defaults

Print program argument list and exit.

−−no−defaults

Do not read default options from any option file other than the login file.

−−defaults−file

Read default options from given file only.

−−defaults−extra−file

Read given file after global files are read.

−−defaults−group−suffix

Also read groups with concat(group, suffix).

−−login−path

Read given path from login file.

−−help

Display help text and exit.

−−usage

Display help text and exit; same as −−help.

−−version

Display version information and exit.

−−ndb−connectstring

Set connect string for connecting to ndb_mgmd. Syntax: "[nodeid=id;][host=]hostname[:port]". Overrides entries in NDB_CONNECTSTRING and my.cnf.
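
For example, the connect string can come from the environment or from the option, with the option taking precedence; the host name mgmhost and node ID used here are hypothetical, and the import line is commented out because it needs a reachable ndb_mgmd:

```shell
# Connect string via the environment variable:
export NDB_CONNECTSTRING="nodeid=10;mgmhost:1186"
echo "$NDB_CONNECTSTRING"
# The explicit option overrides the variable set above:
#   ndb_import mydb /tmp/t1.csv --ndb-connectstring="mgmhost:1186"
```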

−−connect−string

Same as −−ndb−connectstring.

−−ndb−mgmd−host

Same as −−ndb−connectstring.

−−ndb−nodeid

Set node ID for this node, overriding any ID set by −−ndb−connectstring.

−−core−file

Write core file on error; used in debugging.

−−character−sets−dir

Directory containing character sets.

−−connect−retries

Number of times to retry connection before giving up.

−−connect−retry−delay

Number of seconds to wait between attempts to contact management server.

−−ndb−optimized−node−selection

Enable optimizations for selection of nodes for transactions. Enabled by default; use −−skip−ndb−optimized−node−selection to disable.

As with LOAD DATA, options for field and line formatting must match those used to create the CSV file, whether this was done using SELECT INTO ... OUTFILE or by some other means. There is no ndb_import equivalent to the LOAD DATA statement's STARTING WITH option.
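
As a quick reference, the SELECT ... INTO OUTFILE clauses from the example earlier in this section correspond to ndb_import options as follows:

```shell
# SELECT ... INTO OUTFILE clause          matching ndb_import option
# FIELDS TERMINATED BY ','            ->  --fields-terminated-by=','
# FIELDS OPTIONALLY ENCLOSED BY '"'   ->  --fields-optionally-enclosed-by='"'
# FIELDS ESCAPED BY '\\'              ->  --fields-escaped-by='\\'
# LINES TERMINATED BY '\n'            ->  --lines-terminated-by='\n'
```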

COPYRIGHT

Copyright © 1997, 2021, Oracle and/or its affiliates.

This documentation is free software; you can redistribute it and/or modify it only under the terms of the GNU General Public License as published by the Free Software Foundation; version 2 of the License.

This documentation is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with the program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA or see http://www.gnu.org/licenses/.

SEE ALSO

For more information, please refer to the MySQL Reference Manual, which may already be installed locally and which is also available online at http://dev.mysql.com/doc/.

AUTHOR

Oracle Corporation (http://dev.mysql.com/).