I dumped data from a database using the MySQL Admin Migration toolkit. One of the tables has over 5 million rows. When I try to load the generated data file I get out-of-memory errors.
Is there a way to force a commit after every X rows? The script generated by the migration toolkit is shown below:
INSERT INTO mytable (`col1`, `col2`)
VALUES (823, 187.25),
(823, 187.25),
(823, 187.25),
(823, 187.25),
(823, 187.25),
(823, 187.25),
(823, 187.25),
(823, 187.25),
(823, 187.25),
(823, 187.25),
(823, 187.25),
(823, 187.25),
(823, 187.25),
(823, 187.25),
(823, 187.25),
Yes. This big INSERT INTO statement is essentially a script. Take an editor to it; every few hundred lines, insert a line that says COMMIT, and then start a new INSERT…VALUES statement.
For extra credit, you can write a simple script that rewrites the original SQL statement into several transactions programmatically, as described above. You don’t want to be doing this by hand repeatedly.
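For example, here is a minimal sketch of such a script in Python. It assumes the dump looks like the snippet above (a single INSERT header line followed by one value tuple per line); the file names and batch size are placeholders you would adjust:

# Split one huge INSERT ... VALUES statement into batches of
# BATCH_SIZE rows, each followed by an explicit COMMIT.
BATCH_SIZE = 500

def flush(out, header, batch):
    # Write one INSERT per batch and commit it right away.
    if not batch:
        return
    out.write(header + "\n")
    out.write("VALUES " + ",\n".join(batch) + ";\n")
    out.write("COMMIT;\n\n")
    batch.clear()

with open("dump.sql") as src, open("dump_batched.sql", "w") as out:
    header = None
    batch = []
    for line in src:
        line = line.strip().rstrip(",;")
        if not line:
            continue
        if line.upper().startswith("INSERT INTO"):
            header = line                        # keep the column list
        elif line.upper().startswith("VALUES"):
            rest = line[len("VALUES"):].strip()  # first tuple may share the VALUES line
            if rest:
                batch.append(rest)
        else:
            batch.append(line)                   # a "(..., ...)" value tuple
        if len(batch) >= BATCH_SIZE:
            flush(out, header, batch)
    flush(out, header, batch)

If the session you load this into has autocommit disabled, the explicit COMMIT; lines are what actually persist each batch; with autocommit on they are harmless.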
There’s no server setting you can turn on that lets you submit this script in its entirety and get automatic commits at regular intervals; you have to change the script.
Increasing the log file space may raise the limit, but I wouldn’t count on it. Face it: the database simply wasn’t built to process statements that run to thousands of lines.
What I’d probably do is still commit in batches, but load everything into a staging table first (or mark the rows with a “temporary” flag), and then run one or a small number of statements to make the changes permanent. If anything goes wrong along the way, you can retry or simply delete the staged records.
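A rough sketch of that staging-table idea, assuming the mysql-connector-python driver, placeholder connection details, and a hypothetical staging table name mytable_staging:

import mysql.connector

# Placeholder credentials; adjust for your server.
conn = mysql.connector.connect(host="localhost", user="me",
                               password="secret", database="mydb")
cur = conn.cursor()

# Staging table with the same structure as the real table.
cur.execute("CREATE TABLE IF NOT EXISTS mytable_staging LIKE mytable")
conn.commit()

# ... run the batched INSERT/COMMIT script against mytable_staging here ...

# One final statement makes the data permanent; if an earlier batch
# failed, just TRUNCATE mytable_staging and start over instead.
cur.execute("INSERT INTO mytable SELECT * FROM mytable_staging")
conn.commit()

cur.execute("DROP TABLE mytable_staging")
conn.commit()
cur.close()
conn.close()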