Article from: https://segmentfault.com/q/1010000008938953
Question:

I'm using INSERT INTO inside a for loop that runs 10 million times, but my laptop gets unbearably laggy. Can any of you gurus suggest a better way?

Answer 0:

Use a stored procedure; see the reference.
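A minimal sketch of such a procedure, assuming a target table t(val INT); the procedure name seed_rows is made up for illustration:

DELIMITER //
CREATE PROCEDURE seed_rows(IN n INT)
BEGIN
  DECLARE i INT DEFAULT 0;
  WHILE i < n DO
    INSERT INTO t (val) VALUES (i);  -- one row per loop pass
    SET i = i + 1;
  END WHILE;
END //
DELIMITER ;

CALL seed_rows(10000000);  -- runs entirely server-side, no client round trips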

Answer 1:

Use a script to generate a data file, with fields separated by "\t" or ",".
Then import the file through MySQL's LOAD DATA INFILE.

Plain INSERT INTO is definitely not the way.
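A hedged sketch of the import, assuming a hypothetical file /tmp/rows.csv and target table t (the server's secure_file_priv setting must allow reading from that directory):

LOAD DATA INFILE '/tmp/rows.csv'
INTO TABLE t
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';

LOAD DATA pays the parsing and network cost once for the whole file, which is why it beats looping INSERTs for bulk loads.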

Answer 2:

In CLI mode, run PHP asynchronously, inserting roughly 100,000 rows per minute; ten million rows can then be finished in about 100 minutes.

Answer 3:

You can switch to a different storage engine: MyISAM inserts faster than InnoDB. In my earlier tests, inserting 1 million rows took about 15 minutes with InnoDB but only about 50 seconds with MyISAM!
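A hedged sketch of that approach, assuming a target table t and that you can live without transactions during the load:

ALTER TABLE t ENGINE = MyISAM;  -- cheaper insert path, no redo log or MVCC overhead
-- ... run the bulk insert here ...
ALTER TABLE t ENGINE = InnoDB;  -- convert back once loading is done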

Answer 4:

Use transactions: commit every 5,000 rows or so as one transaction. That is hundreds of times faster than letting every insert autocommit.
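A minimal sketch of the batching, with a hypothetical table t; the point is one COMMIT (one disk flush) per 5,000 rows instead of one per row:

SET autocommit = 0;

START TRANSACTION;
INSERT INTO t (val) VALUES (1);
-- ... 4,999 more INSERT statements ...
COMMIT;  -- a single flush covers the whole batch

START TRANSACTION;
-- ... the next batch of 5,000 ...
COMMIT;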

Answer 5:

This really comes down to using memory sensibly.
Ten million rows inserted into the table in one shot? Your whole server will simply freeze until you reboot it.

Put a limit on each query, say 10,000 rows, so the job splits into 1,000 query batches.
A single thread can work through those 1,000 batches steadily in a loop.
If the rows have no sequential dependency and you need speed, go concurrent or multithreaded: for example, split the work across 10 workers, each running just 100 batches in parallel.

Answer 6:

insert into tablename select * from tablename

Run it several times; the row count doubles on each pass.
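A hedged sketch of the doubling trick, assuming a table t seeded with one row and carrying no unique keys (with an AUTO_INCREMENT primary key you would list the non-key columns instead of using *):

INSERT INTO t SELECT * FROM t;  -- 1 row -> 2
INSERT INTO t SELECT * FROM t;  -- 2 rows -> 4
-- about 24 passes from a single seed row exceeds ten million rows (2^24 = 16.7M)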

Answer 7:

1. It depends on the content: simple alphanumeric data inserts faster.
2. It also helps if the table does not carry too many indexes (see the sketch below).
3. If MySQL sits on a solid-state drive, reads and writes will be quicker.
4. But the key factors are your memory and CPU speed.
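On point 2, a hedged sketch assuming a MyISAM table t: MySQL can defer maintenance of non-unique indexes during the load and rebuild them once at the end.

ALTER TABLE t DISABLE KEYS;  -- stop updating non-unique indexes per row
-- ... bulk insert ...
ALTER TABLE t ENABLE KEYS;   -- rebuild the indexes in one pass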

Answer 8:

Note: you should use a single INSERT statement carrying many VALUES tuples rather than many separate INSERT INTO statements.
The other approaches are not particularly good.
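A minimal sketch with a hypothetical table t(id, val); one statement carries many rows, so parsing and network round trips are paid once per batch:

INSERT INTO t (id, val) VALUES
  (1, 'a'),
  (2, 'b'),
  (3, 'c');  -- extend to a few thousand tuples, staying under max_allowed_packet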

Answer 9:

Ten million rows is still a lot; insert them in several separate passes, packing the values of each batch into one SQL statement to execute.

Answer 10:

Insert it in several batches. Or use a trigger.

Answer 11:

Build several tables, insert into them with multiple threads, then merge the tables.
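A hedged sketch of the merge step, with hypothetical staging tables t_part1 and t_part2 that the worker threads filled:

INSERT INTO t SELECT * FROM t_part1;
INSERT INTO t SELECT * FROM t_part2;
DROP TABLE t_part1, t_part2;  -- discard the staging tables afterwards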

Answer 12:

Run multiple processes; it is worth taking a look at PHP's pcntl_fork function.

Answer 13:

Original answer:
Those who suggest asynchronous processing and loops are, in my view, being unreasonable.
Even 100,000 rows, however much memory you have, still take a lot of time (I/O) to land in the database, and if transaction commits are handled badly the database will hog memory as well.

Generating the data directly inside the database is the fastest way.
Create an ID table (a single ID field) holding 100,000 records (0-100,000).
The MySQL approach:

insert into table_t
select
  i.id,
  concat('Name', i.id) name,
  concat('random 7-12 digit code: ', floor(7 + (rand() * 6))) rand,
  ifnull(a.nickname, 'no nickname') nickname,
  uuid() descript,  -- random string
  from_unixtime(unix_timestamp('20170101000000') + floor(rand() * 60 * 60 * 24 * 365))  -- random date within 2017
from table_id i
left join table_account a on a.id = floor(rand() * 12)  -- if the data draws on other tables
where i.id < 1000;  -- if only 1,000 rows are to be generated
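How table_id itself gets seeded is left out above; a hedged sketch using the self-doubling trick (table and column names follow the answer, the approach is one common option):

create table table_id (id int auto_increment primary key);
insert into table_id values (null);              -- seed one row
-- each pass doubles the row count; 17 passes yield 131,072 rows (> 100,000)
insert into table_id select null from table_id;
-- inserting NULL into the auto_increment key generates fresh sequential ids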

Answer 14:

Why not use a stored procedure? My feeling is that a stored procedure would do better.

Answer 15:

Definitely a stored procedure. Then run the script asynchronously, restarting it on each asynchronous callback and stopping once the target count is reached.
