I'm using INSERT INTO in a loop, 10 million times, but my laptop lags badly. Does anyone have a better way?
Use a stored procedure; see the reference.
Use a script to generate a data file, with fields separated by "\t" or ",".
Then import it through the file with MySQL's LOAD DATA INFILE!
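A minimal sketch of the file-generation step (file name, columns, and row count are all made up for illustration), with the corresponding LOAD DATA INFILE shown as a comment:

```python
import csv
import uuid

ROWS = 100_000  # illustrative; scale up as needed

# Write a tab-separated data file that LOAD DATA INFILE can bulk-load.
with open("bulk_data.tsv", "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    for i in range(ROWS):
        writer.writerow([i, f"name_{i}", uuid.uuid4().hex])

# Then, in MySQL (table and column names are assumptions):
#   LOAD DATA INFILE '/path/to/bulk_data.tsv'
#   INTO TABLE t
#   FIELDS TERMINATED BY '\t'
#   LINES TERMINATED BY '\n'
#   (id, name, descript);
```

LOAD DATA INFILE skips the per-statement parsing overhead, which is why it is usually far faster than looped INSERTs.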
Plain INSERT INTO is definitely out.
In CLI mode, PHP can insert asynchronously at 10,000 rows per minute, and the job finishes in 100 minutes.
You can switch to a different storage engine: MyISAM inserts faster than InnoDB. In my earlier tests, inserting 1 million rows took about 15 minutes with InnoDB and about 50 seconds with MyISAM!
Use transactions: committing every 5,000 rows as one transaction is hundreds of times faster than not using transactions at all.
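A minimal sketch of batched commits, using Python's built-in sqlite3 as a stand-in for MySQL (with a real MySQL driver the pattern is the same: disable autocommit and commit once per batch; table name and row counts are illustrative):

```python
import sqlite3

BATCH = 5_000
TOTAL = 50_000  # illustrative row count

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, name TEXT)")

cur = conn.cursor()
for start in range(0, TOTAL, BATCH):
    rows = [(i, f"name_{i}") for i in range(start, start + BATCH)]
    cur.executemany("INSERT INTO t VALUES (?, ?)", rows)
    conn.commit()  # one commit per 5,000-row batch, not per row

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 50000
```

Committing per batch amortizes the flush-to-disk cost over thousands of rows instead of paying it on every insert.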
This comes down to memory tricks.
Insert 1,000,000 rows into the table in one shot?
Your whole server would just lock up, and you'd have to reboot it.
Set a limit for each query, say ten thousand rows, so the job splits into 1,000 batches.
You can run those 1,000 batches steadily in a single-threaded loop.
If the rows have no ordering dependency and you need speed, go concurrent or multithreaded: for example, split the work across 10 workers, each running only 100 batches, in parallel.
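That worker split can be sketched in Python (the insert itself is stubbed out; the batch and worker counts are the ones suggested above):

```python
from concurrent.futures import ThreadPoolExecutor

BATCH_SIZE = 10_000
NUM_BATCHES = 1_000   # 10 million rows / 10,000 rows per batch
NUM_WORKERS = 10      # each worker ends up running ~100 batches

def insert_batch(batch_no: int) -> int:
    # Placeholder: a real worker would open its own DB connection and
    # run one multi-row INSERT covering rows
    # [batch_no * BATCH_SIZE, (batch_no + 1) * BATCH_SIZE).
    return BATCH_SIZE

with ThreadPoolExecutor(max_workers=NUM_WORKERS) as pool:
    inserted = sum(pool.map(insert_batch, range(NUM_BATCHES)))

print(inserted)  # 10000000
```

Each worker should hold its own database connection; sharing one connection across threads serializes the work again.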
INSERT INTO tablename SELECT * FROM tablename
Just run it several times; each run doubles the row count.
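Since every pass doubles the table, "several times" really is small; a quick check (the 1,000-row seed is an assumption):

```python
rows = 1_000        # assumed seed rows inserted by hand
target = 10_000_000
passes = 0
while rows < target:
    rows *= 2       # each INSERT ... SELECT doubles the table
    passes += 1
print(passes, rows)  # 14 passes reach 16,384,000 rows
```

The trade-off is that the data is repetitive; it only suits load testing where row content doesn't matter.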
1. It depends on the content: simple alphanumeric data inserts faster.
2. It also helps if the table doesn't carry too many indexes.
3. If MySQL sits on a solid-state drive, reads and writes will be quicker.
4. But the key factors are your memory and CPU speed.
Note that you should use a single INSERT statement with many VALUES tuples instead of many separate INSERT INTO statements.
The other approaches aren't particularly good.
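A sketch of building one multi-row INSERT (table and column names are illustrative; real code should use the driver's parameter escaping, not string building):

```python
def multi_row_insert(table, columns, rows):
    """Build a single INSERT with one VALUES tuple per row."""
    cols = ", ".join(columns)
    tuples = ", ".join(
        "(" + ", ".join(repr(v) for v in row) + ")" for row in rows
    )
    # NOTE: repr() is only for illustration; it is not safe SQL escaping.
    return f"INSERT INTO {table} ({cols}) VALUES {tuples};"

sql = multi_row_insert("t", ["id", "name"],
                       [(i, f"name_{i}") for i in range(3)])
print(sql)
# INSERT INTO t (id, name) VALUES (0, 'name_0'), (1, 'name_1'), (2, 'name_2');
```

One statement with thousands of VALUES tuples is parsed and logged once, which is where the speedup over row-by-row INSERTs comes from.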
Ten million rows is still a lot; insert them in several separate batches, packing the values of each batch into one SQL statement to execute.
Insert it in several passes. Or use a trigger.
Build multiple tables, insert with multiple threads, then merge the tables.
Multi-process execution; I recommend taking a look at the pcntl_fork function.
Personally, I think the suggestions to use asynchronous loops are unreasonable.
Even 100,000 rows, with plenty of memory, will spend a lot of time on I/O writing the data into the database, and if transaction commits are mishandled the database will eat memory too.
Generating the data directly inside the database is the fastest way.
First build an ID table (a single id column) holding 100,000 rows (0-100,000), then:
INSERT INTO t
SELECT i.id,
       CONCAT('Name', i.id) AS name,
       CONCAT('random 7-12 code: ', FLOOR(7 + RAND() * 6)) AS rand_code,
       IFNULL(a.nickname, 'no nickname') AS nickname,
       UUID() AS descript,  -- random string
       FROM_UNIXTIME(UNIX_TIMESTAMP('2017-01-01 00:00:00') + FLOOR(RAND() * 60*60*24*365)) AS created  -- random date in 2017
FROM table_id i
LEFT JOIN table_account a ON a.id = FLOOR(RAND() * 12)  -- if the data needs another source table
WHERE i.id < 1000;  -- if only 1,000 rows are to be generated
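The same generate-inside-the-database idea can be sketched with Python's built-in sqlite3, where a recursive CTE plays the role of the pre-built ID table (table and column names are illustrative; MySQL syntax differs slightly):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, name TEXT, created TEXT)")

# Generate 1,000 rows entirely inside the database: no client round-trips.
conn.execute("""
    WITH RECURSIVE ids(id) AS (
        SELECT 0
        UNION ALL
        SELECT id + 1 FROM ids WHERE id < 999
    )
    INSERT INTO t
    SELECT id,
           'Name' || id,
           datetime(1483228800 + abs(random()) % (60*60*24*365), 'unixepoch')
    FROM ids
""")
count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 1000
```

Because everything happens server-side, no rows cross the client/server boundary, which is exactly why this approach is fast.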
Why not use a stored procedure? I feel a stored procedure would work better.
Definitely a stored procedure; then run the script asynchronously, restarting it on each async callback, and stop when the condition is met.