Configuring Amazon RDS for MySQL to restore a 30 GB database backup file


I am planning to restore a MySQL database dump file into an Amazon RDS for MySQL instance. What configuration should I apply to the RDS instance so that a 30 GB backup restore completes successfully? Last time I tried restoring a 3 GB backup file on an m4.large instance with 8 GB of RAM; during the restore, memory reached its threshold and the process stopped. This time I want a clear RDS instance configuration that can accept the 30 GB backup restore. Please give me suggestions on this.
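One common approach (separate from the chunking answer below) is to stream the dump through the mysql client with relaxed session limits, so the restore is not cut off by packet-size or timeout settings. A rough sketch, assuming a placeholder RDS endpoint, user, database, and file name:

```shell
# Placeholders: replace host, user, database, and dump file with your own.
# --max-allowed-packet raises the client-side packet limit for large rows;
# --init-command relaxes session timeouts and skips FK checks during load.
mysql --host=mydb.xxxxxx.us-east-1.rds.amazonaws.com \
      --user=admin --password \
      --max-allowed-packet=1G \
      --init-command="SET SESSION net_write_timeout=3600, net_read_timeout=3600, foreign_key_checks=0" \
      mydatabase < backup-30gb.sql
```

Server-side variables such as `max_allowed_packet` and `innodb_buffer_pool_size` are set through the RDS parameter group rather than on the command line, so check those there as well.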

Even if you have a smaller instance, you can still restore large data if you can somehow break the data, or the process, into smaller steps. One way is to use a tool that exports the data in chunks. Chunking affects how data is exported from the source server: the chunk size is specified as a number of rows. For instance, with a chunk size of 1000 rows, the data is not fetched with a single 'SELECT ...'; instead, multiple selects such as 'SELECT ... LIMIT 0,1000', 'SELECT ... LIMIT 1000,1000', and so on are used until the end of the data is reached. This makes it possible to pick a chunk size that does not exceed the various resources (e.g. available memory) of the user, which would otherwise result in a slow operation, or perhaps a 'hang' or 'deadlock'. Keeping the chunk setting modest also ensures no timeouts occur. Such a timeout may happen due to the server's 'net_write_timeout' setting, or due to network settings not related to MySQL.
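The LIMIT-based chunking above can be sketched as a small helper that generates the per-chunk queries; in real use each generated statement would be piped through the mysql client and its output appended to the export file. Table name, row count, and chunk size here are hypothetical:

```shell
#!/bin/bash
# Build LIMIT-based SELECT statements that cover `total` rows of `table`
# in chunks of `chunk` rows each, instead of one big SELECT.
chunk_queries() {
  local table=$1 total=$2 chunk=$3 offset=0
  while [ "$offset" -lt "$total" ]; do
    echo "SELECT * FROM $table LIMIT $offset,$chunk"
    offset=$((offset + chunk))
  done
}

# Example: 2500 rows exported in 1000-row chunks -> three statements.
chunk_queries big_table 2500 1000
```

Because each chunk is a separate, bounded query, a failure mid-export loses at most one chunk, and no single statement holds enough rows in memory to hit the limits described above.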

The chunk size also determines the maximum size of the saved file. You will have to experiment a little to find the optimal settings, and of course they may differ between hosting providers if you have more than one. In my practical experience with cheap hosting, a chunk setting of 2000-10000 rows works well, depending on how many columns you have and their types.

