More Backup Automation
Some years back I purchased a couple of NetGear Duo RAID disk enclosures. Each enclosure was loaded up with dual 1TB disk drives from Western Digital. Everything written to one drive was mirrored to the second drive. This helped protect against a single disk failure.
I also had a very old (PIII processor!) computer running Linux. At one point I was using it as a development environment, but it had long since been retired from that duty. Why was it still around? I had created mount points for each of the RAID arrays on this Linux box, and it was responsible for retrieving the nightly database backup files from my web server (which hosts this blog, among other things). (My web server runs a hot backup at 1AM each morning using the mysqldump command.) A script on the Linux box, running at 2AM, would ftp to my web server, download all of the database dump files, add a date stamp to each file name, and then copy the resulting files out to the RAID array. The entire process was automatic, transparent, and for a long time quite robust.
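For reference, the 1AM job on the web server is nothing fancy. Here is a sketch of what the crontab entry and dump command look like; the database name, credentials, and paths shown are placeholders, not my real ones:

```
# web server crontab: run the database dump script at 1AM
0 1 * * * /home/dave/bin/db_backup.sh

# db_backup.sh, roughly: one gzipped mysqldump per database
mysqldump --user=backupuser --password=xxxx exampledb | gzip > /home/dave/db_backups/exampledb.sql.gz
```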
Until a few weeks ago.
At various points I would go out to the disk array and check that the backup files were being retrieved properly. The last time I checked, it seemed that I was not getting any backup files… at least not for the past several weeks. Attempts to log in to the development / backup box via ssh were unsuccessful. After rebooting the box, though, everything came up and seemed to work. I was able to run the backup script without errors.
The next morning, however, there was no backup file from the prior night, and the box was once again locked up. To be honest, it wasn’t worth fixing. It was noisy, a power hog, and the only thing it was doing at this point was downloading the nightly backup files. The NetGear Duo was also running a variant of Linux, so why not try that? I installed an add-on that enabled ssh access to the device and went to town. After a couple of attempts I eventually had a cron job set up that would connect to my web server and do all of the same things my old Linux box used to do, with the added advantage that the Duo was already running as a file server anyway, so it was simply taking on one more task.
Life was good, and backup files began to show up on a regular basis once again. To make sure I never miss a failed backup again, I also added a line at the end of the process that emails me the results of the backup script. Now each morning I can check my smartphone and confirm that the backup process ran correctly the night before.
Time To Upgrade
But never one to leave well enough alone, I decided to try upgrading one of the Duo devices to a newer model, the Ultra 2 Plus. The new device has a lot more memory, dual network ports, and a multi-core CPU, so everything should be faster. I also upgraded to the “black” series of Western Digital disk drives, as I had experienced intermittent problems using the “green” series inside the RAID enclosures. Everything came in last week and was installed with only a little fuss. I got my security configuration reset so that I can use the scp command (secure copy) to download the files from my web server. The last hurdle was that the new Ultra 2 Plus did not have a mail command installed! After some research and posting on the ReadyNAS forums (which, by the way, run phpBB) I got the answer: use the sendmail command instead. It is not quite as friendly as the mail command, but I got it to work.
Database Backup Script
My web server already has a cron job scheduled at 1AM that uses the mysqldump command to create a dump of every database that I want to back up. Here is the cron job scheduled on the Ultra 2 Plus. It is responsible for connecting to the server, downloading the files, adding a date stamp, and then copying the files out to the designated sub-directory on the RAID. Finally it creates the mail file and sends it out. The list of databases to back up is kept in a text file so that I don’t have to maintain the script itself when the list changes.
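The dbnames.txt file is just one database name per line; something like this (these names are made up for illustration):

```
blogdb
forumdb
testdb
```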
#!/bin/bash

# first establish local path location
cd /w_drive/incoming/

# next get updated copy of dbnames.txt
scp [email protected]:db_backups/dbnames.txt .

for dbname in `cat dbnames.txt`; do
    # Go get database backup file
    echo "Processing $dbname"
    scp [email protected]:db_backups/$dbname.sql.gz .

    # Establish some local variables
    filename="_`date +%Y-%m-%d`.sql.gz"
    filename="$dbname$filename"
    outputpath="/w_drive/$dbname/$filename"
    echo "Moving $dbname.sql.gz to $outputpath"

    # Rename local copy of db backup file to include date
    mv $dbname.sql.gz $filename

    # Then move dated backup file to proper output path
    mv $filename $outputpath
    chmod 644 $outputpath
done

# Build mail formatted file
echo "date: `date +%Y-%m-%d`" > mail.txt
echo "to: [email protected]" >> mail.txt
echo "subject: Backup" >> mail.txt
echo "from: [email protected]" >> mail.txt
grep -v "tty" /w_drive/incoming/getbackup.out >> mail.txt

# Send the email
/usr/sbin/sendmail [email protected] < /w_drive/incoming/mail.txt
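Assuming the script above is saved as /w_drive/incoming/getbackup.sh (the file name is my guess), the cron entry on the Ultra 2 Plus would look something like the following. The output redirect is what produces the getbackup.out file that the mail-building step greps:

```
# retrieve the 1AM database dumps at 2AM, capturing output for the email
0 2 * * * /w_drive/incoming/getbackup.sh > /w_drive/incoming/getbackup.out 2>&1
```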
As an additional bonus, my office is a lot quieter without the extra server running. I don't know that it will make a noticeable difference in my power bill, but it will certainly help at least a little bit.
I do something similar and suggest the following for your script:
* Get the exit code of ‘scp’ and have it notify you if it’s not ‘0’ (anything other than 0 is a failure). If your scp fails (password changes, disk space full on the backup system, IP/hostname changes, etc.) you’re most likely going to be notified that it was successful when it really wasn’t. Something like this:
scp :db_backups/dbnames.txt .
SCP_EXIT_CODE=$?
if [ "$SCP_EXIT_CODE" -ne "0" ]; then
    # Send email notification that the SCP failed here.
    exit 1
fi
Comment by John — March 22, 2012 @ 1:26 pm
Thanks, John, I like the enhancement. I’ll get my script updated.
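For what it’s worth, here is a sketch of how I might fold John’s suggestion in: a small wrapper that runs a command, checks its exit code, and logs a failure line (which would then end up in the output that gets emailed). The function name and wiring are hypothetical, not what is running on my Duo yet:

```shell
#!/bin/bash
# run_or_report: run a command and, if it exits non-zero, log a failure
# line that will show up in the emailed output (hypothetical helper).
run_or_report() {
    "$@"
    local rc=$?
    if [ "$rc" -ne 0 ]; then
        echo "FAILED (exit $rc): $*" >&2
    fi
    return "$rc"
}

# In the real script each scp call would be wrapped, e.g.:
#   run_or_report scp [email protected]:db_backups/dbnames.txt .
# Harmless demonstration with a command that exists everywhere:
run_or_report mkdir -p /tmp/backup_demo
```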
Comment by Dave Rathbun — March 28, 2012 @ 1:43 pm