After the cloning process, VMware gives the virtual machine's NIC a new MAC address. Why is this significant? Well, from what I understand, the MAC address is what Linux uses to associate eth0 with that NIC. So, since your NIC no longer has the old MAC address, you have to change /etc/network/interfaces to bring up eth1 instead of eth0. I am not sure how to get eth0 freed up, but it can probably be found with a carefully crafted Google query.
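One way to free eth0 again (an assumption that holds for Ubuntu releases that use udev persistent net rules) is to delete the stale entry for the old MAC from /etc/udev/rules.d/70-persistent-net.rules and reboot. A sketch, demonstrated on a local copy of the file so nothing on the system is touched; the MAC addresses are placeholders:

```shell
# Demonstration on a local copy; the real file is /etc/udev/rules.d/70-persistent-net.rules.
# MAC addresses below are placeholders.
rules=70-persistent-net.rules
printf '%s\n' \
  'SUBSYSTEM=="net", ATTR{address}=="00:0c:29:aa:bb:cc", NAME="eth0"' \
  'SUBSYSTEM=="net", ATTR{address}=="00:0c:29:dd:ee:ff", NAME="eth1"' > "$rules"
# delete the stale entry for the old (pre-clone) MAC...
sed -i '/00:0c:29:aa:bb:cc/d' "$rules"
# ...and let the new NIC take the eth0 name on the next boot
sed -i 's/NAME="eth1"/NAME="eth0"/' "$rules"
cat "$rules"
```

After editing the real file (with sudo) and rebooting, the cloned NIC should come up as eth0 again and /etc/network/interfaces can stay unchanged.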
Next thing we want to do is update our Ubuntu Server. This is one of the main reasons why I like Ubuntu; it's really this simple:
sudo apt-get update
sudo apt-get dist-upgrade
Next order of business is authentication. We like to use integrated Active Directory authentication using Likewise. It's a very cool tool that makes your Linux servers authenticate back to Active Directory so you do not have to maintain local /etc/passwd files or a separate LDAP server.
As a matter of best practice, I always create a local /etc/passwd account. This is for two reasons: first, if AD cannot be reached, you cannot log on (remember, Ubuntu does not give root a password); and second, it is industry best practice when authenticating against an LDAP server to leave a local admin account.
Once that account is added, I add Likewise to the Ubuntu server with the following command:
sudo apt-get install likewise-open

This is some pretty cool software. I ran a few commands next: the first sets Likewise to start automatically at boot, so we do not have to do it manually; the second joins the domain ('Administrator' can be replaced with any account with privileges to join a system to the domain).

sudo update-rc.d likewise-open defaults
sudo domainjoin-cli join domain.local Administrator

Now, because I hate prepending my domain name to my username (i.e., domain\user) when logging in, I want it to assume the default domain unless otherwise instructed. To do so, I added the following line to the /etc/samba/lwiauthd.conf file, then ran the command to restart Likewise:

winbind use default domain = yes

sudo /etc/init.d/likewise-open restart

Then, the final order of business with Likewise: I need my Domain Admins group to be able to sudo, similar to the way users in the admin group can in a default Ubuntu installation. To do this, I edited the /etc/sudoers file and added the following line:

%domain^admins ALL=(ALL) ALL

Step 4:
Next order of business is my Nagios monitoring plugins. I use a monitoring tool called Nagios to monitor my servers. Nagios has a plugin you can install called NRPE that runs as a local daemon or under xinetd so the monitoring server can send requests to run predefined scripts. This is helpful for obtaining data you cannot get remotely, such as load average, disk utilization, etc. Note: I am only installing the client-side plugins and the remote poll daemon (NRPE); this is by no means a guide to setting up a Nagios server (maybe a future blog post though :P )
First thing I install is the nrpe-server package. This is the daemon (or xinetd process) that accepts remote connections, runs the predefined scripts, and returns the results.
sudo aptitude install nagios-nrpe-server

After this is installed, you will need to modify its configuration file located at /etc/nagios/nrpe.cfg. In this file you will set up two things. First, there is a directive to specify which IPs are allowed to talk to your NRPE process (this will prevent bad people from monitoring your server without your permission). This is also where you set up your check commands. Nagios uses scripts called check commands to get perfdata from a server, such as disk space and load averages, and this file defines the scripts that run locally on command from the main Nagios server. Then, you simply tell your Nagios server where this box is and what command to ask it to run.
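For illustration, the two pieces you touch in /etc/nagios/nrpe.cfg look roughly like this (the monitoring server's IP, the thresholds, and the plugin paths are assumptions based on a stock Ubuntu install):

```
# which hosts may talk to the NRPE daemon (add your Nagios server's IP)
allowed_hosts=127.0.0.1,192.168.1.10

# check commands the Nagios server can ask this host to run
command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20
command[check_disk]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /
```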
I installed all the main check commands, then just chose which ones I wanted to use in the nrpe-server's configuration file. Here is how to install the plugins:
sudo apt-get install nagios-plugins nagios-plugins-basic

Nagios should now be able to monitor your server.
Now it's time to get our MySQL stuff installed. Once again, time for a shameless plug of "This is why I love Ubuntu":
sudo apt-get install mysql-server mysql-client

You have now installed MySQL server. It will probably ask you to set the MySQL root password; try not to forget it.
Next, we'll install a really nifty MySQL admin tool called phpMyAdmin. This usually requires installing a bunch of prerequisites, but in this case it's Ubuntu-ized, so it's just another one-liner:
sudo apt-get install phpmyadmin

If you do not already have a web server installed, it will automatically install apache2 for you. After installation, it launches a configuration tool that asks which web server to configure. Select apache2 if you did not have a web server installed prior; if you did, tell it which one.
The last couple of tasks are to enable some users for this SQL server and allow other hosts to connect to it.
We'll start with enabling some users. Here is the quick-and-dirty dev-server way to do it. Please do not use this in real environments. I am not going to dive off into database security; just know this will work for a nice little sandbox, not a production environment.
mysql -u root -p
CREATE USER 'user'@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON *.* TO 'user'@'%';
Now that we have a user that can connect from any IP (that's what the % means), we need to tell the MySQL server to listen on an interface other than 127.0.0.1. By default, MySQL only listens on the localhost IP for security. In this instance, I need other hosts to be able to connect to this database server. In /etc/mysql/my.cnf there is a line that looks like this:
bind-address = 127.0.0.1
You are going to want this to be this server's (the MySQL server, not the web server) IP address. This tells it which IP addresses to listen on (do not mistake this for "listen to"). If you just comment this line out, MySQL will listen on all IP addresses of the host.
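The edit can be scripted as well. A sketch, demonstrated on a local copy of the file; in practice you would run the sed against /etc/mysql/my.cnf and then restart MySQL, and 192.168.1.20 stands in for this server's real IP:

```shell
# Demonstration on a local copy; the real file is /etc/mysql/my.cnf,
# and 192.168.1.20 is a placeholder for the MySQL server's own IP.
cnf=my.cnf
printf 'bind-address = 127.0.0.1\n' > "$cnf"
# swap the loopback address for the server's IP
sed -i 's/^bind-address.*/bind-address = 192.168.1.20/' "$cnf"
cat "$cnf"
# afterwards, on the real server: sudo /etc/init.d/mysql restart
```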
Now it's migration time. What I do for migrations is a mysqldump of the database, then import it on the other side from the dump file it produces. The mysqldump command produces a file that goes one by one through your database tables: drops each table if it exists, creates the table, then inserts all the data into it. The output is really just one giant text file that can recreate your entire database.
If you are on a production system, it's probably a good idea to stop any service leveraging the database, then bounce the database server to make sure all your writes are finished. Once this is done, we can use this command to dump a copy out:
mysqldump --add-locks --user=root --password=password --all -c dbname > /path/to/dbname-backup.sql

Move this file over to the new MySQL server. We now add two lines to the top of the file:
create database dbname;
use dbname;

This will create our database and select it so we can create the tables and insert the data from the file. All we need to do is pipe the file into a mysql prompt:
mysql -u root -p < /path/to/dbname-backup.sql

Run some tests; everything should be happy. Our database is migrated over to our new server.
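If you would rather not open the dump in an editor, the two header lines can be prepended from the shell. A sketch, using a tiny placeholder file standing in for the real mysqldump output:

```shell
# Placeholder file standing in for the real mysqldump output
dump=dbname-backup.sql
printf 'DROP TABLE IF EXISTS example;\n' > "$dump"
# prepend the two setup lines so the import creates and selects the database
printf 'create database dbname;\nuse dbname;\n' | cat - "$dump" > "$dump.tmp" && mv "$dump.tmp" "$dump"
head -3 "$dump"
```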
Finally, we need to ensure we are backing up our database. I created a little backup script based on our migration command. Most people have solutions from their backup suites that work; I actually use both this and our suite. I think this is one of the easiest backups to restore (see the Step 6 migration): it's adding 2 lines to a text file, then a 1-line command at a prompt to restore your database. Can't beat that. I just run the script from cron. Please note, if you want to use the scp portion, the user running the script in cron needs a pre-shared RSA key with the remote backup server.
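Setting up that key is a one-time step. A sketch; the filename here is a placeholder, and in practice you would generate the key as the cron user (so it lands in that user's ~/.ssh) and then copy the public half to the backup server:

```shell
# Generate a passwordless RSA key for the user that runs the backup script.
# 'backup_rsa' is a placeholder; in practice use ~/.ssh/id_rsa for the cron user.
ssh-keygen -t rsa -N '' -f backup_rsa -q
# then install the public key on the backup server (one-time password prompt):
# ssh-copy-id -i backup_rsa.pub email@example.com
```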
#!/bin/bash
# set date format
date=$(date +%Y%m%d)
# dump database
mysqldump --add-locks --user=user --password=password --all -c dbname > /path/to/dbname-backup-$date.sql
# compress backup file
gzip -9 /path/to/dbname-backup-$date.sql
# secure copy to backup server
scp /path/to/dbname-backup-$date.sql.gz email@example.com:/path/to/backup/dir
# if scp to fileserver failed, email me about it
if [ $? -gt 0 ]
then
    mail -s "Development dbname MySQL Backup Failed" firstname.lastname@example.org < /dev/null
    exit
fi
# remove local copy
rm /path/to/dbname-backup-$date.sql.gz

Toss that into a script and put it in cron on an interval of your choice. It moves the backup off to another server and notifies you if the file does not move over correctly.
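To schedule it, a one-line crontab entry does the trick (the script path here is hypothetical):

```
# run the database backup nightly at 02:30
30 2 * * * /usr/local/bin/dbname-backup.sh
```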
That's all there is to it. We built a brand new server to host MySQL, and have it authenticating against AD, monitored by our Nagios server, backing itself up, and servicing our database needs. Funny part is, it took me longer to do the write-up than it did to do the actual work :)