Thursday, November 5, 2009
Our standard build has sudo installed. Root cannot log in over SSH, and we have a default local account the sysadmins know and can use in a pinch. Day to day, however, we use our own accounts so sudo leaves a log trail.
Now, when we installed likewise-open on our default Ubuntu image, we had one concern: any user with an Active Directory account could now log in to our Ubuntu servers. While they would just be dummy users with minimal privileges, what is to stop them from using an exploit to escalate their privileges to root, or from filling up their home directory and owning the boxes that don't keep those partitions separate? I don't think any of my users would do anything like that, but it's always better to grant minimal privileges, and let's face it: my sales, accounting, managers, and many others have absolutely no business ever logging in to any of the production Linux/Unix servers.
I did some strategic Google query building and stumbled on a thread (sorry, I lost the original link) that pointed me to a specific line in a configuration file (/etc/security/pam_lwidentity.conf). By uncommenting this line and putting a group in it, I can restrict who can log in to my servers. So I set it to the following and simply created a security group called 'unix admins' in Active Directory to hold the people who can administer these boxes. Piece of cake.
require_membership_of = GCA\unix^admins
Wednesday, November 4, 2009
More information can be found at securityfocus here:
From what I understand, you can fix this vulnerability simply by setting vm.mmap_min_addr to a value of 4096 or greater (in short, if this value is 0, you are vulnerable). I have also read that installing Wine (winehq.org) will automatically set this value to 0 and make you vulnerable.
To check the value of this variable on your Linux system and see whether you are currently vulnerable, just run the check below and ensure it is not set to 0:
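The check is just a matter of reading the sysctl out of /proc; a quick sketch (the 65536 value is an example, any value of 4096 or greater closes the hole):

```shell
# 0 means vulnerable; 4096 or greater is considered safe
cat /proc/sys/vm/mmap_min_addr

# to raise it on the running kernel (as root):
#   sysctl -w vm.mmap_min_addr=65536
# and to make it persist across reboots, add this line to /etc/sysctl.conf:
#   vm.mmap_min_addr = 65536
```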
On Debian-based installations, check the following site for how to fix this for the current boot and how to make it persist across reboots.
For those who are really concerned right now: I checked my Ubuntu 8.04.3 LTS servers and they all default to 65535 for this value, so they should not be vulnerable to this bug. Keep in mind what I stated above, though: if you install Wine, you become vulnerable.
Tuesday, November 3, 2009
Let me also preface with the fact that we have nightly snapshots of the VMs and the developers check their code into Subversion on a regular basis, so the worst-case scenario is a 20-minute operation to copy the VM from last night's backup in the event one explodes.
Today I had an engineer who tried to reboot her Windows XP VDI and it would not reboot. I asked her if she had any open applications, to which she replied no, so I used VirtualCenter to send the VM a reboot command. That did not work, so I gave it a good amount of time, then got impatient, took the shotgun approach, and tried the reset VM command instead. That still did not work, and when I tried again I got an error that another operation was in progress.
I did some googling and found information on the nuclear approach to the situation. First thing is to find the vm world id with the following command:
cat /proc/vmware/vm/*/names | grep "vmname"
The result should start with vmid=####. With this number, you can browse over to /proc/vmware/vm/####/cpu and do:

less -S /proc/vmware/vm/####/cpu/status
Well, I had a problem: the cpu directory was not present at that location, and I could not get the VM to stop at all. I also tried the other method of using ps aux and grepping for the VM name, but there was no process either. I never found a solution, but here is what I found in the /var/log/vmkwarning log file. Look at the timestamps: it figured things out eventually and allowed me to perform operations on it, but it took well over an hour. See below for the log output, and please let me know if you know how I can fix this without waiting 90 minutes in the future:
Nov 3 14:54:50 gcaesx001 vmkernel: 40:00:33:25.587 cpu1:1042)WARNING: Swap: vm 1390: 7515: Swap sync read failed: status=195887193, retrying...
Nov 3 14:54:50 gcaesx001 vmkernel: 40:00:33:25.587 cpu0:1042)WARNING: Swap: vm 1390: 7528: Swap sync read retry failed: status=195887193, retry=1
Nov 3 14:54:50 gcaesx001 vmkernel: 40:00:33:25.638 cpu2:1042)WARNING: Swap: vm 1390: 7528: Swap sync read retry failed: status=195887193, retry=2
Nov 3 14:54:51 gcaesx001 vmkernel: 40:00:33:25.939 cpu2:1042)WARNING: Swap: vm 1390: 7528: Swap sync read retry failed: status=195887193, retry=4
Nov 3 14:54:54 gcaesx001 vmkernel: 40:00:33:29.144 cpu1:1042)WARNING: Swap: vm 1390: 7528: Swap sync read retry failed: status=195887193, retry=8
Nov 3 14:55:02 gcaesx001 vmkernel: 40:00:33:37.147 cpu2:1042)WARNING: Swap: vm 1390: 7528: Swap sync read retry failed: status=195887193, retry=16
Nov 3 14:55:18 gcaesx001 vmkernel: 40:00:33:53.155 cpu0:1042)WARNING: Swap: vm 1390: 7528: Swap sync read retry failed: status=195887193, retry=32
Nov 3 14:55:50 gcaesx001 vmkernel: 40:00:34:25.175 cpu3:1042)WARNING: Swap: vm 1390: 7528: Swap sync read retry failed: status=195887193, retry=64
Nov 3 14:56:54 gcaesx001 vmkernel: 40:00:35:29.224 cpu0:1042)WARNING: Swap: vm 1390: 7528: Swap sync read retry failed: status=195887193, retry=128
Nov 3 14:59:02 gcaesx001 vmkernel: 40:00:37:37.300 cpu0:1042)WARNING: Swap: vm 1390: 7528: Swap sync read retry failed: status=195887193, retry=256
Nov 3 15:03:18 gcaesx001 vmkernel: 40:00:41:53.473 cpu1:1042)WARNING: Swap: vm 1390: 7528: Swap sync read retry failed: status=195887193, retry=512
Nov 3 15:11:51 gcaesx001 vmkernel: 40:00:50:25.865 cpu2:1042)WARNING: Swap: vm 1390: 7528: Swap sync read retry failed: status=195887193, retry=1024
Nov 3 15:28:55 gcaesx001 vmkernel: 40:01:07:30.612 cpu3:1042)WARNING: Swap: vm 1390: 7528: Swap sync read retry failed: status=195887193, retry=2048
Nov 3 16:03:05 gcaesx001 vmkernel: 40:01:41:40.116 cpu1:1042)WARNING: Swap: vm 1390: 7528: Swap sync read retry failed: status=195887193, retry=4096
Nov 3 16:18:12 gcaesx001 vmkernel: 40:01:56:47.424 cpu3:1042)WARNING: Swap: vm 1390: 7550: Read failed: numPages = 1
Nov 3 16:18:12 gcaesx001 vmkernel: 40:01:56:47.424 cpu3:1042)WARNING: Alloc: vm 1390: 5226: unable to read from slot(0x102ac7c)
Nov 3 16:18:12 gcaesx001 vmkernel: 40:01:56:47.424 cpu3:1042)WARNING: World: vm 1390: 6870: vmm0:vdi-rlosaria:vmk: vcpu-0:Unable to read swapped out PPN(0xa1ef) from swap slot(0x102ac7c) for VM(1390)
Nov 3 16:18:12 gcaesx001 vmkernel: 40:01:56:47.426 cpu3:1042)WARNING: P2MCache: vm 1390: 450: Alloc_GetPhysMemRange failed for PPN 0xa1ef status = Permission denied
Nov 3 16:18:12 gcaesx001 vmkernel: 40:01:56:47.430 cpu2:1037)WARNING: World: vm 1390: 5160: Panic'd VMM world being reaped, but no core dumped.
Wednesday, October 7, 2009
After the cloning process, VMware gives the virtual machine's NIC a new MAC address. Why is this significant? Well, from what I understand, the MAC address is what Linux uses to associate eth0 with that NIC. So, since your NIC no longer has the old MAC address, you have to change /etc/network/interfaces to bring up eth1 instead of eth0. I am not sure how to get eth0 freed up, but it can probably be found with a carefully crafted Google query.
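If you'd rather free up eth0 than switch to eth1, the MAC-to-name mapping on Ubuntu of this era lives in udev's persistent net rules (a sketch; the MAC shown is made up):

```
# /etc/udev/rules.d/70-persistent-net.rules keeps one entry per known MAC, e.g.:
#   SUBSYSTEM=="net", ATTR{address}=="00:0c:29:aa:bb:cc", NAME="eth0"
# Delete the stale entry (the clone's old MAC, bound to eth0), rename the new
# entry from eth1 to eth0, and reboot; eth0 is then bound to the new MAC.
```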
Next thing we want to do is update our Ubuntu server. This is one of the main reasons I like Ubuntu; it's really this simple:
sudo apt-get update
sudo apt-get dist-upgrade
Next order of business is authentication. We like to use integrated Active Directory authentication using Likewise. It's a very cool tool that makes your Linux servers authenticate back to Active Directory so you do not have to maintain local /etc/passwd files or a separate LDAP server.
As a matter of best practice, I always create a local /etc/passwd account. This is for two reasons: first, if AD cannot be reached, you cannot otherwise log on (remember, Ubuntu does not give root a password); and second, it is industry best practice when authenticating against an LDAP server to keep a local admin account.
Once that account is added, I add likewise to the ubuntu server with the following command:
sudo apt-get install likewise-open

This is some pretty cool software. I ran a few commands next: the first sets Likewise to start automatically at boot so we do not have to do it manually, and the second joins the domain ('Administrator' can be replaced with any account that has privileges to join a system to the domain).

sudo update-rc.d likewise-open defaults
sudo domainjoin-cli join domain.local Administrator

Now, because I hate prepending my domain name to my username (i.e. domain\user) when logging in, I want it to assume the default domain name unless otherwise instructed. To do so, I added the following line to the /etc/samba/lwiauthd.conf file, then ran the command to restart Likewise:

winbind use default domain = yes

sudo /etc/init.d/likewise-open restart

Then, the final order of business with Likewise: I need my domain admins group to be able to sudo, similar to the way users in the admin group can in a default Ubuntu installation. To do this, I edited the /etc/sudoers file and added the following line (Likewise writes spaces in group names as '^'):

%domain^admins ALL=(ALL) ALL

Step 4:
Next order of business is my nagios monitoring plugins. I use a monitoring tool called nagios to monitor my servers. Nagios has a plugin you can install called nrpe that runs as a local daemon or under xinetd so the monitoring server can send requests to run predefined scripts. This is helpful for obtaining data you cannot get remotely, such as load average, disk utilization, etc. Note: I am only installing the client-side plugins and the remote poll daemon (nrpe); this is by no means a guide to setting up a nagios server (maybe a future blog post though :P )
First thing I install is the nrpe-server package. This is the daemon (or xinetd process) that runs to accept remote connections to run predefined scripts and return the results.
sudo aptitude install nagios-nrpe-server

After this is installed, you will need to modify its configuration file, located at /etc/nagios/nrpe.cfg. In this file you will set up two things. First, there is a directive specifying which IPs are allowed to talk to your nrpe process (this prevents bad people from monitoring your server without your permission). Second, this is where you set up your check commands. Nagios uses scripts called check commands to get perfdata from a server, such as disk space and load averages; you define the scripts that run locally on command from the main nagios server in this file as well. Then you simply tell your nagios server where this box is and what command to ask it to run.

I installed all the main check commands, then just chose which ones I wanted to use in the nrpe-server's configuration file. Here is how to install the plugins:

sudo apt-get install nagios-plugins nagios-plugins-basic

Nagios should now be able to monitor your server.
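For reference, the two pieces of /etc/nagios/nrpe.cfg described above look roughly like this (the IP and thresholds are hypothetical; point allowed_hosts at your own nagios server):

```
# /etc/nagios/nrpe.cfg (excerpt)
# only these hosts may talk to the nrpe daemon
allowed_hosts=127.0.0.1,192.168.1.50

# check commands the nagios server can ask this box to run
command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20
command[check_disk]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /
```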
Now it's time to get our MySQL stuff installed. Once again, time for a shameless plug of "This is why I love Ubuntu":
sudo apt-get install mysql-server mysql-client

You have now installed MySQL server. It will probably ask you to set the MySQL root password; try not to forget it.
Next, we'll install a really nifty MySQL admin tool called phpmyadmin. This usually requires installing a bunch of prerequisites, but in this case it's Ubuntu-ized, so it's just another one-liner:
sudo apt-get install phpmyadmin

If you do not already have a web server installed, this will automatically install apache2 for you. After installation, it launches a configuration tool that asks which web server to configure. Select apache2 if you did not have a web server installed prior; if you did, tell it which one.
The last couple of tasks are to enable some users for this SQL server and allow other hosts to connect to it.
We'll start with enabling some users. Here is the quick and dirty dev-server way to do it. Please do not use this in real environments. I am not going to dive off into database security; just know this will work for a nice little sandbox, not a production environment.
mysql -u root -p

CREATE USER 'user'@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON *.* TO 'user'@'%';
Now that we have a user that can connect from any IP (that's what the % means), we need to tell the MySQL server to listen on an interface other than 127.0.0.1. By default, MySQL only listens on the localhost IP for security. In this instance, I need another server to connect to it. In /etc/mysql/my.cnf there is a line that looks like this:
bind-address = 127.0.0.1
You are going to want this to be this server's (the MySQL server, not the web server) IP address. This tells MySQL which IP addresses to listen on (do not mistake this with "listen to"). If you just comment this line out, MySQL will listen on all IP addresses of the host.
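The change itself is a one-liner in /etc/mysql/my.cnf (the address shown is hypothetical; use the MySQL server's own IP, or comment the line out entirely to listen on all interfaces):

```
# /etc/mysql/my.cnf (excerpt)
# bind-address = 127.0.0.1     # default: localhost only
bind-address = 192.168.1.20    # this MySQL server's own IP
```

Restart MySQL afterwards (sudo /etc/init.d/mysql restart) so the change takes effect.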
Now it's migration time. What I do for migrations is a mysqldump of the database, then import it on the other side from the dump file it produces. The mysqldump command produces a file that goes one by one through your database tables, drops each if it exists, creates the table, then inserts all the data into it. The output is really just one giant text file that can recreate your entire database.

If you are on a production system, it's probably a good idea to stop any service leveraging the database, then bounce the database server to make sure all your writes are finished. Once this is done, we can use this command to dump a copy out:
mysqldump --add-locks --user=root --password=password --all -c dbname > /path/to/dbname-backup.sql

Move this file over to the new MySQL server. We now add two lines to the top of the file:
create database dbname;
use dbname;

This will create our database and switch to it so we can create the tables and insert the data from the file. All we need to do is pipe the file into a mysql prompt:
mysql -u root -p < /path/to/dbname-backup.sql

Run some tests; everything should be happy. Our database is migrated over to our new server.
Finally, we need to ensure we are backing up our database. I created a little backup script based off our migration command. Most people have solutions from their backup suites that work; I actually use both this and our suite. I think this is one of the easiest backups to restore (see Step 6, migration): it's adding two lines to a text file, then a one-line command at a prompt to restore your database. Can't beat that. I just run the script from cron. Please note: if you want to use the scp portion, the user running the script in cron needs a preshared RSA key with the remote backup server.
#!/bin/bash
# set date format (the exact format was lost from the original post; any unique stamp works)
date=$(date +%Y%m%d)

# dump database
mysqldump --add-locks --user=user --password=password --all -c dbname > /path/to/dbname-backup-$date.sql

# compress backup file
gzip -9 /path/to/dbname-backup-$date.sql

# secure copy to backup server
scp /path/to/dbname-backup-$date.sql.gz firstname.lastname@example.org:/path/to/backup/dir

# if scp to fileserver failed, email me about it
if [ $? -gt 0 ]; then
    mail -s "Development dbname MySQL Backup Failed" email@example.com < /dev/null
    exit 1
fi

# remove local copy
rm /path/to/dbname-backup-$date.sql.gz

Toss that into a script and put it in cron on an interval of your choice. It moves the backup off to another server and notifies you if the file does not move over correctly.
That's all there is to it. We built a brand new server to host MySQL, have it authenticating against AD, monitored by our nagios server, backing itself up, and servicing our database needs. The funny part is, it took me longer to do the writeup than it did to do the actual work :)
Friday, October 2, 2009
So, the first thing I did was figure out how to get the virtual machines to start with the vmrun command. The first thing I learned was that I needed a line added to the vmx file to keep question prompts from holding up the virtual machine at startup, since without the GUI there is no way to see the question VMware wants answered. I added the following line to my vmx file so that VMware automatically selects the default answer to any questions it may have.
msg.autoAnswer = "TRUE"
Using help files and some carefully crafted Google queries, I figured out that you can start a virtual machine in VMware Workstation in headless mode (i.e., with no DISPLAY variable set) with the following command:
vmrun start /path/to/machine.vmx nogui
After figuring this out, I need to create an init script to start this up. The first thing that popped in my mind is the fact that I do not want these VM's starting under the root user, so I adapted my vmrun command for the script to look like this:
su - rivey -c "vmrun start /path/to/machine.vmx nogui"
This uses the switch user command to run the quoted command as the user I specify. I then snatched up the sshd init script, stripped out all the junk I didn't need, and finished with the following:
#!/bin/bash
# Init file for starting Virtual Machines
# will start the following VM's

# source function library.
. /etc/rc.d/init.d/functions

# Location of vmx files (placeholder paths; the originals were lost from the post)
VMRUN=/usr/bin/vmrun
VM1=/vmstore/Solaris10/Solaris10.vmx
VM2=/vmstore/DC01/DC01.vmx
VM3=/vmstore/oraresource/oraresource.vmx
VM4=/vmstore/WindowsMemberServer/WindowsMemberServer.vmx
prog="vmrun"

runlevel=$(set -- $(runlevel); eval "echo \$$#" )

start() {
        echo -n $"Starting $prog: Solaris10 "
        su - rivey -c "$VMRUN start $VM1 nogui"
        [ "$?" = 0 ] && echo ok
        echo -n $"Starting $prog: DC01 "
        su - rivey -c "$VMRUN start $VM2 nogui"
        [ "$?" = 0 ] && echo ok
        echo -n $"Starting $prog: oraresource "
        su - rivey -c "$VMRUN start $VM3 nogui"
        [ "$?" = 0 ] && echo ok
        echo -n $"Starting $prog: WindowsMemberServer "
        su - rivey -c "$VMRUN start $VM4 nogui"
        [ "$?" = 0 ] && echo ok
}

stop() {
        echo -n $"Stopping $prog: Solaris10 "
        su - rivey -c "$VMRUN stop $VM1 nogui"
        [ "$?" = 0 ] && echo ok
        echo -n $"Stopping $prog: DC01 "
        su - rivey -c "$VMRUN stop $VM2 nogui"
        [ "$?" = 0 ] && echo ok
        echo -n $"Stopping $prog: oraresource "
        su - rivey -c "$VMRUN stop $VM3 nogui"
        [ "$?" = 0 ] && echo ok
        echo -n $"Stopping $prog: WindowsMemberServer "
        su - rivey -c "$VMRUN stop $VM4 nogui"
        [ "$?" = 0 ] && echo ok
}

case "$1" in
        start)   start ;;
        stop)    stop ;;
        restart) stop; start ;;
        *)       echo $"Usage: $0 {start|stop|restart}"; exit 1 ;;
esac
exit 0
Now, a few things to point out. The first is pretty obvious: this script starts and stops four different VMs. The second: this is by no means the cleanest way of doing this and probably not the best script in the world, but it was a pretty nice little thrown-together script to get the job done. Quick and dirty. Initial tests show the script working like a champ. I tossed the script into /etc/init.d, set the executable bit, then symlinked it into rc3.d and rc5.d.
Friday, July 24, 2009
The build starts out pretty simple. I spun up some instances of Windows server 2003, patched them up, and made them ready to be cloned for student systems. On the netapp VTL, I created a bunch of Robots and Tape devices, which are emulating a StorageTek T9840C. I added 16 tapes, 8 on one serial format and 8 on another (as requested by Symantec). I then dropped a couple HBA's into the ESX Server and connected them to the VTL (we had plenty of HBA's laying around so the Multipathing and redundancy is nice to have seeing as how we are going to share the HBA's for all of the systems).
There were a few things that threw me. The first one was how to match up the Robot and Tape drives on each machine. When I connected the HBA's and added a new SCSI device to the VM, the pulldown list just had them by name, so I saw 8 Robots and 16 drives with no information regarding which was which. They were also not in any specific order.
To figure this out, I SSH'd into the service console of the ESX server and did an ls -al on the devices directory containing all of the targets. There, a symlink maps each LUN target to a ridiculously long unique identifier. This long number also shows up after you add the SCSI device in VMware if you hover over it in the edit settings window.
The next thing I tripped over a bit was giving two virtual machines access to the same SCSI device. The way this is done is by changing the SCSI controller to a different type: in edit virtual machine settings, if you click on the SCSI controller, you can change it so that its SCSI devices can be shared. The only problem is that this change applies globally to the SCSI controller, so to avoid problems for the other SCSI devices, such as the Windows C: drive, I moved the VTL SCSI devices over to a different SCSI ID (1:0 instead of 0:3). This makes VMware automatically add a new SCSI controller to the box; on that one I made the change and everything was happy.
The next thing I bumped into was our firewall. For whatever reason, our Check Point VPN tunnels are not set up correctly. I could not figure out how to allow the Seattle DMZ network to talk to the Tampa DMZ network over tcp/3389 (RDP). After spending some time on it, I used a workaround: I had four consecutive external IPs available, so I simply NAT'd these boxes out to the edge, allowed our Seattle office's external IP to RDP in on those external IPs, and had it work that way instead of figuring out the VPN tunnel madness.
The last headache was the driver for the tape devices. The funny part is that this is a chicken-and-egg problem: in order to get the driver for the tape drives installed, I need NetBackup installed (it uses a Symantec halfinch.sys driver), but during the configuration portion of the install, it wants the tape devices already installed so it can configure them. To remedy this, Symantec added a pause mechanism to their setup scripts so the user can pause the install after the drivers are present on the system, install the tape devices from Device Manager, then continue the install.
On the day of the class, everything ran smoothly. The only thing I forgot to do was populate the hosts files of each system with the IP information for the other machines in the 'classroom'. This wasn't a big deal; the instructor was able to figure it out and work through it.
Class ran, students seemed to be happy from what I understand, and now I have a template for this build as well as the procedure (this is why I blog this stuff!) to build it back up again.
Thursday, July 23, 2009
The user experience is pretty solid. When users want to fax something on their screen, they just act as if they are printing it: select the Fax Printer from the printers list, enter the phone number, select a generic cover letter, fill in the rest of the cover letter properties, and fire away. Another neat little feature is the address book it has for faxing information. All of this is integrated into our AD domain, so I can share out the driver from a Windows print server and not have to worry about running around to everyone's computer with a CD.
While it's all fine and dandy, here are the shortcomings I ran into. The AD integration works great; however, the little address book tool must be installed on each individual PC. The address book also lacks the capability to integrate with my Outlook/Exchange account to pull my local contacts. It will integrate through MAPI with my Global Address List in AD, but that doesn't help, since we do not store customer or vendor information in AD. That rules out the LDAP connector as well. I would like a centralized version of the address book for the 'private version' and/or full integration with Exchange/Outlook local address books.
Another part I found a bit annoying was the coversheets. I wish we could take an existing Word document, insert special fields into it that the printer replaces with the appropriate information, and thereby get a custom coversheet.
If those features were available, faxing would be that much smoother in my environment. People could populate a shared address book in the domain and/or use their local Outlook/Exchange contacts for faxing, and we would also be able to use our custom GCA cover letter.

I can't really complain though; it's an upgrade for sure. I just wish printing companies could keep up with network technologies a little better.
The neat part is, when a user opens a document for editing, it automatically checks that document out (to prevent others from editing it). The other cool part is that it saves a new minor revision every time a user saves, and also gives the user the ability to insert comments. It shows which user edited it, what time it was edited, and a slew of other information. Major revisions are controlled by users simply selecting "Create new major revision" from a pulldown menu on the SharePoint site when looking at the document.
I think it's very solid document management, and very intuitive to use. I have to admit, Microsoft actually did something right. If you know me, I am typically on the Microsoft hate bandwagon, but the Project Server product with SharePoint version control is pretty sweet.
Monday, July 13, 2009
So, instead of starting the download process all over, which would take another full day, I've decided to go a different route and install CentOS using an NFS mount point from a Linux server that has the ISO file mounted on a loopback device.
The idea behind this is to download a single 8 MB ISO file for the boot CD; the boot CD can then pull the rest of the installation files from another box over NFS. First order of business: get an NFS server installed on the Ubuntu laptop I will be using to host out the full distribution. On Ubuntu, this is a pretty simple task.
sudo apt-get install nfs-user-server
When I ran this command, I received an error saying "No installable candidate" along with a bunch of information about my running kernel. This is just a guess, but I'm going to say I need to update my Ubuntu laptop, which doesn't get much action, so I haven't updated it in months. So I went ahead and started the update process, which is going to take a while.
sudo apt-get update
sudo apt-get dist-upgrade
When this finishes, I'm going to run the nfs-server install command again; then it should just be a simple matter of setting up the NFS export in the /etc/exports file. While waiting, I did a little research at the following site to get the syntax for setting up the export.
I've done NFS a few times, so my eyes immediately found the kind of export I was looking to do; the export line gets added to /etc/exports. Then I run the command to make the NFS server re-read the config file and make my share available.
sudo exportfs -ra
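The export line itself didn't survive the original post; assuming the ISO contents are loop-mounted somewhere like /srv/centos, a typical read-only export in /etc/exports would look like this (paths and subnet are hypothetical):

```
# mount the ISO on a loopback device first:
#   sudo mkdir /srv/centos
#   sudo mount -o loop /path/to/CentOS-dvd.iso /srv/centos
#
# /etc/exports — read-only export of the install media to the local subnet
/srv/centos    192.168.1.0/24(ro,no_subtree_check)
```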
Now that I have my export, it's time to boot from that netinstall CD and point it at my box to start the installation process. Because I am still waiting for my updates to finish, I did some Googling to find somebody who has done this before on CentOS. Here is what I found:
Seems pretty straightforward. The CD boots up; I selected default options, DHCP, no IPv6 support (don't need it). Finally, it wants the NFS server name (IP) and the directory (the NFS mount point with the media). I entered both and, sure enough, was presented with the CentOS installation wizard. The rest is pretty straightforward: I go through the install as if I were using the DVD. Pretty cool stuff.
Friday, July 10, 2009
For a temporary fix, we grabbed an unused system with Solaris 10 from a classroom and put it in place of the jumpstart server. We re-IP'd it to the same IP as the old box that sat there (this prevents reconfiguration of the individual systems) and followed the steps in the following article to get it routing traffic.
Today, I am tasked with building a replacement for that box. The new paradigm we are using here at GCA for the jumpstart servers is to build a Linux host, then use VMware to virtualize the jumpstart server. All seven of our offices have a standard build for the classrooms: the classroom network is flat, routes through a classserver (jumpstart server) that resides in the DMZ, and is forwarded out to the internet. Pretty simple setup.
The new server I am building is going to be using CentOS for the host OS, VMware server 2.0.1, and the Solaris 10 jumpstart server. The implementation we are using allows for us to be very flexible because every jumpstart server VM is exactly the same. The only modification we need to make for the host OS is to change the IP of the DMZ facing interface to the subnet of that office's DMZ.
The only difference in the new implementation is that we are not going to use the jumpstart server for routing; we are going to use CentOS. Instructors are allowed to log in to the jumpstart servers to do bundles and move systems around, but that is not necessary on the CentOS side, because that should never change. I will use sudo and a couple of scripts to give the instructors a way to shutdown/reboot the CentOS box as well as start/shutdown/reboot the jumpstart server. These scripts are pretty cool and good to know about.
Stay tuned for more info on iptables, VMware scripts, etc., as I write and implement them. I probably will not get this done today, as I am still waiting for CentOS to download.
Thursday, July 9, 2009
After doing a little bit of research: the long values are simply the concatenation of the four 8-bit binary octets, cast into one giant long value. This makes life simple, because consecutive IPs are also consecutive in the long format. Not only that, they are sequential, so you can find out whether an IP is in a network with simple conditional logic:
if ( $networkaddr <= $ipaddr && $ipaddr <= $broadcastaddr )
Because the database stores all of these values as longs, the SQL query can be built with that same conditional logic and nothing more. Once you figure it out, everything is pretty simple: the network address is the lowest value in the subnet and the broadcast is the highest. Combine this with the ordering the converted IPs have as long values and it's a pretty simple way to deal with things, just not intuitive.
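The conversion itself isn't shown in the post; a quick shell sketch of the idea (the addresses are examples):

```shell
# Convert a dotted-quad IP to the long form the database stores:
# value = a*2^24 + b*2^16 + c*2^8 + d
ip2long() {
    local IFS=.
    set -- $1
    echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

net=$(ip2long 192.168.1.0)      # lowest address in the subnet
ip=$(ip2long 192.168.1.10)
bcast=$(ip2long 192.168.1.255)  # highest address in the subnet

echo "$ip"                      # prints 3232235786

# membership test: network <= ip <= broadcast
if [ "$net" -le "$ip" ] && [ "$ip" -le "$bcast" ]; then
    echo "192.168.1.10 is in 192.168.1.0/24"
fi
```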
The easiest way to architect this solution was to simply install the whole standalone project server as a virtual machine and have the virtual machine files reside on an encrypted filesystem.
So, step 1, install a box with an encrypted volume. I am going to use my current OS of choice, Ubuntu Server 8.04.1. I found a great reference article using the Google debugger to use as a guide, see https://help.ubuntu.com/community/EncryptedFilesystemOnIntrepid. My implementation is going to vary slightly, but this is the document I am using as my reference.
So the first thing I did was install Ubuntu Server 8.04.1 from the CD. My partitioning is somewhat complicated, but nothing crazy. The box I am using has 6 drive bays filled with drives. I used the onboard raid controller to make two arrays, the first is a mirror of two 36 GB drives and the second is a raid 5 array of the other four 320 GB drives. Ubuntu sees two drives, /dev/sda and /dev/sdb. I formatted /dev/sda into two filesystems, a SWAP partition with 4GB and the rest was made as the / partition. The /dev/sdb drive I left unformatted, I will be using that for my encrypted filesystem.
When going through the install, I selected all of the defaults, only adding the openssh-server so I don't have to use the console in the server room. Once the install finished, I SSH'd into the box from my desk and ran the following two commands to update my server to the latest patches and reboot.
sudo apt-get update
sudo apt-get dist-upgrade
sudo init 6

After the reboot, I needed to add a couple of additional packages, so I ran the following command:

sudo apt-get install cryptsetup hashalot initramfs-tools
After those packages were added, I skipped down the document to the "Create the encrypted partition" section and started with those steps.
sudo modprobe dm_crypt
sudo modprobe sha256
sudo luksformat -t ext3 /dev/sdb
Note: I did get a warning, but it does not seem to have caused any problems.
WARNING: Error inserting padlock_sha (/lib/modules/2.6.24-24-server/kernel/drivers/crypto/padlock-sha.ko): No such device
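As an optional sanity check at this point (my addition, not part of the guide I followed), you could dump the LUKS header to confirm the format actually took:

```shell
# Optional check: print the LUKS header of the newly formatted device.
# If luksformat succeeded, this shows the cipher, hash, and key slots.
sudo cryptsetup luksDump /dev/sdb
```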
After the volume has been formatted, I created a new mount point for it on /cryptvol. I ran the following commands to mount the volume where I wanted it.
sudo mkdir /cryptvol
sudo cryptsetup luksOpen /dev/sdb cryptvol
sudo mount /dev/mapper/cryptvol /cryptvol
When I ran the cryptsetup command, I was prompted for my password. After entering it, the command finished and I was able to mount my new volume. The next order of business was to create some kind of documentation on the box to remind me of those two commands. I opted to create a file called /readme.cryptedfs with the commands to mount this filesystem. I did not want it mounted automatically; I want a user to be forced to log into this box and enter the password manually to mount this volume after a (re)boot. I also created a symlink to this file in the /cryptvol directory, so that when the filesystem is not mounted, anyone who goes there looking for something will see one file.
And it's that simple, I now have a completely encrypted volume on a server. In order to mount this volume, a password must be provided.
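For completeness, the reverse procedure (not something I covered above) would presumably look like this, unmounting the filesystem and closing the LUKS mapping, say before handing the box over or shutting down:

```shell
# Reverse of the mount steps (my addition): unmount the filesystem,
# then close the dm-crypt mapping so the passphrase is required again.
sudo umount /cryptvol
sudo cryptsetup luksClose cryptvol
```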
Wednesday, July 8, 2009
It took me a bit of time to get the full swing of Jasper, but I think I am finally wrapping my brain around it. The biggest bear for me to learn was the time series chart. I tried googling this for a little over an hour and could not find a single example of a time series chart, so I figured I'd blog it for reference by others who may be trying to do similar things with Jasper.
For starters, here is my massive SQL query. It's not as bad as it looks. I am grabbing the perfdata (a string), the timestamp, the server name, and the name of the service check. I use some WHERE conditions to ensure I get only the stuff I want for this report, and I drop any blank values so my report doesn't blow up.
SELECT
    nagios_servicechecks.`end_time` AS nagios_servicechecks_end_time,
    nagios_servicechecks.`perfdata` AS nagios_servicechecks_perfdata,
    nagios_services.`display_name` AS nagios_services_display_name,
    nagios_servicechecks.`end_time_usec` AS nagios_servicechecks_end_time_usec,
    nagios_hosts.`display_name` AS nagios_hosts_display_name
FROM nagios_servicechecks, nagios_services, nagios_hosts
WHERE nagios_servicechecks.`service_object_id` =
    AND nagios_services.`display_name` = "Disk_Utilization"
    AND nagios_hosts.`alias` = "njugl001"
    AND nagios_services.`host_object_id` = nagios_hosts.`host_object_id`
    AND nagios_servicechecks.`perfdata` <> ""
OR nagios_servicechecks.`service_object_id` =
    AND nagios_services.`display_name` = "Disk_Utilization"
    AND nagios_hosts.`alias` = "njugl002"
    AND nagios_services.`host_object_id` = nagios_hosts.`host_object_id`
    AND nagios_servicechecks.`perfdata` <> ""
I used phpMyAdmin to run the query and ensure it's only returning the values I want, which it does, so I moved on. One snag that held me up for a while was which band to put the chart in. I started by putting the chart in the Detail band. After troubleshooting for quite some time, I came to the realization that for each row returned by the main report's query, it would print a copy of the chart, which gave me a huge report, which I didn't want. To remedy this, I moved the chart into the Title band.
Next, I added the time series chart onto my band, sized it to fit perfectly on the page, and created a subdataset for the query I will be using with this report. I then went to the chart data and set the Sub Dataset (under the Dataset tab) to the new subdataset that I created with my query. The report type was set to Report (the default) and all other values on this tab were left alone.
One thing I learned was that the Time Period value under the Details tab of the chart data is VERY important. It specifies the minimum increment at which plot points are differentiated. For example, if I have results timestamped every 15 minutes and I set the Time Period to "Days", it will plot only one point for all 96 values in each day. This threw me off for a very long time, as I was only getting one point on my graph for a subset of data that I knew had hundreds of values. As soon as I changed the Time Period from 'Weeks' down to 'Hours', I was able to see my values. I would also suggest taking a look at the "print repeated values" checkbox in the chart properties.
Next I had to do some casting and parsing. My data plots were strings, so I needed to use string tokenizers to parse out the numerical value for plotting, then convert it to a number so Jasper knew what to do with it. My timestamps were a bit tricky as well. The value was of type java.sql.Timestamp, but the expected value was java.util.Date. I was able to use the getTime() method on the Timestamp to do a simple conversion. I had assumed the Time Period problem described above was an error related to Timestamp casting, because my values were coming out as 00:00.000 every time, but I later learned this was the "round to the nearest week" behavior from the Time Period I had set earlier.
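To make those two conversions concrete, here is a rough standalone sketch in plain Java (in the actual report these live inside Jasper expression fields; the perfdata format and method names here are my own assumptions, not taken from my .jrxml):

```java
import java.sql.Timestamp;
import java.util.Date;
import java.util.StringTokenizer;

public class PerfdataParse {

    // Pull the numeric value out of a perfdata-style string such as
    // "/=4512MB;7000;7500;0;8192" (hypothetical example format):
    // tokenize on '=' and ';', take the token after the label, and
    // strip any non-numeric characters like the "MB" unit suffix.
    static Double parseValue(String perfdata) {
        StringTokenizer tok = new StringTokenizer(perfdata, "=;");
        tok.nextToken();                  // skip the label, e.g. "/"
        String raw = tok.nextToken();     // e.g. "4512MB"
        return Double.valueOf(raw.replaceAll("[^0-9.]", ""));
    }

    // java.sql.Timestamp -> java.util.Date, the same getTime()
    // conversion used for the Time Period Expression.
    static Date toDate(Timestamp ts) {
        return new Date(ts.getTime());
    }

    public static void main(String[] args) {
        System.out.println(parseValue("/=4512MB;7000;7500;0;8192"));
        System.out.println(toDate(new Timestamp(0L)));
    }
}
```

In the report itself, parseValue would correspond to the Value Expression and toDate to the Time Period Expression described below.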
Now that I have all of my data in the correct format, its time to get it all put into the chart data. So, I added a new Time Series under the details tab of Chart Data and the three fields I filled out are:
- Series Expression (String): this is what each different plot on the graph is labelled. I used my field that contains the server name to label each plot
- Time Period Expression (Date): this is the expression for the date value of each plot point
- Value Expression (Number): this is what number is going to be plotted on the chart.
I think I now have my head wrapped around this Jasper Time Series Chart stuff. Please feel free to shoot me an email and I can send you a copy of my .jrxml file for reference.