Here at GCA we run classes for multiple vendors, and I was tasked with creating the class build for the Symantec NetBackup 6.5 for Windows class. The hardware we are using, however, is all virtual: we are using a NetApp VTL and VMware ESX Server to create the student systems. The hardware we are virtualizing on lives at our Tampa office, while the class is going to be run from our Seattle office.
The build starts out pretty simple. I spun up some instances of Windows Server 2003, patched them up, and made them ready to be cloned into student systems. On the NetApp VTL, I created a bunch of robots and tape devices emulating a StorageTek T9840C. I added 16 tapes, 8 on one serial format and 8 on another (as requested by Symantec). I then dropped a couple of HBAs into the ESX Server and connected them to the VTL. We had plenty of HBAs lying around, so the multipathing and redundancy are nice to have, seeing as how all of the systems are going to share the HBAs.
There were a few things that threw me. The first was how to match up the robots and tape drives on each machine. When I connected the HBAs and added a new SCSI device to a VM, the pulldown list only showed the devices by name, so I saw 8 robots and 16 drives with no information about which was which. They were not in any particular order, either.
To figure this out, I SSH'd over to the service console of the ESX Server and did an ls -al on the devices directory containing all of the targets. The listing shows a symlink mapping each LUN target to a ridiculously long unique identifier. That same long identifier also shows up after you add the SCSI device in VMware, if you hover over it in the edit settings window.
The next thing I tripped over a bit was getting two virtual machines to share access to the same SCSI device. The way this is done is by changing the bus-sharing setting on the SCSI controller: in edit virtual machine settings, if you click on the SCSI controller, you can change it so that its SCSI devices can be shared. The only problem is that the change applies to the whole SCSI controller, so to keep it from affecting the other SCSI devices, such as the Windows C: drive, I moved the VTL SCSI devices over to a different SCSI ID (1:0 instead of 0:3). This makes VMware automatically add a second SCSI controller to the box; I made the sharing change on that one and everything was happy.
The next thing I bumped into was our firewall. For whatever reason, our Checkpoint VPN tunnels are not set up correctly, and I could not figure out how to allow the Seattle DMZ network to talk to the Tampa DMZ network over tcp/3389 (RDP). After spending some time on it, I used a workaround: I had 4 consecutive external IPs available, so I simply NAT'd these boxes out to the edge, allowed our Seattle office's external IP to RDP in on those external IPs, and had it work that way instead of untangling the VPN tunnel madness.
The last headache to figure out was the driver for the tape devices. The funny part is that this is a chicken-and-egg problem: in order to get the driver for the tape drives installed, I need NetBackup installed (it uses a Symantec halfinch.sys driver), but during the configuration portion of the install, it wants the tape devices already installed so it can configure them. To remedy this, Symantec added a pause mechanism to their setup scripts so the user can pause the install after the drivers are present on the system, install the tape devices from Device Manager, then continue the install.
On the day of the class everything ran smoothly. The only thing I forgot to do was populate the hosts files on each system with the IP information for the other machines in the 'classroom'. This wasn't a big deal; the instructor was able to figure it out and work through it.
Class ran, students seemed to be happy from what I understand, and now I have a template for this build as well as the procedure (this is why I blog this stuff!) to build it back up again.
Thursday, July 23, 2009
Print to Fax
Our Toshiba eStudio series printer has the capability to print directly to fax. While the technology is cool, I think it is lacking a few features that I would like to see.
The user experience is pretty solid. When you are looking at something on your screen that you want to fax, you just act like you are printing it: select the fax printer from your printers list, enter the phone number, select a generic cover letter, fill in the rest of the cover letter properties, and fire away. Another neat little feature is the address book it has for fax information. All of this is integrated into our AD domain, so I can share out the driver from a Windows print server and not have to worry about running around to everyone's computer with a CD.
While it's all fine and dandy, here are the shortcomings I ran into. The AD integration works great; however, the little address book tool must be installed on each individual PC. The address book also lacks the ability to integrate with my Outlook/Exchange account to pull my local contacts. It will integrate through MAPI with the Global Address List in AD, but that doesn't help, since we do not store customer or vendor information in AD; that rules out the LDAP connector as well. I would like a centralized version of the 'private' address book and/or full integration with Exchange/Outlook local address books.
Another part I found a bit annoying was the cover sheets. I wish we could take an existing Word document, insert special fields into it, and have the printer replace those fields with the appropriate information, giving us the ability to use a custom cover sheet.
If those features were available, faxing would be that much smoother in my environment. People could either populate a shared address book in the domain and/or use their local Outlook/Exchange contacts for faxing, and we would also be able to use our custom GCA cover letter.
I can't really complain though; it's an upgrade for sure. I just wish printing companies could keep up with network technologies a little better.
Version Control for SharePoint / Project Server
We have a project server and I was asked to investigate its version control capabilities. After some Googling on the subject, I found an article that walked me through enabling document version control on a SharePoint 2007 document library. It was actually very simple: at the root of the document library, click on Settings; the document library settings page contains a link for version control settings. I clicked it, turned on major and minor revisions, and selected the option to force checkout when editing a document.
The neat part is that when a user opens a document for editing, it automatically checks the document out (to prevent others from editing it). The other cool part is that it saves a new minor revision every time the user saves, and gives the user the ability to insert comments. It shows which user edited it, what time it was edited, and a slew of other information. Major revisions are controlled by users simply selecting "Create new major revision" from a pulldown menu on the SharePoint site when looking at the document.
I think it's very solid document management, and very intuitive to use. I have to admit, Microsoft actually did something right. If you know me, I'm typically on the Microsoft hate bandwagon, but the Project Server product with SharePoint version control is pretty sweet.
Monday, July 13, 2009
CentOS NetInstall
The CentOS DVD finished downloading over the weekend. I went ahead and burned it to a DVD, popped it into the Dell PowerEdge 2600 server, and booted it. After it skipped the DVD and booted to the OS already on it, I checked the BIOS to confirm the boot order put CD before HDD. Then I took a closer look at the drive: it's CD-only.. DOH!
So, instead of starting the download process all over, which would take another full day, I've decided to go a different route and install CentOS using an NFS mount point from a Linux server that has the ISO file mounted on a loopback interface.
The idea behind this is to download a single 8 MB ISO for the boot CD; the boot CD can then pull the rest of the installation files from another box over NFS. First order of business: get an NFS server installed onto my Ubuntu laptop, which I will use to host out the full distribution. On Ubuntu, this is a pretty simple task.
sudo apt-get install nfs-user-server
When I ran this command, I received an error saying "No installable candidate" along with a bunch of information about my running kernel. This is just a guess, but I'm going to say I need to update my Ubuntu laptop, which doesn't get much action; I haven't updated it in months. So, I went ahead and started the update process, which is going to take a while.
sudo apt-get update
sudo apt-get dist-upgrade
When this finishes, I'll run the nfs-server install command again; then it should just be a simple matter of setting up the NFS export in the /etc/exports file. While waiting, I did a little research at the following site to get the syntax for setting up the export.
http://www.openfiler.com/products/system-requirements
I've done NFS a few times, so my eyes immediately found the kind of export I'm looking for. It's this line, added to /etc/exports; then I run the command to make the NFS server re-read the config file and make my share available.
/usr/local 192.168.0.0/255.255.255.0(rw,sync,no_subtree_check)
sudo exportfs -ra
Now that I have my export, it's time to boot from that netinstall CD and point it at my box to start the installation. Because I'm still waiting for my updates to finish, I did some Googling to find somebody who has done this before on CentOS. Here is what I found:
http://www.chrisgountanis.com/technical/34-technical/45-centos-netinstall.html
Seems pretty straightforward. The CD boots up; I selected the default options: DHCP, no IPv6 support (don't need it). Finally, it wants the NFS server name (IP) and the directory (the NFS mount point with the media). I entered both and, sure enough, I was presented with the CentOS installation wizard. The rest is straightforward; I went through the install as if I were using the DVD. Pretty cool stuff.
Friday, July 10, 2009
New Jumpstart Server Build
Yesterday I received a call that one of our remote offices was having serious network issues in the classrooms. We did some basic troubleshooting and determined the cause of the problem was the jumpstart server. The jumpstart server is used to 'jump' the classroom systems to a fresh build for the next class. It also acts as a DHCP server and a NAT router for all of the classrooms to route to the internet.
For a temporary fix, we grabbed an unused system with Solaris 10 on it from a classroom and put it in place of the jumpstart server. We re-IP'd it to the same IP as the old box (this avoids reconfiguring the individual systems) and followed the steps in the following article to get it routing traffic.
http://gibbs.acu.edu/2007/02/24/using-solaris-10-as-a-firewallrouter/
Today, I am tasked with building a replacement for that box. The new paradigm we are using here at GCA for the jumpstart servers is to build a Linux host, then use VMware to virtualize the jumpstart server. All seven of our offices have a standard build for the classrooms: the classroom network is flat and routes through a classserver (jumpstart server), which resides in the DMZ and forwards traffic out to the internet. Pretty simple setup.
The new server I am building will use CentOS for the host OS, VMware Server 2.0.1, and the Solaris 10 jumpstart server. This implementation gives us a lot of flexibility because every jumpstart server VM is exactly the same; the only modification we need to make on the host OS is to change the IP of the DMZ-facing interface to that office's DMZ subnet.
The only difference in the new implementation is that we are not going to use the jumpstart server for routing; we'll use CentOS for that. Instructors are allowed to log in to the jumpstart servers to do bundles and move systems around, but that isn't necessary on the CentOS side, because that should never change. I will use sudo and a couple of scripts to give the instructors a way to shutdown/reboot the CentOS box as well as start/shutdown/reboot the jumpstart server VM. These scripts are pretty cool and good to know about; a rough sketch follows.
Stay tuned for more info on iptables, VMware scripts, etc. as I write and implement them. I probably will not get this done today, as I am still waiting for CentOS to download.
Thursday, July 9, 2009
IP Subnets
A buddy of mine sent me an IM today asking for some help with a project. Specifically, he is working with an ISP and needed to write some code to determine all of the IPs in a CIDR subnet based on the network and broadcast addresses, which are stored in a SQL database as longs (using the PHP function ip2long for conversion).
After doing a little bit of research: the long value is simply the four 8-bit binary octets concatenated and cast into one giant long value. This makes life simple, because consecutive IPs remain consecutive, and in the same order, in long form. So you can find out if an IP is in the network with simple conditional logic:
if ( $networkaddr <= $ipaddr && $ipaddr <= $broadcastaddr )
Because the database stores all of these values as longs, the SQL query can be built with that same simple conditional logic and nothing more. Once you figure it out, everything is pretty simple: the network address is the lowest value in the subnet and the broadcast is the highest. Combine that with the ordering the converted IPs keep as longs and it's a pretty simple way to deal with things; just not intuitive.
Encrypted Filesystems
I am doing work on a project where the client requires that all data, including documentation, config data, and anything else related to the project, be stored on an encrypted filesystem. We are going to be using Microsoft Project Server 2007 with Project Web Access and WebDAV for document libraries, so I need to figure out how to encrypt the data stored in SQL Server 2005 and everywhere else.
The easiest way to architect this solution was to simply install the whole standalone project server as a virtual machine and have the virtual machine files reside on an encrypted filesystem.
So, step 1: install a box with an encrypted volume. I am going to use my current OS of choice, Ubuntu Server 8.04.1. I found a great reference article via Google to use as a guide, see https://help.ubuntu.com/community/EncryptedFilesystemOnIntrepid. My implementation will vary slightly, but this is the document I am using as my reference.
So the first thing I did was install Ubuntu Server 8.04.1 from the CD. My partitioning is somewhat complicated, but nothing crazy. The box I am using has 6 drive bays, all filled. I used the onboard RAID controller to make two arrays: a mirror of two 36 GB drives, and a RAID 5 array of the other four 320 GB drives. Ubuntu sees two drives, /dev/sda and /dev/sdb. I split /dev/sda into two filesystems, a 4 GB swap partition and the rest as the / partition. /dev/sdb I left unformatted; I will be using it for my encrypted filesystem.
When going through the install, I selected all of the defaults, adding only openssh-server so I don't have to use the console in the server room. Once the install finished, I SSH'd into the box from my desk and ran the following commands to update my server to the latest patches and reboot.
sudo apt-get update
sudo apt-get dist-upgrade
sudo init 6
After the reboot, I needed to add a couple additional packages, so I ran the following command to add them.
sudo apt-get install cryptsetup hashalot initramfs-tools
After those packages were added, I skipped down the document to the "Create the encrypted partition" section and started with those steps.
sudo modprobe dm_crypt
sudo modprobe sha256
sudo luksformat -t ext3 /dev/sdb
Note: I did get a warning, but it has not seemed to cause any problems.
WARNING: Error inserting padlock_sha (/lib/modules/2.6.24-24-server/kernel/drivers/crypto/padlock-sha.ko): No such device
After the volume was formatted, I created a new mount point for it at /cryptvol and ran the following commands to mount the volume where I wanted it.
sudo mkdir /cryptvol
sudo cryptsetup luksOpen /dev/sdb cryptvol
sudo mount /dev/mapper/cryptvol /cryptvol
When I ran the cryptsetup command, I was prompted for my password. After entering it, the command finished and I was able to mount my new volume. The next order of business was to create some kind of documentation on the box recording those two commands. I opted to create a file called /readme.cryptedfs with the commands to mount this filesystem. I did not want the volume mounted automatically; I want a user to be forced to log into this box and enter the password manually to mount it after a (re)boot. I also created a symlink to this file inside the /cryptvol directory, which is visible only when the filesystem is not mounted there, so if someone goes there looking for something, they see the one file.
And it's that simple: I now have a completely encrypted volume on a server, and a password must be provided to mount it.
Labels:
crypt volume,
encrypted filesystem,
secure data,
ubuntu
Wednesday, July 8, 2009
Jasper Time Series Charts
I've been working on a monitoring solution that uses a suite of open source tools to remotely monitor IT infrastructure resources. While that part has been moving along quite well, the next piece is the hardest part for me: giving managers a dumbed-down report of what is going on. For this task, I have been learning how to use the Jasper tools to generate reports based on the perfdata being stored in a MySQL database.
It took me a bit of time to get into the full swing of Jasper, but I think I am finally wrapping my brain around it. The biggest bear to learn was the time series chart. I Googled for a little over an hour and could not find a single example of a time series chart, so I figured I'd blog it for reference by others who may be trying to do similar things with Jasper.
For starters, here is my massive SQL query. It's not as bad as it looks: I am grabbing the perfdata (a string), the timestamp, the server name, and the name of the service check. The WHERE clauses ensure I get only the rows I want for this report and drop any blank values so my report doesn't blow up.
SELECT
  nagios_servicechecks.`end_time` AS nagios_servicechecks_end_time,
  nagios_servicechecks.`perfdata` AS nagios_servicechecks_perfdata,
  nagios_services.`display_name` AS nagios_services_display_name,
  nagios_servicechecks.`end_time_usec` AS nagios_servicechecks_end_time_usec,
  nagios_hosts.`display_name` AS nagios_hosts_display_name
FROM
  `nagios_servicechecks` nagios_servicechecks,
  `nagios_services` nagios_services,
  `nagios_hosts` nagios_hosts
WHERE
  nagios_servicechecks.`service_object_id` = nagios_services.`service_object_id`
  AND nagios_services.`display_name` = "Disk_Utilization"
  AND nagios_hosts.`alias` = "njugl001"
  AND nagios_services.`host_object_id` = nagios_hosts.`host_object_id`
  AND nagios_servicechecks.`perfdata` <> ""
OR nagios_servicechecks.`service_object_id` = nagios_services.`service_object_id`
  AND nagios_services.`display_name` = "Disk_Utilization"
  AND nagios_hosts.`alias` = "njugl002"
  AND nagios_services.`host_object_id` = nagios_hosts.`host_object_id`
  AND nagios_servicechecks.`perfdata` <> ""
I used phpMyAdmin to run the query and verify it returns only the values I want, which it does, so I moved on. One snag that held me up for a while was which band to put the chart in. I started with the chart in the detail band; after troubleshooting for quite some time, I came to the realization that it printed a copy of the chart for each row returned by the main report's query, which gave me a huge report I didn't want. To remedy this, I moved the chart into the Title band.
Next, I added the time series chart to my band, sized it to fit the page, and created a subdataset for the query I will be using with this report. I then went to the chart data and set the Sub Dataset (under the dataset tab) to the new subdataset I created with my query. The report type was left at report (default) and all other values on this tab were left alone.
One thing I learned is that the Time Period value under the details tab for chart data is VERY important. It specifies the smallest increment at which plot points are differentiated. For example, if I have results timestamped every 15 minutes and I set the Time Period to "Days", it will only plot one point for all 96 values per day. This threw me off for a very long time, as I was only getting one point on my graph for a subset of data that I knew had hundreds of values. As soon as I changed the Time Period from 'weeks' down to 'hours', I was able to see my values. I would also suggest taking a look at the "print repeated values" checkbox in the chart properties.
Next I had to do some casting and parsing. My data plots were strings, so I needed a string tokenizer to parse out the numerical value for plotting, then convert it to a number so Jasper knew what to do with it. My timestamps were tricky as well: the value was of type java.sql.Timestamp, but the expected value was java.util.Date. I was able to use the getTime() method on the timestamp to do a simple conversion. I had initially assumed the Time Period problem was a timestamp-casting error, because my values were coming out at 00:00.000 every time, but I later learned this was the "round to the nearest week" behavior from the Time Period setting mentioned earlier.
Now that I have all of my data in the correct format, it's time to put it into the chart data. So I added a new Time Series under the details tab of Chart Data; the three fields I filled out are listed below (example expressions follow the list):
- Series Expression (String): this is what each different plot on the graph is labelled. I used my field that contains the server name to label each plot
- Time Period Expression (Date): this is the expression for the date value of each plot point
- Value Expression (Number): this is what number is going to be plotted on the chart.
I think I now have my head wrapped around this Jasper Time Series Chart stuff.. please feel free to shoot me an email and I can send you a copy of my .jrxml file for reference.
Labels:
ireport,
jasper,
time series chart,
time series expression