## Thursday, December 2, 2010

### Novell IDM Integration with Novell Sentinel - Part 4

This is probably the most useful part of customizing the IDM collector for Sentinel, but it was also the most difficult part to figure out.  Most customers will want to store event data that is not captured by the out-of-the-box events.  They will also most likely want to store the information in standardized places, so we may need to put data into different fields (such as the CustomerVar fields).  I will attempt to explain how this process worked for me.

The first tidbit of knowledge you will need is how the Title fields are parsed out.  Start by opening the dirxml.js file.  This file is what actually parses the Novell Audit fields and stuffs them into the Sentinel record fields.  It can be a little confusing at first, though.  Here is a sample from dirxml.js:
String.prototype["Y-Entitlement"] = function(e){
rec.data = this;
}
The first part to break apart is the "Y-Entitlement" portion.  Let's reference back to our LSC file and how those messages were built.  We have two things: the list of field titles and the letter-to-data references:

#EventID,Description,Originator Title,Target Title,Subtarget Title,Text1 Title,Text2 Title,Text3 Title,Value1 Title,Value1 Type,Value2 Title,Value2 Type,Value3 Title,Value3 Type,Group Title,Group Type,Data Title,Data Type,Display Schema
# Value (V):
# R - Source IP Address
# C - Platform Agent Date
# A - Audit Service Date
# B - Originator
# H - Originator Type
# U - Target
# V - Target Type
# Y - SubTarget
# 1 - Numerical value 1
# 2 - Numerical value 2
# 3 - Numerical value 3
# S - Text 1
# T - Text 2
# F - Text 3
# O - Component
# G - Group ID
# I - Event ID
# L - Log Level
# M - MIME Hint
# X - Data Size
# D - Data
If you look at a sample data piece:

#1200
000304B0,Account Create By Entitlement Grant,Driver DN,Target Account DN or Association,Entitlement,Src Identity DN or GUID,Detail,IDM EventID,Status,N,,,Version,N,,,XML Document,S,Status $ST:$SB object:$SU level:$SY objet-type:$SS event-id:$SF from $iR

Our code from dirxml.js reads "Y-Entitlement", which means: if the Y (SubTarget) field, which is the 5th field of the message, is equal to "Entitlement", use this parsing method.  If you look elsewhere in the file, you will find keywords for all of the different fields and a parsing method for each one.  The next piece to determine is:

function(e){
rec.data = this;
}

This is relatively simple: it takes the parsed field value (this) and stores it in rec.data.  But what is rec.data?  We next need to look at the Rec2Evt.map file in our collector.  It has a bunch of entries, such as:

DataValue,data

This entry indicates that rec.data is stored in the Sentinel database field known as "DataValue".  There are a limited number of fields in the Rec2Evt.map file, so if the field you are looking for is not there, the Evt2EvtData.map file seems to have a full listing of all Sentinel database fields.  You can call the collector portion anything you want; it must simply match the code in dirxml.js.

Now, to tie everything together, we will need to update a couple of files to get a new custom field.  So, let's jump back to our example:

#1200
000304B0,Account Create By Entitlement Grant,Driver DN,Target Account DN or Association,Entitlement,Src Identity DN or GUID,Detail,IDM EventID,Status,N,,,Version,N,,,XML Document,S,Status $ST:$SB object:$SU level:$SY objet-type:$SS event-id:$SF from $iR
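To make the two stages concrete, here is a simplified, illustrative sketch (not the actual collector code) of how a prototype-keyed handler like "Y-Entitlement" gets invoked and how a Rec2Evt.map-style line then copies the record value onto the output event.  The `rec` object, `parseField`, and `applyMapLine` helpers are my own hypothetical stand-ins for collector internals:

```javascript
// Illustrative sketch of the parsing dispatch described above.
// 'rec', 'parseField', and 'applyMapLine' are hypothetical stand-ins.
var rec = {};

String.prototype["Y-Entitlement"] = function (e) {
  rec.data = String(this); // 'this' is bound to the field's string value
};

// The collector looks up a handler named "<letter>-<Title>" and, if one
// exists, invokes it with the field value bound as 'this'.
function parseField(titleKey, value) {
  var handler = String.prototype[titleKey];
  if (handler) {
    handler.call(value, value);
  }
}

// A Rec2Evt.map line is "EventField,recordField": copy rec.<recordField>
// into evt.<EventField>.
function applyMapLine(mapLine, rec, evt) {
  var parts = mapLine.split(",");
  if (rec[parts[1]] !== undefined) {
    evt[parts[0]] = rec[parts[1]];
  }
}

var evt = {};
parseField("Y-Entitlement", "cn=SomeUser,ou=users,o=org");
applyMapLine("DataValue,data", rec, evt);
console.log(evt.DataValue); // → "cn=SomeUser,ou=users,o=org"
```

The real collector wires these stages together itself; the sketch just shows why the string in the LSC Title field has to match the key in dirxml.js, and why the right-hand side of the map line has to match the record attribute name.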
If we wanted to change the SubTarget field to be stored in say, CustomerVar95 (reference http://www.novell.com/documentation/sentinel61/s61_reference/?page=/documentation/sentinel61/s61_reference/data/bgqshxm.html to ensure you are not using a reserved field), we would simply extend a few files.  First, we change our sample in the LSC file to read:
#1200
000304B0,Account Create By Entitlement Grant,Driver DN,Target Account DN or Association,cv95,Src Identity DN or GUID,Detail,IDM EventID,Status,N,,,Version,N,,,XML Document,S,Status $ST:$SB object:$SU level:$SY objet-type:$SS event-id:$SF from $iR

We extend the dirxml.js file with a new parse method so it understands how to store the value:

String.prototype["Y-cv95"] = function(e){
rec.cv95 = this;
}

We need to update the Rec2Evt.map file so it knows what rec.cv95 actually refers to:

CustomerVar95,cv95

We need to update custom.js so that the newly updated files are loaded as well:

Collector.prototype.customInit = function() {
// load additional maps, parameters, etc
var file = new File(instance.CONFIG.collDir + "rk4idm.lsc");
var file = new File(instance.CONFIG.collDir + "taxonomy.map");
var file = new File(instance.CONFIG.collDir + "dirxml.js");
var file = new File(instance.CONFIG.collDir + "Rec2Evt.map");
return true;
}

We add these files into the collector, ensure the collector is set for custom execution mode, then restart the collector manager for the changes to take effect (please see Part 2 for this procedure).

This process can be used for completely custom events as well.  You can use the generate-event token, assign event IDs 1000-1999, and pass the data.  When you build tokens and pass them, the values arrive in the appropriate audit fields, and this is a good starting point for peeling the data out and stuffing it into the proper Sentinel fields in the database.

As stated before, I highly recommend keeping your code adjustments in specifically commented sections of the files.  Keep in mind that if the customer upgrades the collector, your code will need to be migrated to the newer versions of the files.

## Tuesday, November 30, 2010

### Novell IDM Integration with Novell Sentinel - Part 3

Part 1 covered connecting Novell IDM to Novell Sentinel.  Part 2 covered adding our own custom events to Sentinel and getting the data to appear in Sentinel.
In this portion, I will cover adding taxonomy information to the custom events.  Taxonomy is a way of allowing a generic query to gather all events of a specific type (e.g. User Password Change or User Creation) regardless of the source application.  It is very useful data to include in our Sentinel events.

In the out-of-the-box collector, there is a file called taxonomy.map.  This file contains the taxonomy information.  These entries can be copied directly; the eventID is in hex format in this file, similar to the LSC file.  For reference information on the taxonomy values, see http://www.novell.com/developer/sentinel_taxonomy.html

In order to add the newly modified taxonomy.map file into the collector, modify the custom.js file (see Part 2 for more information) to load this file.  The new versions of the taxonomy.map and custom.js files both need to be uploaded to the collector, and the collector manager needs to be restarted again.  Once this is done, the new custom events should have taxonomy information in them.

In Part 4: adding your own custom titles to the LSC files so the data shows up wherever you like in the Sentinel event.

### Novell IDM Integration with Novell Sentinel - Part 2

The next piece of the puzzle is determining how to insert custom events into IDM.  If you just generate an event with an event ID between 1000 and 1999 and kick it over to Sentinel, it will return an error from Sentinel:

Event Name: Collector Internal Message
Message: Event ID not found in LSC file: 000303E9

Where 3E9 is the hex value of the eventID used in the IDM Generate-Event action.  The message gives a hint about where to go to remedy the problem, but full customization is where I had the most trouble finding good documentation.  I hope I can provide a sample to go by to accomplish the task of getting the events showing up, as well as getting the data where we need it.  The first thing to do is find the LSC files in the collector scripts.
If you take the zip file for the Novell IDM collector and unzip it, you'll find two different LSC files: dirxml.lsc and rk4idm.lsc.  I decided to use the rk4idm.lsc file to hold my customizations.  It is very important that you use this file as a starting point and do not modify the existing data within it; we are simply going to extend this file with additional events.

At the bottom of my file, I added a comment (#) to distinguish my custom added events, along with some descriptions.  Since all of the events are defined using hex instead of decimal values, I put a comment with the decimal value above each event line.  Then, start by using a known good event from the file:

#1200
000304B0,Account Create By Entitlement Grant,Driver DN,Target Account DN or Association,Entitlement,Src Identity DN or GUID,Detail,IDM EventID,Status,N,,,Version,N,,,XML Document,S,Status $ST:$SB object:$SU level:$SY objet-type:$SS event-id:$SF from $iR
This event is already in the file.  Please note that each field is separated by a comma, and the values correspond to the fields in the template that looks like the following:
#EventID,Description,Originator Title,Target Title,Subtarget Title,Text1 Title,Text2 Title,Text3 Title,Value1 Title,Value1 Type,Value2 Title,Value2 Type,Value3 Title,Value3 Type,Group Title,Group Type,Data Title,Data Type,Display Schema
These fields should line up one to one.  Please also note that the text in each field is very important; it is referenced by a javascript file (we will go over this later).  For now, copy the event and change the first two values (the eventID and Description fields) to the hex value of your IDM event and whatever description you want it to hold.  The Display Schema field (the last field) has references that start with $.  There are two tables at the top of the file that can be referenced to see what those construct.  After the $ symbol, the first letter designates the data type and the second letter designates the data field.

The data type letters are as follows:
# Format (F):
# T - Time (UTC localized)
# D - Date (UTC localized)
# N - Number (32bit unsigned)
# N - Number (32bit signed)
# S - String
# X - Hexadecimal Number
# R - RFC822 format date/time
# I - IPv4 Internet Address (network order)
# i - IPv4 Internet Address (host order)
# B - Boolean (Yes/No)
# b - Boolean (True/False)
The data field letters are as follows:
# Value (V):
# R - Source IP Address
# C - Platform Agent Date
# A - Audit Service Date
# B - Originator
# H - Originator Type
# U - Target
# V - Target Type
# Y - SubTarget
# 1 - Numerical value 1
# 2 - Numerical value 2
# 3 - Numerical value 3
# S - Text 1
# T - Text 2
# F - Text 3
# O - Component
# G - Group ID
# I - Event ID
# L - Log Level
# M - MIME Hint
# X - Data Size
# D - Data
The display schema field can have any combination of text and the tokens for the fields above.  Re-use values from other events to parse similar data types.  Please keep in mind that the text value for the field (for example, Driver DN in our sample message) is specific to that field, so a value for the Text 1 Title will probably not work for the Numerical Value 1 field.
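To see how a display schema expands, here is a small illustrative sketch of my own (the field values are made up) that substitutes each $-token using the field letters from the table above.  The real collector also honors the first (format) letter, which this sketch deliberately ignores:

```javascript
// Illustrative sketch: expand a display-schema string where each $XY token
// is a format letter (X) followed by a field letter (Y) from the table above.
// The field values here are made up for demonstration only.
function expandSchema(schema, fields) {
  return schema.replace(/\$(.)(.)/g, function (m, fmt, fieldLetter) {
    var v = fields[fieldLetter];
    return v === undefined ? m : String(v); // leave unknown tokens as-is
  });
}

var fields = { T: "success", B: "CN=DriverA", U: "jdoe", Y: "Entitlement",
               S: "User", F: "evt-42", R: "10.1.1.1" };
var msg = expandSchema(
  "Status $ST:$SB object:$SU level:$SY objet-type:$SS event-id:$SF from $iR",
  fields);
console.log(msg);
// → "Status success:CN=DriverA object:jdoe level:Entitlement objet-type:User event-id:evt-42 from 10.1.1.1"
```

This is why the sample event's schema renders as a readable sentence: each token pulls from the audit message's Text, Originator, and SubTarget slots.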

Once this is complete, the file needs to be added to the collector and used.  To do this, go to Event Source Management in the Sentinel Control Center, select the collector to be modified (IDM), and click the Add Auxiliary File button.  Select the modified version of the file (do not change its name!) and upload it.

Now that we have added the modified version of the file to our Sentinel environment, we need to tell Sentinel how to load that file.  To do this, we grab the custom.js file from the Sentinel SDK.  We need to add a line to instruct Sentinel to load the modified version of the LSC file.  The default file with the one-line modification (the added rk4idm.lsc line in customInit) looks like the following:
// Javascript Collector Template 6.1
// Developed by Novell Engineering

/**
* @fileoverview
* This file is used to create additional custom parsing methods for a Collector.
* These methods can be used to modify the initialization and parsing of the released
* Collector, to provide for local customization of operation.
* <p>To use:
* <ol>
* <li>Edit this file to define your custom initialization and parsing</li>
* <li>Run the 'ant build-custom' target to create the build version of this file</li>
* <li>Place the //content/build/${name}/custom/${name}.js file in:<br/>
*    ESEC_HOME/data/collector_mgr.cache/collector_common<br/>
*    ESEC_HOME/data/control_center.cache/collector_common<br/>
*   (you may need to create these directories on each host where the Collector will run/debug)
* <li>Change the "Execution Mode" parameter for the Collector to "custom"
* <li>Restart the Collector
* </ul>
*/

/**
* This method is used to provide locally-defined custom initialization of the Collector.
* <p>Useful variables:
* <dl>
* <dt>instance.CONFIG.collDir</dt><dd>The directory that will contain all the Collector plugin files</dd>
* <dt>this.CONFIG.commonDir</dt><dd>The directory where this custom code resides - you can add additional files as necessary</dd>
* </dl>
* @return {Boolean} Result
*/
Collector.prototype.customInit = function() {
var file = new File(instance.CONFIG.collDir + "rk4idm.lsc");
return true;
}

/**
* This method is used to provide locally-defined custom pre-parsing of the input record.
* You might use this method if something in your environment modifies the normal input format;
* for example, if the event is tunneled through some other protocol, you might strip off the outer protocol data here.
* @param {Object} e  The output event
*/
Record.prototype.customPreparse = function(e) {
return true;
}

/**
* This method is used to provide locally-defined custom parsing of the input record.
* NOTE: There are two types of modifications that are typically performed:
* <ul>
* <li>Modifications to the output of existing event fields: in this case, you may need to
* debug the Collector to determine which Record attribute is used to hold the data, and then
* perform your custom transformation on that data. For example:<br/>
* <pre>
* // Main code sets rec.evt to raw event name from device, but these overlap with
* // event names from other devices in our environment so are hard to distinguish.
* // We will add a prefix to help identify the events
* this.evt = "FW: " + this.evt;
* </pre></li>
* <li>Additional custom parsing that pulls more specific pieces of information out of the event,
* for example if a free-text field contains some info that you want to use to categorize events
* in your environment. In this scenario, you should:
* <ul>
* <li>Only use CustomerVars to hold the parsed-out data</li>
* <li>You will need to manipulate the Event object directly, as new additions to the Record object
* will be lost. The 'e' variable is used to access the Event object.</li>
* </ul>
* Use the 'CustomerVar1' through 'CustomerVar300' to hold your data. Refer to the documentation
* for the datatypes of those variables.
* <pre>
* // Want to extract the Department name that is injected into the "message" field
* e.CustomerVar21 = rec.message.substr(12,34);
* </pre>
* @param {Object} e  The output event
*/
Record.prototype.customParse = function(e) {
return true;
}
This file also needs to be added to the specific collector in the same method as the lsc file.
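To illustrate the customParse hook described in the template comments above, here is a hypothetical example of my own (the Record stand-in, the field name, and the message format are all made up for illustration) that pulls a department name out of a free-text field into a CustomerVar:

```javascript
// Hypothetical sketch of a customParse body: pull a department name out of
// a free-text field and store it in a CustomerVar on the output event.
function Record(message) { this.message = message; } // stand-in for the SDK Record

Record.prototype.customParse = function (e) {
  var marker = "dept=";
  var start = this.message.indexOf(marker);
  if (start !== -1) {
    start += marker.length;
    var end = this.message.indexOf(";", start);
    e.CustomerVar21 = this.message.substring(start, end === -1 ? this.message.length : end);
  }
  return true;
};

var e = {};
new Record("user created; dept=Radiology; site=Main").customParse(e);
console.log(e.CustomerVar21); // → "Radiology"
```

As the template comments note, parsed-out data like this should go into CustomerVars, since additions to the Record object itself are lost.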

Finally, we have our modified LSC file and the javascript file that instructs Sentinel to load it; we just need to tell Sentinel to execute the custom.js file that we uploaded.  To do this, right-click on the collector, select Edit from the menu, and change the Execution Mode to 'custom'.

Once this is complete, we need to restart our collector manager.  The way I normally do this is from the Sentinel Control Center: under the Admin tab, I select the Collector Manager, right-click on it, then select Restart.  You should now be able to send your new custom events over and see them in Sentinel.

The next logical progression in Part 3 will be adding taxonomy information for your newly added event.

### Novell IDM Integration with Novell Sentinel - Part 1

A few hundred Novell IDM drivers and a fully functional SIEM system (Sentinel).  Why not kick over events from IDM to Sentinel?  That part wasn't too bad, there is a collector specifically for Novell IDM that will kick a whole slew of events.  Here is the monkey wrench, what if you want to leverage IDM to send custom events to Sentinel?

Let me begin from the start.  In most instances, I find Novell's documentation to be much better than other vendors', but the Sentinel documentation doesn't seem to live up to the standards to which I have become accustomed.  I had quite a bit of confusion around the Platform Agent installation and configuration required to get events successfully sent over to Sentinel.  After much tedious research, I found that IDM automatically installs a version of the Platform Agent, so no additional Novell Audit installations are required.

As for configuration, it proved to be very straightforward: simply modify the C:\Windows\logevent.cfg file (please note, my customer environment is a pure Windows shop, so tweaks will have to be made for SUSE and other OSes).  The good part about this config file is that there are comments explaining all of the different settings.  Mine looked like the following:

LogHost=10.1.1.1
LogReconnectInterval=30

Where 10.1.1.1 is the IP address the Sentinel Collector Manager server is listening on for Novell IDM audit events.  This file should be set up on all Identity Vault servers to ensure all messages are sent to Sentinel.

The next thing to do is instruct IDM which events to send.  In Designer, go to the properties of the driverset from which you would like to forward logs to Sentinel.  On the Log Level page, select which types of events to log.

I would recommend selecting the "Log specific events" radio button, then selecting events from the list.  Please note that the "Other" under "Status Events" will be used later.  This will allow events created with the "generate-event" action in IDM to be shown within Sentinel.

Once all of the events are selected on the driverset and the logevent.cfg file is updated, you will need to bounce eDirectory entirely for the changes to take effect.  There may be a way to do it otherwise, but this was the easiest way to make it all take effect.  Also note that this must be done on all servers in the driverset.

Now, the events will be sent to that IP address, but you may not have a collector manager listening on that port.  Setting up the Novell IDM collector itself is pretty straightforward, but there is one thing to note, as it may become an issue.  In logevent.cfg we did not specify the port explicitly, so the audit events will be sent on the default port of 289.  Please note that if the Collector Manager is installed on a Linux/Unix machine, the process must run as root to listen on any port below 1024.  That being said, if your Collector Managers are on Unix/Linux, I would recommend using port 1289; this will need to be defined in logevent.cfg as well as configured on the Sentinel side.
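For reference, changing the port in logevent.cfg would look something like the following.  I believe the key is LogEnginePort, but that is from memory, so verify it against the comment block in your own logevent.cfg:

```
LogHost=10.1.1.1
LogEnginePort=1289
LogReconnectInterval=30
```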

## Wednesday, October 27, 2010

### ColdFusion 9 init script

Cloud migration is happening at GCA!  Very exciting stuff.  The cloud is great, we will no longer have to manage our own hardware, worry about the power situation, backup battery, snapshots, maintenance crap, or anything else associated with hosting our own websites.

The fun part of the whole thing is rebuilding the ColdFusion stuff up in the cloud.  Not a big deal, been there done that.  The only quirk about building a ColdFusion 9 webserver on Ubuntu 10.04 LTS is that it does not ship with a valid startup script, so I created my own.  It's nothing fancy, but it gets the job done.  Enjoy.

#! /bin/sh

case "$1" in
start)
echo "Starting ColdFusion9" >&2
/opt/coldfusion9/bin/coldfusion start
;;
restart)
echo "Restarting ColdFusion9" >&2
/opt/coldfusion9/bin/coldfusion restart
;;
stop)
echo "Stopping ColdFusion9" >&2
/opt/coldfusion9/bin/coldfusion stop
;;
*)
echo "Usage: $0 start|stop|restart" >&2
exit 3
;;
esac

:

## Tuesday, August 31, 2010

### ECMAScript to run a command on windows

I had the need to run a batch script on the command line using Novell IDM.  I was able to develop a quick function in ECMAScript to accomplish this task.  Simply call the following function from policy, pass it the full command string you want run, and it will return any output the command produces.  Very simple, yet infinitely useful.

importPackage(Packages.java.io);
importPackage(Packages.java.util);

function runCommand(commandString)
{
var runtime = java.lang.Runtime.getRuntime();
var process = runtime.exec(commandString);
// wrap the process's stdout in a reader so we can read it line by line
var br = new BufferedReader(new InputStreamReader(process.getInputStream()));
var line;
var fulltext = "";

while ((line = br.readLine()) != null)
{
fulltext = fulltext + line;
}

br.close();
return fulltext;
}

## Wednesday, June 2, 2010

### Performance Tuning eDirectory

I have a particular eDirectory server that is one beefy sucker.  This server is running Windows Server 2008 64bit with a whopping 32GB of memory.  It's running 64bit eDirectory and the DIBFiles directory is about 1.2GB in size.  I was looking at this box, which was only using about 500MB of memory.  While the box is not struggling whatsoever, I was thinking, with 32GB of memory, why not start using some of that to jack up the performance as much as possible.

The first thing I did was max out the Java heap size for IDM (see yesterday's blog post) to 2GB (the maximum for 32-bit Java).  If, for whatever reason, the box actually uses all of that heap, I'll still have at least 28GB of memory left over.  Why not throw the entire directory into cache so that it performs wicked fast?  I searched around on Google until I found an article by Novell on performance tuning eDirectory.  In this article I found information on how to change the cache settings.  There is a file (C:\Novell\NDS\DIBFiles\_ndsdb.ini on Windows) that holds the memory settings.  I made a quick change, then bounced eDirectory, and now my box will use up to 75% of available memory for eDirectory cache.

The file before changes looked like this:

preallocatecache=true
cache=200000000

After my changes:

preallocatecache=true
cache=HARD, %:75, MIN:200000000

(As I read the tuning documentation, HARD makes this a hard cap, %:75 lets the cache grow to 75% of available memory, and MIN:200000000 keeps the original ~200MB floor.)

It took a while for it to use the cache, but after about 24 hours, dhost.exe is using just over 3GB of memory instead of the 500MB it had accumulated over the period of a couple of months.

For reference, the eDirectory performance tuning article was located here:  http://www.novell.com/documentation/edir88/pdfdoc/edir88tuning/edir88tuning.pdf

## Tuesday, June 1, 2010

### Performance Tuning Novell IDM

Working with IDM in a large environment, I needed to tune some memory parameters because I was getting Out of Memory errors on my JDBC drivers.  The errors looked like the following:

DirXML Log Event -------------------
Driver:   \VAULT\ORG\ESC\DirXML\DRIVERSET\HMS S00177
Channel:  Publisher
Status:   Error
Message:  Code(-9010) An exception occurred: java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:2786)
at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:94)
at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:202)
at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:272)
at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:276)
at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:122)
at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:212)
at java.io.BufferedWriter.flush(BufferedWriter.java:236)
at java.io.PrintWriter.flush(PrintWriter.java:276)
at com.novell.nds.dirxml.driver.jdbc.util.JDBCLib.GetStackTrace(Unknown Source)
at com.novell.nds.dirxml.driver.jdbc.util.JDBCLib.UnhandledException(Unknown Source)
at com.novell.nds.dirxml.driver.jdbc.JDBCPublicationShim.start(Unknown Source)
at com.novell.nds.dirxml.driver.jdbc.JDBCPublicationProxy.start(Unknown Source)
at com.novell.nds.dirxml.engine.Publisher.run(Publisher.java:420)

I was able to tweak with these settings straight from Designer, but could also use iManager.  For iManager, you simply go to Identity Manager Overview, then select Driverset properties once you are looking at the driverset.  In the properties, it is under the Identity Management tab, Misc section (see screenshot below).

For designer, you simply go to the properties of the driverset object, select java and set your environment for the heap there (see below).

As you can tell, the values are the same (as they should be).  I set it to 640MB, which is fairly high, but I will be running many JDBC drivers in this driverset on this server, so I will use that memory.  The default value is 64MB, and it is recommended to increase this in increments of 64-128MB at a time.  In order for these settings to take effect, eDirectory must be restarted.  Please note, this setting is on a per server basis and must be tweaked as such.

There is the potential to set this value too high.  One symptom you will observe if this value is set too high is that the dhost.exe process will consume a very small amount of memory relative to what it was previously using, and your IDM drivers will not start at all.  This is a hint that you have set the value too high and need to lower it.

If you are using a tree that only has one server and you hose this up, it is possible that eDirectory will not even start.  This becomes problematic because you need eDirectory started in order to change this setting.  To resolve the problem, you can prevent the DirXML portion of the eDirectory stack from starting by renaming the following file and then starting eDirectory.  At that point you should be able to change the heap value and move on:

C:\Novell\NDS\dirxmllib.dll

## Wednesday, May 26, 2010

### eDirectory Installation Errors

More troubleshooting fun today.  Large customer, lots of users, lots of drivers, needed more horsepower.  How do we get more horsepower?  That's easy, throw another server in the mix.  Step one of adding a new server to IDM, install eDirectory and join it to the tree.  This is where I got hung up and for some reason it actually took a little while to click in my head.

I was starting the installation, then receiving timeout errors.  Come to find out, I had forgotten to turn off the stupid Windows Firewall.  It was blocking replication and everything else I needed to get the tree working.  My install should work now, but I need to go through and clean up that old install and get cracking on the new one; there are a few things that have to be done for this to work.

First, I had to do the basic Add/Remove programs thing.  Of course, being Windows, this rarely, if ever, actually removes the entire program.  The next step was to go remove the C:\Novell directory in its entirety.  I had to escalate to admin privileges for this.

At this point, you would think it works, but it does not.  The stupid NDS console thing still exists in the control panel and needs to be removed.  This is the tricky one.  To remove this (and complete the uninstall), you must remove the following file:

C:\Windows\System32\NDScpa.cpl

Now, you can actually start the eDirectory installation again, but you have another problem.  The eDirectory object name for the server already exists.  Due to the installer crashing out early, you must go and manually delete two objects from the directory.  The server object and the ndsPredicateStats object.  Both will be named pretty obviously specific to your server (unless you changed them, in which case, shame on you).

After this, you can continue the installer (this time without the windows firewall!) and everything is happy.

## Tuesday, May 25, 2010

### Novell IDM Syntax Violation Errors

I had recently finished a driver and everything was working great.  Then I started getting the following errors:

Code(-9010) An exception occurred: novell.jclient.JCException: createEntry -613 ERR_SYNTAX_VIOLATION

After yanking out some of the hair on my scalp, I took two traces, one of a user that was created successfully, and one of a user that kicked back this error.  I looked at the XML document after all the driver logic was finished just before it tried to create the account.

What I found was that there were very few differences, but the one that stood out was that in one of the traces the user had a blank value for the Title attribute, which looked similar to this:

<value type="string"/>

Why is this significant?  Because if you look at the eDirectory schema, the Title attribute is sized with a minimum length of 1 character, meaning a blank value is not valid.

The resolution was simple.  I found all attributes that had sizing restrictions on them, then simply did a check and stripped them out if they had a blank value.  Here is what the sample rule for the Title attribute looked like:

<rule>
<description>Strip Title if blank</description>
<comment xml:space="preserve">If title is a blank value, strip it so it doesn't cause a syntax violation.</comment>
<conditions>
<and>
<if-class-name mode="nocase" op="equal">User</if-class-name>
<if-op-attr mode="nocase" name="Title" op="equal"/>
</and>
</conditions>
<actions>
<do-strip-op-attr name="Title"/>
</actions>
</rule>

Once this was done all of the errors disappeared!

## Thursday, May 20, 2010

### Sharing Policies between Drivers in Novell IDM

Working with Novell IDM to integrate an HMS Payroll System hosted on an AS400 system in a DB2 database.  The customer has approximately 120 of these databases that need to be interfaced.  In order to accomplish this, I need to create 120 separate JDBC drivers.

I was able to architect and implement the drivers in such a way that all of the policies are identical.  The only differences between them are the connection parameters and some GCV's.  With all of the policies identical, I wanted to deploy them in such a way that updates to the drivers are easier than duplicating the modification 120 times.

By creating a new Library container within the driverset, I was able to reference the policies within the drivers from the shared Library.  This allows all of the drivers to subscribe to the same policy and XSL objects.  When an update needs to occur for the 120 drivers, the policy in question is updated in the library, then all of the drivers are restarted.

To expedite the process of restarting all of the drivers, I simply restart DirXML on the servers hosting the drivers.  This will force a restart of these drivers, so long as they are set to automatic startup.

Please note that the Filter cannot be shared, as it must exist on the driver object, so changes to the filter will still require changing all 120 drivers individually.

## Wednesday, May 19, 2010

### Novell IDM CSV Fanout Driver

In a traditional Novell IDM CSV driver, you have one event, which will create one output CSV file.  You can use the output transform to format the CSV files output so it does not necessarily have to be a true CSV file, but a text file of any format you wish.

I was recently presented with a unique challenge where I needed upwards of 120 or so CSV files for a single event.  The total number of files was dependent on a few factors, but they are irrelevant as that logic was programmed in policy.  The part that was pretty nifty was creating my own version of a CSV fanout driver.

Instead of using the traditional CSV driver, I used a Null Services driver.  I used typical policy to determine my outputs, but did so in a loop, iterating over each potential output.  Each time I hit a valid output, I constructed my text file format and held it in a local variable.  It looked something like this:

<do-set-local-variable name="lv.outputString" scope="policy">
<arg-string>
Construct string here
</arg-string>
</do-set-local-variable>

I then added an ECMAScript object to my driver and added it to the driver configuration.  The ECMAScript was very simple:

importPackage(Packages.java.io);

// Write contentString out to fileName; returns "" on success
// or the exception text on failure
function writeFile(fileName, contentString)
{
    try {
        var printFile = new Packages.java.io.PrintStream(fileName);
        printFile.print(contentString);
        printFile.close();
        return "";
    } catch (e) {
        return e.toString();
    }
}

All I needed to do at this point was use an XPATH expression to call my ECMAScript function.  I pass it the path to the file I want to create and the string I created.  The function will create the file with the contents I specified.

The only thing to note about this is that you will need to use unique filenames.  The easiest way I found to do this is to use a timestamp.  The following file path uses a timestamp down to the millisecond, so it should be unique as long as two files are not written within the same millisecond.

<do-set-local-variable name="lv.outputFile" scope="policy">
<arg-string>
<token-global-variable name="outputfilelocation"/>
<token-text xml:space="preserve">\</token-text>
<token-time format="yyyyMMddHHmmssSSS"/>
<token-text xml:space="preserve">.csv</token-text>
</arg-string>
</do-set-local-variable>
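As an aside, the same yyyyMMddHHmmssSSS stamp can be sketched in plain JavaScript outside the driver; `timestampName` and its directory argument are hypothetical names of my own, just to illustrate the format:

```javascript
// Hypothetical plain-JavaScript equivalent of the time token above:
// builds <dir>\yyyyMMddHHmmssSSS.csv
function timestampName(dir) {
    // left-pad a number with zeros to the given width
    function pad(n, w) {
        var s = String(n);
        while (s.length < w) s = "0" + s;
        return s;
    }
    var d = new Date();
    return dir + "\\" +
        d.getFullYear() + pad(d.getMonth() + 1, 2) + pad(d.getDate(), 2) +
        pad(d.getHours(), 2) + pad(d.getMinutes(), 2) + pad(d.getSeconds(), 2) +
        pad(d.getMilliseconds(), 3) + ".csv";
}
```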

Once we have the output file string, the contents string, and the ECMAScript object created and added to the driver, we just need to use our XPATH expression to call it and write out our files.

<do-set-local-variable name="result" scope="policy">
<arg-string>
<token-xpath expression="es:writeFile($lv.outputFile,$lv.outputString)"/>
</arg-string>
</do-set-local-variable>

The local variable "result" is holding the return value, which should be blank.  If it is not blank, then there was a problem and the exception that was caught should be held in this value.  This can be used for simple error checking.

### Novell IDM Fun at a Large Hospital

I've been working with a large hospital chain, solving some pretty complex challenges.  Recently, we have been working on implementing some McKesson applications within their organization, but we also wanted to implement them with the concept of 'Roles' kept in mind.

Let me first start with our application challenges.  The two most commonly used McKesson applications are HCI and HPP.  For HCI, we found that there is an engine underneath called Cloverleaf that uses a standard protocol called HL7 to send messages around.  We have three options to explore.  The first is a proprietary HL7 driver that will connect to an HL7 listener and pass it a carefully constructed HL7 message (the message I have to construct in the driver output transform).  The second is to use an HL7 emulator that is already in place to consume a text file in the format of that same HL7 message.  The third is to use an open API that I found to open a socket using the Java API.  I can call the Java API from ECMAScript and use an XPath expression within my driver to do it.  Once I open the socket, I simply pass the same HL7 message used by the other two options through it.  Currently, I'm debugging option three, and we have already done some preliminary testing on option two.  If we can't get option three working, we can fall back on option two.

For HPP, we did something pretty nifty as well.  The customer identified an API that could be used to interface with the HPP database.  They stood up a web service that I could send calls to in a very specific format to add/modify/disable users in that system.  I simply construct a URL with a bunch of arguments on the end and call the URL.  The web server will take the URL, process the transaction, and post back a return code instead of a bunch of HTML.  Very simple implementation.  The key part was to encrypt the arguments so I wouldn't be sending username and password combinations over the wire in clear text.  I did this by implementing an encryption routine using ECMAScript and XPath within my driver.  So now, my URL has a bunch of seemingly meaningless characters on the end, but the web server can decode this into a meaningful set of arguments.

The next challenge for implementing these two applications came up when the customer requested to not have over 200 drivers to do this implementation.  There are over 100 locations for this customer and each location has an HCI and HPP implementation.  We needed to architect the drivers in such a way that one HCI or HPP driver could service more than one location.  Just to add a little more fun and complexity, the customer also wants the “Roles” concept to be built in as well.

I worked with the customer and we kicked ideas back and forth until we came up with what we thought would be a great implementation.  We took the idea of a 'role' and decided to turn it into an eDirectory object.  We also decided that we may as well make each 'facility' its own object as well.  We already have objects for each 'user'.  Here is how they all tie together.  For HCI, there will be a 'role' object for each role in that system (where a role in HCI can be defined as the combination of attributes that constitutes access to that system).  For each facility, we will create a 'facility' object.  We already have a 'user' object for each user in their system.  So, now we have a 'role' object with a bunch of attributes that define what the “Nurse” role in the HCI system consists of.  We also have a facility object for Hospital 123, with some attributes telling us information about that location, such as the location of its HCI system, the port number, the private key file used for securely copying files over, and such.  The only information we need to store on the actual user is what role and what facility (Role – Nurse, Facility – 123).

Due to the fact that a user at this customer could potentially have access to multiple locations at the same time, we had to ensure that the attribute where we store the role and facility for HCI and HPP were multivalued.  So, we now have two attributes attached to user (well, attached to an aux class, which is attached to our users).  These two attributes are multivalued lists of facilities with roles.  We now have what access and where it should be provisioned in a list format attached to each user.

In the actual logic, we keep an eye on this attribute to know when we need to provision/deprovision/modify an account on a respective system.  I used a series of loops to go through each value and used ECMAScript functions with XPath calls to actually write out files or call URLs.  This allowed me to service multiple sites with a single driver.  I just used a multivalued GCV on the driver to list which facilities are supported by this driver (so I could split the load if required).  When the driver determined work needed to be done for a user, I would retrieve the access information from the role object using an XPath call, then retrieve the information on where to provision this access from the facility object, tag all that information with what I need from the user (username, password, etc.), then make my call.

We were able to successfully integrate complex systems, such as McKesson HCI and HPP, using the out-of-the-box Novell IDM components and some creative architecture.  This also gave us the ability to build roles into the framework and service multiple sites with a single driver for each system.  The drivers can be split up if required for load balancing.  Also, if a role needs to be modified in the future, we simply need to modify the 'role' object in question, without having to touch the potentially thousands of users subscribing to that 'role'.

## Tuesday, April 27, 2010

### Novell IDM XSL to Global Find/Replace

I heavily use Novell IDM if you haven't noticed from my other blog articles.  Recently, I needed to use Novell IDM to do a global find/replace.  Typically, this can be done in policy very quickly by just replacing all instances of a character with another character (or series of characters).

I hit an issue where the characters that I was doing the replacement with were special characters in the context of IDM, so they were not output correctly.  In order to get the replacement done correctly, I needed to do it in the Output Transformation policy so they didn't get fubar'd while passing the text around.

So, what I did was make an XSL template to do a global find/replace, then call it wherever needed.  I was pretty proud of my code; it's pretty elegant the way it was implemented.

<xsl:template name="globalReplace">
<xsl:param name="outputString"/>
<xsl:param name="target"/>
<xsl:param name="replacement"/>
<xsl:choose>
<xsl:when test="contains($outputString,$target)">
<xsl:value-of select="concat(substring-before($outputString,$target),$replacement)"/>
<xsl:call-template name="globalReplace">
<xsl:with-param name="outputString" select="substring-after($outputString,$target)"/>
<xsl:with-param name="target" select="$target"/>
<xsl:with-param name="replacement" select="$replacement"/>
</xsl:call-template>
</xsl:when>
<xsl:otherwise>
<xsl:value-of select="$outputString"/>
</xsl:otherwise>
</xsl:choose>
</xsl:template>

So, we pass this function our information.  If the target isn't present, it just throws back the string unmodified.  If the target is present, it takes the substring up to that string, tacks on the replacement, then recursively calls itself until all replacements are done.
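The same recursion can be sketched in plain JavaScript to see the logic outside of XSL (a hypothetical equivalent for illustration, not part of the driver):

```javascript
// Recursive find/replace mirroring the XSL template above.
// Assumes target is a non-empty string.
function globalReplace(outputString, target, replacement) {
    var idx = outputString.indexOf(target);
    if (idx === -1) {
        return outputString; // target absent: hand the string back unmodified
    }
    // substring before the target, plus the replacement,
    // plus the recursively processed remainder
    return outputString.substring(0, idx) + replacement +
        globalReplace(outputString.substring(idx + target.length), target, replacement);
}
```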

Nice little piece of code for XSL to hold on to.

### Cisco VPN Client for 64-bit Windows 7

So, I've stumbled upon a VPN client that I can use for a 64-bit Windows OS.  Someone posted this and I thought I'd pass it along because it works!  It's called Shrew VPN, from a company called Shrew Soft Inc.  I downloaded it for free and am using it on 64-bit Windows 7 Ultimate and it seems to be working like a champ!

http://www.shrew.net/software

Enjoy!

### 3DES Encrypt data with Novell IDM

If you check the blog prior to this one, I developed a method to call a web service URL from within any driver for Novell IDM.  This is all good, but what happens if we need to pass sensitive data in this URL?  We can't just do a GET operation with a URL that has a password in clear text; that's just asking for trouble.

What we do to protect this data is to throw some 3DES encryption on it before we throw the args on the end.  With the 3DES encryption, we also need a function to URLEncode the contents so the URL is written in a language the browser can send across.  Once again, I used some ECMAScript and code to get this done.

Please note, you will need to generate your own encryption key, then also use this key and similar code on the remote side to decrypt the contents to read it.

Here is the code.  The first portion is two functions.  The first function encrypts the data string passed to it, then passes it to the second function which will URLEncode it.

importPackage(Packages.javax.crypto);
importPackage(Packages.javax.crypto.spec);
importPackage(Packages.java.security.spec);
importPackage(Packages.java.io);
importPackage(Packages.sun.misc);
importClass(java.net.URLEncoder);

function DESEncrypt(theString) {
try {
var secretKey   = new SecretKeySpec(new Packages.sun.misc.BASE64Decoder().decodeBuffer(new java.lang.String("thisiswhereyouputyourkey")), "DESede");
var ecipher = Cipher.getInstance("DESede");
ecipher.init(Cipher.ENCRYPT_MODE, secretKey);
var utf8 = new java.lang.String(theString).getBytes("UTF8");
var enc = ecipher.doFinal(utf8);
return EncodeURLString(new Packages.sun.misc.BASE64Encoder().encode(enc));
} catch (e) {
return e.toString();
}

}

function EncodeURLString(theContents) {
try {
return URLEncoder.encode(theContents, "UTF8");
} catch (e) {
return e.toString();
}
}

Once this ECMAScript is saved, pushed up, and added to your driver, you can call it like this:

<do-set-local-variable name="lv.EncryptedArgs" scope="policy">
<arg-string>
<token-xpath expression="es:DESEncrypt($myArgs)"/>
</arg-string>
</do-set-local-variable>

Now, you have your encrypted arguments stored in a local variable, all nice and URL encoded.  Just tack it on the end of a URL and call it with the code defined in the previous blog article, and you have now sent encrypted data across the wire.

### Web Services with Novell IDM

So I have a pretty unique request.  A customer asked me to integrate IDM with a system that doesn't have a standard interface that I could use to integrate.  It did, however, have an API that we could use to create our own interface.  The customer's programmer decided that the easiest way to implement this would be to set up a web service for me to call.

While I initially thought this was a candidate for the SOAP driver, I realized that this driver is overkill for the simplicity of this implementation.  This is a sample of how we would be creating a user in the system from IDM:

http://www.server.com/addUser?username=testuser&fname=test&lname=user&password=Passw0rd&otherattribute=other1&....

SOAP is complete overkill; I just need to construct a simple URL and call a Linux wget on that URL.  The resulting status message would be returned instead of a typical HTML page when the API code processed the request.

I did some research and decided the easiest way to implement this would be to use ECMAScript to call some custom code that would very easily call my URL and return the result for me.  I would shove the result into a local variable in policy, then jam that sucker into an attribute on the user object in eDirectory.  Seems like a very simple implementation; here is how I was able to do it.  Below is the ECMAScript for the function that I wrote.  It's a simple function, urlGet.  You pass it the URL and it returns whatever is returned when a GET operation is performed on that URL.
importClass(java.net.URL);
importClass(java.io.InputStreamReader);
importClass(java.lang.StringBuilder);

// Perform an HTTP GET on urlString and return the body (or the exception text)
function urlGet(urlString) {
    try {
        var url = new java.net.URL(String(urlString));
        var stream = url.openStream();
        var reader = new java.io.InputStreamReader(stream, "UTF-8");
        var sb = new java.lang.StringBuilder();
        var c;
        while ((c = reader.read()) != -1) {
            sb.append(String.fromCharCode(c));
        }
        return sb.toString();
    } catch (e) {
        return e.toString();
    }
}

Now, we post that ECMAScript and use it in our driver with the following.  It will use the URL that is stored in the local variable myURL and return the results back to lv.URLResult.

<do-set-local-variable name="lv.URLResult" scope="policy">
<arg-string>
<token-xpath expression="es:urlGet($myURL)"/>
</arg-string>
</do-set-local-variable>

Program some logic to store the result in some sort of status attribute, then business logic to construct the correct URL, and you can very easily IDM-enable a web service based system.

## Wednesday, March 24, 2010

### Locked ESX Virtual Machines

Here is the scenario.  Power outage at 2am.  The outage lasted longer than our battery life.  We have in our roadmap plans to implement scripts to do graceful shutdowns when a low battery signal comes from the UPS, but for the time being, we do not have that.

We initially had some fun bringing everything back up in order.  It gets particularly fun when your AD Domain Controllers are all virtual, DNS is all virtual, and DHCP is virtual.  Get some nice little chicken and egg issues, but we have learned our lesson and are going to create a DNS, DHCP, and DC that are physical, so they can come up before the virtual environment.

The real meat of our problem was some of the virtual machines.  It wasn't directly related to the power outage either.  Our core switches, which are on a separate UPS and did not lose power, decided to go Tango Uniform right after we got most of the boxes back up.  Unfortunately, our Netgear core switches are not sending logs to a syslog server and the log files do not persist across a reboot (I know, I think it's stupid too).  This means we don't have any way of knowing why they went stupid on us.

So, we have learned some lessons and moved on.  On to what I want this article to reflect.  When we lost the switches, the virtual machines lost their connection to their vmdk's.  We use NFS to connect to the datastore, so when we restored the switches, most of the virtual machines just flushed their writes and went on their merry way.  Some virtual machines, however, did not do this.  I inspected the vmware.log file stored with the virtual machine files to see what happened, and I noticed that all of the virtual machines that were locked up (most of which were Windows XP boxes) had the following log message:

Mar 23 13:34:38.731: vmx| VMX has left the building: 0.

So, we have determined that VMWare just gave up on trying to talk to its VMDK file after some amount of time and the VMX decided to ditch this party.   Ok, so here is the procedure I had to go through to get the darn things back.

First, we need to get the virtual machine into an 'off' state.  This is not easy, nor is it intuitive.  What I had to do was the following.

From the service console of the ESX Server running the VM, find the vmid of the virtual machine in question.  To do so, run the following command and grep for the virtual machine name:

cat /proc/vmware/vm/*/names | grep vdi-rivey

The output should start with vmid=####.  Take this number into the next command, where we are looking for the VM group.

less -S /proc/vmware/vm/####/cpu/status

You are looking for the vmgroup, which will look something like vm.####.  Next, using the VM group ID number, feed it into the following command to run an effective kill -9 on the VM within the VMKernel.  Note:  be sure to run this command as root (or use sudo).
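The kill command itself didn't make it into the post.  On ESX 3.x this was typically done with vmkload_app from the service console (treat this as an assumption reconstructed from memory of VMware's service-console tooling, with the vmgroup number from the previous step substituted for ####):

```
/usr/lib/vmware/bin/vmkload_app --kill 9 ####
```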

Once this command is complete, the virtual machine still shows as though it's in a Powered On state.  To get the VIC to figure this out, I had to restart some daemons on ESX to force VI to figure it all out.  Please note, I disabled VMware HA and DRS on my cluster because of some of the issues I was having; I am not sure what HA will do with the VMs on this ESX server if you run this command while HA is enabled.

service vmware-vpxa restart
service mgmt-vmware restart

The virtual machines running on this ESX Server and the ESX Server itself will gray out in your VIC while the services restart.  When everything is back to normal, the VM in question will now be grayed out, with (invalid) appended to the name.

Next, I removed the virtual machine from the inventory, then browsed to it in the datastore, right clicked on the vmx file, and added it back into my inventory.  It still is not ready to boot because it has a couple of .lck directories (lock files).  In the service console, I browsed to the virtual machine's directory and ran the following command to blow away all of the locks:

rm -rf .lck*

After this was done, I was able to boot the VM back up!  Unfortunately, good ole Windows had some issues on 3 of my 60+ virtual machines.  These virtual machines boot, but promptly lock up.  I am not sure why this happened to a small subset of VMs, but I am attributing this to corruption of the disk.  The OS lost access to the disk and we killed the virtual machine without flushing the writes, so that could have been the problem.  Luckily, we take snapshots of the volume that holds the VMs every night at midnight.  I simply copied the entire VM directory from this backup, blew away the lock files again, added it to inventory, and bam, instant restore from backup.

Everything is back up and running.  We have a few infrastructure changes to make to help us recover from a down state much quicker, we have a new reason to push for the scripts to bring everything down gracefully, and I have a procedure for unlocking a virtual machine.  We have also (again) verified that our backups are working like a champ!

## Friday, March 12, 2010

### Novell IDM XPATH

This one is a pretty fun example.  I have a user coming from a payroll system.  The user has a PayrollCode identifier on them.  Unfortunately, this Payroll identifier code is not completely unique, so I have to query another object in eDirectory to get the uniqueCode.   To do this I will be using XPATH.  I have posted the actions XML of the rule below.  I'll break it apart and explain.

<actions>
<do-set-local-variable name="lv.PRCode" scope="policy">
<arg-string>
<token-attr name="PRCode"/>
</arg-string>
</do-set-local-variable>
<do-set-local-variable name="facnode" scope="policy">
<arg-node-set>
<token-xpath expression='query:search($destQueryProcessor,"subordinate","","dn\of\subtree\I\want\to\search","ObjectClassName","PRCode",$lv.PRCode,"ReturnAttr1,uniqueCode,ReturnAttr3")'/>
</arg-node-set>
</do-set-local-variable>
<do-for-each>
<arg-node-set>
<token-local-variable name="facnode"/>
</arg-node-set>
<arg-actions>
<do-trace-message level="1">
<arg-string>
<token-xpath expression="$current-node[1]/attr[@attr-name='ReturnAttr1']/value"/>
</arg-string>
</do-trace-message>
<do-set-local-variable name="lv.ReturnAttr1" scope="policy">
<arg-string>
<token-xpath expression="$current-node[1]/attr[@attr-name='ReturnAttr1']/value"/>
</arg-string>
</do-set-local-variable>
<do-if>
<arg-conditions>
<and>
<if-global-variable mode="nocase" name="GCVAttr1" op="equal">$lv.ReturnAttr1$</if-global-variable>
</and>
</arg-conditions>
<arg-actions>
<do-set-dest-attr-value class-name="User" name="uniqueCode">
<arg-value>
<token-xpath expression="$current-node[1]/attr[@attr-name='uniqueCode']/value"/>
</arg-value>
</do-set-dest-attr-value>
<do-set-dest-attr-value class-name="User" name="Attr3">
<arg-value>
<token-xpath expression="$current-node[1]/attr[@attr-name='ReturnAttr3']/value"/>
</arg-value>
</do-set-dest-attr-value>
</arg-actions>
<arg-actions/>
</do-if>
</arg-actions>
</do-for-each>
<do-if>
<arg-conditions>
<or>
<if-op-attr name="uniqueCode" op="not-available"/>
<if-op-attr name="ReturnAttr3" op="not-available"/>
<if-op-attr mode="nocase" name="uniqueCode" op="equal"/>
<if-op-attr mode="nocase" name="ReturnAttr3" op="equal"/>
</or>
</arg-conditions>
<arg-actions>
<do-trace-message level="1">
<arg-string>
<token-text xml:space="preserve">No matching facility object found, uniqueCode and Attr3 not set.  Veto'ing transaction.</token-text>
</arg-string>
</do-trace-message>
<do-veto/>
</arg-actions>
<arg-actions/>
</do-if>
</actions>

Ok, now for the breakdown.  The first section actually executes the meat of our sample: the XPATH portion.  First, I set a local variable so I don't have to query back to my JDBC driver if the attribute is not readily available.  I can just grab it and store it once in our policy.  Then, I run the XPATH query and set the node-set to another local variable.

<do-set-local-variable name="lv.PRCode" scope="policy">
<arg-string>
<token-attr name="PRCode"/>
</arg-string>
</do-set-local-variable>
<do-set-local-variable name="facnode" scope="policy">
<arg-node-set>
<token-xpath expression='query:search($destQueryProcessor,"subordinate","","dn\of\subtree\I\want\to\search","ObjectClassName","PRCode",$lv.PRCode,"ReturnAttr1,uniqueCode,ReturnAttr3")'/>
</arg-node-set>
</do-set-local-variable>

The query uses the destQueryProcessor.  We put in the DN of the subtree we want to search (so the query doesn't take forever).  We are looking specifically at objects of class "ObjectClassName".  We are matching the PRCode attribute with the value in the lv.PRCode local variable.  Finally, for each resulting object we find, we want to grab ReturnAttr1, uniqueCode, and ReturnAttr3 attributes from it.

The next thing we are going to do is loop through all of our resulting nodes.

<do-for-each>
<arg-node-set>
<token-local-variable name="facnode"/>
</arg-node-set>
<arg-actions>
<do-trace-message level="1">
<arg-string>
<token-xpath expression="$current-node[1]/attr[@attr-name='ReturnAttr1']/value"/>
</arg-string>
</do-trace-message>
<do-set-local-variable name="lv.ReturnAttr1" scope="policy">
<arg-string>
<token-xpath expression="$current-node[1]/attr[@attr-name='ReturnAttr1']/value"/>
</arg-string>
</do-set-local-variable>

We grabbed the local variable facnode that is holding our resulting set.  The for-each will loop through each result.  I put some debug code in there to echo the result of each loop out to the trace file; it's not necessary, but nice to help step through the code in the trace.  In the result, we grab the ReturnAttr1 value and set it to a local variable lv.ReturnAttr1.  The next thing we are going to do is verify that lv.ReturnAttr1 meets our other criteria of matching a GCV.

<do-if>
<arg-conditions>
<and>
<if-global-variable mode="nocase" name="GCVAttr1" op="equal">$lv.ReturnAttr1$</if-global-variable>
</and>
</arg-conditions>

Pretty straightforward.  If there is a match, we execute the following section of code.

<arg-actions>
<do-set-dest-attr-value class-name="User" name="uniqueCode">
<arg-value>
<token-xpath expression="$current-node[1]/attr[@attr-name='uniqueCode']/value"/>
</arg-value>
</do-set-dest-attr-value>
<do-set-dest-attr-value class-name="User" name="Attr3">
<arg-value>
<token-xpath expression="$current-node[1]/attr[@attr-name='ReturnAttr3']/value"/>
</arg-value>
</do-set-dest-attr-value>
</arg-actions>
<arg-actions/>
</do-if>
</arg-actions>
</do-for-each>

If there is a match, I grab the values of the other two attributes (uniqueCode and ReturnAttr3) and stuff them in attributes on the Current User object I am processing.  If not, it will continue looping through the objects.  Once the loop is finished, I want to verify that I found a result and kick back a trace message and veto if I did not find a match.

<do-if>
<arg-conditions>
<or>
<if-op-attr name="uniqueCode" op="not-available"/>
<if-op-attr name="ReturnAttr3" op="not-available"/>
<if-op-attr mode="nocase" name="uniqueCode" op="equal"/>
<if-op-attr mode="nocase" name="ReturnAttr3" op="equal"/>
</or>
</arg-conditions>
<arg-actions>
<do-trace-message level="1">
<arg-string>
<token-text xml:space="preserve">No matching facility object found, uniqueCode and Attr3 not set.  Veto'ing transaction.</token-text>
</arg-string>
</do-trace-message>
<do-veto/>
</arg-actions>
<arg-actions/>
</do-if>
</actions>

That's all there is to it!  The XPATH was easily used to run off and grab stuff out of eDirectory that was not previously available to me.  I can pick it up and use other DirXML logic to process through what I have very easily.

## Thursday, March 11, 2010

### Novell IDM XSL - Change Attribute to Proper Case

Today I was working on a Novell IDM project and I needed to use some XSL to call an external Java class to format some text.  So, when I created my policy, I created an XSLT policy instead of a standard DirXML policy.  My input values looked something like the following:

<modify-attr attr-name="Given Name">
<remove-all-values/>
</modify-attr>
<modify-attr attr-name="Given Name">
<value>
ROBERT
</value>
</modify-attr>
<modify-attr attr-name="Surname">
<remove-all-values/>
</modify-attr>
<modify-attr attr-name="Surname">
<value>
IVEY
</value>
</modify-attr>
<modify-attr attr-name="Full Name">
<remove-all-values/>
</modify-attr>
<modify-attr attr-name="Full Name">
<value>
ROBERT IVEY
</value>
</modify-attr>

So, here is what the meat of my XSL policy looked like to transform it.  First, we find a match on the tags we want and store the attribute name and its value in some local variables:

<xsl:variable name="attrName" select="./@attr-name"/>
<xsl:variable name="newVal" select="./value/text()"/>

The next thing we do is ensure that there is a value.  This is done because on modify events there are two tags, the <remove-all-values/> one and the one with the new value to add.  We don't want to send a blank value and output two different tags.

<xsl:choose>
<xsl:when test="$newVal">

Now that we have matched the tag and ensured there is a value, let's go ahead and write the new version of the XML element and call the template to replace the value.  We can use an otherwise statement to copy through everything that isn't being replaced (i.e. the <remove-all-values/> tags that we matched but didn't rewrite) and close up all of our XSL tags.

<add-attr attr-name="{$attrName}">
<value>
<xsl:call-template name="convertCase">
<xsl:with-param name="UCData" select="$newVal"/>
</xsl:call-template>
</value>
</add-attr>
</xsl:when>
<xsl:otherwise>
<xsl:copy-of select="."/>
</xsl:otherwise>
</xsl:choose>
</xsl:template>

Let's have a look at the XSL template "convertCase" that's being called.  It's very simple and calls some Java methods that are included using a jar file that we added to our IDM server.

<xsl:template name="convertCase">
<xsl:param name="UCData"/>
<xsl:variable name="LCData" select="util:lowerString($UCData)"/>
<xsl:variable name="newData" select="util:capitalizeWords($LCData)"/>
<xsl:value-of select="$newData"/>
</xsl:template>

The parameter UCData is passed to the method util:lowerString and stored in LCData.  Then, LCData is passed over to util:capitalizeWords and the new value is stored in newData.  Notice how newData is the selected value that we use in the replacement up in the xsl template for the modify.

This template is added to copy through everything we didn't match:

<xsl:template match="@*|node()">
<xsl:copy>
<xsl:apply-templates select="@*|node()"/>
</xsl:copy>
</xsl:template>

And our output should be exactly the same, except our snippet was modified to look like this:

<modify-attr attr-name="Given Name">
<remove-all-values/>
</modify-attr>
<modify-attr attr-name="Given Name">
<value>
Robert
</value>
</modify-attr>
<modify-attr attr-name="Surname">
<remove-all-values/>
</modify-attr>
<modify-attr attr-name="Surname">
<value>
Ivey
</value>
</modify-attr>
<modify-attr attr-name="Full Name">
<remove-all-values/>
</modify-attr>
<modify-attr attr-name="Full Name">
<value>
Robert Ivey
</value>
</modify-attr>

Please note, the Java classes are custom code delivered by a consultant.  I do not have the source code, nor can I distribute this code without their permission.  A little time with a string tokenizer should help recreate this functionality, but I am by no means a Java programmer.  Here is what all of our code looks like when we slap it together:

<xsl:variable name="attrName" select="./@attr-name"/>
<xsl:variable name="newVal" select="./value/text()"/>
<xsl:choose>
<xsl:when test="$newVal">
<add-attr attr-name="{$attrName}">
<value>
<xsl:call-template name="convertCase">
<xsl:with-param name="UCData" select="$newVal"/>
</xsl:call-template>
</value>
</add-attr>
</xsl:when>
<xsl:otherwise>
<xsl:copy-of select="."/>
</xsl:otherwise>
</xsl:choose>
</xsl:template>
<xsl:template name="convertCase">
<xsl:param name="UCData"/>
<xsl:variable name="LCData" select="util:lowerString($UCData)"/>
<xsl:variable name="newData" select="util:capitalizeWords($LCData)"/>
<xsl:value-of select="$newData"/>
</xsl:template>
<xsl:template match="@*|node()">
<xsl:copy>
<xsl:apply-templates select="@*|node()"/>
</xsl:copy>
</xsl:template>
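Since the consultant's jar isn't distributable, here is a guess at recreating the lowerString/capitalizeWords pair in plain JavaScript (hypothetical; the original is Java and may differ in edge cases):

```javascript
// Lowercase the whole string, then capitalize the first letter of each word,
// approximating util:lowerString followed by util:capitalizeWords
function convertCase(s) {
    return String(s).toLowerCase().replace(/\b[a-z]/g, function (c) {
        return c.toUpperCase();
    });
}
```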

## Wednesday, February 24, 2010

### Novell Identity Manager integration with BlackBerry Enterprise Server

I am working with a customer who owns Novell Identity Manager and wanted their existing BlackBerry Enterprise Server infrastructure to be managed by Novell IDM. Currently, they have many users who are no longer working for the company that still have active BlackBerry accounts on the BES server, which is a huge compliance issue. By using Novell IDM, we can automatically provision and deprovision the BlackBerry accounts.

Upon my initial research, I had determined that the SOAP driver would be our best method for integrating the BES infrastructure, but after meetings with the customer, I discovered that they were on version 4.x of BES, and the BlackBerry API that I had intended to use requires at least version 5.x of BES.

I did some additional research and found that BlackBerry has a CLI tool that can be leveraged in the BlackBerry Resource Kit. Our proposed solution used the Resource Kit and Identity Manager, with the scripting driver passing events directly to the CLI.

Unfortunately, the customer did not own the scripting driver, so it was decided that we would use the CSV driver to create a CSV file with all required attributes and an event identifier to pass events to a custom-created Windows service. The Windows service monitors an input directory, and when the CSV file is placed into the directory, it consumes it, formats the CLI string using the attributes in the CSV file, then passes it on to the CLI. The return code text is then formatted back to an output CSV file, which is consumed by the Novell IDM CSV driver, and the return value is stored into an auxiliary attribute on the user object.

The driver is used to create, enable, and disable BlackBerry accounts. A workflow was implemented so that BlackBerrys could be formally requested. The workflow requires a few levels of approval, then is passed to the team that manages the BES system. The BES team members then have the ability to select the BES server, IT Policy, and Activation Password, and to ensure the value for the mailbox is correct in the directory prior to the create event occurring.

The BlackBerry accounts are controlled through entitlements. The entitlement requires the workflow to be completed, the required attributes to be present, and the user cannot be disabled. An additional workflow can be created to revoke a BlackBerry. If the user is terminated and has their account disabled, the BlackBerry will automatically be revoked as well.