
Archive for the ‘OpenAM’ Category

Hacking OpenAM – An Open Response to Radovan Semancik

March 23, 2015

 

I have been working with Sun, Oracle, and ForgeRock products for some time now and am always looking for new and interesting topics that pertain to these and other open source identity products.  When Google alerted me to the following blog posting, I just couldn't resist:

Hacking OpenAM, Level: Nightmare

Radovan Semancik | February 25, 2015

There were two things in the alert that caught my attention.  The first was the title and the obvious implications that it contained; the second was the author of the blog and his association with Evolveum, a ForgeRock OpenIDM competitor.

The identity community is relatively small and I have read many of Radovan’s postings in the past.  We share a few of the same mailing lists and I have seen his questions/comments come up in those forums from time to time.  I have never met Radovan in person, but I believe we are probably more alike than different.  We share a common lineage; both being successful Sun identity integrators.  We both agree that open source identity is preferable to closed source solutions.  And it seems that we both share many of the same concerns over Internet privacy.  So when I saw this posting, I had to find out what Radovan had discovered that I must have missed over the past 15 years in working with these products.  After reading his blog posting, however, I do not share his same concerns nor do I come to the same conclusions. In addition, there are several inaccuracies in the blog that could easily be misinterpreted and are being used to spread fear, uncertainty, and doubt around OpenAM.

What follows are my responses to each of Radovan’s concerns regarding OpenAM. These are based on my experiences of working with the product for over 15 years and as Radovan aptly said, “your mileage may vary.”

In the blog Radovan comments “OpenAM is formally Java 6. Which is a problem in itself. Java 6 does not have any public updates for almost two years.”

ForgeRock is not stuck with Java 6.  In fact, OpenAM 12 supports Java 7 and Java 8.  I have personally worked for governmental agencies that simply cannot upgrade their Java version for one reason or another.  ForgeRock must make their products both forward looking as well as backward compatible in order to support their vast customer base.

In the blog Radovan comments “OpenAM also does not have any documents describing the system architecture from a developers point of view.”


I agree with Radovan that early versions of the documentation were limited.  As with any startup, documentation is one of the things that suffers during the initial phases, but over the past couple of years, this has flipped.  Due to the efforts of the ForgeRock documentation team I now find most of my questions answered in the ForgeRock documentation.  In addition, ForgeRock is a commercial open source company, so they do not make all high value documents publicly available.  This is part of the ForgeRock value proposition for subscription customers.

In the blog Radovan comments “OpenAM is huge. It consists of approx. 2 million lines of source code. It is also quite complicated. There is some component structure. But it does not make much sense on the first sight.”


I believe that Radovan is confusing the open source trunk with the commercial open source product.  Simply put, ForgeRock does not include all code from the trunk in the OpenAM commercial offering.  As an example, the extensions directory, which is not part of the product, has almost 1000 Java files in it.

More importantly, you need to be careful in attempting to judge functionality, quality, and security based solely on the number of lines of code in any product.  When I worked at AT&T, I was part of a development team responsible for well over 2M lines of code.  My personal area of responsibility covered approximately 250K lines of code that I knew inside and out.  A sales rep could ask me a question regarding a particular feature or issue and I could envision the file, the module, and even the place in the code to which the question pertained (other developers can relate to this).  Oh, and this code was rock solid.

In the blog Radovan comments that the “bulk of the OpenAM code is still efficiently Java 1.4 or even older.”


Is this really a concern?  During the initial stages of my career as a software developer, my mentor beat into my head the following mantra:

If it ain’t broke, don’t fix it!

I didn't always agree with my mentor, but I was reminded of this lesson each time I introduced bugs into code that I was simply trying to make better.  Almost 25 years later this motto has stuck with me, but over time I have modified it to be:

If it ain’t broke, don’t fix it, unless there is a damn good reason to do so!

It has been my experience that ForgeRock follows a mantra similar to my modified version.  When they decide to refactor the code, they do so based on customer or market demand, not just because there are newer ways to do it.  If the old way works, performance is not limited, and security is not endangered, then why change it?  Based on my experience with closed-source vendors, this is exactly what they do as well; their source code, however, is hidden so you don't know how old it really is.

A final thought on refactoring.  ForgeRock has refactored the Entitlements Engine and the Secure Token Service (both pretty mammoth projects) all while fixing bugs, responding to RFEs, and implementing new market-driven features such as:

  • Adaptive Authentication
  • New XUI Interface
  • REST STS
  • WS-TRUST STS
  • OpenID Connect
  • OAuth 2.0
  • Core Token Service

In my opinion, ForgeRock product development is focused on the right areas.

In the blog Radovan comments “OpenAM is in fact (at least) two somehow separate products. There is “AM” part and “FM” part.”


From what I understand, ForgeRock intentionally keeps the federation code independent.  This was done so that administrators could easily create and export a "Fedlet," which is essentially a small web application that provides a customer with the code they need to implement SAML in a non-SAML application.  In short, keeping federation separate allows the code to be shared with the OpenAM core services while also providing session-independent federation capability.  It has also made it possible to leverage the functionality in other products such as OpenIG.

In the blog Radovan comments “OpenAM debugging is a pain. It is almost uncontrollable, it floods log files with useless data and the little pieces of useful information are lost in it.“


There are several places that you can look in order to debug OpenAM issues and where you look depends mostly on how you have implemented the product.

  • Debug Files – OpenAM debug files contain messages output by the developer in order to debug code. This includes a timestamp, the ID of the thread that called the Debug API, the message recorded by the code (error, info, warning), and optionally, a Java stack trace.  As a default, the verbosity level is low, but you can increase the verbosity to see additional messages.
  • OpenAM Log Files – OpenAM log files contain operational information for the OpenAM components. They are not designed for debugging purposes, but they may shed additional light in the debugging process. As a default, the verbosity level is low, but you can increase the verbosity to see additional messages.
  • Java Container Log Files – The Java container hosting the OpenAM application will also contain log files that may assist in the debugging process.  These log files contain general connection request/response for all connectivity to/from OpenAM.
  • Policy Agent Log Files – Policy Agents also generate log messages when used in an OpenAM implementation.  These log files may be stored on the server hosting the Policy Agent or on OpenAM, itself (or both).

I will agree with Radovan’s comments that this can be intimidating at first, but as with most enterprise products, knowing where to look and how to interpret the results is as much of an art as it is a science.  For someone new to OpenAM, debugging can be complex.  For skilled OpenAM customers, integrators, and ForgeRock staff, the debug logs yield a goldmine of valuable information that often assists in the rapid diagnosis of a problem.
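
For those getting started, here is a quick sketch of how I typically keep an eye on these files while reproducing an issue.  The paths and file names below are assumptions based on a default installation where the configuration folder is $HOME/openam and the deployment URI is /openam (file names vary somewhat by OpenAM version), so adjust them for your environment:

# OpenAM debug files normally live under <config-dir>/<uri>/debug
cd ~/openam/openam/debug

# Watch the most commonly useful debug files while reproducing the problem
tail -F Authentication Session

# The Java container keeps its own logs in a separate location
# (Tomcat shown here purely as an illustrative example)
tail -F /path/to/tomcat/logs/catalina.out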

Note:  Debugging the source code is the realm of experienced developers and ForgeRock does not expect their customers to diagnose product issues.

For those who stick strictly to the open source version, the learning curve can be steep and they have to rely on the open source community for answers (but hey, what do you want for free?).  ForgeRock customers, however, will most likely have taken some training on the product to know where to look and what to look for.  In the event that they need to work with ForgeRock's 24x7 global support desk, they will most likely be asked to capture these files (as well as configuration information) in order to submit a ticket to ForgeRock.

In the blog Radovan comments that the “OpenAM is still using obsolete technologies such as JAX-RPC. JAX-RPC is a really bad API.” He then goes on to recommend Apache CXF and states “it takes only a handful of lines of code to do. But not in OpenAM.”

Ironically, ForgeRock began migrating away from JAX-RPC towards REST-based web services as early as version 11.0.  Now with OpenAM 12, ForgeRock has a modern (fully documented) REST STS along with a WS-TRUST Apache CXF based implementation – exactly what Radovan recommends.

ForgeRock’s commitment to REST is so strong, in fact, that they have invested heavily in the ForgeRock Common REST (CREST) Framework and API – which is used across all of their products.  They are the only vendor that I am aware of that provides REST interfaces across all products in their IAM stack.  This doesn’t mean, however, that ForgeRock can simply eliminate JAX-RPC functionality from the product.  They must continue to support JAX-RPC to maintain backwards compatibility for existing customers that are utilizing this functionality.

In the blog Radovan comments “OpenAM originated between 1998 and 2002. And the better part of the code is stuck in that time as well.”


In general, Radovan focuses on very specific things he does not like in OpenAM, but ignores all the innovations and enhancements that have been implemented since Sun Microsystems.  As mentioned earlier, ForgeRock has continuously refactored, rewritten, and added several major new features to OpenAM.

In the blog Radovan comments "ForgeRock also has a mandatory code review process for every code modification. I have experienced that process first-hand when we were cooperating on OpenICF. This process heavily impacts efficiency and that was one of the reasons why we have separated from OpenICF project."

I understand how in today's Agile-focused world there is a tendency to shy away from old school concepts such as design reviews and code reviews.  I understand the concerns about how they "take forever" and "cost a lot of money", but consider the actual cost of a bug getting out the door and into a customer's environment.  The cost is borne by both the vendor and the customer, but it is the vendor who ultimately incurs a loss of trust, reputation, and customers.  Call me old school, but I will opt for code reviews every time – especially when my customer's security is on the line.

Note:  there is an interesting debate on the effectiveness of code reviews on Slashdot.

Conclusion

So, while I respect Radovan's opinions, I don't share them and apparently neither do many of the rather large companies and DOD entities that have implemented OpenAM in their own environments.  The DOD is pretty thorough when it comes to product reviews, and I have worked with several Fortune 500 companies that have had their hands all up in the code – and still choose to use it.  I have worked with companies that range from those that elect to have a minimal IAM implementation team (and rely on ForgeRock for total support) to those that have a team of developers building in and around their IAM solution.  I have seen some pretty impressive integrations between OpenAM log files, debug files, and the actual source code using tools such as Splunk.  And while you don't need to go to the extent that I have seen some companies go in getting into the code, knowing that you could if you wanted to is a nice thing to have in your back pocket.  That is the benefit of open source code and one of the benefits of working with ForgeRock in general.

I can remember working on an implementation for one rather large IAM vendor where we spent more than three months waiting for a patch.  Every status meeting with the customer became more and more uncomfortable as we waited for the vendor to respond.  With ForgeRock software, I have the opportunity to look into the code and put in my own temporary patch if necessary.  I can even submit the patch to ForgeRock and if they agree with the change (once it has gone through the code review), my patch can then be shared with others and become supported by ForgeRock.

It is the best of both worlds, it is commercial open source!

 

 

 

Categories: ForgeRock, OpenAM

OpenDJ Attribute Uniqueness (and the Effects on OpenAM)

September 29, 2014

In real life we tend to value those traits that make us unique, but in an identity management deployment uniqueness is essential to the authentication process and should not be taken for granted.


Case in point, attributes in OpenDJ may share values that you may or may not want (or need) to be unique. For instance, the following two (different) entries are both configured with the same value for the email address:

dn: uid=bnelson,ou=people,dc=example,dc=com
uid: bnelson
mail: bill.nelson@identityfusion.com
[LDIF Stuff Snipped]
dn: uid=scarter,ou=people,dc=example,dc=com
uid: scarter
mail: bill.nelson@identityfusion.com
[LDIF Stuff Snipped]

In some cases this may be fine, but in others it may not be the desired effect, as you may need to enforce uniqueness for attributes such as uid, guid, email address, or even credit card numbers. To ensure that attribute values are unique across directory server entries you need to configure attribute uniqueness.

UID Uniqueness Plug-In

OpenDJ has an existing plug-in that can be used to configure unique values for the uid attribute, but this plug-in is disabled by default.  You can find this entry in OpenDJ’s main configuration file (config.ldif) or by searching the cn=config tree in OpenDJ (assuming you have the correct permissions to do so).

dn: cn=UID Unique Attribute,cn=Plugins,cn=config
objectClass: ds-cfg-unique-attribute-plugin
objectClass: ds-cfg-plugin
objectClass: top
ds-cfg-enabled: false
ds-cfg-java-class: org.opends.server.plugins.UniqueAttributePlugin
ds-cfg-plugin-type: preOperationAdd
ds-cfg-plugin-type: preOperationModify
ds-cfg-plugin-type: preOperationModifyDN
ds-cfg-plugin-type: postOperationAdd
ds-cfg-plugin-type: postOperationModify
ds-cfg-plugin-type: postOperationModifyDN
ds-cfg-plugin-type: postSynchronizationAdd
ds-cfg-plugin-type: postSynchronizationModify
ds-cfg-plugin-type: postSynchronizationModifyDN
ds-cfg-invoke-for-internal-operations: true
ds-cfg-type: uid
cn: UID Unique Attribute
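
If you would rather not browse cn=config directly, the plug-in's status can also be checked with dsconfig.  A quick sketch (connection options as used elsewhere in this post; check dsconfig --help if your version differs):

./dsconfig get-plugin-prop --hostname localhost --port 4444 \
--bindDN "cn=Directory Manager" --bindPassword password \
--plugin-name "UID Unique Attribute" --property enabled \
--trustAll --no-prompt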

Leaving this plug-in disabled can cause problems with OpenAM, however, if OpenAM has been configured to authenticate using the uid attribute (and you ‘accidentally’ create entries with the same uid value). In such cases you will see an authentication error during the login process as OpenAM cannot determine which account you are trying to use for authentication.

Configuring Uniqueness

To prevent this problem, you can use the OpenDJ dsconfig command to enable the UID Unique Attribute plug-in as follows:

./dsconfig set-plugin-prop --hostname localhost --port 4444  \
--bindDN "cn=Directory Manager" --bindPassword password \
--plugin-name "UID Unique Attribute" \
--set base-dn:ou=people,dc=example,dc=com --set enabled:true \
--trustAll --no-prompt

This will prevent entries from being added to (or modified in) OpenDJ whenever the incoming entry's uid value conflicts with the uid of an existing entry.  This addresses the situation where you are using the uid attribute for authentication in OpenAM, but what if you want to use a different attribute (such as mail) to authenticate? In such cases, you need to create your own uniqueness plug-in as follows:

./dsconfig create-plugin --hostname localhost --port 4444  \
--bindDN "cn=Directory Manager" --bindPassword password \
--plugin-name "Unique Email Address Plugin" \
--type unique-attribute --set type:mail --set enabled:true \
--set base-dn:ou=people,dc=example,dc=com --trustAll \
--no-prompt

In both cases the base-dn parameter defines the scope where the uniqueness applies. This is useful in multitenant environments where you may want to define uniqueness within a particular subtree but not necessarily across the entire server.
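
As a quick illustration of what to expect once uniqueness is being enforced, an add (or modify) that reuses an existing mail value is rejected by OpenDJ with a constraint violation (LDAP result code 19).  The entry below is hypothetical, and the exact wording of the error message varies by version:

$ ldapmodify -h localhost -p 1389 -D "cn=Directory Manager" -w password
dn: uid=jdoe,ou=people,dc=example,dc=com
changetype: add
objectClass: inetOrgPerson
uid: jdoe
cn: John Doe
sn: Doe
mail: bill.nelson@identityfusion.com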

Prerequisites

The uniqueness plug-in requires that you have an existing equality index configured for the attribute where you would like to enforce uniqueness.  The index is necessary so that OpenDJ can search for other entries (within the scope of the base-dn) where the attribute may already have a particular value set.

The following dsconfig command can be used to create an equality index for the mail attribute:

./dsconfig  create-local-db-index  --hostname localhost --port 4444  \
--bindDN "cn=Directory Manager" --bindPassword password --backend-name userRoot  \
--index-name mail  --set index-type:equality --trustAll --no-prompt
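
One additional note: if the backend already contains entries, the new index needs to be built before it can be used.  A minimal sketch using OpenDJ's rebuild-index command with the server stopped is shown below (check rebuild-index --help for the exact options supported by your version, or schedule it as a task against a running server):

./rebuild-index --baseDN dc=example,dc=com --index mail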

Summary

OpenAM's default settings (Data Store, LDAP authentication module, etc.) use the uid attribute to authenticate and uniquely identify a user.  OpenDJ typically uses uid as the unique naming attribute in a user's distinguished name.  When the two are combined, it is almost assumed that you will be using the uid attribute in this manner, but that is not always the case.  You can easily run into issues when you start coloring outside of the lines and begin using other attributes (e.g. mail) for this purpose.  Armed with the information contained in this post, however, you should easily be able to configure OpenDJ to enforce uniqueness for any attribute.

 

Understanding OpenAM and OpenDJ Account Lockout Behaviors

April 22, 2014

The OpenAM Authentication Service can be configured to lock a user's account after a defined number of login attempts have failed.  Account Lockout is disabled by default, but when configured properly, this feature can be useful in fending off brute force attacks against OpenAM login screens.

If your OpenAM environment includes an LDAP server (such as OpenDJ) as an authentication database, then you have options on how (and where) you can configure Account Lockout settings.  This can be performed either in OpenAM (as mentioned above) or in the LDAP server itself, but the behavior differs based on where it is configured.  There are benefits and drawbacks to configuring Account Lockout in either product, and knowing the difference is essential.

Note:  Configuring Account Lockout simultaneously in both products can lead to confusing results and should be avoided unless you have a firm understanding of how each product works.  See the scenario at the end of this article for a deeper dive on Account Lockout from an attribute perspective. 

The OpenAM Approach

You can configure Account Lockout in OpenAM either globally or for a particular realm.  To access the Account Lockout settings for the global configuration,

  1. Log in to OpenAM Console
  2. Navigate to:  Configuration > Authentication > Core
  3. Scroll down to Account Lockout section

To access Account Lockout settings for a particular realm,

  1. Log in to OpenAM Console
  2. Navigate to:  Access Control > realm > Authentication > All Core Settings
  3. Scroll down to Account Lockout section

In either location you will see various parameters for controlling Account Lockout as follows:

 

OpenAM Account Lockout Parameters

Configuring Account Lockout in OpenAM

 

Account Lockout is disabled by default; you need to select the "Login Failure Lockout Mode" checkbox to enable this feature.  Once it is enabled, you can configure the number of attempts before an account is locked and whether a warning message is displayed to the user before that happens.  You can configure how long the account remains locked and even the duration between successive lockouts (which can increase if you set the duration multiplier).  You can also configure additional attributes for storing the account lockout information beyond the default attributes configured in the Data Store.
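
For scripted or repeatable deployments, the same realm-level settings can also be pushed with ssoadm.  The sketch below is based on my recollection of the iPlanetAMAuthService attribute names and option letters, so treat it as an assumption and verify it (for example with ssoadm get-realm-svc-attrs) before relying on it:

./ssoadm set-realm-svc-attrs -u amadmin -f /tmp/pwd.txt -e /myRealm \
-s iPlanetAMAuthService \
-a iplanet-am-auth-login-failure-lockout-mode=true \
iplanet-am-auth-login-failure-count=3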

Enabling Account Lockout affects the following Data Store attributes:  inetUserStatus and sunAMAuthInvalidAttemptsData.  By default, the value of the inetUserStatus attribute is either Active or Inactive, but OpenAM can be told to use another attribute and another attribute value.  This is set in the User Configuration section of the Data Store configuration as follows:

 


Data Store Account Lockout Attributes

 

These attributes are updated in the Data Store configuration for the realm.  A benefit of implementing Account Lockout in OpenAM is that you can use any LDAPv3 directory, Active Directory, or even a relational database – but you do need to have a Data Store configured to provide OpenAM with somewhere to write these values.  An additional benefit is that OpenAM is already configured with error messages that can be easily displayed when a user’s account is about to be locked or has become locked.  Configuring Account Lockout within OpenAM, however, may not provide the level of granularity that you might need and as such, you may need to configure it in the authentication database (such as OpenDJ).

The OpenDJ Approach

OpenDJ can be configured to lock accounts as well.  This is defined in a password policy and can be configured globally (the entire OpenDJ instance) or it may be applied to a subentry (a group of users or a specific user).  Similar to OpenAM, a user’s account can be locked after a number of invalid authentication attempts have been made.  And similar to OpenAM, you have several additional settings that can be configured to control the lockout period, whether warnings should be sent, and even who to notify when the account has been locked.

But while configuring Account Lockout in OpenAM only recognizes invalid password attempts made through your SSO environment, configuring it in OpenDJ recognizes invalid attempts from any application that uses OpenDJ as an authentication database.  This is more of a centralized approach and can recognize attacks from several vectors.

Configuring Account Lockout in OpenDJ affects the following OpenDJ attributes:  pwdFailureTime (a multivalued attribute consisting of the timestamp of each invalid password attempt) and pwdAccountLockedTime (a timestamp indicating when the account was locked).

Another benefit of implementing Account Lockout in OpenDJ is the ability to configure Account Lockout for different types of users.  This is helpful when you want to have different password policies for users, administrators, or even service accounts.  This is accomplished by assigning different password polices directly to those users or indirectly through groups or virtual attributes.  A drawback to this approach, however, is that OpenAM doesn’t necessarily recognize the circumstances behind error messages returned from OpenDJ when a user is unable to log in.  A scrambled password in OpenDJ, for instance, simply displays as an Authentication failed error message in the OpenAM login screen.

By default, all users in OpenDJ are automatically assigned a generic (rather lenient) password policy that is aptly named:  Default Password Policy.  The definition of this policy can be seen as follows:

 

dn: cn=Default Password Policy,cn=Password Policies,cn=config
objectClass: ds-cfg-password-policy
objectClass: top
objectClass: ds-cfg-authentication-policy
ds-cfg-skip-validation-for-administrators: false
ds-cfg-force-change-on-add: false
ds-cfg-state-update-failure-policy: reactive
ds-cfg-password-history-count: 0
ds-cfg-password-history-duration: 0 seconds
ds-cfg-allow-multiple-password-values: false
ds-cfg-lockout-failure-expiration-interval: 0 seconds
ds-cfg-lockout-failure-count: 0
ds-cfg-max-password-reset-age: 0 seconds
ds-cfg-max-password-age: 0 seconds
ds-cfg-idle-lockout-interval: 0 seconds
ds-cfg-java-class: org.opends.server.core.PasswordPolicyFactory
ds-cfg-lockout-duration: 0 seconds
ds-cfg-grace-login-count: 0
ds-cfg-force-change-on-reset: false
ds-cfg-default-password-storage-scheme: cn=Salted SHA-1,cn=Password Storage 
  Schemes,cn=config
ds-cfg-allow-user-password-changes: true
ds-cfg-allow-pre-encoded-passwords: false
ds-cfg-require-secure-password-changes: false
cn: Default Password Policy
ds-cfg-require-secure-authentication: false
ds-cfg-expire-passwords-without-warning: false
ds-cfg-password-change-requires-current-password: false
ds-cfg-password-generator: cn=Random Password Generator,cn=Password Generators,
  cn=config
ds-cfg-password-expiration-warning-interval: 5 days
ds-cfg-allow-expired-password-changes: false
ds-cfg-password-attribute: userPassword
ds-cfg-min-password-age: 0 seconds

 

The value of the ds-cfg-lockout-failure-count attribute is 0, which means that user accounts are not locked by default – no matter how many incorrect attempts are made.  This is one of the many security settings that you can configure in a password policy, and while many of these mimic what is available in OpenAM, others go quite a bit deeper.

You can use the OpenDJ dsconfig command to change the Default Password Policy as follows:

 

dsconfig set-password-policy-prop --policy-name "Default Password Policy" \
--set lockout-failure-count:3 --hostname localhost --port 4444 \
--trustAll --bindDN "cn=Directory Manager" --bindPassword ****** --no-prompt

 

Rather than modifying the Default Password Policy, a preferred method is to create a new password policy and apply your own specific settings to the new policy.  This policy can then be applied to a specific set of users.

The syntax for using the OpenDJ dsconfig command to create a new password policy can be seen below.

 

dsconfig create-password-policy --set default-password-storage-scheme:"Salted SHA-1" \
--set password-attribute:userpassword --set lockout-failure-count:3 \
--type password-policy --policy-name "Example Corp User Password Policy" \
--hostname localhost --port 4444 --trustAll --bindDN "cn=Directory Manager" \
--bindPassword ****** --no-prompt

 

Note:  This example contains a minimum number of settings (default-password-storage-scheme, password-attribute, and lockout-failure-count).  Consider adding additional settings to customize your password policy as desired.

You can now assign the password policy to an individual user by adding the following attribute to the user's entry:

 

ds-pwp-password-policy-dn:  cn=Example Corp User Password Policy,cn=Password 
  Policies, cn=config

 

This can be performed using any LDAP client where you have write permissions to a user’s entry.  The following example uses the ldapmodify command in an interactive mode to perform this operation:

 

$ ldapmodify -D "cn=Directory Manager" -w ****** <ENTER>
dn: uid=bnelson,ou=People,dc=example,dc=com <ENTER>
changetype: modify <ENTER>
replace: ds-pwp-password-policy-dn <ENTER>
ds-pwp-password-policy-dn: cn=Example Corp User Password Policy,
  cn=Password Policies,cn=config <ENTER>
<ENTER>

 

Another method of setting this password policy is through the use of a dynamically created virtual attribute (i.e. one that is not persisted in the OpenDJ database backend).  The following definition automatically assigns this new password policy to all users beneath the ou=people container (the scope of the virtual attribute) that also match the configured filter.

 

dn: cn=Example Corp User Password Policy Assignment,cn=Virtual 
  Attributes,cn=config
objectClass: ds-cfg-virtual-attribute
objectClass: ds-cfg-user-defined-virtual-attribute
objectClass: top
ds-cfg-base-dn: ou=people,dc=example,dc=com
cn: Example Corp User Password Policy Assignment
ds-cfg-attribute-type: ds-pwp-password-policy-dn
ds-cfg-enabled: true
ds-cfg-java-class: 
  org.opends.server.extensions.UserDefinedVirtualAttributeProvider
ds-cfg-filter: (objectclass=sbacperson)
ds-cfg-value: cn=Example Corp User Password Policy,cn=Password 
  Policies,cn=config

 

Note:  You can also use filters to create very granular control over how password policies are applied.

Configuring Account Lockout in OpenDJ offers more flexibility and, as such, may be considered more powerful than OpenAM in this area.  The potential confusion, however, comes when attempting to unlock a user's account after it has been locked out of both OpenAM and OpenDJ.  This is described in the following example.

A Deeper Dive into Account Lockout

Consider an environment where OpenAM is configured with the LDAP authentication module and that module has been configured to use an OpenDJ instance as the authentication database.

 


OpenDJ Configured as AuthN Database

 

OpenAM and OpenDJ have both been configured to lock a user's account after 3 invalid password attempts.  What kind of behavior can you expect?  Let's walk through each step of an Account Lockout process and observe the effect on the Account Lockout specific attributes.

 

Step 1:  Query Account Lockout Specific Attributes for the Test User

 


$ ldapsearch -D "cn=Directory Manager" -w ****** uid=testuser1 inetuserstatus \
  sunAMAuthInvalidAttemptsData pwdFailureTime pwdAccountLockedTime

dn: uid=testuser1,ou=test,dc=example,dc=com
inetuserstatus: Active

 

The user is currently active and Account Lockout specific attributes are empty.

 

Step 2:  Open a browser and access the OpenAM login screen for the realm where Account Lockout has been configured.

 


OpenAM Login Page

 

Step 3:  Enter an invalid password for this user

 


OpenAM Authentication Failure Message

 

Step 4:  Query Account Lockout Specific Attributes for the Test User

 

$ ldapsearch -D "cn=Directory Manager" -w ****** uid=testuser1 inetuserstatus \
  sunAMAuthInvalidAttemptsData pwdFailureTime pwdAccountLockedTime

dn: uid=testuser1,ou=test,dc=example,dc=com
sunAMAuthInvalidAttemptsData:: PEludmFsaWRQYXNzd29yZD48SW52YWxpZENvdW50PjE8L0
  ludmFsaWRDb3VudD48TGFzdEludmFsaWRBdD4xMzk4MTcxNTAwMDE4PC9MYXN0SW52YWxpZEF0P
  jxMb2NrZWRvdXRBdD4wPC9Mb2NrZWRvdXRBdD48QWN0dWFsTG9ja291dER1cmF0aW9uPjA8L0Fj
  dHVhbExvY2tvdXREdXJhdGlvbj48L0ludmFsaWRQYXNzd29yZD4=
inetuserstatus: Active
pwdFailureTime: 20140422125819.918Z

 

You now see that there is a value for the pwdFailureTime.  This is the timestamp of when the first password failure occurred.  This attribute was populated by OpenDJ.

The sunAMAuthInvalidAttemptsData attribute is populated by OpenAM.  This is a base64 encoded value that contains valuable information regarding the invalid password attempt.  Run this through a base64 decoder and you will see that this attribute contains the following information:

 

<InvalidPassword><InvalidCount>1</InvalidCount><LastInvalidAt>1398171500018
  </LastInvalidAt><LockedoutAt>0</LockedoutAt><ActualLockoutDuration>0
  </ActualLockoutDuration></InvalidPassword>
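
If you prefer to stay on the command line, the same decoding can be done with the base64 utility (GNU syntax shown; some systems use base64 -D instead).  The value below is simply the wrapped sunAMAuthInvalidAttemptsData value from the search output above, joined onto a single line:

$ echo 'PEludmFsaWRQYXNzd29yZD48SW52YWxpZENvdW50PjE8L0ludmFsaWRDb3VudD48TGFzdEludmFsaWRBdD4xMzk4MTcxNTAwMDE4PC9MYXN0SW52YWxpZEF0PjxMb2NrZWRvdXRBdD4wPC9Mb2NrZWRvdXRBdD48QWN0dWFsTG9ja291dER1cmF0aW9uPjA8L0FjdHVhbExvY2tvdXREdXJhdGlvbj48L0ludmFsaWRQYXNzd29yZD4=' | base64 --decode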

 

Step 5:  Repeat Steps 2 and 3.  (This is the second password failure.)

 

Step 6:  Query Account Lockout Specific Attributes for the Test User

 

$ ldapsearch -D "cn=Directory Manager" -w ****** uid=testuser1 inetuserstatus \
  sunAMAuthInvalidAttemptsData pwdFailureTime pwdAccountLockedTime

dn: uid=testuser1,ou=test,dc=example,dc=com
sunAMAuthInvalidAttemptsData:: PEludmFsaWRQYXNzd29yZD48SW52YWxpZENvdW50PjI8L0
  ludmFsaWRDb3VudD48TGFzdEludmFsaWRBdD4xMzk4MTcxNTUzMzUwPC9MYXN0SW52YWxpZEF0P
  jxMb2NrZWRvdXRBdD4wPC9Mb2NrZWRvdXRBdD48QWN0dWFsTG9ja291dER1cmF0aW9uPjA8L0Fj
  dHVhbExvY2tvdXREdXJhdGlvbj48L0ludmFsaWRQYXNzd29yZD4=
inetuserstatus: Active
pwdFailureTime: 20140422125819.918Z
pwdFailureTime: 20140422125913.151Z

 

There are now two values for the pwdFailureTime attribute – one for each password failure.  The sunAMAuthInvalidAttemptsData attribute has been updated as follows:

 

<InvalidPassword><InvalidCount>2</InvalidCount><LastInvalidAt>1398171553350
  </LastInvalidAt><LockedoutAt>0</LockedoutAt><ActualLockoutDuration>0
  </ActualLockoutDuration></InvalidPassword>

 

Step 7:  Repeat Steps 2 and 3.  (This is the third and final password failure.)

 


OpenAM Inactive User Page

 

OpenAM displays an error message indicating that the user’s account is not active.  This is OpenAM’s way of acknowledging that the user’s account has been locked.

 

Step 8:  Query Account Lockout Specific Attributes for the Test User

 

$ ldapsearch -D "cn=Directory Manager" -w ****** uid=testuser1 inetuserstatus \ 
  sunAMAuthInvalidAttemptsData pwdFailureTime pwdAccountLockedTime

dn: uid=testuser1,ou=test,dc=example,dc=com
sunAMAuthInvalidAttemptsData:: PEludmFsaWRQYXNzd29yZD48SW52YWxpZENvdW50PjA8L0
  ludmFsaWRDb3VudD48TGFzdEludmFsaWRBdD4wPC9MYXN0SW52YWxpZEF0PjxMb2NrZWRvdXRBd
  D4wPC9Mb2NrZWRvdXRBdD48QWN0dWFsTG9ja291dER1cmF0aW9uPjA8L0FjdHVhbExvY2tvdXRE
  dXJhdGlvbj48L0ludmFsaWRQYXNzd29yZD4=
inetuserstatus: Inactive
pwdFailureTime: 20140422125819.918Z
pwdFailureTime: 20140422125913.151Z
pwdFailureTime: 20140422125944.771Z
pwdAccountLockedTime: 20140422125944.771Z

 

There are now three values for the pwdFailureTime attribute – one for each password failure.  The sunAMAuthInvalidAttemptsData attribute has been updated as follows:

 

<InvalidPassword><InvalidCount>0</InvalidCount><LastInvalidAt>0</LastInvalidAt>
  <LockedoutAt>0</LockedoutAt><ActualLockoutDuration>0</ActualLockoutDuration>
  </InvalidPassword>

 

You will note that the counters have all been reset to zero.  That is because the user's account has been inactivated by OpenAM by setting the value of the inetuserstatus attribute to Inactive.  Additionally, the third invalid password caused OpenDJ to lock the account by setting the value of the pwdAccountLockedTime attribute to the timestamp of the last password failure.

Now that the account is locked out, how do you unlock it?  The natural thing for an OpenAM administrator to do is to reset the value of the inetuserstatus attribute and they would most likely use the OpenAM Console to do this as follows:

 


OpenAM Edit User Page (Change User Status)

 

The problem with this approach is that while the user’s status in OpenAM is now made active, the status in OpenDJ remains locked.

 

$ ldapsearch -D "cn=Directory Manager" -w ****** uid=testuser1 inetuserstatus \
  sunAMAuthInvalidAttemptsData pwdFailureTime pwdAccountLockedTime

dn: uid=testuser1,ou=test,dc=example,dc=com
sunAMAuthInvalidAttemptsData:: PEludmFsaWRQYXNzd29yZD48SW52YWxpZENvdW50PjA8L0
  ludmFsaWRDb3VudD48TGFzdEludmFsaWRBdD4wPC9MYXN0SW52YWxpZEF0PjxMb2NrZWRvdXRBd
  D4wPC9Mb2NrZWRvdXRBdD48QWN0dWFsTG9ja291dER1cmF0aW9uPjA8L0FjdHVhbExvY2tvdXRE
  dXJhdGlvbj48L0ludmFsaWRQYXNzd29yZD4=
inetuserstatus: Active
pwdFailureTime: 20140422125819.918Z
pwdFailureTime: 20140422125913.151Z
pwdFailureTime: 20140422125944.771Z
pwdAccountLockedTime: 20140422125944.771Z

 

Attempting to log in to OpenAM with this user's account yields an authentication error that would make most OpenAM administrators scratch their heads, especially after just resetting the user's status.

 


OpenAM Authentication Failure Message

 

The trick to fixing this is to clear the pwdAccountLockedTime and pwdFailureTime attributes, and the way to do this is by modifying the user's password.  Once again, the ldapmodify command can be used as follows:

 

$ ldapmodify -D "cn=Directory Manager" -w ****** <ENTER>
dn: uid=testuser1,ou=test,dc=example,dc=com <ENTER>
changetype: modify <ENTER>
replace: userPassword <ENTER>
userPassword: newpassword <ENTER>
<ENTER>
$ ldapsearch -D "cn=Directory Manager" -w ****** uid=testuser1 inetuserstatus \
  sunAMAuthInvalidAttemptsData pwdFailureTime pwdAccountLockedTime

dn: uid=testuser1,ou=test,dc=example,dc=com
sunAMAuthInvalidAttemptsData:: PEludmFsaWRQYXNzd29yZD48SW52YWxpZENvdW50PjA8L0
  ludmFsaWRDb3VudD48TGFzdEludmFsaWRBdD4wPC9MYXN0SW52YWxpZEF0PjxMb2NrZWRvdXRBd
  D4wPC9Mb2NrZWRvdXRBdD48QWN0dWFsTG9ja291dER1cmF0aW9uPjA8L0FjdHVhbExvY2tvdXRE
  dXJhdGlvbj48L0ludmFsaWRQYXNzd29yZD4=
inetuserstatus: Active
pwdChangedTime: 20140422172242.676Z

 

This, however, requires two different interfaces for managing the user's account.  An easier method is to combine the changes into one interface.  You can modify the inetuserstatus attribute using ldapmodify, or, if you are using the OpenAM Console, simply change the password while you are updating the user's status.

 


OpenAM Edit User Page (Change Password)

 

There are other ways to update one attribute by simply modifying the other.  This can range in complexity from a simple virtual attribute to a more complex yet powerful custom OpenDJ plugin.  But in the words often attributed to Voltaire, "With great power comes great responsibility."

So go forth and wield your new found power; but do it in a responsible manner.

It’s OK to Get Stressed Out with OpenAM

March 7, 2014

In fact, it’s HIGHLY recommended….

Performance testing and stress testing are closely related and are essential tasks in any OpenAM deployment.


When conducting performance testing, you are trying to determine how well your system performs when subjected to a particular load. A primary goal of performance testing is to determine whether the system that you just built can support your client base (as defined by your performance requirements).  Oftentimes you must tweak things (memory, configuration settings, hardware) in order to meet your performance requirements, but without executing performance tests, you will never know if you can support your clients until you are actually under fire (and by then, it may be too late).

Performance testing is an iterative process as shown in the following diagram:

[Diagram: the iterative performance testing cycle – Test, Measure, Compare, Tweak]

Each of the states may be described as follows:

  1. Test – throw a load at your server
  2. Measure – take note of the results
  3. Compare – compare your results to those desired
  4. Tweak – modify the system to help achieve your performance results

During performance testing you may continue in this loop until such time that you meet your performance requirements – or until you find that your requirements were unrealistic in the first place.

Stress testing (aka “torture testing”) goes beyond normal performance testing in that the load you place on the system intentionally exceeds the anticipated capacity.  The goal of stress testing is to determine the breaking point of the system and observe the behavior when the system fails.


Stress testing allows you to create contingency plans for those 'worst case scenarios' that will eventually occur (thanks to Mr. Murphy).

Before placing OpenAM into production you should test to see if your implementation meets your current performance requirements (concurrent sessions, authentications per second, etc.) and have a pretty good idea of where your limitations are.  The problem is that an OpenAM deployment is composed of multiple servers – each of which may need to be tested (and tuned) separately.  So how do you know where to start?

When executing performance and stress tests in OpenAM, there are three areas where I like to place my focus: 1) the protected application, 2) the OpenAM server, and 3) the data store(s). Testing the system as a whole may not provide enough information to determine where problems may lie and so I prefer to take an incremental approach that tests each component in sequence.  I start with the data stores (authentication and user profile databases) and work my way back towards the protected application – with each iteration adding a new component.


Note:  It should go without saying that the testing environment should mimic your production environment as closely as possible.  Any deviation may cause your test results to be skewed and provide inaccurate data.

Data Store(s)

An OpenAM deployment may consist of multiple data stores – those that are used for authentication (Active Directory, OpenDJ, Radius Server, etc.) and those that are used to build a user’s profile (LDAP and RDBMS).  Both of these are core to an OpenAM deployment and while they are typically the easiest to test, a misconfiguration here may have a pretty big impact on overall performance.  As such, I start my testing at the database layer and focus only on that component.


Performance of an authentication database can be measured by the average number of authentications that occur over a particular period of time (seconds, minutes, hours) and the easiest way to test these types of databases is to simply perform authentication operations against them.

You can write your own scripts to accomplish this, but there are many freely available tools that can be used as well.  One tool that I have used in the past is the SLAMD Distributed Load Generation Engine.  SLAMD was designed to test directory server performance, but it can be used to test web applications as well.  Unfortunately, SLAMD is no longer being actively developed, but you can still download a copy from http://dl.thezonemanager.com/slamd/.

A tool that I have started using to test authentications against an LDAP server is authrate, which is included in ForgeRock’s OpenDJ LDAP Toolkit.  Authrate allows you to stress the server and display some really nice statistics while doing so.  The authrate command line tool measures bind throughput and response times and is perfect for testing all sorts of LDAP authentication databases.

Performance of a user profile database is typically measured in search performance against that database.  If your user profile database can be searched using LDAP (i.e. Active Directory or any LDAPv3 server), then you can use searchrate – also included in the OpenDJ LDAP Toolkit.  searchrate is a command line tool that measures search throughput and response time.
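
By way of illustration, a searchrate invocation might look something like the one below.  This is only a sketch based on the toolkit's documented examples (the -g argument generator feeds random values into the %d placeholder in the filter); run searchrate --help to confirm the exact options for your version:

./searchrate -h localhost -p 1389 -D "cn=Directory Manager" -w password \
-c 4 -t 4 -b dc=example,dc=com -g "rand(0,2000)" "(uid=user.%d)" mail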

The following is sample output from the searchrate command:

-------------------------------------------------------------------------------
     Throughput                            Response Time                       
   (ops/second)                           (milliseconds)                      
recent  average  recent  average  99.9%  99.99%  99.999%  err/sec  Entries/Srch
-------------------------------------------------------------------------------
 188.7    188.7   3.214    3.214  306.364 306.364  306.364    0.0           0.0
 223.1    205.9   2.508    2.831  27.805  306.364  306.364    0.0           0.0
 245.7    219.2   2.273    2.622  20.374  306.364  306.364    0.0           0.0
 238.7    224.1   2.144    2.495  27.805  306.364  306.364    0.0           0.0
 287.9    236.8   1.972    2.368  32.656  306.364  306.364    0.0           0.0
 335.0    253.4   1.657    2.208  32.656  306.364  306.364    0.0           0.0
 358.7    268.4   1.532    2.080  30.827  306.364  306.364    0.0           0.0

The first two columns represent the throughput (number of operations per second) observed in the server. The first column contains the most recent value and the second column contains the average throughput since the test was initiated (i.e. the average of all values contained in column one).

The remaining columns represent response times with the third column being the most recent response time and the fourth column containing the average response time since the test was initiated. Columns five, six, and seven (represented by percentile headers) demonstrate how many operations fell within that range.

For instance, by the time we are at the 7th row, 99.9% of the operations are completed in 30.827 ms (5th column, 7th row), 99.99% are completed in 306.364 ms (6th column, 7th row), and 99.999% of them are completed within 306.364 ms (7th column, 7th row). The percentile rankings provide a good indication of the real system performance and can be interpreted as follows:

  • 1 out of 1,000 search requests is exceeding 30 ms
  • 1 out of 100,000 requests is exceeding 306 ms

Note: The values shown above were generated on an untuned, limited-resource test system. Results will vary depending on the amount of JVM memory, the system CPU(s), and the data contained in the directory. Generally, OpenDJ systems can achieve much better performance than the values shown above.

There are several factors that may need to be considered when tuning authentication and user profile databases.  For instance, if you are using OpenDJ for your database you may need to modify your database cache, the number of worker threads, or even how indexing is configured in the server.  If your constraint is operating system based, you may need to increase the size of the JVM or the number of file descriptors.   If the hardware is the limiting factor, you may need to increase RAM, use high speed disks, or even faster network interfaces.  No matter what the constraint, you should optimize the databases (and database servers) before moving up the stack to the OpenAM instance.

OpenAM Instance + Data Store(s)

Once you have optimized any data store(s) you can now begin testing directly against OpenAM as it is configured against those data store(s).  Previous testing established a performance baseline and any degradation introduced at this point will be due to OpenAM or the environment (operating system, Java container) where it has been configured.


But how can you test an OpenAM instance without introducing the application that it is protecting?  One way is to generate a series of authentications and authorizations using direct interfaces such as the OpenAM API or REST calls.  I prefer to use REST calls as this is the easiest to implement.

There are browser based applications such as Postman that are great for functional testing, but these are not easily scriptable.  As such, I lean towards a shell or Perl script containing a loop of cURL commands.

Note: You should use the same authentication and search operations in your cURL commands to be sure that you are making a fair comparison between the standalone database testing and the introduction of OpenAM.
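
As an illustration, a loop along the following lines is usually all it takes.  The /json/authenticate endpoint and the X-OpenAM-Username/X-OpenAM-Password headers are the ones documented for OpenAM 11.x and 12.x; the host name and the users.txt credentials file (user:password per line) are hypothetical and would need to match your environment:

#!/bin/bash
OPENAM="https://openam.example.com:8443/openam"

# Issue one REST authentication per line of users.txt and print the HTTP
# status code returned for each attempt
while IFS=: read -r user pass; do
  curl -s -o /dev/null -w "%{http_code} ${user}\n" \
    --request POST \
    --header "X-OpenAM-Username: ${user}" \
    --header "X-OpenAM-Password: ${pass}" \
    --header "Content-Type: application/json" \
    --data "{}" \
    "${OPENAM}/json/authenticate"
done < users.txt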

You should expect some decrease in performance when the OpenAM server is introduced, but it should not be too drastic.  If you find that it falls outside of your requirements, however, then you should consider updating OpenAM in one of the following areas:

  • LDAP Configuration Settings (i.e. connections to the Configuration Server)
  • Session Settings (if you are hitting limitations)
  • JVM Settings (pay particular attention to garbage collection)
  • Cache Settings (size and time to live)

Details behind each of these areas can be found in the OpenAM Administration Guide.

You may also find that OpenAM’s interaction with the database(s) introduces searches (or other operations) that you did not previously test for.  This may require you to update your database(s) to account for this and restart your performance testing.

Note:  Another tool I have started playing with is the Java Application Monitor (aka JAMon).  While this tool is typically used to monitor a Java application, it provides some useful information to help determine bottlenecks working with databases, file IO, and garbage collection.

Application + OpenAM Instance + Data Store(s)

Once you feel comfortable with the performance delivered by OpenAM and its associated data store(s), it is time to introduce the final component – the protected application, itself.


This will differ quite a bit based on how you are protecting your application (for instance, policy agents will behave differently from OAuth2/OpenID Connect or SAML2) but this does provide you with the information you need to determine if you can meet your performance requirements in a production deployment.

If you have optimized everything up to this point, then the combination of all three components will provide a full end to end test of the entire system.  In this case, network latency will most likely be the limiting factor in performance testing.

To perform a full end to end test of all components, I prefer to use Apache JMeter.  You configure JMeter to use a predefined set of credentials, authenticate to the protected resource, and look for specific responses from the server.  Once you see those responses, JMeter will act according to how you have preconfigured it to act.  This tool allows you to generate a load against OpenAM from login to logout and anything in between.

Other Considerations

Keep in mind that any time you introduce a monitoring tool into a testing environment, the tool (itself) can impact performance.  So while the numbers you receive are useful, they are not altogether accurate.  There may be some slight performance degradation (due to the introduction of the tool) that your users will never see.

You should also be aware that the client machine (where the load generation tools are installed) may become a bottleneck if you are not careful.  You should consider distributing your performance testing tools across multiple client machines to minimize this effect.  This is another way of ensuring that the client environment does not become the limiting factor.

Summary

Like many other areas in our field, performance testing an OpenAM deployment may be considered as much of an art as it is a science.  There may be as many methods for testing as there are consultants, and each varies based on the tools they use.  The information contained here is just one approach to performance testing – one that I have used successfully in our deployments.

What methods have you used?  Feel free to share in the comments, below.

Understanding the iPlanetDirectoryPro Cookie

February 26, 2014

So you have run into problems with OpenAM and you are now looking at the interaction between the browser and the OpenAM server.  To assist you in your efforts you are using a plug-in like LiveHttpHeaders, SAML Tracer, or Fiddler and while you are intently studying "the dance" (as I like to call it), you come across a cookie called iPlanetDirectoryPro that contains a value that looks like something your two-year-old child randomly typed on the keyboard.

AQIC5wM2LY4Sfcy954IRN6Ixz7ZMwVdJkGlqr9urGirFNMQ.*AAJTSQACMDMAAlNLAAoxODIyMjQ4MDI0AAJTMQACMDI.*;

So what is this cookie and what do its contents actually mean?

I’m glad that you asked.  Allow me to explain.

OpenAM Sessions

When a user successfully authenticates against an OpenAM server, a session is generated on that server.  The session is nothing more than an object stored in the memory of the OpenAM server where it was created.  The session contains information about the interaction between the client and the server.  Among other things, the session will contain a session identifier, session times, the method used by the user to authenticate, and the user's identity.  The following is a snippet of the information contained in a user's OpenAM session.

sessionID:  AQIC5wM…
maxSessionTime:  120
maxIdleTime:  30
timeLeft:  6500
userID:  bnelson
authLevel: 1
loginURL:/auth/UI/Login
service: ldapService
locale: en_US

Sessions are identified using a unique token called SSOTokenID.  This token contains the information necessary to locate the session on the server where it is currently being maintained.  The entire value of the iPlanetDirectoryPro cookie is the SSOTokenID.

SSOTokenID = AQIC5wM2LY4Sfcy954IRN6Ixz7ZMwVdJkGlqr9urGirFNMQ.*AAJTSQACMDMAAlNLAAoxODIyMjQ4MDI0AAJTMQACMDI.*;

While this may look like gibberish, it actually has meaning (and is actually quite useful).

The period (.) in the middle of the SSOTokenID is a delimiter that separates the SSOToken from the Session Key.


The SSOToken is a C66Encoded string that points to the session in memory.  The Session Key is a Base64 Encoded string that identifies the location of the site and server where the session is being maintained.  Additionally, the Session Key contains the storage key of the session should you need to identify it in a persistent storage location (such as the amsessiondb [OpenAM v10.0 or lower] or the OpenDJ CTS Store [OpenAM v10.1 or higher]).

So separating the SSOTokenID into its two components you will find the following:

SSOToken:  AQIC5wM2LY4Sfcy954IRN6Ixz7ZMwVdJkGlqr9urGirFNMQ

Session Key:  *AAJTSQACMDMAAlNLAAoxODIyMjQ4MDI0AAJTMQACMDI.*;

As previously indicated, the Session Key is a Base64 Encoded value.  That means that you can decode the Session Key into meaningful information using Base64 Decoders.

Note:  A useful site for this is http://www.base64decode.org/.
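
If you prefer the command line, a small sketch follows.  It assumes GNU coreutils base64 (macOS uses base64 -D), that the leading '*' and the trailing '.*' of the Session Key are stripped off first, and that '=' padding is added to bring the string to a multiple of four characters; tr then strips the non-printable bytes of the underlying Java structure:

# Session Key from the cookie with the leading '*' and trailing '.*' removed
KEY="AAJTSQACMDMAAlNLAAoxODIyMjQ4MDI0AAJTMQACMDI"

printf '%s=' "$KEY" | base64 --decode | tr -cd '[:print:]'; echo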

Running the Session Key listed above through a Base64 decoder yields the following

SI03SK1822248024S102

And can be broken down as follows:

Site:  SI03
Server:  S102
Storage Key:  SK1822248024

 

Note:  The session key is a Base64 encoded Java DataInputStream.  As such, the decoded data includes a combination of both discernible and non-discernible data. The output of running the session key through a Base64 decoder is similar to performing a UNIX ‘strings’ command on a binary database.  The good thing is that the key bits of data are discernible (as shown above).

This information tells you that the session identified by the SSOToken (AQIC5wM2LY4Sfcy954IRN6Ixz7ZMwVdJkGlqr9urGirFNMQ) is being maintained on server 02 in site 03 and is being persisted to the database and may be identified by storage key 1822248024.

IMPORTANT: The SI and S1 keys have different meanings depending on whether the server belongs to a site or not.  If the server belongs to a site, then SI contains the primary site's ID and S1 contains the server's ID.  This is shown in the example, above.  If the server does not belong to a site, then SI contains the server's ID.

Since OpenAM 10.1.0-Xpress, the Storage Key is always part of the session ID.

This may be useful information for debugging, but it is essential information for OpenAM – especially in cases where a load balancer may be configured incorrectly.

Session Stickiness

Another important cookie in OpenAM is amlbcookie.  This cookie defines the server where the session is being maintained (i.e. amlbcookie=02) and should be used by load balancers to maintain client stickiness with that server.  When used properly, the amlbcookie allows a client to be directed to the server where the session is available.  If, however, a client ends up being sent to a different server (due to an incorrectly configured load balancer or the primary server being down) then the new OpenAM server can simply look at the information contained in the Session Key to determine the session’s location and request the session from that server.

Note:  Obtaining the session from another properly working OpenAM server is referred to as “cross talk” and should be avoided if at all possible.  The additional overhead placed on both the OpenAM servers and the network can reduce overall performance and can be avoided by simply configuring the load balancer properly.

Cross Talk Example

If Server 02 in Site 03 is maintaining the session and the client is sent to Server 01, then Server 01 can query Server 02 and ask for the session identified by the SSOToken value.  Server 02 would send the session information to Server 01 where the request can now be serviced.  If Server 01 needs to update any session information, then it does so by updating the session stored on Server 02.  As long as Server 02 remains available, the session is maintained on that server and as such, the communication between Server 01 and Server 02 can become quite "chatty".

Session Failover Example

If Server 02 in Site 03 is maintaining the session and the client is sent to Server 01, then Server 01 can query Server 02 and ask for the session identified by the SSOToken value.  If Server 02 is offline (or doesn’t respond), then Server 01 can obtain the session from the session store (amsessiondb or OpenDJ CTS) using the Storage Key value.

Note:  Session persistence is not enabled by default.

So now you know and you can stop blaming your two-year-old child.

How to Configure OpenAM Signing Keys

February 9, 2014 8 comments

The exchange of SAML assertions between an Identity Provider (IdP) and a Service Provider (SP) uses public-key cryptography to validate the identity of the IdP and the integrity of the assertion.

Securing SAML Assertions

SAML assertions passed over the public Internet include a digital signature created with the Identity Provider’s private key.  Additionally, the assertion includes the IdP’s public key, contained in the body of a digital certificate.  A Service Provider receiving the assertion can be assured that it has not been tampered with by using that public key to recover the hash from the digital signature and comparing it with a hash of the assertion that the Service Provider computes itself using the same hashing algorithm.

The process can be demonstrated by the following diagram where the Signing process is performed by the IdP and the Verification process is performed by the SP.  The “Data” referred to in the diagram is the assertion and the “Hash function” is the hashing algorithm used by both the Identity Provider and the Service Provider.

[Diagram: the signing process performed by the IdP and the verification process performed by the SP]
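To make the diagram concrete, here is a small, self-contained Java sketch of the same hash-sign-verify flow using the JDK’s java.security.Signature API.  This is purely illustrative: real SAML signing uses XML digital signatures over the canonicalized assertion (which OpenAM handles for you), and the key pair, assertion string, and algorithm below are placeholder values for the example.

import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SignAndVerifyDemo {

    public static void main(String[] args) throws Exception {
        // Stand-in for the IdP's key pair; in OpenAM the private key lives in the
        // keystore and the public key travels inside the X.509 certificate.
        KeyPairGenerator generator = KeyPairGenerator.getInstance("RSA");
        generator.initialize(2048);
        KeyPair idpKeyPair = generator.generateKeyPair();

        // The "Data" from the diagram: the assertion to be protected.
        byte[] assertion = "<saml:Assertion>...</saml:Assertion>"
                .getBytes(StandardCharsets.UTF_8);

        // IdP side: hash the assertion and encrypt the hash with the private key.
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(idpKeyPair.getPrivate());
        signer.update(assertion);
        byte[] signature = signer.sign();

        // SP side: hash the received assertion and compare it with the hash
        // recovered from the signature using the IdP's public key.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(idpKeyPair.getPublic());
        verifier.update(assertion);
        System.out.println("Signature valid: " + verifier.verify(signature));
    }
}

If the assertion is altered in transit by even a single byte, the verify call returns false, which is exactly the tamper evidence the Service Provider relies on.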

In order for an Identity Provider to sign assertions, it must first have a key pair and an associated digital certificate.

OpenAM includes a default certificate that can be used for testing purposes.  This certificate is common to all installations and, while convenient, should not be used for production deployments.  Instead, you should either use a certificate obtained from a trusted certificate authority (such as Thawte or Entrust) or generate your own self-signed certificate.

Note:  For the purposes of this article, $CONFIG refers to the location of the configuration folder specified during the installation process, and $URI refers to the URI of the OpenAM application, also specified during the installation process (e.g. /openam).

OpenAM’s Default Signing Key

OpenAM stores its certificates in a Java Keystore file located in the $CONFIG/$URI folder by default.  This location can be confirmed in the OpenAM Console as follows:

  1. Log in to the OpenAM Console as the administrative user.
  2. Select the Configuration tab.
  3. Select the Servers and Sites subtab.
  4. In the Servers panel, select the link for the appropriate server instance.
  5. Select the Security tab.
  6. Select the Key Store link at the top of the page.

You will see the default location of the Java Keystore file, the keystore and private key passwords, and the alias of the default test certificate:

[Screenshot: Key Store settings showing the keystore file location, passwords, and certificate alias]

Viewing the Contents of OpenAM’s Default Certificate

You can view the contents of this file as follows:

  1. Change to the $CONFIG/$URI folder.
  2. Use the Java keytool utility to view the contents of the file.  (Note:  The contents of the file are password protected.  The default password is:  changeit)

# keytool -list -keystore keystore.jks

Enter keystore password:   changeit

Keystore type: JKS

Keystore provider: SUN

Your keystore contains 1 entry

Alias name: test

Creation date: Jul 16, 2008

Entry type: PrivateKeyEntry

Certificate chain length: 1

Certificate[1]:

Owner: CN=test, OU=OpenSSO, O=Sun, L=Santa Clara, ST=California, C=US

Issuer: CN=test, OU=OpenSSO, O=Sun, L=Santa Clara, ST=California, C=US

Serial number: 478d074b

Valid from: Tue Jan 15 19:19:39 UTC 2008 until: Fri Jan 12 19:19:39 UTC 2018

Certificate fingerprints:

MD5:  8D:89:26:BA:5C:04:D8:CC:D0:1B:85:50:2E:38:14:EF

SHA1: DE:F1:8D:BE:D5:47:CD:F3:D5:2B:62:7F:41:63:7C:44:30:45:FE:33

SHA256: 39:DD:8A:4B:0F:47:4A:15:CD:EF:7A:41:C5:98:A2:10:FA:90:5F:4B:8F:F4:08:04:CE:A5:52:9F:47:E7:CF:29

Signature algorithm name: MD5withRSA

Version: 1

*******************************************

*******************************************

Replacing OpenAM’s Default Keystore

You should replace this file with a Java Keystore file containing your own key pair and certificate.  This key will be used to digitally sign assertions when OpenAM plays the role of a Hosted Identity Provider.  The process involves five basic steps:

  1. Generate a new Java Keystore file containing a new key pair consisting of the public and private keys.
  2. Export the digital certificate from the file and make it trusted by your Java installation.
  3. Generate encrypted password files that permit OpenAM to read the keys from the Java Keystore.
  4. Replace OpenAM’s default Java Keystore and password files with your newly created files.
  5. Restart OpenAM.

The following provides the detailed steps for replacing the default Java Keystore.

1.  Generate a New Java Keystore Containing the Key Pair

a)      Change to a temporary folder where you will generate your files.

# cd /tmp

b)     Use the Java keytool utility to generate a new key pair that will be used as the signing key for your Hosted Identity Provider.

# keytool -genkeypair -alias idfsigningkey -keyalg RSA -keysize 1024 -validity 730 -storetype JKS -keystore keystore.jks

Enter keystore password:   cangetin

Re-enter new password:   cangetin

What is your first and last name?

  [Unknown]:  idp.identityfusion.com

What is the name of your organizational unit?

  [Unknown]:  Security

What is the name of your organization?

  [Unknown]:  Identity Fusion

What is the name of your City or Locality?

  [Unknown]:  Tampa

What is the name of your State or Province?

  [Unknown]:  FL

What is the two-letter country code for this unit?

  [Unknown]:  US

Is CN=idp.identityfusion.com, OU=Security, O=Identity Fusion, L=Tampa, ST=FL, C=US correct?

  [no]:  yes

Enter key password for <idfsigningkey>

        (RETURN if same as keystore password):  cangetin

Re-enter new password:  cangetin

You have now generated a self-signed certificate, but since it is signed by you rather than by a trusted certificate authority, it is not automatically trusted by other applications.  In order to trust the new certificate, you need to export it from your keystore file and import it into the cacerts file for your Java installation.  To accomplish this, perform the following steps:

2.      Make the Certificate Trusted

a)      Export the self-signed certificate as follows:

# keytool -exportcert -alias idfsigningkey -file idfSelfSignedCert.crt -keystore keystore.jks

Enter keystore password:  cangetin

Certificate stored in file <idfSelfSignedCert.crt>

b)     Import the certificate into the Java trust store as follows:

# keytool -importcert -alias idfsigningkey -file idfSelfSignedCert.crt -trustcacerts -keystore /usr/lib/jvm/java-7-oracle/jre/lib/security/cacerts

Enter keystore password:   changeit

Owner: CN=idp.identityfusion.com, OU=Security, O=Identity Fusion, L=Tampa, ST=FL, C=US

Issuer: CN=idp.identityfusion.com, OU=Security, O=Identity Fusion, L=Tampa, ST=FL, C=US

Serial number: 34113557

Valid from: Thu Jan 30 04:25:51 UTC 2014 until: Sat Jan 30 04:25:51 UTC 2016

Certificate fingerprints:

         MD5:  AA:F3:60:D1:BA:1D:C6:64:61:7A:CC:16:5E:1C:12:1E

         SHA1: 4A:C3:7D:0E:4C:D6:4C:0F:0B:6B:EC:15:5A:5B:5E:EE:BB:6A:A5:08

         SHA256: A8:22:BE:79:72:52:02:6C:30:6E:86:35:DA:FD:E0:45:6A:85:2C:FE:AA:FB:69:EA:87:30:65:AF:2E:65:FB:EB

         Signature algorithm name: SHA256withRSA

         Version: 3

Extensions:

#1: ObjectId: 2.5.29.14 Criticality=false

SubjectKeyIdentifier [

KeyIdentifier [

0000: 12 3B 83 BE 46 D6 D5 17   0F 49 37 E4 61 CC 89 BE  .;..F….I7.a…

0010: 6D B0 5B F5                                        m.[.

]

]

Trust this certificate? [no]:  yes

OpenAM needs to be able to open the keystore (keystore.jks) and read the key created in step 1.  Both the keystore and the private key, however, are locked with the passwords that you entered while creating them.  For OpenAM to be able to read this information, you need to place these passwords in files on the file system.
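To illustrate why both passwords are needed, here is a minimal Java sketch of what any JKS consumer (OpenAM included) has to do to get at the signing key: the store password unlocks the keystore file itself, and the key password unlocks the individual private key entry.  The file name, alias, and passwords below are simply the example values used in this article.

import java.io.FileInputStream;
import java.security.Key;
import java.security.KeyStore;

public class ReadSigningKey {

    public static void main(String[] args) throws Exception {
        // Example values from this article; in a real deployment OpenAM reads the
        // equivalent passwords from the encrypted .storepass and .keypass files.
        char[] storePassword = "cangetin".toCharArray();
        char[] keyPassword = "cangetin".toCharArray();

        // The store password is required just to open and verify the keystore file.
        KeyStore keyStore = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream("keystore.jks")) {
            keyStore.load(in, storePassword);
        }

        // The key password is then required to read the private key entry itself.
        Key signingKey = keyStore.getKey("idfsigningkey", keyPassword);
        System.out.println("Loaded signing key, algorithm: " + signingKey.getAlgorithm());
    }
}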

3.      Generate Encrypted Password Files

Note:  The passwords start out in clear text, but will be encrypted to provide secure access.

a)      Create the password file for the trust store as follows:

# echo "cangetin"  >  storepass.cleartext

b)     Create the password file for the signing key as follows:

# echo "cangetin"  >  keypass.cleartext

c)      Prepare encrypted versions of these passwords by using the OpenAM ampassword utility (which is part of the OpenAM administration tools).

# ampassword --encrypt keypass.cleartext > .keypass

# ampassword --encrypt storepass.cleartext > .storepass

Note:  Use these exact file names, as you will be replacing the default files of the same name.

4.      Replace the Default OpenAM Files With Your New Files

a)      Make a backup copy of your existing keystore and password files.

# cp $CONFIG/$URI/.keypass $CONFIG/$URI/.keypass.save

# cp $CONFIG/$URI/.storepass $CONFIG/$URI/.storepass.save

# cp $CONFIG/$URI/keystore.jks $CONFIG/$URI/keystore.jks.save

b)     Overwrite the existing keystore and password files as follows:

# cp .keypass $CONFIG/$URI/.keypass

# cp .storepass $CONFIG/$URI/.storepass

# cp keystore.jks $CONFIG/$URI/keystore.jks

5.      Restart the container where OpenAM is currently running. 

This will allow OpenAM to use the new keystore and read the new password files.

Verifying Your Changes

You can use the keytool utility to view the contents of your keystore as previously mentioned in this article.  Alternatively, you can log in to the OpenAM Console and see that OpenAM is using the new signing key as follows:

  1. Log in to OpenAM Console.
  2. Select the Common Tasks tab.
  3. Select the Create Hosted Identity Provider option beneath the Create SAMLv2 Providers section.

[Screenshot: Create Hosted Identity Provider option under the Create SAMLv2 Providers section]

Verify that your new signing key now appears beneath the Signing Key option:

[Screenshot: Signing Key selection showing the new certificate alias]

You have now successfully replaced the default OpenAM Java Keystore with your own custom version.