Archive for the ‘Implementation’ Category

It’s OK to Get Stressed Out with OpenAM

March 7, 2014

In fact, it’s HIGHLY recommended….

Performance testing and stress testing are closely related and are essential tasks in any OpenAM deployment.


When conducting performance testing, you are trying to determine how well your system performs when subjected to a particular load. A primary goal of performance testing is to determine whether the system that you just built can support your client base (as defined by your performance requirements).  Oftentimes you must tweak things (memory, configuration settings, hardware) in order to meet your performance requirements, but without executing performance tests, you will never know if you can support your clients until you are actually under fire (and by then, it may be too late).

Performance testing is an iterative process as shown in the following diagram:

[Diagram: the iterative performance testing cycle – test, measure, compare, tweak]

Each of the states may be described as follows:

  1. Test – throw a load at your server
  2. Measure – take note of the results
  3. Compare – compare your results to those desired
  4. Tweak – modify the system to help achieve your performance results

During performance testing you may continue in this loop until you meet your performance requirements – or until you find that your requirements were unrealistic in the first place.

Stress testing (aka “torture testing”) goes beyond normal performance testing in that the load you place on the system intentionally exceeds the anticipated capacity.  The goal of stress testing is to determine the breaking point of the system and observe the behavior when the system fails.


Stress testing allows you to create contingency plans for those ‘worst-case scenarios’ that will eventually occur (thanks to Mr. Murphy).

Before placing OpenAM into production you should test to see if your implementation meets your current performance requirements (concurrent sessions, authentications per second, etc.) and have a pretty good idea of where your limitations are.  The problem is that an OpenAM deployment consists of multiple servers – each of which may need to be tested (and tuned) separately.  So how do you know where to start?

When executing performance and stress tests in OpenAM, there are three areas where I like to place my focus: 1) the protected application, 2) the OpenAM server, and 3) the data store(s). Testing the system as a whole may not provide enough information to determine where problems may lie and so I prefer to take an incremental approach that tests each component in sequence.  I start with the data stores (authentication and user profile databases) and work my way back towards the protected application – with each iteration adding a new component.

[Diagram: the incremental testing approach – data store(s), then OpenAM, then the protected application]

Note:  It should go without saying that the testing environment should mimic your production environment as closely as possible.  Any deviation may cause your test results to be skewed and provide inaccurate data.

Data Store(s)

An OpenAM deployment may consist of multiple data stores – those that are used for authentication (Active Directory, OpenDJ, Radius Server, etc.) and those that are used to build a user’s profile (LDAP and RDBMS).  Both of these are core to an OpenAM deployment and while they are typically the easiest to test, a misconfiguration here may have a pretty big impact on overall performance.  As such, I start my testing at the database layer and focus only on that component.

[Diagram: testing the data store(s) in isolation]

Performance of an authentication database can be measured by the average number of authentications that occur over a particular period of time (seconds, minutes, hours) and the easiest way to test these types of databases is to simply perform authentication operations against them.

You can write your own scripts to accomplish this, but there are many freely available tools that can be used as well.  One tool that I have used in the past is the SLAMD Distributed Load Generation Engine.  SLAMD was designed to test directory server performance, but it can be used to test web applications as well.  Unfortunately, SLAMD is no longer being actively developed, but you can still download a copy from http://dl.thezonemanager.com/slamd/.

A tool that I have started using to test authentications against an LDAP server is authrate, which is included in ForgeRock’s OpenDJ LDAP Toolkit.  Authrate allows you to stress the server and display some really nice statistics while doing so.  The authrate command line tool measures bind throughput and response times and is perfect for testing all sorts of LDAP authentication databases.
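
The exact invocation depends on your directory layout, but a bind-throughput test with authrate looks something like the sketch below. The host, port, bind DN pattern, and password are placeholders for your own environment, and option names may differ slightly between toolkit versions, so treat this as illustrative only.

# Repeatedly bind as randomly selected test users (user.0 through user.1999)
# and report bind throughput and response times
authrate -h opendj.example.com -p 1389 \
         -D "uid=user.%d,ou=People,dc=example,dc=com" -w password \
         -c 10 -g "rand(0,1999)"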

Performance of a user profile database is typically measured in search performance against that database.  If your user profile database can be searched using LDAP (i.e. Active Directory or any LDAPv3 server), then you can use searchrate – also included in the OpenDJ LDAP Toolkit.  searchrate is a command line tool that measures search throughput and response time.
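
An invocation along these lines could be used to produce output like the sample shown below; again, the host, port, credentials, base DN, and the user.%d naming convention are placeholders, and option names may vary between toolkit versions.

# Repeatedly search for randomly selected test users and report search
# throughput and response times
searchrate -h opendj.example.com -p 1389 \
           -D "cn=Directory Manager" -w password \
           -b "dc=example,dc=com" -c 10 \
           -g "rand(0,1999)" "(uid=user.%d)"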

The following is sample output from the searchrate command:

---------------------------------------------------------------------------------------
            Throughput                    Response Time
           (ops/second)                   (milliseconds)
 recent  average   recent  average     99.9%    99.99%   99.999%  err/sec  Entries/Srch
---------------------------------------------------------------------------------------
  188.7    188.7    3.214    3.214   306.364   306.364   306.364      0.0           0.0
  223.1    205.9    2.508    2.831    27.805   306.364   306.364      0.0           0.0
  245.7    219.2    2.273    2.622    20.374   306.364   306.364      0.0           0.0
  238.7    224.1    2.144    2.495    27.805   306.364   306.364      0.0           0.0
  287.9    236.8    1.972    2.368    32.656   306.364   306.364      0.0           0.0
  335.0    253.4    1.657    2.208    32.656   306.364   306.364      0.0           0.0
  358.7    268.4    1.532    2.080    30.827   306.364   306.364      0.0           0.0

The first two columns represent the throughput (number of operations per second) observed in the server. The first column contains the most recent value and the second column contains the average throughput since the test was initiated (i.e. the average of all values contained in column one).

The remaining columns represent response times, with the third column showing the most recent response time and the fourth column the average response time since the test was initiated. Columns five, six, and seven (the percentile headers) show the response time within which that percentage of operations completed.

For instance, by the time we are at the 7th row, 99.9% of the operations are completed in 30.827 ms (5th column, 7th row), 99.99% are completed in 306.364 ms (6th column, 7th row), and 99.999% of them are completed within 306.364 ms (7th column, 7th row). The percentile rankings provide a good indication of the real system performance and can be interpreted as follows:

  • 1 out of 1,000 search requests exceeds 30 ms
  • 1 out of 100,000 search requests exceeds 306 ms

Note: The values shown above were obtained on an untuned, resource-limited test system. Results will vary depending on the amount of JVM memory, the system CPU(s), and the data contained in the directory. Generally, OpenDJ systems can achieve much better performance than the values shown above.

There are several factors that may need to be considered when tuning authentication and user profile databases.  For instance, if you are using OpenDJ for your database you may need to modify your database cache, the number of worker threads, or even how indexing is configured in the server.  If your constraint is operating system based, you may need to increase the size of the JVM or the number of file descriptors.   If the hardware is the limiting factor, you may need to increase RAM, use high speed disks, or even faster network interfaces.  No matter what the constraint, you should optimize the databases (and database servers) before moving up the stack to the OpenAM instance.
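
As a rough illustration of the kinds of adjustments mentioned above, the commands below show a file descriptor increase, a larger OpenDJ database cache, and a larger JVM heap. The property names, admin port (4444), and values are typical of OpenDJ 2.x but are assumptions to be verified against your own version and sized from your own test results.

# Raise the open file descriptor limit for the account running the directory
ulimit -n 65536

# Give the userRoot backend a larger share of the JVM heap for database cache
dsconfig set-backend-prop \
  --backend-name userRoot --set db-cache-percent:60 \
  --hostname localhost --port 4444 \
  --bindDN "cn=Directory Manager" --bindPassword password \
  --trustAll --no-prompt

# Increase the JVM heap by editing config/java.properties
# (e.g. start-ds.java-args=-server -Xms2g -Xmx2g) and then applying the change
bin/dsjavaproperties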

OpenAM Instance + Data Store(s)

Once you have optimized the data store(s), you can begin testing directly against OpenAM as it is configured to use those data store(s).  The previous tests established a performance baseline; any degradation introduced at this point will be due to OpenAM itself or the environment (operating system, Java container) where it has been deployed.

[Diagram: testing OpenAM together with the data store(s)]

But how can you test an OpenAM instance without introducing the application that it is protecting?  One way is to generate a series of authentications and authorizations using direct interfaces such as the OpenAM API or REST calls.  I prefer to use REST calls as this is the easiest to implement.

There are browser based applications such as Postman that are great for functional testing, but these are not easily scriptable.  As such, I lean towards a shell or Perl script containing a loop of cURL commands.
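
A minimal sketch of such a script is shown below. It assumes the JSON authentication endpoint found in OpenAM 11.x, a deployment URI of /openam, and a set of test accounts named user.1 through user.1000 – all of which are placeholders that need to match your own environment.

#!/bin/sh
# Authenticate each test user in turn against OpenAM's REST interface and
# print the user name, HTTP status code, and total request time
OPENAM_URL="https://openam.example.com/openam"
for i in $(seq 1 1000); do
  curl -s -o /dev/null \
       -w "user.${i} %{http_code} %{time_total}\n" \
       --request POST \
       --header "X-OpenAM-Username: user.${i}" \
       --header "X-OpenAM-Password: password" \
       --header "Content-Type: application/json" \
       "${OPENAM_URL}/json/authenticate"
done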

Note: You should use the same authentication and search operations in your cURL commands to be sure that you are making a fair comparison between the standalone database testing and the introduction of OpenAM.

You should expect some decrease in performance when the OpenAM server is introduced, but it should not be too drastic.  If you find that performance falls outside of your requirements, however, then you should consider tuning OpenAM in one of the following areas:

  • LDAP Configuration Settings (i.e. connections to the Configuration Server)
  • Session Settings (if you are hitting limitations)
  • JVM Settings (pay particular attention to garbage collection)
  • Cache Settings (size and time to live)

Details behind each of these areas can be found in the OpenAM Administration Guide.
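
For example, on Apache Tomcat the JVM settings are commonly placed in CATALINA_BASE/bin/setenv.sh. The values below are placeholders to be sized from your own test results rather than recommendations (and assume a Java 7-era JVM, where MaxPermSize still applies).

# Fixed 2 GB heap, larger permanent generation, CMS collector, and GC logging
CATALINA_OPTS="-Xms2g -Xmx2g -XX:MaxPermSize=256m \
  -XX:+UseConcMarkSweepGC \
  -verbose:gc -Xloggc:/var/log/tomcat/gc.log"
export CATALINA_OPTS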

You may also find that OpenAM’s interaction with the database(s) introduces searches (or other operations) that you did not previously test for.  This may require you to update your database(s) to account for this and restart your performance testing.

Note:  Another tool I have started playing with is the Java Application Monitor (aka JAMon).  While this tool is typically used to monitor a Java application, it provides some useful information to help identify bottlenecks related to database access, file IO, and garbage collection.

Application + OpenAM Instance + Data Store(s)

Once you feel comfortable with the performance delivered by OpenAM and its associated data store(s), it is time to introduce the final component – the protected application, itself.

[Diagram: testing the protected application, OpenAM, and the data store(s) together]

This will differ quite a bit based on how you are protecting your application (for instance, policy agents will behave differently from OAuth2/OpenID Connect or SAML2) but this does provide you with the information you need to determine if you can meet your performance requirements in a production deployment.

If you have optimized everything up to this point, then the combination of all three components provides a full end-to-end test of the entire system.  At this stage, network latency is the most likely remaining factor affecting performance.

To perform a full end-to-end test of all components, I prefer to use Apache JMeter.  You configure JMeter to use a predefined set of credentials, authenticate to the protected resource, and look for specific responses from the server; when those responses arrive, JMeter acts according to how you have configured it.  This tool allows you to generate a load against OpenAM from login to logout and anything in between.
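
Once the test plan has been built in the JMeter GUI, the load itself is best generated in non-GUI mode. The test plan and result file names below are placeholders; -n (non-GUI), -t (test plan), -l (results log), and -R (remote engines) are standard JMeter options.

# Run the login-to-logout test plan headless and record the results
jmeter -n -t openam_login_logout.jmx -l results.jtl

# Drive the same plan from several remote JMeter engines to spread the client
# load (see "Other Considerations" below)
jmeter -n -t openam_login_logout.jmx -l results.jtl -R loadgen1,loadgen2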

Other Considerations

Keep in mind that any time you introduce a monitoring tool into a testing environment, the tool itself can impact performance.  So while the numbers you receive are useful, they are not altogether accurate.  There may be some slight performance degradation (due to the introduction of the tool) that your users will never see.

You should also be aware that the client machine (where the load generation tools are installed) may become a bottleneck if you are not careful.  Consider distributing your performance testing tools across multiple client machines so that the client environment does not become the limiting factor.

Summary

Like many other areas in our field, performance testing an OpenAM deployment may be considered as much an art as it is a science.  There may be as many methods for testing as there are consultants, and each varies based on the tools they use.  The information contained here is just one approach to performance testing – one that I have used successfully in our deployments.

What methods have you used?  Feel free to share in the comments, below.

Advice to CIOs for High Exposure Projects

September 14, 2009

I read an article in CIO Magazine about the plight of today’s CIOs when multi-million dollar, multi-year projects go awry. The article, entitled “The CIO Scapegoat”, indicates that it is unfair to hold the IT department completely responsible when there are so many other business units that contribute to a project’s demise. In many cases, the CIO takes the fall for the failure and, as a direct result, is either demoted, moved into a different organization, or let go altogether.

The article goes on to provide advice to CIOs who are beginning such undertakings. First and foremost, large, complex projects should be broken up into “bite-sized” chunks and proper expectations of what will be delivered in each “mini-project” should be set – and agreed upon – with the various stakeholders.

I could not agree more with this statement and find it most concerning that this is not a more common practice within the IT industry. In our world of rapid prototyping (turned production) and just-in-time development, to think that you could perform a multi-year project without implementing several checkpoints along the way is simply insane. This may be one of the reasons why the average life-span of a CIO within the same company is only two years.

CIOs who agree to perform projects under such conditions really need to read my previous blog post entitled “Lessons Learned from Enterprise Identity Management Projects”. While it was written mainly for enterprise identity projects, it has direct applicability to any enterprise project. In that article I directly address specific points about expectation setting and bite-sized chunks (did CIO Magazine read my blog on this?) and, by taking my advice to heart, the average CIO might be able to extend their stay.

Opinions About the Federal Government’s Identity Initiative

September 11, 2009

Interesting read. This is essentially a WebSSO initiative with authentication based on CAC type ID cards or OpenID.

The CAC type of implementation (ID cards) is not practical as it requires everyone to have a card reader on their PC in order to do business with the government. I don’t see this happening anytime soon.

I understand that there are several holes in the OpenID initiative. I wonder if they have been fixed (I wonder if it matters).

Either way, Sun’s OpenSSO initiative is well positioned as it allows OpenID as a form of authentication. The fact that the government is looking at open source for this (OpenID) bodes well for OpenSSO.

Link to article on PCmag.com:

http://blogs.pcmag.com/securitywatch/2009/09/federal_government_starts_iden.php.

Text version of the article:

Federal Government Starts Identity Initiative

As part of a general effort of the Obama administration to make government more accessible through the web, the Federal government, through the GSA (Government Services Administration), is working to standardize identity systems to hundreds of government web sites. The two technologies being considered are OpenID and Information Cards (InfoCards). The first government site to implement this plan will be the NIH (National Institutes of Health).

OpenID is a standard for “single sign-on”. You may have noticed an option on many web sites, typically blogs, to log on with an OpenID. This ID would be a URI such as john_smith.pip.verisignlabs.com, which would be John Smith’s identifier on VeriSign’s Personal Identity Portal. Many ISPs and other services, such as AOL and Yahoo!, provide OpenIDs for their users. When you log on with your OpenID the session redirects to the OpenID server, such as pip.verisignlabs.com. This server, called an identity provider, is where you are authenticated, potentially with stricter measures than just a password. VeriSign is planning to add 2-factor authentication for example. Once authenticated or not, the result is sent back to the service to which you were trying to log in, also known as a relying party.

Information Cards work differently. The user presents a digital identity to a relying party. This can be in a number of forms, from a username/password to an X.509 certificate. IDs can also be managed by service providers who can also customize their authentication rules.

The OpenID Foundation and Information Card Foundation have a white paper which describes the initiative. Users will be able to use a single identity to access a wide variety of government resources, but in a way which preserves their privacy. For instance, there will be provision for the identity providers to supply each government site with a different virtual identity managed by the identity provider, so that the user’s movements on different government sites cannot be correlated.

Part of the early idea of OpenID was that anyone could make an identity provider and that everyone will trust everyone else’s identity provider, but this was never going to work on a large scale. For the government there will be a white list of some sort that will consist of certified identity providers who meet certain standards for identity management, including privacy protection.

Directory Servers vs Relational Databases

August 14, 2008

An interesting question was posed on LinkedIn that asked, “If you were the architect of LinkedIn, MySpace, Facebook or other social networking sites and wanted to model the relationships amongst users and had to use LDAP, what would the schema look like?”

You can find the original post and responses here.

After reading the responses from other LinkedIn members, I felt compelled to add my proverbial $.02.

Directory Servers are simply special purpose data repositories. They are great for some applications and not so great for others. You can extend the schema and create a tree structure to model just about any kind of data for any type of application. But just because you “can” do something does not mean that you “should” do it.

The question becomes “Should you use a directory server or should you use a relational database?” For some applications a directory server would be a definite WRONG choice, for others it is clearly the RIGHT one, and for yet others, the choice is not so clear. So, how do you decide?

Here are some simple rules of thumb that I have found work for me:

1) How often does your data change?

Keep in mind that directory servers are optimized for reads — this oftentimes comes at the expense of write operations. The reason is that directory servers typically implement extensive indexes that are tied to schema attributes (which, by the way, are tied to the application fields). So the question becomes, how often do these attributes change? If they change often, then a directory server may not be the best choice (as you would be constantly rebuilding the indexes). If, however, they are relatively static, then a directory server would be a great choice.

2) What type of data are you trying to model?

If your data can be described as an attribute:value pair (e.g., name:Bill Nelson), then a directory server would be a good choice. If, however, your data is not so discrete, then a directory server should not be used. For instance, uploads to YouTube should NOT be kept in a directory server. User profiles in LinkedIn, however, would be.

3) Can your data be modeled in a hierarchical (tree-like) structure?

Directory servers implement a hierarchical structure for data modeling (similar to a file system layout). A benefit of a directory server is the ability to apply access control at a particular point in the tree and have it apply to all child elements in the tree structure. Additionally, you can start searching at a lower point in the tree (a child element) and improve your search performance (much like selecting the proper starting point for the Unix “find” command). Relational databases cannot do this; you have to search all entries in the table. If your data lends itself to a hierarchical structure, then a directory server might be a good choice.
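
A quick way to see this in practice is to point a standard ldapsearch at a child branch of the tree rather than at the directory root; the host, credentials, and DNs below are placeholders only.

# Search only the ou=People subtree instead of the entire directory
ldapsearch -x -H ldap://localhost:389 \
           -D "cn=Directory Manager" -w password \
           -b "ou=People,dc=example,dc=com" -s sub \
           "(sn=Nelson)" cn mail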

I am a big fan of directory servers and have architected/implemented projects that sit 100% on top of a directory, 100% on top of relational databases, and a hybrid of both. Directory servers are extremely fast, flexible, scalable, and able to handle the type of traffic you see on the Internet very well. Their ability to implement chaining, referrals, web services, and a flexible data modeling structure make them a very nice choice as a data repository for many applications, but I would not always lead with a directory server for every application.

So how do you decide which is best? It all comes down to the application, itself, and the way you want to access your data.

A site like LinkedIn might actually be modeled pretty well with a directory server as quite a bit of the content is actually static, lends itself well to an attribute:value pair, and can easily be modeled in a hierarchical structure. The user profiles for a site like Facebook or YouTube could easily be modeled in a directory server, but I would NOT attempt to reference the YouTube or Facebook uploads or the “what are you working on now” status with a directory server as it is constantly changing.

If you do decide to use a directory server, here are the general steps you should consider for development (your mileage may vary, but probably not too much):

  1. Evaluate the data fields that you want to access from your application
  2. Map the fields to existing directory server schema (extend if necessary)
  3. Build a hierarchical structure to model your data as appropriate (this is called the directory information tree, or DIT)
  4. Architect a directory solution based on where your applications reside throughout the world (do you need one, two, or multiple directories?) and then determine how you want your data to flow through the system (chaining, referrals, replication)
  5. Implement the appropriate access control for attributes or the DIT in general
  6. Implement an effective indexing strategy to increase performance
  7. Test, test, test
Lessons Learned from Enterprise Identity Management Projects

August 1, 2008

I have been implementing and/or managing identity-related projects for over 10 years now and I can say, from experience, that the biggest problem with any Identity Management project can be summed up in one word: EXPECTATIONS.

It does not matter whether you are tackling an identity project for compliance, security or cost-reduction reasons. You need to have proper expectations of what can be realistically accomplished within a reasonable timeframe and those expectations need to be shared among all team members and stakeholders.

Projects that fail to achieve a customer’s expectations do so because those expectations were either not validated or were not shared between all parties involved. When expectations are set (typically in a statement of work), communicated (periodic reports), and then reset if necessary (change orders), then the customer is much happier with the project results.

Here are a few lessons I have learned over the years. While they apply to major projects in general, they are especially true of identity-related projects.

1) Projects MUST be implemented in bite-sized chunks.

Identity projects are enterprise-wide projects; you should create a project roadmap that consists of multiple “mini” projects that can demonstrate an immediate ROI. The joke is, “How do you eat an elephant? One bite at a time.” To achieve success with identity projects, you should implement them one bite at a time and have demonstrable/measurable success after each bite.

2) The devil is in the data.

Using development/test data that is not representative of production data will kill you in the end and cause undue rework when going into production. Use data that is as close to production as possible.

3) Start with an analysis phase BEFORE scoping the entire project.

I HIGHLY recommend that the first project you undertake is an analysis. That will define the scope, from which you can get a better idea of how to divvy up the project into multiple bite-sized chunks and determine how much — and how long — each chunk will take. This allows you to effectively budget both time and money for the project(s).

Note: If a vendor gives you a price for an identity implementation without this, then run the other way. They are trying to simply get their foot in the door without first understanding your environment. If they say that the analysis phase is part of the project pricing, then get ready for an extensive barrage of change orders to the project.

4) Get everyone involved.

Keep in mind that these are enterprise-wide projects that affect multiple business units within your company. The project team should contain representatives from each organization that is being “touched” by the solution. This includes HR, IT, Help Desk, Training and above all, upper-level management (C-level).

(The following items apply if you are using external resources for project implementation.)

5) Find someone who has “been there and done that”.

Ask for references and follow up on them. More and more companies say that they can implement identity-related projects just because they have taken the latest course from the vendor. This is not enough. If training alone could give you the skills to implement the product, then you would have done the project yourself. You need to find someone who knows where the pitfalls are before you hit them.

6) Let the experts lead.

Don’t try to manage an Identity Management project unless you have done so before – and more than once. I have been involved with customers who have great project managers that have no experience with identity projects, yet they want to take ownership of the project and manage the resources. This is a recipe for disaster. Let the people who have done the implementation lead the project and allow your project manager to gain the knowledge for future phases.

7) Help build the car, don’t just take the keys.

Training takes place before, during, and after the project. Don’t expect to simply take “the keys” from the vendor once the project has been completed. You need to have resources actively involved throughout the project in order to take ownership. Otherwise you will not be able to support the product — or make changes to it — without assistance from the vendor. Ensure that you have your own team members actively engaged in the project – side by side with the external team. To do this, you have to ensure that they are not distracted by other work-related tasks.