It’s OK to Get Stressed Out with OpenAM

March 7, 2014

In fact, it’s HIGHLY recommended….

Performance testing and stress testing are closely related and are essential tasks in any OpenAM deployment.

[Figure: Performance testing]

When conducting performance testing, you are trying to determine how well your system performs when subjected to a particular load. A primary goal of performance testing is to determine whether the system that you just built can support your client base (as defined by your performance requirements).  Oftentimes you must tweak things (memory, configuration settings, hardware) in order to meet your performance requirements, but without executing performance tests, you will never know if you can support your clients until you are actually under fire (and by then, it may be too late).

Performance testing is an iterative process as shown in the following diagram:

[Figure: The iterative performance testing cycle]

Each of the states may be described as follows:

  1. Test – throw a load at your server
  2. Measure – take note of the results
  3. Compare – compare your results to those desired
  4. Tweak – modify the system to help achieve your performance results

During performance testing you may continue in this loop until such time that you meet your performance requirements – or until you find that your requirements were unrealistic in the first place.

Stress testing (aka “torture testing”) goes beyond normal performance testing in that the load you place on the system intentionally exceeds the anticipated capacity.  The goal of stress testing is to determine the breaking point of the system and observe the behavior when the system fails.

[Figure: Load testing vs. stress testing]

Stress testing allows you to create contingency plans for those ‘worst-case scenarios’ that will eventually occur (thanks to Mr. Murphy).

Before placing OpenAM into production you should test to see if your implementation meets your current performance requirements (concurrent sessions, authentications per second, etc.) and have a pretty good idea of where your limitations are.  The problem is that an OpenAM deployment comprises multiple servers, each of which may need to be tested (and tuned) separately.  So how do you know where to start?

When executing performance and stress tests in OpenAM, there are three areas where I like to place my focus: 1) the protected application, 2) the OpenAM server, and 3) the data store(s). Testing the system as a whole may not provide enough information to determine where problems may lie and so I prefer to take an incremental approach that tests each component in sequence.  I start with the data stores (authentication and user profile databases) and work my way back towards the protected application – with each iteration adding a new component.

[Figure: The incremental testing approach]

Note:  It should go without saying that the testing environment should mimic your production environment as closely as possible.  Any deviation may cause your test results to be skewed and provide inaccurate data.

Data Store(s)

An OpenAM deployment may consist of multiple data stores – those that are used for authentication (Active Directory, OpenDJ, Radius Server, etc.) and those that are used to build a user’s profile (LDAP and RDBMS).  Both of these are core to an OpenAM deployment and while they are typically the easiest to test, a misconfiguration here may have a pretty big impact on overall performance.  As such, I start my testing at the database layer and focus only on that component.

[Figure: Testing the data store(s) in isolation]

Performance of an authentication database can be measured by the average number of authentications that occur over a particular period of time (seconds, minutes, hours) and the easiest way to test these types of databases is to simply perform authentication operations against them.

You can write your own scripts to accomplish this, but there are many freely available tools that can be used as well.  One tool that I have used in the past is the SLAMD Distributed Load Generation Engine.  SLAMD was designed to test directory server performance, but it can be used to test web applications as well.  Unfortunately, SLAMD is no longer being actively developed, but you can still download a copy from http://dl.thezonemanager.com/slamd/.

A tool that I have started using to test authentications against an LDAP server is authrate, which is included in ForgeRock’s OpenDJ LDAP Toolkit.  Authrate allows you to stress the server and display some really nice statistics while doing so.  The authrate command line tool measures bind throughput and response times and is perfect for testing all sorts of LDAP authentication databases.
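As a rough sketch, an authrate run might look like the following; the host, port, bind DN pattern, and password are placeholders, and the exact flags may vary with your version of the toolkit:

# Bind repeatedly as random users (user.0 through user.1999); the %d in
# the bind DN is replaced by values from the -g generator.
authrate -h opendj.example.com -p 1389 \
  -D "uid=user.%d,ou=people,dc=example,dc=com" -w password \
  -c 4 -t 4 -g "rand(0,2000)"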

Performance of a user profile database is typically measured in search performance against that database.  If your user profile database can be searched using LDAP (i.e. Active Directory or any LDAPv3 server), then you can use searchrate – also included in the OpenDJ LDAP Toolkit.  searchrate is a command line tool that measures search throughput and response time.
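As a sketch, a searchrate run against a directory containing 2,000 sample users might look like this (again, the connection details and filter pattern are placeholders; consult the toolkit documentation for the exact options in your version):

# Search for random users by uid and return the mail attribute; the %d
# in the filter is replaced by values from the -g generator.
searchrate -h opendj.example.com -p 1389 \
  -b "ou=people,dc=example,dc=com" \
  -g "rand(0,2000)" "(uid=user.%d)" mail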

The following is sample output from the searchrate command:

-------------------------------------------------------------------------------
     Throughput                            Response Time                       
   (ops/second)                           (milliseconds)                      
recent  average  recent  average  99.9%  99.99%  99.999%  err/sec  Entries/Srch
-------------------------------------------------------------------------------
 188.7    188.7   3.214    3.214  306.364 306.364  306.364    0.0           0.0
 223.1    205.9   2.508    2.831  27.805  306.364  306.364    0.0           0.0
 245.7    219.2   2.273    2.622  20.374  306.364  306.364    0.0           0.0
 238.7    224.1   2.144    2.495  27.805  306.364  306.364    0.0           0.0
 287.9    236.8   1.972    2.368  32.656  306.364  306.364    0.0           0.0
 335.0    253.4   1.657    2.208  32.656  306.364  306.364    0.0           0.0
 358.7    268.4   1.532    2.080  30.827  306.364  306.364    0.0           0.0

The first two columns represent the throughput (number of operations per second) observed in the server. The first column contains the most recent value and the second column contains the average throughput since the test was initiated (i.e. the average of all values contained in column one).

The remaining columns represent response times with the third column being the most recent response time and the fourth column containing the average response time since the test was initiated. Columns five, six, and seven (represented by percentile headers) demonstrate how many operations fell within that range.

For instance, by the time we are at the 7th row, 99.9% of the operations are completed in 30.827 ms (5th column, 7th row), 99.99% are completed in 306.364 ms (6th column, 7th row), and 99.999% of them are completed within 306.364 ms (7th column, 7th row). The percentile rankings provide a good indication of the real system performance and can be interpreted as follows:

  • 1 out of 1,000 search requests exceeds 30 ms
  • 1 out of 100,000 requests exceeds 306 ms

Note: The values shown above were gathered on an untuned, resource-limited test system. Results will vary depending on the amount of JVM memory, the system CPU(s), and the data contained in the directory. Generally, OpenDJ systems can achieve much better performance than the values shown above.

There are several factors that may need to be considered when tuning authentication and user profile databases.  For instance, if you are using OpenDJ for your database you may need to modify your database cache, the number of worker threads, or even how indexing is configured in the server.  If your constraint is operating system based, you may need to increase the size of the JVM or the number of file descriptors.   If the hardware is the limiting factor, you may need to increase RAM, use high speed disks, or even faster network interfaces.  No matter what the constraint, you should optimize the databases (and database servers) before moving up the stack to the OpenAM instance.
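To make this concrete, here are hedged examples of a few of the changes mentioned above, assuming an OpenDJ server with a userRoot backend (the property values, ports, and credentials are placeholders and vary by version):

# Raise the file descriptor limit for the server process.
ulimit -n 65536

# Give the userRoot backend a larger share of the heap for its database
# cache (OpenDJ dsconfig).
dsconfig set-backend-prop --backend-name userRoot \
  --set db-cache-percent:50 --hostname localhost --port 4444 \
  --bindDN "cn=Directory Manager" --bindPassword password --no-prompt

# Increase the number of worker threads servicing the request queue.
dsconfig set-work-queue-prop --set num-worker-threads:32 \
  --hostname localhost --port 4444 \
  --bindDN "cn=Directory Manager" --bindPassword password --no-prompt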

OpenAM Instance + Data Store(s)

Once you have optimized the data store(s), you can begin testing directly against OpenAM as it is configured to use those data store(s).  Previous testing established a performance baseline; any degradation introduced at this point will be due to OpenAM or the environment (operating system, Java container) in which it has been configured.

[Figure: Testing OpenAM together with the data store(s)]

But how can you test an OpenAM instance without introducing the application that it is protecting?  One way is to generate a series of authentications and authorizations using direct interfaces such as the OpenAM API or REST calls.  I prefer to use REST calls as this is the easiest to implement.

There are browser-based applications such as Postman that are great for functional testing, but these are not easily scriptable.  As such, I lean towards a shell or Perl script containing a loop of cURL commands.
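The following is a minimal sketch of such a script, assuming an OpenAM 11-style REST authentication endpoint and a set of test users named user.0 through user.999 (the host, port, and credentials are placeholders):

#!/bin/sh
# Authenticate 1,000 test users against the OpenAM REST interface and
# print the JSON response (including the tokenId) for each login.
for i in $(seq 0 999); do
  curl -s -X POST \
    -H "X-OpenAM-Username: user.$i" \
    -H "X-OpenAM-Password: password" \
    -H "Content-Type: application/json" \
    "https://openam.example.com:8443/openam/json/authenticate"
  echo
done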

Note: You should use the same authentication and search operations in your cURL commands to be sure that you are making a fair comparison between the standalone database testing and the introduction of OpenAM.

You should expect some decrease in performance when the OpenAM server is introduced, but it should not be too drastic.  If you find that it falls outside of your requirements, however, then you should consider updating OpenAM in one of the following areas:

  • LDAP Configuration Settings (i.e. connections to the Configuration Server)
  • Session Settings (if you are hitting limitations)
  • JVM Settings (pay particular attention to garbage collection; see the sketch after this list)
  • Cache Settings (size and time to live)

Details behind each of these areas can be found in the OpenAM Administration Guide.
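For illustration only, JVM options for a Tomcat-hosted OpenAM instance might be adjusted along the following lines; the heap sizes and collector choice are assumptions that must be sized for your own environment, not recommendations:

# Fixed 2 GB heap, CMS collector, and GC logging so that collection
# pauses can be correlated with test results (Java 6/7-era options).
export CATALINA_OPTS="-Xms2048m -Xmx2048m -XX:MaxPermSize=256m \
  -XX:+UseConcMarkSweepGC -verbose:gc -Xloggc:/var/log/tomcat/gc.log"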

You may also find that OpenAM’s interaction with the database(s) introduces searches (or other operations) that you did not previously test for.  This may require you to update your database(s) to account for this and restart your performance testing.

Note:  Another tool I have started playing with is the Java Application Monitor (aka JAMon).  While this tool is typically used to monitor a Java application, it provides some useful information for finding bottlenecks when working with databases, file IO, and garbage collection.

Application + OpenAM Instance + Data Store(s)

Once you feel comfortable with the performance delivered by OpenAM and its associated data store(s), it is time to introduce the final component – the protected application, itself.

[Figure: Full end-to-end testing with the application, OpenAM, and the data store(s)]

This will differ quite a bit based on how you are protecting your application (for instance, policy agents will behave differently from OAuth2/OpenID Connect or SAML2) but this does provide you with the information you need to determine if you can meet your performance requirements in a production deployment.

If you have optimized everything up to this point, then the combination of all three components provides a full end-to-end test of the entire system.  At that stage, network latency is the most likely remaining factor affecting your performance results.

To perform a full end-to-end test of all components, I prefer to use Apache JMeter.  You configure JMeter to use a predefined set of credentials, authenticate to the protected resource, and look for specific responses from the server.  Based on those responses, JMeter acts according to how you have configured it.  This tool allows you to generate a load against OpenAM from login to logout and anything in between.
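Once you have built and saved a test plan in the JMeter GUI, the same plan can be run unattended from the command line; for example (the test plan and result file names below are placeholders):

# Run the test plan in non-GUI mode and record the results for analysis.
jmeter -n -t openam_login_logout.jmx -l results.jtl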

Other Considerations

Keep in mind that any time you introduce a monitoring tool into a testing environment, the tool itself can impact performance.  So while the numbers you receive are useful, they are not altogether accurate.  There may be some slight performance degradation (due to the introduction of the tool) that your users will never see.

You should also be aware that the client machine (where the load generation tools are installed) may become a bottleneck if you are not careful.  Consider distributing your performance testing tools across multiple client machines so that the client environment does not become the limiting factor.
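JMeter, for example, supports distributed testing out of the box; assuming the jmeter-server process is running on each load generation machine, something like the following drives one plan from multiple hosts:

# Drive the same test plan from two remote load generators.
jmeter -n -t openam_login_logout.jmx \
  -R loadgen1.example.com,loadgen2.example.com -l results.jtl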

Summary

Like many other areas in our field, performance testing an OpenAM deployment may be considered as much of an art as it is a science.  There may be as many methods for testing as there are consultants, and each varies based on the tools they use.  The information contained here is just one approach to performance testing – one that I have used successfully in our deployments.

What methods have you used?  Feel free to share in the comments, below.

Directory Servers vs Relational Databases

August 14, 2008

An interesting question was posed on LinkedIn that asked, “If you were the architect of LinkedIn, MySpace, Facebook or other social networking sites and wanted to model the relationships amongst users and had to use LDAP, what would the schema look like?”

You can find the original post and responses here.

After reading the responses from other LinkedIn members, I felt compelled to add my proverbial $.02.

Directory Servers are simply special purpose data repositories. They are great for some applications and not so great for others. You can extend the schema and create a tree structure to model just about any kind of data for any type of application. But just because you “can” do something does not mean that you “should” do it.

The question becomes “Should you use a directory server or should you use a relational database?” For some applications a directory server would be a definite WRONG choice, for others it is clearly the RIGHT one, and for yet others, the choice is not so clear. So, how do you decide?

Here are some simple rules of thumb that I have found work for me:

1) How often does your data change?

Keep in mind that directory servers are optimized for reads; this oftentimes comes at the expense of write operations. The reason is that directory servers typically implement extensive indexes that are tied to schema attributes (which, by the way, are tied to the application fields). So the question becomes: how often do these attributes change? If they change frequently, then a directory server may not be the best choice (as you would be constantly rebuilding the indexes). If, however, they are relatively static, then a directory server would be a great choice.

2) What type of data are you trying to model?

If your data can be described as an attribute:value pair (e.g., name: Bill Nelson), then a directory server would be a good choice. If, however, your data is not so discrete, then a directory server should not be used. For instance, uploads to YouTube should NOT be kept in a directory server. User profiles in LinkedIn, however, would be.
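For example, a user profile built from discrete attribute:value pairs can be added to a directory with a few lines of LDIF; the suffix and connection details below are placeholders:

# Add a user entry composed entirely of discrete attribute:value pairs.
ldapmodify -h ldap.example.com -p 389 \
  -D "cn=Directory Manager" -w password -a <<EOF
dn: uid=bnelson,ou=people,dc=example,dc=com
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
uid: bnelson
cn: Bill Nelson
sn: Nelson
mail: bnelson@example.com
EOF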

3) Can your data be modeled in a hierarchical (tree-like) structure?

Directory servers implement a hierarchical structure for data modeling (similar to a file system layout). A benefit of a directory server is the ability to apply access control at a particular point in the tree and have it apply to all child elements in the tree structure. Additionally, you can start searching at a lower point (a child entry) and improve your search performance (much like selecting the proper starting point for the Unix “find” command). Relational databases cannot do this; you have to search all entries in the table. If your data lends itself to a hierarchical structure, then a directory server might be a good choice.
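For example, the same filter can be evaluated against the entire tree or confined to a single branch simply by changing the search base (host, port, and suffix are placeholders):

# Search the entire tree from the suffix...
ldapsearch -h ldap.example.com -p 389 -b "dc=example,dc=com" "(sn=Nelson)"

# ...or confine the search to one branch, reducing the entries considered.
ldapsearch -h ldap.example.com -p 389 -b "ou=people,dc=example,dc=com" "(sn=Nelson)"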

I am a big fan of directory servers and have architected/implemented projects that sit 100% on top of a directory, 100% on top of relational databases, and a hybrid of both. Directory servers are extremely fast, flexible, scalable, and able to handle the type of traffic you see on the Internet very well. Their ability to implement chaining, referrals, web services, and a flexible data modeling structure makes them a very nice choice as a data repository for many applications, but I would not always lead with a directory server for every application.

So how do you decide which is best? It all comes down to the application, itself, and the way you want to access your data.

A site like LinkedIn might actually be modeled pretty well with a directory server, as quite a bit of the content is actually static, lends well to an attribute:value pair, and can easily be modeled in a hierarchical structure. The user profiles for a site like Facebook or YouTube could easily be modeled in a directory server, but I would NOT attempt to store the YouTube or Facebook uploads or the “what are you working on now” status in a directory server, as that data is constantly changing.

If you do decide to use a directory server, here are the general steps you should consider for development (your mileage may vary, but probably not too much):

  1. Evaluate the data fields that you want to access from your application.
  2. Map the fields to existing directory server schema (extend if necessary).
  3. Build a hierarchical structure to model your data as appropriate (this is called the directory information tree, or DIT).
  4. Architect a directory solution based on where your applications reside throughout the world (do you need one, two, or multiple directories?) and then determine how you want your data to flow through the system (chaining, referrals, replication).
  5. Implement the appropriate access control for attributes or the DIT in general.
  6. Implement an effective indexing strategy to increase performance (see the sketch below).
  7. Test, test, test.
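As a hedged example of the indexing step, assuming an OpenDJ server with a userRoot backend, an equality index on the mail attribute might be created and built as follows (command names and options differ across directory products and versions):

# Define an equality index for the mail attribute on the userRoot backend.
dsconfig create-local-db-index --backend-name userRoot \
  --index-name mail --set index-type:equality \
  --hostname localhost --port 4444 \
  --bindDN "cn=Directory Manager" --bindPassword password --no-prompt

# Build the newly defined index so the server can use it (depending on
# the version, the backend may need to be offline while this runs).
rebuild-index --baseDN "dc=example,dc=com" --index mail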