Author Archive

A Review of the Grizzly Steppe (Russian Hacking) Report

December 31, 2016

 
I, like many, have heard the stories that the Russians hacked into sensitive applications/servers in an effort to compromise the US elections. That is a bold statement and if true, may justify the actions recently taken by the Obama administration. So it was with keen interest that I rushed to read the findings from the Joint Analysis Report (JAR-16-20296) between DHS and the FBI to see what evidence they had to substantiate these claims.

The full report may be found here:

[Link: Grizzly Steppe Joint Analysis Report (JAR-16-20296)]
The report makes the following claims:

“This document provides technical details regarding the tools and infrastructure used by the Russian civilian and military intelligence Services (RIS) to compromise and exploit networks and endpoints associated with the U.S. election…”

“Previous JARs have not attributed malicious cyber activity to specific countries or threat actors. However, public attribution of these activities to RIS is supported by technical indicators from the U.S. Intelligence Community, DHS, FBI, the private sector, and other entities.”

 
Based on this information the US felt like it had the smoking gun and definitive proof of the following:

  • The Who – the Russians were behind the attack
  • The Why – to affect the US elections in order to guide the outcome

With this information in hand, I continued reading to now learn about:

  • The How – how the attacks were performed
  • The Proof – the evidence to substantiate Who, Why, and How

 
The report describes the “How” as a two-pronged attack, as follows:
 

Hack #1 – Casting the Phish Net…

 
 
1. A general spearphishing attack was sent to more than 1,000 people, including, among others, several people from the U.S. Government.
 
Note: The number “1,000” is very specific, so it seems the government has some knowledge of the recipients – but they stop short of specifying whether those recipients were concentrated in a particular party. I would think that would be important to know if the purpose of the attack was to affect the US election.
 
2. The attack led to one person from a particular U.S. political party falling prey to the attack and opening an attachment containing malware. This led to a chain of events where the malware was able to:
 

  • Establish persistence on a “political party system”
  • “Escalate privileges”
  • “Enumerate Active Directory accounts”
  • “Exfiltrate email from several accounts through encrypted connections back through operational infrastructure”

 
Note: This all sounds really impressive, but what does it all mean? If you remove all the jargon (enumerate, exfiltrate, etc.) and put this in layman’s terms, it sounds like the following occurred:
 

  • Someone installed malware on their PC when they opened a file that they shouldn’t have
  • Somehow the malware was able to gain privileged access to Active Directory
  • The malware was able to perform a search against Active Directory
  • The results of the search returned several email accounts

 

With this information in mind, there are a few things I am curious about.

 
First, the malware is only able to impersonate the user on the operating system on which it was installed. I’m not sure how a “normal user” can have escalated privileges in Active Directory unless that user is an administrator with escalated privileges (which brings up a whole different conversation about administrators knowing better). So I am curious how the malware was able to “escalate privileges” on its own.
 
Second, if the user (hence the malware) was not an administrator and they were able to perform an unauthorized search against Active Directory, then that indicates that Active Directory authorization and/or limitations were not configured properly. It has been my experience that Active Directory is (by default) pretty well locked down. Is it possible that the default settings were “relaxed” a bit and therefore may have opened up a hole?
 
Finally, would I really need “escalated privileges” just to troll email accounts? Couldn’t I simply scan the Outlook address book to obtain this information? It seems like the approach described in the report would take a lot of effort to code and would have a limited chance of success. Wouldn’t the malware have to land on an administrator’s computer for this approach to work?
 
3. Either way, the end result was that APT29 was able to get a list of email addresses from Active Directory.
 
Fast forward almost a year (from summer 2015 to spring 2016) and this takes us to the second part of our story.
 

Hack #2 – Hooking the Phish…

 
1. In the second hack, a targeted spearphishing attack was launched against the same political party that was compromised in the first attack.
 
Note: It should be noted that while the first attack was general (casting a net, if you will), the second attack was targeted at a certain set of people, using specific information to more easily trick them. While the report doesn’t specifically say this, it is assumed that the attack targeted the email addresses obtained in the first attack.
 
Does this indicate that the political party in question was targeted because the end goal was to affect the election? If so, then this attack was planned almost a year in advance, when we really didn’t have a clear picture as to who the candidates would be from either party. Were the Russians hedging their bets in case a certain party (or a certain candidate) was found to be leading? It seems more plausible that the second attack was launched against a certain set of users as a target of opportunity than anything else.
 
2. This spearphishing attack tricked multiple people into “changing their passwords through a fake webmail domain hosted by APT28”.
 
3. Upon initial login, APT28 was able to obtain the “real” credentials of users associated with the political party in question.
 
4. With these credentials in hand, APT28 was able to log into the real email server and access content (emails, attachments, etc.). The report goes on to say that this information was subsequently “leaked to the press and publicly disclosed.”
 

Where’s the Smoking Gun?

 
While the report is somewhat interesting, it does not provide the “smoking gun” that was anticipated. The report does provide a list of 48 hacker names, among which APT28 and APT29 are included. The title of the table is “Reported Russian Military and Civilian Intelligence Services (RIS)” but there is nothing more than that to introduce the table or to tell us anything about the names it contains. Am I supposed to jump to the conclusion that because APT28 and APT29 are listed, this is definitive proof that:
 

  • they are the ones behind these attacks
  • no one else has attempted to use these names as their hacking alias
  • they specifically targeted a particular political party
  • their intent was to affect the US election
  • and most importantly, they are “state sponsored”

 
The last item is one of the most important as the administration has chosen to take action against Russia (the state) as if they sanctioned the attacks. If that is true then the need for a smoking gun becomes infinitely more important and that information is simply not provided. Going back to a statement made early on in the report,
 

“Previous JARs have not attributed malicious cyber activity to specific countries or threat actors. However, public attribution of these activities to RIS is supported by technical indicators from the U.S. Intelligence Community, DHS, FBI, the private sector, and other entities.”

 
the government has made it clear that it is stepping outside of normal protocol by publicly naming the attacker in the JAR. But they don’t provide any information to back up their claim. Nor is there anything specifically that indicates that this had anything to do with an attempt to affect the outcome of the US election; in fact, the information presented may lead one to believe the contrary.
 
In general, the report lacks information and forces us to accept the government’s assertion of the Who (the Russians) and the Why (to affect the election) without providing the Proof. Maybe the government has more information that they are not sharing, but to ask me to simply trust without verifying is asking me to trust too much.

Performing Bulk Operations in OpenIDM

April 19, 2016

 
OpenIDM does not support bulk operations out of the box. One way to do this, however, is to obtain a list of IDs that you want to perform an operation on and then loop through the list, performing the desired operation on each ID.
 

Yes, this is a hack, but let’s be honest, isn’t life just one big set of hacks when you think about it?

 
Here are the steps.

Suppose, for instance, that you want to delete all managed users in OpenIDM that have a last name of “Nelson”. The first step is to obtain a list of those users, which you can easily do using a cURL command and an OpenIDM query filter as follows:

curl -u openidm-admin:openidm-admin 'http://localhost:8080/openidm/managed/user?_queryFilter=(sn+eq+"Nelson")&_fields=_id&_prettyPrint=true'

This returns a listing of all managed objects that match the filter as follows.

 
{
  "result" : [ {
    "_id" : "ed979deb-2da2-4fe1-a309-2b7e9677d931",
    "_rev" : "5"
  },
  {
    "_id" : "581c2e13-d7c4-4fff-95b8-2d1686ef5b9c",
    "_rev" : "1"
  },
  {
    "_id" : "1295d5db-c6f8-4108-9842-06c4cde0d4eb",
    "_rev" : "3"
  } ],
  "resultCount" : 3,
  "pagedResultsCookie" : null,
  "totalPagedResultsPolicy" : "NONE",
  "totalPagedResults" : -1,
  "remainingPagedResults" : -1
}

 
But most of the data returned is extraneous for the purposes of this exercise; we only want the “_id” values for these users. To obtain this information, you can pipe the output into a grep command and redirect the output to a file as follows:

curl -u openidm-admin:openidm-admin 'http://localhost:8080/openidm/managed/user?_queryFilter=(sn+eq+"Nelson")&_fields=_id&_prettyPrint=true' | grep "_id" >> bulkOperationIDs.txt

This will produce a file that looks like this:

 
     "_id": "ed979deb-2da2-4fe1-a309-2b7e9677d931",
     "_id": "581c2e13-d7c4-4fff-95b8-2d1686ef5b9c",
     "_id": "1295d5db-c6f8-4108-9842-06c4cde0d4eb"

 
(yes, there are leading spaces in that output).

You are still not done yet, as you need to strip off all the extraneous stuff and get the file down to just the values of the “_id” attribute. You can probably devise a cool sed script, or find an awesome regular expression for the grep command, but it is just as easy to simply edit the file and perform a couple of global search/replace operations:

:1,$ s/^ *"_id": "//g
:1,$ s/",//g

The above example demonstrates a global search/replace operation in the “vi” editor – the best damn editor on God’s green earth!
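
If you would rather skip the editor entirely, the same cleanup can be done in one shot. The following is a sketch of two alternatives: the sed command cleans the grep output in place, while the jq variant (assuming the jq utility is installed; it is not part of the OpenIDM distribution) replaces the entire grep/edit sequence by extracting the IDs directly from the JSON response:

sed 's/^ *"_id" *: *"\([^"]*\)".*/\1/' bulkOperationIDs.txt > cleanIDs.txt

curl -u openidm-admin:openidm-admin 'http://localhost:8080/openidm/managed/user?_queryFilter=(sn+eq+"Nelson")&_fields=_id' | jq -r '.result[]._id' > bulkOperationIDs.txt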

 
However you approach it, the goal is to get the file to consist of only the IDs as follows:
 

 
ed979deb-2da2-4fe1-a309-2b7e9677d931
581c2e13-d7c4-4fff-95b8-2d1686ef5b9c
1295d5db-c6f8-4108-9842-06c4cde0d4eb

 
Now that you have this file, you can perform any operation you would like on it using a simple command line script tied into the appropriate cURL command. For instance, the following would perform a GET operation on all entries in the file (it is HIGHLY recommended that you do this before jumping right into a DELETE operation):

for i in `cat bulkOperationIDs.txt`; do curl -u openidm-admin:openidm-admin -X GET "http://localhost:8080/openidm/managed/user/$i?_prettyPrint=true"; done

Once you feel comfortable with the response, you can change the GET operation to a DELETE and kick the Nelsons to the proverbial curb.
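
For reference, the DELETE variant of that loop looks like the following sketch. The If-Match header is included because OpenIDM’s revision checking (MVCC) may reject a delete that does not specify a revision; the "*" value matches any revision:

for i in `cat bulkOperationIDs.txt`; do curl -u openidm-admin:openidm-admin -X DELETE -H "If-Match: *" "http://localhost:8080/openidm/managed/user/$i"; done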
 

Configuring OpenIDM on a Read Only File System in 10 Easy Steps

April 13, 2016

 

During the normal course of events, OpenIDM writes or updates various files on the file system to which it is installed.  This includes log files, audit files, process IDs, configuration files, and even cached information.  There are times, however, when you find yourself needing to deploy OpenIDM to a read only file system – one to which you cannot write typical data.

Fortunately, OpenIDM is flexible enough to allow such an installation; you just need to make some adjustments to various settings to accommodate this.

The following information provides details on how to configure OpenIDM on a read only file system.  It includes the types of information that OpenIDM writes by default, where it writes the information, and how you can alter the default behavior – and it does it in just 10 EASY Steps (and not all of them are even required!).
 

Note:  The following steps assume that you have a shared (mounted) folder at /idm, that you are using OpenIDM 4.0, that it is running as the frock user, and that the frock user has write access to the /idm folder.

 
1. Create external folder structure for logging, auditing, and internal repository information.

$ sudo mkdir -p /idm/log/openidm/audit
$ sudo mkdir /idm/log/openidm/logs
$ sudo mkdir -p /idm/cache/openidm/felix-cache
$ sudo mkdir -p /idm/run/openidm

2. Change ownership of the external folders to the frock user.

$ sudo chown -R frock:frock /idm/log/openidm
$ sudo chown -R frock:frock /idm/cache/openidm
$ sudo chown -R frock:frock /idm/run/openidm

 

Note: OpenIDM writes its audit data (recon, activity, access, etc.) to two locations by default: the filesystem and the repo. This is configured in the conf/audit.json file.

 

3. Open the conf/audit.json file and verify that OpenIDM is writing its audit data to the repo as follows (note: this is the default setting):

"handlerForQueries" : "repo",

4. Open the conf/audit.json file and redirect the audit data to the external folder as follows:

"config" : {
"logDirectory" : "/idm/log/openidm/audit"
},

After making these changes, this section of the audit.json will appear as follows:

{
    "auditServiceConfig" : {
        "handlerForQueries" : "repo",
        "availableAuditEventHandlers" : [
            "org.forgerock.audit.events.handlers.csv.CsvAuditEventHandler",
            "org.forgerock.openidm.audit.impl.RepositoryAuditEventHandler",
            "org.forgerock.openidm.audit.impl.RouterAuditEventHandler"
        ]
    },
    "eventHandlers" : [
        {
            "name" : "csv",
            "class" : "org.forgerock.audit.events.handlers.csv.CsvAuditEventHandler",
            "config" : {
                "logDirectory" : "/idm/log/openidm/audit"
            },
            "topics" : [ "access", "activity", "recon", "sync", "authentication", "config" ]
        },

As an alternative, you can disable the writing of audit data altogether by setting the enabled flag to false for the appropriate event handler(s). The following snippet from the audit.json demonstrates how to disable file-based auditing.

"eventHandlers" : [
{
"name" : "csv",
"enabled" : false,
"class" : "org.forgerock.audit.events.handlers.csv.CsvAuditEventHandler",
"config" : {
"logDirectory" : "/audit",
},
"topics" : [ "access", "activity", "recon", "sync", "authentication", "config" ]

 

Note: OpenIDM writes its logging data to the local filesystem by default. This is configured in the conf/logging.properties file.

 

5. Open the conf/logging.properties file and redirect OpenIDM logging data to the external folder as follows:

java.util.logging.FileHandler.pattern = /idm/log/openidm/logs/openidm%u.log

 

Note: OpenIDM caches its Felix files in the felix-cache folder beneath the local installation. This is configured in the conf/config.properties file.

 

6. Open the conf/config.properties file and perform the following steps:

 
a. Redirect OpenIDM Felix Cache to the external folder as follows:
 

# If this value is not absolute, then the felix.cache.rootdir controls
# how the absolute location is calculated. (See buildNext property)
org.osgi.framework.storage=${felix.cache.rootdir}/felix-cache

 
b. Define the relative path to the Felix Cache as follows:
 

# The following property is used to convert a relative bundle cache
# location into an absolute one by specifying the root to prepend to
# the relative cache path. The default for this property is the
# current working directory.
felix.cache.rootdir=/idm/cache/openidm

 
After making these changes, this section of the config.properties will appear as follows:
 

# If this value is not absolute, then the felix.cache.rootdir controls
# how the absolute location is calculated. (See buildNext property)
org.osgi.framework.storage=${felix.cache.rootdir}/felix-cache

# The following property is used to convert a relative bundle cache
# location into an absolute one by specifying the root to prepend to
# the relative cache path. The default for this property is the
# current working directory.
felix.cache.rootdir=/idm/cache/openidm

 

Note: During initial startup, OpenIDM generates a self-signed certificate and stores its security information in the keystore and truststore files as appropriate. This is not possible in a read-only file system, however. As such, you should generate a certificate ahead of time and make it part of your own deployment.

 

7. Update keystore and truststore files with certificate information and an updated password file as appropriate. The process you choose to follow will depend on whether you use a self-signed certificate or obtain one from a certificate authority.
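
If you need to generate the self-signed certificate yourself, the Java keytool utility can do so before the first startup. The following is a minimal sketch; the keystore path, store type, and alias are assumptions based on a default OpenIDM 4 layout (security/keystore.jceks, JCEKS, openidm-localhost), so adjust them to match your deployment:

$ cd /path/to/openidm
$ keytool -genkeypair -alias openidm-localhost -keyalg RSA -keysize 2048 -dname "CN=openidm.example.com,O=Example" -keystore security/keystore.jceks -storetype JCEKS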

 

Note: On Linux systems, OpenIDM creates a process ID file (PID file) on startup and removes the file during shutdown. The location of the PID file is defined in both the start script (startup.sh) and the shutdown script (shutdown.sh). The default location of the PID file is the $OPENIDM_HOME folder.

 

8. Open the startup.sh script and update the location of the process ID file by adding the following line immediately after the comments section of the file:

OPENIDM_PID_FILE="/idm/run/openidm/.openidm.pid"

9. Repeat Step 8 with the shutdown.sh script.

 

Note: OpenIDM reads configuration file changes from the file system by default. If your environment allows you to update these files during the deployment process of a new release, then no additional changes are necessary. However, if you truly have a read only file system (i.e. no changes even to configuration files), then you can disable the monitoring of these configuration files in the next step. Keep in mind, however, that this means all configuration changes must then be performed over REST.

 

10. Open the conf/system.properties file and disable monitoring and subsequent loading of configuration file changes by uncommenting the following line (remove the leading #):

openidm.fileinstall.enabled=false
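
With file monitoring disabled, configuration changes are pushed over REST instead. The following sketch updates the audit configuration from a local audit.json file; the endpoint follows OpenIDM's convention of mapping conf/audit.json to the config ID "audit", and the credentials are the default openidm-admin account:

curl -u openidm-admin:openidm-admin -X PUT -H "Content-Type: application/json" -d @audit.json "http://localhost:8080/openidm/config/audit"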

 
 

OpenIDM Property Value Substitution

October 15, 2015

 

Property value substitution can be a useful technique for customizing OpenIDM deployments across multiple environments.

Assume, for instance, that you have three environments (Development, Test, and Production).  Your OpenIDM deployment has been configured for one OpenDJ system resource, but the configuration properties for that resource are different in each environment.  The following diagram demonstrates the different host names for the OpenDJ instance in each environment.

[Diagram: OpenDJ host names per environment – dev.opendj.example.com (Development), test.opendj.example.com (Test), opendj.example.com (Production)]

In order to accommodate this, you would need to maintain three separate provisioner files – one for each environment you are operating in – and manage each file separately under source control.

[Image: three environment-specific copies of the provisioner file]

An alternative, however, is to create OpenIDM variables that customize the OpenDJ configuration parameters based on the environment.  The customizations are then centralized in a common file and selected based on settings contained in OpenIDM’s environment variables.

Here are the steps necessary to configure OpenIDM property values.

  1. Stop the OpenIDM instance.
  2. Edit the conf/boot.properties file and add environment specific variables as follows:

DEV.odjhost=dev.opendj.example.com

TEST.odjhost=test.opendj.example.com

PROD.odjhost=opendj.example.com

[Where DEV, TEST, and PROD are the OpenIDM environment variables set in OPENIDM_OPTS, odjhost is the name of the property used across the three environments, and the values (i.e. dev.opendj.example.com, test.opendj.example.com, and opendj.example.com) are the environment specific values.]

3.  Edit the provisioner file (provisioner.openicf-ODJ.json) and replace the value of the host property as follows:

"configurationProperties" : {
    "host" : "&{&{environment}.odjhost}",
    "port" : "1389",

Properties are referenced in OpenIDM with the &{property} construct.  When OpenIDM sees these references, it replaces them with the appropriate property value as defined in the boot.properties file.  The example shown above demonstrates nesting of property values; assuming that the environment variable is set to a value of TEST in OPENIDM_OPTS, the substitution occurs from the innermost to the outermost brackets as follows:

&{&{environment}.odjhost}  →  &{TEST.odjhost}  →  test.opendj.example.com

You now have a consistent set of configuration files (i.e. boot.properties and provisioner.openicf-ODJ.json) that can be deployed to all three environments (Development, Test, and Production); the only step remaining is to configure OPENIDM_OPTS in each environment.  This can be accomplished by setting the environment variable in your /etc/init.d/openidm or openidm/startup.sh scripts as follows:

OPENIDM_OPTS="-Xmx1024m -Xms1024m -Dfile.encoding=UTF-8 -Denvironment=TEST"

4.  Start the OpenIDM instance and look for any errors in the openidm0.log.0 log file.

Assuming that everything started without any problems, you can now verify that your environment-specific parameters have been set correctly by querying the configuration object for OpenDJ over REST (see the sketch below), or simply by looking at the connector configuration in the OpenIDM Admin Console.
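
The following sketch shows the REST query; it assumes OpenIDM's usual convention of mapping the provisioner.openicf-ODJ.json file to the config ID provisioner.openicf/ODJ. Note that, depending on the version, the response may show either the substituted host value or the raw &{} reference, so the Admin Console remains the surest check:

curl -u openidm-admin:openidm-admin "http://localhost:8080/openidm/config/provisioner.openicf/ODJ?_prettyPrint=true"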

[Screenshot: OpenDJ connector configuration in the OpenIDM Admin Console]

The Real Reason Oracle Dropped Sun Identity Manager

July 23, 2015

 

I always appreciate it when someone attempts to educate others about identity management and related technologies.  So when I saw the following presentation, it quickly caught my attention, as I was working with both products when the Oracle deal to purchase Sun went down.

 

Why Oracle Dropped Waveset Lighthouse and Went to Oracle Identity Manager (OIM)

 

 

Not to be too nitpicky, but there are quite a few errors in this presentation that I simply couldn’t ignore.

  • OID is not the acronym for “Oracle Identity Management”; it is the acronym for “Oracle Internet Directory” – Oracle’s LDAPv3 directory server. OIM is the acronym for “Oracle Identity Manager”.
  • SIM (“Sun Identity Manager”) was not a “suite of identity manager products” as you state. SIM was a data synchronization and provisioning product. SIM was part of the suite of Sun Identity Management products that also included Sun Directory Server Enterprise Edition (SDSEE), Sun Access Manager/OpenSSO, and Sun Role Manager (SRM).
  • It is stated that one reason that Oracle did not elect to continue with SIM was because it did not have a Web 2.0 UI. SIM version 9.0 (the version being developed when Oracle purchased Sun) did have a Web 2.0 UI. So this is not quite an accurate representation.
  • “Oracle IDM” is Oracle’s suite of identity management products, which includes Oracle Virtual Directory (OVD), Oracle Internet Directory (OID), Oracle Access Manager (OAM), and Oracle Identity Manager (OIM). The presentation uses “Oracle IDM” (and later, simply “IDM”) to refer specifically to Oracle Identity Manager, however. This is both confusing and misleading.
  • It is stated that “IDM allowed faster application on-boarding.” As an integrator of both OIM and SIM, I can honestly say that this is not a true statement. We could have simple SIM deployments up and running on the order of days or weeks and a production deployment in a month or two. OIM consistently took several months to deploy – which was great for a billable professional services firm, but not so great for the customer (who had to pay for those services).
  • It is implied that OIM was able to provision to the cloud, that SIM was not, and that this was a reason why Oracle chose to go with OIM. That is misleading, as SIM was able to provision to cloud applications as well. SIM also supported SPML (not a big fan, myself) and SCIM for provisioning to other identity platforms and cloud based applications.

The main reasons that Oracle chose to go with OIM over SIM were simply the deeper integration with Oracle products and their not wanting to alter the Oracle IDM roadmap. I was on the early calls with Oracle when they announced which products they would keep and which products they were getting rid of.  During those calls, they had their “politically correct” reasons as well as the “real” reasons, and it always came back to these two.

There was only one place where I saw Oracle forced into altering their position and they had to update their roadmap; this was with the SDSEE product.  Oracle made it very clear that the only product they wanted in Sun’s identity product line was Sun Role Manager (which later became Oracle Identity Analytics).  In fact, only a couple weeks after the purchase was made, Oracle had already set an end of life date for all identity products including SDSEE.  What Oracle hadn’t counted on was how well entrenched that product was across Sun’s major customers (including the US Government and major Telcos).  It wasn’t until the outcry from their customers was raised that Oracle “decided” to continue product development.

Purely from a technology perspective, if you are a company that has deployed a wide array of Oracle products, then it makes sense to go with OIM due to the deeper integration with Oracle products, but not so much if you are a heterogeneous company. In such cases, I have found other products to be more flexible than OIM and to provide much quicker deployment times at much lower costs.

Configuring OpenIDM Password Reset

June 1, 2015

 

ForgeRock OpenIDM is a powerful account management and data synchronization tool that provides many robust features out of the box.  Some of these features must be enabled, however, before they can be used.  One such feature allows a user to reset their password in the OpenIDM Web UI by responding to challenge questions.

The OpenIDM Web UI provides an out of the box experience for resetting an OpenIDM user’s password based on a user’s response to challenge questions. This functionality is disabled by default and must be enabled in the ui-configuration.json file before attempting to perform this operation in the WebUI.

The following instructions describe how to enable this functionality and test the password reset flow and have been tested with OpenIDM version 3.1.0.

 

Configure Password Reset

 

1.  In a terminal window, change to the OpenIDM configuration folder

 

$ cd openidm/conf

 

2.  Edit the ui-configuration.json file

 

3.  Modify the value of the “securityQuestions” attribute from “false” to “true” as follows:

 

{
    "configuration" : {
        "selfRegistration" : false,
        "securityQuestions" : true,
        "siteIdentification" : false,
        "lang" : "en",

 

4.  (OPTIONAL)  The default minimum length for responses to security questions is 16 characters.  This is defined by the securityAnswer policy in the openidm/conf/policy.json file.  You can modify the minimum number of characters associated with this policy by editing the minLength parameter in the minimum-length policyId (the snippet below shows the value reduced to 4):

 

"name" : "securityAnswer",
 "policies" : [
    {
              "policyId" : "required-if-configured",
              "params" : {
                     "configBase" : "ui/configuration",
                     "baseKey" : "configuration.securityQuestions",
                    "exceptRoles" : [
                             "system",
                             "openidm-admin"
                    ]
             }
 },
 {
              "policyId" : "minimum-length",
              "params" : {
                      "minLength" : 4
               }
 },

 

Note:  Modifications to files in the conf folder are hot swappable; this means that your changes are now in effect without the need to restart OpenIDM.  Assuming you have not made any errors in configuring either of these two files (you can confirm by looking for errors in the openidm/logs/openidm0.log.0 file), then you have now enabled password reset for the WebUI.  Before this feature can be used, however, a user must first enter responses to challenge questions.
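
You can also confirm the change over REST by reading the configuration object back. This is a sketch that assumes OpenIDM's convention of mapping the ui-configuration.json file to the config ID ui/configuration; look for "securityQuestions" : true in the response:

curl -u openidm-admin:openidm-admin "http://localhost:8080/openidm/config/ui/configuration?_prettyPrint=true"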

 

Enter Responses to Challenge Questions

 

5.  Go to the OpenIDM user interface (http://openidm.example.com:8080/openidmui/index.html#login/). You will now see a “Reset your password” link available in the login form.

 

[Screenshot: “Reset your password” link on the login form]

 

6.  Log in as an existing user (i.e. bill.nelson@example.com).

 

7.  Select the “Change Security Data” link.

 

8.  Enter your current password in the form provided and select the “Continue” button.

 

[Screenshot: current password form]

 

9.  Enter and confirm your password in the fields provided.

 

10.  Select a security question and provide an answer in the space provided.

 

[Screenshot: security question selection and answer form]

 

11.  Select the “Update” button and then log out of the user interface.

 

Note:  You can now test the password reset functionality.

 

Test Password Reset Functionality

 

12.  Return to the OpenIDM user interface (http://openidm.example.com:8080/openidmui/index.html#login/).

 

13.  Select the “Reset your password” link.

 

[Screenshot: “Reset your password” link on the login form]

 

14.  Enter your username (i.e. bill.nelson@example.com) and click the “Submit” button.

 

[Screenshot: forgotten password username form]

 

15.  You should see the security question that you selected in step 10 appear.  Provide the response that you entered in step 10 and click the “Submit” button.

 

 

Note: Case matters!

 

16.  If you responded to your security question properly, then you will now see form elements appear allowing you to enter and confirm a new password.

 

[Screenshot: challenge response and new password form]

 

 

17.  When you click the “Update” button, the new password will be applied to your OpenIDM object and you will be automatically logged in to the end user interface.

 

OpenDJ and the Fine Art of Impersonation

March 28, 2015

 

Directory servers are often used in multi-tier applications to store user profiles, preferences, or other information useful to the application.  Oftentimes the web application includes an administrative console to assist in the management of that data; allowing operations such as user creation or password reset.  Multi-tier environments pose a challenge, however, as it is difficult to determine the identity of the user that actually performed the operation as opposed to the user that simply showed up in the log file(s).

Consider the relationship between the user logging in to the web application and the interaction between the web application and a directory server such as OpenDJ.

 

[Diagram: multi-tier architecture – browser, web application, OpenDJ directory server]

 

There are two general approaches that many web applications follow when performing actions against the directory server; I will refer to these as Application Access and User Access.  In both scenarios, the user must first log in to the web application.  Their credentials may be validated directly against the directory server (using local authentication) or they may be accessing the web application using single sign-on.  In either pattern, the user must first prove his identity to the web application before they are allowed to perform administrative tasks.  The differences become apparent post authentication and can be found in the manner in which the web application integrates with the directory server to perform subsequent administrative tasks.

 

Note:  The following assumes that you are already familiar with OpenDJ access control.  If this is not the case, then it is highly advisable that you review the following:  OpenDJ Access Control Explained.

 

Approach 1:  Application Access

 

In the case of the Application Access approach, all operations against the directory server are performed as an application owner account configured in the directory server.  This account typically has a superset of the privileges required by all Web Application administrators in order to perform the tasks required of those users.  In this scenario, the Web Application binds to the directory server using its Web Application service account and performs the operation.  A quick look in the directory server log files demonstrates that all operations coming from the Web Application are performed by the service account and not the user who logged in to the Web Application.

 

[27/Mar/2015:16:37:40 +0000] BIND REQ conn=2053 op=0 msgID=1 version=3 type=SIMPLE dn="uid=WebApp1,ou=AppAccounts,dc=example,dc=com"

[27/Mar/2015:16:37:40 +0000] BIND RES conn=2053 op=0 msgID=1 result=0 authDN="uid=WebApp1,ou=AppAccounts,dc=example,dc=com" etime=1

[27/Mar/2015:16:37:40 +0000] SEARCH REQ conn=2053 op=1 msgID=2 base="ou=People,dc=example,dc=com" scope=wholeSubtree filter="(l=Tampa)" attrs="ALL"

[27/Mar/2015:16:37:40 +0000] SEARCH RES conn=2053 op=1 msgID=2 result=0 nentries=69 etime=2

 

While easiest to configure, one drawback to this approach is that you need to reconcile the directory server log files with the Web Application log files in order to determine the identity of the user performing the action.  This makes debugging more difficult.  Not all administrators have the same access rights, so another problem with this approach is that entitlements must be maintained and/or recognized in the Web Application and associated with Web Application users.  This increases complexity in the Web Application, as those relationships must be maintained in yet another database.  Finally, some security officers may find this approach to be insecure, as the entry appearing in the log files is not indicative of the user performing the actual operation.

 

Approach 2:  User Access

 

The User Access approach is an alternative where the Web Application impersonates the user when performing operations.  Instead of the Web Application binding with a general service account, it takes the credentials provided by the user, crafts a user-specific distinguished name, and then binds to the directory server with those credentials.  This approach allows you to manage access control in the directory server and the logs reflect the identity of the user that performed the operation.

 

[27/Mar/2015:17:01:01 +0000] BIND REQ conn=2059 op=0 msgID=1 version=3 type=SIMPLE dn="uid=bnelson,ou=Administrators,dc=example,dc=com"

[27/Mar/2015:17:01:01 +0000] BIND RES conn=2059 op=0 msgID=1 result=0 authDN="uid=bnelson,ou=Administrators,dc=example,dc=com" etime=1

[27/Mar/2015:17:40:40 +0000] SEARCH REQ conn=2059 op=1 msgID=2 base="ou=People,dc=example,dc=com" scope=wholeSubtree filter="(l=Tampa)" attrs="ALL"

[27/Mar/2015:17:40:40 +0000] SEARCH RES conn=2059 op=1 msgID=2 result=0 nentries=69 etime=2

 

A benefit to this approach is that entitlements can be maintained in the directory server itself.  This reduces the complexity of the application, but requires that you configure appropriate access controls for each user.  This can easily be performed at the group level, however, and even dynamically configured based on user attributes.  A drawback to this approach is that the Web Application is acting as if it were the user – which it is not.  The Browser is essentially the user, and the Browser is not connecting directly to the directory server.  So while the log files may reflect the user, they are somewhat misleading, as the connection will always be from the Web Application.  The other problem with this approach is that the user’s credentials must be cached within the Web Application in order to perform subsequent operations against the directory server.  One could argue that you could simply keep the connection between the Web Application and the directory server open, and that is certainly an option, but you would need to keep it open for the user’s entire session to prevent them from having to re-authenticate.  This could lead to performance problems if you have extended session durations, a large number of administrative users, or a number of concurrent sessions by each administrative user.

 

Proxy Control – The Hybrid Approach

 

There are both benefits and drawbacks to each of the previously mentioned approaches, but I would like to offer up an alternative proxy-based approach that is essentially a hybrid between the two.  RFC 4370 defines a proxied authorization control (2.16.840.1.113730.3.4.18) that allows a client (i.e. the Web Application) to request the directory server (i.e. OpenDJ) to perform an operation not based on the access control granted to the client, but based on another identity (i.e. the person logging in to the Web Application).

The proxied authorization control requires a client to bind to the directory server as itself, but it allows the client to impersonate another entry for a specific operation.  This control can be used in situations where the application is trusted but needs to perform operations on behalf of different users.  The fact that the client is binding to the directory server eliminates the need to cache the user’s credentials (or re-authenticate for each operation).  The fact that access is being determined based on that of the impersonated user means that you can centralize entitlements in the directory server and grant access based on security groups.  This is essentially the best of both worlds and keeps a smile on the face of your security officer (as if that were possible).

So how do you configure proxy authorization?  I am glad you asked.

 

Configuring Proxied Access

 

Before configuring proxied access, let’s return to the example of performing a search based on Application Access.  The following is an example of a command line search that can be used to retrieve information from an OpenDJ server.  The search operation uses the bindDN and password of the WebApp1 service account.

 

./ldapsearch -h localhost -D "uid=WebApp1,ou=AppAccounts,dc=example,dc=com" -w password -b "ou=People,dc=example,dc=com" "(l=Tampa)"

 

The response to this search would include all entries that matched the filter (l=Tampa) beneath the container (ou=People).  My directory server has been configured with 69 entries that match this search and as such, the OpenDJ access log would contain the following entries:

 

[27/Mar/2015:16:37:40 +0000] SEARCH REQ conn=2053 op=1 msgID=2 base="ou=People,dc=example,dc=com" scope=wholeSubtree filter="(l=Tampa)" attrs="ALL"

[27/Mar/2015:16:37:40 +0000] SEARCH RES conn=2053 op=1 msgID=2 result=0 nentries=69 etime=2

 

As previously mentioned, these are the results you would expect to see if the search was performed as the WebApp1 user.  So how can you perform a search impersonating another user?  The answer lies in the parameters used in the search operation.  The LDAP API supports a proxied search; you just need to determine how to access this functionality in your own LDAP client.

 

Note: I am using ldapsearch as the LDAP client for demonstration purposes.  This is a command line tool that is included with the OpenDJ distribution.  If you are developing a web application to act as the LDAP client, then you would need to determine how to access this functionality within your own development framework.

 

The OpenDJ search command includes a parameter that allows you to use the proxy authorization control.  Type ./ldapsearch --help to see the options for the ldapsearch command and look for the -Y or --proxyAs parameter as follows.

 

[Screenshot: ldapsearch help output showing the -Y / --proxyAs option]

 

Now perform the search again, but this time include the proxy control (without making any changes to the OpenDJ server).  You will be binding as the WebApp1 account, but using the -Y option to instruct OpenDJ to evaluate ACIs based on the following user:  uid=bnelson,ou=People,dc=example,dc=com.

 

./ldapsearch -h localhost -D "uid=WebApp1,ou=AppAccounts,dc=example,dc=com" -w password -Y "uid=bnelson,ou=People,dc=example,dc=com" -b "ou=People,dc=example,dc=com" "(l=Tampa)"

 

You should see the following response:

 

SEARCH operation failed

Result Code:  123 (Authorization Denied)

Additional Information:  You do not have sufficient privileges to use the proxied authorization control  The request control with Object Identifier (OID) "2.16.840.1.113730.3.4.18" cannot be used due to insufficient access rights

 

The corresponding entries in OpenDJ’s access log would be as follows:

 

[27/Mar/2015:10:47:18 +0000] SEARCH REQ conn=787094 op=1 msgID=2 base="ou=People,dc=example,dc=com" scope=wholeSubtree filter="(l=Tampa)" attrs="ALL"

[27/Mar/2015:10:47:18 +0000] SEARCH RES conn=787094 op=1 msgID=2 result=123 message="You do not have sufficient privileges to use the proxied authorization control  You do not have sufficient privileges to use the proxied authorization control" nentries=0 etime=1

 

The key phrase in these messages is the following:

 

You do not have sufficient privileges to use the proxied authorization control

 

The key word in that phrase is “privileges”; the WebApp1 service account does not have the appropriate privileges to perform a proxied search and, as such, the search operation is rejected.  The first step in configuring proxied access control is to grant proxy privileges to the Application Account.

 

Step 1:  Grant Proxy Privileges to the Application Account

 

The first step in allowing the WebApp1 service account to perform a proxied search is to give that account the proxied-auth privilege.  You can use the ldapmodify utility to perform this action as follows:

 

./ldapmodify -D "cn=Directory Manager" -w password
dn: uid=WebApp1,ou=AppAccounts,dc=example,dc=com
changetype: modify
add: ds-privilege-name
ds-privilege-name: proxied-auth

Processing MODIFY request for uid=WebApp1,ou=AppAccounts,dc=example,dc=com
MODIFY operation successful for DN uid=WebApp1,ou=AppAccounts,dc=example,dc=com

 

Now repeat the proxied search operation.

 

./ldapsearch -h localhost -D "uid=WebApp1,ou=AppAccounts,dc=example,dc=com" -w password -Y "uid=bnelson,ou=People,dc=example,dc=com" -b "ou=People,dc=example,dc=com" "(l=Tampa)"

 

Once again your search will fail, but this time it is for a different reason.

 

SEARCH operation failed

Result Code:  12 (Unavailable Critical Extension)

Additional Information:  The request control with Object Identifier (OID) "2.16.840.1.113730.3.4.18" cannot be used due to insufficient access rights

 

The corresponding entries in OpenDJ’s access log would be as follows:

 

[27/Mar/2015:11:39:17 +0000] SEARCH REQ conn=770 op=1 msgID=2 base="ou=People,dc=example,dc=com" scope=wholeSubtree filter="(l=Tampa)" attrs="ALL"

[27/Mar/2015:11:39:17 +0000] SEARCH RES conn=770 op=1 msgID=2 result=12 message="" nentries=0 authzDN="uid=bnelson,ou=People,dc=example,dc=com" etime=3

 

As discussed in OpenDJ Access Control Explained, authorization to perform certain actions may consist of a combination of privileges and ACIs.  You have granted the proxied-auth privilege to the WebApp1 service account, but it still needs an ACI to allow it to perform proxy-based operations.  For the purposes of this demonstration, we will use the following ACI to grant this permission.

 

(targetattr="*") (version 3.0; acl "Allow Proxy Authorization to Web App 1 Service Account"; allow (proxy) userdn="ldap:///uid=WebApp1,ou=AppAccounts,dc=example,dc=com";)

 

This ACI will be placed at the root suffix for ease of use, but you should consider limiting the scope of the ACI by placing it at the appropriate branch in your directory tree (and limiting the targetattr values).

 

Step 2:  Create a (Proxy) ACI for the Application Account

 

Once again, you can use the ldapmodify utility to update OpenDJ with this new ACI.

 

./ldapmodify -D "cn=Directory Manager" -w password
dn: dc=example,dc=com
changetype: modify
add: aci
aci: (targetattr="*") (version 3.0; acl "Allow Proxy Authorization to Web App 1 Service Account"; allow (proxy) userdn="ldap:///uid=WebApp1,ou=AppAccounts,dc=example,dc=com";)

Processing MODIFY request for dc=example,dc=com

MODIFY operation successful for DN dc=example,dc=com

 

Now repeat the proxied search a final time.

 

./ldapsearch -h localhost -D "uid=WebApp1,ou=AppAccounts,dc=example,dc=com" -w password -Y "uid=bnelson,ou=People,dc=example,dc=com" -b "ou=People,dc=example,dc=com" "(l=Tampa)"

 

This time you should see the results of the search performed correctly.  But how do you know that this was a proxied search and not simply one performed by the WebApp1 as before?  The clue is once again in the OpenDJ access log file.  Looking in this file, you will see the following entries:

 

[27/Mar/2015:11:40:23 +0000] SEARCH REQ conn=797 op=1 msgID=2 base="ou=People,dc=example,dc=com" scope=wholeSubtree filter="(l=Tampa)" attrs="ALL"

[27/Mar/2015:11:40:23 +0000] SEARCH RES conn=797 op=1 msgID=2 result=0 nentries=69 authzDN="uid=bnelson,ou=People,dc=example,dc=com" etime=1

 

The authzDN value contains the DN of the entry used for authorization purposes.  This is a clear indicator that access control was based on the uid=bnelson entry and not uid=WebApp1.

Still not convinced?  You can verify this by removing the rights for the uid=bnelson entry and running your search again.  Add the following ACI to the top of your tree.

 

(targetattr="*")(version 3.0;acl "Deny Access to BNELSON"; deny (all)(userdn = "ldap:///uid=bnelson,ou=People,dc=example,dc=com");)

 

Now run the search again.  This time, you will not see any errors, but you will also not see any entries returned.  While you are binding as the WebApp1 service account, for all intents and purposes, you are impersonating the uid=bnelson user when determining access rights.

 

Summary of Steps

 

The following steps should be performed when configuring OpenDJ for proxied access control (a consolidated command sketch follows the list).


  1. Create the Application Account in OpenDJ (i.e. WebApp1)
  2. Add the proxied-auth privilege to the Application Account
  3. Create an ACI allowing the Application Account to perform proxy operations
  4. Create a User Account in OpenDJ (i.e. bnelson)
  5. Configure ACIs for User Account as appropriate
  6. Test the configuration by performing a command line search using the proxied access control parameter.
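
For convenience, here is the consolidated sketch. It simply combines the privilege and ACI changes (steps 2 and 3) into a single ldapmodify call, followed by the verification search from step 6; proxy-setup.ldif is a hypothetical file name containing the two modifications shown earlier in this post:

./ldapmodify -D "cn=Directory Manager" -w password -f proxy-setup.ldif

where proxy-setup.ldif contains:

dn: uid=WebApp1,ou=AppAccounts,dc=example,dc=com
changetype: modify
add: ds-privilege-name
ds-privilege-name: proxied-auth

dn: dc=example,dc=com
changetype: modify
add: aci
aci: (targetattr="*") (version 3.0; acl "Allow Proxy Authorization to Web App 1 Service Account"; allow (proxy) userdn="ldap:///uid=WebApp1,ou=AppAccounts,dc=example,dc=com";)

./ldapsearch -h localhost -D "uid=WebApp1,ou=AppAccounts,dc=example,dc=com" -w password -Y "uid=bnelson,ou=People,dc=example,dc=com" -b "ou=People,dc=example,dc=com" "(l=Tampa)"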