
Archive for the ‘Identity Management’ Category

Performing Bulk Operations in OpenIDM

April 19, 2016

 
OpenIDM does not support bulk operations out of the box. One way to approximate them, however, is to obtain a list of the IDs you want to operate on and then loop through the list, performing the desired operation on each ID.
 

Yes this is a hack, but let’s be honest, isn’t life just one big set of hacks when you think about it?

 
Here are the steps.

Suppose, for instance, that you want to delete all managed users in OpenIDM that have a last name of “Nelson”. The first step is to obtain a list of those users, which you can easily do using a cURL command and an OpenIDM query filter as follows:

curl -u openidm-admin:openidm-admin 'http://localhost:8080/openidm/managed/user?_queryFilter=(sn+eq+"Nelson")&_fields=_id&_prettyPrint=true'

This returns a listing of all managed objects that match the filter as follows.

 
{
  "result" : [ {
    "_id" : "ed979deb-2da2-4fe1-a309-2b7e9677d931",
    "_rev" : "5"
  },
  {
    "_id" : "581c2e13-d7c4-4fff-95b8-2d1686ef5b9c",
    "_rev" : "1"
  },
  {
    "_id" : "1295d5db-c6f8-4108-9842-06c4cde0d4eb",
    "_rev" : "3"
  } ],
  "resultCount" : 3,
  "pagedResultsCookie" : null,
  "totalPagedResultsPolicy" : "NONE",
  "totalPagedResults" : -1,
  "remainingPagedResults" : -1
}

 
But most of the data returned is extraneous for the purposes of this exercise; we only want the “_id” values for these users. To obtain just those, you can pipe the output through a grep command and redirect the results to a file as follows:

curl -u openidm-admin:openidm-admin 'http://localhost:8080/openidm/managed/user?_queryFilter=(sn+eq+"Nelson")&_fields=_id&_prettyPrint=true' | grep "_id" >> bulkOperationIDs.txt

This will produce a file that looks like this:

 
     "_id": "ed979deb-2da2-4fe1-a309-2b7e9677d931",
     "_id": "581c2e13-d7c4-4fff-95b8-2d1686ef5b9c",
     "_id": "1295d5db-c6f8-4108-9842-06c4cde0d4eb"

 
(yes, there are leading spaces in that output).

You are still not done, as you need to strip off all the extraneous stuff and get the file down to just the values of the “_id” attribute. You can probably devise a cool sed script, or find an awesome regular expression for the grep command, but it is just as easy to edit the file and perform a couple of global search/replace operations:

:1,$ s/ "_id": "//g
:1,$ s/",//g

The above example demonstrates a global search/replace operation in the “vi” editor – the best damn editor on God’s green earth!
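If you would rather not leave the command line, here is a sketch of that same cleanup using sed, plus a jq alternative that skips the grep-and-edit dance entirely by pulling the IDs straight out of the JSON response (this assumes you have jq installed; adjust the patterns if your output is formatted differently):

# strip everything but the ID value from each line grep produced
sed 's/.*"_id"[^"]*"\([^"]*\)".*/\1/' bulkOperationIDs.txt > cleanIDs.txt

# or extract the IDs directly from the query response with jq
curl -u openidm-admin:openidm-admin 'http://localhost:8080/openidm/managed/user?_queryFilter=(sn+eq+"Nelson")&_fields=_id' | jq -r '.result[]._id' > bulkOperationIDs.txt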

 
However you approach it, the goal is to get the file to consist of only the IDs as follows:
 

 
ed979deb-2da2-4fe1-a309-2b7e9677d931
581c2e13-d7c4-4fff-95b8-2d1686ef5b9c
1295d5db-c6f8-4108-9842-06c4cde0d4eb

 
Now that you have this file, you can perform any operation you like on its entries by tying a simple command-line loop to the appropriate cURL command. For instance, the following performs a GET operation on all entries in the file (it is HIGHLY recommended that you do this before jumping right into a DELETE operation):

for i in `cat bulkOperationIDs.txt`; do curl -u openidm-admin:openidm-admin -X GET "http://localhost:8080/openidm/managed/user/$i?_prettyPrint=true"; done

Once you feel comfortable with the response, you can change the GET operation to a DELETE and kick the Nelsons to the proverbial curb.
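For the record, the DELETE version of the loop looks something like the following sketch; note that, depending on your OpenIDM version, you may need to pass an If-Match header in order to delete an object without specifying its revision:

for i in `cat bulkOperationIDs.txt`; do curl -u openidm-admin:openidm-admin -X DELETE -H "If-Match: *" "http://localhost:8080/openidm/managed/user/$i"; done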
 

Configuring OpenIDM on a Read Only File System in 10 Easy Steps

April 13, 2016

 

During the normal course of events, OpenIDM writes or updates various files on the file system on which it is installed. These include log files, audit files, process IDs, configuration files, and even cached information. There are times, however, when you find yourself needing to deploy OpenIDM to a read-only file system – one to which you cannot write such data.

Fortunately, OpenIDM is flexible enough to allow such an installation; you just need to make some adjustments to various settings to accommodate this.

The following information provides details on how to configure OpenIDM on a read-only file system. It covers the types of information that OpenIDM writes by default, where it writes them, and how you can alter the default behavior – and it does so in just 10 EASY steps (not all of which are even required!).
 

Note: The following steps assume that you have a shared (mounted) folder at /idm, that you are running OpenIDM 4.0 as the frock user, and that the frock user has write access to the /idm folder.

 
1. Create external folder structure for logging, auditing, and internal repository information.

$ sudo mkdir -p /idm/log/openidm/audit
$ sudo mkdir /idm/log/openidm/logs
$ sudo mkdir -p /idm/cache/openidm/felix-cache
$ sudo mkdir -p /idm/run/openidm

2. Change ownership of the external folders to the “frock” user.

$ sudo chown -R frock:frock /idm/log/openidm
$ sudo chown -R frock:frock /idm/cache/openidm
$ sudo chown -R frock:frock /idm/run/openidm

 

Note: OpenIDM writes its audit data (recon, activity, access, etc.) to two locations by default: the filesystem and the repo. This is configured in the conf/audit.json file.

 

3. Open the conf/audit.json file and verify that OpenIDM is writing its audit data to the repo as follows (note: this is the default setting):

"handlerForQueries" : "repo",

4. Open the conf/audit.json file and redirect the audit data to the external folder as follows:

"config" : {
"logDirectory" : "/idm/log/openidm/audit"
},

After making these changes, this section of the audit.json file will appear as follows:

{
    "auditServiceConfig" : {
        "handlerForQueries" : "repo",
        "availableAuditEventHandlers" : [
            "org.forgerock.audit.events.handlers.csv.CsvAuditEventHandler",
            "org.forgerock.openidm.audit.impl.RepositoryAuditEventHandler",
            "org.forgerock.openidm.audit.impl.RouterAuditEventHandler"
        ]
    },
    "eventHandlers" : [
        {
            "name" : "csv",
            "class" : "org.forgerock.audit.events.handlers.csv.CsvAuditEventHandler",
            "config" : {
                "logDirectory" : "/idm/log/openidm/audit"
            },
            "topics" : [ "access", "activity", "recon", "sync", "authentication", "config" ]
        },

As an alternative, you can disable the writing of audit data altogether by setting the enabled flag to false for the appropriate event handler(s). The following snippet from audit.json demonstrates how to disable file-based auditing.

"eventHandlers" : [
{
"name" : "csv",
"enabled" : false,
"class" : "org.forgerock.audit.events.handlers.csv.CsvAuditEventHandler",
"config" : {
"logDirectory" : "/audit",
},
"topics" : [ "access", "activity", "recon", "sync", "authentication", "config" ]

 

Note: OpenIDM writes its logging data to the local filesystem by default. This is configured in the conf/logging.properties file.

 

5. Open the conf/logging.properties file and redirect OpenIDM logging data to the external folder as follows:

java.util.logging.FileHandler.pattern = /idm/log/openidm/logs/openidm%u.log

 

Note: OpenIDM caches its Felix files in the felix-cache folder beneath the local installation. This is configured in the conf/config.properties file.

 

6. Open the conf/config.properties file and perform the following steps:

 
a. Redirect OpenIDM Felix Cache to the external folder as follows:
 

# If this value is not absolute, then the felix.cache.rootdir controls
# how the absolute location is calculated. (See buildNext property)
org.osgi.framework.storage=${felix.cache.rootdir}/felix-cache

 
b. Define the relative path to the Felix Cache as follows:
 

# The following property is used to convert a relative bundle cache
# location into an absolute one by specifying the root to prepend to
# the relative cache path. The default for this property is the
# current working directory.
felix.cache.rootdir=/idm/cache/openidm

 
After making these changes, this section of the config.properties file will appear as follows:
 

# If this value is not absolute, then the felix.cache.rootdir controls
# how the absolute location is calculated. (See buildNext property)
org.osgi.framework.storage=${felix.cache.rootdir}/felix-cache

# The following property is used to convert a relative bundle cache
# location into an absolute one by specifying the root to prepend to
# the relative cache path. The default for this property is the
# current working directory.
felix.cache.rootdir=/idm/cache/openidm

 

Note: During initial startup, OpenIDM generates a self-signed certificate and stores its security information in the keystore and truststore files as appropriate. This is not possible in a read-only file system, however. As such, you should generate a certificate ahead of time and make it part of your own deployment.

 

7. Update keystore and truststore files with certificate information and an updated password file as appropriate. The process you choose to follow will depend on whether you use a self-signed certificate or obtain one from a certificate authority.
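If you go the self-signed route, one option is to pre-generate the certificate with keytool and ship the resulting keystore and truststore with your deployment. The following is a rough sketch only; it assumes OpenIDM’s default JCEKS keystore, the openidm-localhost alias, and the changeit password, all of which you should verify against your own installation and boot properties:

# generate a self-signed key pair in the keystore that will be deployed
keytool -genkeypair -alias openidm-localhost -keyalg RSA -keysize 2048 \
  -dname "CN=localhost, O=OpenIDM" -validity 365 \
  -keystore security/keystore.jceks -storetype JCEKS -storepass changeit

# export the certificate and import it into the truststore
keytool -exportcert -alias openidm-localhost -file openidm-localhost.crt \
  -keystore security/keystore.jceks -storetype JCEKS -storepass changeit
keytool -importcert -alias openidm-localhost -file openidm-localhost.crt \
  -keystore security/truststore -storepass changeit -noprompt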

 

Note: On Linux systems, OpenIDM creates a process ID file (PID file) on startup and removes the file during shutdown. The location of the PID file is defined in both the start script (startup.sh) and the shutdown script (shutdown.sh). The default location of the PID file is the $OPENIDM_HOME folder.

 

8. Open the startup.sh script and update the location of the process ID file by adding the following line immediately after the comments section of the file:

OPENIDM_PID_FILE="/idm/run/openidm/.openidm.pid"

9. Repeat Step 8 with the shutdown.sh script.

 

Note: OpenIDM reads configuration file changes from the file system by default. If your environment allows you to update these files during the deployment process of a new release, then no additional changes are necessary. However, if you truly have a read-only file system (i.e. no changes even to configuration files), then you can disable the monitoring of these configuration files in the next step. Keep in mind, however, that all configuration changes must then be performed over REST.

 

10. Open the conf/system.properties file and disable the monitoring and automatic loading of configuration file changes by uncommenting the following line:

#openidm.fileinstall.enabled=false
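Once that line is uncommented (so it reads openidm.fileinstall.enabled=false), OpenIDM no longer picks up edits to the JSON files in conf/, and configuration changes go through the REST interface instead. A sketch of what that looks like, using the audit configuration as an example:

# read the current audit configuration
curl -u openidm-admin:openidm-admin "http://localhost:8080/openidm/config/audit"

# replace it with an updated version from a local file
curl -u openidm-admin:openidm-admin -X PUT -H "Content-Type: application/json" \
  --data @audit.json "http://localhost:8080/openidm/config/audit"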

 
 

The Real Reason Oracle Dropped Sun Identity Manager

July 23, 2015

 

I always appreciate it when someone attempts to educate others about identity management and related technologies. So when I saw the following presentation, it quickly caught my attention, as I was working with both products when the Oracle deal to purchase Sun went down.

 

Why Oracle Dropped Waveset Lighthouse and Went to Oracle Identity Manager (OIM)

 

 

Not to be too nitpicky, but there are quite a few errors in this presentation that I simply couldn’t ignore.

  • OID is not the acronym for “Oracle Identity Management”; it is the acronym for “Oracle Internet Directory” – Oracle’s LDAPv3 directory server. OIM is the acronym for “Oracle Identity Manager”.
  • SIM (“Sun Identity Manager”) was not a “suite of identity manager products” as stated. SIM was a data synchronization and provisioning product. It was part of the suite of Sun Identity Management products that also included Sun Directory Server Enterprise Edition (SDSEE), Sun Access Manager/OpenSSO, and Sun Role Manager (SRM).
  • It is stated that one reason Oracle did not elect to continue with SIM was that it did not have a Web 2.0 UI. SIM version 9.0 (the version being developed when Oracle purchased Sun) did have a Web 2.0 UI, so this is not quite an accurate representation.
  • “Oracle IDM” is Oracle’s suite of identity management products, which includes Oracle Virtual Directory (OVD), Oracle Internet Directory (OID), Oracle Access Manager (OAM), and Oracle Identity Manager (OIM). The presentation uses “Oracle IDM” (and later, simply “IDM”) to refer specifically to Oracle Identity Manager, however. This is both confusing and misleading.
  • It is stated that “IDM allowed faster application on-boarding.” As an integrator of both OIM and SIM, I can honestly say that this is not a true statement. We could have simple SIM deployments up and running in a matter of days or weeks and a production deployment in a month or two. OIM consistently took several months to deploy – which was great for a billable professional services firm, but not so great for the customer (who had to pay for those services).
  • It is implied that OIM could provision to the cloud, that SIM could not, and that this was a reason why Oracle chose OIM. That is misleading, as SIM was able to provision to cloud applications as well. SIM also supported SPML (not a big fan, myself) and SCIM for provisioning to other identity platforms and cloud-based applications.

The main reasons that Oracle chose OIM over SIM were its deeper integration with Oracle products and Oracle’s unwillingness to alter the Oracle IDM roadmap. I was on the early calls with Oracle when they announced which products they would keep and which they were getting rid of. During those calls, they had their “politically correct” reasons as well as the “real” reasons, and it always came back to these two.

There was only one place where I saw Oracle forced to alter its position and update its roadmap: the SDSEE product. Oracle made it very clear that the only product it wanted in Sun’s identity product line was Sun Role Manager (which later became Oracle Identity Analytics). In fact, only a couple of weeks after the purchase was made, Oracle had already set an end-of-life date for all identity products, including SDSEE. What Oracle hadn’t counted on was how well entrenched that product was across Sun’s major customers (including the US Government and major telcos). It wasn’t until customers raised an outcry that Oracle “decided” to continue product development.

Purely from a technology perspective, if you are a company that has deployed a wide array of Oracle products, then it makes sense to go with OIM due to its deeper integration with those products – but not so much if you are a heterogeneous company. In such cases, I have found other products to be more flexible than OIM and to provide much quicker deployments at much lower cost.

The Next Generation of Identity Management

March 1, 2015

The face of identity is changing. Historically, it was the duty of an identity management solution to manage and control an individual’s access to corporate resources. Such solutions worked well as long as the identity was safe behind the corporate firewall – and the resources were owned by the organization.

But in today’s world of social identities (BYOI), mobile devices (BYOD), dynamic alliances (federation), and everything from tractors to refrigerators being connected to the Internet (IoT), companies are finding that legacy identity management solutions are no longer able to keep up with the demand. Rather than working with thousands to hundreds of thousands of identities, today’s solutions are tasked with managing hundreds of thousands to millions of identities and include not only carbon-based life forms (people) but also those that are silicon-based (devices).

In order to meet this demand, today’s identity solutions must shift from the corporation-centric view of a user’s identity to one that is more user-centric. Corporations typically view the identity relationship as one between the user and the organization’s resources. This is essentially a one-to-many relationship and is relatively easy to manage using legacy identity management solutions.

(Figure: One-to-many relationship)

What is becoming evident, however, is the growing need to manage many-to-many relationships as these same users actually have multiple identities (personas) that must be shared with others that, in turn, have multiple identities, themselves.

(Figure: Many-to-many relationships)

The corporation is no longer the authoritative source of a user’s identity; it has been diminished to the role of a persona as users begin to take control of their own identities in other aspects of their lives.

Identity : the state or fact of being the same one as described.

Persona : (in the psychology of C. G. Jung) the mask or façade presented to satisfy the demands of the situation or the environment.

In developing the next generation of identity management solutions, the focus needs to move away from the node (a reference to an entry in a directory server) and more towards the links (or relationships) between the nodes (a reference to social graphs).

(Figure: Social graph)

In order to achieve this, today’s solutions must take a holistic view of the user’s identity and allow the user to aggregate, manage, and decide with whom to share their identity data.

Benefits to Corporations

While corporations may perceive this as a loss of control, in actuality it is the corporation that stands to benefit the most from a user-centric identity management solution. Large corporations spend hundreds of thousands of dollars each year in an attempt to manage a user’s identity, only to find that much of what they have on file is incorrect. There are indeed many characteristics that must be managed by the organization, but many of a user’s attributes extend well beyond a corporation’s reach. In such cases, maintaining accurate data for these attributes is all but impossible without the user’s involvement.

Take, for instance, a user’s mobile telephone number: in the past, corporations issued, sponsored, and managed these devices. But today’s employees typically purchase their own mobile phones and change carriers (or even phone numbers) periodically. As such, corporate white pages are filled with inaccurate data; this trend will only increase as users continue to bring more and more of themselves into the workplace.

Legacy identity solutions attempt to address this issue by introducing “end-user self-service” – a series of Web pages that allow users to maintain their corporate profiles. Users are expected to update their profiles whenever a change occurs. The problem with this approach is that users selectively update their profiles and, in some cases, purposely supply incorrect data (in order to avoid after-hours calls). The other problem is that it still adheres to a corporate-centric/corporate-owned identity mindset. The truth is that users’ identities are not centralized; they are distributed across many different systems, both in front of and behind the corporate firewall, and while companies may “own” certain data, it is the information that the user brings from other sources that is elusive to the company.

Identity Relationship Management

A user has relationships that extend well beyond those maintained within a company and, as such, has core identity data strewn across hundreds, if not thousands, of databases. The common component in all of these relationships is the user. It is the user who is in charge of that data, and it is the user who elects to share their information within the context of those relationships. The company is just one of those relationships, but it is the one for which legacy identity management solutions have been written.

Note: Relationships are not new, but the number of relationships that a user has, and the types of relationships they have with other users and other things, are rapidly growing.

Today’s identity management solutions must evolve to accept (or at a minimum acknowledge) multiple authoritative sources beyond their own. They must evolve to understand the vast number of relationships that a user has, not only with other users but also with the things the user owns (or uses), and they must be able to provide (or deny) services based on those relationships and even the context of those relationships. These are lofty goals for today’s identity management solutions, as they require vendors to think in a whole new way, implement a whole new set of controls, and come up with new and inventive interfaces that scale to the order of millions. To borrow a phrase from Ian Glazer, we need to kill our current identity management solutions in order to save them, but such an evolution is necessary for identity to stay relevant in today’s relationship-driven world.

I am not alone in recognizing the need for a change. Others have come to similar conclusions, and this has given rise to the term Identity Relationship Management (or IRM). The desire for change is so great, in fact, that Kantara has sponsored the Identity Relationship Management Working Group, of which I am privileged to be a member. This has given rise to a LinkedIn group on IRM, a Twitter feed (@irmwg), various conferences either focused on or discussing IRM, and multiple blogs, of which this is only one.

LinkedIn IRM Working Group Description:

In today’s internet-connected world, employees, partners, and customers all need anytime access to secure data from millions of laptops, phones, tablets, cars, and any devices with internet connections.

Identity relationship management platforms are built for IoT, scale, and contextual intelligence. No matter the device, the volume, or the circumstance, an IRM platform will adapt to understand who you are and what you can access.

Call to Action

Do you share similar thoughts and/or concerns?  Are you looking to help craft the future of identity management?  If so, then consider being part of the IRM Working Group or simply joining the conversation on LinkedIn or Twitter.

 

Trust – The Missing Ingredient

November 18, 2011

I was having a conversation with friends the other day and while it may sound nerdy as hell, the topic was focused on identity.  I swear (trust me) that no drinks were involved but the conversation went pretty deep, nonetheless.  What is identity, how is it used, and how can it be protected?  Like Aristotle and Plato before us, we modern day philosophers discussed the various aspects that make up our identity, how we can control it, and how we can selectively share it with our intended audiences.  In an era when our private information has been unleashed like the proverbial opening of Pandora’s Box, how can we regain control of our identities without impacting our existing relationships or experiences?

But what about identity?  What is it really, and why should you care?

When I think about identity, I think in terms of aggregation, management, and sharing. Each of these is a key ingredient when it comes to users owning their own identities, but each can be further strengthened when we add trust to the mix. So, what is the recipe for success as it pertains to trusting identities in cyberspace? Let’s take a closer look at each of these ingredients to see.

Aggregation

My identity is the aggregation of all the things there are to know about me. One could trivialize this by saying it is simply all the discrete data elements about me (i.e. hair color, height, SSN, etc.), but in essence, it is much more. It consists of my habits, my history, my data, my relationships – basically everything that can be me and everything that can be tracked about me. Identity information is not found in a single location; it is distributed across multiple repositories, but this information can be aggregated into a virtual identity – which is, essentially, me.

Management

When we allow people to manage their own identities, we are allowing them to control their discrete data elements, but we are also allowing them to manage every other aspect of themselves as well. You can change your mobile number attribute (data element) when you get a new phone, or you can change your address attribute when you move. But just as you can remove the cache, history, and cookies in your browser, you should be able to maintain your privacy by removing (or hiding) your identity characteristics as well. Identity management simply means that I am able to manage those aspects of my identity that are my own.

Sharing

In real life, I have the ability to select which characteristics and/or information about myself I want to share with each of my friends, family, co-workers, or acquaintances. My work-related benefits stay private between my boss and me in the workplace. Conversely, I don’t share my family conversations within the office. Investment information stays private between my broker and me, yet I tweet favorite quotes to the world. In essence, I selectively share information with different audiences based on the role I am playing at the time. Online personas facilitate the same selective sharing within the social web, similar to our interactions in the real world. I may take on a different persona as I interact in the virtual world and elect to share different information with each audience based on where I elect to use that persona. This also means that I can act anonymously if I so choose (which is similar to going ‘incognito’ in your browser).

Trust

Sharing data with others fulfills my desire to communicate information about me to you, but just like in real life, it is totally your option to accept the validity of that information or not. To take sharing to the next level (and address a major need on the Internet today), we need some method of trusting the information that we receive. Trust is transient (it changes), contextual (it is based on the situation), and 100% given by the receiving party – essentially, they decide whether to trust you or not. In the real world we use driver’s licenses, passports, or referrals from friends to validate users and establish trust. It is no different in the social web, except that we are not seeing each other face to face and do not have the ability to present a driver’s license as proof of identity. Hence the need for another method.

If the ingredients in the identity cake are aggregation, management, and sharing, then validation is the icing on the cake, not the cake itself. While each of these ingredients is key to making the perfect cake, leaving trust out of the mix is kind of like leaving salt out of the recipe. Trust simply brings out the flavor, and without it, the cake is way too bland!

Is Your Intellectual Property Slipping Out the Door with Their Pink Slip?

December 16, 2010

(I wrote the following article for BABM Business Magazine back in May/June of 2009. The article is reprinted here with their permission.)

With the latest layoff news continuing to add chaos to the economy, CEOs need to protect their businesses in case of staff cuts, restructuring, or consolidation of offices. While your company may not be planning layoffs now, there is no guarantee that this will still be the case three or six months from now. There are steps your business should take, both proactively and reactively, to ensure that your most valuable information, such as customer data and contracts, isn’t walking out the door with terminated employees.

Ideally, even before layoffs occur, businesses need to be prepared to protect their assets. Employees may sense a layoff is imminent and start grabbing what data they can before they get the official word. This could lead to a loss of your company’s most valuable contacts that former employees may use to compete against you. Proactive monitoring of systems, before layoffs begin, can ensure that your company’s data is protected.

There are a variety of technologies you can implement to monitor your employees’ access of specific applications. For example, you can monitor who has access to what type of database and determine if an employee is running unusual reports. Are certain employees extracting every field, downloading the data to a local disk and/or sending it to themselves over email?

Having a solid process for role provisioning and access management will help limit access of certain information to those people who need it to do their jobs. If levels of access to various applications and corporate information are assigned for each job description, it is easier to set up monitoring systems for each employee as well as protocols for changing passwords and other termination procedures to remove access when an employee is let go.

A good rule of thumb is to trust, but verify. Monitoring can be performed at many levels and includes database access, disk usage, and whether or not USB drives are being plugged into company computers. Monitoring can even determine if proprietary data is being sent to an email account. When it comes to access management and monitoring, CEOs and executive management need to weigh how much protection they want against how much protection they can afford. It’s a formula that will vary for every company.

Once a company is in an action stage and layoffs are about to begin, it’s almost too late to protect and secure its data without shutting off access altogether (which may not be feasible in all cases). As a fallback plan, many companies provide their security team with a list of users they plan to let go. On the morning the layoffs are to take place, the team is tasked with acting on the list and locking out those employees from their accounts. But there’s often the lingering feeling that something was missed. Are they prevented from accessing your systems remotely? Are they still receiving their email on their home PCs? Does the employee have access to vendor accounts? Can your security team effectively map the employee to all the accounts they have accumulated over the years?

There are many types of technologies that can be used from a proactive perspective and subsequently verified from a reactive perspective. CEOs should be proactive and have an effective user provisioning solution in place. This ensures that they have accounted for all the systems and the types of system access where a user has an account. Once layoffs have occurred companies should continue monitoring mission critical systems to ensure that the access has been terminated appropriately. A security event monitoring solution on the back end can monitor log files or traffic patterns to these systems and immediately notify of any unusual activity.

Companies that have implemented centralized account management systems have peace of mind that they can quickly prevent access by employees who are no longer associated with the company. They can be certain that they have locked all accounts being managed by the system and actions such as terminations can be performed by management (ahead of time) rather than needing to involve people from the security team.

Companies that have not implemented a centralized account management system are increasing their workload and effectively putting valuable corporate assets at risk. At this point, due diligence is essential, as you have to perform these tasks manually. The potential for damage is great, however, and the fallout will rise exponentially as more layoffs occur. If you have implemented a centralized user provisioning system, congratulations! If not, don’t panic; there are still tasks you can perform to help protect your assets.

    1. Prepare your list well in advance and give your security team a chance to locate the various user accounts.
    2. Work with functional managers, supervisors, or project managers to further determine the user’s access.
    3. Monitor system logs and network traffic to determine if any unusual access or traffic patterns appear. Respond immediately.

 

Even with this type of preparation, the tasks can be quite time consuming, and it could take weeks to properly locate and delete access. Hence, our advice is that it’s better to take proactive steps now to avoid headaches and the possible loss of customer data and other business assets later on. Getting a handle on your role provisioning and user access procedures and having a plan for monitoring employee application use are good places to start.

Staff reduction is never easy and you should make the separation as painless as possible. It is unfortunate that some employees view corporate assets as their own and feel entitled to take information with them when they leave. As a business owner responsible to shareholders or even to the remaining workforce, you need to take every action possible to ensure the protection of this data.

Advice to CIOs for High Exposure Projects

September 14, 2009

I read an article in CIO Magazine about the plight of today’s CIOs when multi-million dollar, multi-year projects go awry. The article, entitled “The CIO Scapegoat”, indicates that it is unfair to hold the IT department completely responsible when there are so many other business units that contribute to a project’s demise. In many cases, the CIO takes the fall for the failure and, as a direct result, is either demoted, moved into a different organization, or let go altogether.

The article goes on to provide advice to CIOs who are beginning such undertakings. First and foremost, large, complex projects should be broken up into “bite-sized” chunks and proper expectations of what will be delivered in each “mini-project” should be set – and agreed upon – with the various stakeholders.

I could not agree more with this statement and find it most concerning that this is not a more common practice within the IT industry. In our world of rapid prototyping (turned production) and just-in-time development, to think that you could perform a multi-year project without implementing several checkpoints along the way is simply insane. This may be one of the reasons why the average tenure of a CIO at the same company is only two years.

CIOs who agree to perform projects under such conditions really need to read my previous blog post entitled “Lessons Learned from Enterprise Identity Management Projects”. While it was written mainly for enterprise identity projects, it has direct applicability to any enterprise project. In that article I directly address specific points about expectation setting and bite-sized chunks (did CIO Magazine read my blog on this?), and by taking my advice to heart, the average CIO might be able to extend their stay.

Opinions About the Federal Government’s Identity Initiative

September 11, 2009

Interesting read. This is essentially a WebSSO initiative with authentication based on CAC-type ID cards or OpenID.

The CAC type of implementation (ID cards) is not practical, as it requires everyone to have a card reader on their PC in order to do business with the government. I don’t see this happening anytime soon.

I understand that there are several holes in the OpenID initiative. I wonder if they have been fixed (I wonder if it matters).

Either way, Sun’s OpenSSO initiative is well positioned, as it allows OpenID as a form of authentication. The fact that the government is looking at open source for this (OpenID) bodes well for OpenSSO.

Link to article on PCmag.com:

http://blogs.pcmag.com/securitywatch/2009/09/federal_government_starts_iden.php

Text version of the article:

Federal Government Starts Identity Initiative

As part of a general effort of the Obama administration to make government more accessible through the web, the Federal government, through the GSA (General Services Administration), is working to standardize identity systems across hundreds of government web sites. The two technologies being considered are OpenID and Information Cards (InfoCards). The first government site to implement this plan will be the NIH (National Institutes of Health).

OpenID is a standard for “single sign-on”. You may have noticed an option on many web sites, typically blogs, to log on with an OpenID. This ID would be a URI such as john_smith.pip.verisignlabs.com, which would be John Smith’s identifier on VeriSign’s Personal Identity Portal. Many ISPs and other services, such as AOL and Yahoo!, provide OpenIDs for their users. When you log on with your OpenID the session redirects to the OpenID server, such as pip.verisignlabs.com. This server, called an identity provider, is where you are authenticated, potentially with stricter measures than just a password. VeriSign is planning to add 2-factor authentication for example. Once authenticated or not, the result is sent back to the service to which you were trying to log in, also known as a relying party.

Information Cards work differently. The user presents a digital identity to a relying party. This can be in a number of forms, from a username/password to an X.509 certificate. IDs can also be managed by service providers who can also customize their authentication rules.

The OpenID Foundation and Information Card Foundation have a white paper which describes the initiative. Users will be able to use a single identity to access a wide variety of government resources, but in a way which preserves their privacy. For instance, there will be provision for the identity providers to supply each government site with a different virtual identity managed by the identity provider, so that the user’s movements on different government sites cannot be correlated.

Part of the early idea of OpenID was that anyone could make an identity provider and that everyone will trust everyone else’s identity provider, but this was never going to work on a large scale. For the government there will be a white list of some sort that will consist of certified identity providers who meet certain standards for identity management, including privacy protection.

Identity Management Lessons from Sarah Palin

September 19, 2008

By now, many of you have already heard about the hacking of Alaska Governor Sarah Palin’s Yahoo e-mail account earlier this week (on or about Tuesday 9/16/2008). If not, here is a brief synopsis of the story.

Sarah Palin’s personal Yahoo e-mail account was compromised and the contents of her account (including her address book, inbox and several family photos) were posted to the Internet.

Someone with the e-mail address of rubico10@yahoo.com posted a message on the Web site 4chan about how he used Yahoo! Mail’s password-recovery tool to change the Alaska governor’s password and gain full access to her e-mail account.

“i am the lurker who did it, and i would like to tell the story,” rubico10@yahoo.com wrote.

(I have included the full text at the bottom of the post for those interested. Be forewarned that some of the language is NOT family friendly.)

The rubico10@yahoo.com e-mail account has been linked to 20-year-old David Kernell, son of Democratic Tennessee State Representative Mike Kernell and a student at the University of Tennessee-Knoxville. While David has not been named in any official investigation as of yet, his father has confirmed that the person who is the subject of the many blog posts and news articles around the Internet is indeed his son.

So how did the alleged hacker do it?

First of all, he had to identify Sarah Palin’s email address as gov.palin@yahoo.com. A recent article in The Washington Post indicated that Sarah Palin was using a personal e-mail address of gov.sarah@yahoo.com to conduct government business. But that was not the e-mail account that got hacked. So how do you get from gov.sarah@yahoo.com to gov.palin@yahoo.com?

Allahpundit posted an article on hotair.com that presents some interesting ideas about how the hacker might have arrived at the gov.palin@yahoo.com account, but for the time being — and void of any conspiracy theories — let’s just assume he figured it out.

Now that he had the e-mail address, how was he able to gain access to the account?

The hacker claims to have used Yahoo! Mail’s password-recovery tool to reset the password. To do this, you simply go to Yahoo! Mail and click on the Forget your ID or password link.

(Screenshot: Yahoo! login page)

This takes you to a page where you enter your Yahoo! ID. In the case of Sarah Palin’s account, this would be “gov.palin”.

(Screenshot: lost password page)

To reset your password with Yahoo! Mail, you can either have it sent to your secondary email address or you can indicate that you no longer have access to this account.

(As a side note, I do not particularly like the fact that Yahoo! shows even a portion of my secondary email account in the email address hint. But that is another story.)

(Screenshot: alternate email address prompt)

Selecting the “I can’t access my alternate email address” radio button allows you to answer a set of challenge questions as follows:

(Screenshot: challenge/response questions)

These are generic authentication questions, but in the case of Sarah Palin, the hacker had to answer one additional question about where she met her husband. The hacker guessed that Alaska’s governor had met her husband in high school, and knew the Republican vice presidential candidate’s date of birth and home ZIP code, the Associated Press reported. Using those details, the hacker was able to successfully access Palin’s email account, where he assigned a new password of “popcorn”.

The rest is simply news.

So what does the hacking of Sarah Palin’s email account tell us about security and Identity Management in general?

One of the big benefits of an Identity Management solution is that it provides end-users with a way to update their own data and reset their own passwords. This is a HUGE cost reduction for companies as it reduces the number of calls to the Help Desk. But just like everything else, there has to be a careful balance between security and convenience.

Authentication questions provide a means for users to gain access to their accounts when they have forgotten their passwords. This is the mechanism that Yahoo! Mail uses, and it has been adopted by many Identity Management solutions. Authentication questions are extremely convenient for companies whose password policies are so stringent that their users cannot remember their passwords. They also come in handy after three-day holiday weekends, as the day employees return to work typically generates numerous calls to the Help Desk for password resets.

While authentication questions are convenient and produce a cost savings, a company does, however, need to take care when providing this solution. Who decides what the questions are, and what happens if the end user does not have an answer for a particular question? These are some of the issues that need to be considered. I have seen questions all over the board. Below are some that I find particularly insecure, since many of them can be answered by Google searches or social engineering. In some cases, the questions cannot be answered with a single answer, and some cannot be answered at all.

Questions that can be answered by social engineering or search:

· What is your mother’s maiden name?
· In what city were you born?
· In what year were you born?
· What was your first school?
· What was your first phone number?

Questions that might not be answered at all:
· Who is your favorite superhero?
· What is your pet’s name?
· What is your library card number?
· What was your first teacher’s name?
· What is the air speed velocity of a coconut-laden swallow?

If you force a user to provide answers that are easily obtainable, then your risk is drastically increased (just ask Sarah Palin). If you force users to answer questions that are difficult (or impossible) to answer, then your risk is also increased, as the user may just provide a common answer to all questions (i.e. “blue”). So either way you go, it can be a difficult decision to make.

I have found that one of the best mechanisms is an approach that allows the end user to define their own set of authentication questions while the company provides a sample set of common (yet hopefully secure) questions as well. This gives the company a certain amount of control, but it also allows users to provide questions and answers using information that only they know. Now, I know that some may argue that users typically pick the path of least resistance and that many of them will pick easy questions (and therefore have easy answers), but by combining a set of company-specific questions with those supplied by the user, the company can bridge the gap between security and convenience.

By the way, if you use an application that allows you to provide your own authentication questions, then I STRONGLY suggest that you go and change your security question(s) to one(s) that have meaning and applicability to you.

Here is the synopsis of what rubico said at 4chan:

rubico 09/17/08(Wed)12:57:22 No.85782652

Hello, /b/ as many of you might already know, last night sarah palin’s yahoo was “hacked” and caps were posted on /b/, i am the lurker who did it, and i would like to tell the story.

In the past couple days news had come to light about palin using a yahoo mail account, it was in news stories and such, a thread was started full of newfags trying to do something that would not get this off the ground, for the next 2 hours the acct was locked from password recovery presumably from all this bulls**t spamming.

after the password recovery was reenabled, it took seriously 45 mins on wikipedia and google to find the info, Birthday? 15 seconds on wikipedia, zip code?

well she had always been from wasilla, and it only has 2 zip codes (thanks online postal service!)

the second was somewhat harder, the question was “where did you meet your spouse?” did some research, and apparently she had eloped with mister palin after college, if youll look on some of the screensh**s that I took and other fellow anon have so graciously put on photobucket you will see the google search for “palin eloped” or some such in one of the tabs.

I found out later though more research that they met at high school, so I did variations of that, high, high school, eventually hit on “Wasilla high” I promptly changed the password to popcorn and took a cold shower…

>> rubico 09/17/08(Wed)12:58:04 No.85782727

this is all verifiable if some anal /b/tard wants to think Im a troll, and there isn’t any hard proof to the contrary, but anyone who had followed the thread from the beginning to the 404 will know I probably am not, the picture I posted this topic with is the same one as the original thread.

I read though the emails… ALL OF THEM… before I posted, and what I concluded was anticlimactic, there was nothing there, nothing incriminating, nothing that would derail her campaign as I had hoped, all I saw was personal stuff, some clerical stuff from when she was governor…. And pictures of her family

I then started a topic on /b/, peeps asked for pics or gtfo and I obliged, then it started to get big

Earlier it was just some prank to me, I really wanted to get something incriminating which I was sure there would be, just like all of you anon out there that you think there was some missed opportunity of glory, well there WAS NOTHING, I read everything, every little blackberry confirmation… all the pictures, and there was nothing, and it finally set in, THIS internet was serious business, yes I was behind a proxy, only one, if this s**t ever got to the FBI I was f****d, I panicked, i still wanted the stuff out there but I didn’t know how to rapids**t all that stuff, so I posted the pass on /b/, and then promptly deleted everything, and unplugged my internet and just sat there in a comatose state

Then the white knight f****r came along, and did it in for everyone, I trusted /b/ with that email password, I had gotten done what I could do well, then passed the torch , all to be let down by the douchebaggery, good job /b/, this is why we cant have nice things

Lessons Learned from Enterprise Identity Management Projects

August 1, 2008

I have been implementing and/or managing identity-related projects for over 10 years now and I can say, from experience, that the biggest problem with any Identity Management project can be summed up in one word: EXPECTATIONS.

It does not matter whether you are tackling an identity project for compliance, security or cost-reduction reasons. You need to have proper expectations of what can be realistically accomplished within a reasonable timeframe and those expectations need to be shared among all team members and stakeholders.

Projects that fail to achieve a customer’s expectations do so because those expectations were either not validated or not shared among all parties involved. When expectations are set (typically in a statement of work), communicated (periodic reports), and then reset if necessary (change orders), the customer is much happier with the project results.

Here are a few lessons I have learned over the years. While they apply to major projects in general, they are especially true of identity-related projects.

1) Projects MUST be implemented in bite-sized chunks.

Identity projects are enterprise-wide projects; you should create a project roadmap that consists of multiple “mini” projects, each of which can demonstrate an immediate ROI. The joke is, “How do you eat an elephant? One bite at a time.” To achieve success with identity projects, you should implement them one bite at a time and have demonstrable/measurable success after each bite.

2) The devil is in the data.

Using development/test data that is not representative of production data will kill you in the end and cause undue rework when going into production. Use data that is as close to production as possible.

3) Start with an analysis phase BEFORE scoping the entire project.

I HIGHLY recommend that the first project you undertake is an analysis. That will define the scope, from which you can get a better idea of how to divvy up the project into multiple bite-sized chunks and then determine how much – and how long – each chunk will take. This allows you to effectively budget both time and money for the project(s).

Note: If a vendor gives you a price for an identity implementation without this, then run the other way. They are trying to simply get their foot in the door without first understanding your environment. If they say that the analysis phase is part of the project pricing, then get ready for an extensive barrage of change orders to the project.

4) Get everyone involved.

Keep in mind that these are enterprise-wide projects that affect multiple business units within your company. The project team should contain representatives from each organization that is being “touched” by the solution. This includes HR, IT, Help Desk, Training and above all, upper-level management (C-level).

(The following items apply if you are using external resources for project implementation.)

5) Find someone who has “been there and done that”.

Ask for references and follow up on them. More and more companies say that they can implement identity-related projects just because they have taken the latest course from the vendor. This is not enough. If training alone could give you the skills to implement the product, then you would have done the project yourself. You need to find someone who knows where the pitfalls are before you hit them.

6) Let the experts lead.

Don’t try to manage an Identity Management project unless you have done so before – and more than once. I have been involved with customers who have great project managers with no experience in identity projects, yet they want to take ownership of the project and manage the resources. This is a recipe for disaster. Let the people who have done the implementation lead the project, and allow your project manager to gain the knowledge for future phases.

7) Help build the car, don’t just take the keys.

Training takes place before, during, and after the project. Don’t expect to simply take “the keys” from the vendor once the project has been completed. You need to have resources actively involved throughout the project in order to take ownership. Otherwise you will not be able to support the product – or make changes to it – without assistance from the vendor. Ensure that you have your own team members actively engaged in the project, side by side with the external team. To do this, you have to ensure that they are not distracted by other work-related tasks.