Archive for the ‘Identity Management’ Category

Performing Bulk Operations in OpenIDM

April 19, 2016

 
OpenIDM does not support bulk operations out of the box. One way to do this, however, is to obtain a list of the IDs that you want to perform an operation on and then loop through the list, performing the desired operation on each ID.
 

Yes, this is a hack, but let’s be honest: isn’t life just one big set of hacks when you think about it?

 
Here are the steps.

Suppose, for instance, that you want to delete all managed users in OpenIDM that have a last name of “Nelson”. The first step is to obtain a list of those users, which you can easily do using a cURL command and an OpenIDM query filter as follows:

curl -u openidm-admin:openidm-admin 'http://localhost:8080/openidm/managed/user?_queryFilter=(sn+eq+"Nelson")&_fields=_id&_prettyPrint=true'

This returns a listing of all managed objects that match the filter as follows.

 
{
  "result" : [ {
    "_id" : "ed979deb-2da2-4fe1-a309-2b7e9677d931",
    "_rev" : "5"
  },
  {
    "_id" : "581c2e13-d7c4-4fff-95b8-2d1686ef5b9c",
    "_rev" : "1"
  },
  {
    "_id" : "1295d5db-c6f8-4108-9842-06c4cde0d4eb",
    "_rev" : "3"
  } ],
  "resultCount" : 3,
  "pagedResultsCookie" : null,
  "totalPagedResultsPolicy" : "NONE",
  "totalPagedResults" : -1,
  "remainingPagedResults" : -1
}

 
But most of the data returned is extraneous for the purposes of this exercise; we only want the “_id” values for these users. To obtain just that information, you can pipe the output into a grep command and redirect the result to a file as follows:

curl -u openidm-admin:openidm-admin 'http://localhost:8080/openidm/managed/user?_queryFilter=(sn+eq+"Nelson")&_fields=_id&_prettyPrint=true' | grep "_id" > bulkOperationIDs.txt

This will produce a file that looks like this:

 
     "_id": "ed979deb-2da2-4fe1-a309-2b7e9677d931",
     "_id": "581c2e13-d7c4-4fff-95b8-2d1686ef5b9c",
     "_id": "1295d5db-c6f8-4108-9842-06c4cde0d4eb"

 
(yes, there are leading spaces in that output).

You are still not done yet, as you need to strip off all the extraneous characters and get the file down to just the values of the “_id” attribute. You can probably devise a cool sed script, or find an awesome regular expression for the grep command, but it is just as easy to simply edit the file in vi and perform a couple of global search/replace operations:

:1,$ s/ "_id": "//g
:1,$ s/",//g

The above example demonstrates a global search/replace operation in the “vi” editor – the best damn editor on God’s green earth!
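If you would rather skip the editor entirely, the same cleanup can be done right in the pipeline with sed. The following one-liner is just a sketch of that approach – it assumes the prettyPrint spacing shown in the output above, so adjust the patterns if your output differs:

curl -u openidm-admin:openidm-admin 'http://localhost:8080/openidm/managed/user?_queryFilter=(sn+eq+"Nelson")&_fields=_id&_prettyPrint=true' | grep '"_id"' | sed 's/ *"_id" : //; s/[",]//g' > bulkOperationIDs.txt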

 
However you approach it, the goal is to get the file to consist of only the IDs as follows:
 

 
ed979deb-2da2-4fe1-a309-2b7e9677d931
581c2e13-d7c4-4fff-95b8-2d1686ef5b9c
1295d5db-c6f8-4108-9842-06c4cde0d4eb

 
Now that you have this file, you can perform any operation you would like on its entries by tying a simple command line loop to the appropriate cURL command. For instance, the following performs a GET operation on every entry in the file (it is HIGHLY recommended that you do this before jumping right into a DELETE operation):

for i in `cat bulkOperationIDs.txt`; do curl -u openidm-admin:openidm-admin -X GET "http://localhost:8080/openidm/managed/user/$i?_prettyPrint=true"; done

Once you feel comfortable with the response, you can change the GET operation to a DELETE and kick the Nelsons to the proverbial curb.
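As a sketch of that final step, the loop below simply swaps GET for DELETE. Note that OpenIDM typically requires an If-Match header on DELETE requests; the wildcard revision ("*") used here removes the object regardless of its current revision, so double-check the contents of your ID file before running it:

for i in `cat bulkOperationIDs.txt`; do curl -u openidm-admin:openidm-admin -H "If-Match: *" -X DELETE "http://localhost:8080/openidm/managed/user/$i"; done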
 

Configuring OpenIDM on a Read Only File System in 10 Easy Steps

April 13, 2016

 

During the normal course of events, OpenIDM writes or updates various files on the file system to which it is installed. This includes log files, audit files, process IDs, configuration files, or even cached information. There are times, however, when you find yourself needing to deploy OpenIDM to a read only file system – one to which you cannot write typical data.

Fortunately, OpenIDM is flexible enough to allow such an installation; you just need to make some adjustments to various settings to accommodate this.

The following information provides details on how to configure OpenIDM on a read only file system.  It includes the types of information that OpenIDM writes by default, where it writes the information, and how you can alter the default behavior – and it does it in just 10 EASY Steps (and not all of them are even required!).
 

Note: The following steps assume that you have a shared (mounted) folder at /idm, that you are running OpenIDM 4.0 as the frock user, and that the frock user has write access to the /idm folder.

 
1. Create an external folder structure for logging, auditing, and internal repository information.

$ sudo mkdir -p /idm/log/openidm/audit
$ sudo mkdir /idm/log/openidm/logs
$ sudo mkdir -p /idm/cache/openidm/felix-cache
$ sudo mkdir -p /idm/run/openidm

2. Change ownership of the external folders to the “frock” user.

$ sudo chown -R frock:frock /idm/log/openidm
$ sudo chown -R frock:frock /idm/cache/openidm
$ sudo chown -R frock:frock /idm/run/openidm

 

Note: OpenIDM writes its audit data (recon, activity, access, etc.) to two locations by default: the filesystem and the repo. This is configured in the conf/audit.json file.

 

3. Open the conf/audit.json file and verify that OpenIDM is writing its audit data to the repo as follows (note: this is the default setting):

"handlerForQueries" : "repo",

4. Open the conf/audit.json file and redirect the audit data to the external folder as follows:

"config" : {
"logDirectory" : "/idm/log/openidm/audit"
},

After making these changes, this section of audit.json will appear as follows (the modified logDirectory setting is shown in context):

{
  "auditServiceConfig" : {
    "handlerForQueries" : "repo",
    "availableAuditEventHandlers" : [
      "org.forgerock.audit.events.handlers.csv.CsvAuditEventHandler",
      "org.forgerock.openidm.audit.impl.RepositoryAuditEventHandler",
      "org.forgerock.openidm.audit.impl.RouterAuditEventHandler"
    ]
  },
  "eventHandlers" : [
    {
      "name" : "csv",
      "class" : "org.forgerock.audit.events.handlers.csv.CsvAuditEventHandler",
      "config" : {
        "logDirectory" : "/idm/log/openidm/audit"
      },
      "topics" : [ "access", "activity", "recon", "sync", "authentication", "config" ]
    },

As an alternative, you can disable the writing of audit data altogether by setting the enabled flag to false for the appropriate event handler(s). The following snippet from audit.json demonstrates how to disable file-based auditing.

"eventHandlers" : [
{
"name" : "csv",
"enabled" : false,
"class" : "org.forgerock.audit.events.handlers.csv.CsvAuditEventHandler",
"config" : {
"logDirectory" : "/audit",
},
"topics" : [ "access", "activity", "recon", "sync", "authentication", "config" ]

 

Note: OpenIDM writes its logging data to the local filesystem by default. This is configured in the conf/logging.properties file.

 

5. Open the conf/logging.properties file and redirect OpenIDM logging data to the external folder as follows:

java.util.logging.FileHandler.pattern = /idm/log/openidm/logs/openidm%u.log

 

Note: OpenIDM caches its Felix files in the felix-cache folder beneath the local installation. This is configured in the conf/config.properties file.

 

6. Open the conf/config.properties file and perform the following steps:

 
a. Redirect OpenIDM Felix Cache to the external folder as follows:
 

# If this value is not absolute, then the felix.cache.rootdir controls
# how the absolute location is calculated. (See buildNext property)
org.osgi.framework.storage=${felix.cache.rootdir}/felix-cache

 
b. Define the relative path to the Felix Cache as follows:
 

# The following property is used to convert a relative bundle cache
# location into an absolute one by specifying the root to prepend to
# the relative cache path. The default for this property is the
# current working directory.
felix.cache.rootdir=/idm/cache/openidm

 
After making these changes, this section of config.properties will appear as follows (the new felix.cache.rootdir setting is shown in context):
 

# If this value is not absolute, then the felix.cache.rootdir controls
# how the absolute location is calculated. (See buildNext property)
org.osgi.framework.storage=${felix.cache.rootdir}/felix-cache

# The following property is used to convert a relative bundle cache
# location into an absolute one by specifying the root to prepend to
# the relative cache path. The default for this property is the
# current working directory.
felix.cache.rootdir=/idm/cache/openidm

 

Note: During initial startup, OpenIDM generates a self-signed certificate and stores its security information in the keystore and truststore files as appropriate. This is not possible in a read-only file system, however. As such, you should generate a certificate ahead of time and make it part of your own deployment.

 

7. Update keystore and truststore files with certificate information and an updated password file as appropriate. The process you choose to follow will depend on whether you use a self-signed certificate or obtain one from a certificate authority.
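For example, the following keytool command sketches how a self-signed certificate might be generated ahead of time. It assumes OpenIDM 4’s default JCEKS keystore at security/keystore.jceks, the default openidm-localhost alias, and the default changeit password – verify these (and update the truststore and the security/storepass file) to match your own deployment before relying on it:

$ keytool -genkeypair -alias openidm-localhost -keyalg RSA -keysize 2048 \
    -dname "CN=localhost, O=OpenIDM Self-Signed Certificate" \
    -keystore security/keystore.jceks -storetype JCEKS \
    -storepass changeit -keypass changeit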

 

Note: On Linux systems, OpenIDM creates a process ID (PID) file on startup and removes the file during shutdown. The location of the PID file is defined in both the startup script (startup.sh) and the shutdown script (shutdown.sh); the default location is the $OPENIDM_HOME folder.

 

8. Open the startup.sh script and update the location of the process ID file by adding the following line immediately after the comments section of the file:

OPENIDM_PID_FILE="/idm/run/openidm/.openidm.pid"

9. Repeat Step 8 with the shutdown.sh script.

 

Note: OpenIDM reads configuration file changes from the file system by default. If your environment allows you to update these files during the deployment process of a new release, then no additional changes are necessary. However, if you truly have a read only file system (i.e., no changes even to configuration files), then you can disable the monitoring of these configuration files in the next step. Keep in mind, however, that all configuration changes must then be performed over REST.

 

10. Open the conf/system.properties file and disable the monitoring and subsequent loading of configuration file changes by uncommenting the following line (that is, removing the leading #):

openidm.fileinstall.enabled=false
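With file monitoring disabled, subsequent configuration changes are made over the REST interface. As a sketch, assuming the same default admin credentials used elsewhere in this post, the following would replace the audit configuration with the contents of a local audit.json file:

$ curl -u openidm-admin:openidm-admin -H "Content-Type: application/json" -X PUT -d @audit.json 'http://localhost:8080/openidm/config/audit'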

 
 

The Real Reason Oracle Dropped Sun Identity Manager

July 23, 2015

 

I always appreciate it when someone attempts to educate others about identity management and related technologies. So when I saw the following presentation, it quickly caught my attention, as I was working with both products when the Oracle deal to purchase Sun went down.

 

Why Oracle Dropped Waveset Lighthouse and Went to Oracle Identity Manager (OIM)

 

 

Not to be too nitpicky, but there are quite a few errors in this presentation that I simply couldn’t ignore.

  • OID is not the acronym for “Oracle Identity Management”; it is the acronym for “Oracle Internet Directory” – Oracle’s LDAPv3 directory server. OIM is the acronym for “Oracle Identity Manager”.
  • SIM (“Sun Identity Manager”) was not a “suite of identity manager products” as you state. SIM was a data synchronization and provisioning product. SIM was part of the suite of Sun Identity Management products that also included Sun Directory Server Enterprise Edition (SDSEE), Sun Access Manager/OpenSSO, and Sun Role Manager (SRM).
  • It is stated that one reason that Oracle did not elect to continue with SIM was because it did not have a Web 2.0 UI. SIM version 9.0 (the version being developed when Oracle purchased Sun) did have a Web 2.0 UI. So this is not quite an accurate representation.
  • “Oracle IDM” is Oracle’s suite of identity management products, which includes Oracle Virtual Directory (OVD), Oracle Internet Directory (OID), Oracle Access Manager (OAM), and Oracle Identity Manager (OIM). The presentation uses “Oracle IDM” (and later, simply “IDM”) to refer specifically to Oracle Identity Manager, however. This is both confusing and misleading.
  • It is stated that “IDM allowed faster application on-boarding.” As an integrator of both OIM and SIM, I can honestly say that this is not a true statement. We could have simple SIM deployments up and running on the order of days to weeks and a production deployment in a month or two. OIM consistently took several months to deploy – which was great for a billable professional services firm, but not so great for the customer (who had to pay for those services).
  • It is implied that OIM was able to provision to the cloud and SIM was not, and that this was a reason why Oracle chose to go with OIM. That is misleading, as SIM was able to provision to cloud applications as well. SIM also supported SPML (not a big fan, myself) and SCIM for provisioning to other identity platforms and cloud-based applications.

The main reasons that Oracle chose to go with OIM over SIM were simply the deeper integration with Oracle products and a reluctance to alter the Oracle IDM roadmap. I was on the early calls with Oracle when they announced which products they would keep and which products they were getting rid of. During those calls, they had their “politically correct” reasons as well as the “real” reasons, and it always came back to these two.

There was only one place where I saw Oracle forced into altering its position and updating its roadmap: the SDSEE product. Oracle made it very clear that the only product it wanted in Sun’s identity product line was Sun Role Manager (which later became Oracle Identity Analytics). In fact, only a couple of weeks after the purchase was made, Oracle had already set an end-of-life date for all identity products, including SDSEE. What Oracle hadn’t counted on was how well entrenched that product was across Sun’s major customers (including the US Government and major telcos). It wasn’t until those customers raised an outcry that Oracle “decided” to continue product development.

Purely from a technology perspective, if you are a company that has deployed a wide array of Oracle products, then it made sense to go with OIM due to the deeper integration with those products, but not so much if you are a heterogeneous company. In such cases, I have found other products to be more flexible than OIM and to provide much quicker deployment times at much lower cost.

The Next Generation of Identity Management

March 1, 2015

The face of identity is changing. Historically, it was the duty of an identity management solution to manage and control an individual’s access to corporate resources. Such solutions worked well as long as the identity was safe behind the corporate firewall – and the resources were owned by the organization.

But in today’s world of social identities (BYOI), mobile devices (BYOD), dynamic alliances (federation), and everything from tractors to refrigerators being connected to the Internet (IoT), companies are finding that legacy identity management solutions are no longer able to keep up with the demand. Rather than working with thousands to hundreds of thousands of identities, today’s solutions are tasked with managing hundreds of thousands to millions of identities and include not only carbon-based life forms (people) but also those that are silicon-based (devices).

In order to meet this demand, today’s identity solutions must shift from the corporation-centric view of a user’s identity to one that is more user-centric. Corporations typically view the identity relationship as one between the user and the organization’s resources. This is essentially a one-to-many relationship and is relatively easy to manage using legacy identity management solutions.

[Figure: One-to-many relationship]

What is becoming evident, however, is the growing need to manage many-to-many relationships as these same users actually have multiple identities (personas) that must be shared with others that, in turn, have multiple identities, themselves.

[Figure: Many-to-many relationships]

The corporation is no longer the authoritative source of a user’s identity; it has been diminished to the role of a persona as users begin to take control of their own identities in other aspects of their lives.

Identity : the state or fact of being the same one as described.

Persona : (in the psychology of C. G. Jung) the mask or façade presented to satisfy the demands of the situation or the environment.

In developing the next generation of identity management solutions, the focus needs to move away from the node (a reference to an entry in a directory server) and more towards the links (or relationships) between the nodes (a reference to social graphs).

[Figure: Social graph]

In order to achieve this, today’s solutions must take a holistic view of the user’s identity and allow the user to aggregate, manage, and decide with whom to share their identity data.

Benefits to Corporations

While corporations may perceive this as a loss of control, in actuality it is the corporation that stands to benefit the most from a user-centric identity management solution. Large corporations spend hundreds of thousands of dollars each year attempting to manage a user’s identity, only to find that much of what they have on file is incorrect. There are indeed many characteristics that must be managed by the organization, but many of a user’s attributes extend well beyond a corporation’s reach. In such cases, maintaining accurate data for those attributes is all but impossible without the user’s involvement.

Take for instance a user’s mobile telephone number; in the past, corporations issued, sponsored, and managed these devices. But today’s employees typically purchase their own mobile phones and change carriers (or even phone numbers) on a periodic basis. As such, corporate white pages are filled with inaccurate data; this trend will only increase as users continue to bring more and more of themselves into the workplace.

Legacy identity solutions attempt to address this issue by introducing “end-user self-service” – a series of Web pages that allow users to maintain their corporate profiles. Users are expected to update their profiles whenever a change occurs. The problem with this approach is that users update their profiles selectively and, in some cases, purposely supply incorrect data (in order to avoid after-hours calls). The other problem is that it still adheres to a corporate-centric, corporate-owned identity mindset. The truth is that users’ identities are not centralized; they are distributed across many different systems both in front of and behind the corporate firewall, and while companies may “own” certain data, it is the information that the user brings from other sources that is elusive to the company.

Identity Relationship Management

A user has relationships that extend well beyond those maintained within a company and, as such, has core identity data strewn across hundreds, if not thousands, of databases. The common component in all of these relationships is the user. It is the user who is in charge of that data, and it is the user who elects to share their information within the context of those relationships. The company is just one of those relationships, but it is the one for which legacy identity management solutions have been written.

Note: Relationships are not new, but the number of relationships that a user has and types of relationships they have with other users and other things is rapidly growing.

Today’s identity management solutions must evolve to accept (or at a minimum acknowledge) multiple authoritative sources beyond their own. They must evolve to understand the vast number of relationships that a user has, not only with other users but also with the things the user owns (or uses), and they must be able to provide (or deny) services based on those relationships and even the context of those relationships. These are lofty goals for today’s identity management solutions, as they require vendors to think in a whole new way, implement a whole new set of controls, and come up with new and inventive interfaces that scale to the order of millions. To borrow a phrase from Ian Glazer, we need to kill our current identity management solutions in order to save them, but such an evolution is necessary for identity to stay relevant in today’s relationship-driven world.

I am not alone in recognizing the need for a change. Others have come to similar conclusions, and this has given rise to the term Identity Relationship Management (or IRM). The desire for change is so great, in fact, that Kantara has sponsored the Identity Relationship Management Working Group, of which I am privileged to be a member. This has given rise to a LinkedIn group on IRM, a Twitter feed (@irmwg), various conferences either focused on or discussing IRM, and multiple blogs, of which this is only one.

LinkedIn IRM Working Group Description:

In today’s internet-connected world, employees, partners, and customers all need anytime access to secure data from millions of laptops, phones, tablets, cars, and any devices with internet connections.

Identity relationship management platforms are built for IoT, scale, and contextual intelligence. No matter the device, the volume, or the circumstance, an IRM platform will adapt to understand who you are and what you can access.

Call to Action

Do you share similar thoughts and/or concerns?  Are you looking to help craft the future of identity management?  If so, then consider being part of the IRM Working Group or simply joining the conversation on LinkedIn or Twitter.

 

Trust – The Missing Ingredient

November 18, 2011

I was having a conversation with friends the other day and while it may sound nerdy as hell, the topic was focused on identity.  I swear (trust me) that no drinks were involved but the conversation went pretty deep, nonetheless.  What is identity, how is it used, and how can it be protected?  Like Aristotle and Plato before us, we modern day philosophers discussed the various aspects that make up our identity, how we can control it, and how we can selectively share it with our intended audiences.  In an era when our private information has been unleashed like the proverbial opening of Pandora’s Box, how can we regain control of our identities without impacting our existing relationships or experiences?

But what about identity?  What is it really, and why should you care?

When I think about identity, I think in terms of aggregation, management, and sharing. Each of these is a key ingredient when it comes to users owning their own identities, but each can be further strengthened when we add trust to the mix. So, what is the recipe for success as it pertains to trusting identities in cyberspace? Let’s take a closer look at each of these ingredients to see.

Aggregation

My identity is the aggregation of all the things there is to know about me. One could trivialize this by saying it is simply all the discrete data elements about me (i.e., hair color, height, SSN, etc.), but in essence it is much more. It consists of my habits, my history, my data, my relationships – basically everything that can be me and everything that can be tracked about me. Identity information is not found in a single location; it is distributed across multiple repositories, but this information can be aggregated into a virtual identity – which is, essentially, me.

Management

When we allow someone to manage their own identity, we are allowing them to control their discrete data elements, but we are also allowing them to manage every other aspect about themselves as well.  You can change your mobile number attribute (data element) when you get a new phone, or you can change your address attribute when you move.  But just like you can remove the cache, history, and cookies in your browser, you should be able to maintain your privacy by removing (or hiding) your identity characteristics as well.  Identity management simply means that I am able to manage those aspects of my identity that are my own.

Sharing

In real life, I have the ability to select which characteristics and/or information about myself I want to share with each of my friends, family, co-workers, or acquaintances. My work-related benefits stay private between my boss and me in the workplace. Conversely, I don’t share my family conversations within the office. Investment information stays private between my broker and me, yet I tweet favorite quotes to the world. In essence, I selectively share information with different audiences based on the role I am playing at that time. Online personas facilitate the same selective sharing within the social web, similar to our interactions in the real world. I may take on a different persona as I interact in the virtual world and elect to share different information with each audience based on where I elect to use that persona. This also means that I can act anonymously if I so choose (which is similar to going ‘incognito’ in your browser).

Trust

Sharing data with others fulfills my desire to communicate information about me to you, but just like in real life, it is totally your option to accept the validity of that information or not. To take sharing to the next level (and address a major need on the Internet today), we need some method of trusting the information that we receive. Trust is transient (it changes), contextual (it is based on the situation), and 100% given by the receiving party – essentially, they decide whether to trust you or not. In the real world, we use driver’s licenses, passports, or referrals from friends to validate users and establish trust. It is no different in the social web, except for the fact that we are not seeing each other face to face and do not have the ability to present a driver’s license as proof of identity. Hence the need for another method.

If the ingredients in the identity cake are aggregation, management, and sharing, then validation is the icing on the cake, not the cake itself. While each of these ingredients is key to making the perfect cake, leaving trust out of the mix is kind of like leaving salt out of the recipe. Trust simply brings out the flavor, and without it the cake is way too bland!

Is Your Intellectual Property Slipping Out the Door with Their Pink Slip?

December 16, 2010

(I wrote the following article for BABM Business Magazine back in May/June of 2009. The article is reprinted here with their permission.)

With the latest layoff news continuing to add chaos to the economy, CEOs need to protect their businesses in case of staff cuts, restructuring or consolidation of offices. While your company may not be planning layoffs now, there is no guarantee that in three or six months from now this will be the case. There are steps your business should take, both proactively and reactively, to ensure that your most valuable information such as customer data and contracts isn’t walking out the door with terminated employees.

Ideally, even before layoffs occur, businesses need to be prepared to protect their assets. Employees may sense a layoff is imminent and start grabbing what data they can before they get the official word. This could lead to a loss of your company’s most valuable contacts that former employees may use to compete against you. Proactive monitoring of systems, before layoffs begin, can ensure that your company’s data is protected.

There are a variety of technologies you can implement to monitor your employees’ access to specific applications. For example, you can monitor who has access to what type of database and determine whether an employee is running unusual reports. Are certain employees extracting every field, downloading the data to a local disk, and/or sending it to themselves over email?

Having a solid process for role provisioning and access management will help limit access of certain information to those people who need it to do their jobs. If levels of access to various applications and corporate information are assigned for each job description, it is easier to set up monitoring systems for each employee as well as protocols for changing passwords and other termination procedures to remove access when an employee is let go.

A good rule of thumb is to trust, but verify. Monitoring can be performed at many levels and includes database access, disk usage, and whether or not USB drives are being plugged into company computers. Monitoring can even determine if proprietary data is being sent to an email account. When it comes to access management and monitoring, CEOs and executive management need to weigh how much protection they want against how much protection they can afford. It’s a formula that will vary for every company.

Once a company is in an action stage and layoffs are about to begin, it’s almost too late to protect and secure its data without shutting off access altogether (which may not be feasible in all cases). As a fallback plan, many companies provide their security team with a list of users they plan to let go. On the morning the layoffs are to take place, the team is tasked with acting on the list and locking out those employees from their accounts. But there’s often the lingering feeling that something was missed. Are they prevented from accessing your systems remotely? Are they still receiving their email on their home PCs? Does the employee have access to vendor accounts? Can your security team effectively map the employee to all the accounts they have accumulated over the years?

There are many types of technologies that can be used from a proactive perspective and subsequently verified from a reactive perspective. CEOs should be proactive and have an effective user provisioning solution in place. This ensures that they have accounted for all the systems and the types of system access where a user has an account. Once layoffs have occurred companies should continue monitoring mission critical systems to ensure that the access has been terminated appropriately. A security event monitoring solution on the back end can monitor log files or traffic patterns to these systems and immediately notify of any unusual activity.

Companies that have implemented centralized account management systems have peace of mind that they can quickly prevent access by employees who are no longer associated with the company. They can be certain that they have locked all accounts being managed by the system and actions such as terminations can be performed by management (ahead of time) rather than needing to involve people from the security team.

Companies that have not implemented a centralized account management system are increasing their workload and effectively putting valuable corporate assets at risk. At that point, due diligence is essential, as these tasks have to be performed manually. The potential for damage is great, and the fallout will rise exponentially as more layoffs occur. If you have implemented a centralized user provisioning system, congratulations! If not, don’t panic; there are still tasks you can perform to help protect your assets.

    1. Prepare your list well in advance and give your security team a chance to locate the various user accounts.
    2. Work with functional managers, supervisors, or project managers to further determine the user’s access.
    3. Monitor system logs and network traffic to determine if any unusual access or traffic patterns appear. Respond immediately.

 

Even with this type of preparation, the tasks can be quite time consuming and it could take weeks to properly locate and delete access. Hence, our advice is that it’s better to take more proactive steps to avoid headaches and possible customer data and other business asset loss later on. Getting a handle on your role provisioning and user access procedures and having a plan for monitoring employee application use are good places to start.

Staff reduction is never easy and you should make the separation as painless as possible. It is unfortunate that some employees view corporate assets as their own and feel entitled to take information with them when they leave. As a business owner responsible to shareholders or even to the remaining workforce, you need to take every action possible to ensure the protection of this data.

Advice to CIOs for High Exposure Projects

September 14, 2009

I read an article in CIO Magazine about the plight of today’s CIOs when multi-million dollar, multi-year projects go awry. The article, entitled “The CIO Scapegoat”, indicates that it is unfair to hold the IT department completely responsible when there are so many other business units that contribute to a project’s demise. In many cases, the CIO takes the fall for the failure and, as a direct result, is either demoted, moved into a different organization, or let go altogether.

The article goes on to provide advice to CIOs who are beginning such undertakings. First and foremost, large, complex projects should be broken up into “bite-sized” chunks and proper expectations of what will be delivered in each “mini-project” should be set – and agreed upon – with the various stakeholders.

I could not agree more with this statement and find it most concerning that this is not a more common practice within the IT industry. In our world of rapid prototyping (turned production) and just-in-time development, to think that you could perform a multi-year project without implementing several checkpoints along the way is simply insane. This may be one of the reasons why the average tenure of a CIO is only two years within the same company.

CIOs who agree to perform projects under such conditions really need to read my previous blog post entitled “Lessons Learned from Enterprise Identity Management Projects”. While it was written mainly for enterprise identity projects, it has direct applicability to any enterprise project. In that article, I directly address specific points about expectation setting and bite-sized chunks (did CIO Magazine read my blog on this?), and by taking my advice to heart, the average CIO might be able to extend their stay.