Cloud security

As enterprises move forward with identity and security, eventually they will come to the Crossroads (a nod to all the blues fans out there). They will have to face moving some or all of their applications into hosted cloud services, and that will raise the question of how safe the cloud is and how identity plays into the mix.

Most of my clients already have some assets in the cloud and are moving more and more resources off prem, so they have already had to wrestle with this issue. The biggest fear is the identity and security of applications that were previously secured and locked down in the private data center and are now riding in a shared-tenant data center. How can that be safe?

Well, it is. My eureka moment came about two years ago while I was listening to a discussion of database security in the cloud. I also cover database security in my job, and customers had concerns about putting PII, PHI, PCI, and SOX-controlled data into a shared database service.

The physical migration is simple. Download a small utility that connects to the Oracle Database, encrypts the database contents, and exports them to a file. That file is uploaded to the database cloud service and loaded into its own pluggable database (which shares a common container database with other clients), where it is stored in encrypted form.

Just by moving that data to the cloud, security measures are imposed. The data is encrypted where it may not have been before. It sits in a secured pluggable container. And the cloud database administrators manage not only this database but all databases for all customers, so they are focused on implementing and adhering to secure practices.

The cloud hosting company has to have its security act together: it is hosting not only your assets, but possibly your competitors'. The hosting company has already invested in background checks for its employees, installed man traps, and actively updates its perimeter firewalls. It has proven provisioning and deployment techniques that keep everything secure.

So just by moving to the off-prem cloud, the security imposed on the database data may well have improved. You and your team can now go focus on building better apps and providing better services to the business units.

The same goes for identity in the cloud. It has to be safer, because it has to be. Why try to keep up with all the latest security issues on premises when your cloud hosting provider does it for a living?

It will take a while to convince yourself that identity and security are actually better with a cloud hosting service than when trying to do it all in-house. Then you can step over the crossroads to the other side.


Keep Calm, he's back

Time flies when you are having fun. Or extremely busy. Or a legendary procrastinator.

I noticed that this blog, which I keep putting on my to-do list to update, is going on three years stagnant. It's not just because I am lazy; things just got busier and busier. Sell the house and move to Florida, take on new coverage territories, learn new products, and expand coverage of identity and security into mobile and the cloud.

Not lazy, just busy. Very busy. Since the breaches at Home Depot, Target, Sony, etc., security and identity have now earned a seat at the big boy/girl table in the enterprise. I now have Platinum status with several travel services.

But now is the time to blow the dust off this blog and put it back in motion.   Customers are migrating to generation 2 mobile applications and considering moving identity and security functions to the cloud.  Lots to talk about.  This time I promise to keep this updated regularly.

Good to have you back.


TRESPASSING

In the maturity model of identity management, we have been through many stages of evolution and may be on the brink of the final stage: context-aware identity management. First it was centralized authorization (directory), then authentication (access management, SSO, STS, federation), identity management (provisioning), roles (RBAC), auditing/regulatory (compliance and analytics), privileged account management, and finally centralized policy management (entitlements servers).

The final frontier, once you have mastered all of the above, is context-aware identity management. The user may have all the rights and privileges to access the resources, but they are doing so in an abnormal way. Call it behavioral security. My classic example: a company's CIO may have access to a sensitive data room and may even have a badge that grants access to the data center floor, but one has to ask why the CIO is entering that sensitive data center at 2 AM. A member of the cleaning staff, however, might have the same privileges and would be expected in at 2 AM to do their cleaning.

So it's all about context: having the right credentials, using them in the manner expected, and flagging when they are used atypically. As of this writing, Bradley Manning is awaiting sentencing for releasing 700,000 classified documents to WikiLeaks. What many miss in this sad adventure is that Pvt. Manning did not "hack" his way into the systems containing this information. He was hired/recruited, trained, authorized to have sensitive access, and had his access vouched for in several compliance reviews. The one question nobody asked was why a low-level private with clearance was downloading hundreds of thousands of files at one time. His behavior with his access level should have sent up warning signs.
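To make that concrete, here is a minimal Java sketch of the idea; the class name, baseline figures, and alert threshold are invented for illustration and do not describe any particular monitoring product:

    import java.util.HashMap;
    import java.util.Map;

    public class DownloadMonitor {
        // Rolling average of documents a user downloads per day (assumed baseline).
        private final Map<String, Double> dailyBaseline = new HashMap<>();
        // How far above baseline we tolerate before flagging (illustrative assumption).
        private static final double ALERT_MULTIPLIER = 10.0;

        public void recordBaseline(String userId, double avgDocsPerDay) {
            dailyBaseline.put(userId, avgDocsPerDay);
        }

        // Returns true when today's volume is wildly out of line with history.
        public boolean isAnomalous(String userId, long docsDownloadedToday) {
            double baseline = dailyBaseline.getOrDefault(userId, 50.0);
            return docsDownloadedToday > baseline * ALERT_MULTIPLIER;
        }

        public static void main(String[] args) {
            DownloadMonitor monitor = new DownloadMonitor();
            monitor.recordBaseline("analyst01", 40.0);          // normally ~40 docs/day
            System.out.println(monitor.isAnomalous("analyst01", 35));      // false
            System.out.println(monitor.isAnomalous("analyst01", 700_000)); // true
        }
    }

The point is not the arithmetic; it is that the credentials are valid in both cases, and only the behavior separates them.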

This type of behavioral monitoring has been around for years and has found some success, particularly in the financial sector. Banks and investment firms have employed adaptive access management tools that work with the single sign-on front ends to their web sites. You have probably seen them when your bank shows you a confirmation picture first and asks you to set up security questions. What you may not know is that the software also "fingerprints" your system (OS type/version, MAC address, IP location, etc.) and starts building a profile of how you access your account. If you do anything out of the ordinary, it may ask you who your second-grade teacher was, even though you presented the correct user ID and password. Try logging into your bank from your mother-in-law's computer in Florida on your next visit, and you will most likely have to answer some additional security questions, because the bank needs to ensure it is really you.
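As a rough illustration of that fingerprint-and-step-up idea, here is a toy Java sketch; the fields compared and the "two mismatches" threshold are assumptions made for illustration, not how any vendor's adaptive access product actually scores risk:

    import java.util.Objects;

    public class AdaptiveAccessCheck {
        record Fingerprint(String osVersion, String browser, String geoRegion) {}

        // Step-up authentication is required when the device looks unfamiliar.
        static boolean requiresStepUp(Fingerprint enrolled, Fingerprint current) {
            int mismatches = 0;
            if (!Objects.equals(enrolled.osVersion(), current.osVersion())) mismatches++;
            if (!Objects.equals(enrolled.browser(), current.browser()))     mismatches++;
            if (!Objects.equals(enrolled.geoRegion(), current.geoRegion())) mismatches++;
            return mismatches >= 2;  // arbitrary risk threshold for illustration
        }

        public static void main(String[] args) {
            Fingerprint home   = new Fingerprint("Windows 10", "Chrome", "Ohio");
            Fingerprint inLaws = new Fingerprint("macOS 13", "Safari", "Florida");
            System.out.println(requiresStepUp(home, home));    // false - familiar device
            System.out.println(requiresStepUp(home, inLaws));  // true - ask the security question
        }
    }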

So buried deep in the latest release of Oracle Entitlements Server (I try not to thump my company's products, but this is the only software I know of that can do this at this point) is the ability to make your enforcement policies context aware. The enforcement policies can look at more than just job title and role; they can also look at location, device, time, organization, etc. to make a more informed decision on whether to grant access.

It may be okay that you have privileged access to mission-critical servers to issue a reboot command, but, ya know, we are just not going to allow you to do that if you came in through an external firewall using your iPad. Just not going to happen. You need to be onsite in the data center to do that.
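A hedged sketch of the kind of decision such a policy makes is below. This is not the Oracle Entitlements Server API; the request attributes and the rule itself are invented to show how role and context combine:

    public class ContextAwarePolicy {
        enum Network { INTERNAL, EXTERNAL }

        record AccessRequest(String role, String action, Network network, String deviceType) {}

        static boolean isPermitted(AccessRequest req) {
            boolean hasPrivilege = req.role().equals("sysadmin") && req.action().equals("reboot");
            // Context: privileged actions only from the internal network on a managed workstation.
            boolean trustedContext = req.network() == Network.INTERNAL
                    && req.deviceType().equals("workstation");
            return hasPrivilege && trustedContext;
        }

        public static void main(String[] args) {
            System.out.println(isPermitted(
                new AccessRequest("sysadmin", "reboot", Network.INTERNAL, "workstation"))); // true
            System.out.println(isPermitted(
                new AccessRequest("sysadmin", "reboot", Network.EXTERNAL, "iPad")));        // false
        }
    }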

It is particularly helpful when several users have access to the same system but need to be limited in what they can see. I just saw a killer demo a few weeks back where users of a military application see a map of the world showing the location of current deployments, but the data is filtered so each user can only see resources in their theater of operation. Users in Europe can only see assets based in Europe, users in Africa only see assets based in Africa, and so on. All controlled centrally with one access policy.
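Conceptually, the filtering half of that demo boils down to something like the following toy sketch (the asset names and theater labels are invented for illustration):

    import java.util.List;

    public class TheaterFilter {
        record Asset(String name, String theater) {}

        // Return only the assets a user in the given theater is allowed to see.
        static List<Asset> visibleTo(String userTheater, List<Asset> allAssets) {
            return allAssets.stream()
                    .filter(a -> a.theater().equals(userTheater))
                    .toList();
        }

        public static void main(String[] args) {
            List<Asset> assets = List.of(
                new Asset("Supply depot", "EUROPE"),
                new Asset("Field hospital", "AFRICA"));
            System.out.println(visibleTo("EUROPE", assets)); // only the Europe-based asset
        }
    }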

Getting to this level of context-aware security is not easy, and it represents what I believe is the final frontier in online security: the ability not only to control access and authorization, but to understand when and how the proper credentials are being used. Remember that the most damaging breaches are by insiders who have proper access.

Patches Required

So we get this call from a customer who has been using our identity software for some time now, with a huge complaint that there was a security hole in our software that left them exposed for several years. Needless to say, they were not happy.

Why had we, the vendor, not made them aware of the security hole, sent them a patch, and ensured it was fixed?

After all, are they not paying a not-so-insignificant sum in support costs? The software they have deployed is an older version, but it is still under active support. They demanded we patch their version of the software, as they did not want to go through an upgrade in production, and insisted we were bound by the support agreement to do so.

Well, we have been spending the last several weeks trying to show them the error of that approach.

There are several issues here, but first some background. The software they deployed was a slightly older version of 11g, delivered in mid-2010; the current release is a newer 11g version, and the difference shows up in the five-part version numbers. The devil, as they say, is in the details.

To read these version numbers, let's go left to right. The "11" indicates the major version, 11g (versus 10g), so both are 11g. The next number is the family of software, which rarely changes. The third number ("1" in the deployed version, "2" in the current one) is the major release, so the first is 11g R1 and the second is 11g R2. A new major release usually brings significant changes, including new features and functions. The two are not interchangeable, and moving between them requires a formal software upgrade procedure.

The fourth number is the sub-release within that software version, with cumulative bug fixes and improvements. The last number is the bundle patch (BP) level for that version.
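Decoding a version string under this scheme is mechanical. Here is a small Java sketch; the sample version string is made up purely for illustration:

    public class VersionDecoder {
        public static void main(String[] args) {
            String version = "11.1.2.3.4";   // hypothetical example
            String[] parts = version.split("\\.");
            System.out.println("Major version : " + parts[0] + "g");
            System.out.println("Family        : " + parts[1]);
            System.out.println("Major release : R" + parts[2]);
            System.out.println("Sub-release   : " + parts[3]);
            System.out.println("Bundle patch  : BP" + parts[4]);
        }
    }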

Our customer was running the base release, with a bundle patch level of zero, meaning the first, unpatched release of that particular version of the software. They spent many hours integrating, developing, and rolling out the software into production. That was in mid-2010. But a flaw was discovered later that year, a shortcut left by a developer to aid testing, that would allow someone to circumvent the login process if they knew how. A major hole, yes, for us and for the now ex-developer. It was quickly repaired in a patch and rolled into the second bundle patch, BP2. This was in late 2010.

This is where the story gets interesting (I hope). We did not broadly announce a security issue we found with one customer, as that would immediately put the rest of our customers at risk with a zero-day flaw. We do quickly release a tested patch that corrects the issue and notify our support customers through their support contacts to apply it as soon as possible, without tipping off the bad guys. Then we roll the patch into the next bundle patch, in this case BP2.

Bundle patches are collections of patches (the goal is 20 to 60 or so) rolled together and tested so they do not break the current software. Most of the time they are cumulative. However, our customer chose the path of least resistance (or least resources required) and did not implement a patch process for their production environment, nor did they test any updates to the software that were released. Thus they ran for the better part of three years with a major hole in their public website.

It was only when a new project person looked at the release notes for the latest bundle patch (BP4) that they saw this flaw had existed. That is when things got screwy. The customer wanted us to back-port the single patch for the flaw to their base release, as it was still under active support. We recommended they apply the patches up through BP4 to at least benefit from all of the fixes we have implemented over the last three years. They consider that an upgrade and say we are not supporting our product. We are. We fixed the problem over two years ago.

Here is the flaw in their logic. First, if we did the one-off fix, they would have a unique production deployment. No other customer would be running that release with a solo patch on it, so it would complicate the support effort going forward. Second, the customer would still be flying in production with the initial release of the software. Given an estimated 50 fixes per bundle patch, four bundle patches mean roughly 200 things have been fixed and tested together. The customer might fix the one issue they are concerned with in production, but they would still run into other software glitches that have already been fixed. We would fix one issue, but they would still be 199 fixes behind.

One other quick tidbit: the grace period. As a vendor rolls out an update or patch, the grace period is the time the vendor allots for its customers to migrate to the newer version (not a new release). It takes into consideration that it could take up to a year to apply a patch, so the older version is kept on active support in the meantime. So if BP3 comes out, BP2 falls into the grace period (usually one year) before active support ceases and customers must move to the newer version. Note that if BP4 comes out within nine months, BP2's grace period continues for another three months, i.e. one year after BP3 was released, and BP3's own grace period clock starts ticking the day BP4 comes out.
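A worked example, with invented release dates, makes the rule easier to follow: a bundle patch's grace period ends one year after the next bundle patch ships.

    import java.time.LocalDate;

    public class GracePeriod {
        // A bundle patch stays in its grace period for one year after the NEXT bundle patch ships.
        static LocalDate graceEnds(LocalDate nextBundlePatchReleased) {
            return nextBundlePatchReleased.plusYears(1);
        }

        public static void main(String[] args) {
            LocalDate bp3Released = LocalDate.of(2012, 3, 1);  // hypothetical date
            LocalDate bp4Released = LocalDate.of(2012, 12, 1); // hypothetical, nine months later
            System.out.println("BP2 grace period ends: " + graceEnds(bp3Released)); // 2013-03-01
            System.out.println("BP3 grace period ends: " + graceEnds(bp4Released)); // 2013-12-01
        }
    }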

Once out of the grace period, active support for that particular BP ceases. In our customer's case, BP0 was well past its grace period, so technically the lawyers would argue we were not obligated to actively support it. We still actively support the software, but only the latest bundle patch (BP4 in this case) and any BP3 installations living out their grace period. BP0, BP1, and BP2 are considered deprecated patch levels.

So here are the important points and learnings from all of this:

  1. The vendor must supply fixes and patches to the software as bound by the support contract, but it is up to the customer to stay aware of the releases and apply patches in a timely fashion.
  2. All projects must include resources to maintain patch levels.
  3. Bundle patches are usually cumulative and only the latest one needs to be applied. Usually.
  4. When notified of a vendor patch set release, someone on the customer side must invest the time to investigate the bugs and the patches and determine if any apply to the currently deployed stack. If so, it should be applied in a timely fashion.  If it does not address a particular combination of software currently being used, only then can the decision be made to forgo the update.
  5. At a minimum, patch grace periods should be noted (see vendor support documentation).  If current software falls out of the grace period, support may not be able to help, and the customer may have to apply released bundle patches first if they run into a problem in a deprecated version of the code.
  6. There is a benefit to applying bundle patches: they usually contain several dozen patches that have been tested together, so one avoids running into a problem someone else has already hit.
  7. Do not expect the vendor to shout from the roof tops any major security issues fixed. It gives the bad guys too much information on the rest of the customer install base.
  8. Doing nothing year after year will only lead you into trouble.

Remember, this is security and identity software, so you need to make sure patches and updates are reviewed and applied in a timely manner.

Got a question from a customer about whether it was a good idea to drop their LDAP directory in favor of a NoSQL repository. For what they needed to do, they felt it would free them to have a more flexible architecture. This follows another client request wishing our directory products were based on a NoSQL data repository.

For those not keeping up with data repository trends, SQL has long dominated as the standard way to retrieve data from a relational database (dumb statement of the month). In theory, it abstracts the underlying repository, so SQL can be used to pull information out of any vendor's compliant product. It works great for joining information together, such as transactions, from the underlying tables.

But not all data fits neatly into this SELECT, JOIN, table paradigm, including directory objects, where a more object-oriented approach to data storage is the right way to go. inetOrgPerson, the de facto standard for storing information about a user in a directory, is actually made up of several object classes that inherit from one another. To paraphrase Genesis, top begat person begat organizationalPerson begat inetOrgPerson, or something like that. Building an SQL statement that could retrieve or store all the information contained in an inetOrgPerson object would be complicated and complex, using at least three joins.

The SQL statement would be complex to build and would require the SQL engine to decompose it during execution: overhead to build the statement, overhead to process it. That kills performance, which is the enemy of a good directory.

Instead, the developer should just be able to build the inetOrgPerson, trigger its serialization into the data repository, and let the database figure out how to store the information. Much simpler and, most importantly for a directory, much faster.
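For example, with nothing more than plain JNDI you can hand a whole inetOrgPerson entry to the directory and let it worry about the storage. This is only a sketch; the host, credentials, and DNs below are placeholders:

    import javax.naming.Context;
    import javax.naming.NamingException;
    import javax.naming.directory.*;
    import java.util.Hashtable;

    public class AddInetOrgPerson {
        public static void main(String[] args) throws NamingException {
            Hashtable<String, String> env = new Hashtable<>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
            env.put(Context.PROVIDER_URL, "ldap://ldap.example.com:389");   // placeholder
            env.put(Context.SECURITY_PRINCIPAL, "cn=Directory Manager");    // placeholder
            env.put(Context.SECURITY_CREDENTIALS, "password");              // placeholder
            DirContext ctx = new InitialDirContext(env);

            // Build the entry as one object; the directory decides how to store it.
            Attributes attrs = new BasicAttributes(true);
            Attribute objectClass = new BasicAttribute("objectClass");
            objectClass.add("top");
            objectClass.add("person");
            objectClass.add("organizationalPerson");
            objectClass.add("inetOrgPerson");
            attrs.put(objectClass);
            attrs.put("cn", "Jane Doe");
            attrs.put("sn", "Doe");
            attrs.put("mail", "jane.doe@example.com");

            ctx.createSubcontext("uid=jdoe,ou=People,dc=example,dc=com", attrs);
            ctx.close();
        }
    }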

So to get back to the original question, should they pull the LDAP directory out and go with a NoSQL repository directly?

My answer is the typical consulting “it depends”.  If this is a stand alone application and all you need to do is store some key-value pairs, it will be hard to argue against going with the NoSQL repository.  But I would suggest you think long and hard about dropping the LDAP directory layer.

For two reasons. One, LDAP brings more than just data storage.  It has years of standards built into it, such as the inetOrgPerson and the ACI rules and policies to access of the objects.  That would have to be recreated at the database level. Second is the fact that many of the newer LDAP directories out there already are adopting object oriented repositories underneath. One of the main reasons my company Oracle came out with a new directory, Oracle Unified Directory(OUD), is to take advantage of the embedded Berkeley DB Java Edition technology.  As OUD is written in pure Java, it made the underlying repository, and thus the overall directory, faster, more robust, and highly scalable.

Good news is you can take advantage of this NoSQL storage approach, but still treat the directory as an LDAP repository, easing migration and upgrade costs.

Cool Graphic on Data Breaches

Cool little information graphic on recent information breaches.  

I know several of these companies and the troubles they ran into. One recurring common trait among them all is that they did not have a security mentality before the breaches occurred. They trusted too many people, both internally and externally. Some had even bought our security software and still had not implemented it nearly a year later.

Cool graphic, disturbing information.


An interesting question came up on an internal database security alias that I thought should be shared.

The question, posed on behalf of a database customer, was around seeding random-number routines. Is it more secure to provide a seed to the random number generator routines or not? The question was extended: if seeding the routines is more secure, what new security issues are introduced by storing the seed values used? To seed or not to seed, that is the question.

In this particular case, the answer is NOT to provide a seed to the database random routines in production. Seeding is similar to the cryptographic concepts of salts and even nonces, but within databases it is used slightly differently. In the Oracle DB world, generating random numbers is the domain of the DBMS_RANDOM package, whose documentation specifically states it should not be used for cryptography.

Like salts and nonces, a seed is used to add some entropy to the generation of random values. Computers, for all their advanced capabilities, are still pseudo-random generators, not true random generators: given the same input, they will generate the same output. So, to add some complexity to the random number generation and make it appear closer to a true random number, the DBMS_RANDOM routines allow the developer to "seed" any random number calculation. One would therefore think that adding a seed to the generation would make the overall system more secure, because the seed would enhance the randomness of the pseudo-random generation.

But by supplying this seed to the calculation, the developer introduces an added security complexity: where to store the "seed" securely between calculations? It is in effect a secret key if it is used (inappropriately, I might add) to encrypt or protect information in the database. Even with the documentation saying not to do this, we have seen it done again and again.

What most developers miss is that DBMS_RANDOM is a random-number package, not an encryption package. If one does not supply a seed to the random number generator, the routine creates a seed on its own, based on the user ID, date, and process ID. Not a truly random seed, but it does add some entropy to the generation.

In fact, the reason the routines allow the code to pass in a seed, instead of generating one, is to allow testing. By passing the same seed to the random number generator, you get the same sequence of numbers back, which allows comparisons across runs. As long as the seed remains fixed (non-null), the random number generator will pass back the same values. So the thought that adding a seed improves randomness is backwards: it actually fixes the generator to produce the same output, not a random one. Plus, you now have the security problem of where to store the seed.
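The same behavior is easy to demonstrate outside the database. This Java sketch is only an analogy to DBMS_RANDOM (it is not the PL/SQL package): a fixed seed makes the "random" sequence completely repeatable, which is exactly what you want for testing and exactly what you do not want in production:

    import java.security.SecureRandom;
    import java.util.Random;

    public class SeedDemo {
        public static void main(String[] args) {
            Random fixedSeed1 = new Random(42L);
            Random fixedSeed2 = new Random(42L);
            // Same seed, same sequence - run after run.
            System.out.println(fixedSeed1.nextInt(1000)); // identical to the next line
            System.out.println(fixedSeed2.nextInt(1000));

            // For anything security-related, use a cryptographically secure
            // generator and let it seed itself from the platform's entropy source.
            SecureRandom secure = new SecureRandom();
            System.out.println(secure.nextInt(1000));
        }
    }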

So the answer is NOT to seed the routines in DBMS_RANDOM, but to let the package generate its own "random" seed when called. Remember, this discussion is about the implementation of one random number generation routine in an Oracle database package and should not be blindly applied to other random number generators; be dutiful about reading the seeding documentation. Also, be aware that if you are using random numbers for cryptographic encryption, you need to be sure the routines are strong enough to ensure the resulting security is trustworthy. You need to use cryptographically secure routines; not all pseudo-random routines are designed with that in mind.