Archive for the ‘security’ Category

Time flies when you are having fun. Or when you are extremely busy. Or when you are a legendary procrastinator.

I noticed that this blog, which I keep putting on my to-do list to update, is going on three years stagnant. It's not just that I am lazy; things got busier and busier: selling the house and moving to Florida, taking on new coverage territories, learning new products, and expanding coverage of identity and security in mobile and the cloud.

Not lazy, just busy. Very busy. Since the breaches at Home Depot, Target, Sony, and others, security and identity have earned a seat at the big boy/girl table in the enterprise. I now have Platinum status with several travel services.

But now is the time to blow the dust off this blog and put it back in motion. Customers are migrating to second-generation mobile applications and considering moving identity and security functions to the cloud. There is a lot to talk about. This time I promise to keep this updated regularly.

Good to have you back.



Read Full Post »

In the maturity model of identity management, we have been through many stages of evolution and may be on the brink of the final stage: context-aware identity management. First came centralized authorization (directory), then authentication (access management, SSO, STS, federation), identity management (provisioning), roles (RBAC), auditing/regulatory (compliance and analytics), privileged account management, and finally centralized policy management (entitlements servers).

The final frontier, once you have mastered all of the above, is context-aware identity management. The user may have all the rights and privileges to access the resources, but they may be doing so in an abnormal way. Call it behavioral security. My classic example: a company's CIO may have access to a sensitive data room and may even have a badge that grants access to the data center floor, but one has to ask why the CIO is entering the sensitive data center at 2 AM. A member of the cleaning staff, however, would have the same privileges and would be expected in at 2 AM to do their cleaning.

So it's all about context: having the right credentials, using them in the manner expected, and flagging when they are used atypically. As of this writing, Bradley Manning is awaiting sentencing for releasing 700,000 classified documents to WikiLeaks. What many miss in this sad adventure is that Pvt. Manning did not "hack" his way into the systems containing this information. He was hired, trained, authorized to have sensitive access, and had that access vouched for in several compliance reviews. The only question nobody asked was why a low-level private with clearance was downloading hundreds of thousands of files at one time. His behavior, given his access level, should have sent up warning signs.
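The idea can be sketched in a few lines. This is a toy illustration, not any vendor's API; every name, work window, and threshold below is invented. Entitlement answers whether access is possible at all; context decides between allowing and flagging.

```python
# A toy sketch, not any vendor's API: every name, window, and threshold below
# is invented for illustration.

def in_window(hour, start, end):
    """True if hour falls inside the window; handles overnight spans like (22, 6)."""
    return start <= hour < end if start < end else (hour >= start or hour < end)

def decide(user, resource, hour, files_requested=0):
    """Entitlement decides deny vs. possible; context decides allow vs. flag."""
    if resource not in user["entitlements"]:
        return "deny"                                # no entitlement at all
    normal_time = in_window(hour, *user["work_window"])
    bulk = files_requested > user["bulk_threshold"]  # mass-download red flag
    return "allow" if normal_time and not bulk else "flag"

cio     = {"entitlements": {"datacenter"}, "work_window": (8, 18), "bulk_threshold": 500}
cleaner = {"entitlements": {"datacenter"}, "work_window": (22, 6), "bulk_threshold": 0}

print(decide(cio, "datacenter", 2))      # flag: entitled, but 2 AM is abnormal for the CIO
print(decide(cleaner, "datacenter", 2))  # allow: 2 AM is this user's normal shift
```

Both users hold the same entitlement; only the behavioral context separates the flagged access from the routine one.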

This type of behavioral monitoring has been around for years and has found some success, particularly in the financial sector. Banks and investment firms have employed adaptive access management tools to work with the single sign-on front ends of their websites. You have probably seen them when your bank shows you a confirmation picture first and asks you to set up security questions. What you may not know is that the software also "fingerprints" your system (OS type/version, MAC address, IP location, etc.) and starts building a profile of how you access your account. If you do anything out of the ordinary, it may ask you who your second-grade teacher was even though you presented the correct user ID and password. Try logging into your bank from your mother-in-law's computer in Florida on your next visit and you will most likely have to answer some additional security questions, because we need to ensure it's you.

Buried deep in the latest release of Oracle Entitlements Server (I try not to thump my company's products, but this is the only software I know of that can do this at this point) is the ability to make your enforcement policies context aware. The policies can look at more than just job title and role; they can also consider location, device, time, organization, and so on to make a more informed decision on whether to grant access.

It may be okay that you have privileged access to mission-critical servers to issue a reboot command, but, ya know, we are just not going to allow you to do that if you came in through an external firewall on your iPad. Just not going to happen. You need to be onsite in the data center to do that.

It is particularly helpful when several users have access to the same system but need to be limited in what they can see. I just saw a killer demo a few weeks back where users of a military application see a map of the world showing the locations of current deployments, but the data is filtered so each user can only see resources in their theater of operation. European users see only assets based in Europe, African users see only assets based in Africa, and so on, all controlled centrally with one access policy.
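The filtering pattern behind that demo can be sketched simply. The asset names and theaters here are invented, and the real demo used a centrally managed entitlements policy rather than inline code; the point is one shared data set, one central rule, different views per user.

```python
# Illustrative only: asset names and theaters are invented. One central rule,
# evaluated against each user's context, filters a single shared data set.

ASSETS = [
    {"name": "Unit A", "theater": "Europe"},
    {"name": "Unit B", "theater": "Africa"},
    {"name": "Unit C", "theater": "Europe"},
]

def visible_assets(user_theater, assets=ASSETS):
    """Apply the single central policy: show only assets in the user's theater."""
    return [a["name"] for a in assets if a["theater"] == user_theater]

print(visible_assets("Europe"))  # ['Unit A', 'Unit C']
print(visible_assets("Africa"))  # ['Unit B']
```

Because the rule lives in one place, changing who sees what never requires touching the application code that renders the map.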

Getting to this level of context-aware security is not easy, and it represents what I believe is the final frontier in online security: the ability not only to control access and authorization, but to understand when and how the proper credentials are being used. Remember that the most damaging breaches are by insiders who have proper access.

Read Full Post »

So we get this call from a customer who has been using our identity software for some time now, with a huge complaint that there was a security hole in our software that left them exposed for several years. Needless to say, they were not happy.

Why had we, the vendor, not made them aware of the security hole, sent them a patch, and ensured it was fixed?

After all, are they not paying a not-so-insignificant sum in support costs? The software they have deployed is an older version, but it is still under active support. They demanded we patch their version of the software, as they did not want to go through an upgrade in production, and argued we were bound by the support agreement to do so.

Well, we have been spending the last several weeks trying to show them the error of that approach.

There are several issues here, but first some background. The software they deployed was a slightly older 11g R1 version, delivered in mid-2010. The current release of the software is a newer 11g R2 version. The devil, as they say, is in the details.

To read these version numbers, let's go left to right. The "11" indicates the major version: 11g versus 10g. The second number is the family of software, which rarely changes. The third number is the major release: a "1" here means 11g R1 and a "2" means 11g R2. A new major release usually brings significant changes, including new features and functions. The releases are not interchangeable, and a formal software upgrade procedure is required to move between them.

The fourth number is the sub-release within that software version, with cumulative bug fixes and improvements. The last number is the bundle patch (BP) level for that version.

Our customer was running the first, unpatched release of that particular version of the software (BP0). They spent many hours integrating, developing, and rolling out the software into production. That was in mid-2010. But a flaw was discovered later that year: a shortcut left by a developer to aid testing that would allow someone to circumvent the login process if they knew how. A major hole, yes, for us and for the ex-developer. It was quickly repaired in a patch and rolled into BP2. This was in late 2010.

Which is where the story gets interesting (I hope). We did not broadly announce a security issue found with one customer, as doing so would immediately put the rest of our customers at risk with a zero-day flaw. Instead, we quickly release a tested patch that corrects the issue and notify our support customers through their support contacts to apply it as soon as possible, without tipping off the bad guys. Then we roll the patch into the next bundle patch, in this case BP2.

Bundle patches are collections of patches (the goal is 20 to 60 or so) rolled together and tested to not break the current software. Most of the time they are cumulative. However, our customer chose the path of least resistance (or least resources required) and did not implement a patch process for their production environment, nor test any updates to the software as they were released. Thus they ran for the better part of three years with a major hole in their public website.

It was only when a new project person looked at the release notes for the latest bundle patch (BP4) that they saw this flaw had existed. That is when things got screwy. The customer wanted us to back-port the single patch for the flaw to their BP0 release, as it was still under active support. We recommended they apply the patches up to BP4 to at least benefit from all of the fixes we have implemented over the last three years. They consider that an upgrade and say we are not supporting our product. We are. We fixed the problem over two years ago.

Here is the flaw in their logic. First, if we did the one-off fix, they would have a unique production deployment: no other customer would run that release with a solo patch on it, which would complicate the support effort going forward. Second, the customer would still be flying in production with the initial release of the software. Given an estimated 50 fixes per bundle patch, four bundle patches means roughly 200 things have been fixed and tested together. The customer might fix the one issue they are concerned with, but they would still hit other glitches that have long since been fixed. We would fix one issue, but they would still be 199 fixes behind.

One other quick tidbit: the grace period. As a vendor rolls out an update or patch, the grace period is the time the vendor allots for customers to migrate to the newer version (not a new release). It takes into consideration that it could take up to a year to apply a patch, so the older version is kept on active support. So if BP3 comes out, BP2 falls into the grace period (usually one year) before its active support ceases and customers should move to the newer version. The grace-period clock for BP3 itself starts ticking the day BP4 comes out. So if BP4 comes out within nine months of BP3, BP2's grace period still runs for its remaining three months, ending one year after BP3 was released.
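The rule can be sketched as a small helper. Names, dates, and the one-year window here are illustrative (the post itself says "usually one year"); your vendor's support documentation states the real terms.

```python
from datetime import date, timedelta

# A sketch of the grace-period rule as described here: a bundle patch's
# one-year clock starts the day its successor ships. Names and dates are
# invented; check your vendor's support documentation for the real terms.

GRACE = timedelta(days=365)

def support_status(bp_release_dates, bp, today):
    """Return 'active', 'grace', or 'deprecated' for a given bundle patch."""
    latest = max(bp_release_dates)
    if bp == latest:
        return "active"                        # newest BP is fully supported
    successor_ship = bp_release_dates[bp + 1]  # clock starts when next BP ships
    return "grace" if today <= successor_ship + GRACE else "deprecated"

dates = {2: date(2011, 1, 1), 3: date(2011, 6, 1), 4: date(2012, 3, 1)}
print(support_status(dates, 2, date(2012, 2, 1)))  # grace: within a year of BP3
print(support_status(dates, 2, date(2012, 8, 1)))  # deprecated
print(support_status(dates, 4, date(2013, 1, 1)))  # active
```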

Once out of the grace period, active support for that particular BP ceases. In our customer's case, BP0 was well past the grace period, so technically the lawyers would argue we were not obligated to actively support it. We still actively support the software, but only the latest bundle patch (BP4 in this case) and any BP3 installations living out their grace period. BP0, BP1, and BP2 are considered deprecated patch sets.

So here are the important points and learnings from all of this:

  1. The vendor must supply fixes and patches to the software as bound by the support contract, but it is up to the customer to stay aware of the releases and apply patches in a timely fashion.
  2. All projects must include resources to maintain patch levels.
  3. Bundle patches are usually cumulative and only the latest one needs to be applied. Usually.
  4. When notified of a vendor patch set release, someone on the customer side must invest the time to investigate the bugs and the patches and determine if any apply to the currently deployed stack. If so, it should be applied in a timely fashion.  If it does not address a particular combination of software currently being used, only then can the decision be made to forgo the update.
  5. At a minimum, patch grace periods should be noted (see vendor support documentation).  If the current software falls out of the grace period, support may not be able to help, and the customer may have to apply released bundle patches first if they run into a problem in a deprecated version of the code.
  6. There is a benefit to applying bundle patches: they usually contain several dozen patches that have been tested together, so you avoid running into problems someone else has already hit.
  7. Do not expect the vendor to shout from the roof tops any major security issues fixed. It gives the bad guys too much information on the rest of the customer install base.
  8. Doing nothing year after year will only lead you into trouble.

Remember, this is security and identity software, so you need to make sure patches and updates are reviewed and applied in a timely manner.

Read Full Post »


Happy holidays, all. I apologize for the dormant state of this blog, but it's been crazy trying to keep up with everything. That is a good sign that the identity and security business is stronger than ever. I am already making a New Year's resolution to be more active in the blogosphere.

Never has the focus on identity and security been so intense. One security firm we deal with stated that they believe there are over 10,000 active identity-theft entities out there. These run the gamut from your neighbor's kids trying something after school to organized crime rings and even foreign governments. Recently, it was reported that authorities disrupted a planned cyber attack on US banks, set for early next year, by organized hackers out of Russia.

Which brings me to the point. It was a comment from a CIO presenting at one of our sponsored events on identity, and it rang true for me. He said something to the effect of: "The biggest challenge facing me in the coming months is securing my company's assets from online threats. The need to lower costs has us migrating our assets to the cloud. There, I lose a lot of the security I have built up over the years. All our investments in firewalls, DMZs, certificate servers, centralized directories, all of that is no longer under my control. As I see it, moving forward, the only security I can rely on is identity management."

This is a recurring theme with my clients. As we, as an industry, move to a services-based cloud model, we are going to have to reinvent what we know about security and identity management. We will look more into these issues in the coming weeks as we get ready for the challenges of 2013.

The future's so bright, I have to wear shades…


Read Full Post »


What can I say but “really”?  (My nod to one of my favorite SNL Weekend Update bits).

LinkedIn just had six million of its user account passwords posted on a Russian hacker website. LinkedIn has 160 million business user accounts, so nobody knows how many were actually stolen; it may be just a matter of computational time until all are published. If you have a LinkedIn account, change your password immediately, and keep changing it regularly until LinkedIn has earned your trust back.

But everyone asks: how could this happen? I have a guess. Some yo-yo on the project team decided to save the company a little money. They chose not to salt their password hashes.

For those unsure what a "salt" is, it's a way to add complexity to the hashing process to make the results harder to reverse. Computers are computers, and hash routines are just mathematical formulas: given the same input, you always get the same output.

To set up a brute-force attack, just use a computer to build a table of hash outputs for huge numbers of candidate passwords. Steal the hashed passwords from LinkedIn, do a reverse lookup against your table, and voilà, you know the user's password, particularly if it was a common phrase or word. So far, reports say the attackers have been able to reverse 3.6 million of the passwords this way.

But what if I tacked "DSON" onto the front of every password before I hashed it? Only I, as the application owner, know to do this. It invalidates the lookup table, because its values were not generated with this added "salt" prefix. As a hacker, I would now need a lookup table for every possible four-letter prefix; tough to do. Today, modern secured systems add a 48- to 128-bit salt to their hashing practices.
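Here is a minimal sketch of the idea, using a random per-user salt rather than a fixed prefix. Plain SHA-256 appears only to show the salt's effect; a production system should use a deliberately slow, purpose-built scheme such as bcrypt or PBKDF2.

```python
import hashlib
import os

# Minimal sketch of salting. Plain SHA-256 is used only to show the salt's
# effect; real systems should use a slow, purpose-built scheme (bcrypt,
# PBKDF2, scrypt) rather than a single fast hash.

def hash_password(password, salt=None):
    """Hash with a random 128-bit salt; pass the stored salt back in to verify."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    return salt, digest

s1, h1 = hash_password("password123")
s2, h2 = hash_password("password123")
print(h1 == h2)  # False: same password, different salts, different digests,
                 # so one precomputed reverse-lookup table no longer works

# Verification recomputes the digest using the salt stored with the account:
print(hash_password("password123", s1)[1] == h1)  # True
```

Because every account gets its own salt, an attacker must mount a separate brute-force run per account instead of one table lookup across the whole stolen database.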

But here is where I think LinkedIn got in trouble, because I see this come up during procurement on security projects all the time. The added salt requires more storage space than the naked password hash and shaves a little performance off the system. So some make the extremely silly decision to drop the salt and hash the naked password with the out-of-the-box routines. Let's save money, everybody.

But last time I checked, disk space was extremely inexpensive and the added performance hit was near zero, while the exposure from cracked passwords remains extremely expensive.

So check whether your systems salt their password hashes. If not, I strongly recommend you start a project to migrate users to salted hashes going forward. Unless, of course, you like having your customers' passwords on display on a Russian hack site.

Read Full Post »


I try not to make this blog an outpost for plugging my company's wares, but I want to make an exception in this case, as I think many can benefit from it.

I got word yesterday that our long-awaited data masking templates for Oracle E-Business Suite 12.1.3 were released to production.

Okay, I hear you whimper.  Why is that so important?

Because unmasked test data is one of the easiest targets for thieves to hit. And it's so simple to plug this hole.

For those who need the short background version: companies go to great pains to set up identity management and secure their data in production, rich with credit card numbers, SSNs, PII, etc. ERP applications like Oracle E-Business Suite 12.1.3 capture and store a great deal of this information in their databases and do a lot to make sure it is secure in production.

But many of my customers think nothing of taking a snapshot of the database for QA and testing purposes. Maybe once or twice a month they clone the database into testing, because "nothing tests our new code better than our production data." They now have a fresh copy of production in a less secured testing environment.

Often, these test/QA folks are with the development team (maybe even outsourced) and often have privileged accounts in the database and the application so they can test full functionality and make changes in the test environment. Your production site might be under strong lock and key, but it would take a dev tester mere minutes to clone yet another copy of sensitive data onto an SD card they brought with them to work that day.

Data masking is just that: as the production data is cloned, key elements of the database are masked, obfuscating the actual data. Masking may randomize users' SSNs, exchange phone numbers and addresses between user records, and so on.

Masking is a little more involved than just scrambling data. You still need the data set to behave as it does in production. You cannot simply swap everyone's zip codes if your software uses them to report by region: an 07405 zip code with a California address will not test cleanly. Plus, you need to track the changes to the database so you can repeat them consistently, so refreshes of the masked data behave in a like and similar fashion.
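Those two requirements, format-preserving randomization and repeatable refreshes, can be sketched in a toy example. All names here are invented, and the real Oracle Data Masking Pack also preserves the cross-table relationships this ignores.

```python
import random

# A toy sketch of format-preserving, repeatable masking. All names are
# invented; the real Oracle Data Masking Pack also preserves the cross-table
# relationships that this ignores.

def mask_records(records, seed=42):
    rng = random.Random(seed)  # fixed seed: every refresh masks the same way
    ssn_map = {}               # the same real SSN always maps to one fake SSN
    masked = []
    for rec in records:
        if rec["ssn"] not in ssn_map:  # randomize, but keep the NNN-NN-NNNN shape
            ssn_map[rec["ssn"]] = "%03d-%02d-%04d" % (
                rng.randint(0, 999), rng.randint(0, 99), rng.randint(0, 9999))
        masked.append({**rec, "ssn": ssn_map[rec["ssn"]]})
    # swap phone numbers BETWEEN records, so every value stays a plausible phone
    phones = [r["phone"] for r in masked]
    rng.shuffle(phones)
    for rec, phone in zip(masked, phones):
        rec["phone"] = phone
    return masked

prod = [{"name": "A. Smith", "ssn": "123-45-6789", "phone": "201-555-0101"},
        {"name": "B. Jones", "ssn": "987-65-4321", "phone": "415-555-0102"}]
for rec in mask_records(prod):
    print(rec["name"], rec["ssn"], rec["phone"])
```

The fixed seed is what makes refreshes behave "in a like and similar fashion": re-cloning production and re-masking yields the same substitutions.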

So what's the big deal with the announcement? The challenge with masking, particularly in complex ERP systems, is knowing where the sensitive data lives and how best to mask it for protection while still having it behave properly for testing. ERP systems have their own ways of building relationships between the tables they are built on, and these tables can run into the hundreds. The good news is the EBS development team worked with our masking team to create templates specifically for the EBS 12.1.3 suite. This will significantly compress the time to get a masking program in place and plug that security hole. You could leave a copy of your masked production data on an SD card on the counter at Starbucks and it would be as safe as can be.

So if your company runs EBS 12 and runs copies of production data in testing, you should look into these new data masking templates. And I know there are a lot of you. You can find out more about the Oracle Data Masking Pack with Oracle Enterprise Manager. You will need to purchase a license for the Data Masking Pack, but the EBS templates are available free in the following patch:

Read Full Post »

Older Posts »