
Posts Tagged ‘Identity’

In the maturity model of identity management, we have been through many stages of evolution and may be on the brink of the final stage – context-aware identity management.  First it was centralized authorization (directory), then authentication (access management, SSO, STS, federation), identity management (provisioning), roles (RBAC), auditing/regulatory (compliance and analytics), privileged account management, and finally centralized policy management (entitlements servers).

The final frontier, once you have mastered all of the above, is context-aware identity management.  The user may have all the rights and privileges to access the resources, but they are doing so in an abnormal way.  Call it behavioral security.  My classic example: a company’s CIO may have access to a sensitive data room and may even have a badge that grants access to the data center floor, but one has to ask why the CIO is entering the sensitive data center at 2 AM.  A member of the cleaning staff, however, would have the same privileges and would be expected in at 2 AM to do their cleaning.

So it’s all about context: having the right credentials, using them in the manner expected, and flagging when they are used atypically.  As of this writing, Bradley Manning is awaiting sentencing for releasing 700,000 classified documents to WikiLeaks.  What many miss in this sad adventure is that Pvt. Manning did not “hack” his way into the systems containing this information.  He was hired/recruited, trained, authorized to have sensitive access, and had his access vouched for in several compliance reviews.  The only question nobody asked was why a low-level private with clearance was downloading hundreds of thousands of files at one time.  His behavior, given his access level, should have sent up warning signs.

This type of behavioral monitoring has been around for years and has found some success, particularly in the financial sector.  Banks and investment firms have employed adaptive access management tools to work with the single sign-on front ends of their web sites. You have probably seen them when your bank shows you a confirmation picture first and asks you to set up security questions.  What you may not know is that the software also “fingerprints” your system (OS type/version, MAC address, IP location, etc.) and starts building a profile of how you access your account. If you do anything out of the ordinary, it may ask you who your second grade teacher was even though you presented the correct user ID and password. Try logging into your bank from your mother-in-law’s computer in Florida on your next visit and you will most likely have to answer some additional security questions, because we need to ensure it’s you.
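To make that concrete, here is a minimal sketch (in Python, with invented attribute names and a dictionary standing in for profile storage; this is not any vendor's actual product) of how device fingerprinting plus step-up authentication hangs together:

    import hashlib

    known_fingerprints = {"alice": set()}  # fingerprints seen on prior confirmed logins

    def fingerprint(os_version: str, mac: str, ip_geo: str) -> str:
        """Hash the observable device traits into a single fingerprint string."""
        return hashlib.sha256(f"{os_version}|{mac}|{ip_geo}".encode()).hexdigest()

    def login(user: str, password_ok: bool, os_version: str, mac: str, ip_geo: str) -> str:
        if not password_ok:
            return "DENY"
        fp = fingerprint(os_version, mac, ip_geo)
        if fp in known_fingerprints.get(user, set()):
            return "ALLOW"      # matches an established profile
        return "STEP_UP"        # correct credentials, unfamiliar device:
                                # time to ask who your second grade teacher was

    # First login from a new machine triggers step-up; once confirmed, remember it.
    print(login("alice", True, "Windows 11", "aa:bb:cc:dd:ee:ff", "Boston, MA"))  # STEP_UP
    known_fingerprints["alice"].add(fingerprint("Windows 11", "aa:bb:cc:dd:ee:ff", "Boston, MA"))
    print(login("alice", True, "Windows 11", "aa:bb:cc:dd:ee:ff", "Boston, MA"))  # ALLOW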

So buried deep in the latest release of Oracle Entitlements Server (I try not to thump my company’s products, but this is the only software I know of that can do this at this point) is the ability to extend your enforcement policies to make them context aware. The enforcement policies can look at more than just job title and role; they can also look at location, device, time, organization, etc. to make a more informed decision on whether to grant access.
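For illustration only, here is a rough sketch of what such a context-aware decision looks like. This is not the Oracle Entitlements Server API; the roles, locations, and time window are assumptions I made up for the example:

    from datetime import time

    def authorize(request: dict) -> bool:
        """Grant access only when role, location, device, and time all line up."""
        has_role = request["role"] in {"dba", "datacenter_ops"}
        onsite   = request["location"] == "datacenter"
        managed  = request["device"] in {"managed_workstation", "console"}
        in_hours = time(7, 0) <= request["time"] <= time(19, 0)
        return has_role and onsite and managed and in_hours

    # Same credentials, different context, different decision.
    print(authorize({"role": "dba", "location": "datacenter", "device": "console", "time": time(14, 0)}))  # True
    print(authorize({"role": "dba", "location": "external",   "device": "ipad",    "time": time(2, 0)}))   # False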

It may be okay that you have privileged access to mission-critical servers to issue a reboot command, but, ya know, we are just not going to allow you to do that if you came in through an external firewall using your iPad.  Just not going to happen. You need to be onsite in the data center to do that.

It is particularly helpful when several users have access to the same system but need to be limited in what they can see. I saw a killer demo a few weeks back where users of a military application can see a map of the world showing the location of current deployments, but the data is filtered so you can only see resources in your theater of operation.  Europeans can only see assets based in Europe, Africans only see assets based in Africa, etc.  All controlled centrally with one access policy.
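A rough sketch of the same idea, with invented asset and user data (not the demoed application): one central policy function trims the result set to the caller's theater.

    ASSETS = [
        {"name": "Unit A", "theater": "EUROPE"},
        {"name": "Unit B", "theater": "AFRICA"},
        {"name": "Unit C", "theater": "EUROPE"},
    ]

    def visible_assets(user: dict) -> list:
        """Central policy: everyone may query the map, but rows are filtered by theater."""
        return [a for a in ASSETS if a["theater"] == user["theater"]]

    print(visible_assets({"id": "analyst1", "theater": "EUROPE"}))  # only the Europe-based assets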

Getting to this level of context-aware security is not easy and represents what I believe is the final frontier in online security: the ability to not only control access and authorization, but to understand when and how the proper credentials are being used.  Remember that the most damaging breaches are by insiders who have proper access.


Read Full Post »

So we get this call from a customer who has been using our identity software for some time now, with a huge complaint that there was a security hole in our software that left them exposed for several years.  Needless to say, they were not happy.

Why had we, the vendor, not made them aware of the security hole, sent them a patch, and ensured it was fixed?

After all, are they not paying a not-insignificant sum in support costs?  The software they have deployed is an older version, but it is still under active support.  They demanded we patch their version of the software, as they did not want to go through an upgrade in production, and argued we were bound by the support agreement to do so.

Well, we have been spending the last several weeks trying to show them the error of that approach.

There are several issues here, but first some background.  The software they deployed was a slightly older version (v11.1.1.3.0) delivered in mid-2010.  The current release of the software is 11.1.2.1.1.  The devil, as they say, is in the details.

To read these numbers, let’s go left to right.  The “11” indicates the major version, 11g (versus 10g), so both are 11g releases.  The second number, “1”, is the family of software, which rarely changes.  The third number (“1” in the first, “2” in the second) is the major release, so the first is “11g R1” and the second is “11g R2”.  This usually indicates significant changes between the versions, including new features, functions, etc.  They are not interchangeable and would require a formal software upgrade procedure to implement the new features.

The fourth number is the sub-release within that software version, with cumulative bug patches and improvements to the software.  The last number is the bundle patch (BP) level for that version.
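For the curious, here is a small sketch that reads such a version string the same way; the field labels are my own, not official Oracle terminology:

    def parse_version(version: str) -> dict:
        """Split an Oracle-style five-part version string, left to right."""
        major, family, release, sub_release, bundle_patch = version.split(".")
        return {
            "major":        major,          # e.g. "11" -> 11g
            "family":       family,         # rarely changes
            "release":      release,        # "1" -> 11g R1, "2" -> 11g R2
            "sub_release":  sub_release,    # cumulative fixes within the release
            "bundle_patch": bundle_patch,   # BP level
        }

    print(parse_version("11.1.1.3.0"))  # the customer's deployment: 11g R1, BP0
    print(parse_version("11.1.2.1.1"))  # the current release at the time of writing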

Our customer was running 11.1.1.3.0, which indicates this was the first, unpatched release of that particular version of the software. They spent many hours integrating, developing, and rolling the software out into production.  That was in mid-2010.  But a flaw was discovered later that year: a shortcut left by a developer to aid testing that would allow someone to circumvent the login process if they knew how.  A major hole, yes, for us and for the ex-developer.  It was quickly repaired in a patch and rolled into BP2 (11.1.1.3.2). This was in late 2010.

Which is where the story gets interesting (I hope).  We did not broadly announce a security issue we found with one customer, as this would immediately put the rest of our customers at risk with a zero-day flaw. We do quickly release a tested patch that corrects the issue and notify our support customers through their support contacts to apply the patch as soon as possible, without tipping off the bad guys. Then we roll the patch into the next bundle patch, in this case BP2.

Bundle patches are collections of patches (the goal is 20 to 60 or so) rolled together and tested so they do not break the current software. Most of the time they are cumulative.  However, our customer chose the path of least resistance (or least resources required) and did not implement a patch process for their production environment, nor did they test any updates to the software we released. Thus they ran the better part of three years with a major hole in their public website.

It was only when a new person on the project looked at the notes for the latest bundle patch release (BP4, or 11.1.1.3.4) that they saw this flaw.  That is when things got screwy.  The customer wanted us to back-port the one patch for the flaw to 11.1.1.3.0, as it was still under active support.  We recommended they apply the patches up to BP4 to at least benefit from all of the fixes we have implemented over the last three years.  They consider that an upgrade and say we are not supporting our product. We are.  We fixed the problem over two years ago.

Here is the flaw in their logic.  First, if we did do the one-off fix, they would now have a unique production deployment. No other customer would have the 11.1.1.3.0 release with a solo patch on it, so it would complicate the support effort going forward. Second, the customer would still be flying in production with the initial release of the software.  Given an estimated 50 fixes per bundle patch, four bundle patches means roughly 200 things have been fixed and tested together.   The customer may fix the one issue they are concerned with in production, but will still run into other software glitches that have already been fixed. We would fix one issue, but they would still be 199 fixes behind.

One other quick tidbit: the grace period.  As a vendor rolls out an update or patch, the grace period is the time the vendor allots for its customers to migrate to the newer version (not a new release).  This takes into consideration that it could take up to a year to apply a patch, so the older version is still kept on active support.  So if BP3 comes out, BP2 falls into the grace period (usually one year) before active support ceases and customers should move to the newer version. Note that if BP4 comes out within nine months, BP2’s grace period continues for another three months, i.e. one year after BP3 was released. BP3’s own grace period clock starts ticking the day BP4 comes out.
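A small sketch of that clock, using made-up release dates, may make the rule easier to follow: a bundle patch stays in active support until one year after the next bundle patch ships.

    from datetime import date, timedelta

    GRACE = timedelta(days=365)

    # Illustrative dates only; not the actual BP release history.
    bp_release_dates = {
        "BP2": date(2010, 11, 1),
        "BP3": date(2011, 8, 1),
        "BP4": date(2012, 5, 1),   # BP4 ships nine months after BP3
    }

    def grace_period_end(next_bp: str) -> date:
        """A BP's clock starts ticking the day its successor is released."""
        return bp_release_dates[next_bp] + GRACE

    print(grace_period_end("BP3"))  # BP2 support ends one year after BP3 shipped
    print(grace_period_end("BP4"))  # BP3's clock starts the day BP4 comes out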

Once out of the grace period, active support for that particular BP ceases.  In our customer’s case, BP0 was well past the grace period, so technically the lawyers would argue we were not obligated to actively support it.   We still actively support the 11.1.1.3.x software, but only the latest release (BP4 in this case) and any BP3 software living out its grace period.  BP0, BP1, and BP2 would be considered deprecated patch sets.

So here are the important points and learnings from all of this:

  1. The vendor must supply fixes and patches to the software as bound by the support contract, but it is up to the customer to stay aware of the releases and apply patches in a timely fashion.
  2. All projects must include resources to maintain patch levels.
  3. Bundle patches are usually cumulative and only the latest one needs to be applied. Usually.
  4. When notified of a vendor patch set release, someone on the customer side must invest the time to investigate the bugs and the patches and determine if any apply to the currently deployed stack. If so, it should be applied in a timely fashion.  If it does not address a particular combination of software currently being used, only then can the decision be made to forgo the update.
  5. At a minimum, patch grace periods should be noted (see vendor support documentation).  If the current software falls out of the grace period, support may not be able to help, and the customer may have to apply released bundle patches first if they run into a problem in a deprecated version of the code.
  6. There is a benefit to applying bundle patches: they usually contain several dozen patches that have been tested together, so one avoids running into problems someone else has already hit.
  7. Do not expect the vendor to shout any major security fixes from the rooftops. It gives the bad guys too much information about the rest of the customer install base.
  8. Doing nothing year after year will only lead you into trouble.

Remember, this is security and identity software, so you need to make sure patches and updates are reviewed and applied in a timely manner.

Read Full Post »

Got a question from a customer about whether it was a good idea to drop their LDAP directory in favor of a NoSQL repository.  For what they needed to do, they felt it freed them to have a more flexible architecture.  This follows another client’s request; they wished our directory products were based on a NoSQL data repository.

For those not keeping up with data repository trends, SQL has dominated the standards as a way to retrieve data from a relational database (dumb statement of the month). In theory, it abstracts the underlying repository so SQL could be used to pull information out of any vendor’s compliant product.  It works great in joining information together, such as transactions, from the underlying tables.

But not all data fits neatly into this SELECT, JOIN, table paradigm, including directory objects.  A more object-oriented approach to data storage is the right way to go.  inetOrgPerson, the de facto standard for how to store information about a user in a directory, is actually made up of several object classes that inherit from one another. To paraphrase Genesis, top begat person begat organizationalPerson begat inetOrgPerson, or something like that.  To build an SQL statement that could retrieve or store all the information contained in the inetOrgPerson object would be complicated, requiring at least three joins.

The SQL statement would be complex to build and would require the SQL engine to decompose it during execution. Overhead to build the statement, overhead to process it.  That kills performance, which is the enemy of a good directory.

Instead, the developer should just be able to build the inetOrgPerson, trigger its serialization into the data repository, and let the database figure out how to store the information.  Much simpler and, most importantly for a directory, much faster.
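As a rough illustration (not OUD's actual storage format), here is what the serialize-the-whole-object approach looks like, with a plain dictionary standing in for the NoSQL repository:

    import json

    entry = {
        "dn": "uid=jdoe,ou=people,dc=example,dc=com",
        "objectClass": ["top", "person", "organizationalPerson", "inetOrgPerson"],
        "cn": "John Doe",
        "sn": "Doe",
        "uid": "jdoe",
        "mail": "jdoe@example.com",
        "telephoneNumber": "+1 555 0100",
    }

    store = {}                                   # stand-in for a NoSQL repository
    store[entry["dn"]] = json.dumps(entry)       # serialize the whole object in one write

    fetched = json.loads(store["uid=jdoe,ou=people,dc=example,dc=com"])
    print(fetched["mail"])                       # one keyed read, no joins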

So to get back to the original question, should they pull the LDAP directory out and go with a NoSQL repository directly?

My answer is the typical consulting “it depends”.  If this is a stand-alone application and all you need to do is store some key-value pairs, it will be hard to argue against going with the NoSQL repository.  But I would suggest you think long and hard about dropping the LDAP directory layer.

For two reasons. One, LDAP brings more than just data storage.  It has years of standards built into it, such as inetOrgPerson and the ACI rules and policies that govern access to the objects.  All of that would have to be recreated at the database level. Two, many of the newer LDAP directories out there are already adopting object-oriented repositories underneath. One of the main reasons my company, Oracle, came out with a new directory, Oracle Unified Directory (OUD), was to take advantage of the embedded Berkeley DB Java Edition technology.  As OUD is written in pure Java, this made the underlying repository, and thus the overall directory, faster, more robust, and highly scalable.

The good news is you can take advantage of this NoSQL storage approach but still treat the directory as an LDAP repository, easing migration and upgrade costs.

Read Full Post »

Cool Graphic on Data Breaches

Cool little information graphic on recent information breaches.  

I know several of these companies and the troubles they ran into. One recurring common trait among them all is that they did not have a security mentality before the breaches occurred.  They trusted too many people, both internally and externally. Some had even bought our security software and had not implemented it nearly a year later.

Cool graphic, disturbing information.

Read Full Post »

Back again finally.  Things are as busy as ever here.

I was at a conference recently and had the CIO of a fairly large insurance company make an observation about moving applications to the cloud that I think hits the nail on the head regarding a major problem in cloud adoption.

He said “one thing I have come to realize is that when I move my application to the cloud, all of the security of my networks and firewalls that I have invested in over the years disappears.  The only defense I have left is identity and data security in the application”.

This drives right to a major issue facing migration to the cloud.  Running applications in someone else’s data center is not new (we just gave it a fancy title “cloud”).  The major factor holding back the adoption of the cloud by companies today is controlling authentication and authorization remotely.

Not many CIOs feel comfortable putting all of the user information and security policies on equipment that is not located inside the company and under the direct control of company employees.  CIOs who rely on lawyers and contracts with host providers are setting themselves up to be looking for work.  Even if you can sue the pants off your cloud provider, the basic problem is that a breach would have occurred and your people were not involved at the security level.

Therefore, the solution is quite obvious.  Identity and security need to be delivered as a service to the cloud instance. And it needs to be rock solid.  The security service needs to be maintained on internally hosted platforms, and applications need to be modified to work with external security and policy services.
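As a sketch of the shape this takes (the endpoint and payload are hypothetical assumptions, not a real product API), the cloud-hosted application simply asks the internally hosted policy service for a decision and enforces the answer:

    import json
    import urllib.request

    POLICY_SERVICE = "https://identity.example.internal/authorize"  # hypothetical internal endpoint

    def is_permitted(user: str, action: str, resource: str) -> bool:
        """Delegate the access decision to the externally hosted identity service."""
        payload = json.dumps({"user": user, "action": action, "resource": resource}).encode()
        req = urllib.request.Request(POLICY_SERVICE, data=payload,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:      # the decision is made off-box
            return json.load(resp).get("decision") == "PERMIT"

    # The application only enforces the answer; the policies and user data
    # stay on the internally hosted service.
    if __name__ == "__main__":
        if is_permitted("jdoe", "read", "claims/12345"):
            print("serving record")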

This is an evolutionary step that will make adoption of the cloud happen on a large scale.  Just as desktop applications needed to be rewritten for the client-server paradigm, then morphed into web-based models, and now into mobile apps, applications will have to adapt and evolve to an external security model delivered as a service, versus being embedded or co-located with the application.

Read Full Post »

In case you missed it yesterday, an event happened that you will probably tell your grandkids “I remember when…”.

Yesterday, IANA (that’s the Internet Assigned Numbers Authority; they do exist) allocated the last two publicly available IPv4 address blocks to APNIC (the Asia Pacific organization that metes out IP addresses in that region).  While the announcement below seems to be routine and only for those really into IP addressing, it does mark a milestone for the Internet – IPv4, as predicted, is running out of addresses.  All hail IPv6!

These last two address blocks, 39/8 and 106/8, represent the last freely available /8 address blocks under IPv4 and trigger a provision among the addressing community to work together to distribute the last five /8 blocks remaining.  When they are gone, that’s it for IPv4; no more IPs to give out.
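For a sense of scale (simple arithmetic, nothing vendor-specific), a /8 block fixes the first 8 of 32 bits and leaves the remaining 24 free:

    addresses_per_slash8 = 2 ** (32 - 8)
    print(addresses_per_slash8)        # 16,777,216 addresses in each of 39/8 and 106/8
    print(5 * addresses_per_slash8)    # roughly 84 million left in the final five blocks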

But don’t worry, there are still IPs available under current IPv4 allocations and they will still be available for a while.  But it does mean the Internet is crossing over from IPv4 to IPv6.  In a few years’ time, the net will have to be IPv6 to accommodate everyone who wants to use it.

So what does this mean to us in security?  Well, for one, our customers are going to be forced to move to pure IPv6 networking in the next few years.  One of our largest clients, Comcast, even has a website up devoted to letting everyone know of their progress in this transition:  http://www.comcast6.net/

As for Oracle’s middleware, all of it can be reached via IPv6.  That does not mean all of the middleware is IPv6 yet (we are working on that), but all outward-facing interfaces, such as web proxies, can handle the dual stack. There are some IPv4-only interfaces, but they are for local services and would appear in local or closely linked network segments.  For more details, see the Oracle Fusion Middleware Administration Guide.
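If you want a quick way to check whether a given host is publishing an IPv6 (AAAA) address at all, i.e. whether the v6 side of a dual stack is reachable by name, something like this works (the hostname is just an example):

    import socket

    def has_ipv6_address(host: str) -> bool:
        """Return True if the host resolves to at least one IPv6 address."""
        try:
            return bool(socket.getaddrinfo(host, None, socket.AF_INET6))
        except socket.gaierror:
            return False

    print(has_ipv6_address("www.comcast6.net"))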

But it does mean customers will be asking for IPv6 as a checkbox in their evaluation processes.

Here is the text of the announcement yesterday, which can be found at https://www.apnic.net/publications/news/2011/delegation

Two /8s allocated to APNIC from IANA

Published on: Tuesday, 1 February 2011

Dear Colleagues

The information in this announcement is to enable the Internet community to update network configurations, such as routing filters, where required.

APNIC received the following IPv4 address blocks from IANA in February 2011 and will be making allocations from these ranges in the near future:

  • 39/8
  • 106/8

Reachability and routability testing of the new prefixes will commence soon. The daily report will be published on the RIPE NCC Routing Information Service.

Please be aware, this will be the final allocation made by IANA under the current framework and will trigger the final distribution of five /8 blocks, one to each RIR under the agreed “Global policy for the allocation of the remaining IPv4 address space”.

After these final allocations, each RIR will continue to make allocations according to their own established policies.

APNIC expects normal allocations to continue for a further three to six months. After this time, APNIC will continue to make small allocations from the last /8 block, guided by section 9.10 in “Policies for IPv4 address space management in the Asia Pacific region”. This policy ensures that IPv4 address space is available for IPv6 transition.

It is expected that these allocations will continue for at least another five years.

APNIC reiterates that IPv6 is the only means available for the sustained ongoing growth of the Internet, and urges all Members of the Internet industry to move quickly towards its deployment.

Read Full Post »
