Posts Tagged ‘Oracle’

In the maturity model of identity management, we have been through many stages of evolution and may be on the brink of the final stage – context-aware identity management.  First it was centralized authorization (directory), authentication (access management, SSO, STS, federation), identity management (provisioning), roles (RBAC), auditing/regulatory (compliance and analytics), privileged account management, and finally centralized policy management (entitlements servers).

The final frontier, once you have mastered all of the above, is context-aware identity management.  The user may have all the rights and privileges to access the resources, but they may be doing so in an abnormal way.  Call it behavioral security.  My classic example: a company’s CIO may have access to a sensitive data room, and may even have a badge that grants access to the data center floor, but one has to ask why the CIO is entering the sensitive data center at 2 AM.  A member of the cleaning staff, however, would have the same privileges and would be expected in at 2 AM to do their cleaning.

So it’s all about context: having the right credentials, using them in the expected manner, and flagging when they are used atypically.  As of this writing, Bradley Manning is awaiting sentencing for releasing 700,000 classified documents to WikiLeaks.  What many miss in this sad adventure is that Pvt. Manning did not “hack” his way into the systems containing this information.  He was hired/recruited, trained, authorized to have sensitive access, and had his access vouched for in several compliance reviews.  The only question nobody asked was why a low-level private with clearance was downloading hundreds of thousands of files at one time.  His behavior, given his access level, should have sent up warning signs.
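To make the idea concrete, here is a minimal sketch (in Python, with made-up baselines and thresholds – no real monitoring product) of the kind of volume check that would have flagged that download pattern:

```python
from collections import defaultdict

# Hypothetical per-user baseline: typical documents retrieved per day,
# which a real system would learn from historical access logs.
BASELINES = defaultdict(lambda: 50)  # assume ~50 docs/day is normal

def is_anomalous(user: str, docs_downloaded_today: int, factor: float = 10.0) -> bool:
    """Flag a user whose download volume exceeds their baseline by `factor`.

    The credentials may be perfectly valid; the *behavior* trips the alarm.
    """
    return docs_downloaded_today > BASELINES[user] * factor

print(is_anomalous("pvt_with_clearance", 700_000))  # True – hundreds of thousands of files
print(is_anomalous("typical_analyst", 45))          # False – within baseline
```

The point is not the arithmetic but where the check sits: after authentication and authorization have both already succeeded.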

This type of behavioral monitoring has been around for years and has found some success, particularly in the financial sector.  Banks and investment firms have employed adaptive access management tools to work with the single sign-on front ends to their websites. You have probably seen them when your bank shows you a confirmation picture first and asks you to set up security questions.  What you may not know is that the software also “fingerprints” your system (OS type/version, MAC address, IP location, etc.) and starts building a profile of how you access your account. If you do anything out of the ordinary, it may ask you who your second grade teacher was even though you presented the correct user ID and password. Try logging into your bank from your mother-in-law’s computer in Florida when you visit next, and you will most likely have to answer some additional security questions, because we need to ensure it’s you.
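The fingerprinting side reduces to hashing coarse device attributes and requiring step-up authentication when the hash is unfamiliar. A sketch, with invented attribute names (real products use many more signals):

```python
import hashlib

def fingerprint(os_version: str, ip_location: str, browser: str) -> str:
    """Reduce coarse device attributes to a comparable fingerprint hash."""
    raw = "|".join([os_version, ip_location, browser])
    return hashlib.sha256(raw.encode()).hexdigest()

def step_up_required(known_fingerprints: set, current: str) -> bool:
    """A correct password alone is not enough from an unrecognized device."""
    return current not in known_fingerprints

home = fingerprint("Windows 10", "Boston, MA", "Firefox/115")
known = {home}

florida = fingerprint("macOS 13", "Tampa, FL", "Safari/16")
print(step_up_required(known, home))     # False – recognized device
print(step_up_required(known, florida))  # True – ask about the second grade teacher
```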

So buried deep in the latest release of Oracle Entitlements Server (I try not to thump my company’s products, but this is the only software I know of that can do this at this point) is the ability to extend your enforcement policies to make them context aware. The enforcement policies can look at more than just job title and role; they can also look at location, device, time, organization, etc. to make a more informed decision on whether to grant access.
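The shape of such a policy is easy to sketch. This is illustrative Python with invented attribute names, not the Oracle Entitlements Server API – it just shows role being necessary but not sufficient:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    role: str
    location: str   # e.g. "datacenter", "external"
    device: str     # e.g. "workstation", "ipad"
    hour: int       # 0-23, local time

def permit_reboot(req: AccessRequest) -> bool:
    """Context-aware decision: the admin role alone is not enough.

    The request must also originate onsite, from a managed workstation,
    during normal operating hours.
    """
    return (
        req.role == "server_admin"
        and req.location == "datacenter"
        and req.device == "workstation"
        and 6 <= req.hour <= 22
    )

print(permit_reboot(AccessRequest("server_admin", "datacenter", "workstation", 10)))  # True
print(permit_reboot(AccessRequest("server_admin", "external", "ipad", 10)))           # False
```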

It may be okay that you have privileged access to mission-critical servers to issue a reboot command, but, ya know, we are just not going to allow you to do that if you came in through an external firewall using your iPad.  Just not going to happen. You need to be onsite in the data center to do that.

It is particularly helpful when several users have access to the same system but need to be limited in what they can see. I saw a killer demo a few weeks back where users of a military application can see a map of the world showing the locations of current deployments, but the data is filtered so you can only see resources in your theater of operation.  Europeans can only see assets based in Europe, Africans only see assets based in Africa, etc.  All controlled centrally with one access policy.
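The filtering itself is a single rule applied to everyone, parameterized by the user's attribute. A sketch with hypothetical asset records:

```python
# Hypothetical asset records; field names are invented for illustration.
ASSETS = [
    {"name": "Unit Alpha", "theater": "europe"},
    {"name": "Unit Bravo", "theater": "africa"},
    {"name": "Unit Charlie", "theater": "europe"},
]

def visible_assets(user_theater: str, assets=ASSETS):
    """One central policy: every user sees only their own theater's assets."""
    return [a["name"] for a in assets if a["theater"] == user_theater]

print(visible_assets("europe"))  # ['Unit Alpha', 'Unit Charlie']
print(visible_assets("africa"))  # ['Unit Bravo']
```

One policy, one codepath; the data the user sees differs only because the context attribute differs.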

Getting to this level of context-aware security is not easy, and it represents what I believe is the final frontier in online security.  It’s the ability to not only control access and authorization, but to understand when and how the proper credentials are being used.  Remember that the most damaging breaches are by insiders who have proper access.


Read Full Post »


An interesting question came up on an internal database security alias that I thought should be shared.

The question, posed on behalf of a database customer, was around seeding random routines.  Was it more secure to provide a seed to the random number generator routines or not?  And if seeding the routines was more secure, what new security issues would be introduced in storing the seed values used?   To seed or not to seed, that is the question.

In this particular case, the answer is NOT to provide a seed to the database random routines in production.  Seeding is similar to the cryptographic concepts of salts and even nonces, but within databases it is used slightly differently.  In the Oracle DB world, generating random numbers is the domain of the DBMS_RANDOM package.  Its documentation specifically states it should not be used for cryptography.

Like salts and nonces, a seed is used to add some increased entropy to the generation of random values.  Computers, for all their advanced capabilities, are still pseudo-random generators, not true random generators.  Given the same input, they will generate the same output.  So to add some additional complexity to the random number generation, to make it appear more like a true random number, the DBMS_RANDOM routines allow the developer to “seed” any random number calculation. So one would think adding a seed to the generation would make the overall system more secure, because the seed would enhance the randomness of the pseudo-random generation.

But by supplying this seed to the calculation, the developer now introduces an added security complexity: where to store the “seed” securely between calculations? It is in effect a secret key if used (inappropriately, I might add) to encrypt or protect information in the database.  Even with the documentation warning against this, we have seen it done again and again.

What most developers miss is that DBMS_RANDOM is a random package, not an encryption package.  If one does not supply a seed to the random number generator, the routine creates a seed on its own, based on user ID, date, and process ID.  Not a truly random seed, but it does add some entropy to the random generation.

In fact, the reason the routines allow the code to pass a seed instead of generating one is to allow testing.  By passing the same seed to the random number generator, you get the same random number back, which allows testing across runs.  As long as the seed remains fixed (non-null), the random number generator will pass back the same number. So the thought that adding a seed improves randomness is exactly backwards: a fixed seed pins the generator to producing the same number, not a random one.  Plus, you have the security problem of where to store the seed.
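The same behavior is easy to demonstrate outside the database. This Python sketch (standing in for DBMS_RANDOM, which is PL/SQL) shows a fixed seed yielding an identical “random” sequence every time:

```python
import random

def draw(seed):
    """Return the first three 'random' numbers from a freshly seeded generator."""
    rng = random.Random(seed)
    return [rng.randint(0, 99) for _ in range(3)]

# A fixed seed pins the generator: every call yields the exact same sequence.
print(draw(42) == draw(42))  # True – reproducible: good for tests, bad for secrecy

# Seeded with None, Python falls back to OS entropy, so separate
# generators will almost certainly disagree.
print(draw(None) == draw(None))
```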

So the answer is NOT to seed the routine in DBMS_RANDOM, but to let it generate its own “random” seed when called. Remember, this discussion is about this one random number generation routine in an Oracle database package and should not be applied to other random number generators; be mindful of each routine’s seeding documentation.    Also, be aware that if you are using random numbers for cryptographic encryption, you need to be sure the random routines are strong enough to ensure the resulting security is trustworthy.  You need to use cryptographically secure routines. Not all pseudo-random routines are designed with that in mind.
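For comparison, when application code genuinely needs cryptographic-strength randomness, use a CSPRNG rather than a general-purpose generator. In Python, for example, that is the `secrets` module:

```python
import secrets

# CSPRNG output: suitable for keys, tokens, and salts,
# unlike general-purpose pseudo-random generators.
session_token = secrets.token_hex(16)   # 16 random bytes as 32 hex characters
salt = secrets.token_bytes(16)          # 16 raw random bytes

print(len(session_token))  # 32
print(len(salt))           # 16
```

Note there is deliberately no way to seed `secrets` – which is exactly the point of the discussion above.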




Read Full Post »

Cloud bursting is the new catchphrase of the moment.

I heard it the other day during an analyst briefing about the newest developments in cloud computing.  The idea of cloud bursting is that, for the most part, major enterprises will want to house their own cloud services for day-to-day operations, but then expand into the public cloud during peaks in demand.  They want to burst from their internal cloud to an external cloud platform. Thus “cloud bursting”.

Sounds good on paper.  An online floral delivery service can run its website on its own in-house cloud, adding new sites and services.  This internal cloud is preferred from a security point of view in that all user PII and other sensitive information stays in house and under the enterprise’s watchful eye.  Then, when Mother’s Day rolls around, the company can “cloud burst”: spin up, on a temporary basis, the additional websites it needs just for the holiday rush.  It expands its capacity temporarily by creating a hybrid cloud of internal and external services.
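The scheduling half of the idea is simple to sketch (hypothetical threshold and names, no real cloud API): traffic routes internally until capacity is exhausted, then spills to external nodes.

```python
INTERNAL_CAPACITY = 1000  # hypothetical concurrent-request ceiling

def route(active_internal: int, burst_enabled: bool = True) -> str:
    """Send traffic to the internal cloud until it is saturated,
    then 'burst' the overflow to the external provider (if allowed)."""
    if active_internal < INTERNAL_CAPACITY:
        return "internal"
    return "external" if burst_enabled else "reject"

print(route(400))          # internal – an ordinary Tuesday
print(route(1000))         # external – the Mother's Day rush
print(route(1000, False))  # reject – bursting disabled by policy
```

The hard part, as the rest of this post argues, is not this routing logic but what happens to sensitive data on the "external" branch.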

Well, that all sounds very impressive, and logically it makes a ton of sense, both technically and economically.  However, the devil is in the details, and my view is that the success of this concept of cloud bursting is completely dependent on getting the security right.  It might look easy on paper to add a few more virtual sites to your hybrid cloud, but if the service requires any PII or other sensitive information, you are now moving that information to an external site, and the game just got a lot stickier.

As mentioned elsewhere, data ownership of sensitive information is becoming more and more of an issue.  Yes, you can sign contracts with outside cloud vendors to ensure security, but most CxOs I talk to still have it in their DNA that secured information should stay internal.

What this concept of cloud bursting tells me is that this is an opportunity to get your single sign-on (SSO) or federation house in order.  A rock-solid identity foundation running your current external websites should be able to remotely add external cloud sites and still manage security (authentication and authorization) on the internal infrastructure.  The external cloud sites would be “neutered” versions of the web resources and would use federation or redirection to an SSO identity provider on internal resources for user security. Again, sounds easy on the whiteboard.
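In the simplest redirection form, the burst node holds no credentials at all: any request without a session the internal identity provider vouched for gets bounced back to it. A minimal sketch (made-up URL and parameter names, not any particular federation product):

```python
from urllib.parse import urlencode

# Hypothetical internal identity provider endpoint.
IDP_LOGIN_URL = "https://sso.example-corp.internal/login"

def handle_request(path, session_token, valid_tokens):
    """Burst-node logic: serve only sessions the internal IdP has vouched for."""
    if session_token in valid_tokens:
        return ("serve", path)
    # No local authentication on the external node –
    # redirect to the internal SSO provider instead.
    return ("redirect", IDP_LOGIN_URL + "?" + urlencode({"return_to": path}))

valid = {"tok-123"}
print(handle_request("/orders", "tok-123", valid))  # ('serve', '/orders')
print(handle_request("/orders", None, valid))       # redirect to the internal IdP
```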

So, as you make plans to expand your online presence, now might be the time to invest in building up your external-facing security infrastructure and get used to managing multiple instances of your web resources securely.  Then, when you have to “burst your cloud”, it won’t be as painful, and you can support the business needs of the company more easily and at lower cost.

And if you haven’t seen George Clooney in The Men Who Stare at Goats, you should.  Highly recommended, and he shows his own version of “cloud bursting”.

Read Full Post »

No matter how good your identity management architecture and processes are, you may have a gaping hole in your public facing web stack.  And you won’t even be sure when it is exploited.

The hole is any third-party application (like, who doesn’t have a few in their portal?).  I am always encouraging buy versus build, as your business should be putting jam into jars or running a bank, not writing software applications, particularly ones that face the customer. Face it, most internal apps have grown organically and are sinkholes of development cash. And they have not upgraded their facade technology. At best, they are working through a re-skinned technology layer, and you are not even sure who built it.

A customer relayed an interesting scenario that occurred recently that might keep you up at night.  They are in the financial business and offer services in a rather full-service portal.  Part of that portal is an external agent management and fulfillment application that they have contracted to use for years and now offer over their portal.  The application vendor was well known, well accepted, and had been a good partner for years.

After a recent compliance audit of the site, they received notification from the auditor that the third-party application contained an administrator account for an employee who had not worked for the company for six years. The account was a “privileged account” – a rather impressive marketing-sounding term that really means someone was too lazy to secure the OS with separation-of-duty policies, so root or system access gets handed out to accounts that are not tracked closely.  The “privileged account” had access to PII: clients’ personal data and account information.  Someone using that account could log in and download a lot of information that should not be free (apologies to my open source brethren).
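This class of problem is exactly what a periodic cross-check between application accounts and the HR roster catches. A sketch with hypothetical record shapes (a real deployment would pull both feeds from the IdM system):

```python
def stale_privileged_accounts(app_accounts, active_employees):
    """Flag privileged application accounts with no matching active employee."""
    return [
        acct["id"]
        for acct in app_accounts
        if acct["privileged"] and acct["owner"] not in active_employees
    ]

accounts = [
    {"id": "admin7", "owner": "jdoe",   "privileged": True},   # owner left 6 years ago
    {"id": "svc1",   "owner": "asmith", "privileged": True},   # owner still employed
    {"id": "user9",  "owner": "jdoe",   "privileged": False},  # stale but unprivileged
]
print(stale_privileged_accounts(accounts, {"asmith"}))  # ['admin7']
```

Run quarterly against every third-party application, this is a cheap control compared to the remediation bill described below.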

Remediation time – no problem.  Ask the third-party vendor to scan the audit logs and see if anyone has used that account in the last six years.  Dust for fingerprints and you are done.

But here is the rub: the third-party vendor was not following the client’s data center policies on logging and auditing.  In order to save storage space, and thus money (and thus price to the customer), their application was not set to log as much information on user activities as it could.  Therefore (wait for it), nobody was sure whether someone had used the privileged account for evil.

And in the binary business of security, without a way to prove a breach did not occur, one must assume it did.  Thus, the client was forced to implement a remediation plan for several million customers, to the tune of several million dollars and some pretty irate customers.  A hefty price to pay for a security breach that may never even have occurred.

Needless to say, our customer is implementing a security review of all third-party applications in their infrastructure and ensuring they abide by the security policies of the data center.  There is a cost involved, but not as much as the above remediation.

So when you look at your GRC policies, remember to include third-party applications and their vendors, and ensure they are abiding by the same rules as every other application in the house.  Add components to your identity framework, such as SSO or federation, that can externally aid in identity forensics.  And by all means, ensure the policies you place on your internal applications are enforced to the same level with any vendor who supplies an application to your company.

Read Full Post »

Recently, a challenge was put out to our organization to try to determine what the major challenges will be for IT in 2020. That’s several lifetimes in this business. But it is always wise to remember where the road you are on is eventually leading, or you will get lost.  After several long walks with the dog, here is what I believe.

The fine art of handling users across business enterprises…

The other day, I booked a round-trip flight for my son to go out to Colorado to look at a college he had gotten into. He was ticketed to fly out on US Airways and return on Frontier. The thing is, it was booked as a United Airlines flight, which I found on Kayak.com, a second-tier travel search engine. About the only thing I could not do was reserve a specific seat on the Frontier leg. Guess they did not have that part hooked up yet.

Now think about the identity aspects of this transaction. The only site I technically identified myself to was Kayak.com, to start my search. This is where I established my online identity instance. This identity instance then approached the United Airlines system with enough credibility for them to reserve a several-hundred-dollar asset (the seat) from their inventory. Guess Kayak.com trusts me because I created an online account with them.

But then United had to turn around to its suppliers, US Airways and Frontier, and complete a transaction on my behalf. All of them had to trust the supplier of my debit card (Visa), who, in turn, had to get the thumbs-up from my bank (Bank of America) that I had enough money in the account to cover the bill. And I am sure several government agency “do-not-fly” databases were checked or notified.

And my son’s Continental OnePass account was to be given credit for his trip, as United and Continental are partners and soon to merge. And in that same transaction, I could have arranged for a hotel, rental car, trip insurance, parking, and dinner reservations if I wanted to.

Did I mention this was not for me, but someone else?

This represents the challenge to enterprise IT organizations, and the enterprise itself, in the coming decade. By 2020, organizations must be able to handle online identities passing through their systems. Today, many IT organizations have some form of centralized employee directories and customer databases. They are still rolling out provisioning and identity self-service applications. Entire industries (healthcare comes to the top of the list) must be able to provide this seamless walk through a pool of providers.

By 2020, users must be able to establish an online identity and be able to pass through companies and partnerships as easily as they walk from store to store in a shopping mall. Sessions will have to be trusted from partners, privileges determined, and security maintained. All without delay.

Microsoft calls this claims – a type of request, the right to make the request, and the resource requested. But it is much more than that. Claims fit into the Windows world quite nicely, but it’s rough sledding when you get outside the Windows world and try to go to federated partners. Unless they are on MS as well, the integration is difficult.
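Stripped of vendor specifics, a claims-based check boils down to: did a trusted issuer assert that this subject has the right to this resource? A toy sketch (invented field names, no real token format or federation protocol):

```python
# Hypothetical federation partner we have chosen to trust.
TRUSTED_ISSUERS = {"idp.partner-airline.example"}

def authorize(claims: dict, resource: str) -> bool:
    """Grant access only if a trusted issuer asserted the needed right."""
    return (
        claims.get("issuer") in TRUSTED_ISSUERS
        and resource in claims.get("rights", [])
    )

claim_set = {
    "subject": "traveler-42",
    "issuer": "idp.partner-airline.example",
    "rights": ["book_seat", "view_itinerary"],
}
print(authorize(claim_set, "book_seat"))     # True – asserted by a trusted partner
print(authorize(claim_set, "issue_refund"))  # False – right never asserted
```

The hard engineering problems – token formats, trust establishment, revocation – live behind that `TRUSTED_ISSUERS` set, which is why cross-vendor federation is the sticking point.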

So, in 2020, companies must rethink their entire identity, security, and compliance approach to allow these millions of user sessions to “pass through” their systems and do business with them. Identity is no longer embedded in an application, or enforced at the firewall. It has to be built into the security fabric of the IT enterprise at the ground level. Thus Identity Network Engineering is the complete rethinking of the approach to building enterprise architectures. It has to merge all of the current identity technologies (provisioning, authorization, single sign-on, compliance, reporting, etc.) into a unified identity architecture that is as fundamental to data center design as networking and storage technologies are.

More entries on what that security fabric will look like in the coming blogs.

Read Full Post »

http://en.wikipedia.org/wiki/Pogo_(comics)

Famous words from Pogo and Walt Kelly, which were actually a jab at McCarthyism in their day.  But these words also apply to identity management.

Time and again, when truing up user accounts and security policies, many of the offending accounts fall into the cleaning lady category. They are the admin and IT management accounts – the very accounts used to manage security and identity in the enterprise.

Oh, and you can add the audit team and senior management.

And what I usually hear is “well we need to have access to manage everyone’s security”.

No you don’t.

It was this kind of thinking that caused all of the problems with root account access in UNIX getting out of hand.  Privileged users are a unique use case, but they must have the most policies and security controls on them.

Think of this scenario (it has happened). A new IT manager is hired (the last one failed to implement identity management correctly – could not take it seriously).  PeopleSoft creates the user, and the IDM system provisions the user.   Based on the need to manage IT resources, the role they are assigned gives that brand-new user the right to manage the very process that granted them the right in the first place.

Look, Pogo: a user granted the right to create more privileged users at their own level.  The enemy is us.

Granted, the workflow may include one or more approval steps. But once that provisioning is completed, what additional steps are being taken to ensure the safety of the overall system?  Who is checking that this new user does not alter the workflow to add additional privileges?  Who is following up on a regular basis to ensure the workflow has not been altered or used by a hacker to create additional privileged accounts?
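One cheap control for that last question is an integrity check: hash the workflow definition as approved, then re-hash the live definition on a schedule and alert on any drift. A sketch with a made-up workflow representation:

```python
import hashlib
import json

def workflow_hash(workflow: dict) -> str:
    """Canonical hash of a workflow definition, taken at approval time."""
    canonical = json.dumps(workflow, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

approved = {"steps": ["manager_approval", "security_approval", "provision"]}
baseline = workflow_hash(approved)

# Scheduled job: compare the live definition against the approved baseline.
tampered = {"steps": ["provision"]}  # approval steps quietly removed
print(workflow_hash(approved) == baseline)  # True – unchanged
print(workflow_hash(tampered) == baseline)  # False – raise an alert
```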

Think about who in your IT organization is “The Cleaning Lady.” You know, the low-paid, low-esteemed worker whom most people ignore, but who provides an invaluable service doing the dirty jobs (aka IT administration).  And who, because of the job she needs to do, has keys to all of the most secure areas of the office.  Who in your organization has a keychain full of top-level keys?

What about auditors?  DBAs?  UNIX admins?  The security team itself?

Can you tell us right now who these people are and who is monitoring their access?  Where are the exception policies that ensure these high-value target accounts are regularly audited and renewed?

One of my favorite stories is how a head of compliance got gobsmacked several years ago.  He was an early adopter who got upper management approval to push through an IDM project for SOX compliance.  He had a strict policy adopted that any rogue or orphan account could not have an email box (to make sure it was not used as a spam source).

The rogue-account reconciliation pulled up a variety of unclaimed privileged accounts, and the workflow promptly deleted the email account in Exchange, removed the home directory, and deactivated the accounts across a variety of systems.

Problem was, one of those accounts was the CEO’s.  Everything got trashed.

Seems the CEO had been given superhuman, almost godlike privileges to most of the company’s sensitive systems.  Any time IT admins added the CEO’s account to a system, they gave him admin/root access privileges. After all, Toto, he/she is the CEO.

When the reconcile missed linking the CEO’s account to a known HR record, it was deemed a rogue account with high entitlements and was promptly dispatched.
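The lesson generalizes: orphan detection is a set difference against the HR feed, but high-entitlement orphans deserve a manual-review queue rather than automatic deletion. A sketch with hypothetical record shapes:

```python
def reconcile(system_accounts: dict, hr_ids: set):
    """Split accounts unmatched to HR into auto-delete vs. manual review.

    Wrongly deleting a mis-matched privileged account (say, the CEO's)
    costs far more than holding it for a human to look at first.
    """
    auto_delete, review = [], []
    for account, privileged in system_accounts.items():
        if account in hr_ids:
            continue  # matched to a known employee – leave it alone
        (review if privileged else auto_delete).append(account)
    return auto_delete, review

accounts = {"ceo_acct": True, "temp_user": False, "jsmith": False}
hr = {"jsmith"}
print(reconcile(accounts, hr))  # (['temp_user'], ['ceo_acct'])
```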

Needless to say, there was no joy in Mudville that day.

Truth is, other than the trashed email account, the real problem was that the CEO should never have accumulated the excess entitlements in the first place.  Remember, over 85% of security breaches are believed to come from internal employees.

So, consider the extra effort needed to do Privileged User Management.  It will take a lot more time than you originally estimate.  Make sure you get all the cleaning ladies handled properly…

Read Full Post »