NSS is FIPS 140-2 level 2 validated

Bob Lord reports that NSS (Network Security Services), the crypto library that powers software such as Firefox, Thunderbird, OpenOffice.org, and Fedora Directory Server, has recently been FIPS 140-2 level 2 validated by NIST. This is an important milestone because NSS is the only open source crypto library validated to level 2 (the highest certification available for software). Level 1 allows use in a single-user environment, while level 2 allows a multi-user environment, and that not inconsiderable detail allows NSS-based software to be deployed into security-sensitive environments that resemble the common configuration of modern operating systems.

This is also an important milestone because it means that applications that use the NSS library for crypto, and that follow the security policy of the validation, can legitimately claim compliance themselves. The reason is that NSS draws the crypto boundary behind its APIs, so no private keys are ever accessible to applications. This means a whole bunch of software just became usable in an ever-increasing number of environments requiring FIPS 140-2 level 2 validation.

Congratulations to the NSS team.

Free as in beer

There is a new project on the block: FreeIPA. This is an effort to shore up the existing identity infrastructure, such as Kerberos, LDAP, Samba, and RADIUS, and make it all work together out of the box. For version 1 we’ll be concentrating on the I, for identity, and in later versions we’ll be adding the very important policy and audit capabilities. If this kind of thing interests you enough to want to contribute, we have plenty to do.

Project blurb:

FreeIPA (so far) is an integrated solution combining:
* Linux (currently Fedora)
* Fedora Directory Server
* FreeRADIUS
* MIT Kerberos
* NTP
* DNS
* Samba
* Web and command-line provisioning and administration tools

The goal of this version is to allow an administrator to quickly install, set up, and administer one or more servers for centralized authentication and identity management.

Motivation

For efficiency, compliance, and risk mitigation, organizations need to centrally manage and correlate vital security information, including:

* Identity (machine, user, virtual machines, groups, authentication credentials)
* Policy (configuration settings, access control information)
* Audit (events, logs, analysis thereof)

Because of their vital importance and the way they are interrelated, we think identity, policy, and audit information should be open, interoperable, and manageable. Our focus is on making identity, policy, and audit easy to manage centrally for the Linux and Unix world. Of course, we will need to interoperate well with Windows and much more.

We are looking to take concrete and useful steps and so have chosen initially to focus on Identity solutions for the Unix/Linux world with some support for Windows login.

We intend to tackle centralized management of policy and audit information next.

Secure OpenID

I’ve been waiting for the first OpenID provider to offer a certificate-based, no-password-ever service. Not merely an SSL service, but a certificate-authentication-based service. That is, a service that simply puts a certificate in your browser’s certificate database and uses that to authenticate you. Browsers are well versed in the art of the certificate these days; they have had a while to iron out the rough spots. Auto-installation of certificates from a web page is possible, and that allows a pretty seamless experience for sign-up and “log in.” Prooveme.com very nearly, almost, but not quite gets it right. When I signed up and briefly tested the service I noted three rather serious problems:

  1. I had to click through a certificate security alert dialog because they used a self-signed certificate for the page that installs the user certificate. It is just fine to use self-signed certificates for user identification in this case, in fact it is the perfect use case, but I should know who is giving me the certificate, and I shouldn’t be trained any further in bad browsing habits. Their users are surely worth a $20 certificate.
  2. Upon signing up for a site, I discover that I am not asked whether I have authorized the site to identify me. If I log in to a site for the first time I want to be alerted to that fact. There needs to be some level of control here, so that I can decide whether to be auto-logged in to a particular site.
  3. After recovering from the shock of being logged in straight away, I noticed my name had been given up too! That is, er, not cool.

I’m a forgiving sort though, so I shall take comfort in the knowledge that this is a relatively new service and it is still working on these things. Clearing up these issues will get us all a whole lot closer to the ideal provider setup and, I think, the minimum required security for the use of OpenID by anyone who cares about their identity.
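
For the curious, here is a minimal sketch of what a no-password, certificate-authenticated connection looks like at the TLS layer, in Python. Everything here is illustrative: the file names and port are placeholders, and a real provider would map the verified certificate to an account rather than print its subject.

    # Sketch: a server that authenticates clients by certificate alone.
    # Assumes the provider already installed a certificate it issued
    # into the user's browser; no password is ever exchanged.
    import socket
    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("server-cert.pem", "server-key.pem")  # the site's own identity
    ctx.load_verify_locations("issuing-ca.pem")  # CA that issued the user certs
    ctx.verify_mode = ssl.CERT_REQUIRED          # no client certificate, no entry

    with socket.create_server(("", 8443)) as server:
        with ctx.wrap_socket(server, server_side=True) as tls:
            conn, addr = tls.accept()
            # The verified peer certificate *is* the login.
            print("authenticated subject:", conn.getpeercert()["subject"])
            conn.close()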

The umpire delegates back

Recently Kim Cameron has been defending CardSpace against various assertions that it won’t work offline. As I pointed out some while back, that is pure nonsense. I’ll let you read Kim’s blog for the details of how such a system might work with CardSpace, but I’ll just say it has to do with delegation. And that’s just a big word for access control, in this case user-centric, decentralized access control.

There really is no big secret to how this stuff is possible - at some point an offline user will be online, and during that time, instead of ceding their credentials to the service in the sky (or worse, having it happen without choice), they spend the time granting access specific to the service that needs it. That’ll be a statement along the lines of “Pete’s blog is allowed to view this flickr photoset,” not “here’s my password dude, do as you will,” or indeed “hey, IdP, see that service? That’s me, that is.” I have to agree with Kim on the notion of impersonation - at no time should anybody give the access level required to impersonate themselves, on or offline.
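
To make the idea concrete, here is a toy sketch of such a grant in Python. This is not how CardSpace encodes anything - just an illustration that a scoped, signed, expiring statement can stand in for a password. All of the names are made up.

    # Toy sketch of a user-centric delegation grant (illustrative only).
    # Instead of handing a service your password, you hand it a narrowly
    # scoped, signed, expiring statement.
    import hashlib
    import hmac
    import json
    import time

    USER_KEY = b"secret shared with the identity provider"  # hypothetical

    def make_grant(audience, resource, action, ttl=3600):
        """Grant `audience` permission to perform `action` on `resource`."""
        claim = {
            "audience": audience,   # who may use this grant
            "resource": resource,   # what it applies to
            "action": action,       # the one thing it allows
            "expires": int(time.time()) + ttl,
        }
        body = json.dumps(claim, sort_keys=True).encode()
        sig = hmac.new(USER_KEY, body, hashlib.sha256).hexdigest()
        return body, sig

    def verify_grant(body, sig):
        """Check the signature and expiry before honoring the grant."""
        expected = hmac.new(USER_KEY, body, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            return None
        claim = json.loads(body)
        return claim if claim["expires"] > time.time() else None

    # "Pete's blog is allowed to view this flickr photoset."
    body, sig = make_grant("petes-blog.example", "flickr:photoset/42", "view")
    print(verify_grant(body, sig))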

There be dragons.

Internet Identity Workshop 2007

Yes, it’s that time again. If you have any interest in seeing what has been going on, what is going on, and what is about to go on in digital identity, I suggest you sign up for IIW2007, to be held in May at the Computer History Museum in Mountain View, CA. You won’t be sorry, but you might get caffeine shakes.

Serial numbers and MMR

I haven’t blogged in a while, and the reason for that is really quite simple: when it comes to blogs, code comes first. Actually, that is probably better written as: when it comes to %x, code comes first.

A while ago I wrote about some of the issues some people have with multi-master replication in a directory server. Something that comes up quite a bit on the Fedora Directory Server discussion lists is a request to automatically generate Unix uid and gid values in the uidNumber and gidNumber attributes of the posixAccount objectclass. As the LDUP considered harmful document points out under section 4.2, Allocation of serial numbers, this is hard to do in a multi-master replication environment because two or more masters could allocate the same serial number at roughly the same time, without the ability to detect the clash until it is too late to prevent it. That would, of course, be bad.
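
A toy illustration of the clash, under an assumed, much-simplified model of replication:

    # Two masters hold the same replicated state, and each allocates
    # max(existing) + 1 locally before either change has replicated.
    master_a = [1000, 1001]
    master_b = [1000, 1001]

    def allocate(replica):
        uid = max(replica) + 1  # looks safe from the local view...
        replica.append(uid)
        return uid

    print(allocate(master_a))  # 1002
    print(allocate(master_b))  # 1002 - the duplicate only shows up
                               # when the changes merge, too late.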

I recently added the FDS solution to this problem - a general purpose serial number allocation plugin, modestly called the distributed numeric assignment plugin, or “dna” for short. It will generate unique serial numbers in an MMR environment, including uidNumber and gidNumber. I wanted to solve this problem because allocating serial numbers is a reasonable and quite common thing to want to do, the directory server should probably do it for you, and, as I mentioned, the subject comes up with reasonable frequency on the Fedora Directory Server discussion list. Essentially there are two main approaches to this problem in the wild:

  1. Have a single master do the allocation and then replicate the result to the other masters. This does not really solve the problem, because such a system has two major undesirable properties: there is a single point of failure at the allocator, and there is a replication delay between creating or modifying an entry and having that entry become “whole” by having its serial numbers catch up with it.
  2. Have all masters get in a huddle and divvy out blocks of serial numbers per master. While this allows masters to allocate serial numbers independently (a good goal, I’d say), it does mean the masters must cooperate in order for one to get a new block of numbers. Perhaps that is OK for some systems, but it requires that every master have at least indirect access to every other master for such a protocol to work, and it wouldn’t be pretty, likely generating lots of network chatter. That all seems a little too coupled for a loosely coupled replication scheme anyway.

A third approach is to combine those two by having a single master do the divvying. That produces a system with a single point of failure but gives the admin some time to get the allocating server back up before the system grinds to a halt one server at a time. So at least you know in advance you are doomed.

Yet another approach might be to divvy up large blocks of the number space among the masters - so large, in fact, that you bet on never creating more serial numbers at any one master than are available in its block. This is feasible: 2 billion or so numbers could be split quite a few ways before you get close to any probability of overflow, if we are talking about one allocation per user entry, for instance. However, once the space has been split between your masters, what happens when you add a master? You’ve already allocated your number space, so how do you reset it? Keep spare blocks? How many spare blocks is enough? How big a block is enough?
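
The arithmetic backs this up; a quick, purely illustrative calculation assuming a 31-bit uidNumber space:

    # Split a 31-bit serial number space across a fixed set of masters.
    SPACE = 2**31  # ~2.1 billion usable numbers
    for masters in (2, 10, 100):
        print(f"{masters:>3} masters -> {SPACE // masters:,} numbers each")
    # Plenty per master, but the split is fixed: adding one more master
    # means carving a block out of space that has already been handed out.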

Of course, if one were to implement a multi-master attribute uniqueness scheme then that could be relied upon to reject non-unique serial numbers. Such a scheme would also be very network chatty and not in the spirit of loosely coupled replication. In any case, attempting to add a serial number and waiting for a rejection from across the network before trying again with the next serial number in line doesn’t sound too hot in the performance department to me.

Needless to say, my solution involved none of this. Actually, the answer I came up with is quite simple - don’t allocate a block, allocate a sequence. So, for example, master 1 allocates the sequence 1, 4, 7, and so on, while master 2 allocates 2, 5, 8. There are only two masters in that example, but astute readers will recognize that a third master could be added (allocating 3, 6, 9) without any reconfiguration of the first two. Add a fourth? Now you need to reset the existing masters, giving them a starting number higher than any previously allocated and a new sequence interval equal to or higher than the number of masters. This does of course produce fragmented sequences, so that if you were to combine the lists of numbers from all masters there would be some numbers missing towards the end of the list. Typically, though, systems that rely upon this kind of feature value “unique” over “goes up in ones.” That also means a typical deployment would make the sequence interval quite high, to avoid the possibility that the sequence configuration would need to be reset as masters are added.
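
In code, the idea is tiny. This is just a sketch of the interleaving scheme, not the plugin’s actual implementation, and the numbers are arbitrary:

    # Each master starts at a distinct offset and steps by a fixed
    # interval, so allocations can never collide and no coordination
    # between masters is needed.
    class SequenceAllocator:
        def __init__(self, start, interval):
            self.next_value = start   # this master's offset in the sequence
            self.interval = interval  # >= number of masters, with headroom

        def allocate(self):
            value = self.next_value
            self.next_value += self.interval
            return value

    # An interval of 10 leaves room to grow from 2 masters to 10
    # without touching the existing ones.
    master1 = SequenceAllocator(start=1001, interval=10)
    master2 = SequenceAllocator(start=1002, interval=10)

    print([master1.allocate() for _ in range(3)])  # [1001, 1011, 1021]
    print([master2.allocate() for _ in range(3)])  # [1002, 1012, 1022]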

The major advantages of this scheme are independent serial number allocation, economical use of the number space, no single point of failure, no network chatter typical of cooperative schemes, and a warm fuzzy feeling inside every time you add a new user to your system. It’s a win/win I think.

Oh, and if you really want to, you can configure the plugin to use “blocks” and allocate serial numbers monotonically. That would also be the typical single-master deployment configuration.

(Fire)walls have ears

Like to chat online? Of course you do. Like third parties snooping in on your conversations? Of course you don’t. Unfortunately, that is the reality today: there is no shortage of IM sniffers out there, and that makes your conversations vulnerable to capture by even the unsophisticated. Beyond employers spying on employees, any sensitive company information you divulge could be going straight into the ears of your competitors.

There is good news though: Bob Lord has written about the secure AIM support that his team added to the AIM client five years ago using open standards. Apparently people who write books about this sort of thing have never noticed the security tab in the AIM configuration, so they don’t write about it. That’s a bit of a shame, given that secure AIM uses certificate-based chat encryption and signing. In other words, you know who you are talking to, and you know you are talking only to that person. He even offers to help the gaim team if they want a compatible implementation. I do note that there are some crypto plugins for gaim, but there is an obvious advantage to supporting the same scheme as AIM, and an open standard intended for the purpose, at the same time.