Kurzweil predicted it accurately

I think it was Ray Kurzweil who said, during a presentation at Stanford, that the US Patent System was in for some problems.  I believe he had some charts showing the near-exponential growth in the number of patents being filed ... and talked about the fact that reviewing them for possible overlap and duplication was going to become near impossible ... unless we end up employing the majority of US citizens!  Well ... looks like things are heating up!

What is cool is that they are actually exploring some new and innovative ways to deal with the review of patents ...
The U.S. Patent Office Wants You. A plan to help overburdened patent examiners solicits online advice from outside sources (read: you), calling on Slashdot's founder for a system to rank user comments. Plus: China blocks LiveJournal. In 27B Stroke 6. [Wired News: Top Stories]

The umpire delegates back

Recently Kim Cameron has been defending CardSpace against various assertions that it won’t work offline. As I pointed out some while back, that is pure nonsense. I’ll let you read Kim’s blog for the details of how such a system might work with CardSpace, but I’ll just say it has to do with delegation. And that’s just a big word for access control, in this case user-centric decentralized access control.

There really is no big secret to how this stuff is possible - at some point in time an offline user will be online, and during that time instead of ceding their credentials to the service in the sky (or worse, it happens without choice), they spend the time granting access specific to the service that needs access. That’ll be a statement along the lines of “Pete’s blog is allowed to view this flickr photoset.”, not “here’s my password dude, do as you will”, or indeed “hey, IdP, see that service? That’s me that is.” I have to agree with Kim on the notion of impersonation - at no time should anybody give the required access level for impersonation of themselves, on or offline.

There be dragons.

Internet Identity Workshop 2007

Yes, it’s that time again. If you have any interest in seeing what has been going on, what is going on, and what is about to go on in digital identity I suggest you sign up for IIW2007 to be held in May at the Computer History Museum in Mountain View, CA. You won’t be sorry, but you might get caffeine shakes.

Serial numbers and MMR

I haven’t blogged in a while, and the reason for that is really quite simple: when it comes to blogs, code comes first. Actually, that is probably better written as: when it comes to %x, code comes first.

A while ago I wrote about some of the issues that some people have with multi-master replication in a directory server. Something that comes up quite a bit on the Fedora directory server discussion lists is a request to automatically generate unix uid and gid in the uidNumber and gidNumber attributes of the posixAccount objectclass. As the ldup considered harmful document points out under section 4.2., Allocation of serial numbers, this is hard to do in a multi-master replication environment because two or more masters could allocate the same serial number at roughly the same time, without the ability to detect the clash until it is too late to prevent it. That would, of course, be bad.
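To make the clash concrete, here is a toy illustration (hypothetical data, not actual directory server code) of two masters that share the same replicated state, each computing the "next" uidNumber independently:

```python
# Two masters with identical replicated state (hypothetical example data)
master_a = {"uidNumbers": [1000, 1001]}
master_b = {"uidNumbers": [1000, 1001]}

# Each independently allocates "max + 1" before the next replication cycle...
new_a = max(master_a["uidNumbers"]) + 1
new_b = max(master_b["uidNumbers"]) + 1

# ...and both hand out the same serial number; the clash only surfaces
# after replication, when it is too late to prevent it.
assert new_a == new_b == 1002
```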

I recently added the FDS solution to this problem - a general purpose serial number allocation plugin which is modestly called the distributed numeric assignment plugin, or “dna” for short. It will generate unique serial numbers in an MMR environment, including uidNumber and gidNumber. I wanted to solve this problem because allocating serial numbers is a reasonable and quite common thing to want to do, the directory server should probably do it for you, and as I mentioned, this subject does come up with reasonable frequency on the Fedora directory server discussion list. Essentially there are two main approaches to this problem in the wild:

  1. Have a single master do the allocation and then replicate the result to the other masters. This is not really solving the problem because there are two major undesirable properties of such a system: there is a single point of failure at the allocator, and there is a replication delay between creating or modifying an entry and having that entry become “whole” by having its serial numbers catch up with it.
  2. Have all masters get in a huddle and divvy out blocks of serial numbers per master. While this allows masters to independently allocate serial numbers (a good goal I’d say), it does mean that the masters must cooperate in order for one to get a new block of numbers. Perhaps that is ok for some systems, but it does require that all masters have at least indirect access to all other masters in order for such a protocol to work, and it wouldn’t be pretty, likely having lots of network chatter. That all seems a little too coupled for a loosely coupled replication scheme anyway.

A third approach is to combine those two by having a single master do the divvying. That produces a system with a single point of failure but gives the admin some time to get the allocating server back up before the system grinds to a halt one server at a time. So at least you know in advance you are doomed.

Yet another approach might be to divvy up large blocks of the number space among the masters. So large, in fact, that you bet on never creating more serial numbers at any one master than are available in the block. This is feasible, 2 billion or so could be split quite a few ways before you get close to the probability of overflow if we are talking about one allocation per user entry for instance. However, once the space has been split between your masters, what happens when you add a master? You’ve already allocated your number space, how do you reset it? Keep spare blocks? How many spare blocks is enough? How big a block is enough?

Of course, if one were to implement a multi-master attribute uniqueness scheme then that could be relied upon to reject non-unique serial numbers. Such a scheme would also be very network chatty and not in the spirit of loosely coupled replication. In any case, attempting to add a serial number and waiting for a rejection from across the network before trying again with the next serial number in line doesn’t sound too hot in the performance department to me.

Needless to say, my solution involved none of this. Actually, the answer I came up with is quite simple - don’t allocate a block, allocate a sequence. So, for example, master 1 allocates the sequence 1, 4, 7, and so on, while master 2 allocates 2, 5, 8. There are only two masters in that example, but astute readers will recognize that a third master could be added without any reconfiguration of the first two. Add a fourth? Now you need to reset the existing masters, giving them a starting number higher than previously allocated and a new sequence interval equal to or higher than the number of masters. This does of course produce fragmented sequences, so that if you were to combine the lists of numbers from all masters there would be some numbers missing towards the end of the list. Typically though, systems that rely upon this kind of feature value “unique” over “goes up in ones.” That fact also means that a typical deployment would make the sequence interval quite high in order to avoid the possibility that sequence configuration would need to be reset as masters are added.
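The interleaved-sequence idea can be sketched in a few lines of Python (the class name and interface here are illustrative only, not the actual dna plugin API):

```python
class SequenceAllocator:
    """Sketch of interleaved-sequence serial number allocation.

    Each master gets a distinct starting value and a shared interval;
    as long as the interval >= the number of masters and the starting
    values are distinct within one interval, the sequences never overlap.
    """
    def __init__(self, start, interval):
        self.next_value = start    # first serial number this master hands out
        self.interval = interval   # step size between allocations

    def allocate(self):
        value = self.next_value
        self.next_value += self.interval
        return value

master1 = SequenceAllocator(start=1, interval=3)  # allocates 1, 4, 7, ...
master2 = SequenceAllocator(start=2, interval=3)  # allocates 2, 5, 8, ...
assert [master1.allocate() for _ in range(3)] == [1, 4, 7]
assert [master2.allocate() for _ in range(3)] == [2, 5, 8]
# A third master (start=3, interval=3) fits in without reconfiguring
# the first two; a fourth would require resetting starts and interval.
```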

The major advantages of this scheme are independent serial number allocation, economical use of the number space, no single point of failure, no network chatter typical of cooperative schemes, and a warm fuzzy feeling inside every time you add a new user to your system. It’s a win/win I think.

Oh, and if you really want to, you can configure the plugin to use “blocks” and allocate serial numbers monotonically. That would also be the typical single master deployment configuration.

(Fire)walls have ears

Like to chat online? Of course you do. Like third parties snooping in on your conversations? Of course you don’t. Unfortunately that is the reality today: there is no lack of IM sniffers out there, and that makes your conversations vulnerable to capture even by the unsophisticated. Beyond employers spying on employees, any sensitive company information you might divulge could be going right into the ears of your competitors.

There is good news though, Bob Lord has written about secure AIM that his team added to the AIM client 5 years ago using open standards. Apparently people who write books about this sort of thing have never noticed the security tab in the AIM configuration so they don’t write about it. That’s a bit of a shame given that secure AIM uses certificate based chat encryption and signing. In other words you know who you are talking to, and you know you are only talking to that person. He even offers to help the gaim team if they want a compatible implementation. I do note that there are some crypto plugins for gaim but there is an obvious advantage to supporting the same scheme as AIM and an open standard intended for the purpose at the same time.

At the Core of Authentication

Authentication is the process of an entity proving its identity to a system, typically to get access to certain resources managed by the system.

The industry typically talks about authentication in terms of:
     o  what you know
     o  what you have, and,
     o  who you are
and, occasionally,
     o  how you do something
is also included.

In this article, I want to get to the real core operation of authentication, and make the case, again, for focusing on asymmetric key exchanges for strong authentication. If you look at what constitutes authentication, it is as simple as proof of identity based on information exchange.

.  "What you know" is, of course, information. However, "what you have", "who you are", and "how you do something" are also information in the following senses:

.  "What you have" is information stored in an object (eg. a smart card), as opposed to your brain.

.  "Who you are" is information stored somewhere in/on your body (eg. your thumb, your retina), as opposed to the neurons in your head.

.  "How you do something" is a reflection of learned or innate patterns in your muscular system (e.g. your typing cadence). It is less direct, but authentication in this form is just the computer extracting your body's parameters on the action you are taking.

Conclusion #1: Authentication can be reduced to using "the information you have" to identify yourself to a system.

(BTW, "you" could be an entity other than a human.)


There are two fundamental ways you can use information to uniquely prove an entity's identity to a system:
     o  Shared secrets
     o  Asymmetric key exchange

The bulk of authentication systems use shared secrets. From passwords (shared between the system and your brain), to thumbprint readers (the system and your thumb), to most card key systems (the system and the access card). The biggest problem with shared secrets is that the identifying secret needs to be exchanged during the authentication process. This means that it is vulnerable to attacks that can sniff out the shared secret during the exchange.

The advantage of asymmetric key exchange (i.e. PKI) is that it is the only way we know to establish the identity of an entity (i.e. that the entity holds a certain unique secret, a private key in this case) without exchanging the secret. The identifying secret never has to be exposed by the entity (see Physicalization).
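To illustrate proving possession of a secret without ever transmitting it, here is a toy challenge-response using textbook RSA. The primes are tiny and the scheme is purely pedagogical, in no way secure:

```python
# Toy textbook-RSA challenge-response (tiny primes, pedagogical only, NOT secure)
p, q = 61, 53
n = p * q                           # public modulus (3233)
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (modular inverse; Python 3.8+)

challenge = 42                      # server sends a fresh random nonce
signature = pow(challenge, d, n)    # client signs it; d never leaves the client

# Server verifies using only the public key (e, n) -- no secret was exchanged.
assert pow(signature, e, n) == challenge
```

Note the contrast with a password: here the verifier never sees the secret, only evidence that the prover holds it.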

Therefore...

Conclusion #2: The most secure form of authentication has to utilize asymmetric key exchange.

Anonymity – A Binary Switch?

There's been a slew of postings on the topic of anonymity, so I thought I'd jot down a few of my thoughts too... and collect the links here.

Key Points:
  1. Norlin’s Maxim: Your personal data is shifting from private to public.
  2. What becomes public stays public.
  3. If the default for digital identities is anonymity, it will give the user more control.
  4. The default in most systems is not anonymity.
  5. Anonymity and strong identity should be orthogonal issues, and technically they can be.
  6. Anonymity is not typically supported in most systems, so the stronger your identity, the less anonymous it is.

Binary Switch? Eric Norlin critiques Dave Weinberger: Eric believes that there is a spectrum of choices from anonymous, through a range of pseudonymity, to unanonymous identities. Eric asserts that "... online identity is *not* a binary issue." I wonder. If you believe in "Norlin’s Maxim", then so long as there is some small piece of information that links a pseudonym to the user, sooner or later, a pseudonymous identity becomes an unanonymous identity. I believe that anonymity is a binary decision. If your digital identity is not fully anonymous, then it is (or soon will be) unanonymous.

Resources:
  1. Ben Laurie, Anonymity is the Substrate (http://www.links.org/?p=123). August 24, 2006.
  2. Akma Adam, Plus Ça Change (http://akma.disseminary.org/archives/2006/08/plus_a_change.html). August 20, 2006.
  3. David Weinberger, Anonymity as the default, and why digital ID should be a solution, not a platform (http://www.hyperorg.com/blogger/mtarchive/anonymity_as_the_default_and_w.html). August 16, 2006.
  4. Dave Kearns, Yet more on anonymity (http://vquill.com/2006/08/yet-more-on-anonymity.html). August 15, 2006.
  5. Eric Norlin, Should the online world reflect the "real" world? (http://blogs.zdnet.com/digitalID/?p=61). August 15, 2006.
  6. Bavo De Ridder, Do you really think you are anonymous? (http://bderidder.wordpress.com/2006/08/15/do-you-really-think-you-are-anonymous/). August 15, 2006.
  7. Kim Cameron, Dave Kearns takes on anonymity (http://www.identityblog.com/?p=530). August 14, 2006.
  8. Dave Kearns, More on Privacy vs Anonymity (http://vquill.com/2006/08/more-on-privacy-vs-anonymity.html). August 14, 2006.
  9. Tom Maddox, Ben Laurie on Anonymity (http://blog.opinity.com/2006/08/ben_laurie_on_a.html). August 14, 2006.
  10. Dave Kearns, Anonymity, identity - and privacy (http://www.vquill.com/2006/08/anonymity-identity-and-privacy.html). August 14, 2006.
  11. Kim Cameron, Norlin’s Maxim (http://www.identityblog.com/?p=525). August 12, 2006.
  12. William Beem, Security by Obscurity (http://william.beem.us/2006/08/security_by_obscurity.html). August 10, 2006.
  13. Eric Norlin, Anonymity and identity (http://blogs.zdnet.com/digitalID/?p=60). August 10, 2006.
  14. David Weinberger, Transparency and Shadows (http://www.strumpette.com/archives/162-Cluetrain-author-dispels-absolute-transparency-myth.html). August 8, 2006.
  15. P.T. Ong, Strong Identities Can Be Anonymous (http://blog.onghome.com/2005/03/strong-identities-can-be-anonymous.htm). March 11, 2005.
  16. P.T. Ong, Support for Anonymity (http://blog.onghome.com/2005/01/support-for-anonymity.htm). January 30, 2005.

OpenSSO Available

Noted. I was browsing through Pat Patterson's blog and noticed his posting on the release of OpenSSO. OpenSSO source code, released on August 17, 2006, is now available at https://opensso.dev.java.net/public/use/.

The cost of deploying backend-based SSO systems has traditionally not been in the cost of the software itself. Netegrity (now CA) and Oblix (now Oracle) both had technology similar to OpenSSO. The biggest challenge in rolling out these systems was that you had to integrate them with the backend servers, resulting in very slow deployment projects. It also meant that most companies couldn't really achieve Single Sign-On. Hence, the term Reduced Sign-On (RSO) was born.

I'm unclear as to how OpenSSO will affect the industry. What do you think?

Recent Articles of Interest

Noted. Haven't had much time to write my own thoughts ... so here are a few of the more interesting articles I've read over the last few months:

The identity silo paradox. Eric Norlin points out the reality that the organizations that have the large identity silos of internet users have very little business incentive to share that information -- i.e. to be identity providers. Bavo De Ridder responds in Is there an identity silo paradox?.

The Long View of Identity. Andy Oram gives a good overview of the major issues surrounding the issue of identity -- I tried to point out the key issues in a mushier way in Painting the Future: Panopticons and Choice.

Top 5 Identity Fallacies [#1] [#2] [#3] [#4] [#5]. Phil Becker writes eloquently about the misunderstandings of options we have when we build digital systems.

Credit Bureau as Identity Provider? Pete Rowley talks about credit bureaus as future identity providers. Similar to my thoughts about how credit card companies could serve a similar role.

Much Ado About Nothing?

Been busy. Six months without a post ... thought I'd better either shut the blog down, or start posting again. I decided in favor of the latter. And it just so happens that there is interesting stuff to post about...

"51% oppose NSA database" was USA Today's headline on Monday (at least it was on the copy I picked up in Hong Kong). Interesting. So I read through all the related articles.

The long and short of it is that the NSA has been collecting phone call records directly from most phone companies. Qwest, according to USA Today, was the only one that didn't release its customers' records. 51% of the 809 people USA Today polled were against the idea. (Not sure how -- I always like to know how a poll was conducted). USA Today's editorial (written by Keith Simmons) agreed with the majority view.

I think we could get a little bit more practical about the problem, and move away from the privacy debate -- which typically degenerates to a religious debate based on one's normative beliefs on the relationship between the individual and society. Huh? :-) Right.

Why collect the data? To catch the bad guys, right?

Well, if you assume that the bad guys are stupid, they will register phones under their real names and use their personal credit cards to pay the bills. Everything traceable.

However, if the bad guys are a bit smarter, they would go out to the nearest Best Buy (Dixon's if they're in the UK) and get a pre-paid phone, using cash... buy lots of pre-paid vouchers (again, with cash)... and voila! anonymous calling on a mobile phone. This might be a bit more expensive than regular phones, but a few bucks more on the phone bill is not a major consideration for these bad guys. And sure, if they are dumb enough to add credit to their phone with a personal credit card, or set up their phone from an ISP which can link the connection to them, then they might be hosed.

So, assuming a modicum of smarts in the bad guys, what is the reason for amassing personal phone records? I can't think of one. Can you?

Postscript: Here's one suggested by a friend: if you have a phone number linked to a well-known bad guy, the pattern of numbers that well-known phone calls might be useful information, even if there are anonymous phones involved. Well... serves them right for calling anonymous phones with well-known phones!

What Must Happen

The future of digital identity is set in the context of the evolution of digital systems. This article might be a bit off topic (in that it is not specifically about digital identity), but I think it's important for us to consider the bigger context of the evolution of digital systems.

WHAT MUST HAPPEN

When trying to figure out what technology to build, answering the question "what must happen" is a necessity. Not what would be good to happen, but what must happen...

Software that Runs Software: Software to date has been built for human use. But because of the sheer number of systems we are exposed to, the next generation of software needs to be software that runs software -- for humans. Agents, or meta-applications, if you will.

Dominant Systems Define Standards: All these attempts to define standards just result in a mishmash of "standards". Just about the only way to create widely adopted protocols is to create a dominant system, and then open it up. For example, Skype has a tremendous opportunity to set an industrial standard, if they open up fast enough and flexibly enough.

Sandboxes vs Always-On: (i.e. P2P vs Client/Server). Because the physical still matters, and ownership still matters, sandboxes are still needed, and will always be needed. Even if it is possible to be always on the network, the user might not choose to refer to a network resource, but rather have a copy of it he/she manages. For example, instead of pointing to a web page on a website owned by someone else, the user might want a copy kept in his/her own blog or wiki -- just in case the owner changes it, or stops exporting it.

ASP systems (e.g. Salesforce.com) ultimately will reach full functionality only if they provide P2P facilities.

Synchronization Must Be Done Right: A corollary to the sandboxing trend is that synchronization as a science and engineering technique must be done right.

Lego My Servers: Servers are too complicated to set up and to run. Future servers will come in "Lego" building block format. Run out of disk space on your email server? Plug another email server "brick" next to your first, and the problem is solved. Want redundancy? Buy another two bricks, put them elsewhere, point them to the first pair, and you will have a hot-fail-over system. The bricks will be very specialized: email server, web server, directory server, file server, system admin servers, data servers, etc.

Of course strong security, including strong digital identity, is required in server bricks.

Evolutionary Revolutions: Respect Legacy. Systems that do not respect and work with legacy systems will fail (unless they perform a function that heretofore did not exist). That's why, also, the next generation of software will be meta-applications.


WHAT SHOULD HAPPEN (Normative Statements)

Here are a couple of things I believe should happen, but might not because short term commercial drivers might not be there to make them happen ...

Software for the Long Haul: All too often, we design software without thinking about the long haul. For example, 4-byte IP address space (which has long since run out of room) and 32-bit time integer in Unix (which will expire in 2038). See http://blog.onghome.com/2005/06/long-lived-software.htm.

Basic Software Engineering: Professional software engineering means that we hold ourselves up to the highest engineering standards. Basic issues like designing for testability, internationalization, code coverage, error handling, UI useability, etc. needs to be part of what we do day-to-day in Software Engineering -- otherwise, we should just call it hacking.

[This article was initially written on December 2005.]

What Must Happen

The future of digital identity is set in the context of the evolution of digital systems. This article might be a bit off topic (in that it is not specifically about digital identity), but I think it's important for us to consider the bigger context of the evolution of digital systems.

WHAT MUST HAPPEN

When trying to figure out what technology to build, answering the question "what must happen" is a necessity. Not what would be good to happen, but what must happen...

Software that Runs Software: Software to date has been built for human use. But because of the sheer number of systems we are exposed to, the next generation of software needs to be software that runs software -- for humans. Agents, or meta-applications, if you will.

Dominant Systems Define Standards: All these attempts to define standards just result in a mishmash of "standards". Just about the only way to create widely adopted protocols is to create a dominant system, and then open it up. For example, Skype has a tremendous opportunity to set an industry standard, if it opens up fast enough and flexibly enough.

Sandboxes vs Always-On: (i.e. P2P vs Client/Server). Because the physical still matters, and ownership still matters, sandboxes are still needed, and will always be needed. Even if it is possible to be always on the network, the user might not choose to refer to a network resource, but rather, have a copy of it he/she manages. For example, instead of pointing to a web page on a website owned by someone else, the user might want a copy kept in his/her own blog or wiki -- just in case the owner changes it, or stops exporting it.

ASP systems (e.g. Salesforce.com) will ultimately reach full functionality only if they provide P2P facilities.

Synchronization Must Be Done Right: A corollary to the sandboxing trend is that synchronization as a science and engineering technique must be done right.
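One classic ingredient of "doing synchronization right" is detecting conflicting edits rather than silently losing one side. A minimal sketch, using version vectors (the function and replica names are illustrative, not from the article):

```python
# Conflict detection with version vectors: each replica keeps a counter
# per replica it has seen updates from. Comparing two vectors tells us
# whether one copy strictly dominates the other, or whether the edits
# were concurrent and need a merge policy.

def compare(vv_a, vv_b):
    """Compare two version vectors (dicts of replica -> counter).

    Returns "equal", "a_newer", "b_newer", or "conflict".
    """
    a_ahead = any(vv_a.get(k, 0) > vv_b.get(k, 0) for k in vv_a)
    b_ahead = any(vv_b.get(k, 0) > vv_a.get(k, 0) for k in vv_b)
    if a_ahead and b_ahead:
        return "conflict"   # concurrent edits: a human or policy must merge
    if a_ahead:
        return "a_newer"
    if b_ahead:
        return "b_newer"
    return "equal"

# Replicas A and B both edited since their last common state:
print(compare({"A": 2, "B": 1}, {"A": 1, "B": 2}))  # conflict
print(compare({"A": 2, "B": 1}, {"A": 1, "B": 1}))  # a_newer
```

Last-writer-wins timestamps are simpler but throw away exactly the information a user-managed sandbox copy exists to preserve.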

Lego My Servers: Servers are too complicated to set up and to run. Future servers will come in "Lego" building block format. Run out of disk space on your email server? Plug another email server "brick" next to your first, and the problem is solved. Want redundancy? Buy another two bricks, put them elsewhere, point them to the first pair, and you will have a hot-fail-over system. The bricks will be very specialized: email server, web server, directory server, file server, system admin servers, data servers, etc.

Of course strong security, including strong digital identity, is required in server bricks.
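The hot-fail-over behavior described above can be sketched in a few lines. This is a hypothetical illustration of the "brick" idea, not any real product's API; the class, roles, and timeout are all invented:

```python
# A standby brick records the last heartbeat it received from its
# primary peer; if the primary goes silent past a timeout, the standby
# promotes itself. Real systems add fencing, quorum, and replication,
# but the plug-in-another-brick model reduces to this loop.
import time

class Brick:
    def __init__(self, name, role):
        self.name = name
        self.role = role                      # "primary" or "standby"
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        # Called whenever the primary peer is heard from.
        self.last_heartbeat = time.monotonic()

    def check_failover(self, timeout=5.0):
        silent_for = time.monotonic() - self.last_heartbeat
        if self.role == "standby" and silent_for > timeout:
            self.role = "primary"             # take over the service
        return self.role
```

The point of the brick metaphor is that this logic ships inside the box, so the owner never has to configure it.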

Evolutionary Revolutions: Respect Legacy. Systems that do not respect and work with legacy systems will fail (unless they perform a function that heretofore did not exist). That's why, also, the next generation of software will be meta-applications.


WHAT SHOULD HAPPEN (Normative Statements)

Here are a couple of things I believe should happen, but might not, because the short-term commercial drivers might not be there to make them happen ...

Software for the Long Haul: All too often, we design software without thinking about the long haul. For example, the 4-byte IP address space (which has long since run out of room) and the 32-bit time integer in Unix (which will overflow in 2038). See http://blog.onghome.com/2005/06/long-lived-software.htm.
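The Unix case is a one-line calculation: a signed 32-bit time_t counts seconds from 1970-01-01 UTC, so it runs out at 2^31 - 1 seconds:

```python
# Compute the exact moment a signed 32-bit time_t overflows.
from datetime import datetime, timezone

max_time_t = 2**31 - 1            # largest value a signed 32-bit int holds
overflow = datetime.fromtimestamp(max_time_t, tz=timezone.utc)
print(overflow.isoformat())       # 2038-01-19T03:14:07+00:00
```

One second later, the counter wraps to a negative number -- which naive code interprets as a date in 1901.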

Basic Software Engineering: Professional software engineering means that we hold ourselves to the highest engineering standards. Basic issues like designing for testability, internationalization, code coverage, error handling, and UI usability need to be part of what we do day-to-day in software engineering -- otherwise, we should just call it hacking.
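To make "designing for testability" concrete, here is one small sketch of the idea (the function and parameters are invented for illustration): inject dependencies like the clock instead of reaching for them globally, so tests can control them.

```python
# A token-expiry check that takes its clock as a parameter. Production
# code uses the real clock by default; tests pass a fake one, so no
# test ever has to sleep or depend on wall-clock time.
import time

def is_expired(issued_at, ttl_seconds, now=time.time):
    """Return True if a token issued at `issued_at` has outlived its TTL."""
    return now() - issued_at > ttl_seconds

# Production call: is_expired(token_issued_at, 3600)
# Test calls with an injected fake clock:
assert is_expired(issued_at=0, ttl_seconds=60, now=lambda: 120)
assert not is_expired(issued_at=0, ttl_seconds=60, now=lambda: 30)
```

The same injection pattern applies to filesystems, networks, and random number generators -- anywhere hidden global state would otherwise make behavior unrepeatable.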

[This article was initially written in December 2005.]

If a Tree Falls …

Johannes' post on Phil Windley's piece, which puts its finger on why defining "Digital Identity" is hard, asserts that an identity is more than a set of claims.

If there is an entity, and there are no claims made about it, does it still have an identity?

If a tree falls in the forest, and no one hears it, does it make a sound?

Ah, semantics!

From a materialistic perspective, define "sound" and you've answered the second question. Define "identity" and you've answered the first.

This is why Dave and Timothy (and I, to some extent) are on a rant about ontology and semantics. If you don't get definitions right, it's hard to have lucid thoughts, let alone unambiguous communication.

"Do identical twins have different identities even if we can't tell them apart?" Define what you mean by "identity" and I'll answer your question.

We can't even answer basic questions about the "things" we are talking about because we don't have common definitions of them. Convinced yet about the importance of a well defined ontology for the digital identity community?