Identifiers are Negative Authenticators

I was just responding to a friend's question about using biometrics, and I realized that one good way of looking at certain class of identifiers is as negative authenticators...

In a separate blog entry ( http://blog.onghome.com/2003/12/problems-with-biometrics.htm ), I pontificate on why biometrics should not be used as authenticators.

At best, biometrics can be used as identifiers (negative authenticators) -- if you don't have the biometric, you won't be authenticated. But just because you have the biometric does not mean that you're authenticated... something like how social security numbers should be treated.
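A minimal sketch of this "negative authenticator" idea (all names and the toy enrollment data here are hypothetical, purely for illustration): a failed biometric match rejects outright, while a successful match merely permits the real authentication step, which still requires proof of a secret.

```python
import hashlib

# Toy enrolled data: a stored fingerprint template and a password hash.
ENROLLED = {
    "alice": {
        "fingerprint": "template-1234",  # toy biometric template
        "pw_hash": hashlib.sha256(b"s3cret").hexdigest(),
    }
}

def check_fingerprint(user, fingerprint):
    """Negative authenticator: can reject, but is never sufficient to accept."""
    rec = ENROLLED.get(user)
    return rec is not None and rec["fingerprint"] == fingerprint

def second_factor_ok(user, password):
    """The actual authenticator: proof of a secret."""
    rec = ENROLLED.get(user)
    return (rec is not None and
            hashlib.sha256(password.encode()).hexdigest() == rec["pw_hash"])

def authenticate(user, fingerprint, password):
    # A failed biometric match means "definitely not you" -- reject early.
    if not check_fingerprint(user, fingerprint):
        return False
    # A successful match only *permits* the real authentication step.
    return second_factor_ok(user, password)
```

The point of the structure: the biometric appears only in the early-reject branch, never as the sole condition for acceptance.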

Perhaps we can label this class of identifiers Private Identifiers ... identifiers that you should try to keep as private as possible, but should expect that some group of people would have them. Private identifiers (your social security number), compared to public identifiers (your name), are expected to be more confidential... But I guess it is a matter of degree of public-ness we are talking about. A 100% private identifier is a secret that is never shared -- and thus, is pretty useless for identifying an entity.

Access Agents

Access agents, which are a form of personal directories, are required to solve multiple problems in digital identity. Access agents should perform the user-centric, end-point management of user-id/password pairs, personal private keys, OTP (one-time password) seeds, OpenID tokens, etc. -- all the credentials an end-user possesses (and is expected to manage). Access agents should follow end-users around to all the end-points where humans come into contact with cyberspace. (I like to think of end-points as the 4P's -- PCs, PDAs, phones, and portals.)

There are multiple reasons for end-point access agents:

1. Simplification of the user's world
2. Migration to multi-factor authentication
3. Integration

But the bottom line is control. Control for the end-user, in that he/she can finally stop worrying about dozens of access codes. And with better control comes the possibility of increased security, which in turn means control for the enterprise: better security and more auditability. (Yes, the access agent can act as big brother for the enterprise.)
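The kind of record an access agent might manage can be sketched as a simple user-controlled vault, keyed by end-point and credential kind. The class and field names below are hypothetical, invented for illustration; a real agent would of course encrypt the stored secrets.

```python
from dataclasses import dataclass

@dataclass
class Credential:
    endpoint: str   # where the credential is used, e.g. "mail.example.com"
    kind: str       # "password" | "private_key" | "otp_seed" | "openid_token"
    secret: str     # the material the agent guards on the user's behalf

class AccessAgent:
    """User-centric store: one place the end-user controls."""

    def __init__(self):
        self._vault = {}

    def enroll(self, cred: Credential):
        # The user adds each credential once; the agent handles it thereafter.
        self._vault[(cred.endpoint, cred.kind)] = cred

    def credential_for(self, endpoint: str, kind: str = "password"):
        # Called at the end-point (PC, PDA, phone, portal) on the user's behalf.
        return self._vault.get((endpoint, kind))

    def audit(self):
        # Enterprise-side visibility: which end-points have credentials on file.
        return sorted({ep for ep, _ in self._vault})
```

The `audit` method is the "big brother" angle from the paragraph above: the same central vault that simplifies the user's world also gives the enterprise a point of oversight.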

Dave Kearns has written a bunch on the need for personal directories. He sees most of the work on identity management, including OpenID and InfoCard, leading to a logical conclusion - the personal directory system.

Links to Dave's Articles
o May 2002, The need for a personal directory (http://www.networkworld.com/newsletters/dir/2002/01331333.html)
o January 2007, Someone else wants a personal directory! (http://vquill.com/labels/personal%20directory.html)

The Turing Event

A few (10-15) years from now, I will get a phone call from my friend's assistant suggesting that, since we have not touched base in a while, we should meet up over dinner. I think it's a good idea, pull out my PDA/calendar, and start working out a meeting time and place with his assistant. In the course of our interaction, we joke about the kinds of food my friend detests and make casual chatter about the weather. After I hang up the phone, I realize that I have no idea whether I just talked to a human being or a machine.

Alan Turing proposed that the way we measure machine intelligence is by comparing an interaction with a machine to our interaction with humans. And if we can't tell them apart, then the machine can be labelled as "intelligent". (This test is known as the Turing Test.)

The first time in history when society can't tell the difference between machines and humans is what I refer to as the Turing Event.

Think about the impact of machines in a post Turing Event world... think seriously, because most of us will still be alive and kicking when we get there. How will economies be impacted? Which occupations will be considered "suitable" for humans, and which not? How much social unrest will there be?

Think about what identity would mean in that world. Do our assistants assume our identities, or do we give them their own? What are the questions we should be asking today that we're not asking?

I didn't write this article to give answers; just to ask questions.

What do you think?

P.S. Mitch Kapor has a bet with Ray Kurzweil that this will not happen by 2029.

At the Core of Authentication

Authentication is the process of an entity proving its identity to a system, typically to get access to certain resources managed by the system.

The industry typically talks about authentication in terms of:
     o  what you know
     o  what you have, and,
     o  who you are
and, occasionally,
     o  how you do something
is also included.

In this article, I want to get to the real core operation of authentication, and make the case, again, for focusing on asymmetric key exchanges for strong authentication. If you look at what constitutes authentication, it is as simple as proof of identity based on information exchange.

"What you know" is, of course, information. But "what you have", "who you are", and "how you do something" are also information, in the following senses:

     o  "What you have" is information stored in an object (e.g. a smart card), as opposed to your brain.

     o  "Who you are" is information stored somewhere in/on your body (e.g. your thumb, your retina), as opposed to the neurons in your head.

     o  "How you do something" is a reflection of a learned or innate pattern in your muscular system (e.g. your typing cadence). It is less direct, but authentication in this form is just the computer extracting your body's parameters for the action you are taking.

Conclusion #1: Authentication can be reduced to using "the information you have" to identify yourself to a system.

(BTW, "you" could be an entity other than a human.)


There are two fundamental ways you can use information to uniquely prove an entity's identity to a system:
     o  Shared secrets
     o  Asymmetric key exchange

The bulk of authentication systems use shared secrets: from passwords (shared between the system and your brain), to thumbprint readers (the system and your thumb), to most card-key systems (the system and the access card). The biggest problem with shared secrets is that the identifying secret needs to be exchanged during the authentication process. This means it is vulnerable to attacks that can sniff out the shared secret during the exchange.

The advantage of asymmetric key exchange (i.e. PKI) is that it is the only way we know to establish the identity of an entity (i.e. that the entity holds a certain unique secret, a private key in this case) without exchanging the secret. The identifying secret never has to be exposed by the entity (see Physicalization).

Therefore...

Conclusion #2: The most secure form of authentication has to utilize asymmetric key exchange.
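The "secret never leaves the entity" property can be sketched with a challenge-response exchange using toy textbook RSA. The tiny primes make this utterly insecure and purely illustrative, but the shape is the real one: the verifier sends a fresh challenge, the entity signs it with its private key, and the verifier checks the signature using only the public key.

```python
import hashlib
import secrets

# Toy textbook-RSA keypair (tiny primes -- NOT secure, illustration only).
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+ mod inverse)

def sign(message: bytes) -> int:
    """Entity side: uses the private key d; the key itself is never sent."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Verifier side: needs only the public key (e, n)."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

# Challenge-response: a fresh nonce prevents replay of old signatures.
challenge = secrets.token_bytes(16)
response = sign(challenge)
assert verify(challenge, response)
```

Contrast this with a shared-secret scheme: there, the password (or a value derived from it) must cross the wire on every authentication, so a sniffer on the exchange learns something reusable. Here, only the signature of a one-time challenge is exposed.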

Anonymity – A Binary Switch?

There's been a slew of postings on the topic of anonymity, so I thought I'd jot down a few of my thoughts too... and collect the links here.

Key Points:
  1. Norlin’s Maxim: Your personal data is shifting from private to public.
  2. What becomes public stays public.
  3. If the default for digital identities is anonymity, it will give the user more control.
  4. The default in most systems is not anonymity.
  5. Anonymity and strong identity should be orthogonal issues, and technically they can be.
  6. Anonymity is not typically supported in most systems, so the stronger your identity, the less anonymous it is.

Binary Switch? Eric Norlin critiques Dave Weinberger: Eric believes that there is a spectrum of choices from anonymous, through a range of pseudonymity, to unanonymous identities. Eric asserts that "... online identity is *not* a binary issue." I wonder. If you believe in "Norlin’s Maxim", then so long as there is some small piece of information that links a pseudonym to the user, sooner or later a pseudonymous identity becomes an unanonymous identity. I believe that anonymity is a binary decision. If your digital identity is not fully anonymous, then it is (or soon will be) unanonymous.
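The "small piece of information that links a pseudonym to the user" argument can be made concrete with a toy re-identification join. All the data and field names here are invented; the point is only that one shared quasi-identifier is enough to collapse a pseudonym.

```python
# Hypothetical pseudonymous data that leaks a quasi-identifier (zip + birthdate).
pseudonymous_posts = [
    {"pseudonym": "nightowl42", "zip": "94301", "birth": "1970-05-01"},
]

# Hypothetical public records carrying the same quasi-identifier.
public_records = [
    {"name": "Pat Example", "zip": "94301", "birth": "1970-05-01"},
    {"name": "Sam Sample",  "zip": "10001", "birth": "1981-11-30"},
]

def link(posts, records):
    """Return (pseudonym, real name) pairs matched on the shared attributes."""
    out = []
    for p in posts:
        for r in records:
            if (p["zip"], p["birth"]) == (r["zip"], r["birth"]):
                out.append((p["pseudonym"], r["name"]))
    return out

print(link(pseudonymous_posts, public_records))
# → [('nightowl42', 'Pat Example')]
```

Once such a join succeeds even once, the pseudonym is unanonymous forever after, which is why the switch behaves as binary.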

Resources:
  1. Ben Laurie, Anonymity is the Substrate (http://www.links.org/?p=123). August 24, 2006.
  2. Akma Adam, Plus Ça Change (http://akma.disseminary.org/archives/2006/08/plus_a_change.html). August 20, 2006.
  3. David Weinberger, Anonymity as the default, and why digital ID should be a solution, not a platform (http://www.hyperorg.com/blogger/mtarchive/anonymity_as_the_default_and_w.html). August 16, 2006.
  4. Dave Kearns, Yet more on anonymity (http://vquill.com/2006/08/yet-more-on-anonymity.html). August 15, 2006.
  5. Eric Norlin, Should the online world reflect the "real" world? (http://blogs.zdnet.com/digitalID/?p=61). August 15, 2006.
  6. Bavo De Ridder, Do you really think you are anonymous? (http://bderidder.wordpress.com/2006/08/15/do-you-really-think-you-are-anonymous/). August 15, 2006.
  7. Kim Cameron, Dave Kearns takes on anonymity (http://www.identityblog.com/?p=530). August 14, 2006.
  8. Dave Kearns, More on Privacy vs Anonymity (http://vquill.com/2006/08/more-on-privacy-vs-anonymity.html). August 14, 2006.
  9. Tom Maddox, Ben Laurie on Anonymity (http://blog.opinity.com/2006/08/ben_laurie_on_a.html). August 14, 2006.
  10. Dave Kearns, Anonymity, identity - and privacy (http://www.vquill.com/2006/08/anonymity-identity-and-privacy.html). August 14, 2006.
  11. Kim Cameron, Norlin’s Maxim (http://www.identityblog.com/?p=525). August 12, 2006.
  12. William Beem, Security by Obscurity (http://william.beem.us/2006/08/security_by_obscurity.html). August 10, 2006.
  13. Eric Norlin, Anonymity and identity (http://blogs.zdnet.com/digitalID/?p=60). August 10, 2006.
  14. David Weinberger, Transparency and Shadows (http://www.strumpette.com/archives/162-Cluetrain-author-dispels-absolute-transparency-myth.html). August 8, 2006.
  15. P.T. Ong, Strong Identities Can Be Anonymous (http://blog.onghome.com/2005/03/strong-identities-can-be-anonymous.htm). March 11, 2005.
  16. P.T. Ong, Support for Anonymity (http://blog.onghome.com/2005/01/support-for-anonymity.htm). January 30, 2005.

OpenSSO Available

Noted. I was browsing through Pat Patterson's blog and noticed his posting on the release of OpenSSO. OpenSSO source code, released on August 17, 2006, is now available at https://opensso.dev.java.net/public/use/.

The cost of deploying backend-based SSO systems has traditionally not been in the cost of the software itself. Netegrity (now CA) and Oblix (now Oracle) both had technology similar to OpenSSO. The biggest challenge in rolling out these systems is that you had to integrate them with the backend servers, resulting in very slow deployment projects. It also meant that most companies couldn't really achieve Single Sign-On. Hence, the term Reduced Sign-On (RSO) was born.

I'm unclear as to how OpenSSO will affect the industry. What do you think?

Recent Articles of Interest

Noted. Haven't had much time to write my own thoughts ... so here are a few of the more interesting articles I've read over the last few months:

The identity silo paradox. Eric Norlin points out the reality that the organizations that have the large identity silos of internet users have very little business incentive to share that information -- i.e. to be identity providers. Bavo De Ridder responds in Is there an identity silo paradox?.

The Long View of Identity. Andy Oram gives a good overview of the major issues surrounding the issue of identity -- I tried to point out the key issues in a mushier way in Painting the Future: Panopticons and Choice.

Top 5 Identity Fallacies [#1] [#2] [#3] [#4] [#5]. Phil Becker writes eloquently about the misunderstandings of options we have when we build digital systems.

Credit Bureau as Identity Provider? Pete Rowley talks about credit bureaus as future identity providers. Similar to my thoughts about how credit card companies could serve a similar role.

Much Ado About Nothing?

Been busy. Six months without a post ... thought I'd better either shut the blog down, or start posting again. I decided in favor of the latter. And it just so happens that there is interesting stuff to post about...

"51% oppose NSA database" was USA Today's headlines on Monday (at least it was on the copy I picked up in Hong Kong). Interesting. So I read through all the related articles.

The long and short of it is that the NSA has been collecting phone call records directly from most phone companies. Qwest, according to USA Today, was the only one that didn't release its customers' records. 51% of the 809 people USA Today polled were against the idea. (Not sure how -- I always like to know how a poll was conducted.) USA Today's editorial (written by Keith Simmons) agreed with the majority view.

I think we could get a little bit more practical about the problem, and move away from the privacy debate -- which typically degenerates to a religious debate based on one's normative beliefs on the relationship between the individual and society. Huh? :-) Right.

Why collect the data? To catch the bad guys, right?

Well, if you assume that the bad guys are stupid, they will register phones under their real names and use their personal credit cards to pay the bills. Everything traceable.

However, if the bad guys are a bit smarter, they would go out to the nearest Best Buy (Dixons if they're in the UK) and get a pre-paid phone, using cash... buy lots of pre-paid vouchers (again, with cash)... and voila! anonymous calling on a mobile phone. This might be a bit more expensive than regular phones, but a few bucks more on the phone bill is not a major consideration for these bad guys. And sure, if they are dumb enough to top up their phone with a personal credit card, or set up their phone via an ISP which can link the connection to them, then they might be hosed.

So, assuming a modicum of smarts in the bad guys, what is the reason for amassing personal phone records? I can't think of one. Can you?

Postscript: Here's one suggested by a friend: if you have a phone number linked to a well-known bad guy, the pattern of numbers that the well-known phone calls might be useful information, even if there are anonymous phones involved. Well... serves them right for calling anonymous phones with well-known phones!

What Must Happen

The future of digital identity is set in the context of the evolution of digital systems. This article might be a bit off topic (in that it is not specifically about digital identity), but I think it's important for us to consider the bigger context of the evolution of digital systems.

WHAT MUST HAPPEN

When trying to figure out what technology to build, answering the question "what must happen" is a necessity. Not what would be good to happen, but what must happen...

Software that Runs Software: Software to date has been built for human use. But because of the sheer number of systems we are exposed to, the next generation of software needs to be software that runs software -- for humans. Agents, or meta-applications, if you will.

Dominant Systems Define Standards: All these attempts to define standards just result in a mishmash of "standards". Just about the only way to create widely adopted protocols is to create a dominant system, and then open it up. For example, Skype has a tremendous opportunity to set an industrial standard, if they open up fast enough and flexibly enough.

Sandboxes vs Always-On: (i.e. P2P vs Client/Server). Because the physical still matters, and ownership still matters, sandboxes are still needed, and will always be needed. Even if it is possible to be always on the network, the user might not choose to refer to a network resource, but rather have a copy of it he/she manages. For example, instead of pointing to a web page on a website owned by someone else, the user might want a copy kept in his/her own blog or wiki -- just in case the owner changes it, or stops exporting it.

ASP systems (e.g. Salesforce.com) ultimately will reach full functionality only if they provide P2P facilities.

Synchronization Must Be Done Right: A corollary to the sandboxing trend is that synchronization as a science and engineering technique must be done right.

Lego My Servers: Servers are too complicated to set up and to run. Future servers will come in "Lego" building-block format. Run out of disk space on your email server? Plug another email server "brick" in next to your first, and the problem is solved. Want redundancy? Buy another two bricks, put them elsewhere, point them to the first pair, and you will have a hot-fail-over system. The bricks will be very specialized: email servers, web servers, directory servers, file servers, system admin servers, data servers, etc.

Of course strong security, including strong digital identity, is required in server bricks.

Evolutionary Revolutions: Respect legacy. Systems that do not respect and work with legacy systems will fail (unless they perform a function that heretofore did not exist). That's why, also, the next generation of software will be meta-applications.


WHAT SHOULD HAPPEN (Normative Statements)

Here are a couple of things I believe should happen, but might not because short term commercial drivers might not be there to make them happen ...

Software for the Long Haul: All too often, we design software without thinking about the long haul. For example, the 4-byte IPv4 address space (which has long since run out of room) and the 32-bit time integer in Unix (which will overflow in 2038). See http://blog.onghome.com/2005/06/long-lived-software.htm.
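The Unix example is easy to check for yourself: a signed 32-bit time_t counts seconds from the 1970 epoch, so it runs out at 2^31 - 1 seconds.

```python
from datetime import datetime, timezone

# The last second a signed 32-bit time_t can represent.
last_second = 2**31 - 1

# Convert it to a calendar date: the moment the counter overflows.
print(datetime.fromtimestamp(last_second, tz=timezone.utc))
# → 2038-01-19 03:14:07+00:00
```

One second later the counter wraps to a negative number, which naive code interprets as a date in 1901.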

Basic Software Engineering: Professional software engineering means that we hold ourselves up to the highest engineering standards. Basic issues like designing for testability, internationalization, code coverage, error handling, and UI usability need to be part of what we do day-to-day in software engineering -- otherwise, we should just call it hacking.

[This article was initially written in December 2005.]

If a Tree Falls …

Johannes' post (on how Phil Windley puts his finger on why defining "Digital Identity" is hard) asserts that an identity is more than a set of claims.

If there is an entity, and there are no claims made about it, does it still have an identity?

If a tree falls in the forest, and no one hears it, does it make a sound?

Ah, semantics!

From a materialistic perspective, define "sound" and you've answered the second question. Define "identity" and you've answered the first.

This is why Dave and Timothy (and I, to some extent) are on a rant about ontology and semantics. If you don't get definitions right, it's hard to have lucid thoughts, let alone unambiguous communications.

"Do identical twins have different identities even if we can't tell them apart?" Define what you mean by "identity" and I'll answer your question.

We can't even answer basic questions about the "things" we are talking about because we don't have common definitions of them. Convinced yet about the importance of a well defined ontology for the digital identity community?
