OAuth support: a summary of our work


This post is by Hubert from C'est la Vie...







A quick summary of the OAuth support we’ve recently added in a couple of key projects.

If you’re into RESTful web services and OAuth, we have implemented an extension to the Jersey project (the JAX-RS Reference Implementation). This extension allows for the signing and/or the verification of OAuth 1.0 based requests. It is based on a digital signature library accessed by server and client filters. Detailed information can be found here.
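Under the hood, OAuth 1.0 signing boils down to computing an HMAC-SHA1 over a "signature base string" built from the request. Here is a stdlib-only sketch of that computation, using the well-known example values from the OAuth Core 1.0 specification (the Jersey filters do all of this for you; this is just to show what is being signed):

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.net.URLEncoder;
import java.util.Base64;

public class OAuthSignatureSketch {
    // Percent-encode per RFC 3986 (OAuth requires '%20' rather than '+', etc.)
    static String enc(String s) throws Exception {
        return URLEncoder.encode(s, "UTF-8")
                .replace("+", "%20").replace("*", "%2A").replace("%7E", "~");
    }

    public static void main(String[] args) throws Exception {
        // Example request from the OAuth Core 1.0 spec; parameters must be
        // sorted by name before signing.
        String method = "GET";
        String url = "http://photos.example.net/photos";
        String sortedParams = "file=vacation.jpg&oauth_consumer_key=dpf43f3p2l4k3l03"
                + "&oauth_nonce=kllo9940pd9333jh&oauth_signature_method=HMAC-SHA1"
                + "&oauth_timestamp=1191242096&oauth_token=nnch734d00sl2jdk"
                + "&oauth_version=1.0&size=original";

        // Signature base string: METHOD & enc(url) & enc(sorted params)
        String baseString = method + "&" + enc(url) + "&" + enc(sortedParams);

        // Signing key: enc(consumer secret) & enc(token secret)
        String key = enc("kd94hf93k423kf44") + "&" + enc("pfkkdhi9sl3r4s00");
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(key.getBytes("UTF-8"), "HmacSHA1"));

        // Base64 of the 20-byte HMAC-SHA1 digest is the oauth_signature value
        System.out.println(Base64.getEncoder()
                .encodeToString(mac.doFinal(baseString.getBytes("UTF-8"))));
    }
}
```

The server-side filter recomputes the same base string from the incoming request and compares signatures, which is why any mismatch in parameter sorting or percent-encoding makes verification fail.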

For people interested in a more integrated solution, we have also implemented a module for the open source project OpenSSO to support OAuth as an authentication module. This module handles the Service Provider side, that is: token issuance, token & message verification, as well as SSO session handling (to bridge with other protocols). This module is, for now, an extension to OpenSSO. In other words, it is not yet part of the core OpenSSO and should be considered more experimental. Besides the Javadoc, a good source of information on this can be found in this article. There’s also Pat’s demo at Community One this year.

If you’re so inclined, give it a try – any feedback is more than welcome!

Catalyst Federation Interop


This post is by Mike Jones from Mike Jones: self-issued






I’m writing to thank the Burton Group for sponsoring the federation interop demonstration at the 2009 Catalyst Conference in San Diego. As you can see from the logos, they attracted an impressive set of interop participants. It was great working with the knowledgeable and enthusiastic colleagues from other companies to assure that our products will work together for our customers.

Catalyst North America 2009 Interop Banner

Microsoft demonstrated SAML 2.0 interoperation using our forthcoming Active Directory Federation Services 2.0 product (no, it’s not named “Geneva” Server anymore). We federated both to and from numerous other implementations. For instance, those attending in person got to watch yours truly demonstrate using AD FS 2.0 to log into SalesForce.com and WebEx, among other scenarios.

But why write about this now, one might ask? Isn’t the interop done? Not necessarily! In fact, one of the cool things about online interops is that the participants can continue testing well after “the event” is over. For instance, we’ve done some WS-Federation testing with participants since Catalyst, and have invited participants to re-test with a more recent drop of our server bits if they’d like to.

Finally, I’d be remiss if I didn’t thank the Eternal Optimist herself for doing the work to enable the Catalyst interop to be hosted on the OSIS wiki. Doing the interop online with public endpoint information helped the work go as smoothly as possible.

DIY Security for the Utterly Paranoid


This post is by Pamela from Adventures of an Eternal Optimist






I talked to several people who were somewhat disturbed about my last blog post.  Surely it can’t be that easy?

The potential exists – and I think it is worthwhile to ask why. Most people have been taught to guard their passwords, but have been carefully instructed to feel no responsibility for the other ways in which an attacker could access their account. Why is it we can educate about password complexity and reuse, but don’t want to explain under what circumstances a “personal identification” answer might be used? Why is it we will force a user to change their password every three months, but the email address that would be used in a password recovery effort is never tested, and security questions are never refreshed or reinforced? In the real world, we have embraced the concept of a “fire drill” and advise people to learn alternate exit routes in case the elevators are out of order. Yet in the online world, we treat it as crazy and unthinkable to advise those users who happen to be of the more concerned persuasion to familiarize themselves with, and verify the operation of, the page behind their “forgot my password” links.

If you are someone who worries about being hacked, and if you are willing to take a little bit of time and energy to at least understand the risk you might be facing, my advice to you is:  Go forth and recover.

Go ahead.  Recover all of your accounts.  You probably needed to rotate those passwords anyway.  Find those “forgot password” links and click ‘em. Chances are, you will be able to reset your password in an automated fashion,  either by answering a pre-specified question, or by getting a link sent to an email account (sometimes, both approaches are combined).    If you are asked a question, is the answer guessable?  Is it searchable?  Is it short? Is it a single dictionary word?   Can you control the guessability of the answer, or is it a hard-coded format such as a postal code or a birthdate?   If you are emailed a link, follow the chain to your email provider and recover your password there too.   Is it more pre-specified questions?  Are they the same questions? Were you required to click on a link sent to yet another email address?  If so, follow the chain again.  Rinse and repeat.  This is the same trail that a hacker would follow – often they find something you’ve forgotten, something out of date, an expired account or a typo that you never would guess could end up in a compromise of your identity.  Password recovery mechanisms were used to compromise Sarah Palin’s email account, and also used to steal corporate data from Twitter.   If you can satisfy yourself that the password recovery loop is closed, that your answers are not guessable, that you haven’t specified incorrect, out-of-date, or non-existent email addresses, and that the services you use don’t use unsafe mechanisms, you will be safer.

Don’t believe me?  Check out the techniques this guy used to compromise the identity of a mere acquaintance.  He gained access to supposedly “secure” accounts whose password recovery mechanisms depended on grossly guessable data.

Should you have to do this?  No.  Not according to almost anyone in this business.  Are you expected to do this?  Of course not.  How many people actually memorize an alternate exit route from every hotel room they ever stay in?  Only the ultra paranoid, I am sure.  Still, if you care, if you are motivated,  and if you want to know what to do, perhaps this can be a starting point.

SXSW


This post is by =andy.dale from The Tao of XDI






If you have a chance, check out this proposed session for SXSW: http://bit.ly/vuPu5. Have you noticed that when you search the internet you probably don't see results from the stuff that you pay for (subscriptions, material available through your local library, etc.)? This panel will discuss how we could fix that. If you think that would be useful, go give it the thumbs up.

Deploying the OpenID2.0 Extension for OpenSSO


This post is by Hubert from C'est la Vie...







OpenSSO acts as an authentication hub and as such supports many different modules. We recently upgraded one of them, OpenID, from OpenID 1.0 to OpenID 2.0. This module was written using both OpenSSO’s client library and the OpenID4Java library.

This blog post  describes the steps necessary to deploy the OpenID 2.0 extension module for OpenSSO. Once deployed, this module will add both OpenID 1.0 and 2.0 support for your IdP. In OpenID parlance, your OpenSSO deployment can act as an OP (OpenID Provider) and thus authenticate users for OpenID client applications.

In the example below, I will be using two different hostnames for clarity: openid.example.com to run the OpenID module and opensso.example.com to run OpenSSO and the OP. Remember to use, at a minimum, two separate instances of your application server (I use and recommend GlassFish v2.1): one for OpenID and the other for OpenSSO.

For the OpenID module

  1. Deploy the openid war file.

  2. Update three properties files with values taken from the opensso deployment. Those files are: AMConfig.properties, Provider.properties and ldap.properties (if the OP will be persisting the user’s OpenID attributes). Sample configuration files are described at the end of this document.

  3. Add the properties files to the classes directory (e.g. /Applications/NetBeans/glassfish-v2.1/domains/domain2/applications/j2ee-modules/openid/WEB-INF/classes/ on my Mac). Note that the domain MUST be restarted once those files have been added. Also, at the moment, these files have to be copied each time the openid war file is (re)deployed.

For OpenSSO

  1. Add openid.example.com in the list of realm aliases
    (Access Control tab → top realm → General tab)

  2. Add an OpenID attribute to OpenSSO’s user schema. To do so, insert the following attribute in the <user> section of amUser.xml:
    <AttributeSchema name="ldap.people.return.attribute" type="single" syntax="string"
    any="display" i18nKey="openid-attributes"></AttributeSchema>

    This file should be located in your opensso deployment directory under …/config/xml/ (or WEB-INF/classes/).

  3. Add this OpenID attribute to OpenSSO’s embedded ldap directory (I use Apache Directory Studio)

  4. Enable self-update of the OpenID attribute in the LDAP directory. To do so you have two choices (thanks to Rajeev for this tip):

    1. If you have a LDAP editor:

      1. connect to embedded config store directory (default : localhost:50389)

      2. log in as user cn=Directory Manager

      3. navigate to dn: ou=SelfWriteAttributes,ou=Policies,ou=default
        ,ou=OrganizationConfig,ou=1.0,ou=iPlanetAMPolicyService,
        ou=services,o=sunamhiddenrealmdelegationservicepermissions,
        ou=services,dc=opensso,dc=java,dc=net

      4. Edit the sunKeyValue attribute to add the openID attribute declared in OpenSSO’s schema:
        <Value>openid-attributes</Value>

    2. Using the addwriteperm.ldif (see content of this file at the end of this document):

      1. Edit the file addwriteperm.ldif and insert the OpenID attribute (openid-attributes).

      2. Execute the shell command:
        $DS/ldapmodify -h localhost -p 50389 -a -f ~/bin/addwriteperm.ldif
        -D "cn=Directory Manager" -w password

  5. You need to add LDAP attributes to the users data store. Log in to OpenSSO as admin, browse to the Data Store tab, select the appropriate store (or the users) and add openiduser to the LDAP User Object list and openid-attributes to the LDAP User Attributes list.

  6. Restart your app server.

Configuration Files

Below are sample configuration files (only key configuration values are being shown).

AMConfig.properties

  • com.iplanet.am.naming.url=

    http://opensso.example.com:8080/opensso/namingservice

  • com.sun.identity.agents.app.username=amAdmin
  • com.iplanet.am.service.password=changeme
  • com.iplanet.am.service.secret=
    AQIC1MSQKNB2HObD21Z8jsHOqPnCKCvL+ACy
  • am.encryption.pwd=mYqo9kXOHz4pju/dCDVGewVNcl9HsabR
  • com.iplanet.am.server.host=opensso.example.com
  • com.iplanet.am.server.port=8080
  • com.iplanet.am.services.deploymentDescriptor=/opensso
  • com.sun.identity.loginurl=

    http://opensso.example.com:8080/opensso/UI/Login

  • com.sun.identity.liberty.authnsvc.url=

    http://opensso.example.com:8080/opensso/Liberty/authnsvc

Provider.properties

  • openid.provider.service_url=

    http://openid.example.com:49723/openid/service

  • openid.provider.setup_url=

    http://openid.example.com:49723/openid/setup.jsf

  • openid.provider.local-auth-url=

    http://openid.example.com:49723/openid/authentication

  • openid.provider.login_url=

    http://opensso.example.com:8080/opensso/UI/Login?realm=openid

  • openid.provider.simple_registration=true
  • openid.provider.attribute_exchange=true
  • openid.provider.identity_pattern=

    http://openid.example.com:49723/openid/(.+)

  • openid.provider.principal_pattern=id=(.+),ou=user,dc=opensso,dc=java,dc=net
  • openid.provider.external_target=_blank
  • openid.provider.strict_protocol=false
  • openid.provider.am-profile-attributes=uid|uid,givenName|firstname,sn|lastname,cn|
    fullname,postalcode|postcode,c|country,mail|email
  • openid.provider.am-search-attribute=uid
  • openid.provider.attribute_types_map=uid|text,email|text,firstname|text,lastname|
    text,fullname|text,nickname|text,dob|date,gender|text,postcode|text,country|
    select,language|select,timezone|select
  • openid.provider.persistence.enabled=true
  • openid.provider.persistence.class.name=
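The am-profile-attributes and attribute_types_map values above use a comma-separated list of `name|mappedName` pairs. A hypothetical stdlib-only sketch of how such a value can be parsed (the module's actual parsing code may differ):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AttributeMapParser {
    // Parse "uid|uid,givenName|firstname,..." into an ordered
    // LDAP-attribute -> OpenID-attribute map.
    static Map<String, String> parse(String value) {
        Map<String, String> map = new LinkedHashMap<>();
        for (String pair : value.split(",")) {
            String[] kv = pair.trim().split("\\|", 2);
            if (kv.length == 2) {
                map.put(kv[0].trim(), kv[1].trim());
            }
        }
        return map;
    }

    public static void main(String[] args) {
        Map<String, String> m = parse(
            "uid|uid,givenName|firstname,sn|lastname,cn|fullname,"
            + "postalcode|postcode,c|country,mail|email");
        System.out.println(m.get("givenName")); // the mapped OpenID name
        System.out.println(m.size());           // number of pairs
    }
}
```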

ldap.properties

  • ldap.host=opensso.example.com
  • ldap.port=50389
  • ldap.bind.dn=cn=Directory Manager
  • ldap.bind.pwd=adminadmin
  • ldap.people.base=dc=opensso,dc=java,dc=net
  • ldap.people.return.attribute=openid-attributes
  • ldap.people.attribute.nodes=firstname,lastname,fullname,nickname,email,
    gender,dob,postcode,country,
    language,timezone
  • ldap.people.search.attribute=uid

addwriteperm.ldif

  • dn:
    ou=SelfWriteAttributes,ou=Policies,ou=default,ou=OrganizationConfig,ou=1.0,
    ou=iPlanetAMPolicyService,ou=services,o=sunamhiddenrealmdelegationservicepermissions,
    ou=services,dc=opensso,dc=java,dc=net
    changetype: modify
    replace: sunKeyValue
    sunKeyValue: xmlpolicy=<?xml version="1.0" encoding="UTF-8"?><Policy name="SelfWriteAttributes" referralPolicy="false" active="true" ><Rule name="user-read-rule"> <ServiceName name="sunAMDelegationService" /> <ResourceName name="sms://*dc=opensso,dc=java,dc=net/sunIdentityRepositoryService/1.0/application/*" /> <AttributeValuePair> <Attribute name="MODIFY" /> <Value>allow</Value> </AttributeValuePair> </Rule> <Subjects name="Subjects" description=""> <Subject name="delegation-subject" type="AuthenticatedUsers" includeType="inclusive"> </Subject> </Subjects> <Conditions name="AttrCondition" description=""> <Condition name="condition" type="UserSelfCheckCondition"> <AttributeValuePair><Attribute name="attributes"/><Value>sunIdentityServerDeviceStatus</Value><Value>telephonenumber</Value><Value>userpassword</Value><Value>givenname</Value><Value>mail</Value><Value>sn</Value><Value>cn</Value><Value>iplanet-am-user-password-reset-options</Value><Value>postaladdress</Value><Value>sunIdentityServerDeviceKeyValue</Value><Value>preferredlocale</Value><Value>description</Value><Value>iplanet-am-user-password-reset-question-answer</Value><Value>openid-attributes</Value> </AttributeValuePair> </Condition> </Conditions> </Policy>


Testing your deployment

To test your OpenID deployment you will need to have a web application that hands out OpenID identifiers as well as an OpenID client application (this is in addition to the OpenID extension and the OpenSSO instance described above). We also assume you have some users registered in the OpenSSO instance.

We’ve created a very simple application (OP.war) that will serve OpenID identifiers of the form:
http://your_hostname/OP/resources/user_name. Note that in its current form the identifier will point to an OP deployed at the following URL: http://openid.example.com:49723/openid/service
If your deployment URL differs, you’ll have to edit the (only) Java file and change that link (in two places) before re-compiling the war file.
In our example, we’ll deploy the OP in the same domain as the OpenSSO instance, at the URI http://opensso.example.com:8080/OP/.
A way to verify the OP is to visit a URI of the form http://opensso.example.com:8080/OP/resources/username, where username can be anything. You should see some text explaining what the OP is based on, but more importantly you can right-click on the page to view its HTML source. Note the OpenID metadata present in the HTML <head> section of the page.
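For reference, the discovery tags in the page’s <head> look something like this (a hypothetical sketch; the href values must match your actual OP service URL):

```html
<head>
  <!-- OpenID 2.0 discovery -->
  <link rel="openid2.provider" href="http://openid.example.com:49723/openid/service"/>
  <!-- OpenID 1.x discovery, for older Relying Parties -->
  <link rel="openid.server" href="http://openid.example.com:49723/openid/service"/>
</head>
```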

OpenID4Java (the library that was used to create the OpenID extension) offers a nice little OpenID client application (Consumer-servlet) that lets you test both OpenID 1.0 and OpenID 2.0 (with persistence of attributes).
In our example, we’ll deploy the OpenID client application in the same domain as the OpenID extension, at the URI http://openid.example.com:49723/consumer-servlet/.

Two scenarios can be tested:

OpenID 2.0 Authentication

This scenario demonstrates OpenID-based delegated authentication with an OpenSSO IdP.

  1. Visit the OpenID Service Provider (aka. Relying Party)

    http://openid.example.com:49723/consumer-servlet/

  2. In the (Sample 1) OpenID Username, enter the OpenID identifier:

    http://opensso.example.com:8080/OP/resources/username

    and click on Login

  3. You’re redirected to the OpenSSO login page. Log in with the credentials of a known user. Note that the user must correspond to the provided OpenID identifier. That mapping is determined by the pattern declared in the Provider.properties file (with the openid.provider.identity_pattern property).
  4. The next page is the OpenID verification (or consent) page.
    Click on trust.
  5. You’re now logged in to the Relying Party site.
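The identifier-to-user mapping in step 3 can be illustrated with the two patterns from Provider.properties. This is a hypothetical sketch of the idea, not the module's actual code: the identity_pattern captures the username from the claimed identifier, and that capture is substituted into the principal_pattern to locate the directory entry.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class IdentityPatternDemo {
    public static void main(String[] args) {
        // openid.provider.identity_pattern: captures the username part
        Pattern identity =
            Pattern.compile("http://openid.example.com:49723/openid/(.+)");
        // openid.provider.principal_pattern: where the capture lands in the DN
        String principalPattern = "id=(.+),ou=user,dc=opensso,dc=java,dc=net";

        Matcher m = identity.matcher(
            "http://openid.example.com:49723/openid/alice");
        if (m.matches()) {
            // Substitute the captured username into the principal pattern
            System.out.println(principalPattern.replace("(.+)", m.group(1)));
        }
    }
}
```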

OpenID 2.0 Authentication with Simple Registration Exchange

In addition to delegated authentication, this demonstrates the provisioning of attributes to the Relying Party.

  1. Before starting, close all browser windows (or clean cookies) to make sure you don’t have a live session at the IdP.
  2. Browse to the Relying Party at the following URL:
    http://openid.example.com:49723/consumer-servlet/
  3. In the Sample 2 box, enter the same OpenID identifier as above (username being anything you want): http://opensso.example.com:8080/OP/resources/username
  4. Select (or de-select) the attributes that will be provided when authentication takes place (make sure to leave at least one selected).
  5. You’re now redirected to OpenSSO for authentication. Enter the credentials of the corresponding user.
  6. In addition to the same consent page, notice the attributes that were requested. Fill in that information. You can choose to have those attributes remembered, in which case they will be persisted in the LDAP directory.
    Click on trust.
  7. You’re now back at the Relying Party site on a page that shows the query string and the attributes requested.

Make of it what you will


This post is by Kim Cameron from Kim Cameron's Identity Weblog






One of the people whose work has most influenced the world of security - a brilliant researcher who is also gifted with a sense of irony and humor - received this email and sent it on to a group of us.   He didn’t specify why he thought we would find it useful…  

At any rate, the content boggles the mind.  A joke?  Or a metaspam social engineering attack, intended to bilk jealous boyfriends and competitors? 

Or… could this kind of… virus actually be built and… sold?  

Subject: MMS PHONE INTERCEPTOR - THE ULTIMATE SPY SOLUTION FOR MOBILE PHONES AND THE GREAT PRODUCT FOR YOUR CUSTOMERS

MMS PHONE INTERCEPTOR - The ultimate surveillance solution will enable you to acquire the most valuable information from a mobile phone of a person of your interested.

Now all you will need to do in order to get total control over a NOKIA mobile (target) phone of a person of your interest is to send the special MMS to that target phone, which is generated by our unique MMS PHONE INTERCEPTOR LOADER. This way you can get very valuable and otherwise un-accessible information about a person of your interest very easily.

The example of use:

You will send the special MMS message containing our unique MMS PHONE INTERCEPTOR to a mobile phone of e.g. your girlfriend

In case your girlfriend will be using her (target) mobile phone, you will be provided by following unique functions:

  • In case your girlfriend will make an outgoing call or in case her (target) phone will receive an incoming call, you will get on your personal standard mobile phone an immediate SMS message about her call. This will give you a chance to listen to such call immediately on your standard mobile phone.
  • In case your girlfriend will send an outgoing SMS message from her (target) mobile phone or she will receive a SMS message then you will receive a copy of this message on your mobile phone immediately.
  • This target phone will give you a chance to listen to all sounds in its the surrounding area even in case the phone is switched off. Therefore you can hear very clearly every spoken word around the phone.
  • You will get a chance to find at any time the precise location of your girlfriend by GPS satellites.

All these functions may be activated / deactivated via simple SMS commands.

A target mobile phone will show no signs of use of these functions.

As a consequence of this your girlfriend can by no means find out that she is under your control.

In case your girlfriend will change her SIM card in her (target) phone for a new one, then after switch on of her (target) phone, your (source) phone will receive a SMS message about the change of the SIM card in her (target) phone and its new phone number.

These unique surveillance functions of target phones may be used to obtain very valuable and by no other means accessible information also from other subjects of your interest (managers, key employees, business partners, etc.), too.

I like the nostalgic sense of convenience and user-friendliness conjured up by this description.  Even better, it reminds me of the comic book ads that used to amuse me as a kid.  So I guess we can just forget all about this and go back to sleep, right?

Sincerely, John Hughes


This post is by Drummond Reed from Equals Drummond






Someday I’ll tell the rest of the story about why I’m posting the following link. But for right now, let me just recommend you read it.

I was never particularly close to John Hughes movies — though I did like The Breakfast Club — but that’s not the point of this story. It’s a story about John Hughes as a person, and the difference it made in one girl’s life.

After I read it — and almost started crying myself — I noticed it has a whopping 1151 comments.

Read it and you’ll know why.

We’ll Know When We Get There: Sincerely, John Hughes

Using JSP with Jersey JAX-RS Implementation


This post is by donpark from Don Park's Daily Habit






This post shows you some tips you’ll likely need to use JSP with Jersey in typical Java webapps.

Tested Conditions

While Jersey 1.1.1-ea or later is probably the only hard requirement for the tips to work, my development environment is listed here for your info. You are welcome to add to this rather meager basis for sanity.

  1. Jersey 1.1.1-ea
  2. Tomcat 6.0.20
  3. JDK 1.5
  4. OS X Leopard

Change JSP Base Template Path

Default base path for templates is the root of the webapp. So if my webapp is at “/…/webapps/myapp” then Viewable("/mypage", null) will map to “/…/webapps/myapp/mypage.jsp”

To change this, say to “WEB-INF/jsp” as it’s commonly done for security reasons, add following init-param to Jersey servlet/filter in web.xml:

<init-param>
<param-name>com.sun.jersey.config.property.JSPTemplatesBasePath</param-name>
<param-value>/WEB-INF/jsp</param-value>
</init-param>

Return Viewable as part of Response

It was not obvious to me (doh) where Viewable fits into Response when I have to return a Response instead of Viewable. It turns out, Viewable can be passed where message body entity is passed. Example:

return Response.ok(new Viewable("/mypage", model)).build();

Use “/*” as servlet-mapping for Jersey

The primitive servlet-mapping URI pattern scheme, which somehow survived many iterations of the servlet API, hits JAX-RS hard if the servlet-mapping is overly broad. Unfortunately, pretty RESTful URLs call for the servlet-mapping to be “/*” instead of something like “/jersey/*”, which breaks access to JSP files as well as static resources.

To work around this, you’ll have to use Jersey as a filter instead of a servlet and edit a regular-expression init-param value to punch passthrough holes in Jersey’s routing scheme. To enable this, replace the Jersey servlet entry in web.xml with something like this:

<filter>
 <filter-name>jersey</filter-name>
 <filter-class>com.sun.jersey.spi.container.servlet.ServletContainer</filter-class>
 <init-param>
  <param-name>com.sun.jersey.config.property.WebPageContentRegex</param-name>
  <param-value>/(images|js|styles|(WEB-INF/jsp))/.*</param-value>
 </init-param>
</filter>
<filter-mapping>
 <filter-name>jersey</filter-name>
 <url-pattern>/*</url-pattern>
</filter-mapping>
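To sanity-check which request paths that regex passes through to static content and JSPs (versus routing to Jersey resources), you can exercise the same pattern with java.util.regex. The sample paths here are hypothetical:

```java
import java.util.regex.Pattern;

public class PassthroughRegexCheck {
    public static void main(String[] args) {
        // Same value as the WebPageContentRegex init-param above
        Pattern passthrough =
            Pattern.compile("/(images|js|styles|(WEB-INF/jsp))/.*");

        // Matching paths bypass Jersey; everything else hits JAX-RS resources
        System.out.println(passthrough.matcher("/images/logo.png").matches());
        System.out.println(passthrough.matcher("/WEB-INF/jsp/mypage.jsp").matches());
        System.out.println(passthrough.matcher("/api/widgets/42").matches());
    }
}
```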

That’s all for now. Hope this post saved you some headaches.


Posted in General, Technical Tagged: java, jax-rs, jersey, jsp

Google Apps SSO and Authentication – Twitter Breach Creates Teachable Moment


This post is by ChrisCeppi from Arbitrage






The anatomy of the Twitter breach as detailed in TechCrunch speaks clearly to the lengths that a determined attacker will go to gain access to proprietary information. The specifics of the attack are complex and involve a number of ingenious inter-related actions on the part of the attacker who did ultimately gain access to a single user credential at Twitter. Although the methods used are complex and much of the post game discussion has focused on high level security risks associated with Google Apps, the fundamental architectural characteristic that makes this type of attack possible at all is the publicly available web form for collecting user names and passwords.

The attacker was able to manipulate all of the publicly available functionality that is set up to support web form authentication and gain access to sensitive information as a result. Exposing password resets, question based authentication, email notification – (i.e. all of the machinery required to support the public web form) to anyone with a browser is an invitation to serious mischief.

The Twitter breach is a teachable moment for companies adopting cloud applications. In simple terms – since the fundamental risk is having web authentication forms on the public Internet, it follows that the best place for authentication of enterprise users to occur is behind the firewall. Technology designed to make it simple for companies to leverage an existing secure authentication (that happens on a secure network) to provide access to cloud-based applications is the most secure, least intrusive, and most cost-effective way of addressing security risks like the ones that were exposed at Twitter.

In my five years and counting at Ping Identity we’ve built from zero to a customer roster of over 370 companies around the world, including 42 of the Fortune 100. To a large extent, the credit for Ping’s growth goes to the simple premise that there is an inevitable trend moving credential collection to the most secure location available. The recent news about Twitter and their struggle with authentication to Google Apps fits this pattern perfectly.

The implications of this trend for emerging cloud based Identity Provider solutions are an interesting related topic. Ultimately, credential collection can be done securely on the public Internet - but it requires well thought out layering of single sign on, monitoring, and strong forms of authentication. More on the best practices developing around Cloud based Identity Providers in a future post...


Firefox Extension Developer Tips


This post is by donpark from Don Park's Daily Habit






Just a couple of tips for Firefox extension developers, hard earned after many hours of head scratching. Not adhering to either tip will confuse Firefox, and your XPCOM component will fail to load.

XPCOM components get loaded before chromes are loaded.

[Update: The most common problem related to this is that a Components.utils.import call fails during launch with an NS_ERROR_FAILURE exception. To fix it, wait until the app-startup notification is received before importing JavaScript modules.]

This means anything defined in chrome.manifest won’t be available until the “app-startup” event is observed. Note that the resource URI scheme (“resource://”) introduced in Firefox 3 uses resource directives in chrome.manifest, which means you should defer Components.utils.import calls until “app-startup”.

XPCOM components implemented using JavaScript should be defined as a pure object, not a function.

So it should look something like this:

var MyServiceModule = {
  registerSelf: function(compMgr, fileSpec, location, type) {
    ..
  },
  ..
};

Posted in Technical Tagged: Components.utils.import, firefox, NS_ERROR_FAILURE, tips, XPCOM

If you try sometimes – you can get what you need


This post is by Kim Cameron from Kim Cameron's Identity Weblog






I’ll lose a few minutes less sleep each night worrying about Electronic Eternity - thanks to the serendipitous appearance of  John Markoff’s recent piece on Vanish in the New York Times Science section:

A group of computer scientists at the University of Washington has developed a way to make electronic messages “self destruct” after a certain period of time, like messages in sand lost to the surf. The researchers said they think the new software, called Vanish, which requires encrypting messages, will be needed more and more as personal and business information is stored not on personal computers, but on centralized machines, or servers. In the term of the moment this is called cloud computing, and the cloud consists of the data — including e-mail and Web-based documents and calendars — stored on numerous servers.

The idea of developing technology to make digital data disappear after a specified period of time is not new. A number of services that perform this function exist on the World Wide Web, and some electronic devices like FLASH memory chips have added this capability for protecting stored data by automatically erasing it after a specified period of time.

But the researchers said they had struck upon a unique approach that relies on “shattering” an encryption key that is held by neither party in an e-mail exchange but is widely scattered across a peer-to-peer file sharing system…

The pieces of the key, small numbers, tend to “erode” over time as they gradually fall out of use. To make keys erode, or timeout, Vanish takes advantage of the structure of a peer-to-peer file system. Such networks are based on millions of personal computers whose Internet addresses change as they come and go from the network. This would make it exceedingly difficult for an eavesdropper or spy to reassemble the pieces of the key because the key is never held in a single location.

The Vanish technology is applicable to more than just e-mail or other electronic messages. Tadayoshi Kohno, a University of Washington assistant professor who is one of Vanish’s designers, said Vanish makes it possible to control the “lifetime” of any type of data stored in the cloud, including information on Facebook, Google documents or blogs.

In addition to Mr. Kohno, the authors of the paper, “Vanish: Increasing Data Privacy with Self-Destructing Data,” include Roxana Geambasu, Amit A. Levy and Henry M. Levy.
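The key-shattering idea can be illustrated with its simplest all-or-nothing special case: split the key into n XOR shares, so the key is recoverable only while every share survives, and losing any one share destroys it. (Vanish itself uses a threshold secret-sharing scheme over a DHT, so it tolerates losing some shares; the sketch below is illustrative only, and Math.random is not cryptographically secure.)

```javascript
// Illustrative sketch, NOT the actual Vanish implementation: split a key
// (array of bytes) into n XOR shares. The original key is the XOR of all
// shares, so if any single share "erodes" away, the key is unrecoverable.
function splitKey(keyBytes, n) {
  const shares = [];
  // The last share is key XOR (all the random shares), so everything
  // cancels out on recovery.
  let running = keyBytes.slice();
  for (let i = 0; i < n - 1; i++) {
    // Math.random stands in for a real CSPRNG here.
    const r = keyBytes.map(() => Math.floor(Math.random() * 256));
    shares.push(r);
    running = running.map((b, j) => b ^ r[j]);
  }
  shares.push(running);
  return shares;
}

// XOR all shares back together to recover the key.
function recoverKey(shares) {
  return shares.reduce((acc, s) => acc.map((b, j) => b ^ s[j]));
}
```

In the real system, a (k, n) threshold scheme such as Shamir’s replaces the XOR split, so the key degrades gracefully: it stays recoverable until fewer than k shares remain in the peer-to-peer network, then vanishes.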

[More here]

Never a Small Step


This post is by Bob from Ceci n'est pas un Bob


Click here to view on the original site: Original Post




"The day", to my grandparents's generation, was December 7th. To my parents', it was November 22. To me, and to my generation, "the day" is today - July 20. I like to think that Neil Armstrong fumbled the first half of his famous quote because the false humility stuck in his throat. It was never a small step. It was always and only a giant leap, and everyone knew it. Armstrong knew it, because he and everyone he worked with signed up for a giant leap, and would never have settled for anything less. Kennedy knew it; he gave the call Armstrong answered. Kruschev knew it, behind all his bluster. And I knew it, and so did all my fourth-grade friends on Robin Hill drive in Williamsville, New York. That leap defined my generation and set us on our path. The Beatles and the race to the moon were the Continue reading "Never a Small Step"

Accountability


This post is by =andy.dale from The Tao of XDI


Click here to view on the original site: Original Post




I have written about reputation in the past and continue to evolve my thinking on the subject. I had an interesting interaction last weekend with Lillie Coney of EPIC while on a panel together at ALA. As a lawyer and a privacy expert, Lillie described the legal frameworks that exist both to protect and to circumvent our privacy, and the steps necessary to strengthen our privacy position in the law. I found myself pushing back on Lillie, arguing that reputation systems are just as important as legal frameworks as systems of accountability for privacy. If we had more time I think we might have had an interesting discussion on the subject.

Here's the summary I reached in my head: I do not deny that the legal system works to protect our privacy interests at certain levels. However, as an individual with a complaint against a large company I have very Continue reading "Accountability"

Remembering Frank


This post is by Bob from Ceci n'est pas un Bob


Click here to view on the original site: Original Post




Frank McCourt died today. Frank was famous for Angela's Ashes - his account of his "miserable Irish Catholic childhood". If you haven't read it, you should. He was a wonderful writer. Mostly by chance, I had the pleasure of spending a week on a bus with Frank. Karen & I signed up for a Photo Mentor Series trek to Ireland in 2003. The Ireland trip was unique among the Photo Mentor Series treks in that it had a local host who wasn't a photographer, and Frank was that host. I took the above picture of him in the pitch-dark interior of the Gallarus Oratory on the Dingle peninsula. The photo mentors - Barbara Kinney, Jill Enfield, and Joe McNally - were fantastic; Barbara had been Bill Clinton's White House staff photographer, Jill is a leading expert in hand-coloring photographs, and Joe shot the first digital cover for Continue reading "Remembering Frank"

My email address


This post is by Kim Cameron from Kim Cameron's Identity Weblog


Click here to view on the original site: Original Post




I’m writing this post in case your version of my email address has “windows.microsoft.com” in it.

The “windows.microsoft.com” domain is being repurposed for some higher good.  So going forward, please write to me with the usual address (same local-part) but with “@microsoft.com” instead of “@windows.microsoft.com”.

Electronic Eternity


This post is by Kim Cameron from Kim Cameron's Identity Weblog


Click here to view on the original site: Original Post




From the Useful Spam Department :  I got an advertisement from a robot at “complianceonline.com” that works for a business addressing the problem of data retention on the web from the corporate point of view. 

We’ve all read plenty about the dangers of teenagers publishing their party revels only to find themselves rejected by a university snooping on their Facebook account.  But it’s important to remember that the same issues affect business and government as well, as the complianceonline robot points out:

“Avoid Documentation ‘Time Bombs’

“Your own communications and documents can be used against you.

“Lab books, project and design history files, correspondence including e-mails, websites, and marketing literature may all contain information that can compromise a company and its regulatory compliance. Major problems with the U.S. FDA and/or in lawsuits have resulted from careless or inappropriate comments or even inaccurate opinions being “voiced” by employees in controlled or retained documents. Opinionated or accusatory e-mails have been written and sent, where even if deleted, they still remain in the public domain where they can effectively “last forever”.

“In this electronic age of My Space, Face Book, Linked In, Twitter, Blogs and similar instant communication, derogatory information about a company and its products can be published worldwide, and “go viral”, whether based on fact or not. Today one’s ‘opinion’ carries the same weight as ‘fact’.”

This is all pretty predictable and even banal, but then we get to the gem:  the company offers a webinar on “Electronic Eternity”.  I like the rubric.  I think “Electronic Eternity” is one of the things we should question.  Do we really need to accept that it is inevitable?  Whose interest does it serve?  I can’t see any stakeholder who benefits except, perhaps, the archeologist. 

Perhaps everything should have a half-life unless a good argument can be made for preserving it.