Ethical Foundations of Cybersecurity


This post is by Kim Cameron from Kim Cameron's Identity Weblog


Britain’s Enterprise Privacy Group is starting a new series of workshops that deal squarely with ethics.  While specialists in ethics have achieved a significant role in professions like medicine, this is one of the first workshops I’ve seen that takes on equivalent issues in our field of work.  Perhaps that’s why it is already oversubscribed… 

‘The continuing openness of the Internet is fundamental to our way of life, promoting the free flow of ideas to strengthen democratic ideals and deliver the economic benefits of globalisation.  But a fundamental challenge for any government is to balance measures intended to protect security and the right to life with the impact these may have on the other rights that we cherish and which form the basis of our society.
 
‘The security of cyber space poses particular challenges in meeting tests of necessity and proportionality as its distributed, de-centralised form means that powerful tools may need to be deployed to tackle those who wish to do harm.  A clear ethical foundation is essential to ensure that the power of these tools is not abused.
 
‘The first workshop in this series will be hosted at the Cabinet Office on 17 June, and will explore what questions need to be asked and answered to develop this foundation.

‘The event is already fully subscribed, but we hope to host further events in the near future with greater opportunities for all EPG Members to participate.’

Let’s hope EPG eventually turns these deliberations into a document they can share more widely.  Meanwhile, this article seems to offer an introduction to the literature.

Definitions for a Common Identity Framework


This post is by Kim Cameron from Kim Cameron's Identity Weblog


The Proposal for a Common Identity Framework begins by explaining the terminology it uses.  This wasn’t intended to open up old wounds or provoke ontological debate.  We just wanted to reduce ambiguity about what we actually mean to say in the rest of the paper.  To do this, we did think very carefully about what we were going to call things, and tried to be very precise about our use of terms.

The paper presents its definitions in alphabetical order to facilitate lookup while reading the proposal, but I’ll group them differently here to facilitate discussion.

Let’s start with the series of definitions pertaining to claims.  It is key to the document that claims are assertions by one subject about another subject that are “in doubt”.  This is a fundamental notion since it leads to an understanding that one of the basic services of a multi-party model must be “Claims Approval”.  The simple assumption by systems that assertions are true - in other words the failure to factor out “approval” as a separate service - has led to conflation and insularity in earlier systems.  (A short sketch in code follows the definitions below.)

  • Claim:  an assertion made by one subject about itself or another subject that a relying party considers to be “in doubt” until it passes “Claims Approval”
  • Claims Approval: The process of evaluating a set of claims associated with a security presentation to produce claims trusted in a specific environment so it can be used for automated decision making and/or mapped to an application specific identifier.
  • Claims Selector:  A software component that gives the user control over the production and release of sets of claims issued by claims providers. 
  • Security Token:  A set of claims.
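
To keep these definitions concrete, here is a minimal sketch in TypeScript of how they might be modeled.  The type and function names are mine, not the paper’s, and the trusted-issuer check merely stands in for whatever a real claims-approval service would do.

// Hypothetical illustration of the definitions above; names are not from the paper.

// A claim: an assertion by one subject about itself or another subject,
// considered "in doubt" by the relying party until it passes claims approval.
interface Claim {
  issuer: string;    // the subject making the assertion
  subject: string;   // the subject the assertion is about
  type: string;      // e.g. "age-over-18", "email"
  value: string;
}

// A security token is simply a set of claims.
type SecurityToken = Claim[];

// Claims approval: evaluate the claims in a security presentation and return
// only those trusted in this environment, ready for automated decision making
// or for mapping to an application-specific identifier.
function approveClaims(
  token: SecurityToken,
  trustedIssuers: Set<string>,
): SecurityToken {
  return token.filter((claim) => trustedIssuers.has(claim.issuer));
}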

The concept of claims provider is presented in relation to “registration” of subjects.  Then claims are divided into two broad categories:  primordial and substantive…

  • Registration:  The process through which a primordial claim is associated with a subject so that a claims provider can subsequently issue a set of claims about that subject.
  • Claims Provider:  An individual, organization or service that:
  1. Registers subjects and associates them with primordial claims, with the goal of subsequently exchanging their primordial claims for a set of substantive claims about the subject that can be presented at a relying party; or
  2. Interprets one set of substantive claims and produces a second set (this specialization of a claims provider is called a claims transformer).  A claims set produced by a claims provider is not a primordial claim.
  • Claims Transformer:  A claims provider that produces one set of substantive claims from another set.

To understand this better let’s look at what we mean by “primordial” and “substantive” claims.  The word “primordial” may seem strange at first, but its use will be seen to be rewardingly precise: “Constituting the beginning or starting point, from which something else is derived or developed, or on which something else depends” (OED).

As will become clear, the claims-based model works through the use of “Claims Providers”.  In the most basic case, subjects prove to a claims provider that they are an entity it has registered, and then the claims provider makes “substantive” claims about them.  The subject proves that it is the registered entity by using a “primordial” claim - one which is thus the beginning or starting point, and from which the provider’s substantive claims are derived.  So our definitions are the following: 

  • Primordial Claim: A proof – based on secret(s) and/or biometrics – that only a single subject is able to present to a specific claims provider for the purpose of being recognized and obtaining a set of substantive claims.
  • Substantive claim:  A claim produced by a claims provider – as opposed to a primordial claim.

Passwords and secret keys are therefore examples of “primordial” claims, whereas SAML tokens and X.509 certificates (with DNs and the like) are examples of substantive claims. 
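
As a toy illustration of that exchange (the class and method names are invented, and a plain shared secret stands in for any real primordial claim), a claims provider might look roughly like this:

// Toy model of the primordial -> substantive exchange; not the paper's API.

interface SubstantiveClaim {
  issuer: string;
  subject: string;
  type: string;
  value: string;
}

class ClaimsProvider {
  // Registration associates a subject with a primordial claim (here, a secret).
  private registrations = new Map<string, string>(); // subjectId -> secret
  private attributes = new Map<string, SubstantiveClaim[]>();

  constructor(private readonly name: string) {}

  register(subjectId: string, secret: string, claims: SubstantiveClaim[]): void {
    this.registrations.set(subjectId, secret);
    this.attributes.set(subjectId, claims);
  }

  // The subject presents its primordial claim; only if it is recognized does
  // the provider issue the substantive claims it holds about that subject.
  issueClaims(subjectId: string, presentedSecret: string): SubstantiveClaim[] {
    if (this.registrations.get(subjectId) !== presentedSecret) {
      throw new Error("primordial claim not recognized");
    }
    // Claims issued here are substantive: produced by the provider, not primordial.
    return (this.attributes.get(subjectId) ?? []).map((c) => ({
      ...c,
      issuer: this.name,
    }));
  }
}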

Some will say, “Why don’t you just use the word ‘credential’?”   The answer is simple.  We avoided “credential” precisely because people use it to mean both the primordial claim (e.g. a secret key) and the substantive claim (e.g. a certificate or signed statement).   This conflation makes it unsuitable for expressing the distinction between primordial and substantive, and this distinction is essential to properly factoring the services in the model.

There are a number of definitions pertaining to subjects, persons and identity itself:

  • Identity:  The fact of being what a person or a thing is, and the characteristics determining this.

This definition of identity is quite different from the definition that conflates identity and “identifier” (e.g. kim@foo.bar being called an identity).  Without clearing up this confusion, nothing can be understood.   Claims are the way of communicating what a person or thing is - different from being that person or thing.  An identifier is one possible claim content.

We also distinguish between a “natural person”, a “person”, and a “persona”, taking into account input from the legal and policy community:

  • Natural person:  A human being…
  • Person:  an entity recognized by the legal system.  In the context of eID, a person who can be digitally identified.
  • Persona:  A character deliberately assumed by a natural person

A “subject” is much broader, including things like services:

  • Subject:  The consumer of a digital service (a digital representation of a natural or juristic person, persona, group, organization, software service or device) described through claims.

And what about user?

  • User:  a natural person who is represented by a subject.

The entities that depend on identity are called relying parties:

  • Relying party:  An individual, organization or service that depends on claims issued by a claims provider about a subject to control access to and personalization of a service.
  • Service:  A digital entity comprising software, hardware and/or communications channels that interacts with subjects.

Concrete services that interact with subjects (e.g. digital entities) are not to be confused with the abstract services that constitute our model:

  • Abstract services:  Architectural components that deliver useful services and can be described through high level goals, structures and behaviors.  In practice, these abstract services are refined into concrete service definitions and instantiations.

Concrete digital services, including both relying parties and claims providers, operate on behalf of some “person” (in the sense used here of legal persons, including organizations).  This implies operations and administration:

  • Administrative authority:  An organization responsible for the management of an administrative domain.
  • Administrative domain:  A boundary for the management of all business and technical aspects related to:
  1. A claims provider;
  2. A relying party; or
  3. A relying party that serves as its own claims provider 

There are several definitions that are necessary to understand how different pieces of the model fit together:

  • ID-data base:  A collection of application specific identifiers used with automatic claims approval
  • Application Specific Identifier (ASID):  An identifier that is used in an application to link a specific subject to data in the application.
  • Security presentation:  A set consisting of elements like knowledge of secrets, possession of security devices or aspects of administration which are associated with automated claims approval.  These elements derive from technical policy and legal contracts of a chain of administrative domains.
  • Technical Policy:  A set of technical parameters constraining the behavior of a digital service and limited to the present tense.

And finally, there is the definition of what we mean by user-centric.  Several colleagues have pointed out that the word “user-centric” has been used recently to justify all kinds of schemes that usurp the autonomy of the user.  So we want to be very precise about what we mean in this paper:

  • User-centric:  Structured so as to allow users to conceptualize, enumerate and control their relationships with other parties, including the flow of information.

Proposal for a Common Identity Framework


This post is by Kim Cameron from Kim Cameron's Identity Weblog


Today I am posting a new paper called, Proposal for a Common Identity Framework: A User-Centric Identity Metasystem.

Good news: it doesn’t propose a new protocol!

Instead, it attempts to crisply articulate the requirements in creating a privacy-protecting identity layer for the Internet, and sets out a formal model for such a layer, defined through the set of services the layer must provide.

The paper is the outcome of a year-long collaboration between Dr. Kai Rannenberg, Dr. Reinhard Posch and myself. We were introduced by Dr. Jacques Bus, Head of Unit Trust and Security in ICT Research at the European Commission.

Each of us brought our different cultures, concerns, backgrounds and experiences to the project and we occasionally struggled to understand how our different slices of reality fit together. But it was in those very areas that we ended up with some of the most interesting results.

Kai holds the T-Mobile Chair for Mobile Business and Multilateral Security at Goethe University Frankfurt. He coordinates the EU research projects FIDIS  (Future of Identity in the Information Society), a multidisciplinary endeavor of 24 leading institutions from research, government, and industry, and PICOS (Privacy and Identity Management for Community Services).  He also is Convener of the ISO/IEC Identity Management and Privacy Technology working group (JTC 1/SC 27/WG 5)  and Chair of the IFIP Technical Committee 11 “Security and Privacy Protection in Information Processing Systems”.

Reinhard taught Information Technology at Graz University beginning in the mid 1970’s, and was Scientific Director of the Austrian Secure Information Technology Center starting in 1999. He has been federal CIO for the Austrian government since 2001, and was elected chair of the management board of ENISA (The European Network and Information Security Agency) in 2007. 

I invite you to look at our paper.  It aims at combining the ideas set out in the Laws of Identity and related papers, extended discussions and blog posts from the open identity community, the formal principles of Information Protection that have evolved in Europe, research on Privacy Enhancing Technologies (PETs), outputs from key working groups and academic conferences, and deep experience with EU government digital identity initiatives.

Our work is included in The Future of Identity in the Information Society - a report on research carried out in a number of different EU states on topics like the identification of citizens, ID cards, and Virtual Identities, with an accent on privacy, mobility, interoperability, profiling, forensics, and identity related crime.

I’ll be taking up the ideas in our paper in a number of blog posts going forward. My hope is that readers will find the model useful in advancing the way they think about the architecture of their identity systems.  I’ll be extremely interested in feedback, as will Reinhard and Kai, who I hope will feel free to join into the conversation as voices independent from my own.

Information Card Specification Standards Approval Vote


This post is by Mike Jones from Mike Jones: self-issued


OASIS has scheduled the standards approval vote for the Identity Metasystem Interoperability Version 1.0 specification for June 16-30. My thanks to everyone who submitted comments during the public review. Numerous clarifications have been incorporated as a result of your comments, while still maintaining compatibility with the Identity Selector Interoperability Profile V1.5 (ISIP 1.5) specification.

Information Cards in Industry Verticals


This post is by Kim Cameron from Kim Cameron's Identity Weblog


The recent European Identity Conference, hosted in Munich by the analyst firm Kuppinger Cole, had great content inspiring an ongoing stream of interesting conversations.   Importantly, attendance was up despite the economic climate, an outcome Tim Cole pointed out was predictable since identity technology is so key to efficiency in IT.

One of the people I met in person was James McGovern, well known for his Enterprise Architecture blog.  He is on a roll writing about ideas he discussed with a number of us at the conference, starting with this piece on use of Information Cards in industry verticals.  James knows a lot about both verticals and identity.  He has started a critical conversation, replete with the liminal questions he is known for:

‘Consider a scenario where you are an insurance carrier and you would like to have independent insurance agents leverage CardSpace for SSO. The rationale says that insurance agents have more personally identifiable information on consumers ranging from their financial information such as where they work, how much they earn, where they live, what they own to information about their medical history, etc. When they sell an insurance policy they will even take payment via credit cards. In other words, if there were a scenario where username/passwords should be demolished first, insurance should be at the top of the list.’

A great perception.  Scary, even.

‘Now, an independent insurance agent can do business with a plethora of carriers who all are competitors. The ideal scenario says that all of the carriers would agree to a common set of claims so as to ensure card portability. The first challenge is that the insurance vertical hasn’t been truly successful in forming useful standards that are pervasive (NOTE: There is ACORD but it isn’t widely implemented) and therefore relying on a particular vertical to self-organize is problematic.

‘The business value - while not currently on the tongues of enterprise architects who work in the insurance vertical - says that by embracing information cards, they could minimally save money. By not having to manage so many disparate password reset approaches (each carrier has their own policies for password history, complexity and expiry) they can improve the user experience…

‘If I wanted to be a really good relying party, I think there are other challenges that would emerge. Today, I have no automated way of validating the quality of an identity provider and would have to do this as a bunch of one-offs. So, within our vertical, we may have say 80,000 different insurance agencies who could have their own identity provider. With such a large number, I couldn’t rely on white listing and there has to be a better way. We should of course attempt to define what information would need to be exposed at runtime in order for trust to be consumed.’

This raises the matter of how trust would be concretized within the various verticals.  White listing is obviously too cumbersome given the numbers.  James proposes an idea that I will paraphrase as follows:  use claims transformers run by trusted entities (like state departments of insurance) to vet incoming claims.  The idea would be to reuse the authorities already involved in making this kind of decision.

He goes on to examine the challenge of figuring out what identity proofing process has actually been used by an identity provider.  In a paper I collaborated on recently (I’ll be publishing it here soon) we included the proofing and registration processes as one element in a chain of factors we called the “security presentation”.  One of the points James makes is that it should be easy to include an explicit statement about the “security presentation” as one element of any claim-set being submitted (see James’ post for some good examples).  Another is that the relying party should be able to include a statement of its security presentation requirements in its policy.
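
To picture what such a statement and policy might look like, here is a rough sketch with invented claim names and categories; it is not drawn from any Information Card, Liberty, or WS-* schema:

// Hypothetical shapes; no real Information Card or WS-* schema is implied.

interface SecurityPresentationStatement {
  identityProofing: "in-person" | "remote-documentary" | "self-asserted";
  authenticationFactor: "password" | "otp" | "smartcard";
  vettingAuthority?: string; // e.g. a state department of insurance
}

interface RelyingPartyPolicy {
  acceptedProofing: Array<SecurityPresentationStatement["identityProofing"]>;
  acceptedVettingAuthorities: string[];
}

// The relying party checks the statement accompanying an incoming claim set
// against its own stated security-presentation requirements.
function meetsPolicy(
  stmt: SecurityPresentationStatement,
  policy: RelyingPartyPolicy,
): boolean {
  return (
    policy.acceptedProofing.includes(stmt.identityProofing) &&
    stmt.vettingAuthority !== undefined &&
    policy.acceptedVettingAuthorities.includes(stmt.vettingAuthority)
  );
}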

James concludes with a set of action items that need to be addressed for Information Cards to be widely used in industry verticals:

‘1. Microsoft needs to redouble its efforts to sell information cards as a business value proposition, whereas the current pitch is towards a technical audience. It is nice that it will be part of Geneva but this means that its capabilities would not be fully leveraged unless it is understood by more than folks who do just infrastructure work.

‘2. Oasis is a wonderful standards organization and can add value as a forum to organize common claims at an industry vertical level. Since identity is not insurance specific, we have to acknowledge that using insurance specific bodies such as ACORD may not be appropriate. I would be game to participate on a working group to generate common claims for the insurance vertical.

‘3. When it comes to developing enterprise applications using the notion of claims, …developers need to do a quick paradigm shift. I can envision a few of us individuals who are also book authors coming up with a book entitled: Thinking in Claims and XACML as there is no guide to help developers understand proper architecture going forward. If such a guide existed, we… (could avoid repeating) …the same mistakes of the past.

‘4. I am wildly convinced that industry analysts are having the wrong conversations around identity. Ask yourself, how many ECM systems have on their 2009 roadmap the ability to consume a claim? How many BPM systems? In case you haven’t figured it out, the answer is a big fat zero. This says that the identity crowd is evangelizing to the wrong demographic. Industry analysts are measuring identity products when what consumers really need is a measure of how many existing products can consume new approaches to identity. Does anyone have a clue as to how to get analysts such as Nick Malik, Gerry Gebel, Bob Blakley and others to change the conversation?

‘5. We need to figure out some additional identity standards that an IDP could expose to an RP to assert vetting, attestation, indemnification and other constructs to relying parties. This will require a small change in the way that identity selectors work but B2B user-centric approaches won’t scale without these approaches…’

I know some good work to formalize various aspects of the “security presentation” has been going on in one of the Liberty Alliance working groups - perhaps someone involved could post about the progress that has been made and how it ties in to some of James’ action items. 

James’ action items are all good.  I buy his point that Microsoft needs to take claims beyond the current “infrastructure” community - though I still see the participation of this community as absolutely key.  But we need - as an industry and as individual companies - to widen the discussion and start figuring out how claims can be used in concrete verticals.  As we do this, I expect to see many players, with very strong participation from Microsoft,  taking the new paradigm to the “business people” who will really benefit from the technology.

When Geneva is released to manufacturing later this year, it will be seen as a fundamental part of Active Directory and the Windows platform.  I expect that many programs will then start to kick in that turn up the temperature along the lines James proposes.

My only caution with respect to James’ argument is that I hope we can keep requirements simple in the first go-around.  I don’t think ALL the capabilities of claims have to be delivered “simultaneously”, though I think it is essential for architects like James to understand them and build our current deliverables in light of them. 

So I would add a sixth bullet to the five proposed by James, about beginning with extremely simplified profiles and getting them to work perfectly and interoperably before moving on to more advanced scenarios.  Of course, that means more work in nailing the most germane scenarios and determining their concrete requirements.  I expect James would agree with me on this (I guess I’ll find out, eh?…)

[By the way, James also has an intriguing graphic that appears with the piece, but doesn't discuss it explicitly. I hope that is a treat that is coming...]

Cyber Security


This post is by Bob from Ceci n'est pas un Bob


The Obama administration released the results of its Cyber-Security Review last week. The report's conclusions and recommendations aren't going to do any harm, but they're not going to solve the cyber-security problem either. Start with the obvious: information security has failed, as a technology and as a discipline. A lot of security professionals object to this statement, but let's get real. Hundreds of millions of credit card numbers are stolen from retailers, processors, and other online properties every year. Foreign hackers roam the systems supporting major national defense projects. Spam, malware, and viruses circulate constantly despite the purchase and use of millions of dollars worth of anti-malware tools. Serious penetration tests succeed essentially 100% of the time. The list goes on; the news is all bad, and it's on all the time. The Cyberspace Policy Review team wants to fix this by building "next generation secure computers and networking for …"

More precision on the Right to Correlate


This post is by Kim Cameron from Kim Cameron's Identity Weblog


Dave Kearns continues to whack me for some of my terminology in discussing data correlation.  He says: 

‘In responding to my “violent agreement” post, Kim Cameron goes a long way towards beginning to define the parameters for correlating data and transactions. I’d urge all of you to jump into the discussion.

‘But - and it’s a huge but - we need to be very careful of the terminology we use.

‘Kim starts: “Let’s postulate that only the parties to a transaction have the right to correlate the data in the transaction, and further, that they only have the right to correlate it with other transactions involving the same parties.” ‘

Dave’s right that this was overly restrictive.  In fact I changed it within a few minutes of the initial post - but apparently not fast enough to prevent confusion.  My edited version stated:

‘Let’s postulate that only the parties to a transaction have the right to correlate the data in the transaction (unless it is fully anonymized).’

This way of putting things eliminates Dave’s concern:

‘Which would mean, as I read it, that I couldn’t correlate my transactions booking a plane trip, hotel and rental car since different parties were involved in all three transactions!’

That said, I want to be clear that “parties to a transaction” does NOT include what Dave calls “all corporate partners” (aka a corporate information free-for-all!)  It just means parties (for example corporations) participating directly in some transaction can correlate it with the other transactions in which they directly participate (but not with the transactions of some other corporation unless they get approval from the transaction participants to do so). 
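
Restated as a toy rule (the names are invented, and this is of course not a real enforcement mechanism), that reads roughly as follows:

// Sketch only: encoding the stated rule, not proposing an implementation.

interface Transaction {
  id: string;
  parties: Set<string>; // parties participating directly in the transaction
}

// A party may correlate two transactions only if it participated directly in
// both, or has been delegated that right by the transactions' participants.
function mayCorrelate(
  party: string,
  a: Transaction,
  b: Transaction,
  delegations: Set<string> = new Set(), // parties granted the right by the participants of both
): boolean {
  const participatesInBoth = a.parties.has(party) && b.parties.has(party);
  return participatesInBoth || delegations.has(party);
}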

Dave argues:

‘In the end, it isn’t the correlation that’s problematic, but the use to which it’s put. So let’s tie up the usage in a legally binding way, and not worry so much about the tools and technology.

‘In many ways the internet makes anti-social and unethical behavior easier. That doesn’t mean (as some would have it) that we need to ban internet access or technological tools. It does mean we need to better educate people about acceptable behavior and step up our policing tools to better enable us to nab the bad guys (while not inconveniencing the good guys).’

To be perfectly clear, I’m not proposing a ban on technology!  I don’t do banning!  I do creation. 

So instead, I’m arguing that as we develop our new technologies we should make sure they support the “right to correlation” - and the delegation of that right - in ways that restore balance and give people a fighting chance to prevent unseen software robots from limiting their destinies.

 

Ok… not so Simple Question: How the heck are managers supposed to know if access is correct?


This post is by Mat from MatHamlin.com


How can managers, when presented with their employees’ access across all enterprise applications, make a determination of accuracy?  They can’t, or won’t, if they don’t understand what they are attesting to.

So, we have to make it easy for them.

Here are a few ways to make it easier:

Glossary

For business users to understand a list of fine-grained access rights currently held by their employee, the information must be easy to understand.  There needs to be a translation between the IT representation of access and what it actually means if you were explaining it to someone face to face.  For example, if I were to ask a manager, Joe, if his employee, Suzy, should have SAP TCode ‘BGM1’, Joe would have no idea… let alone sign his life away on a decision.  We must translate it to, “Is it ok if Suzy has rights to create master warranties?”  Ideally, your company would establish a cross-departmental governance board to translate these items, and manage and maintain them over time.
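
At bottom such a glossary is just a maintained mapping from the IT representation to a business-language description and an owner.  A minimal sketch, with invented entries apart from the BGM1 example above:

// Invented example entries; the SAP TCode description follows the post's example.

interface GlossaryEntry {
  system: string;        // where the entitlement lives
  entitlement: string;   // IT representation of the access
  description: string;   // business-language meaning shown to the certifier
  owner: string;         // who maintains and can explain this entry
}

const glossary: GlossaryEntry[] = [
  {
    system: "SAP",
    entitlement: "TCode BGM1",
    description: "Can create master warranties",
    owner: "warranty-process-owner@example.com",
  },
];

// Translate an entitlement for display during access certification.
function describe(system: string, entitlement: string): string {
  const entry = glossary.find(
    (e) => e.system === system && e.entitlement === entitlement,
  );
  return entry ? entry.description : `${system} ${entitlement} (no translation yet)`;
}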

ID Card / Contact Information

During the attestation process, if a manager is provided translated access information, but still doesn’t know if the access is correct or not, who can help?  The owner of the access.  During automated access certification, the contact information of the owner of the access could/should be presented to the person making the decision about appropriateness, so they can contact them directly and talk about it.

Delegate the decision

What if a manager is reviewing access for an employee, and finds entitlements that they believe are tied to a temporary project, or cross-functional task?  They really are not the appropriate attestor of this access….  So, a manager should be able to delegate the decision about access to the appropriate business owner.

Present the right information to the right people

From the outset of your access certification process, you should be thinking about who should be determining appropriateness of access to what applications and data.  Building on the example above… the project manager should be presented with a list of access relating to the project for individuals on the project.  Ensuring your automated solution provides this flexibility of certification populations is important.

Present information about the access data

Enable your attestors/certifiers to make an informed decision.  Indicate to them during the certification process if certain access is deemed high risk, or is part of an existing SoD violation, or is of a certain classification (like Finance), or is access that has been previously revoked.  All of this metadata about the access information will increase the effectiveness of your access certification process.
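
Pulling these suggestions together, each line item a certifier sees might carry metadata shaped something like this (the field names are invented, not any product’s schema):

// Invented shape for a certification line item; not any product's schema.

interface CertificationItem {
  user: string;
  entitlement: string;
  businessDescription: string;   // from the glossary
  accessOwnerContact: string;    // who to ask if the manager is still unsure
  highRisk: boolean;
  inSodViolation: boolean;       // part of an existing separation-of-duties violation
  classification?: string;       // e.g. "Finance"
  previouslyRevoked: boolean;
}

// The certifier's choices map naturally onto the suggestions above.
type Decision = "approve" | "revoke" | "delegate-to-owner";

// Flag items that deserve a closer look before the certifier signs off.
function needsExtraScrutiny(item: CertificationItem): boolean {
  return item.highRisk || item.inSodViolation || item.previouslyRevoked;
}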

Simple Answer: Sit with your certifiers and understand why and where they are having difficulty completing their certifications, and apply some of the items above to make it easier for them.

Do people care about data correlation?


This post is by Kim Cameron from Kim Cameron's Identity Weblog


While I was working on the last couple of posts about data correlation, trusty old RSS brought in a  corroborating piece by Colin McKay at the Office of the Privacy Commissioner of Canada.   Many  in the industry seem to assume people will trade any of their personal information for the smallest trinkets, so more empirical work of the kind reported here seems to be essential.

‘How comfortable, exactly, are online users with their information and online browsing habits being used to track their behaviour and serve ads to them?

‘A survey of Canadian respondents, conducted by TNS Facts and reported by the Canadian Marketing Association, reports that a large number of Canadians and Americans “(69% and 67% respectively) are aware that when they are online their browsing behaviour may be captured by third parties for advertising purposes.”

‘That doesn’t mean they are comfortable with the practice. The same survey notes that “just 33 per cent of Canadians who are members of a site are comfortable with these sites using their browsing information to improve their site experience. There is no difference in support for the use of consumers’ browsing history to serve them targeted ads, be it with the general population, the privacy concerned, or members of a site.”’

If only 33% are comfortable with sites using browsing information to improve site experience, I wonder how many will be comfortable with browsing information being used to evaluate whether to terminate people’s credit cards (see thread on Martinism)?  Can I take a guess?  How about 1%?  (This may seem high, but I have a friend in the direct marketing world who tells me 1% of the population will believe in anything at all!)  Colin continues:

‘But how much information are users willing to consciously hand over to win access to services, prizes or additional content?

‘A survey of 1800 visitors to coolsavings.com, a coupon and rebate site owned by Q Interactive, has claimed that web visitors are willing “to receive free online services and information in exchange for the use of my data to target relevant advertising to me.”

‘Now, my impression is that visitors to sites like coolsavings.com - who are actively seeking out value and benefits online - would be predisposed to believing that online sites would be able to deliver useful content and relevant ads.

‘That said, Mediapost, who had access to details of the full Q Interactive survey, cautions that users “… continue to put the brakes on hard when asked which specific information they are willing to hand over. The survey found 77.8% willing to give zip code, 64.9% their age and 72.3% their gender, but only 22.4% said they wanted to share the Web sites they visited and only 12% and 12.1% were willing to have their online purchases or the search history respectively to be shared …” ‘

I want to underline Colin’s point.  These statistics come from people who actively sought out a coupon site in order to trade information for benefits!  Even so, we are talking about a mere 12% who were willing to have their online purchases or search history shared.  This empirically nixes the notion, held by some, that people don’t care about data correlation (an issue I promised to address in my last post).

Colin’s conclusions seem consistent with the idea I sketched there of defining a new “right to data correlation” and requiring delegation of that right before trusted parties can correlate individuals across contexts.

‘In both the TNS Facts/CMA and Q Interactive surveys, the results seem to indicate that users are willing to make a conscious decision to share information about themselves – especially if it is with sites they trust and with whom they have an established relationship.

‘A common thread seems to be emerging: consumers see a benefit to providing specific data that will help target information relevant to their needs, but they are less certain about allowing their past behaviour to be used to make inferences about their individual preferences.

‘They may feel their past search and browsing habits might just have a greater impact on their personal and professional life than the limited re-distribution of basic personal information by sites they trust. Especially if those previous habits might be seen as indiscreet, even obscene.’

Colin’s conclusion points to the need to be able to “revoke the right to data correlation” that may have been extended to third parties.  It also underlines the need for a built-in scheme for aging and deletion of correlation data.

 

Simple Question: Is this access correct?


This post is by Mat from MatHamlin.com


Correct…

What is correct?

  • Is the data in the warehouse up to date?
  • Are the accounts correlated correctly to their owners?
    • How do you know?
  • What about the accounts that can’t be correlated to an actual person?
    • Are they system accounts (used by applications)?
    • Are they privileged accounts, used by IT administrators (bad!.. no shared passwords)?
    • Are they accounts that were once owned by employees, contractors or partners who no longer have a relationship with the business?
  • Is each person’s access correct based on least privilege? (only access needed to perform their job)
    • What is least privilege for Suzy? Bob?
  • Does any of the current access represent a risk?
    • Does anyone have the ability to perform an unwanted transaction (or set of transactions)?
    • Who has privileged access to applications and data?

To properly answer these questions, you have to ask the people who would know…  The Business. If you ask the IT department, they might be able to tell you when the access was granted, and maybe even how… but it is unlikely that they can tell you why… and even more unlikely that they know if it is still needed.

The business also knows if the current, static access of each person is correct.  If there is anyone in the company who knows what access Bob or Suzy actually needs, it’s their manager or possibly the owners of the applications on which they have accounts.  The business owners need to review each individual’s exact access, down to the entitlement level, and make a determination of appropriateness.  This is the process of access certification.

Additionally, the business should be engaged to decide what entitlements, when granted to the same individual, constitute a Separation of Duties violation.  These SoD policies can typically span the entire enterprise, and all applications should be considered during the evaluation cycle.  For example, your vendor management (for creating vendor records) could be in an operational application, like a fulfillment or inventory solution, while your vendor payment process may rely on the records in your accounting application.  In this scenario, if someone had the ability to create a vendor in the inventory solution, and then pay the vendor in the accounting solution, this would constitute a SoD violation, or a “Toxic Combination” of access.  This is why the ability to define and enforce SoD policies across enterprise applications is critical.
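
A minimal sketch of that evaluation, using the create-vendor/pay-vendor pair from the example (the application and entitlement names are illustrative only):

// Sketch of a cross-application SoD check; names are illustrative only.

interface SodPolicy {
  name: string;
  left: string;   // entitlement, qualified by application
  right: string;  // entitlement that conflicts with `left`
}

const policies: SodPolicy[] = [
  {
    name: "Create vendor vs. pay vendor",
    left: "inventory-app:create-vendor",
    right: "accounting-app:pay-vendor",
  },
];

// entitlementsByPerson comes from the identity warehouse, spanning all applications.
function findViolations(
  entitlementsByPerson: Map<string, Set<string>>,
  rules: SodPolicy[] = policies,
): Array<{ person: string; policy: string }> {
  const violations: Array<{ person: string; policy: string }> = [];
  for (const [person, ents] of entitlementsByPerson) {
    for (const rule of rules) {
      if (ents.has(rule.left) && ents.has(rule.right)) {
        violations.push({ person, policy: rule.name }); // a "toxic combination"
      }
    }
  }
  return violations;
}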

Simple Answer: Once you’ve built an identity warehouse, execute an SoD evaluation and complete an access certification.  Once they are complete, you truly have “Identity Gold”… all nice and shiny.

….next question: How the heck are managers supposed to know if access is correct?

Fixed Aptana RadRails GEM_LIB issue on mac


This post is by donpark from Don Park's Daily Habit


Fixed Aptana RadRails GEM_LIB issue on mac by linking ‘/Users/{user}/.gem/ruby/1.8/gems’ to ‘/usr/local/lib/ruby/gems/1.8/gems’. I can’t blame Aptana for this since it was me who chose to use a tool built by a company that spread itself too thin. I doubt they have more than a couple of engineers working on RadRails, which is not enough to provide the necessary quality across the range of environments Aptana is unfortunately being asked to support.


Posted in General Tagged: aptana, gem_lib, radrails, wtf

Simple Question: Who has access to what?


This post is by Mat from MatHamlin.com


Who has access to what?… a simple question, but one that is not so easy to answer for a lot of companies… companies that are nonetheless compelled to answer this question and meet their regulatory obligations.

Siloed IT departments, mergers and acquisitions, employee transfers, contractors hired to full time positions, and terminations can all lead to proliferation of invalid access.  Getting a handle on who has access to what is often times a difficult task that requires cross-departmental cooperation and process development to even gather the data.  Once gathered, correlation of accounts to an actual person or “subject” needs to occur, and is also not an easy task.
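
In practice, correlation usually comes down to matching each collected account against an authoritative list of people on whatever keys exist, and setting everything that doesn’t match aside for investigation.  A rough sketch with invented attribute names:

// Illustrative only: correlate accounts to people by employee id, then email.

interface Account {
  application: string;
  accountId: string;
  employeeId?: string;
  email?: string;
}

interface Person {
  personId: string;
  employeeId: string;
  email: string;
}

function correlate(accounts: Account[], people: Person[]) {
  const byEmployeeId = new Map(people.map((p) => [p.employeeId, p]));
  const byEmail = new Map(people.map((p) => [p.email.toLowerCase(), p]));

  const matched: Array<{ account: Account; person: Person }> = [];
  const orphans: Account[] = []; // system, privileged, or stale accounts to investigate

  for (const account of accounts) {
    const person =
      (account.employeeId && byEmployeeId.get(account.employeeId)) ||
      (account.email && byEmail.get(account.email.toLowerCase())) ||
      undefined;
    if (person) {
      matched.push({ account, person });
    } else {
      orphans.push(account);
    }
  }
  return { matched, orphans };
}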

We often overlook the value of gathering Identity data.  In a recent face-to-face meeting with Ian Glazer of the Burton Group, he referred to this as “Identity Gold”, and I completely agree.

This step is the foundation for Access Certification, Role Mining, Entitlements Management, Policy Evaluation, Identity Auditing, and numerous other custom services developed by our customers.

Simple Answer: Build an Identity Warehouse… next question: Is this access correct?

The Permissioned Web: Open Does Not Mean Public Domain


This post is by Drummond Reed from Equals Drummond


At the Glue Conference this week I’m enjoying a great set of speakers lined up by Eric Norlin on the topic of how everything in the networked universe gets glued together using Web 2.0 tools and beyond. (The talk Mitch Kapor gave this morning was worth the trip all by itself.)

In a few minutes I’ll be on a panel called Implementing the Open Web. In chatting with Lloyd Hilaiel of Yahoo, Kevin Mullins of MIT, and Phil Windley of Kynetx about this topic last night, we hit on one key point that Phil articulated this way: “People tend to conflate ‘open’ with ‘public domain’, i.e.,  that anything that qualifies as open must be freely available to all.”

It struck me how true this is. It reminds me of the Richard Stallman quote describing open source (cited in the Wikipedia Gratis versus Libre article): “Think free as in free speech, not free beer.”

In terms of data on the Open Web, what this means is that even though a particular pool of data may be available via an open standard, publicly-accessible interface, it does NOT mean this data must be publicly available to anyone. If that were true, the whole concept of a personal data store — a key premise of VRM (Vendor Relationship Management) — would not be possible.

So what makes any system or node participating in the Web “open” is not that its data is public, but that the metadata and services for accessing it are available via a publicly discoverable, open-standard interface. The public discovery portion of this is the goal of the XRD work now underway at the XRI Technical Committee at OASIS (based on the original XRDS work – see this blog post by Eran Hammer-Lahav of Yahoo to understand the differences). The open standard portion is the output of IETF, W3C, OASIS, and all the other SSOs (standards-setting organizations) for the net. (The potential of the Open Web Foundation, once it finishes its bootstrap stage, is to make this process of creating open standards even more lightweight and distributed.)

This combination – open discovery of open interfaces accessible over open protocols – is the DNA of the Open Web. And it applies equally to both public and private data. In fact it can finally open up what might be called the Permissioned Web - the Web of all data that any one party has permission from other parties to access.
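
Put another way: the descriptor is discoverable by anyone, but the data behind it is released only to parties the owner has permissioned.  A deliberately abstract sketch (invented names; it does not model XRD or XDI):

// Conceptual sketch only; it does not model XRD, XRDS, or XDI.

interface ResourceDescriptor {
  subject: string;              // whose data this is
  endpoint: string;             // openly discoverable service endpoint
  protocols: string[];          // open-standard protocols it speaks
}

interface PermissionGrant {
  resourceSubject: string;
  grantedTo: string;            // the party allowed to access the data
  scope: string;                // e.g. "read:address"
}

// Anyone can discover the descriptor; only permissioned parties get the data.
function authorize(
  requester: string,
  descriptor: ResourceDescriptor,
  scope: string,
  grants: PermissionGrant[],
): boolean {
  return grants.some(
    (g) =>
      g.resourceSubject === descriptor.subject &&
      g.grantedTo === requester &&
      g.scope === scope,
  );
}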

That would lead us to the need for integrating identity and permissions with the data, which brings us to the motivations for XDI as a semantic data sharing format/protocol – but my panel is about to start so that will have to be another post.

HTML5 Microdata Fantasy


This post is by donpark from Don Park's Daily Habit


I haven’t been tracking HTML5 design efforts lately but what’s being proposed for microdata (see posts by Sam Ruby and Shelly Powers) yucked me sufficiently to revisit an old fantasy of mine about HTML (man, what a boring life I have). My fantasy was to add a general element/structure definition facility to HTML. It could easily be extended to support microdata as well.

The way I envisioned it being used is like this:

<address>
<street>123 ABC St.</street>
<city>Foobar</city>
<state>CA</state><zip>94065</zip>
</address>

which sure is preferable to:

<div item>
<span itemprop="street">123 ABC St.</span>
<span itemprop="city">Foobar</span>
<span itemprop="state">CA</span>
<span itemprop="zip">94065</span>
</div>

As to how such semantic structures and syntactic sugar can be defined, one very arbitrary way could be:

<head>
<def name="address" package="http://test.com/1/mapking"
    params="{{street city state zip}}">
  <div>
    <span>{{street}}</span>
    <span>{{city}}</span>
    <span>{{state}}</span>
    <span>{{zip}}</span>
  </div>
</def>
</head>

I don’t have any illusions that this fantasy has even a tiny chance of coming true though. Besides, it’s like a beggar asking for caviar when any kind of microdata support will satiate our hunger.

Boss! Boss! The Plane. The Plane!

update:

Here is a more elaborate version of the def element for the bored:

<def name="name" package="http://ting.ly/name"
  attrs="$$first last$$">
  <span>$$first$$ $$middle$$ $$last$$</span>
</def>

which could be used like this:

<name first="Don" last="Park"/>

There are lots of holes in this sketch, which is why it’s a fantasy.


Posted in Technical Tagged: fantasy, html5, microdata

“Geneva” Beta 2 is Here


This post is by Mike Jones from Mike Jones: self-issued


Microsoft announced the availability of the second beta of its forthcoming “Geneva” claims-based identity software today during Tech•Ed. This is a significant milestone for the team along the path to releasing production versions of the “Geneva” software family, which includes the server, framework, and CardSpace. I’m personally particularly proud of all the interop work that has been done in preparation for this release. I believe that you’ll find it to be high-quality and interoperable with others’ identity software using WS-*, SAML 2.0, and Information Cards.

For more details, see What’s New in Beta 2 on the “Geneva” Team Blog. Visit the “Geneva” information page. Check out the Identity Developer Training Kit. Learn from team experts on the ID Element show. Download the beta. And let us know how it works for you, so the final versions can be even better.

Enjoy!

ICF Achievements at the EIC


This post is by Mike Jones from Mike Jones: self-issued


This week the Information Card Foundation marked two significant developments at the European Identity Conference: the formation of the German-language chapter of the ICF, and receiving the European Identity Award for Best New Standard.

The inaugural meeting of the German-language D-A-CH chapter was exciting. About 25 people attended representing at least 17 companies and organizations. A highlight was presentations by Fraunhofer FOKUS, Deutsche Telekom, CORISECIO, Siemens, Universität Potsdam, and Microsoft about their Information Card work. Lots of good things happening! Also see the ICF post about the chapter.

Information Card Foundation German Chapter Logos

Receiving the European Identity Award for Best New Standard was a significant honor for the foundation, and a mark of the maturing of the Information Card ecosystem. Also see the ICF post about the award.

European Identity Award

Sehr aufregend!

Star Trek: See It


This post is by Drummond Reed from Equals Drummond


One advantage of having a 13-year old son is that you have an excuse to go see a summer blockbuster movie on the very first night it comes out.

I never did that as a kid, which is one reason I let my son (and his biggest ally in such guilty pleasures, my wife) talk me into it.

And boy, was it worth it. I love films, especially world class dramas, but there’s something extra special about a Hollywood popular movie that somehow turns fun into its own high art. The first Pirates of the Caribbean, the original Spiderman film (and to a lesser extent the third), and last summer’s Dark Knight all fit this bill.

Now you can add this Star Trek. Where exactly they found the energy, humor, and drive in this film I have no idea. How it plays gently, lovingly, and brilliantly off the original while at the same time channelling its own unique spirit and energy still has me doing a mental whistle each time I think about it.

This one will be a good old-fashioned b-l-o-c-k-b-u-s-t-e-r at the box office. But don’t go see it for that reason. Go because it will make you happy that so many generations can enjoy the same story.

Smiley Profile Image Set


This post is by donpark from Don Park's Daily Habit


I wish I could use a set of profile images instead of just one and have the appropriate one displayed based on the text content, so that if I put a smiley like :-) or ;-) in the text, a photo of me smiling or winking will show.

It doesn’t have to be a face, it could be topic/category images. And I don’t see why tweet-specific images couldn’t be displayed since Twitter already sends out an image URL with each tweet (inside ‘user’).


Posted in General Tagged: blog, twitter

The Porkalypse, Blakley’s Law, and the WHO


This post is by Bob from Ceci n'est pas un Bob


Swine Flu has been downgraded to Influenza Type A (H1N1) for the sake of the pigs, but the WHO Epidemic and Pandemic Alert and Response Phase is still at 5 ("A pandemic is imminent"). The Department of Homeland Security claims that its National Threat Advisory is at "Yellow" ("Significant risk of terrorist attacks") - but DHS is just kidding. For air travellers it's still "Orange" ("High risk of terrorist attacks"). At first glance these two alarming indicators seem similar. They're not. The DHS National Threat Advisory is a public alert system. That a public alert system is indicating imminent disaster is not surprising. In fact it's inevitable. It's the nature of public alert systems to signal imminent disaster at all times. I've composed "Blakley's Law" (next time I come up with one of these I'll rename this one "Blakley's First Law") to describe the phenomenon:
"Every public alert system's status Continue reading "The Porkalypse, Blakley’s Law, and the WHO"