24-year-old student lights match: Europe versus Facebook

If you are interested in social networks, don’t miss the slick video about Max Schrems’ David-and-Goliath struggle with Facebook over the way it treats his personal information.  Click on the red “CC” in the lower right-hand corner to see the English subtitles.

Max is a 24-year-old law student from Vienna with a flair for the interview and plenty of smarts about both technology and legal issues.  In Europe, entities holding data about individuals are required to make it available to them on request.  That’s how Max ended up with a personalized CD from Facebook, which he printed out into a stack of paper more than a thousand pages thick (see image below). Analysing it, he came to the conclusion that Facebook is engineered to break many of the requirements of European data protection law.  He argues that the record Facebook provided catches the company in flagrante delicto.

The logical next step was a series of 22 lucid and well-reasoned complaints that he submitted to the Irish Data Protection Commissioner (Facebook states that European users have a relationship with its Irish subsidiary).  This was followed by another perfectly executed move:  setting up a web site called Europe versus Facebook that does everything right in terms of using web technology to mount a campaign against a commercial enterprise that depends on its public relations to succeed.

Europe versus Facebook, which seems eventually to have become an organization, then opened its own YouTube channel.  As part of the documentation, it publicised the procedure Max used to get his personal CD.  Somehow this recipe found its way to reddit, where it ended up on a couple of top-ten lists.  So many people applied for their own CDs that Facebook had to send out an email indicating it was unable to comply with the requirement that it provide the information within 40 days.

As if that weren’t enough, there’s more.  As Max studied what had been revealed to him, he noticed that important information was missing and asked for the rest of it.  The response ratchets the battle up one more notch:

Dear Mr. Schrems:

We refer to our previous correspondence and in particular your subject access request dated July 11, 2011 (the Request).

To date, we have disclosed all personal data to which you are entitled pursuant to Section 4 of the Irish Data Protection Acts 1988 and 2003 (the Acts).

Please note that certain categories of personal data are exempted from subject access requests.
Pursuant to Section 4(9) of the Acts, personal data which is impossible to furnish or which can only be furnished after disproportionate effort is exempt from the scope of a subject access request. We have not furnished personal data which cannot be extracted from our platform in the absence of disproportionate effort.

Section 4(12) of the Acts carves out an exception to subject access requests where the disclosures in response would adversely affect trade secrets or intellectual property. We have not provided any information to you which is a trade secret or intellectual property of Facebook Ireland Limited or its licensors.

Please be aware that we have complied with your subject access request, and that we are not required to comply with any future similar requests, unless, in our opinion, a reasonable period of time has elapsed.

Thanks for contacting Facebook,
Facebook User Operations Data Access Request Team

What a spotlight

This throws intense light on some amazingly important issues. 

For example, as I wrote here (and Max describes here), Facebook’s “Like” button collects information every time an Internet user views a page containing the button, and a Facebook cookie associates that page with all the other pages with “Like” buttons visited by the user in the last 3 months. 

If you use Facebook, records of all these visits are linked, through cookies, to your Facebook profile - even if you never click the “like” button.  These long lists of pages visited, tied in Facebook’s systems to your “Real Name identity”, were not included on Max’s CD. 
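The mechanics are easy to sketch.  Here is a deliberately simplified Python illustration - all names and data are hypothetical, and nothing here is Facebook’s actual code - of how a third-party button plus a long-lived cookie turns ordinary page views into one linkable browsing history:

```python
# Toy model of third-party-cookie tracking. Every page embedding the button
# fires a request back to the social network; the browser attaches the same
# cookie each time, so the network can link the visits into one trail.
from collections import defaultdict

visit_log = defaultdict(list)  # cookie id -> pages seen

def like_button_request(cookie_id: str, embedding_page: str) -> None:
    """What the tracker learns when the button merely loads - no click required."""
    visit_log[cookie_id].append(embedding_page)

# One user browsing three unrelated sites, each embedding the button:
for page in ["news.example/article1", "health.example/symptoms", "shop.example/cart"]:
    like_button_request("cookie-1234", page)

print(visit_log["cookie-1234"])  # the whole trail, ready to be tied to a profile
```

If the cookie is ever associated with a logged-in account, the entire trail becomes attached to a real name retroactively.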

Is Facebook prepared to argue that it need not reveal this stored information about your personal data because doing so would adversely affect its “intellectual property”? 

It will be absolutely amazing to watch how this issue plays out, and see just what someone with Max’s media talent is able to do with the answers once they become public. 

The result may well impact the whole industry for a long time to come.

Meanwhile, students of these matters would do well to look at Max’s many complaints:

All of the numbered complaints below were filed with the Irish DPC; each is available as a Complaint (PDF) with Attachments (ZIP).

01 · 18-AUG-2011 · Pokes. Pokes are kept even after the user “removes” them.

02 · 18-AUG-2011 · Shadow Profiles. Facebook collects data about people without their knowledge. This information is used to supplement existing profiles and to create profiles of non-users.

03 · 18-AUG-2011 · Tagging. Tags are used without the specific consent of the user; users have to “untag” themselves (opt-out). (Facebook has since announced changes.)

04 · 18-AUG-2011 · Synchronizing. Facebook gathers personal data, e.g. via its iPhone app or the “friend finder”, and uses it without the consent of the data subjects.

05 · 18-AUG-2011 · Deleted Postings. Postings that had been deleted showed up in the set of data received from Facebook.

06 · 18-AUG-2011 · Postings on Other Users’ Pages. Users cannot see the settings under which content they post on others’ pages is distributed.

07 · 18-AUG-2011 · Messages. Messages (including chat messages) are stored by Facebook even after the user “deletes” them, meaning that direct communication on Facebook can never be deleted.

08 · 18-AUG-2011 · Privacy Policy and Consent. The privacy policy is vague, unclear and contradictory. If European and Irish standards are applied, consent to the privacy policy is not valid.

09 · 18-AUG-2011 · Face Recognition. The new face-recognition feature is a disproportionate violation of the users’ right to privacy; proper information and unambiguous consent are missing.

10 · 18-AUG-2011 · Access Request. Access requests have not been answered fully; many categories of information are missing.

11 · 18-AUG-2011 · Deleted Tags. Tags “removed” by the user are only deactivated, and remain saved by Facebook.

12 · 18-AUG-2011 · Data Security. In its terms, Facebook says it does not guarantee any level of data security.

13 · 18-AUG-2011 · Applications. Applications of “friends” can access a user’s data, with no guarantee that those applications follow European privacy standards.

14 · 18-AUG-2011 · Deleted Friends. All removed friends are stored by Facebook.

15 · 18-AUG-2011 · Excessive Processing of Data. Facebook hosts enormous amounts of personal data and processes all of it for its own purposes; it seems to be a prime example of illegal “excessive processing”.

16 · 18-AUG-2011 · Opt-Out. Facebook runs an opt-out system instead of the opt-in system required by European law.

24-AUG-2011 · Letter from the Irish DPC. (Letter (PDF))

15-SEPT-2011 · Letter to the Irish DPC concerning the new privacy policy and new settings on Facebook. (Letter (PDF))

17 · 19-SEPT-2011 · Like Button. The Like button creates extended user data that can be used to track users all over the Internet. There is no legitimate purpose for creating this data, and users have not consented to its use.

18 · 19-SEPT-2011 · Obligations as Processor. Facebook has certain obligations as a provider of a “cloud service” (e.g. not using third-party data for its own purposes, and processing data only when instructed to do so by the user).

19 · 19-SEPT-2011 · Picture Privacy Settings. The privacy settings regulate only who can see the link to a picture; the picture itself is “public” on the Internet, making the settings easy to circumvent.

20 · 19-SEPT-2011 · Deleted Pictures. Facebook deletes only the link to pictures; the pictures themselves remain public on the Internet for a certain period (more than 32 hours).

21 · 19-SEPT-2011 · Groups. Users can be added to groups without their consent, and may end up in groups that give others false impressions about them.

22 · 19-SEPT-2011 · New Policies. The policies change very frequently; users are not properly informed and are not asked to consent to the new policies.

New paper on Wi-Fi positioning systems

Regular readers will have come across (or participated in shaping) some of my work over the last year as I looked at the different ways that device identity and personal identity collide in mobile location technology.

In the early days following Google’s Street View WiFi snooping escapades, I became increasingly frustrated that public and official attention centered on Google’s apparently accidental collection of unencrypted network traffic when there was a much worse problem staring us in the face.

Unfortunately the deeper problem was also immensely harder to grasp since it required both a technical knowledge of networked devices and a willingness to consider totally unpredicted ways of using (or misusing) information.

As became clear from a number of conversations with other bloggers, even many highly technical people didn’t understand some pretty basic things - like the fact that personal device identifiers travel in the clear even on encrypted WiFi networks.  Nor was it natural for many in our community to think things through from the perspective of privacy threat analysis.
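The point about device identifiers is worth making concrete.  The toy sketch below is not real 802.11 parsing - the XOR “cipher” is a stand-in for WPA2 - but it captures the structural fact: link-layer encryption protects the frame body, while the header carrying the device’s MAC address goes out in cleartext for any passive listener.

```python
# Toy model of a WiFi frame (illustration only, not real 802.11 code):
# the header, including the source MAC address, is never encrypted;
# only the payload is protected by the link-layer cipher.
def build_frame(src_mac: str, payload: bytes, key: bytes) -> dict:
    """Simplified frame: cleartext header, 'encrypted' body (toy XOR cipher)."""
    keystream = bytes(key[i % len(key)] for i in range(len(payload)))
    return {
        "header": {"src_mac": src_mac},  # visible to any passive sniffer
        "body": bytes(a ^ b for a, b in zip(payload, keystream)),
    }

frame = build_frame("3c:22:fb:aa:bb:cc", b"private traffic", b"\x5a" * 16)

# Without the key the body is opaque...
assert frame["body"] != b"private traffic"
# ...but the persistent device identifier is readable by anyone in range:
print(frame["header"]["src_mac"])
```

Because the MAC address is stable over time, a listener who sees it in two places has linked the device - and usually its owner - across those places, key or no key.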

This got me to look at the issues even more closely, and I summarized my thinking at PII 2010 in Seattle.

A few months ago I ran into Dr. Ann Cavoukian, the Privacy Commissioner of Ontario, who was working on the same issues.  We decided to collaborate on a very in-depth look at both the technology and policy implications, aiming to produce a document that could be understood by those in the policy community and still serve as a call to the technical community to deal appropriately with the identity issues, seeking what Ann calls “win-win” solutions that favor both privacy and innovation.

Ann’s team deserves all the credit for the thorough literature research and clear exposition.  Ann expertly describes the policy issues and urges us as technologists to adopt Privacy By Design principles for our work. I appreciate having had the opportunity to collaborate with such an innovative group.  Their efforts give me confidence that even difficult technical issues with social implications can be debated and decided by the people they affect.

Please read WiFi Positioning Systems: Beware of Unintended Consequences and let us know what you think - I invite you to comment (or tweet or email me) on the technical, policy and privacy-by-design aspects of the paper.

Google opposing the “Right to be forgotten”

In Europe there has been a lot of discussion about “the Right to be Forgotten” (see, for example, Le droit à l’oubli sur Internet).  The notion is that after some time, information should simply fade away (counteracting digital eternity).    

In America, the authors of the Social Network Users’ Bill of Rights have called their variant of this the “Right to Withdraw”.  

Whatever words we use, the right, if recognized, would be a far-reaching game-changer - and, as I wrote here, a “cure as important as the introduction of antibiotics was in the world of medicine”.

Against this backdrop, the following report by Ciaran Giles of the Associated Press gives us much to think about. It appears Google is fighting head-on against the “Right to be Forgotten”.  It seems willing to take on any individual or government that dares to challenge the supposed right of its database and algorithms to define you through whatever has been written about you - forever, whether or not it’s true.

MADRID – Their ranks include a plastic surgeon, a prison guard and a high school principal. All are Spanish, but have little else in common except this: They want old Internet references about them that pop up in Google searches wiped away.

In a case that Google Inc. and privacy experts call a first of its kind, Spain’s Data Protection Agency has ordered the search engine giant to remove links to material on about 90 people. The information was published years or even decades ago but is available to anyone via simple searches.

Scores of Spaniards lay claim to a “Right to be Forgotten” because public information once hard to get is now so easy to find on the Internet. Google has decided to challenge the orders and has appealed five cases so far this year to the National Court.

Some of the information is embarrassing, some seems downright banal. A few cases involve lawsuits that found life online through news reports, but whose dismissals were ignored by media and never appeared on the Internet. Others concern administrative decisions published in official regional gazettes.

In all cases, the plaintiffs petitioned the agency individually to get information about them taken down.

And while Spain is backing the individuals suing to get links taken down, experts say a victory for the plaintiffs could create a troubling precedent by restricting access to public information.

The issue isn’t a new one for Google, whose search engine has become a widely used tool for learning about the backgrounds of potential mates, neighbors and co-workers. What it shows can affect romantic relationships, friendships and careers.

For that reason, Google regularly receives pleas asking that it remove links to embarrassing information from its search index or at least ensure the material is buried in the back pages of its results. The company, based in Mountain View, Calif., almost always refuses in order to preserve the integrity of its index.

A final decision on Spain’s case could take months or even years because appeals can be made to higher courts. Still, the ongoing fight in Spain is likely to gain more prominence because the European Commission this year is expected to craft controversial legislation to give people more power to delete personal information they previously posted online.

“This is just the beginning, this right to be forgotten, but it’s going to be much more important in the future,” said Artemi Rallo, director of the Spanish Data Protection Agency. “Google is just 15 years old, the Internet is barely a generation old and they are beginning to detect problems that affect privacy. More and more people are going to see things on the Internet that they don’t want to be there.”

Many details about the Spaniards taking on Google via the government are shrouded in secrecy to protect the privacy of the plaintiffs. But the case of plastic surgeon Hugo Guidotti vividly illustrates the debate.

In Google searches, the first link that pops up is his clinic, complete with pictures of a bare-breasted woman and a muscular man as evidence of what plastic surgery can do for clients. But the second link takes readers to a 1991 story in Spain’s leading El Pais newspaper about a woman who sued him for the equivalent of €5 million for a breast job that she said went bad.

By the way, if it really is true that nothing - not even truth - should ever interfere with the automated pronouncements of the search engine, does that mean robots have the right to pronounce any libel they want, even though we don’t?

Google Indoors featured on German TV

Germans woke up yesterday to a headline story on Das Erste’s TV morning show announcing a spiffy new Internet service: Google Indoors.

Das Erste’s lead-in and the Google Indoors spokesman

A spokesman said Google was extending its Street View offering so Internet users could finally see inside people’s homes.  Indeed, Google Indoors personnel were already knocking on doors, patiently explaining that if people had not already gone through the opt-out process, they had ”opted in”…

Google Indoors greeted by happy customer

… so the technicians needed to get on with their work:

Google Indoors camera-head enters apartment

Google’s deep concern about people’s privacy had led it to introduce features such as automated blurring of faces…

Automated privacy features and product placements with revenue shared with residents
 
… and the business model of the scheme was devilishly simple: the contents of people’s houses serve as product placements charged to advertisers, with 1/10 of a cent per automatically recognized brand name going to the residents themselves.  As shown below, people concerned about attracting thieves can choose to obfuscate products worth more than 5,000 euros - an example of the advanced privacy options and levels the service makes possible.

Google Indoors app experience

Check out the video.  Navigation features within houses are amazing!  From the amount of effort and wit put into it by a major TV show, I’d wager that even if Google’s troubles with Germany around Street View are over, its problems with Germans around privacy may not be. 

Frankly, Das Erste (meaning “The First”) has to be congratulated on one of the best-crafted April Fools you will ever have witnessed.  I don’t have the command of the German language or politics (!) to understand all the subtleties, but friends say the piece is teeming with irony.  And given Eric Schmidt’s stated policy of getting as close to “creepy” as possible, who wouldn’t find the video at least partly believable?

[Thanks to Kai Rannenberg for the heads up.]

Broken Laws of Identity lead to system’s destruction

Britain’s Home Office has posted a remarkable video, showing Immigration Minister Damian Green methodically pulverizing the disk drives that once held the centralized database that was to be connected to the British ID Cards introduced by Tony Blair.  

“What we’re doing today is CRUSHING the final remnants of the national identity card scheme - the disks and hard drives that held the information on the national identity register have been wiped and they’re crushed and reduced to bits of metal, so everyone can be absolutely sure that the identity scheme is absolutely dead and buried.

“This whole experiment of trying to collect huge amounts of private information on everyone in this country - and collecting on the central database - is no more, and it’s a first step towards a wider agenda of freedom.  We’re publishing the protection of freedoms bill as well, and what this shows is that we want to rebalance the security and freedom of the citizen.  We think that previously we have not had enough emphasis on peoples’ individual freedom and privacy, and we’re determined to restore the proper balance on that.”

Readers of Identityblog will recall that the British scheme was exceptional in breaking so many of the Laws of Identity at once.  It flouted the first law - User control and Consent - since citizen participation was mandatory.  It broke the second - Minimal Disclosure for a Constrained Use - since it followed the premise that as much information as possible should be assembled in a central location for whatever uses might arise…  The third law of Justifiable Parties was not addressed given the centralized architecture of the system, in which all departments would have made queries and posted updates to the same database and access could have been extended at the flick of a wrist.  And the fourth law of “Directed Identity” was a clear non-goal, since the whole idea was to use a single identifier to unify all possible information.

Over time opposition to the scheme began to grow and became widespread, even though the Blair and Brown governments claimed their polls showed majority support.  Many well-known technologists and privacy advocates attempted to convince them to consider privacy enhancing technologies and architectures that would be less vulnerable to security and privacy meltdown - but without success.  Beyond the scheme’s many technical deficiencies, the social fracturing it created eventually assured its irrelevance as a foundational element for the digital future.

Many say the scheme was an important issue in the last British election.  It certainly appears the change in government has left the ID card scheme in the dust, with politicians of all stripes eager to distance themselves from it.  Damian Green, who worked in television and understands it, does a masterful job of showing what his views are.  His video, posted by the Home Office, seems iconic.

All in all, the fate of the British ID Card and centralized database scheme is exactly what was predicted by the Laws of Identity:

Those of us who work on or with identity systems need to obey the Laws of Identity.  Otherwise, we create a wake of reinforcing side-effects that eventually undermine all resulting technology.  The result is similar to what would happen if civil engineers were to flout the law of gravity.  By following the Laws we can build a unifying identity metasystem that is universally accepted and enduring.

[Thanks to Jerry Fishenden (here and here) for twittering Damian Green's video]

People, meet Facebook HAL…

According to Irina Slutsky of Ad Age Digital, Facebook is testing the idea of deciding which ads to show you by pigeon-holing you based on your real-time conversations.

In the past, a user’s Facebook advertising would eventually be impacted by what’s on her wall and in her stream, but this was a gradual shift based on out-of-band analysis and categorization. 

Now, at least for participants in this test, it will become crystal clear that Facebook is looking at and listening to your activities; making assumptions about who you are and what you want; and using those assumptions to change how you are treated.

Irina writes:

This month — and for the first time — Facebook started to mine real-time conversations to target ads. The delivery model is being tested by only 1% of Facebook users worldwide. On Facebook, that’s a focus group 6 million people strong.

The closest Facebook has come to real-time advertising has been with its most recent ad offering, known as sponsored stories, which repost users’ brand interactions as an ad on the side bar. But for the 6 million users involved in this test, any utterance will become fodder for real-time targeted ads.

For example: Users who update their status with “Mmm, I could go for some pizza tonight,” could get an ad or a coupon from Domino’s, Papa John’s or Pizza Hut.

To be clear, Facebook has been delivering targeted ads based on wall posts and status updates for some time, but never on a real-time basis. In general, users’ posts and updates are collected in an aggregate format, adding them to target audiences based on the data collected over time. Keywords are a small part of that equation, but Facebook says sometimes keywords aren’t even used. The company said delivering ads based on user conversations is a complex algorithm continuously perfected and changed. The real aim of this test is to figure out if those kinds of ads can be served at split-second speed, as soon as the user makes a statement that is a match for an ad in the system.

With real-time delivery, the mere mention of having a baby, running a marathon, buying a power drill or wearing high-heeled shoes is transformed into an opportunity to serve immediate ads, expanding the target audience exponentially beyond usual targeting methods such as stated preferences through “likes” or user profiles. Facebook didn’t have to create new ads for this test and no particular advertiser has been tapped to participate — the inventory remains as is.

A user may not have liked any soccer pages or indicated that soccer is an interest, but by sharing his trip to the pub for the World Cup, that user is now part of the Adidas target audience. The moment between a potential customer expressing a desire and deciding on how to fulfill that desire is an advertiser sweet spot, and the real-time ad model puts advertisers in front of a user at that very delicate, decisive moment.

“The long-held promise of local is to deliver timely, relevant and measurable ads which drive actions such as commerce, so if Facebook is moving in this direction, it’s brilliant,” said Reggie Bradford, CEO of Facebook software and marketing company Vitrue. “This is a massive market shift everyone is pivoting toward, led by services such as Groupon. Facebook has the power of the graph of me and my friends placing them in the position to dominate this medium.” [More here]
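Stripped of the ranking and tuning the article alludes to, the idea being tested reduces to matching keywords in a status update against ad inventory the instant the update is posted.  A toy sketch - the inventory and ad copy here are entirely hypothetical, and Facebook describes its real system as a far more complex, continuously adjusted algorithm:

```python
# Purely illustrative real-time keyword-to-ad matching.
from typing import Optional

# Hypothetical inventory: keyword -> ad copy.
AD_INVENTORY = {
    "pizza": "Coupon: $5 off your next pizza order",
    "marathon": "Ad: lightweight running shoes",
    "baby": "Ad: diaper delivery subscription",
}

def match_ad(status_update: str) -> Optional[str]:
    """Return an ad the moment a status update mentions an inventory keyword."""
    words = {w.strip(".,!?") for w in status_update.lower().split()}
    for keyword, ad in AD_INVENTORY.items():
        if keyword in words:
            return ad
    return None  # no keyword hit: fall back to ordinary targeting

print(match_ad("Mmm, I could go for some pizza tonight"))
```

Even this crude version makes the cause-and-effect visible: say “pizza” and a pizza coupon appears, which is exactly the clarity about profiling discussed below.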

This test is important and will reveal a lot.  If the system is accurate and truly real-time, the way it works will become obvious to people.  It will be a simple cause-and-effect experience that leads to a clarity people have not had before around profiling.  This will be good.

However, once the analysis algorithms make mistakes in pigeon-holing users - which is inevitable - it is likely that they will alienate at least some part of the test population, raising their consciousness of the serious potential problems with profiling.  What will that do to their perception of Facebook?

A Facebook that looks more and more like HAL will not be accepted as “your universal internet identity” - as some of the more pathologically shortsighted dabblers in identity claim is already becoming the case.  Like other companies, Facebook has many simultaneous goals, and some of them conflict in fundamental ways.  More than anything else, in the long term, it is these conflicts that will limit Facebook’s role as an identity provider.

 

 

Netflix stung with privacy lawsuits

Via Archie Reed, this story by Greg Sandoval of ZDnet:

Netflix, the web’s top video-rental service, has been accused of violating US privacy laws in five separate lawsuits filed during the past two months, records show.

Each of the five plaintiffs alleges that Netflix hangs onto customer information, such as credit card numbers and rental histories, long after subscribers cancel their membership. They claim this violates the Video Privacy Protection Act (VPPA).

Netflix declined to comment.

In a four-page suit filed on Friday, Michael Sevy, a former Netflix subscriber who lives in Michigan, accuses Netflix of violating the VPPA by “collecting, storing and maintaining for an indefinite period of time, the video rental histories of every customer that has ever rented a DVD from Netflix”. Netflix also retains information that “identifies the customer as having requested or obtained specific video materials or services”, according to Sevy’s suit.

In a complaint filed 22 February, plaintiff Jason Bernal, a resident of Texas, claimed “Netflix has assumed the role of Big Brother and trampled the privacy rights of its former customers”.

Jeff Milans from Virginia filed the first of the five suits on 26 January. One of his attorneys, Bill Gray, told ZDNet Australia’s sister site CNET yesterday that the way he knows Netflix is preserving information belonging to customers who have left the company is from Netflix emails. According to Gray, in messages to former subscribers, Netflix writes something similar to “We’d love to have you come back. We’ve retained all of your video choices”.

Gray said that Netflix uses the customer data to market the rental service, but this is done while risking its customers’ privacy. Someone’s choice in rental movies could prove embarrassing, according to Gray, and should hackers ever get access to Netflix’s database, that information could be made publicly available.

“We want Netflix to operate in compliance of the law and delete all of this information,” Gray said.

All the plaintiffs filed their complaints in US District Court for the Northern District of California. Each has asked the court for class action status. [More here].

In Europe there has been a lot of discussion about “the Right to be Forgotten” (see, for example, Le droit à l’oubli sur Internet).  The notion is that after some time, information should simply fade away (counteracting digital eternity).  The Right to be Forgotten has to be one of the most important digital rights - not only for social networks, but for the Internet as a whole.

The authors of the Social Network Users’ Bill of Rights have called some variant of this the “Right to Withdraw”.  Whatever words we use, the Right is a far-reaching game-changer - a cure as important as the introduction of antibiotics was in the world of medicine.

I say “cure” because it helps heal problems that shouldn’t have been created in the first place. 

For example, Netflix does not need to - and should not - associate our rental patterns with our natural identities (e.g. with us as recognizable citizens).  Nor should any other company that operates in the digital world. 

Instead, following the precepts of minimal disclosure, the patterns should simply be associated with entities who have accounts and the right to rent movies.  The details of billing should not be linked to the details of ordering (this is possible using the new privacy-enhancing technologies).  From our point of view as consumers of these services, there is no reason the linking should be visible to anyone but ourselves.
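To make this less abstract, here is one toy way to picture the unlinking - a sketch only; real privacy-enhancing technologies such as U-Prove or Idemix rely on zero-knowledge cryptography, not the bare HMAC trick below.  The idea is to derive a separate pseudonym per business function from a secret only the customer holds:

```python
# Toy per-context pseudonyms: the rental system and the billing system each
# see a stable identifier, but the two identifiers cannot be joined to each
# other (or to a natural identity) without the customer's secret.
import hashlib
import hmac

def pseudonym(user_secret: bytes, context: str) -> str:
    """Derive a stable, context-specific identifier from a user-held secret."""
    return hmac.new(user_secret, context.encode(), hashlib.sha256).hexdigest()[:16]

secret = b"held only by the subscriber"
rental_id = pseudonym(secret, "rental-history")   # what the rental side sees
billing_id = pseudonym(secret, "billing")         # what the billing side sees

# The two databases hold unrelated identifiers...
assert rental_id != billing_id
# ...while the subscriber can always re-derive (and so link) both:
assert pseudonym(secret, "rental-history") == rental_id
```

Under such a scheme, a breach of the rental database would leak viewing patterns attached to opaque tokens rather than to recognizable citizens.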

All this requires a wee bit of a paradigm shift, you will say.  And you’re right.  Until that happens, we don’t have a lot of alternatives other than the Right to be Forgotten - especially when, as described in the lawsuits above, we have “chosen to withdraw.”

Six new authentication methods for Identityblog

Back in March 2006, when Information Cards were unknown and untested, it became obvious that the best way for me to understand the issues would be to put Information Cards onto Identityblog. 

I wrote the code in PHP, and a few people started trying out Information Cards.  Since I was being killed by spam at the time, I decided to try an experiment:  make it mandatory to use an Information Card to leave a comment.  It was worth a try.  More people might check out InfoCards.  And presto, my spam problems would go away.

So on March 18th 2006 I posted More hardy pioneers try out InfoCard, showing the first few people to give it all a whirl.

At first I thought my draconian “InfoCard-only” approach would get a lot of people’s hackles up and only last a few weeks.  But over time more and more people subscribed - probably because Identityblog was one of the few sites that actually used InfoCards in production.  And I never had spam again.

How many people joined using InfoCards?  Today I looked at my user list (see the screenshot below with PII fuzzed out).  The answer: 2958 people successfully subscribed and passed email verification.  There were then over 23,000 successful audited logins.  Not very many for a commercial site, but not bad for a technical blog.

Of course, as we all know, the powers at the large commercial sites have preferred the “NASCAR” approach of presenting a bunch of different buttons that redirect the user to, uh, something-or-other-that-can-be-phished, ahem, in spite of the privacy and security problems.  This part of the conversation will go on for some time, since these problems will become progressively more widespread as NASCAR gains popularity and the criminally inclined tune in to its potential as a gold mine… But that discussion is for another day. 

Meanwhile, I want to get my hands dirty and understand all the implications of the NASCAR-style approach.  So recently I subscribed to a nifty janrain service that offers a whole array of login methods.  I then integrated their stuff into Identityblog.  I promise, Scout’s Honor, not to do man-in-the-middle attacks or scrape your credentials, even though I probably could if I were so inclined.

From now on, when you need to authenticate at Identityblog, you will see a NASCAR-style login symbol.  See, for example, the LOG IN option at the top of this page. 

If you are not logged in and you want to leave a comment you will see:
 

Click on the string of icons and you get something like this:

 

Because many people continue to use my site to try out Information Cards, I’ve supplemented the janrain widget experience with the Pamelaware Information Card Option (it was pretty easy to make them coexist, and it leaves me with at least one unphishable alternative).  This will also benefit people who don’t like the idea of linking their identifiers all over the web.  I expect it will help researchers and students too.

One warning:  Janrain’s otherwise polished implementation doesn’t work properly with Internet Explorer - it leaves a spurious “Cross Domain Receiver Page” lurking on your desktop.  [Update - this was apparently my problem: see here]  Once I figure out how to contact them (not evident), I’ll ask janrain if and when they’re going to fix this.  Anyway, the system works - it’s just a bit messy because you have to manually close the stranded empty page.  The problem doesn’t appear in Firefox. 

It has already been a riot looking into the new technology and working through the implications.  I’ll talk about this as we go forward.

 

A Privacy Bill of Rights proposed for the US

The continuing deterioration of privacy and multi-party security due to short-sighted and unsustainable practices within our industry has begun to have the inevitable result, as reported by this article in the New York Times.

A Commerce Department task force called for the creation of a ‘Privacy Bill of Rights’ for online consumers and the establishment of an office within the department that would work to strengthen privacy policies in the United States and coordinate initiatives with other countries.

The department’s Internet Policy Task Force, in a report released on Thursday, said the “Privacy Bill of Rights” would increase transparency on how user information was collected online, place limits on the use of consumer data by companies and promote the use of audits and other forms of enforcement to increase accountability.

The new protections would expand on the framework of Fair Information Practice Principles that address data security, notice and choice — or the privacy policies many users agree to on Web sites — and rights to obtaining information on the Internet.

“The simple concept of notice and choice is not adequate as a basis for privacy protections,” said Daniel J. Weitzner, the associate administrator for the office of policy analysis and development at the Commerce Department’s National Telecommunications and Information Administration [emphasis mine - Kim].

The article makes the connection to the Federal Trade Commission’s “Do Not Track” proposal:

The F.T.C., in its report on online privacy this month, also called for improvements to the practice principles, but focused on installing a “do not track” mechanism that would allow computer users to opt out of having their information collected surreptitiously by third-party companies.

That recommendation caused concern in the online advertising industry, which has said that such a mechanism would hamper the industry’s growth and could potentially limit users’ access to free content online.

[The prospect of an online advertising industry deprived of its ability to surreptitiously collect information on us causes tears to well in my eyes.  I can't continue!  I need a Kleenex!]

The proposed Privacy Policy Office would work with the administration, the F.T.C. and other agencies on issues surrounding international and commercial data privacy issues but would not have enforcement authority.

“America needs a robust privacy framework that preserves consumer trust in the evolving Internet economy while ensuring the Web remains a platform for innovation, jobs and economic growth,” the commerce secretary, Gary F. Locke, said in a statement. “Self-regulation without stronger enforcement is not enough. Consumers must trust the Internet in order for businesses to succeed online.”

All of this is, in my view, just an initial reaction to behaviors that are seriously out of control.  As information leakage goes, the “surreptitious collection of information” to which the NYT refers is done at a scale that dwarfs WikiLeaks, even if the subjects of the information are mere citizens rather than lofty officials of government.

I will personally be delighted when it is enshrined in law that a company can no longer get you to click on a privacy policy like this one and claim it is consent to sell your location to anyone it pleases.

Gov2.0 and Facebook ‘Like’ Buttons

I couldn’t agree more with the points made by identity architect James Brown in a very disturbing piece he has posted at The Other James Brown.

James explains how the omnipresent Facebook widget works as a tracking mechanism:  if you are a Facebook subscriber, then whenever you open a page showing the widget, your visit is reported to Facebook.

You don’t have to do anything whatsoever - or click the widget - to trigger this report.  It is automatic.  Nor are we talking here about anonymized information or simple IP address collection.  The report contains your Facebook identity information as well as the URL of the page you are looking at.

If you are familiar with the way advertising beacons operate, your first reaction might be to roll your eyes and yawn.  After all, tracking beacons are all over the place and we’ve known about them for years.

But until recently, government web sites - or private web sites treating sensitive information of any kind - wouldn’t be caught dead using tracking beacons. 

What has changed?  Governments want to piggyback on the reach of social networks, and show they embrace technology evolution.  But do they have procedures in place to ensure that the mechanisms they adopt are actually safe?  Probably not, as the growing use of the Facebook ‘Like’ button on these sites demonstrates.  I doubt those who inserted the widgets have any idea how the underlying technology works - or the time or background to evaluate it in depth.  The result is a really serious privacy violation.

Governments need to be cautious about embracing tracking technology that betrays the trust citizens put in them.  James gives us a good explanation of the problem with Facebook widgets.  But other equally disturbing threats exist.  For example, should governments be developing iPhone applications when, to use them, citizens must agree that Apple has the right to reveal their phone’s identifier and location to anyone for any purpose?

In my view, data protection authorities are going to have to look hard at emerging technologies and develop guidelines on whether government departments can embrace technologies that endanger the privacy of citizens.

Let’s turn now to the details of James’ explanation.  He writes:

I am all for Gov2.0.  I think that it can genuinely make a difference and help bring public sector organisations and people closer together and give them new ways of working.  However, with it comes responsibility: the public sector needs to understand what it is signing its users up for.

In my post Insurers use social networking sites to identify risky clients last week I mentioned that NHS Choices was using a Facebook ‘Like’ button on its pages, which potentially allows Facebook to track what its users are doing on the site.  I have been reading a couple of posts on ‘Mischa’s ramblings on the interweb’, which unearthed this issue here and here, and digging into this a bit further to see for myself - to be honest, I really did not realise how invasive these social widgets can be.

Many services that government and public sector organisations offer are sensitive and personal. When browsing through public sector web portals I do not expect that other organisations are going to be able to track my visit – especially organisations such as Facebook which I use to interact with friends, family and colleagues.

This issue has now been raised by Tom Watson MP, and the response from the Department of Health on this issue of Facebook is:

“Facebook capturing data from sites like NHS Choices is a result of Facebook’s own system. When users sign up to Facebook they agree Facebook can gather information on their web use. NHS Choices privacy policy, which is on the homepage of the site, makes this clear.”

“We advise that people log out of Facebook properly, not just close the window, to ensure no inadvertent data transfer.”

I think this response is wrong on a number of different levels.  Firstly, at a personal level: when I browse the UK National Health Service web portal to read about health conditions I do not expect them to allow other companies to track that visit; I don’t really care what anybody’s privacy policy states, I don’t expect the NHS to allow Facebook to track my browsing habits on the NHS web site.

Secondly, I would suggest that the statement “Facebook capturing data from sites like NHS Choices is a result of Facebook’s own system” is wrong.  Facebook being able to capture data from sites like NHS Choices is a result of NHS Choices adding Facebook’s functionality to their site.

Finally, I don’t believe that the advice to “log out of Facebook properly, not just close the window, to ensure no inadvertent data transfer” is technically correct.

(Sorry to non-technical users but it is about to get a bit techy…)

I created a clean Virtual Machine and installed HTTPWatch so I could see the traffic in my browser when I load an NHS Choices page.  This machine has never been to Facebook, and definitely never logged into it.  When I visit the NHS Choices page on bowel cancer the following call is made to Facebook:

 

[Screenshot: AnonFacebook]

So Facebook knows someone has gone to the above page, but does not know who.

 

Now go to Facebook and log in without ticking the ‘Keep logged in’ checkbox, and a cookie is deposited on my machine with the following 2 fields in it (I’ve added xxxxxxxx to mask my unique id):

  • datr: s07-TP6GxxxxxxxxkOOWvveg
  • lu: RgfhxpMiJ4xxxxxxxxWqW9lQ

If I now close my browser and go back to Facebook, it does not log me in - but it knows who I am as my email address is pre-filled.

 

Now head back over to http://www.nhs.uk/conditions/cancer-of-the-colon-rectum-or-bowel/pages/introduction.aspx and when the Facebook page is contacted the cookie is sent to them with the data:

  • datr: s07-TP6GxxxxxxxxkOOWvveg
  • lu: RgfhxpMiJ4xxxxxxxxWqW9lQ

[Screenshot: FacebookNotLoggedIn]

 

So even if I am not logged into Facebook, and even if I do not click on the ‘Like’ button, the NHS Choices site is allowing Facebook to track me.

Sorry, I don’t think that is acceptable.

[Update:  I originally misread James' posting as saying the "keep me logged in" checkbox on the Facebook login page was a factor in enabling tracking - in other words that Facebook only used permanent cookies after you ticked that box.  Unfortunately this is not the case.  I've updated my comments in light of this information.

If you have authenticated to Facebook even once, the tracking widget will continue to collect information about you as you surf the web unless you manually delete your Facebook cookies from the browser.  This design is about as invasive of your privacy as you can possibly get...]
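The mechanics behind this are worth making concrete.  Here is a toy sketch (hypothetical and illustrative only - not Facebook’s actual code) of why an embedded widget receives identifying cookies: browsers attach cookies based on the destination domain of each request, regardless of which site embeds the content.

```python
# Toy model of a browser's cookie jar, keyed by destination domain.
# Illustrative only: the URLs and cookie values are stand-ins, and the
# cookie field names (datr, lu) are taken from James' capture above.

class BrowserSim:
    """Minimal simulation of per-domain cookie storage and sending."""
    def __init__(self):
        self.jar = {}  # domain -> {cookie_name: value}

    def set_cookie(self, domain, name, value):
        self.jar.setdefault(domain, {})[name] = value

    def fetch(self, url, referer=None):
        # Every cookie stored for the *target* domain rides along with the
        # request, together with the embedding page's URL in the Referer.
        domain = url.split("/")[2]
        return {"url": url, "referer": referer,
                "cookies": dict(self.jar.get(domain, {}))}

browser = BrowserSim()

# Logging in to Facebook once deposits persistent cookies:
browser.set_cookie("www.facebook.com", "datr", "s07-TP...")
browser.set_cookie("www.facebook.com", "lu", "Rgfhxp...")

# Later, an NHS Choices page embeds the 'Like' widget, so the browser
# silently fetches it - and the stored cookies plus the page URL go along:
request = browser.fetch(
    "http://www.facebook.com/plugins/like.php",
    referer="http://www.nhs.uk/conditions/cancer-of-the-colon-rectum-or-bowel/...")
print(request["cookies"])  # {'datr': 's07-TP...', 'lu': 'Rgfhxp...'}
```

The same model explains James’ capture: once the datr and lu cookies exist, every page embedding the widget triggers a request that carries them, whether or not you are logged in.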

 

U-Prove honored by International Association of Privacy Professionals

There was great news this week about the growing support for U-Prove Minimal Disclosure technology:  it received the top award in the technology innovation category from the International Association of Privacy Professionals - the world’s largest association of privacy professionals.

BALTIMORE — September 30, 2010 — Winners of the eighth annual HP-International Association of Privacy Professionals (IAPP) Privacy Innovation Awards were recognized today at the IAPP Privacy Dinner, held in conjunction with the IAPP Privacy Academy 2010.  The honorees include Symcor, Inc., Minnesota Privacy Consultants, and Microsoft Corporation.

The annual awards recognize exceptional integration of privacy and are judged from a broad field of entries. This year’s winners were selected by a panel of private and public sector privacy experts including Allen Brandt, CIPP, Corporate Counsel, Chief Privacy Official, Graduate Management Admission Council; Joanne McNabb, CIPP, CIPP/G, Chief, California Office of Privacy Protection; Susan Smith, CIPP, Americas Privacy Officer, Hewlett-Packard Company; and Florian Thoma, Chief Data Protection Officer, Siemens AG.

“On behalf of more than 7,000 privacy professionals across 50 countries, we applaud this year’s HP-IAPP Privacy Innovation Award winners,” said IAPP Executive Director Trevor Hughes.  “At a time when privacy is driving significant conversation and headlines, this year’s results show how protecting privacy and assuring organizational success go hand-in-hand.”

“HP is pleased to sponsor an award that advances privacy worldwide,” said Hewlett Packard Company Americas Privacy Officer Susan Smith.

In the Large Organization category (more than 5,000 employees), Symcor, Inc. won for its “A-integrity Process,” which is designed to manage and protect sensitive financial information that is ultimately presented to customers in the form of client statements. As the largest transactional printer in Canada, Symcor provides statement-to-payment services for some of Canada’s major financial, telecommunications, insurance, utility and payroll institutions. A-integrity established a new standard in data protection with an industry-leading error rate of less than one per million statements produced. Symcor has been improving on this rate each year.  A robust privacy incident management process was also developed to standardize error identification and resolution. Symcor’s dedicated Privacy Office provides overall governance to the process and has instilled a deep culture of privacy awareness throughout the organization.

The winner in the Small Organization category (fewer than 5,000 employees), is Minnesota Privacy Consultants (MPC). MPC helps multinational corporations and government agencies operationalize their governance of personal data. The organization won for its Privacy Maturity Model (PMM), a benchmarking tool that evaluates privacy program maturity and effectiveness. Using the Generally Accepted Privacy Principles (GAPP) framework as the basis but recognizing that the GAPP does not provide for degrees of compliance and maturity of a privacy program, MPC cross-referenced the 73 subcomponents of the GAPP framework against the six “maturity levels” of the Capability Maturity Model (CMM) developed by Carnegie Mellon University. From this, the Privacy Maturity Model (PMM) was developed to define specific criteria and weighting to various control areas based on prevailing statistics in the areas of data breaches and security enforcement actions worldwide. The Innovation Award judges recognized MPC for its successful and sophisticated approach to a very difficult problem.

Microsoft Corporation received the honor in the Technology category for “U-Prove”, a privacy-enhancing identity management technology that helps enable people to protect their identity-related information. The technology is based on advanced cryptographic protocols designed for electronic transactions and communications. It was acquired by Microsoft in 2008 and released into Proof of Concept as well as donated to the Open Source community in 2010. U-Prove technology has similar characteristics of conventionally used technologies, such as PKI certificates and SAML tokens, with additional privacy and security benefits. Through a technique of minimal disclosure, U-Prove tokens enable individuals to disclose just the information needed by applications and services, but nothing more, during online transactions. Online service providers, such as businesses and governments that are involved in transactions with individuals cannot link or collect a profile of activities. U-Prove effectively meets the security and privacy requirements of many identity systems—most notably national e-ID schemes now being contemplated by world governments. U-Prove has already won the Kuppinger Cole prize for best innovation in European identity projects and is now this year’s recipient of the HP-IAPP Privacy Innovation Award in technology.

About the IAPP
The International Association of Privacy Professionals is the world’s largest association of privacy professionals with more than 7,400 members across 50 countries. The IAPP helps to define, support and improve the privacy profession globally through networking, education and certification.  More information about the IAPP is available at www.privacyassociation.org.

Kim Komando on location services

Kim Komando has a great piece at USA Today where she explains geotagging through the experiences of two women who also happened to be using the foursquare location service.  This article is one of the first of what I expect will become a torrent as the media learns the implications of geolocation:

Sylvia was dining out with a friend. The restaurant manager interrupted her dinner to tell her she had a phone call. It was from a complete stranger who tracked her online. He had described her to the manager.

Louise was at a bar with colleagues. A stranger began talking to her. He knew a lot about her personal interests. Then, he pulled out his phone and showed her a photo. It was a picture of Louise that he found online.

Both of these stories are true. And they’re very unnerving. There is also a common thread. The women were tracked by something known as “geotagging.”

Geotagging adds GPS coordinates to your online posts or photos. You may be exposing this information without even knowing it. Geotagging is particularly popular with photos; many smartphones automatically geotag photos.

Photos can be plotted on a map for easy organization and viewing.  But if you post photos online, you could reveal your home address or your child’s school.  You’ve given a criminal a treasure map.
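To see why a geotag amounts to a treasure map: cameras store GPS coordinates in EXIF metadata as degrees, minutes and seconds plus a hemisphere reference, which converts directly to a precise decimal position.  A minimal sketch with made-up coordinate values:

```python
# Illustrative sketch: EXIF GPSLatitude/GPSLongitude are stored as
# (degrees, minutes, seconds) plus an N/S or E/W reference tag.
# Converting them to decimal degrees pinpoints where a photo was taken.

def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style degrees/minutes/seconds to signed decimal degrees."""
    decimal = degrees + minutes / 60 + seconds / 3600
    return -decimal if ref in ("S", "W") else decimal

# Example values (made up for illustration):
lat = dms_to_decimal(40, 26, 46.8, "N")
lon = dms_to_decimal(79, 58, 55.9, "W")
print(round(lat, 4), round(lon, 4))  # 40.4463 -79.9822
```

A few seconds of arc correspond to tens of meters on the ground - easily enough to identify a house.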

Layers of information

A geotagged photo is the most obvious threat to your privacy and safety. But, in Louise’s and Sylvia’s cases, there was more going on. Both used the location-based social-networking service Foursquare.

Location-based social-networking services are designed to help you meet up with family and friends. When you’re out and about, you check in with the site. At the coffee shop? Check in so friends nearby can find you.

Unless you have a stalker, these services aren’t particularly dangerous on their own. You need to think about the layers of information you leave online. As you use more services, it’s easier for criminals to track you.

Let’s say you post a photo of your new house to a photo site. The photo is geotagged. You’ve linked your photo account to Facebook. And you use Foursquare or Twitter on the go; updates are sent to your Facebook account.

One night you go to the movies. You send a tweet as you wait in line. When you get home, you discover you’ve been robbed. The burglar used your photo to find your address. He learned more about you on Facebook. Your tweet tipped him off to your location.

Thanks to a movie site, he knew exactly how long the movie ran. He scoped out your house and neighborhood on Google Street View. He devised a plan to get in and out fast and undetected.

Protecting yourself

If you use these services, protect yourself. Use a little common sense. First, don’t geotag photos of your house or your children. In fact, it’s best to disable geotagging until you specifically need it.

On the iPhone 4, tap Settings, then General, and then Location Services. You can select which applications can access GPS data. These options aren’t available in older iPhone software, so tap Settings, then General, then Reset. Tap Reset Location Warnings. You’ll be prompted if an application wants to access GPS data. You can then disallow it.

In Android, start the Camera app and open the menu at the left. Go into the settings and turn off geotagging or location storage, depending on which version of Android is on your phone. On a BlackBerry, click the Camera icon. Press the Menu button and select Options. Set the Geotagging option to Disabled. Save your settings.

You can also use an EXIF editor to remove location information from photos. EXIF data is information about a photo embedded in the file. Visit www.komando.com/news for free EXIF editors.

Don’t check in on Foursquare or similar sites from home. And make sure your Twitter program is not including GPS coordinates in your tweets.

For many people, Facebook ties everything together. Reconsider linking other accounts to Facebook. Pay close attention to your privacy settings. Only trusted friends should know when you are or aren’t at home. Finally, if you have contacts you don’t fully trust, it’s time to do a purge.

[Kim Komando hosts the nation's largest talk radio show about computers and the Internet. To get the podcast or find the station nearest you, visit www.komando.com/listen. To subscribe to Kim's free e-mail newsletters, sign up at www.komando.com/listen. Contact her at C1Tech@gannett.com.]

It is well worth reading Foursquare’s privacy policy - which is well thought out and makes Foursquare a paragon of virtue when compared to the contract with the devil you sign when you install iTunes, for example.  I’ll explore this more going forward.

Blizzard backtracks on real-names policy

A few days ago I mentioned the outcry when Blizzard, publisher of the World of Warcraft (WoW) multi-player Internet game, decided to make gamers reveal their offline identities and identifiers within their fantasy gaming context. 

I also described Blizzard’s move as the “kookiest” flouting yet of the Fourth Law of Identity (Contextual separation through unidirectional identifiers). 

Today the news is all about Blizzard’s first step back from the mistaken plan that appears to have completely misunderstood its own community.

CEO Mike Morhaime seems to be on the right track with the first part of his message:

“I’d like to take some time to speak with all of you regarding our desire to make the Blizzard forums a better place for players to discuss our games. We’ve been constantly monitoring the feedback you’ve given us, as well as internally discussing your concerns about the use of real names on our forums. As a result of those discussions, we’ve decided at this time that real names will not be required for posting on official Blizzard forums.

“It’s important to note that we still remain committed to improving our forums. Our efforts are driven 100% by the desire to find ways to make our community areas more welcoming for players and encourage more constructive conversations about our games. We will still move forward with new forum features such as the ability to rate posts up or down, post highlighting based on rating, improved search functionality, and more. However, when we launch the new StarCraft II forums that include these new features, you will be posting by your StarCraft II Battle.net character name + character code, not your real name. The upgraded World of Warcraft forums with these new features will launch close to the release of Cataclysm, and also will not require your real name.”

Then he goes weird again.  He seems to have a fantasy of his own:  that he is running Facebook…

“I want to make sure it’s clear that our plans for the forums are completely separate from our plans for the optional in-game Real ID system now live with World of Warcraft and launching soon with StarCraft II. We believe that the powerful communications functionality enabled by Real ID, such as cross-game and cross-realm chat, make Battle.net a great place for players to stay connected to real-life friends and family while playing Blizzard games. And of course, you’ll still be able to keep your relationships at the anonymous, character level if you so choose when you communicate with other players in game. Over time, we will continue to evolve Real ID on Battle.net to add new and exciting functionality within our games for players who decide to use the feature.”

Don’t get me wrong.  As convoluted as this thinking is, it’s one big step forward (after two giant steps backward) to make linking of offline identity to gaming identity “optional”. 

And who knows?  Maybe Mike Morhaime really does understand his users…  He may be right that lots of gamers are totally excited at the prospect of their parents, lovers and children joining Battle.net to stay connected with them while they are playing WoW!  Facebook doesn’t stand a chance!

 

Trusting Mobile Technology

Jacques Bus recently shared a communication he has circulated about the mobile technology issues I’ve been exploring.  To European readers he will need no introduction:  as Head of Unit for the European Commission’s Information and Communication Technologies (ICT) Research Programme he oversaw and gave consistency to the programs shaping Europe’s ICT research investment.  Thoroughly expert and equally committed to results, Jacques’ influence on ICT policy thinking is clearly visible in Europe.   Jacques is now an independent consultant on ICT issues.

On June 20, Kim Cameron [KC] posted a piece on this blog titled: Harvesting phone and laptop fingerprints for its database - Google says the user’s device sends a request to its location server with a list of all MAC addresses currently visible to it. Does that include yours?

It was the start of a series of communications that reads like a thriller.  Unfortunately the victim is not imaginary - it is you and me.

He started with an example of someone attending a conference while subscribed to a geo-location service. “I [KC] argued that the subscriber’s cell phone would pick up all the MAC addresses (which serve as digital fingerprints) of nearby phones and laptops and send them in to the centralized database service, which would look them up and potentially use the harvested addresses to further increase its knowledge of people’s behavior - for example, generating a list of those attending the conference.”

He then explained how Google says its location database works, showing that “certainly the MAC addresses of all nearby phones and laptops are sent in to the geo-location server - not simply the MAC addresses of wireless access points that are broadcasting SSIDs.”
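As a rough sketch of what such a report might contain - the field names below are invented for illustration and are not Google’s actual wire format - the client simply bundles up every MAC address it can see and sends the list to the location server:

```python
import json

# Hypothetical sketch of a geo-location lookup request.  Field names are
# invented for illustration; the point is that the payload carries the MAC
# addresses of *every* nearby device the client can see, not just the
# access point it is actually connected to.
visible_macs = [
    "00:1a:2b:3c:4d:5e",  # a wireless access point broadcasting an SSID
    "a4:5e:60:f1:22:07",  # a nearby laptop
    "d8:30:62:9c:41:aa",  # a nearby phone
]
request_body = json.dumps(
    {"wifi_devices": [{"mac_address": m} for m in visible_macs]})
print(request_body)
```

Every one of those addresses is a stable hardware fingerprint, which is what makes the harvesting KC describes possible.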

His first post was followed by others, including a reference to an excellent piece by Niraj Chokshi in The Atlantic, demonstrating that Google’s messages in its application descriptions are, to say the least, not in line with its PR messages to Chokshi.

On 2 July, a discussion of Apple iTunes follows in KC’s post Update to iTunes comes with privacy fibs, whose main message is: As the personal phone evolves it will become increasingly obvious that groups within some of our best tech companies have built businesses based on consciously crafted privacy fibs.

The new iTunes policy says: By using this software in connection with an iTunes Store account, you agree to the latest iTunes Store Terms of Service, which you may access and review from the home page of the iTunes Store. So iTunes says: Our privacy policy is that you need to read another privacy policy. This other policy states:

We also collect non-personal information - data in a form that does not permit direct association with any specific individual. We may collect, use, transfer, and disclose non-personal information for any purpose. The following are some examples of non-personal information that we collect and how we may use it:

  • We may collect information such as occupation, language, zip code, area code, unique device identifier, location, and the time zone where an Apple product is used so that we can better understand customer behavior and improve our products, services, and advertising.

I think KC rightly asks the question: What does downloading a song have to do with giving away your location???

Clearly Apple would call its unique device identifier - and its location - “non-personal data”.  However, in Europe personal data means any information relating to an identified or identifiable natural person.  Under this EU definition, even Google CEO Eric Schmidt would supposedly disagree with Apple, given his statement in a recent speech quoted by KC: Google is making the Android phone, we have the Kindle, of course, and we have the iPad. Each of these form factors with the tablet represent in many ways your future…: they’re personal. They’re personal in a really fundamental way. They know who you are. So imagine that the next version of a news reader will not only know who you are, but it’ll know what you’ve read…and it’ll be more interactive. And it’ll have more video. And it’ll be more real-time. Because of this principle of “now.”

We could go on with the post of 3 July: The current abuse of personal device identifiers by Google and Apple is at least as significant as the problems I discussed long ago with Passport. He is referring to a story by Todd Bishop at TechFlash - here I refer readers to the original thriller rather than trying to summarize it for them.

What is absolutely clear from the above is how dependent we all are on mobile technology. It is also clear that to enjoy the personal and location services we request, one needs to combine data on the person and his location. However, I am convinced that in the complex society we live in, we will eventually only accept services and infrastructure if we can trust them to work as we expect, including in the handling of our personal data. But trust can only be given if the services and infrastructure are trustworthy. O’Hara and Hall describe trust on the Web very well, based on fundamental principles. They decompose trust into local trust (personal experience through high-bandwidth interactions) and global trust (outsourcing our trust decisions to trusted institutions, for example via accepted roles established through training, witnessing, or certification). Reputation is usually a mix of the two.

For trust to be built up, the transparency and accountability of data collectors and processors are essential. As local trust is particularly difficult in global transactions over the Web, we need stronger global trust through a-priori assurances of compliance with legal obligations on privacy protection, transparency, auditing, and effective law enforcement and redress. These are basic principles on which our free and developed societies are built, and which are necessary to guarantee creativity, social stability, economic activity and growth.

One can conclude from KC’s posts that few of these essential elements are present in the current mobile world.

I agree that the legal solutions he proposes are small steps in the right direction and should be pursued. However, essential action at the level of legislators is urgently needed. Data Protection authorities in Europe are well aware of this, as is demonstrated in The Future of Privacy. Unfortunately these solutions are slow to implement, whilst commercial developments are very fast.

Technology solutions, like WiFi protocols that appropriately randomize MAC addresses and also protect other personal data, are also urgently needed to enable the development of trustworthy solutions that are competitive, and methods should be sought to standardize such results quickly.
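As a sketch of what such randomization could look like at the software level (an illustration only, not a description of any shipping WiFi stack): a device can periodically generate a fresh locally administered MAC address, so passive scanners cannot correlate its appearances over time.

```python
import secrets

def random_mac() -> str:
    """Return a random locally administered, unicast MAC address.

    Setting bit 0x02 of the first octet marks the address as locally
    administered (software-assigned, so it cannot collide with a
    vendor-assigned MAC); clearing bit 0x01 keeps it unicast.
    """
    octets = bytearray(secrets.token_bytes(6))
    octets[0] = (octets[0] | 0x02) & 0xFE
    return ":".join(f"{b:02x}" for b in octets)

print(random_mac())  # e.g. "6a:3f:09:c2:77:10" - different on every call
```

A device rotating through such addresses between associations would give location databases nothing stable to key on.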

However, the gigantic global centralization of data collection and the possibility of massive correlation is frightening, and may make DP Commissioners, even acting as a group in Europe, look helpless. The data is already out there and usable.

What I wonder: is all this data available for law enforcers under warrant and accepted as legal proof in court? And if not, how can it be possible that private companies can collect it? Don’t we need some large legal test cases?

And let’s not forget one thing: any government action must be as global as possible given the broad international presence of the most important companies in this field, hence the proposed standards of the joint international DP authorities in their Madrid Declaration.

Smart questions and conclusions.

 

How to anger your most loyal supporters

The gaming world is seething after what is seen as an egregious assault on privacy by World of Warcraft (WoW), one of the most successful multiplayer role-playing games yet devised.  The issue?  Whereas players used to know each other through their WoW “handles”, the company is now introducing a system called “RealID” that forces players to reveal their offline identities within the game’s fantasy context.  Commentators think the company wanted to turn its user base into a new social network.  Judging from the massive hullabaloo amongst even its most loyal supporters, the concept may be doomed.

To get an idea of the dimensions of the backlash just type “WoW RealID” into a search engine.  You’ll hit paydirt:

The RealID feature is probably the kookiest example yet of breaking the Fourth Law of Identity - the law of Directed Identity.   This law articulates the requirement to scope digital identifiers to the context in which they are used.  In particular, it explains why universal identifiers should not be used where a person’s relationship is to a specific context.  The law arises from the need for “contextual separation” - the right of individuals to participate in multiple contexts without those contexts being linkable unless the individual wants them to be.
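One way to make the Fourth Law concrete is a minimal sketch of context-scoped identifiers: derive a distinct pseudonym per context from a server-side secret, so identifiers from two contexts cannot be linked without that secret. The function and names here are hypothetical, not Blizzard’s or anyone else’s actual design.

```python
import hashlib
import hmac

def directed_id(master_secret: bytes, account: str, context: str) -> str:
    """Derive a per-context pseudonym for an account.

    The same account yields different, unlinkable identifiers in
    different contexts; only the holder of master_secret can relate them.
    """
    message = f"{account}|{context}".encode()
    return hmac.new(master_secret, message, hashlib.sha256).hexdigest()[:16]

secret = b"server-side master secret"
wow_id = directed_id(secret, "alice@example.com", "wow")
sc2_id = directed_id(secret, "alice@example.com", "sc2")
assert wow_id != sc2_id  # the two contexts cannot be linked by identifier
```

With a scheme like this a game company gets a stable handle per player per game, while players keep the contextual separation the law demands.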

The company seems to have initially inflicted Real ID onto everyone, and then backed off by describing the lack of “opt-in” as a “security flaw”, according to this official post on wow.com:

To be clear, everyone who does not have a parentally controlled account has in fact opted into Real ID, due to a security flaw. Addons have access to the name on your account right now. So you need to be very careful about what addons you download — make sure they are reputable. In order to actually opt out, you need to set up parental controls on your account. This is not an easy task. Previous to the Battle.net merge, you could just go to a page and set them up. Done. Now, you must set up an account as one that is under parental control. Once your account is that of a child’s (a several-step process), your settings default to Real ID-disabled. Any Real ID friends you have will no longer be friends. In order to enable it, you need to check the Enable Real ID box.

Clearly there are security problems that emerge from squishing identifiers together and breaking cross-context separation.  Mary Landsman has a great post on her Antivirus Software Blog called “WoW Real ID: A Really Bad Idea”:

Here are a couple of snippets about the new Battle.net Real ID program:

“…when you click on one of your Real ID friends, you will be able to see the names of his or her other Real ID friends, even if you are not Real ID friends with those players yourself.”

“…your mutual Real ID friends, as well as their Real ID friends, will be able to see your first and last name (the name registered to the Battle.net account).”

“…Real ID friends will see detailed Rich Presence information (what character the Real ID friend is playing, what they are doing within that game, etc.) and will be able to view and send Broadcast messages to other Real ID friends.”

And this is all cross-game, cross-realm, and cross-alts. Just what already heavily targeted players need, right? A merge of WoW/Battle.net/StarCraft with Facebook-style social networking? Facepalm might have been a better term to describe Real ID given its potential for scams. Especially since Blizzard rolled out the change without any provision to protect minors whatsoever:

Will parents be able to manage whether their children are able to use Real ID?
We plan to update our Parental Controls with tools that will allow parents to manage their children’s use of Real ID. We’ll have more details to share in the future.

Nice. So some time in the future, Blizzard might start looking at considering security seriously. In the meantime, the unmanaged Real ID program makes it even easier for scammers to socially engineer players AND it adds potential stalking to the list of concerns. With no provision to protect minors whatsoever.

Thanks, Blizz…Not!

And Kyth has a must-read post at stratfu called Deeply Disappointed with the ‘RealID’ System where he explains how RealID should have been done.  His ideas are a great implementation of the Fourth Law.

Using an alias would be fine, especially if the games are integrated in such a way that you could pull up a list of a single Battle.net account’s WoW/D3 characters and SC2 profiles. Here is how the system should work:

  • You have a Battle.net account. The overall account has a RealID Handle. This Handle defaults to being your real name, but you can easily change it (talking single-click retard easy here) to anything you desire. Mine would be [WGA]Kazanir, just like my Steam handle is.
  • Each of your games is attached to your Battle.net account and thereby to your RealID. Your RealID friends can see you when you are online in any of those games and message you cross-game, as well as seeing a list of your characters or individual game profiles. Your displayed RealID is the handle described above.
  • Each game contains either a profile (SC2) or a list of characters. A list of any profiles or characters attached to your Battle.net account would be easily accessible from your account management screen. Any of these characters can be “opted out” of your RealID by unchecking them from the list. Thus, my list might look like this:

    X Kazanir.wga - SC2 Profile
    X Kazanir - WoW - 80 Druid Mal’ganis
    X Gidgiddoni - WoW - 60 Warrior Mal’ganis
    _ Kazbank - WoW - 2 Hunter Mal’ganis
    X Kazabarb - D3 - 97 Barbarian US East
    _ Kazahidden - D3 - 45 Monk US West

    In this way I can play on characters (such as a bank alt or a secret D3 character with my e-girlfriend) without forcibly having their identity broadcast to my friends. When I am online on any of the characters I have unchecked, my RealID friends will be able to message me but those characters will not be visible even to RealID friends. The messages will merely appear to come from my RealID and the “which character is he on” information will not be available.

  • Finally, the RealID messenger implementation in every game should be able to hide my presence from view just like any instant messenger application can right now. I shouldn’t be forced to be present with my RealID just because I am playing a game — there should be a universal “pretend to not be online” button available in every Battle.net enabled game.

These are the most basic functionality requirements that should be implemented by anyone with an IQ over 80 who designs a system like this.

Check out the comments in response to his post.  I would have to call his really sensible and informed proposal “wildly popular”.  It will be really interesting to see how this terrible blunder by such a creative company ends up.

 [Thanks to Joe Long for heads up]

“Microsoft Accuses Apple, Google of Attempted Privacy Murder”

Ms. Smith at Network World made it to the home page of digg.com yesterday when she reported on my concerns about the collection and release of information related to people’s movements and location. 

I want to set the record straight about one thing: the headline.  It’s not that I object to the term “attempted privacy murder” - it pretty much sums things up. The issue is just that I speak as Kim Cameron - a person, not a corporation.  I’m not in marketing or public relations - I’m a technologist who has come to understand that we must all work together to ensure people are able to trust their digital environment.  The ideas I present here are the same ones I apply liberally in my day job, but this is a personal blog.

Ms. Smith is as precise as she is concise:

A Microsoft identity guru bit Apple and smacked Google over mobile privacy policies. Once upon a time, before working for Microsoft, this same man took MS to task for breaking the Laws of Identity.

Kim Cameron, Microsoft’s Chief Identity Architect in the Identity and Security Division, said of Apple, “If privacy isn’t dead, Apple is now amongst those trying to bury it alive.”

What prompted this was when Cameron visited the Apple App store to download a new iPhone application. When he discovered Apple had updated its privacy policy, he read all 45 pages on his iPhone. Page 37 lets Apple users know:

Collection and Use of Non-Personal Information

We also collect non-personal information - data in a form that does not permit direct association with any specific individual. We may collect, use, transfer, and disclose non-personal information for any purpose. The following are some examples of non-personal information that we collect and how we may use it:

· We may collect information such as occupation, language, zip code, area code, unique device identifier, location, and the time zone where an Apple product is used so that we can better understand customer behavior and improve our products, services, and advertising.

The MS identity guru put the smack down not only on Apple, but also on Google, writing in his blog, “Maintaining that a personal device fingerprint has ‘no direct association with any specific individual’ is unbelievably specious in 2010 - and even more ludicrous than it used to be now that Google and others have collected the information to build giant centralized databases linking phone MAC addresses to house addresses. And - big surprise - my iPhone, at least, came bundled with Google’s location service.”

MAC in this case refers to Media Access Control addresses associated with specific devices and one of the types that Google collected. Google admits to collecting MAC addresses of WiFi routers, but denies snagging MAC addresses of laptops or phones. Google is under mass investigation for its WiFi blunder.

Apple’s new policy is also under fire from two Congressmen who gave Apple until July 12th to respond. Reps. Edward J. Markey (D-Mass.) and Joe Barton (R-Texas) sent a letter to Apple CEO Steve Jobs asking for answers about Apple gathering location information on its customers.

As far as Cameron goes, Microsoft’s Chief Identity Architect seems to call out anyone who violates privacy. That includes Microsoft. According to Wikipedia’s article on Microsoft Passport:

“A prominent critic was Kim Cameron, the author of the Laws of Identity, who questioned Microsoft Passport in its violations of those laws. He has since become Microsoft’s Chief Identity Architect and helped address those violations in the design of the Windows Live ID identity meta-system. As a consequence, Windows Live ID is not positioned as the single sign-on service for all web commerce, but as one choice of many among identity systems.”

Cameron seems to believe location based identifiers and these changes of privacy policies may open the eyes of some people to the, “new world-wide databases linking device identifiers and home addresses.”

 

Microsoft identity guru questions Apple, Google on mobile privacy

Todd Bishop at TechFlash published a comprehensive story this week on device fingerprints and location services: 

Kim Cameron is an expert in digital identity and privacy, so when his iPhone recently prompted him to read and accept Apple’s revised terms and conditions before downloading a new app, he was perhaps more inclined than the rest of us to read the entire privacy policy — all 45 pages of tiny text on his mobile screen.

It’s important to note that apart from writing his own blog on identity issues — where he told this story — Cameron is Microsoft’s chief identity architect and one of its distinguished engineers. So he’s not a disinterested industry observer in the broader sense. But he does have extensive expertise.

And he is publicly acknowledging his use of an iPhone, after all, which should earn him at least a few points for neutrality…

At this point I’ll butt in and editorialize a little.  I’d like to expand on Todd’s point for the benefit of readers who don’t know me very well:  I’m not critical of Street View WiFi because I am anti-Google.  I’m not against anyone who does good technology.  My critique stems from my work as a computer scientist specializing in identity, not from my role in a particular company.  In short, Google’s Street View WiFi is bad technology, and if the company persists in it, it will be one of the identity catastrophes of our time.

When I figured out the Laws of Identity and understood that Microsoft had broken them, I was just as hard on Microsoft as I am on Google today.  In fact, someone recently pointed out the following reference in Wikipedia’s article on Microsoft’s Passport:

“A prominent critic was Kim Cameron, the author of the Laws of Identity, who questioned Microsoft Passport in its violations of those laws. He has since become Microsoft’s Chief Identity Architect and helped address those violations in the design of the Windows Live ID identity meta-system. As a consequence, Windows Live ID is not positioned as the single sign-on service for all web commerce, but as one choice of many among identity systems.”

I hope this has earned me some right to comment on the current abuse of personal device identifiers by Google and Apple - which, if their FAQs and privacy policies represent what is actually going on, is at least as significant as the problems I discussed long ago with Passport.  

But back to Todd: 

At any rate, as Cameron explained on his IdentityBlog over the weekend, his epic mobile reading adventure uncovered something troubling on Page 37 of Apple’s revised privacy policy, under the heading of “Collection and Use of Non-Personal Information.” Here’s an excerpt from Apple’s policy, Cameron’s emphasis in bold.

We also collect non-personal information — data in a form that does not permit direct association with any specific individual. We may collect, use, transfer, and disclose non-personal information for any purpose. The following are some examples of non-personal information that we collect and how we may use it:

We may collect information such as occupation, language, zip code, area code, unique device identifier, location, and the time zone where an Apple product is used so that we can better understand customer behavior and improve our products, services, and advertising.

Here’s what Cameron had to say about that.

Maintaining that a personal device fingerprint has “no direct association with any specific individual” is unbelievably specious in 2010 — and even more ludicrous than it used to be now that Google and others have collected the information to build giant centralized databases linking phone MAC addresses to house addresses. And — big surprise — my iPhone, at least, came bundled with Google’s location service.

The irony here is a bit fantastic. I was, after all, using an “iPhone”. I assume Apple’s lawyers are aware there is an ‘I’ in the word “iPhone”. We’re not talking here about a piece of shared communal property that might be picked up by anyone in the village. An iPhone is carried around by its owner. If a link is established between the owner’s natural identity and the device (as Google’s databases have done), its “unique device identifier” becomes a digital fingerprint for the person using it.

MAC in this context refers to Media Access Control addresses associated with specific devices, one type of data that Google has acknowledged collecting. However, in a response to an Atlantic magazine piece that quoted an earlier Cameron blog post, Google says that it hasn’t gone as far Cameron is suggesting. The company says it has collected only the MAC addresses of WiFi routers, not of laptops or phones.

The distinction is important because it speaks to how far the companies could go in linking together a specific device with a specific person in a particular location.

Google’s FAQ, for the record, says its location-based services (such as Google Maps for Mobile) figure out the location of a device when that device “sends a request to the Google location server with a list of MAC addresses which are currently visible to the device” — not distinguishing between MAC addresses from phones or computers and those from wireless routers.

Here’s what Cameron said when I asked about that topic via email.

I have suggested that the author ask Google if it will therefore correct its FAQ, since the portion of the FAQ on “how the system works” continues to say it behaves in the way I described. If Google does correct its FAQ then it will be likely that data protection authorities ask Google to demonstrate that its shipped software behaving in the way described in the correction.

I would of course feel better about things if Google’s FAQ is changed to say something like, “The user’s device sends a request to the Google location server with the list of MAC addresses found in Beacon Frames announcing a Network Access Point SSID and excluding the addresses of end user devices.”

However, I would still worry that the commercially irresistible feature of tracking end user devices could be turned on at any second by Google or others. Is that to be prevented? If so, how?

So a statement from Google that its FAQ was incorrect would be good news - and I would welcome it - but not the end of the problem for the industry as a whole.

The privacy statement for Microsoft’s Location Finder service, for the record, is more specific in saying that the service uses MAC addresses from wireless access points, making no reference to those from individual devices.

In any event, the basic question about Apple is whether its new privacy policy is ultimately correct in saying that the company is only collecting “data in a form that does not permit direct association with any specific individual” — if that data includes such information as the phone’s unique device identifier and location.

Cameron isn’t the only one raising questions.

The Consumerist blog picked up on this issue last week, citing a separate portion of the revised privacy policy that says Apple and its partners and licensees “may collect, use, and share precise location data, including the real-time geographic location of your Apple computer or device.” The policy adds, “This location data is collected anonymously in a form that does not personally identify you and is used by Apple and our partners and licensees to provide and improve location-based products and services.”

The Consumerist called the language “creepy” and said it didn’t find Apple’s assurances about the lack of personal identification particularly comforting. Cameron, in a follow-up post, agreed with that sentiment.

SF Weekly and the Hypebot music technology blog also noted the new location-tracking language, and the fact that users must agree to the new privacy policy if they want to use the service.

“Though Apple states that the data is anonymous and does not enable the personal identification of users, they are left with little choice but to agree if they want to continue buying from iTunes,” Hypebot wrote.

We’ve left messages with Apple and Google to comment on any of this, and we’ll update this post depending on the response.

And for the record, there is an option to email the Apple privacy policy from the phone to a computer for reading, and it’s also available here, so you don’t necessarily need to duplicate Cameron’s feat by reading it all on your phone.
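The router-versus-device distinction discussed above can be illustrated with a small sketch: in 802.11, access points announce their SSIDs in Beacon frames, so a collector that wanted to honor the distinction could keep only MAC addresses seen as beacon sources and discard everything else. The frame records below are hypothetical, not actual scanner output.

```python
# Hypothetical records from a passive WiFi scan. In 802.11, only access
# points transmit Beacon frames carrying an SSID; other frame types may
# originate from client devices such as laptops and phones.
observed = [
    {"mac": "aa:bb:cc:00:00:01", "frame": "beacon", "ssid": "HomeNet"},
    {"mac": "aa:bb:cc:00:00:02", "frame": "data",   "ssid": None},      # a laptop
    {"mac": "aa:bb:cc:00:00:03", "frame": "beacon", "ssid": "CafeWiFi"},
]

def access_point_macs(frames):
    """Keep only MAC addresses seen as Beacon-frame sources (access points)."""
    return sorted({f["mac"] for f in frames if f["frame"] == "beacon"})

print(access_point_macs(observed))  # only the two beacon sources survive
```

Whether a given collector actually applies a filter like this is, of course, exactly the question the FAQs leave open.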

Update to iTunes comes with privacy fibs

A few days ago I reported that from now on, to get into the iPhone App store you must allow Apple to share your phone or tablet device fingerprints and detailed, dynamic location information with anyone it pleases.  No chance to vet the purposes for which your location data is being used.  No way to know who it is going to. 

As incredible as it sounds in 2010, no user control.  Not even transparency.  Just one thing is for sure: if privacy isn’t dead, Apple is now amongst those trying to bury it alive.

Then today, just when I thought Apple had gone as far as it could go in this particular direction, a new version of iTunes wanted to install itself on my laptop.  What do you know?  It had a new privacy policy too… 

The new iTunes policy was snappier than the iPhone policy - it came to the point - sort of - in the 5th paragraph rather than the 37th page!

5. iTunes Store and other Services.  This software enables access to Apple’s iTunes Store which offers downloads of music for sale and other services (collectively and individually, “Services”). Use of the Services requires Internet access and use of certain Services requires you to accept additional terms of service which will be presented to you before you can use such Services.

By using this software in connection with an iTunes Store account, you agree to the latest iTunes Store Terms of Service, which you may access and review from the home page of the iTunes Store.

I shuddered.  Mind bend!  A level of indirection in a privacy policy! 

Imagine:  “Our privacy policy is that you need to read another privacy policy.”  This makes it much more likely that people will figure out what they’re getting into, don’t you think?  Besides, it is a really novel application of the proposition that all problems of computer science can be solved through a level of indirection!  Bravo!

But then - the coup de grace.  The privacy policy to which Apple redirects you is… are you ready… the same one we came across a few days ago at the App Store!  So once again you need to get to the equivalent of page 37 of 45 to read:

Collection and Use of Non-Personal Information

We also collect non-personal information - data in a form that does not permit direct association with any specific individual. We may collect, use, transfer, and disclose non-personal information for any purpose. The following are some examples of non-personal information that we collect and how we may use it:

  • We may collect information such as occupation, language, zip code, area code, unique device identifier, location, and the time zone where an Apple product is used so that we can better understand customer behavior and improve our products, services, and advertising.

The mind bogggggles.  What does downloading a song have to do with giving away your location???

Some may remember my surprise that the Lords of The iPhone would call its unique device identifier - and its location - “non-personal data”.  Non-personal implies there is no strong relationship to the person who is using it.  I wrote:

The irony here is a bit fantastic.  I was, after all, using an “iPhone”.  I assume Apple’s lawyers are aware there is an “I” in the word “iPhone”.  We’re not talking here about a piece of shared communal property that might be picked up by anyone in the village.  An iPhone is carried around by its owner.  If a link is established between the owner’s natural identity and the device (as Google’s databases have done), its “unique device identifier” becomes a digital fingerprint for the person using it.

Anybody who thinks about identity understands that a “personal device” is associated with (even an extension of) the person who uses it.  But most people - including technical people - don’t give these matters the slightest thought.  

A parade of tech companies have figured out how to use peoples’ ignorance about digital identity to get away with practices letting them track what we do from morning to night in the physical world.  But of course, they never track people, they only track their personal devices!  Those unruly devices really have a mind of their own - you definitely need central databases to keep tabs on where they’re going.

I was therefore really happy to read some of  Google CEO Eric Schmidt’s recent speech to the American Society of News Editors.  Talking about mobility he made a number of statements that begin to explain the ABCs of what mobile devices are about:

Google is making the Android phone, we have the Kindle, of course, and we have the iPad. Each of these form factors with the tablet represent in many ways your future….: they’re personal. They’re personal in a really fundamental way. They know who you are. So imagine that the next version of a news reader will not only know who you are, but it’ll know what you’ve read…and it’ll be more interactive. And it’ll have more video. And it’ll be more real-time. Because of this principle of “now.”

It is good to see Eric sharing the actual truth about personal devices with a group of key influencers.  This stands in stark contrast to the silly fibs about phones and laptops being non-personal that are being handed down in the iTunes Store, the iPhone App Store, and in the “Refresher FAQ” Fantasyland Google created in response to its Street View WiFi shenanigans. 

As the personal phone evolves it will become increasingly obvious that groups within some of our best tech companies have built businesses based on consciously crafted privacy fibs.  I’m amazed at the short-sightedness involved: folks, we’re talking about a “BP moment”.  History teaches us that “There is no vice that doth so cover a man with shame as to be found false and perfidious.” [Francis Bacon]  And statements that your personal device doesn’t identify you and that location is not personal information are precisely “false and perfidious.”

 

What Could Google Do With the Data It’s Collected?

Niraj Chokshi has published a piece in The Atlantic where he grapples admirably with the issues related to Google’s collection and use of device fingerprints (technically called MAC Addresses).  It is important and encouraging to have journalists like Niraj taking the time to explore these complex issues.  

But I have to say that such an exploration is really hard right now. 

Whether on purpose or by accident, the Google PR machine is still handing out contradictory messages.  In particular, the description in Google’s Refresher FAQ titled “How does this location database work?” is currently completely different from (read: the opposite of) what its public relations people are telling journalists like Niraj.  I think reestablishing credibility around location services requires the messages to be made consistent so they can be verified by data protection authorities.

Here are some excerpts from the piece - annotated with some comments by me.  [Read the whole article here.] 

The Wi-Fi data Google collected in over 30 countries could be more revealing than initially thought…

Google’s CEO Eric Schmidt has said the information was hardly useful and that the company had done nothing with it. The search giant has also been ordered (or sought) to destroy the data. According to their own blog post, Google logged three things from wireless networks within range of their vans: snippets of unencrypted data; the names of available wireless networks; and a unique identifier associated with devices like wireless routers. Google blamed the collection on a rogue bit of code that was never removed after it had been inserted by an engineer during testing.

[The statement about rogue code is an example of the PR ambiguity Niraj and other journalists must deal with.  Google's blogs don't actually blame the collection of unique identifiers on rogue code, although they seem crafted to leave people with that impression.  Spokesmen only blame rogue code for the collection of unencrypted data content (e.g. email messages). - Kim]

Each of the three types of data Google recorded has its uses, but it’s that last one, the unique identifier, that could be valuable to a company of Google’s scale. That ID is known as the media access control (MAC) address and it is included — unencrypted, by design — in any transfer, blogger Joe Mansfield explains.

Google says it only downloaded unencrypted data packets, which could contain information about the sites users visited. Those packets also include the MAC address of both the sending and receiving devices — the laptop and router, for example.

[Another contradiction: Google PR says it "only" collected unencrypted data packets, but Google's GStumbler report  says its cars did collect and record the MAC addresses from encrypted data frames as well. - Kim]

A company as large as Google could develop profiles of individuals based on their mobile device MAC addresses, argues Mansfield:

Get enough data points over a couple of months or years and the database will certainly contain many repeat detections of mobile MAC addresses at many different locations, with a decent chance of being able to identify a home or work address to go with it.

Now, to be fair, we don’t know whether Google actually scrubbed the packets it collected for MAC addresses and the company’s statements indicate they did not. [Yet the GStumbler report says ALL MAC addresses were recorded - Kim].  The search giant even said it “cannot identify an individual from the location data Google collects via its Street View cars.”  Add a step, however, and Google could deduce an individual from the location data, argues Avi Bar-Zeev, an employee of Microsoft, a Google competitor.

[Google] could (opposite of cannot) yield your identity if you’ve used Google’s services or otherwise revealed it to them in association with your IP address (which would be the public IP of your router in most cases, visible to web servers during routine queries like HTTP GET). If Google remembered that connection (and why not, if they remember your search history?), they now have your likely home address and identity at the same time. Whether they actually do this or not is unclear to me, since they say they can’t do A but surely they could do B if they wanted to.

Theoretically, Google could use the MAC address for a mobile device — an iPod, a laptop, etc. — to build profiles of an individual’s activity. (It’s unclear whether they did and Google has indicated that they have not.) But there’s also value in the MAC addresses of wireless routers.

Once a router has been associated with a real-world location, it becomes useful as a reference point. The Boston company Skyhook Wireless, for example, has long maintained a database of MAC addresses, collected in a (slightly) less-intrusive way. Skyhook is the primary wireless positioning system used by Apple’s iPhone and iPod Touch. (See a map of their U.S. coverage here.) When your iPod Touch wants to retrieve the current location, it shares the MAC addresses of nearby routers with Skyhook which pings its database to figure out where you are.
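A minimal sketch of how such a lookup could work (the database contents and coordinates below are made up for illustration): the client reports the router MACs it can see, and the server averages the known positions of the ones it recognizes.

```python
# Hypothetical server-side database: router MAC -> (latitude, longitude).
AP_LOCATIONS = {
    "aa:bb:cc:00:00:01": (47.61, -122.33),
    "aa:bb:cc:00:00:02": (47.62, -122.34),
}

def estimate_location(visible_macs):
    """Estimate a client's position as the centroid of recognized routers."""
    points = [AP_LOCATIONS[m] for m in visible_macs if m in AP_LOCATIONS]
    if not points:
        return None  # no recognized access point nearby
    lat = sum(p[0] for p in points) / len(points)
    lon = sum(p[1] for p in points) / len(points)
    return (lat, lon)
```

Note that unrecognized MACs, including any client-device addresses that leaked into the scan, are simply ignored by this sketch; the privacy question is what else a server chooses to do with them once they arrive.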

Google Latitude, which lets users share their current location, has at least 3 million active users and works in a similar way. When a user decides to share his location with any Google service on a non-GPS device, he sends all visible MAC addresses in the vicinity to the search giant, according to the company’s own description of how its location services works.

[Update: Google's own "refresher FAQ" states that a user of its geo-location services, such as Latitude, sends all MAC addresses "currently visible to the device" to Google, but a spokesman said the service only collects the MAC addresses of routers. That FAQ statement is the basis of the following argument.]

This is disturbing, argues blogger Kim Cameron (also a Microsoft employee), because it could mean the company is getting not only router addresses, but also the MAC addresses of devices such as laptops and iPods. If you are sitting next to a Google Latitude user who shares his location, Google could know the address and location of your device even though you didn’t opt in. That could then be compared with all other logged instances of your MAC address to develop a profile of where the device is and has been.
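Building such a profile takes nothing more than appending each logged sighting to a per-device record. A minimal Python sketch (all addresses, timestamps, and coordinates invented for illustration):

```python
# Hypothetical illustration of passive device profiling.
from collections import defaultdict

# Each time a device MAC is logged alongside a geo-located scan,
# one more (timestamp, location) pair accumulates under that MAC.
profiles = defaultdict(list)

def log_sighting(mac, timestamp, location):
    profiles[mac].append((timestamp, location))

# Two sightings of the same (invented) laptop MAC on the same day:
log_sighting("aa:bb:cc:dd:ee:ff", "2010-05-01T09:00", (48.2082, 16.3738))
log_sighting("aa:bb:cc:dd:ee:ff", "2010-05-01T18:30", (48.2100, 16.3600))

# The accumulated list is a movement profile for that one device,
# assembled without its owner ever opting in.
print(profiles["aa:bb:cc:dd:ee:ff"])
```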

Google denies using the information it collected, and if the company is telling the truth, only data from unencrypted networks was intercepted anyway, so you have less to worry about if your home wireless network is password-protected. (It’s still not totally clear whether only router MAC addresses were collected. Google said it collected the information for devices “like a WiFi router.”) Whether it did or did not collect or use this information isn’t clear, but Google, like many of its competitors, has a strong incentive to get this kind of location data.

[Again, and I really do feel for Niraj, the PR leaves the impression that if you have passwords and encryption turned on you have nothing to worry about, but Google's GStumbler report says that passwords and encryption did not prevent the collection of the MAC addresses of phones and laptops from homes and businesses. - Kim]

I really tuned in to these contradictory messages when a reader first alerted me to Niraj’s article.   It looked like this:

My comments earned their strike-throughs when a Google spokesman assured the Atlantic that “the service only collects the MAC addresses of routers.”  I pointed out that my statement was actually based on Google’s own FAQ, and it was their FAQ (“How does this location database work?”) - rather than my comments - that deserved to be corrected.  After verifying that this was true, Niraj agreed to remove the strikethrough.

How can anyone be expected to get this story right given the contradictions in what Google says it has done?

In light of this, I would like to see Google issue a revision to its “Refresher FAQ” that currently reads:

The “list of MAC addresses which are currently visible to the device” would include the addresses of nearby phones and laptops.  Since Google PR has assured Niraj that “the service only collects the MAC addresses of routers”, the right thing to do would be to correct the FAQ so it reads:

  • “The user’s device sends a request to the Google location server with the list of MAC addresses found in Beacon Frames announcing a Network Access Point SSID and excluding the addresses of end user devices like WiFi enabled phones and laptops.”
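The distinction that corrected wording draws is technically straightforward: in 802.11, access points announce themselves with beacon frames (management frames, type 0, subtype 8), while client devices such as phones and laptops do not send beacons. A minimal Python sketch of the filter, assuming captured frames are represented as simple records (the field names are hypothetical, not any real capture library’s API):

```python
# Hypothetical frame records; in a real capture these fields would come from
# parsed 802.11 headers. Beacon frames are management frames (type 0) with
# subtype 8, and are sent only by access points announcing an SSID.
frames = [
    {"src_mac": "00:11:22:33:44:55", "type": 0, "subtype": 8, "ssid": "HomeAP"},
    {"src_mac": "aa:bb:cc:dd:ee:ff", "type": 2, "subtype": 0, "ssid": None},  # data frame from a laptop
]

def router_macs_only(frames):
    """Keep only MAC addresses seen in beacon frames, i.e. access points."""
    return [f["src_mac"] for f in frames if f["type"] == 0 and f["subtype"] == 8]

print(router_macs_only(frames))
```

In other words, software that wanted to exclude end-user devices could do so with a one-line filter; whether deployed software actually applies that filter is exactly what the FAQ should make clear.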

This would at least reassure us that Google has not delivered software with the ability to track non-subscribers and this could be verified by data protection authorities.  We could then limit our concerns to what we need to do to ensure that no such software is ever deployed in the future.

 

The Consumerist says “Apple is Watching”

A reader has pointed me to this article in The Consumerist (“Shoppers bite back”) about Apple’s new privacy policy.



Apple updated its privacy policy today, with an important, and dare we say creepy, new paragraph about location information. If you agree to the changes (which you must do in order to download anything via the iTunes Store), you agree to let Apple collect, store, and share “precise location data, including the real-time geographic location of your Apple computer or device.”

Apple says that the data is “collected anonymously in a form that does not personally identify you,” but for some reason we don’t find this very comforting at all. [Good instinct! - Kim]. There appears to be no way to opt out of this data collection without giving up the ability to download apps.

Here’s the full text [Emphasis is mine - Kim]:

Location-Based Services

“To provide location-based services on Apple products, Apple and our partners and licensees may collect, use, and share precise location data, including the real-time geographic location of your Apple computer or device. This location data is collected anonymously in a form that does not personally identify you and is used by Apple and our partners and licensees to provide and improve location-based products and services. For example, we may share geographic location with application providers when you opt in to their location services.

Some location-based services offered by Apple, such as the MobileMe “Find My iPhone” feature, require your personal information for the feature to work.”

I wonder how The Consumerist will feel when it figures out how this change ties in to the new world-wide databases linking device identifiers and home addresses?

The Consumerist piece is dated June 21, 2010, 9:50 PM, and seems to confirm that the change in policy was only made public after Google’s WiFi shenanigans were discovered by data protection authorities… The point about “no opt out” is very important too.