Time to move…

I've decided to shut down my Google-hosted blog and put it somewhere else... You can find my new posts here (and I've ported all the posts from this site, too).

It's a shame to be leaving Blogger; I've been blogging there since long before Google acquired it. I just don't like the way in which Google have gone about changing their privacy policy this month. Those of you who have followed my blogging over the last few years can doubtless guess why...

See you on the other side!

SOCA and website takedowns

News that SOCA has reached out of the UK and into the .com domain to take down a music download website has provoked quite a reaction in the twittersphere (at least, that slightly geeky, slightly legal, slightly subversive segment of it that is visible to me...).

Here are some examples of good comment, analysis and reaction from:

  • Glyn Moody at ComputerWorld
  • Mel, at dajaz1.com (includes a useful screenshot of SOCA's warning message)
  • Lilian Edwards on the Pangloss blog.

In general, what I see is that even among those who acknowledge that SOCA may be acting within its remit here, very few say anything good about the way in which SOCA has acted - and I think they have a point.

In my opinion (and it is only that), if SOCA is going to insert its own splash page into a domain that has been taken down, they should stick to an objective statement of the legal justification for the takedown.

SOCA would doubtless argue that it has a duty to deter illegal activity, and that that justifies the splash page - but I do not believe it does SOCA's credibility as a law enforcement agency any good to make the kind of assertions it makes in its warning. For example, they say:

"As a result of illegal downloads young, emerging artists may have had their careers damaged."

That is quite possibly true. On the other hand, it is also true that young, emerging artists (and older established ones, come to that) have had their careers, health and even lives ruined by the commercial practices of the music industry - including, of course, all those artists whose careers simply never happened because the music industry did not think they would be profitable enough. The strength of the independent music community amply illustrates that the mainstream publishing cartel is neither entirely benevolent, nor (happily) indispensable.

Are SOCA, then, about to insist that when we visit Amazon in search of the latest CD, we must see a warning which reads:

"As a result of venal commercial practices young, emerging artists may have failed to get a career at all, or may have turned to drugs under the pressure of huge recording contracts and subsequently died in obscure penury in a pool of their own vomit"?

No?

Thought not.

I suppose what this boils down to is this: SOCA do not, ultimately, need to state an ethical justification for their action, only a legal one. Their rather clumsy attempt to stick a moral veneer on their law enforcement action in this case is ill-judged and poorly executed.

Time for a rant…

... about some really irritating developments in TV advertising.

I apologise in advance, but I think some of these peeves have been simmering for a while now, and it would be healthier all round if I permit myself a little vent. There are two advertising trends at the moment which are really starting to grate.

The first is when the advertiser treats us like imbeciles, incapable of logical thought. Two examples:

1 - the dishwasher tablet which is sold on the premise that, if you don't use it, filth accumulates in your dishwasher's plumbing tubes and is then swilled around your cutlery and crockery, bathing them in a vile brew which is, by implication, not far short of raw sewage. Of course, being imbeciles we fail to notice that the pipes into the dishwasher come from the water main, and are presumably not already clogged with sewage; and the pipes out of the dishwasher do not convey anything back into it.

2 - the kitchen soap dispenser whose great selling point is that it includes a sensor, so that you can get your dollop of soap without having to do anything insanitary like press down on a squirter. Again, being imbeciles, we have never noticed that the first thing you do after pressing down on a (presumably plague-ridden) soap dispenser is... wash your hands.

Here's the enigma: are these advertisements fatally flawed, foolishly insulting their target market... or are they perfectly crafted, aimed precisely at a market of imbeciles?

The other irritant is a variant on the old "vox pop" technique. Classically, this involves a reassuring third party, such as an interviewer or someone in a white coat, getting totally spontaneous product endorsements out of enthusiastic consumers who are totally surprised at the effectiveness of the product.

The variant (toothpaste being far and away the worst offender) is that when sound-editing your vox pops, you have to remove tiny snippets of silence from between random words. The result sounds something like this:

"I had never realisedthat some things I eatevery day, suchas battery acid, can eataway at tooth enameland cause cavities andbrain rot."

Why? Why do they do this?

I am seriously considering applying for that job, snipping out the tiny gaps between words in fatuous vox pops. Then, like one of my literary heroes, Doktor Murke, I would splice them carefully together again and luxuriate in the resulting silence. Listening to it might even bring my blood pressure down again...

Un-bricking a System76 Starling netbook

In case it may be of help to someone else in the same situation...

I have a System76 Starling netbook, which until this week was running Ubuntu 9.10 (Karmic Koala). That's the release it came factory-installed with, and as long as it was getting patched and updated, I was sticking with the "ain't broke, don't fix" principle. Previous experience has taught me that the tiniest tweak to an otherwise working Linux system can lock you into a death-spiral of dependencies, upgrades, super-dependencies and so on, until you have no option but to press on because you can't retreat.

However, when the system updater warned me this week that Ubuntu 9.10 would not be getting any more patches, I decided it was time to take a deep breath and upgrade to the LTS (Long Term Support) release, Ubuntu 10.04 (Lucid Lynx). I also reasoned that as Lucid has been out for a while, System76 would have had time to get their hardware-specific driver for the Starling good and ready. So, after backing up all my data to an external drive, I hit the Upgrade button.

All went well, at least in terms of fetching all the packages. Unfortunately, the installation process hung part way through, and after leaving it frozen for half an hour or so (just in case) I sighed, turned the power off, and resigned myself to re-installing from scratch. As I now have an un-bricked Starling running Ubuntu 10.04, this post is simply to point you to the set of resources which worked for me, if you're in the same position.

Here are some starting assumptions:
  • You have a Windows machine with which to create your bootable USB image (it can be done with another Linux machine or a Mac, but you'll have to find your own path in those cases)
  • Obviously, as the Starling has no CD drive, you'll need a nice big USB stick handy (they say 2GB, but if you have 3-4GB I think you'll be safer, for reasons I explain below)
  • You have a wired network connection... this will just make it a lot easier to pull down the most recent software updates and the System76 driver.
And here, in the order you will probably encounter them, are the pages which got me through it. There are others, but I found some false trails, and these are the pages which worked for me.
  1. System76's "How to Upgrade to Lucid Lynx" page
  2. The Ubuntu Lucid releases - you want the Netbook Live CD .iso
  3. The Ubuntu help page on creating a bootable USB image
  4. Two tools from PenDriveLinux: USB-Installer and Persistent-Filespace creator
  5. Ubuntu support thread on Lucid/wireless just in case...
If step (3) works OK, you may not need the tools from step (4); however, the first time I tried it, the USB image boot failed because it couldn't find a writeable filespace. This error is listed on the Ubuntu help page above, under Known Issues, as "Can not mount /dev/loop1 on /cow". The Persistent Filespace creator will help you make one of those on the USB stick... which is another reason why I think a 3 or 4GB stick is probably a good idea.
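
Before you run the PenDriveLinux tools, it may be worth checking that the stick really does have room for the .iso plus a persistent filespace, since my first boot failure was essentially a shortage of writeable space. Here's a minimal sketch of that check in Python; the path, drive letter and the roughly 1GB persistence allowance are my own illustrative assumptions, not figures from the System76 or Ubuntu pages.

    import os
    import shutil

    ISO_PATH = r"C:\Downloads\ubuntu-10.04-netbook-i386.iso"  # hypothetical location and filename
    USB_DRIVE = "E:\\"                                        # hypothetical drive letter
    PERSISTENT_SPACE = 1 * 1024**3                            # assume ~1GB for the persistent filespace

    iso_size = os.path.getsize(ISO_PATH)                      # size of the downloaded live image
    free_space = shutil.disk_usage(USB_DRIVE).free            # free bytes on the USB stick

    needed = iso_size + PERSISTENT_SPACE
    if free_space < needed:
        print(f"USB stick looks too small: need roughly {needed / 1024**3:.1f}GB free, "
              f"have {free_space / 1024**3:.1f}GB")
    else:
        print("Enough room for the live image plus a persistent filespace.")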

You may or may not need step (5): frustratingly, when I first booted Lucid Lynx my wireless connection came up flawlessly; then I ran the System76 driver and my wireless connectivity disappeared. The thread had some suggestions about making sure the C/C++ libraries (gcc) are definitely installed on your machine, and re-running the System76 driver. I followed those suggestions and it still didn't work, but after a couple of re-boots and tweaks of the 3G-Wifi switch on the front of the Starling, it all worked again.
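
If you want to confirm that the build tools are in place before re-running the driver, a quick check along these lines may save you a reboot or two. This is only a sketch based on the support thread's suggestions; I'm assuming gcc and make are what matters, which is not an official System76 list.

    import shutil

    # Check that the tools the driver build appears to need are present
    # (a guess based on the support thread, not an official list).
    for tool in ("gcc", "make"):
        path = shutil.which(tool)
        status = f"found at {path}" if path else "MISSING - install build-essential first"
        print(f"{tool}: {status}")

    # If both are present, re-run the System76 driver as per their instructions
    # (the exact command varies, so it isn't hard-coded here).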

Good luck...

EU cookie regulations and consent

As you are probably aware, a revision to the EU's e-Privacy Directive was recently transposed into UK law as the Privacy and Electronic Communications Regulations 2011, or PECR. PECR means that, as of May 26th 2011, UK websites are required to obtain users' informed consent before tracking their online behaviour through means such as cookies.

Well-meaning though this legislation may be, there are a number of practical issues with its implementation. As it has never been my intent to invade, subvert or otherwise compromise your privacy, this post is a brief indication of some of those issues, and the possible impact on you as a visitor to this blog.

First, jurisdiction: is this a UK site? Well, I'm located in the UK, and it's my blog, so I'm going to behave as though it is and assume that PECR 2011 applies to it and to me. However, as Blogger belongs to Google, and Google are notoriously reticent about revealing the location of their data-centres, I have no idea where this blog is actually hosted. I suspect a lot of individuals, small/medium enterprises and organisations are in the same position: wherever they are, their websites may or may not be hosted in the UK, and that may give rise to some question as to whether or not PECR can be enforced.

Second, enforcement. The UK ICO has, allegedly, been 'pressured' by the UK government not to enforce PECR, at least for a year while companies figure out what to do about the law. On the one hand, I have little sympathy with this: EU legislation moves at a pretty normal pace for law-making, and PECR has been inching its way down the legislative alimentary canal for many months now. Its emergence should not have come as a surprise to anyone.... but let's not take that analogy any further. On the other hand, there's no doubt that the mechanisms for doing a good privacy-respecting job of gathering user consent are sadly lacking. Of course, as the only viable candidate for deploying such mechanisms is the browser, and as the dominant browsers on the planet are all developed outside the EU, that shouldn't come as a surprise either. On the third hand (as Zaphod could have said) why in Zarquon's name didn't Viviane Reding and her merry band of legislators think of that when they were designing the amendment?

Third, practicality. I do use a couple of counters to track visits to the blog: as you can see, there's a ClustrMaps graphic on the page, and though you can't see it, Statcounter is also enabled. For those two tools, I can give you the following assurance: I never use them for anything other than an occasional look at how site traffic is trending over time. I sometimes look at the per-country breakdown of visits, and if I'm getting persistent spam comments I may look at the IP address of a specific visitor. However, I never use the tracking details for any other purpose, and never knowingly disclose them to any other entity. I don't use Adwords or Affiliate Network, nor is it my intent to do so.

However... it is entirely possible that Blogger, as the host of the blog, gathers statistics about both my use of it and your visits to it. Over that, I have no control. Again, I suspect that many, many individuals, organisations and small/medium businesses are in the same position - and as 'cloud' computing continues to grow, that situation will grow with it.

That leaves me with two problems:

1 - if you don't like the relatively minor use of cookies I do make on this site, and/or don't trust my promise not to abuse the data collected, I'm afraid I don't have any practical way of gathering your consent (or withdrawal of it). Nor do I have a way of turning cookies off for you while still somehow keeping an eye on site usage. By all means block or delete my cookies at your end, if you have the means to do so; I won't be offended (in fact, I won't even know), and as far as I am aware, it won't affect your ability to browse the site. (For the curious, there's a sketch after the next point of what a consent-gated counter might look like on a self-hosted site.)

2 - if you don't like the idea that my hosts (either for this blog, or for my website, for instance) may also be setting cookies, I can sympathise, but there's very little I can do about that. Nor do I think there's any reasonable expectation that they will ask for your consent via my blog. If you have a problem with that, please leave a comment, and then we can both stare at it and wonder what to do next...
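
To make that first problem concrete: if this blog were self-hosted, the counters could at least in principle be gated behind an explicit consent cookie. The following is a minimal sketch of the idea, using Flask purely for illustration; the cookie name, routes and page content are invented, and nothing like this is possible from within Blogger itself.

    from flask import Flask, make_response, render_template_string, request

    app = Flask(__name__)

    # Invented page template: the counter snippet is only included once the
    # visitor has explicitly opted in.
    PAGE = """<p>Blog content goes here.</p>
    {% if tracking %}<!-- ClustrMaps / Statcounter snippet would go here -->{% endif %}"""

    @app.route("/")
    def index():
        consented = request.cookies.get("analytics-consent") == "yes"  # invented cookie name
        return render_template_string(PAGE, tracking=consented)

    @app.route("/consent")
    def give_consent():
        resp = make_response("Thanks - the visit counters are now enabled for you.")
        resp.set_cookie("analytics-consent", "yes", max_age=60 * 60 * 24 * 365)
        return resp

Even that small sketch illustrates the practical point: it assumes control of the hosting stack, which is precisely what users of platforms like Blogger don't have.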

So, what can we expect from the PECR 2011 amendment?

Will it immediately change the way in which companies track your online behaviour? No.

Will it change the way browsers handle cookies and consent? Possibly, over time.

Will it advance the debate over online privacy? I sincerely hope so, even if it's only through increased discussion, as opposed to immediate improvement.

Will it resolve the tension between technologists who see the law as an inconvenient obstacle to commercial progress, and legislators who don't understand the technology but want to be seen to be doing something? No. That, regrettably, is something we're stuck with for the foreseeable future. Welcome to Aldous Huxley's world.

Marking Commissioner Malmström’s homework

Julian Huppert MP has broken new ground today (as far as I'm aware) by "crowd-sourcing" views on the newly-announced proposal for an EU Directive on Attacks Against Information Systems.

Having looked at the press release, my first impression of the Directive is that it is seriously unbalanced and needs to be substantially re-worked. As my teachers used (frequently, I'm afraid) to write on my prep: "Adequate as far as it goes, but I need to see more."

I don't deny that botnets and the like represent a potential threat to computing infrastructures, and thereby indirectly to interests such as consumer safety, commerce, and even national security - though one should also note that in their recent report for the OECD, Professor Peter Sommer (LSE) and Dr Ian Brown (Oxford University) argue convincingly that the majority of such threats are both localised and short-term in their effect. Let us not, then, rush to fling the cyber-baby out with the bathwater.

If we step back for a moment and balance the cyber-war rhetoric with Sommer and Brown's more qualified perspective, the obvious shortcoming of the proposed EU Directive is that it focusses entirely on measures to prevent "illegal interception" and legislation against the use of malware... entirely ignoring the point that the technology to abuse online systems is often the same as the technology used to control it. The difference between lawful and unlawful interception is the prefix "un-", not the means used.

With that in mind, the EU Directive comes across as a piece of work less than half finished. While the policymakers and drafters were considering how to prevent the activities they don't want, they should have been devoting at least as much effort to considering how to regulate the activities they do want. Badly or insufficiently regulated, those activities do every bit as much social and economic harm as the threats the Directive is keen to stress.

This is by no means just about EU citizens, either. Every instance of bad or incomplete regulatory oversight in our own house is an excuse for repressive regimes to point to that bad example and say "look: that's how they do it in the EU, so it must be acceptable". We need only look at the suppression of internet services in Iran, Tunisia, Pakistan, Egypt and elsewhere to see how this leaves the door open to profound and damaging abuse of citizens' rights and self-determination.

So, for every paragraph about the prevention of illegal activity, the Directive should contain a paragraph about the protection of legitimate activity - including legitimately anonymous and/or pseudonymous activity - and a paragraph about the regulation of law enforcement interception, data retention, content filtering, packet inspection and so on.

Regrettably, the Directive comes from the office of Cecilia Malmström, the EU's Home Affairs Commissioner, and her reported views on this kind of thing do not inspire optimism. At the recent CPDP2011 conference in Brussels, she was quoted as having said "data retention is here to stay". When the captains of industry say things like "privacy is no longer the social norm", it makes them look ignorant. When policymakers simply acquiesce in such views, it makes them look dangerous.

As Hielke Hijmans (Head of Policy and Consultations for the EDPS) succinctly put it, at the same conference: "It's not good enough for governments and policy-makers to say 'privacy is dead, get over it': the challenge for them is to work out how social privacy norms can be protected in an information society."

I'm afraid that, in the margin of Ms Malmström's prep, I can only write "B minus. A fair effort, but must try harder."

Privacy of emails

By coincidence, the theme of the previous blog post (expectations of privacy in correspondence, electronic or otherwise) also crops up in an article by Simon Jenkins in the Guardian today. Jenkins' piece is actually about media ethics, but it's prompted by the renewed media feeding frenzy over a now slightly dusty scandal... revelations that the News Of The World had been hacking into the voicemails of people who they thought might thus provide juicy material for the presses.

At one point, Jenkins notes, the Crown Prosecution Service (i.e. the agency responsible for prosecuting alleged criminals on behalf of the state) advised the police that it was "illegal to hack into a message before, but not after, a recipient had heard it"... much as the 11th US Circuit Court ruled in Rehberg v Hodges.

As the number of forms of electronic communication continues to grow, and governments' appetite for retention, interception and retrieval of those communications grows correspondingly, let's just pick that concept apart and see why it's so absurd - because absurd it surely is.

The idea of an expectation of confidentiality in communications probably has its origins in the establishment of monopolised state postal services. Before that point, you had to have a good reason to trust anyone to whom you gave a letter to deliver to someone else... though in practice those with something particularly sensitive to say also put their trust in means such as encryption and tamper-evident technology. The advent of a universal postal service meant that people had to feel that they could entrust their letters to - essentially - a complete stranger and still be confident that the letter would arrive intact.

There was, then, a clear expectation that a universal postal service should demonstrate great integrity in the handling of the correspondence put into its care - and sure enough, most such services are protected by specific laws to deal with 'interference with the mails'. In other words, and not to overburden the word "confidence", a letter from Sandra to Reece is entrusted to Pat as an intermediary. The contents of the letter are intended to be confidential between Sandra and Reece. Pat has no legitimate expectation of reading the letter for himself, because Sandra's clear intent and expectation is that she is communicating only with Reece.

Now, what happens once Reece receives and opens the letter? Does that act somehow revise Sandra's intention in sending it - so that, once it is opened, she intends it to be read by people other than Reece? I don't see why we should make that assumption. But just for the sake of it, let's imagine that what Reece finds when he opens the envelope is another envelope: this one has written on it "Confidential: for Reece only". So in this instance Sandra has made her intention and expectations explicit.

Reece opens the second envelope and finds inside a message which says "Dear Reece, I don't want you to tell anyone else this, but I have discovered that I have a fatal disease, and probably only months to live". Again, I don't see anything in the act of Reece opening the inner envelope which revises Sandra's intention and expectations in writing to him and him alone. She even says, in the contents, that she wants Reece to keep this information to himself... and that seems to me to be a legitimate expectation.

Of course, merely by disclosing the fact of her illness to Reece, Sandra is making it possible for Reece to disclose it to someone else - but I think there's a clear difference between making that disclosure possible, and expecting or intending it to take place.

That is why I think it's so perverse to rule that the act of opening a letter changes the sender's legitimate expectation of the confidentiality of the contents. It's also why I wonder whether initiatives like the Privicons plug-in - while doubtless well-intentioned - might have perverse consequences. After all, if there's a button you can click which says "don't share this email", won't that be taken to imply that - if the email has no such icon attached - you don't mind it being shared? All in all, I think I'd be happier if we start with no "this email is sent in confidence" button - because I think the fundamental assumption should be that emails are confidential unless it's explicitly stated otherwise.

It's possible that that assumption is broken; but if so, that argues in favour of mending it, not discarding it.

With that in mind, I wish you a happy Data Privacy Day for tomorrow, Jan 28th. I encourage you to spend it considering what digital footprints you leave in the course of the day, and to what extent they involve any consent and control on your part.

The Privacy of Emails

A colleague has alerted me to a December 2010 ruling on email privacy, in the US 6th Circuit court. There's a brief article here from DC law firm K&L Gates.

The 6th Circuit's decision is a welcome counterpoint to the July 2010 ruling in Rehberg v Hodges, in which the 11th Circuit court somewhat bizarrely concluded that Mr Rehberg’s “privacy interest in emails held by his ISP was not clearly established”. Even in that case, although the ruling itself denied Mr Rehberg’s right to privacy, the court did amend previous statements as follows:

“The Court had written that a "person also loses a reasonable expectation of privacy in emails, at least after the email is sent to and received by a third party" and that "Rehberg's voluntary delivery of emails to third parties constituted a voluntary relinquishment of the right to privacy in that information." This is not the law, and the incorrect statements are no longer precedent.”

Article here on the EFF site.


Note the court’s use of the phrase “third party”. I would be interested to know if this ruling has any effect on a law enforcement request for access to received emails still in the possession of the intended recipient (as opposed to an intermediary). The reason for my interest will be clear in a moment...

Broadening the context beyond email: the legal implications of disclosures via online networking sites are still, in my view, a long way from being conclusively worked out in case law. There was the ruling in Romano v Steelcase Furniture, in which Mrs Romano's Facebook photo showed her apparently happy and smiling in front of her home. Steelcase’s lawyers argued that that was prima facie evidence she was not suffering as badly as she had maintained in an injury suit against them, and successfully got a ruling that Mrs Romano’s private Facebook pages should be disclosed in case they revealed further incriminating evidence.

The twist in that latter part was that not only had Mrs Romano obviously decided that she wanted some of her Facebook disclosures to be more private than others, she had in fact also deleted some of her private pages. At least, she thought she had. In fact, they were still on disk somewhere in Facebook’s storage, and as a result, they were disclosed in evidence. I blogged about that in October, here.

So, in the social networking case, it seems the law still has to catch up with the notion that disclosure is not a binary thing. I keep quoting danah boyd on this, because I can’t improve on her way of putting it:

“Making something that is public more public is a violation of privacy”

(Making Sense of Privacy and Publicity, SXSW 2010; text available here)

In the email case, I’d argue that the same gap still needs to be bridged. US case law seems to be taking the following line: an email from Sandra to Reece embodies an expectation that it is sent in confidence by the sender to the recipient. It is intended to be kept confidential from the ISP who conveys it. (As an aside, that’s interesting if you reflect that an unencrypted email is much more like a postcard than a letter sealed into an envelope...).
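
To make the postcard analogy slightly more concrete, here's a minimal sketch (the addresses, mail server and credentials are placeholders) of sending Sandra's message over SMTP with STARTTLS. Upgrading the connection like this only seals the envelope for the hop to the first mail server; the receiving systems, and whatever is archived at either end, still hold the plain contents, which is exactly the layering of expectations at issue here.

    import smtplib
    from email.message import EmailMessage

    # Sandra's letter to Reece, as in the example above.
    msg = EmailMessage()
    msg["From"] = "sandra@example.com"
    msg["To"] = "reece@example.com"
    msg["Subject"] = "In confidence"
    msg.set_content("Dear Reece, please keep this to yourself...")

    # Placeholder server and credentials; substitute real ones.
    with smtplib.SMTP("smtp.example.com", 587) as server:
        server.starttls()    # encrypt the channel, so the first hop is no longer a 'postcard'
        server.login("sandra@example.com", "app-password")
        server.send_message(msg)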

That’s fine as far as it goes... but what about the non-binary shadings? Legally, what expectation can a sender have in the confidentiality of, for instance:
  • The contents of an email which the recipient has opened?
  • The contents of an email still unopened in the recipient’s inbox?
  • Copies of the email archived by the sender (for instance, in a “Sent Mail” folder) on the sender’s system, on an employer's email system or on one operated by a third party, say, in the cloud?
There may be many instances of a single electronic disclosure, and I don’t think the legal privacy status of these instances has been fully explored yet in any single jurisdiction, let alone in cloud computing and multi-jurisdictional contexts. Of course, if you know different, let me know via the Comments field.

Anonymity on the Net

There's an interesting piece on the New York Times site by Professor Stanley Fish, titled "Anonymity and the Dark Side of the Internet".

A quick disclaimer to start with, though: bear in mind that what you're reading here is my comment on an article in which Prof. Fish reviews a collection of essays by academics citing various principles and legal precedents. This discourse has more layers than Inception... and that's before you get to the comments readers have left on Prof. Fish's article itself.

The collection of essays is called "The Offensive Internet" - and based on Prof. Fish's portrayal, the contributors are writing from the standpoint that anonymity online is a Bad Thing, about which Something Must Be Done. Second disclaimer: I haven't actually read "The Offensive Internet"... but as much of the discussion apparently revolves around the dangers of unsubstantiated online gossip, it would be contrary to let a mere lack of factual knowledge stop me blogging about it, wouldn't it?

The position of the anti-anonymists is (at least, as far as Prof. Fish represents it) riddled with arguments from the particular to the general - principally along the lines of "here is an instance where online anonymity has undesirable consequences - therefore all online anonymity is undesirable". In part, the picture painted is of an ecosystem polluted by irresponsible comment, libel and misinformation, riding on the back of instant, mass publication with total immunity from being held to account.

Some of the quotations Prof. Fish includes are such gems I almost wonder if he isn't part of some fiendishly cunning marketing ploy, designed to convince us that the only way to stem our incredulity is to read it for ourselves. Out of context or not, what are we to make of a statement like: "autonomy resides not in free choice per se but in choosing wisely"? So, I can have (or at least call it) autonomy, but only if I agree not to make foolish, capricious, ill-informed or simply bad decisions. And who decides which of my 'free' choices qualifies as autonomous? Someone else, you say...? Hmm.

Even if we accept that the essays, Prof. Fish himself, or both, are being deliberately polemical, it does the argument against anonymity no credit to ignore valid counterexamples. For instance, The Times and The Economist both have a long tradition of anonymous publication (The Times for its leaders and The Economist in general). That has a number of consequences: it means that the credibility of what is written depends first (and foremost) on its content and second (and less) on the brand under which it appears. The second factor, the brand or reputation of the publication, is critically interdependent on the credibility of the content. This virtuous circle encourages the anonymous to write in such a way as to enhance the credibility of their host publication. It is not true, then, that anonymity necessarily means a lack of accountability or an immunity from the consequences of irresponsible writing.

Prohibition of online anonymity would also damage the interests of those whose identity - if disclosed - would expose them to various forms of abuse. Take the case of Harriet Jacobs (not her real name, QED...) whose personal safety depends at least in part on online pseudonymity. Presumably in the brave new world of enforced identifiability, those who fall victim to domestic violence, rape or persecution simply forfeit their entitlement to the means of online expression available to the smug majority. It is not true, then, that anonymity serves only the interests of those who have something libellous, shameful, malicious or just plain wrong to say.

The examples of journalists and Harriet Jacobs illustrate a principle which does not come across in Prof. Fish's article - that the Internet is quite capable of supporting various levels of identifiability.

There is the relative anonymity of being 'one of a number of journalists publishing under a given title'; of course the editor knows who wrote what, and who to hold responsible if the article turns out to be libellous. Second, there is the pseudonymity of publishing a blog under a pen name. Ultimately, through a combination of the registration process for the blog itself, the formalities of having a billable ISP account and so on, the author of most blogs could be identified by a third party able to correlate the right identifiers - and most legislation in this area makes provision for law enforcement access (ideally subject to justifying conditions and with some degree of oversight). The real issue, then, is not whether online anonymity can or should be banned, but how to maintain and manage these various levels of anonymity, pseudonymity and identifiability.

The bottom line is that, if the authors of "The Offensive Internet" were looking for an analogy, they could and should have done better than "cesspool" or "graffiti-filled bathroom wall". The Internet is like electricity. It can be put to good purposes, bad purposes, trivial and misguided purposes, and indeed purposeless uses. You will find anonymity in all those categories, and ruling it out of all of them because of its occasional role in one of them is just perverse.

Speaking of electricity, it's interesting how frequently writers (Prof. Fish included) quote Justice Brandeis' comment that "Sunshine [sic] is the best disinfectant" without going on to complete the aphorism. When I give it in full, perhaps you will see why:

"Sunlight is said to be the best of disinfectants, electric light the most efficient policeman" (Other People's Money - Chapter V: What Publicity Can Do)

Note the implicit characterisation of sunlight as clean, natural, healthy and life-giving. Who could object to that? By contrast, electricity may create an atmosphere in which people obey the law, but it does so by offering cut-rate panopticality. People will behave because they live under the floodlights. Not such a utopian image.

Mind you, Brandeis' thesis certainly has its modern resonances; the problem he goes on to address in Chapter V? Excessive bankers' commissions...

UK Govt plans to "turn off" internet porn

Back in April, I ceded the following hostage to fortune:

" [...] to accusations of political partiality I will say only this: I've only ever blogged under a Labour government.

If a non-Labour government fails to provide just as much blog-fodder, I will supplement that dwindling diet with my hat."

If this story on news.com.au is to be believed, I think my headwear is safe. The UK government plans to legislate to make households "opt in" to be able to access porn on the internet. ISPs are expected to put some kind of registration, age-related classification and/or filtering mechanisms in place.

If the report is true, it suggests that UK policymakers have managed to come up with something which is at once populist, paternalistic, naive and utterly impractical.

It is populist in the sense that the stated goal of the policy is to safeguard children from inadvertent (or worse, deliberate) exposure to pornographic material. In other words, a goal which has been framed so that to disagree with it is to mark oneself out as a child-abusing pervert. One can instantly understand how this will appeal to a slightly right-of-centre, instinctively conservative but not overly intellectual middle class demographic. It's not for me to caricature that demographic as "Daily Mail readers", however apophatically, but if you like tabloid-based stereotypes, that's one shorthand for it.

The ethical 'argument' here is from the same in-bred stock as the pernicious "nothing to hide, nothing to fear" line on personal privacy: "if you're not ashamed of looking at smut, you shouldn't be ashamed of having to register to look at smut, therefore we'll make it illegal not to register". Once again, the valid distinction between the illegal, the shameful and the merely embarrassing is being elided.

It is paternalistic because it is based on the assumption that the best way to protect children within a household is to cede decision-making to entities outside the household (policymakers and ISPs) about what content is suitable for which audiences, and what should be allowed into the house through the cable modem.

Actually, of course, this simply ensures that the citizens' ethical faculties atrophy through disuse... because any decisions about appropriate content are 'someone else's responsibility'. I'm disappointed, because I thought a decade of New Labour's Orwellian tendencies had moved (even) the Tories on from that kind of pernicious molly-coddling.

It is naive because there is no precedent to suggest that a technically-mediated restriction can enforce an ethical principle. 20th century 'received wisdom' is that every technological innovation is swiftly turned to pornographic purposes (the internet, the Betamax video cassette, the photograph, the engraving, the fresco...). Never mind Pompeii (c. AD 79): the Egyptians were using wall-painting technology to depict sexual acts around 1500 BC; by the 1200s BC they were exploiting the newer medium of papyrus... which was doubtless rather more conveniently portable.

The point is, even as the means to produce and disseminate (sorry) pornography have become more and more technologically mediated, the ability to impose technological restriction on such publication has, again and again, proved futile.

But however long one may care to debate that hypothesis, the fact is that the legislation proposed just won't achieve the goal being used to justify it. Think of some of the practicalities:
  • Are households which contain no children also obliged to register?
  • Is registration rendered unnecessary once children in a household achieve majority?
  • Who counts as "the householder" in, for example, a university hall of residence?
But more crucially: in so-called "toxic" households, where children are deliberately exposed to pornography by adult perverts... how is the new legislation to have effect? If the householder "opts in", what protection does the law then provide to any children in that household?

None.
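
To labour the point with a purely hypothetical sketch (the proposal offers no technical detail, so nothing here describes any real ISP system): a household-level opt-in amounts to a single flag per subscriber line, and the filter has no idea who is actually at the keyboard.

    # Hypothetical sketch of a household-level opt-in filter.
    # There is exactly one flag per subscriber line; the check knows nothing
    # about which member of the household is making the request.
    ADULT_CATEGORIES = {"pornography"}

    households = {
        "line-0001": {"adult_content_opt_in": True},   # householder has opted in
        "line-0002": {"adult_content_opt_in": False},  # householder has not
    }

    def is_request_allowed(line_id, content_category):
        """Allow adult content only if the line's householder has opted in."""
        if content_category not in ADULT_CATEGORIES:
            return True
        return households[line_id]["adult_content_opt_in"]

    # The same answer comes back whether the request is from the adult who
    # opted in or from a child on the same connection:
    print(is_request_allowed("line-0001", "pornography"))  # True, for everyone
    print(is_request_allowed("line-0002", "pornography"))  # False, for everyone

The unit of decision is the subscription, not the person - which is exactly why the "toxic" household gets no protection from it.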

On that basis, there isn't even any point starting an analysis of the potential downside of such a legislative measure - the proposal is just a stupid idea masquerading as a moral crusade (and with all the success of an unconvincing transvestite).

UK Govt plans to "turn off" internet porn

Back in April, I ceded the following hostage to fortune:

" [...] to accusations of political partiality I will say only this: I've only ever blogged under a Labour government.

If a non-Labour government fails to provide just as much blog-fodder, I will supplement that dwindling diet with my hat."

If this story on news.com.au is to be believed, I think my headwear is safe. The UK government plans to legislate to make households "opt in" to be able to access porn on the internet. ISPs are expected to put some kind of registration, age-related classification and/or filtering mechanisms in place.

If the report is true, it suggests that UK policymakers have managed to come up with something which is at once populist, paternalistic, naive and utterly impractical.

It is populist in the sense that the stated goal of the policy is to safeguard children from inadvertent (or worse, deliberate) exposure to pornographic material. In other words, a goal which has been framed so that to disagree with it is to mark oneself out as a child-abusing pervert. One can instantly understand how this will appeal to a slightly right-of-centre, instinctively conservative but not overly intellectual middle class demographic. It's not for me to caricature that demographic as "Daily Mail readers", however apophetically, but if you like tabloid-based stereotypes, that's one shorthand for it.

The ethical 'argument' here is from the same in-bred stock as the pernicious "nothing to hide, nothing to fear" line on personal privacy: "if you're not ashamed of looking at smut, you shouldn't be ashamed of having to register to look at smut, therefore we'll make it illegal not to register". Once again, the valid distinction between the illegal, the shameful and the merely embarrassing is being elided.

It is paternalistic because it is based on the assumption that the best way to protect children within a household is to cede decision-making to entities outside the household (policymakers and ISPs) about what content is suitable for which audiences, and what should be allowed into the house through the cable modem.

Actually, of course, this simply ensures that the citizens' ethical faculties atrophy through disuse... because any decisions about appropriate content are 'someone else's responsibility'. I'm disappointed, because I thought a decade of New Labour's Orwellian tendencies had moved (even) the Tories on from that kind of pernicious molly-coddling.

It is naive because there is no precedent to show that the way to enforce any given ethical principle is to impose a technically-mediated solution. 20th century 'received wisdom' is that every technological innovation is swiftly turned to pornographic purposes (the internet, the Betamax video cassette, the photograph, the engraving, the fresco...). Never mind Pompeii (Approx. 79AD): the Egyptians were using wall-painting technology to depict sexual acts around 1500 BC; by the 1200s BC they were exploiting the newer medium of papyrus... which was doubtless rather more conveniently portable.

The point is, even as the means to produce and disseminate (sorry) pornography have become more and more technologically mediated, the ability to impose technological restriction on such publication has, again and again, proved futile.

But however long one may care to debate that hypothesis, the fact is that the legislation proposed just won't achieve the goal being used to justify it. Think of some of the practicalities:
  • Are households which contain no children also obliged to register?
  • Is registration rendered unnecessary once children in a household achieve majority?
  • Who counts as "the householder" in, for example, a university hall of residence?
But more crucially: in so-called "toxic" households, where children are deliberately exposed to pornography by adult perverts... how is the new legislation to have effect? If the householder "opts in", what protection does the law then provide to any children in that household?

None.

On that basis, there isn't even any point starting an analysis of the potential downside of such a legislative measure - the proposal is just a stupid idea masquerading as a moral crusade (and with all the success of an unconvincing transvestite).

My first Burton IT1 report is out…

Something of a milestone day today, as my first IT1 analyst report has finally made it through the digestive tract of the corporate publishing animal, and now appears under the IT research category on the Burton Group repository, here. (You'll need an IT1 subscription to get the full document, I'm afraid... that's the world we live in, though.) I may not approve of paywalls for daily newspapers (or blogs...), but that doc took me a couple of months to create - so I hope it's worth the price of admission.

It's on the topic of "Changing a Privacy Policy Statement"... my colleagues Ian Glazer and Bob Blakley and I took a look at some of the more interesting ways in which changes to privacy policy statements have been 'got wrong' in recent months, and tried to come up with a model for getting it right.

Obviously, it's not just about changing the privacy policy statement; if that isn't mirrored in corresponding changes to the organisation's privacy policy itself, and in appropriate communication to the data subjects, something is going to go awry - it's just a question of when, and how embarrassingly.

Anyway, if you are a subscriber, please have a look and let me know what you think. This is meant to be the first of many such reports, so for goodness' sake let me know if I'm getting it wrong!

Wikileaks and diplomacy

I have, over time, heard two definitions of the word “diplomat”:

1 – a man sent to lie abroad for his country;

2 – someone who can tell you to go to Hell in such a way that you feel you would benefit from the journey.

(By way of disclaimer, I should point out that I heard both from my father, who was himself a career diplomat... ;^)

To me, what the current Wikileaks "cablegate" incident reveals is this: as individuals and social animals, we all understand the fine nuances of truth-telling, lying and hypocrisy (from ‘white lies’ to ‘social convention’, ‘good manners’, ‘gentlemanly or ladylike behaviour’, ‘discretion’ and so on and so forth). When you scale that up to the level of society as a whole, it tends to become simplified and polarised - as we see from the press coverage and the political rhetoric.

Diplomats are intelligent tools of the political system (in German, the word for ambassador - Botschafter - derives from Botschaft, “message”). In the sense of 'messenger', the diplomat is there only to convey what his or her government wishes to be said. However, in representing their government’s wishes, it is also their job to exercise judgement about when the national interest is best served by the truth, a lie, a lie which is known to be a lie, an apparently accidental indiscretion, an unpalatable truth told in jest... or any of the million shades of grey along that spectrum.

Often, the value of diplomacy lies precisely in the ability to convey one thing while saying another. That way, an official position is publicised, without preventing what is pragmatically necessary from being communicated.

The leaked cables will, of course, reveal that what diplomats say to their colleagues and their political masters is often not what they say to their counterparts in post. That should surprise no-one...
