Category: Data Protection

  • Striking a balance in Data Protection Sanctions

    It was reported yesterday that the Irish Government has issued a “discussion paper” on the proposed administrative sanctions under the new Data Protection Regulation.

    EDRI has criticised the proposals with reference to the “warning/dialogue/enforcement” approach taken by the Irish DPC. Billy Hawkes has, in the past, been at pains to clarify that the Irish DPC uses dialogue to encourage compliance and also seeks to encourage organisations to raise questions and issues with the DPC to avoid breaches. There is a belief that the “brand impact” of even being spoken to by the DPC about an issue can prompt “road to Damascus” conversions in organisations.

    That is all well and good, but my experience working with organisations is that this can result in management playing a game of “mental discounting” (I’ve written about this before in response to the original draft DP Regulation). If there is a perception that the probability of an actual penalty is low, there is little leverage in appealing to the intrinsic motivation of a business manager when their extrinsic drivers for behaviour are pushing the decision towards a “suck it and see” approach.

    Having re-read the discussion paper and EDRI’s response to it, I can’t help feeling that EDRI may be slightly overstating the “ask” that is being made here. They describe it as the “destruction of the right to privacy”, citing the Irish DPC’s own experiences with the Garda PULSE system, which has been plagued by reports of Data Protection breaches since its introduction, despite the Gardaí having a statutory Code of Practice for Data Protection. In 2010 the DPC reported that that Code of Practice was not being implemented in the Gardaí.

    However, this says as much to me about the attitude to Data Protection in some (but not all) parts of the Irish Public Service as it does about the merits of the Data Protection Commissioner’s approach to encouraging compliance, or the specifics of anything that might be discussed on foot of this discussion paper. Furthermore, it raises questions for me about the capability and resources that the Data Protection Commissioner has to execute their function effectively in Ireland, and even suggests that there may be informal barriers to the effective operation of their function in the public sector which need to be urgently considered (given that the Office of the DPC is supposed to be independent).

    Given the extent of the negative findings in the interim report on the 2012 audit of the PULSE system, I personally would hope that there would be some level of penalty for the Garda Síochána for failing to follow their own Code of Practice. But that is a different issue to what the Discussion Paper actually raises.

    What is being discussed (and what would I like them to consider?)

    The Discussion Paper that was circulated invites Ministers at an Informal Council meeting to consider (amongst other things):

    1. Whether wider provision should be made for warnings or reprimands, making fines optional or at least conditional upon a prior warning or reprimand;
    2. Whether supervisory authorities should be permitted to take other mitigating factors, such as adherence to an approved code of conduct or a privacy seal or mark, into account when determining sanctions.

    It flags the fact that the Regulation, as drafted, allows for no discretion in terms of the levying of a penalty. What is proposed is a discussion of whether warnings, or making fines optional, would be a better mechanism than scaring the bejesus out of people with massive fines. This in itself doesn’t kill the right to Privacy, but it does potentially create the environment in which the fundamental Right to Privacy will die, starved of the oxygen of effective enforcement.

    Bluntly – when faced with a toothless framework of warnings and vague threats, businesses and public sector bodies will (and currently do) play a game of mental discounting where the bottom-line impact (in terms of making money or achieving a particular goal) outweighs the other needs and requirements of society. So an organisation may choose to obtain information unfairly, or process it for an undisclosed secondary purpose, because it will hit its target this quarter and the potential monetary impact won’t emerge for many more months or years, after an iterative cycle of warnings. The big penalty will be seen as something “far away” that can be worried about later – after everyone has got their bonuses or their promotions.

    If strict statutory liability is the model being proposed, and the discussion is to look at watering it down to a stern talking-to as a matter of formal policy in the Regulation, I must despair of the wingnuts in my government who thought it would be a good idea to suggest this. But I do agree that tying the hands of the Regulators to the big-ticket monetary penalties might not work in their interests, or in the interests of encouraging compliance with the legislation.

    What is needed is a middle ground: a mechanism whereby organisations can make errors of judgement and be warned, but where the warning carries some sanction with it. The sanction needs to be non-negotiable. But it needs to be transparent and obvious that this is what will happen if you ignore DP rules. It needs to be easily enforced and managed. There should be a right of appeal, but appealing the non-negotiable fixed penalty should carry with it the risk of greater penalties. And the ability of an organisation to benefit from iterative small penalties should be removed if it is a recidivist offender.

    There is a system that operates like this in most EU countries – the Penalty Points system for motoring offences. Hopefully the discussion will move to looking at how a similar system might be implemented for Data Protection offences. The penalties could be tiered (e.g. no cookies notification: €150 fine and 2 points on first offence, €500 and 4 points on second; failure to document processing: €500 fine and 6 points on first offence). The points could be cumulative, with the “optionality” of higher sanctions being removed if you were, for example, an organisation with 100 points against you (congratulations, you’ve failed to up your game and now you are being prosecuted for the full tariff). Organisations bidding for public sector contracts could be required to have a “Data Protection Points” score below a certain level.

    This system could be devised in a way that would take account of mitigating factors. If a code of practice was entered into, and was successfully audited against by an appropriate body, then points could be removed from the “scorecard” at the end of a 12-month period. If there were mitigating factors, a lower-level category of offence might actually apply (I’ll admit I’m not sure how that might work in practice and need to think it through myself a little). Perhaps self-notification to the DPC, engagement in codes of practice, mitigating factors or actions etc. would carry a “bonus points” element which could be used to offset the points total being carried by a Data Controller (e.g. “adopted code of practice and passed audit: minus 3 points; introduced training and has demonstrated improved staff knowledge: minus 3 points”).

    Certain categories of breach might be exempt from mitigation, and certain categories of offence, just like with motoring offences, might be a permanent black mark on the organisation’s Data Protection record (e.g. failure to engage with the DPC in an investigation, or failure to take actions on foot of an audit/investigation).
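    To make the mechanics of the scheme I’m describing concrete, here is a minimal sketch of the scorecard logic. Everything in it is hypothetical: the offence names, the fine amounts and point values, the 100-point escalation threshold, and the idea of “permanent” offences are the illustrative figures from the paragraphs above, not anything defined in the Regulation or the discussion paper.

```python
# Hypothetical "Data Protection Penalty Points" scorecard.
# All offences, tariffs, and thresholds are illustrative assumptions.

TARIFF = {
    # offence -> list of (fine_eur, points) per repeat; last tier repeats
    "no_cookies_notification": [(150, 2), (500, 4)],
    "failure_to_document_processing": [(500, 6)],
}

ESCALATION_THRESHOLD = 100          # at/over this, full sanctions apply
PERMANENT_OFFENCES = {"failure_to_engage_with_dpc"}  # never mitigated


class Scorecard:
    def __init__(self):
        self.points = 0
        self.history = {}           # offence -> number of occurrences
        self.permanent_marks = []   # black marks that cannot be offset

    def record_offence(self, offence):
        """Record an offence; return the fixed fine, or None for a permanent mark."""
        if offence in PERMANENT_OFFENCES:
            self.permanent_marks.append(offence)
            return None
        count = self.history.get(offence, 0)
        self.history[offence] = count + 1
        tiers = TARIFF[offence]
        fine, points = tiers[min(count, len(tiers) - 1)]  # escalating tariff
        self.points += points
        return fine

    def apply_bonus(self, points):
        """Offset points, e.g. passed audit against a code of practice: minus 3."""
        self.points = max(0, self.points - points)

    def escalate_to_full_sanctions(self):
        """Recidivists lose the 'optionality' of lesser sanctions."""
        return self.points >= ESCALATION_THRESHOLD
```

    Note the design choice this sketch makes explicit: the fine for a given offence is a non-negotiable lookup, not a discretionary decision, while discretion survives only in the bonus-points offsets.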

    The scheme could be administered at an EU level by the EDPB, with the points accumulated by organisations operating in multiple member states either being cumulative or averaged based on a standardised list of key offences. Member States could be free to add additional offences to this list locally, within the spirit and intent of the Regulation.
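    The “cumulative or averaged” question for multi-state organisations is itself a design decision worth making explicit. A sketch (again hypothetical – the Regulation defines no such function) shows how different the two options are for the same organisation:

```python
# Hypothetical aggregation of points for an organisation operating in
# several Member States, per the two options mentioned in the text.

def aggregate_points(points_by_state, method="cumulative"):
    totals = list(points_by_state.values())
    if method == "cumulative":
        return sum(totals)            # punishes breadth of operation
    if method == "averaged":
        return sum(totals) / len(totals)  # normalises across states
    raise ValueError(f"unknown method: {method}")
```

    An organisation with 10 points in Ireland and 20 in Germany scores 30 cumulatively but 15 averaged – so the cumulative option bears down harder on pan-European operators, which may or may not be the intent.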

    That would be an innovative idea, based on a model that has been proven to have an influence on compliance behaviour in motoring. And it would provide a transparent mechanism that would ensure that warnings could be given, advice could be sought, and positive engagement could be entered into by Micro Enterprises, SMEs, and large corporates. It would provide a relatively low-impact mechanism for levying and collecting penalties from organisations that are in breach (penalties could potentially be collected as part of annual tax returns, as a debt owed to the State), and it could be used to reward organisations that are taking positive actions (“bonus points”).

    Finally, it would give the basis of a transparent scorecard for organisations seeking to evaluate data processors or other service providers (in the same way as Insurance providers use penalty points data for motoring to assess driver risk), and it would give a clear escalation path to the full sanctions in the Regulation (e.g. 100 points and you go straight to full penalties).

    What it does not give is a death spiral of warnings that never amount to a penalty and, as a result, give organisations a platform to ignore the Right to Privacy. It is an evolution of the conciliatory approach to encouraging compliance, but one that is given teeth in a manner that can be transparent, easily explained, and standardised across the EU27.

    I’ve written about this in 2010 and 2012. Maybe the time is right for it to be discussed?

  • Call the Tweet Police (a slight return)

    An opinion piece by Joe Humphreys in the Irish Times on the 9th of January (which I can link to here thanks to the great work of McGarr Solicitors) discusses anonymous comment on-line. In doing so he presents an argument that would appear to suggest that persons taking a nom de plume in debate are in some way sinister and not trustworthy.

    He suggests three actions that can be taken to challenge “trolling”. I’ve previously addressed this topic on this blog (on the 27th of December 2012, and earlier). I thought I’d examine each of Mr Humphreys’ suggestions in turn and provide agreement or counter-argument as appropriate.

    1. Publicly condemn it. Overall I agree with this. However who or what should be condemned? The pseudonymous comment or the pseudonymous commenter? Should you ‘play the man or the ball’, to borrow a metaphor from sports? The answer is that, in an open society the correct course of action is to either ignore the argument or join the argument. Anything else leads to a downward spiral of tit-for-tat trolling and abuse, one of the very behaviours that has sections of our body politic and mainstream media crying “Down with this sort of thing!”

    2. “Develop ways of discriminating against it… … by technology that helps to authenticate people’s identities”. In my blog post of the 27th of December I address this under the heading of “Bad Idea #1”. The concept of identity is incredibly fluid. As Mr Humphreys appears fond of citing scientists and philosophers, I’m sure he is familiar with Descartes’ writings on the concept of personal identity.

    The idea of an “identity register” is one that raises significant technical, philosophical, and legal issues. South Korea has recently abandoned its attempts to impose a “Real Names” policy on the use of social media due to these issues, and “Real Name” policies in social media have been criticised on Data Protection grounds in Europe. In China, where a “real names” policy is in place for social media, people use fake ID to register, and the Chinese government has failed to get a significant majority of internet users to comply with the law.

    Describing anonymity as a “market failure” to be fixed by enforced identification equates identity with a tradable commodity. This is, ironically, the business model of Facebook, which Mr Humphreys describes as “an invention of Orwellian proportions”.

    3. “Challenge the anonymous to explain why they are hiding themselves. I’ve yet to hear a good excuse…” In my post of the 27th of December I link to an excellent resource (the GeekFeminism Wiki) which lists a number of reasons why people might not be able to use their real names in on-line comment. Time taken to research this: 30 seconds on Google. They include: survivors of abuse, whistleblowers, law enforcement personnel, and union activists.

    The implication made by Mr Humphreys that people choose to comment anonymously because they don’t want their employer to know they are on social media all day is disingenuous to say the least and belies a biased view of those of us who are active users of modern technologies for communication, discussion, and debate.

    Finally, history has a litany of examples of people who, for various reasons, have used pen names to hide themselves. From Leslie Charles Bowyer-Yin (Leslie Charteris, author of The Saint) to Samuel Langhorne Clemens (Mark Twain), to François-Marie Arouet (Voltaire), to Eric Blair (George Orwell), there is a tradition of preparing “a face to meet the faces that you meet” (to borrow a line from T.S. Eliot) for a variety of reasons. See http://en.wikipedia.org/wiki/List_of_pen_names for more examples.

  • Some food for thought

    The Official Twitter Account of the Irish EU Presidency (@eu2013ie) tweeted earlier today about recipes.

    That gave me a little food for thought given the subject matter I posted on yesterday.

    1. Ireland will hold the Presidency of the EU in the first half of 2013.
    2. Part of what we will be tasked with is guiding the Data Protection Regulation through the final stages of ratification
    3. Viviane Reding has been very vocal about the role Ireland will play and the importance of strengthening enforcement of rights to Personal Data Privacy in the EU.
    4. Worldwide media and our European peers will be looking at Ireland and our approach to Data Protection.

    In that context I would hope that any Dáil Committee would bear in mind the importance of the right to Privacy (as enshrined in EU Treaties and manifested in our current Data Protection Acts and the forthcoming Data Protection Regulation) when reviewing legislation and regulation around Social Media.

    While I don’t think that the recipes being tweeted by the @eu2013ie account included any Chinese recipes, the news today about changes in the Chinese Social Media regulatory environment is disturbing in the context of the rights to privacy and free speech. One interesting point about China’s approach to the control of on-line comment, from the FT article linked to above, is this:

    It has also tried to strengthen its grip on users with periodical pushes for real name registration. But so far, these attempts have been unsuccessful in confirming the identity of most of China’s more than 500m web users

    Food for thought.

  • Calling The Tweet Police

    [updated 2012-12-27 @ 17:11 to reflect comments from TJ McIntyre]

    [edited introductory paragraphs at 20:34 2012-12-27 reflecting feedback from Aoife below, fair comment made and responded to]

    [Note: This has been posted today because RTE are doing a thing about “social media regulation”, which means that levers are being pulled that need to be red flagged]

    I drafted this post on Christmas Eve morning 2012. The original post had the introduction below. One person (out of the 600+ who have read this post by now, a few hours after I posted it) felt that the opening was too hyperbolic. Perhaps it was, so I decided to tweak it. I did hope I wouldn’t have to publish the piece I’d drafted. But the fact that the opening item on the 6pm news on the 27th of December 2012 was a piece about the Chairman of the Dáil communications committee announcing that the committee would meet in the New Year to discuss regulating ‘Social Media’ meant that my misgivings about the approach of the Irish political classes to the use of Social Media were not entirely misplaced.

    I’m writing this on Christmas Eve morning 2012. I dearly hope I never have to publish it. If I do, it will be because the Government I helped elect will have abandoned any pretence of being a constitutional democracy and will have instead revealed its true insular, isolated, clientelist nature in a manner that will disgust and appal people. And this will be all the more disturbing as the Government will have used real personal tragedies to justify this abandonment of principles. But I am not hopeful. If this post sees the light of day, something will have gone horribly wrong with the Irish Body Politick.

    That the content of the media coverage today echoed the expectation I set out in the paragraphs below for the rationale of any review of regulation (“cyber bullying” and other misuses/abuses of social media) suggests that, perhaps, this post might contribute a useful counterpoint to a perspective that appears to dominate the mainstream.

    The Issue

    I fully expect within the early weeks of 2013 for the Irish Government to propose regulations requiring that users of social media tweet or blog in an identifiable way. No more anonymous tweets, no more anonymous blogs. The stated reason will be to “combat cyber bullying”. Sean Sherlock TD is quoted in today’s Irish Times (2012/12/24) calling for action on anonymous posting. This is ominous. Others quoted in that article are calling for “support systems” to help TDs deal with the “venom” being targeted at them via social media. While the support systems suggested are to be welcomed, the categorisation of expressions of opinion by citizens as “venom” is, at best, unhelpful and, at worst, disingenuous. What seems to be in the pipeline to stem this tide is almost inevitably going to be some form of requirement that people verify their identity in some way in blog posts or tweets. Remove the veil of anonymity, the reasoning will go, and this venom will go away. The “keyboard warriors” will put their weapons beyond use and step in line with the process of government and being governed. The fact that politicians are lumping Facebook in with these other platforms illustrates the tenuous grasp many have on the facts – Facebook already has a “real identity” policy, which raises problems about what your real identity is and has been flagged as potentially in breach of EU law by at least one German Data Protection Authority.

    Why this is a bad idea

    In Orwell’s 1984 a shadowy figure of the State ultimately breaks the protagonist Smith, requiring him to give up on love and private intimacy and resubmit to a surveillance culture in which the Thought Police monitor the populace and the media tells everyone it is necessary to protect against the “enemy”. That shadowy figure is called O’Brien. My passion for data privacy is a reaction to my namesake, and from that perspective I can see three reasons why this is A VERY BAD IDEA.

    Bad Idea Reason #1  – What is Identity?

    Requiring people to post comments, write blogs, or tweet under their own identity creates a clear and public link between the public persona and the private individual. The supporters of any such proposal will argue that this is a deterrent to people making harsh or abusive comments. However, in a fair society that respects fundamental rights, it is important to think through who else might be impacted by a “real names” policy. There are quite a number of examples of this, the most famous recent one being Salman Rushdie having his Facebook account suspended because Facebook didn’t think he was him. Identity is a complex and multifaceted thing. We all, to borrow a phrase from T.S. Eliot, “prepare a face to meet the faces that we meet”. The GeekFeminism Wiki has an excellent list of scenarios where your “real name” might not be the name you are really known by. In Ireland, people who would be affected by a “real names” policy in social comment would include:

    • Public servants who cannot comment publicly on government policy but may be affected by it
    • Survivors of abuse
    • People with mental health concerns or problems
    • Whistleblowers
    • Celebrities.

    A real names policy would require that every time Bono tweets or blogs about Ireland, Irishness, or Irish Government policies he would have to do it under the name Paul David Hewson. And who the heck would be interested in an opinion expressed by Paul Crossan about epilepsy?

    Bad Idea Reason #2 – How will it work exactly?

    It is one thing to say that you want people to post comments using their identity, but it is another thing entirely to get a system in place that actually works. Identity is a “flexible” thing, as outlined above. Facebook require evidence of your identity in the form of personal ID (passport/driver’s licence). They have the resources to process that data securely. But they still get it wrong (see the Salman Rushdie example cited above). If verifiable identities are required for comment, then how exactly would a small personal blog that is used to exercise my mental muscles outside of my work persona (domestic use) be expected to handle the overhead of verifying the identity of commenters? Would I be expected to get people to register with the blog and provide evidence of ID? Would I be able to get a grant to help implement secure processes to obtain and process copies of passports and driving licences? Or would the State just require that I shut up shop? Would the State indemnify me if this blog was compromised and data held on it about the identity of others was stolen? Every few years we used to hear similar calls about the registration of mobile phones. The argument in favour of registration usually goes: “If they have to register, bad people won’t use these phones”. That argument is bunkum. I’ve written about it at length here, but the short form is:

    1. If people have to register and provide ID for verification, they will use fake ID (as is happening in China with their mobile phone registration requirement)
    2. If the law requires registration, it is unlikely that this would bother criminals: by definition, they find the law an inconvenience rather than a barrier.
    3. If people are required to register without some form of identity verification, then you’ll wind up with Mr D. Duck of The Pond owning a lot of phones – a pseudonym, and so no more identifiable than a picture of an egg.

    Apply this to a proposal for a “real names” policy for tweets, blogs, comments and other social media discourse, and we wind up with a situation where achieving the objective that the proposers of non-anonymised comment seem to be seeking would place a disproportionate burden on those of us who engage in debate on-line. Even then it would not be foolproof. A non-verified identity is nothing more than another pseudonym: I could, for example, use the name of another person when “registering” to comment. Or a fictional duck. It is worth noting that South Korea is abandoning its “Real Names” policy for social media for a variety of reasons.

    Bad Idea Reason #3  –  The logical principle must be technology neutral

    Blogging, tweeting, social media… these are all technologies for self-expression and social interaction that barely existed five years ago and were unheard of in the mainstream a decade ago. Any regulation that requires identification of commenters must therefore be framed in such a way as to anticipate new technologies, or new applications of existing technology, or risk near-instant obsolescence. The regulation would need to be technology neutral. Which means that, in order to avoid it being discriminatory and to ensure it has the fullest possible effect, it would need to be applicable to other forms of technology.

    When debating this on Twitter with Harry McGee on the 22nd December I asked him if he saw a difference between Twitter and a malicious phone call or an anonymous pamphlet. His response was they were, in his opinion, the same. So, if tweets are the same as anonymous pamphlets, the logical extension of needing to be able to identify the tweeter is a need to be able to identify the pamphleteer. The State would want to be able to identify the author of a published thought. We have seen this before. In fact, the seeing of it before is one of the reasons that the EU has a right to personal Data Privacy (introduced in the Lisbon Treaty) and why the strictest interpretations of Data Protection laws in Europe tend to be in Germany and former Soviet bloc countries. Have we managed to forget that, within the lifetime of people now in their mid thirties, governments in Eastern Europe required people to register their typewriters with the State so the State could identify the writers of letters, plays, pamphlets and other communications? As Mikko Hypponen of F-Secure (one of the world’s leading experts on information security) says in one of his many presentations:

    In the 1980s in the communist Eastern Germany, if you owned a typewriter, you had to register it with the government. You had to register a sample sheet of text out of the typewriter. And this was done so the government could track where text was coming from. If they found a paper which had the wrong kind of thought, they could track down who created that thought. And we in the West couldn’t understand how anybody could do this, how much this would restrict freedom of speech. We would never do that in our own countries. But today in 2011, if you go and buy a color laser printer from any major laser printer manufacturer and print a page, that page will end up having slight yellow dots printed on every single page in a pattern which makes the page unique to you and to your printer. This is happening to us today. And nobody seems to be making a fuss about it. And this is an example of the ways that our own governments are using technology against us, the citizens.

    So, if we can uniquely identify the typewriter or the printer, shouldn’t we take the logical step and have the owner register it, just like in communist East Germany in the 1980s? So that when a pamphlet or letter is sent that has the wrong kind of thought, the relevant authorities can take action and immediately stop that kind of thing. But sure, we’d never do that in our own country. We’d just ask everyone to register their identity before blogging or tweeting. Totally different. The Government would never propose the creation of a register of printer owners. Would they? {update: here’s an article from EFF.org outlining their take (from the US) on why “real name” policies and regulation are a bad idea}

    Use the laws we have, don’t create crazy new ones

    But something must be done!! This is an intolerable thing, this “cyberbullying”. And indeed it is. But let’s not get hung up on the label. It is not “cyberbullying”. That would be bullying by a fictional race from the TV show Doctor Who.

    What this is, is inappropriate and/or malicious use of communications networks and technologies. It is no different from a smear poster campaign, a co-ordinated letter-writing campaign, or a malicious calling campaign. And there are already laws aplenty to combat this in a manner that is proportionate with the curtailment of freedoms of speech and rights to privacy. Bluntly: if your conduct on-line amounts to a criminal act or defamation, it is almost inevitable that your illusion of privacy will evaporate once the blow-torch of appropriate and existing laws is applied.

    The power to pierce privacy in this case comes from the pursuit of a criminal investigation of what are deemed, under the Communications (Retention of Data) Act 2011, to be serious offences. Any social media provider will provide information about users where a serious offence is being investigated. It’s in their terms and conditions (see Twitter’s here – Section 8). This would allow the identification of the IP address used at a given date and time for transmitting a message via Twitter, and could be used to compel a telecommunications provider to provide the name of the account holder and/or the location of the device at the time and at present. But it is done under a clear system of checks and balances. And it would be focussed just on the people who had done a bold thing that was complained about, not placing a burden on society as a whole just in case someone might do something naughty. I would ask the Government to use the laws we already have. Update them. Join them up. Standardise and future-proof their application. But do so in a technology-neutral way that isn’t swiping at flies while ignoring larger concerns. And please don’t mandate non-anonymised comment – it simply doesn’t work.

    The Risk

    When proposing any course of action it is advisable to prepare for the unintended consequence. With this chatter of requiring comment to be identifiable comes the risk that, should it happen, the social media data of Irish citizens will become either more valuable (because marketers will be able to mine the “big data” more efficiently) or less valuable (because we switch off and there is less data to meaningfully mine). There is also the risk that our Government will, yet again, send a signal to the world that it just doesn’t understand On-Line, for all its bleating about a “Knowledge Economy”. And at that point we may become less attractive to the foreign new media firms who are setting up base here. Like Twitter, LinkedIn, Facebook, etc.

    Conclusion

    Requiring identifiable comment is a dumb move and a silly non-solution to a non-problem. The problem is not anonymity. The problem is actually how we evolve our laws and culture to embrace new communication channels. We have always had anonymous comment or pseudonymous dispute. Satire thrives on it, art embraces it, and literature often lives through it. Just because every genius, wit, and idiot now has a printing press with a global reach does not mean we need to lock down the printing presses. It didn’t work in Stasi East Germany or other Soviet Bloc dictatorships. Other solutions, such as working the laws we already have, are preferable and are more likely to work. Educating users of social media that there are still social standards of acceptable behaviour is also a key part of the solution.

    Tagging the typewriters is NEVER the answer in a democracy. This O Brien stands firmly against this particular Thought Crime.

  • Europe v Facebook–a lesson in clarity

    I was on the news this afternoon. The radio. So the world was spared my visage. My words were quick in response to rapid fire questions about why Europe v Facebook had announced they were suing Facebook in Ireland and their comments about the Irish Data Protection Commissioner.

    To put some clarity on my comments (which I believe were reasonably balanced) I thought I’d write a short post here in my personal rant zone. Note I am not a lawyer but am renowned for my Matlock impressions.

    Europe v Facebook are suing?

    That’s nice. Who are they suing? Why?

    Well, it would seem they want to sue Facebook in the Irish Courts for breaches of the Data Protection Acts. That’s nice. Section 7 of the Data Protection Acts allows a Data Subject to sue for specific breaches of the Acts – the Duty of Care is contained in Section 7 and the Standard of Care is effectively Section 2 (and, given the level of specificity with which Accuracy as a test was defined in the recent Dublin Bus v DPC case, a strict interpretation would likely be applied by the Courts as to what the standard would be).

    But that is not Europe v Facebook suing. That’s a single punter. Or a series of single punters. Individually. Because we (as Europe v Facebook acknowledge) don’t have Class Actions here in Ireland. So each person rolls the dice and takes their chances in an area of law with little jurisprudence or precedent behind it in Ireland. Oh. And it would likely be a case taken at Circuit Court level unless the individuals wanted to risk large costs if they lost.

    Of course, Europe v Facebook could take a case against the State to the ECJ on the basis that the State hasn’t properly implemented the Directive. But as we basically photocopied it in a hurry that might be a long shot. The ECJ tends not to get directly involved in telling Member States how to spend money, particularly when the rest of the EU machinery is trying to get us to spend less money. But it is an option.

    Europe v Facebook itself can’t sue under Section 7. No duty of care is owed under the Data Protection Acts to a body corporate.

    What it could do is appeal a decision taken by the Data Protection Commissioner on foot of one of the 22 complaints the organisation has submitted. But apparently Europe v Facebook won’t state clearly what the specific complaint is so that a decision can be taken or what specific complaints they require decisions to be taken on, ergo there can be no decision from the DPC and ergo there is nothing to appeal against.

    But suing under Section 7 is entirely separate to any DPC investigation (just as suing someone for personal injuries arising from an assault is separate to a criminal investigation of assault). Just as the DPC Audit is a separate process from any investigation of a complaint.

    Why the focus on Ireland and the Irish DPC?

Well, Facebook have decided, for a variety of reasons, to set up shop in Ireland. (Europe v Facebook seem obsessed with tax breaks but there are other reasons multinationals come to Ireland. The scenery. The nice people. The multilingual skill sets. The cluster effect of other companies.)

    In setting up Facebook Ireland Ltd Facebook also decided that, for any Facebook User outside of the US and Canada, Ireland would be the country and legislative framework and enforcement framework they would comply with.

    So the Irish DPC became responsible for policing the activities of Facebook globally.

    Hence Europe v Facebook are dealing with them.

    Dealing with the DPC

    Europe v Facebook are making some odd demands. They want the evidence from the investigation of their complaints before they will decide to proceed with their complaints. Nuts.

That’s like asking the gardaí for the Book of Evidence before deciding if you will press charges against a thief. Let’s ignore the fact that the ‘evidence’ might contain personal data of other individuals or may include commercially sensitive or other confidential information. If Europe v Facebook believe they have valid complaints they should specify which ones they want to move to a decision on and then take the process on.

Personally and commercially I have found the DPC to be both a pleasure and a frustration to engage with. But the process is straightforward. Pissing around like a spoiled teenager is, frankly, in my opinion, just a waste of the limited time and resources of the DPC.

    Europe v Facebook have highlighted that they have the support of German Data Protection Authorities. For balance it is worth pointing out that they have the public support of one of FIFTEEN German Data Protection Authorities, not counting the Federal Data Protection Authority for Germany.

    It’s a bit like having the backing of Carlow County Council on a matter of Foreign Affairs policy. Great to have it but not conclusive until the Feds (who represent Germany at the A29 Working Group) back the position. Yes it is important and needs to be noted and considered, but it is not in and of itself decisive.

    Time and Resources

The audit of Facebook and subsequent reviews have taken up over 25% of the resources of the Office of the DPC. External technical support was resourced pro bono from a UCD campus company. Europe v Facebook’s press release says they couldn’t find the company. They didn’t look very hard. All the details about the company and the qualifications of the person doing the work were in the first Audit Report.

Europe v Facebook does have a point though: the DPC has no “legally qualified” people. Now, that’s an interesting phrase. Do they mean a qualified solicitor or barrister entered on the Roll of the relevant professional body here, or do they mean someone with a legal qualification (such as a BBLS degree) who has not gone on to qualify? Frankly, if it is the latter I’m quids in… I’ve a legal qualification and I’m a recognised expert internationally on Data Governance practices.

They point out that the DPC is faced with armies of lawyers when dealing with companies. No shit. A policeman. Having to deal with lawyers. Who’d a thought it? The implication is that they are outclassed in the legal skillz department. And guess what… they are. And they will be forever. For the simple reason that the salary scale of a civil servant wouldn’t match that of the hired guns on retainer. The smarter people go where the money is. Just as the Attorney General and the DPP and Revenue and other high-skill arms of Government lose skilled resources to the private sector, so too would the DPC. I would be surprised if they haven’t already lost members of staff to law firms.

And frankly the focus on a tick-box skill set is narrow-minded in my view. Hiring people who understand how businesses use data, the kinds of technology that are out there, the actual best practices in Governance etc. is equally important, if not more so, to driving compliance.

    The Upshot

    Max Schrems, the law student behind Europe v Facebook, will likely sue Facebook in Ireland. Likely at the Circuit Court level. The DPC will likely be called to give evidence, and they will submit the Audit Report. Facebook will probably be asked in discovery to provide information about their communications with the DPC.

    Europe v Facebook will do diddly squat, given they have no standing in the case. They might float a case up to the European Court re the effectiveness of the implementation of the Directive and the adequacy of resourcing and skills of the DPC. But the Directive is largely silent on those questions (as is the Regulation). Beyond that they can and will do nothing until they piss or get off the pot and tell the DPC what complaints they want decisions on. Then they are free to appeal the decisions.

    The real upshot is that this kerfuffle and the commentary surrounding it should focus attention on the resourcing, training, skills, qualifications, and competence of the Data Protection Commissioner’s office. They are diligent hard working servants of the public who could probably benefit from upskilling in a variety of areas either through hiring or training. They could also do with more resources, but the focus needs to be on brains not bodies.

    The continuing failure of the Courts to properly apply the criminal sanctions in the Acts should also be looked at. Having cases struck out as it is a “first offence” is feck all use when the DPC engagement model is to only prosecute after a second or third occurrence of an offence. I would consider the need for written judgements in DP cases to be important. I would also consider the need for a published archive of Enforcement notices and penalties, similar to the publications from the ICO in the UK, to be a useful step forward.

    I wish Europe v Facebook luck in their endeavours. A binding precedent on Data Protection compliance would be nice. But they would do well to remember that the Audit and the investigation of their complaints are two different processes and they need to engage with their process to bring the investigation leg to a close.

Only by specifying the complaints they require a decision on can Europe v Facebook bring the investigation of those complaints to a conclusion, either through findings they agree with or an appeal that is upheld.

The potential for legal action by a Data Subject under Section 7 is interesting and has already led to a number of key cases moving through the Irish Courts System at the moment. It would be a valuable contribution to Data Protection law here and elsewhere in Europe. But I can’t help but feel that the better approach would have been to engage positively with the Irish DPC and work towards clarity rather than calling the independence of the DPC into question and being confrontational.

    But maybe we are all just pixie heads.

  • Why (with due respect) Ian Elliott is mistaken

    Ian Elliott is the chairman of the National Board for Safeguarding Children in the Catholic Church. It is an agency of the Catholic Church in Ireland and is not a State agency. It is tasked with ensuring that the Catholic Church in Ireland follows and implements its own child protection guidelines, particularly with reference to allegations of clerical sexual abuse of children.

    It is a difficult job. It is an important job. And it is a function and role that we should be thankful someone is filling.

However Mr Elliott seems to be operating under the misapprehension that the Data Protection Acts are an impediment to the NBSCCC doing its job effectively. This is not the first time that this fig leaf has been trundled out. Similar issues raised their heads in 2011 when Bishops refused to cooperate with Mr Elliott on spurious Data Protection grounds that were dismissed by the Data Protection Commissioner. Given that the NBSCCC is in effect an agency of the Church it was a bit odd seeing the middle management of the Church trying to wheedle out of cooperating with it.

    In the present complaint about the Data Protection Acts Mr Elliott cites the example that the Gardaí are not able to pass information to his organisation without there being a risk of “imminent harm” to a child, which causes problems for the processes of safeguarding children. I believe Mr Elliott to be mistaken in his analysis of where the problem lies. Let’s look at this.

    An allegation that someone has committed a criminal offence is sensitive personal data. Information about an identified person contained in such an allegation is personal data. Therefore it can only be disclosed either with the consent of the Data Subject (and in this case the Data Subject is the individual about whom the allegation has been made) or where another exemption under Section 8 of the Data Protection Acts can be identified. The relevant condition that seems to be in dispute here is Section 8(d) which requires that the disclosure is

    Required urgently to prevent injury or other damage to the health of a person or serious loss of or damage to property.

In effect he is stating that the Gardaí (or possibly the Attorney General, who would likely have advised the Gardaí) are taking the view that there is no imminent harm and therefore there are no lawful grounds for onward disclosure. That does not mean that the Gardaí are not retaining the data and processing it themselves. Such processing, however, would fall under the protection of Section 62 of the Garda Síochána Act 2005, which places certain restrictions on the disclosure of data by members of An Garda Síochána, particularly related to investigations or other operational information. Breaches of that section carry potentially significant penalties (and, as they result in a criminal conviction, could be at best career limiting for members of the force).

As the NBSCCC is not a State body that investigates criminal offences, Section 8(b) does not apply to them. As child safety in the Church is not a matter of National Security, Section 8(a) does not apply. As there is no legal advice being sought (the Gardaí are not asking the NBSCCC for a legal opinion) and there are no legal proceedings, Section 8(f) doesn’t apply. And given that the subject of an allegation is unlikely to have consented to their data being disclosed, the Consent exemption cannot be relied on.

Which leaves us with Section 8(e). Section 8(e) is what I believe Mr Elliott was actually alluding to (but I may be mistaken). Section 8(e) allows for the disclosure of information where it is

    Required by or under any enactment or by rule of law or order of a court

    So the Data Protection Acts contain a provision which would enable the sharing of data by the Gardai with the NBSCCC in any or all circumstances Mr Elliott might wish. He just needs legislation to allow it. This could be either primary legislation or a Statutory Instrument. Primary legislation would have the added benefit of giving some scope to making the role of the NBSCCC more formal. Any form of legislation would potentially provide a framework for properly balanced sharing of information from other State Agencies.
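The Section 8 reasoning above can be sketched as a simple decision function. This is purely illustrative: the ground names and dictionary structure are my own shorthand for the statutory exemptions discussed, not a legal tool or anyone's actual system.

```python
# Illustrative sketch only: the Section 8 reasoning above expressed as code.
# The ground names are my own shorthand for the statutory exemptions.

def disclosure_permitted(grounds):
    """Return (allowed, reason) for a proposed disclosure of personal data."""
    if grounds.get("data_subject_consent"):
        return True, "consent of the Data Subject"
    if grounds.get("urgent_risk_of_harm"):
        # Section 8(d): urgently required to prevent injury or other damage
        return True, "8(d): urgently required to prevent injury or damage"
    if grounds.get("required_by_enactment_or_court"):
        # Section 8(e): required by or under an enactment, rule of law,
        # or order of a court
        return True, "8(e): required by or under an enactment or court order"
    return False, "no lawful ground for disclosure"

# The NBSCCC scenario as described: no consent, no imminent harm,
# and (as yet) no enactment requiring the disclosure.
print(disclosure_permitted({}))  # (False, 'no lawful ground for disclosure')

# With enabling legislation in place, Section 8(e) would do the work:
print(disclosure_permitted({"required_by_enactment_or_court": True}))
```

The point the sketch makes is that nothing in the Acts needs to change: the `required_by_enactment_or_court` branch already exists, it just has nothing to fire on until the legislation is passed.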

    The legislation would, of course, have to include some outlining of the protocols and security controls and limitations on processing that would be applied to the data but that is simply good practice.

    But (and here is the important bit) the Data Protection Acts would not need to be touched. The Junior Minister with responsibility for Children would simply need to legislate for some thought through Child Protection rules that would enable balanced and appropriate sharing of information.

    The risk in touching the Data Protection Acts is that you could create a situation where the Risk Committee of any employer could potentially seek disclosure from the Gardaí of any reports of specific criminal offences or reports of possible offences committed by current or prospective employees (unless you write the NBSCCC specifically into the legislation, which is derived from an EU Directive that is about to be replaced with a Regulation so… eh… not really possible). That is a dangerously broad and clumsy tool to apply. The law of unintended consequences is still on the metaphorical statute books after all.

Mr Elliott, I politely submit that your analysis – or perhaps the media’s one-sided interpretation and reporting of it – is flawed. Leave the Data Protection Acts alone. Government – legislate for a clear exemption under Section 8(e) and solve the problem the right way.

Of course if the Data Protection Acts are going to be opened up, the logical thing to do, given the impending Data Protection Regulation, would be to legislate on the basis of the principles in the Regulation. Beating the rush so to speak and definitely putting a stamp on the Irish EU Presidency. And I’ve a shopping list of other things…

  • The Anti-Choice Robodialler–some thoughts

    The Intro

Robodialling, autodialling, power dialling. Call it what you will. It is the use of computers and computer telephony integration to save the tired fingers of call centre workers and turn the job into a battery farm of talk… pause… talk.

    I know. I’ve worked with them. Heck, I designed the backend data management and reporting processes for one of the first big installations of one in Ireland back in the late 1990s. It was fun.

    I also learned a lot about how they work and some of the technical limitations and capabilities of them. Such as the lag that can happen when there is no agent available to take a call so the person dialled hears noise and static. Or the fact that you can trigger the dump of a recorded message either as a broadcast or based on the machine’s interpretation of whether it’s hit an answering machine or not (at least on the snazzy RoboDial9000 we were putting in).

    And I also remember the grizzled CRM and Direct Marketing consultant who was helping advise on best practice for using it telling the management team:

    “Don’t. For the love of all that is sacred don’t. Doing that shit just gets our industry a really bad name because it freaks people out.”

    Today – Fallout and penalties

    Today I’m trying to reengage brain after a night on twitter helping to advise people how to register their complaints about the use of a Robodialler to push anti-choice messages to unsuspecting households. The DPC is now getting up to 3 complaints every 5 minutes on this.

    Each complaint could carry a €5000 penalty on summary conviction. That is the tricky bit as this requires evidence gathering etc. This could take time. But the DPC has time available to them to conduct investigations and bring prosecutions. And if it is a case that this is an individual acting on their own behalf, the DPC has the powers to enter domestic premises to conduct searches and can levy a significant personal penalty of up to €50,000.

    Oh.. and if the dialler is in the UK the maximum penalty per offence is £500k and the DPC and ICO do talk to each other. A lot. They’re co-hosting an event in Newry at the end of the month.

    The unintended consequences

    My thoughts now turn to the unexpected consequences this robodialling will have.

    1. All future market research or polling that may be done on this topic by phone is borked and broken. People will be suspicious, even when the nice man from the polling agency ticks all the boxes and explains who they are etc.
    2. There will be a wave of “false positive” complaints to the DPC arising from any phone polling on this topic (for the reason outlined above). This will tax the resources of the DPC, and will tax the resources of market research and polling organisations as they work to deal with complaints and investigations etc.

The impact of this on debate is that the published results of any polling will be distorted and will be potentially unreliable as barometers of public opinion. Face-to-face field work results will likely be less tainted by the robodialler experience but will be a LOT more expensive and time consuming for media and other organisations to run. So there may be fewer of them.

    The dialler incident will tie up resources in the ODPC that would otherwise be spent dealing with the wide range of complaints they get every day, driving investigations, conducting audits, and managing the large number of existing open cases they are working through.

22 staff. In total. 25% of their staff regularly being tied up dealing with Facebook alone. With a mandate that covers ANY non-domestic processing of personal data. (By comparison, the Financial Services Regulatory Authority has three times that number of staff at Director level alone.)

    Another consequence of this is that we might get a little debate about how this is no different from the placard waving and leaflet shoving of the Anti-choice camp historically. But it is different. Disturbingly different. If I am walking on the street with my daughter and a leaflet or picture is thrust in her face, I can turn away, walk another route, or some other strategy to help shield my daughter from disturbing imagery.

    Last night I read of parents whose small children or young tweenagers answered the call and listened and have been upset by the calls.

    The wrap up

I worked in a telemarketing business early in my career. Even then (nearly 2 decades ago) we were cautious about ringing people in the evenings. It is an invasion of the private family time of individuals, an abrupt interruption of what Louis Brandeis called “the right to be let alone”. No recorded messages were left. Human interaction was key to ensuring we only continued to encroach where welcomed, and requests to be removed from lists were treated respectfully. “Do Not Call in Evenings” was a call outcome code in the robodialler that prevented that number from ever being called in the evening again (at least in theory, when the software worked correctly and the teams did their jobs right).
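The outcome-code suppression described above is, in essence, a check the dialler runs before every call. A minimal sketch, assuming invented code names, a made-up phone number, and an 18:00 evening cut-off (none of which come from the actual dialler I worked with):

```python
from datetime import time

# Illustrative outcome codes; the names and evening cut-off are assumptions.
DO_NOT_CALL = "DNC"               # never call this number again
DO_NOT_CALL_EVENINGS = "DNC_EVE"  # suppress evening calls only
EVENING_STARTS = time(18, 0)

def may_dial(number, outcome_codes, now):
    """Should the dialler ring this number at this time of day?"""
    codes = outcome_codes.get(number, set())
    if DO_NOT_CALL in codes:
        return False
    if DO_NOT_CALL_EVENINGS in codes and now >= EVENING_STARTS:
        return False
    return True

outcomes = {"015551234": {DO_NOT_CALL_EVENINGS}}
print(may_dial("015551234", outcomes, time(19, 30)))  # False: evening suppressed
print(may_dial("015551234", outcomes, time(11, 0)))   # True: daytime is fine
```

The robodialler in question plainly had no such check, or nobody bothered to use it.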

To tread on that right to be let alone to ram a pre-recorded message into the ears of an unsuspecting and unidentified audience belies an arrogance and ignorance on the part of those who thought it would be a good idea to choose to commit a criminal offence to push their message, ignoring both the law and the choices people had made with respect to their own personal data privacy (a fundamental right of all EU citizens).

    _____

If you have received a call from a robodialler with an automated message, or where the caller did not identify themselves to you, you should register a complaint with the Data Protection Commissioner.

    Investigations can be complex and it may be impossible to verify who to prosecute, but by registering the complaint you can help build the case against people who are acting illegally.

    Try to find the number that called you (in your phone’s call log). Note the date and time of the call. If the number is blocked, include that fact in your complaint. While numbers are blocked from being presented to you, the phone network will still know who called you and having the date and time you received the call will potentially enable ComReg and the Data Protection Commissioner to request data from the telecommunications companies to trace calling numbers. They may subsequently require you to give consent to accessing your phone records as part of their investigation but only to identify the number that phoned you on that date/time from the network call logs that are generated.

  • A little bit of root cause analysis (Web Summit)

    One of the issues highlighted by Karlin Lillington in her article today was the fact that people who had not opted into mailings were receiving them and there was inconsistency between the format and content of mailings received, with some including an option to opt-out and others not.

This is symptomatic of a disparate data architecture at the backend. Which is consultant speak for “they’ve got too many buckets”.

    This is a classic Information Quality problem. My friend and colleague Dr Peter Aiken identifies the root cause of this as being the training received in Computer Science courses world wide which primes people to solve problems by building/buying another database.

    Based on very quick analysis conducted today with help from @orlacox (one of the new “women of IT” in Ireland who I’ve discovered thanks to #dws) the following sources and tools for email communications were identified as being in use by Dublin Web Summit.

    1. Contact Form 7 plugin on the website (which is running on WordPress). This page captures email addresses in the contact form. No information is given about uses for the data you provide on this form and there is no option to opt-in to receiving marketing messages from DWS or its associates. So… if you fill in that form they should only be responding to your question and doing NOTHING else with your name and email address. [the use of contact form 7 was confirmed by inspecting page source for the form]
    2. CreateSend. On the website there is an option to provide an email address to subscribe to their mailing list. This is processed using CreateSend. I’ll return to this later for another point. [the use of CreateSend was determined by an inspection of the page source]
    3. MailChimp. @OrlaCox received an email from the organiser of the WebSummit the header of which confirms it was sent via MailChimp.
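The kind of quick inspection described above amounts to grepping page source and email headers for tell-tale markers. A rough sketch, noting that the marker strings below (the “wpcf7” class prefix Contact Form 7 puts in its markup, CreateSend-hosted URLs, MailChimp’s mailer header) are typical fingerprints rather than guarantees, and any hit should be verified by hand:

```python
# Fingerprint strings are typical markers, not guarantees; treat any hit
# as a hint to verify manually, not proof.
PAGE_MARKERS = {
    "Contact Form 7": "wpcf7",        # CSS class/ID prefix the plugin emits
    "CreateSend": "createsend.com",   # form action / tracking URLs
}
HEADER_MARKERS = {
    "MailChimp": "mailchimp",         # e.g. "X-Mailer: MailChimp Mailer"
}

def identify_tools(page_source="", email_headers=""):
    """Return the mailing tools whose markers appear in the given text."""
    found = []
    for tool, marker in PAGE_MARKERS.items():
        if marker in page_source.lower():
            found.append(tool)
    for tool, marker in HEADER_MARKERS.items():
        if marker in email_headers.lower():
            found.append(tool)
    return found

# Invented sample inputs for illustration:
html = '<form class="wpcf7-form" action="https://example.createsend.com/t/j">'
hdrs = "X-Mailer: MailChimp Mailer"
print(identify_tools(html, hdrs))  # ['Contact Form 7', 'CreateSend', 'MailChimp']
```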

    Fair Obtaining

If anyone involved in Dublin WebSummit took contact details supplied via the contact form on the website and included them in commercial promotional email marketing, that is a breach of the Data Protection Acts 1988 and 2003 and SI 336, which require that

    • Data be processed for a specified purpose and not for a purpose incompatible with the specified purpose
    • Consent be obtained for marketing by email.

    It is not possible in this case to argue “soft opt-in” based on terms and conditions that are associated with booking for the event. There is no commercial relationship in this context that can be relied upon as “soft opt-in” consent.

    [What would I suggest as a learning: If you have contact form, ASK PERMISSION to add people to contact lists. Otherwise you HAVE NO CONSENT]

    The Two Bucket Problem

DWS appears to have been using two bulk email platforms. The technical term I use to describe that kind of data management strategy is TBSC (Totally Bat Shit Crazy). It invites variation in process (one platform having opt-outs built in to the message, the other not) and inevitably leads to inconsistencies in data (people loaded to both platforms may wind up opted out on one but not the other, plus all the headaches of keeping data synchronised).
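The failure mode is easy to demonstrate. A minimal sketch with invented subscriber data: the same person exists in both buckets, opts out on one platform, and the other platform never hears about it.

```python
# Invented data for illustration: two buckets, one shared subscriber.
platform_a = {"anne@example.com": {"opted_out": True},
              "brian@example.com": {"opted_out": False}}
platform_b = {"anne@example.com": {"opted_out": False},   # never synchronised!
              "brian@example.com": {"opted_out": False}}

def opt_out_conflicts(a, b):
    """Emails whose opt-out status disagrees between the two platforms."""
    return sorted(email for email in a.keys() & b.keys()
                  if a[email]["opted_out"] != b[email]["opted_out"])

print(opt_out_conflicts(platform_a, platform_b))  # ['anne@example.com']
```

Anne opted out on platform A, so platform B keeps mailing her, and the organisation keeps breaching her clearly expressed preference. With a single bucket there is nothing to reconcile in the first place.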

    It is symptomatic of the “jump in and get it done” culture that can be brilliant… if you have thought through the things that need to be done to get it done.

Information, like every other asset in an organisation, has a well defined Asset Life Cycle. The acronym is POSMAD. This resource by my friend Danette McGilvray (who introduced me to the idea a number of years ago) explains it in detail.

    DWS seems to have jumped into the Obtain and Store phases without doing the Plan. So they wound up with two (or more) buckets within which they had to manage data.

    (As an aside, it would appear there may be a third bucket as the media registration appears to have been backed by Google Forms).

    [What would I suggest as a learning: This is MASTER DATA. You need to have a SINGLE BUCKET so you can control what data is coming in, consistently apply suppressions, consistently manage content and format of messages, and generally only have one ‘house’ you need to perform housekeeping on. Tools like MailChimp let you set up multiple lists that people can subscribe to. Use multiple lists. Not multiple tools. That way you have a “Single View of the truth” and won’t make an arse of managing your obligations under the ePrivacy Regulations and/or the Data Protection Acts]

    [What I would strongly advise: Apply the POSMAD framework to the sketching out of the platform you will build to execute and deliver. It will help you resist the temptation to throw tech and tools at the strategy without having a strategy. It will prevent you from implementing things that are TBSC]

    Safety in Harbor – Remembering that Mail List tools are Data Processors

    Every time you use an external mailing list service you are engaging a Data Processor. As part of that a Data Controller needs to pay attention to a number of things. Among them is the thorny issue of whether the data is leaving the EEA at any point and whether there is actually any lawful basis for allowing that to happen.

    The DPA doesn’t prevent Cross Border transfers like this. And it doesn’t make using a Cloud Service or Outsourced service illegal. It makes doing it wrong and without attention to detail something that could constitute an offence.

    Mailchimp is a reasonably good tool. One good thing about it is that it is Safe Harbor registered. This means that a Data Controller in the EU can send data to Mailchimp in the US without being in breach of S11 of the Data Protection Acts.

CreateSend.ie is a company based in Co. Clare. However, CreateSend.com is the server that the data is written to if you register for a mailing list hosted by CreateSend. That server is hosted in Charlotte, North Carolina. So, data is going to the US. There may be a “chain of processors” in place here (CreateSend Ireland, CreateSend US). Either way, data is going out of the European Economic Area. So one would expect one of the legal grounds for cross-border transfer to be in place.

    • CreateSend does not appear to be registered for US Safe Harbor. (It may be that their registration is under a different name.)

    A scan through the terms and conditions of CreateSend.ie indicates in Section 2.7 that the data provided to CreateSend is indeed passed to servers in the United States. But then it goes a little bit squirrely:

    you warrant that you have obtained the consent of the relevant individuals to the storage and transmission of their personal information in this manner.

In other words, any organisation that uses CreateSend as their email marketing platform has to get consent from their subscribers to transfer personal data to the United States. Not having that consent means any transfer is illegal under S11 of the Data Protection Acts.

    There is no notice of or consent sought for a transfer of personal data to the US when signing up for that mailing list. I know. I’ve done it. What I got was a lovely pdf telling me the name, department, and organisation of every attendee at the conference.

    So… to get a list of everyone at the conference I don’t even have to attend the conference, I just need to sign up to a mailing list. That’s TBSC strategy yet again.

    But I digress.

    [A lesson to learn: When selecting an email marketing service provider, it pays to do due diligence and make sure that you have clear lawful bases for the processing you are proposing to do. Safe Harbor is a good thing to look for. Relying on consent is allowed, but you have to get the consent]

    Conclusion

    Dublin Web Summit had too many buckets that were filled up without any apparent thought to Data Protection compliance and how to manage it.

A single email marketing platform, with a simple and compliant structure for transferring data outside the EEA if required, and a clearly defined strategy for using it effectively and in a compliant manner would have saved a host of headaches.

    The approach that has been taken would raise questions about how prepared DWS would be if audited or investigated by the Data Protection Commissioner.

  • Dublin Web Summit, Data Protection, Data Quality, and Brand

The KoolAid is being quaffed in great quantities this week in Dublin. And, having run national and international conferences in the Data Protection and Data Quality fields, I have to respect the achievement of the organisers of the Dublin Web Summit for putting together an impressive event that showcases the level of innovation, thought leadership, and capability in web, data, and all things tech.

    Yes. About that “thought leadership”…

    Data Protection

    Today’s Irish Times Business Section carries a story by Karlin Lillington about things that have been happening with her personal data at the Web Summit. An event she is not attending and has not registered for but for which she:

    • is registered as an attendee
    • is listed on the media attendees list
    • has had her contact details distributed to sponsors and companies attending the event
    • has had her details shared with a social networking application that has pulled data from her Facebook profile

    In addition, she highlights that a list of ALL attendees is being distributed by the organisers if you request it through their Facebook page, but there is no opt-out for being included on this list and nothing in your registration that informs you that this will be happening.

    Emails are being sent out without people having opted-in, and not every email that is being sent out has the required opt-out. And I suspect that that may be the tip of the iceberg.

    Karlin reports that there have been complaints filed with the ODPC. My twitter stream this morning confirms that there are a number of people who I follow who have complained about how their data has been used. Many of these people would be the kind of people who you’d like to see fronting the thought leadership and innovation in web and data stuff, and they are irked at how their data is being abused.

    The DPC apparently has had previous complaints about Web Summit and has engaged with them in an “Advisory Capacity”. In my experience working with clients who have been subject to Data Protection complaints and have been investigated by the DPC, that is the Data Protection equivalent of “helping the police with their enquiries”. Web Summit has been handed rope. They have been guided and advised as to what needs to be done to be compliant (in keeping with the gummy tiger provisions of Section 10 of the Data Protection Acts which require the DPC to seek amicable resolution first and to focus on encouraging compliance rather than punish breaches).

Dublin Web Summit has chosen, whether through a deliberate decision or a series of ego-driven and ignorance-fuelled errors of judgement, to ignore the advice of the DPC and continues to act in a manner that flouts the Data Protection rules that (and here’s the kicker) are not ‘nice to have’ but are guaranteed under Article 16 of the TFEU and have been subject to a number of recent tests at Circuit Court and High Court level.

    Basically this is a Data Protection cluster f*ck of the highest order that illustrates one of the key problems with the “Innovation culture” in Ireland and, on the part of Government, either a blatant hypocrisy or a sociopathic ability to hold multiple contradictory positions at once. We want to promote Ireland as a great place to do business with web and data. And we want to be seen to be a bastion of increasingly responsible governance and regulation (after all, we’ve learned the lessons of the financial services collapse right? That one where we had a Regulatory regime that was of so light a touch it could earn extra pin money touting for trade along the canal.) But for feck’s sake, don’t let the LAW get in the way of the use of TECHNOLOGY.

    Dublin Web Summit has almost certainly breached the Data Protection Acts in a variety of ways, and many of those breaches would appear to have occurred AFTER the DPC had given advice and guidance on what not to do. So the Web Summit organisers might want to check section 29 of the Data Protection Acts (never used, but there’s always a first time).

    Data Quality

    Data Protection and Data Quality go hand in hand. Heck, the principles for Data Protection are referred to in Directive 95/46/EC (and a variety of other places) as “Principles for Data Quality”. But on a more practical level, the approach the Web Summit has taken to obtaining and gathering their data and putting it to use has created some Data Quality problems.

    Take Karlin, for example. Her contact details have been included on a media contact list for the event, touting her as someone from the media who is attending. A variety of sponsors and exhibitors at the event have apparently contacted her looking to meet at the conference. I’m guessing they’re a bit surprised when a leading tech journalist tells them she isn’t attending the event and won’t be able to meet with them.

    Also, eyeballing the “media list” I’ve found:

    • Duplicate entries (suggesting the list was created from multiple sources)
    • Organisations listed that might not be media organisations but are possibly service providers interfacing with media (new media/old media)… so VENDORS.

    The categorisation of organisations is hair-splitting on my part, but the duplicate entries on a list that was being circulated to sponsors and exhibitors are indicative of a lazy and careless approach to managing data.

    How many of the people on the list are actually attending? And if you are counting the number of people attending from an organisation, are you allowing for duplicate and triplicate entries? If you are a marketing manager ringing all these media people, only to be told that they are either not attending or not actually covering the tech aspects of the event but are (heaven forfend) actually exhibiting at it themselves, how much will you trust this list next year? Will you be happy to pay for it?
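    Duplicate entries from merged sources are among the cheapest data quality problems to fix. A toy sketch of the kind of basic de-duplication the list evidently never got (all names, emails, and organisations below are invented for illustration):

```python
# Toy sketch: naive de-duplication of a contact list merged from
# multiple sources. All names and emails are invented for illustration.
contacts = [
    {"name": "A. Journalist", "email": "a.journalist@example.com", "org": "Tech Daily"},
    {"name": "A Journalist",  "email": "A.Journalist@Example.com", "org": "Tech Daily"},
    {"name": "B. Writer",     "email": "b.writer@example.com",     "org": "Old Media Ltd"},
]

def dedupe(rows):
    """Collapse rows that share a normalised email address, keeping the first."""
    seen = {}
    for row in rows:
        key = row["email"].strip().lower()  # normalise case and whitespace
        seen.setdefault(key, row)
    return list(seen.values())

clean = dedupe(contacts)
print(len(contacts), "->", len(clean))  # 3 -> 2
```

    Matching on a normalised email is the crudest possible key; a real clean-up would also fuzzy-match names and organisations. But even this one-liner-grade check would have caught the duplicates visible on the circulated list.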

    Never mind the quality, look at the tech!!

    Brand

    And this is where we come to the brand aspect of all of this. The Web Summit has made basic mistakes in Data Protection compliance even when presented with advice and guidance from the DPC. With regard to their Presdo social networking application, there are examples of it being used in data-protection-compliant ways (Karlin cites the le Web conference, which used the same application but presented people with a code they could use to confirm their consent to their personal data being accessed and shared).

    But Dublin knows better. Dublin is the go-getter innovator. Rules schmules, Indians Schmindians.

    Which is a mantra that has disturbing echoes in the recent history of the European economy. So it is a mantra we should, as thought leaders and innovators, be trying to distance ourselves from as much as possible, by showing how we can design privacy into everything we do in web and data and pushing the innovation envelope in ensuring balance.

    But here’s my fear. EI and the Government don’t get this. I am not aware of ANY EI incubator programme [Brian Honan informs me that Blanchardstown and Dundalk IT have had him in to talk to programmes] that provides training or briefings on Data Protection (Wayra does. I recently provided some content to help).

    My company has submitted proposals to various government-backed training programmes for on-line business, and I have received letters back telling me that Data Protection is not relevant.

    Everyone seems happy to touch the hem of the prophets of the Web and drink hungrily from the Kool Aid, repeating the mantra “Rules Schmules, Indians Schmindians”. But it is worth remembering the origins of the phrase “Drinking the Kool Aid” (hint: it didn’t work out well for the first group to do it).

    The Data Protection world globally is in a state of rapid evolution. Those who ignore the help and advice of Regulators invite penalties and brand damage. It is time that the thought leaders of our web economy stepped back and actually thought about how they develop their brand and build trust in the personal data economy.

    Koolaid from the Floor [an update]

    I made the mistake of watching twitter streams from the Dublin Web Summit. The KoolAid was gushing. Lots of great ideas and interesting innovation, but not a single person seemed to be addressing the elephant in the room that is Data Protection and Privacy.

    Yes, Social Engagement is important. Yes, it is important to build trust and engagement with your brand. But as W. Edwards Deming famously said:

    You can’t inspect quality into a product, it’s there from the beginning.

    In other words, if you don’t start off by respecting your customers and their privacy rights, you will leave a bad taste in your customers’ mouths and sour your brand.

    That’s the weedkiller in your web branding koolaid. Drink with care.

  • Daisy (chain) cutters needed

    Brian Honan (@brianhonan on twitter) has been keeping me (and the omniverse) updated via Twitter about the trials and tribulations of Wired.com columnist Matt Honan who was the subject of a Social Engineering attack on his Amazon, Apple, Gmail, and ultimately twitter accounts which resulted in every photograph he had of his young daughter being deleted, along with a whole host of other problems.

    Matt writes about his experience in Wired.com today.

    Apart from the salutary lesson about Cloud-based back-up services (putting your eggs in their basket leaves you at the mercy of their ability to recover your data if something goes wrong), Matt’s story also raises some key points about Information Quality and Data Governance and the need to consider Privacy as a Quality Characteristic of data.

    Part of the success of the attack on Matt’s accounts hinged on the use of his Credit Card number for identity verification:

    …the very four digits that Amazon considers unimportant enough to display in the clear on the web are precisely the same ones that Apple considers secure enough to perform identity verification. The disconnect exposes flaws in data management policies endemic to the entire technology industry, and points to a looming nightmare as we enter the era of cloud computing and connected devices.

    So Amazon views the last four digits as useful to the customer (quality), letting them identify the different cards on their account, and therefore displays them in the clear. But Apple considers that same short string of data sufficient to validate a person’s identity.

    This is a good example of what I call “Purpose Shift” in Information Use. Amazon uses the credit card for processing payments, and needs to provide information to customers to help them select the right card. However, in Apple-land, the same string of data (the credit card number) is used both as a means of payment (for iTunes, iCloud etc.) and for verifying your identity when you ring Apple Customer Support.

    This shift in purpose changes the sensitivity of the data, undermining either:

    • The quality of its display in Amazon (it creates a security risk for other purposes), or
    • The reliability of Apple using it as an identifier (there is no guarantee it has not been swiped, cloned, stolen, or socially engineered from Amazon).
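    To make the “Purpose Shift” concrete, here is a minimal, hypothetical sketch (the function names and card number are invented for illustration; this is not Amazon’s or Apple’s actual code). The very string one purpose requires to be visible is the same string the other purpose requires to be secret:

```python
# Hypothetical illustration of "Purpose Shift": the same four digits
# used for display (harmless) and for identity verification (a secret).

def mask_card_for_display(card_number: str) -> str:
    """Display use (Amazon-style): last four digits shown in the clear
    so a customer can tell their cards apart."""
    return "**** **** **** " + card_number[-4:]

def verify_identity(claimed_last_four: str, card_number: str) -> bool:
    """Verification use (Apple-style, as reported): the same four digits
    treated as if they were a shared secret."""
    return claimed_last_four == card_number[-4:]

card = "4111111111111111"  # standard test card number, not a real one

# What one service displays in the clear...
shown = mask_card_for_display(card)   # '**** **** **** 1111'

# ...is exactly what the other accepts as proof of identity.
attacker_sees = shown[-4:]
print(verify_identity(attacker_sees, card))  # True
```

    Neither function is wrong in isolation; the flaw only appears when both purposes coexist, which is exactly why purpose shift is a governance problem rather than a coding bug.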

    Of course, the same is true of the age old “Security Questions”, which a colleague of mine increasingly calls INsecurity questions.

    • Where were you born?
    • What was your first pet’s name?
    • Who was your favourite teacher?
    • What is your favourite book?
    • What is your favourite sport?
    • Last four digits of your contact phone number?

    In the past there would have been a reasonable degree of effort required to gather this kind of information about a person. But with the advent of social media it becomes easier to develop profiles of people and gather key facts about them from their interactions on Facebook, Twitter, etc. The very facts that were “secure” because only the person or their close friends would know it (reducing the risk of unauthorised disclosure) are now widely broadcast – often to the same audience, but increasingly in a manner less like quiet whispers in confidence and more like shouting across a crowded room.
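    The weakness can be put in rough numbers. A back-of-the-envelope sketch (the answer-space sizes below are illustrative assumptions, not measured figures) shows how little “secrecy” these questions ever carried, even before social media narrowed the field further:

```python
import math

# Illustrative (assumed) sizes of the plausible answer space for common
# "security" questions -- before an attacker narrows them down further
# using your social media posts.
answer_space = {
    "favourite sport": 30,
    "first pet's name": 1_000,
    "favourite book": 5_000,
    "last four digits of phone number": 10_000,
}

for question, n in answer_space.items():
    print(f"{question}: ~{math.log2(n):.1f} bits of 'secrecy'")

# Compare with even a weak random password:
print(f"random 8-char lowercase password: ~{8 * math.log2(26):.1f} bits")
```

    Even on these generous assumptions, the strongest “security” answer carries roughly a third of the entropy of a weak random password, and every public tweet or Facebook post shrinks it further.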

    [update: Brian Honan has a great presentation where he shows how (with permission) he managed to steal someone’s identity. The same sources he went to would provide the data to answer or guess “security” questions even if you didn’t want to steal the identity. http://www.slideshare.net/brianhonan/knowing-me-knowing-you]

    The use and nature of the data has changed (which Tom Redman highlights in Data Driven as one of the Special Characteristics of Information as an Asset). Therefore the quality of that data for the purpose of being secure is not what it once may have been. Social media and social networking have enabled us to connect with friends and acquaintances and random cat photographers in new and compelling ways, but we risk people putting the pieces of our identity together like Verbal Kint creating the myth of Keyser Söze in The Usual Suspects.

    Building Keyser Söze

    Big Data is the current hype cycle in data management because the volumes of data we have available to process are getting bigger, faster, and more varied. And it is touted as being a potential panacea for all things. Add to that the fact that most of the tools are Open Source and it sounds like a silver bullet. But it is worth remembering that it is not just “the good guys” who take advantage of “Big Data”. The Bad Guys also have access to the same tools and (whether by fair means or foul) often have access to the same data. So while they might not be able to get the exact answer to your “favourite book”, they might be able to place you in a statistical population that likes “1984” by George Orwell and make a guess.

    Yes, it appears that some processes may not have been followed correctly by Apple staff (according to Apple), but ‘defence in depth’ thinking applied to security checks would help provide controls and mitigation against process ‘variation’. In my entire time working with Call Centre staff (as an agent, Team Leader, Trainer, and ultimately as an Information Quality consultant), no staff member wanted to do a bad job… but they did want to do the quickest job (call centre metrics) or the ‘best job they thought they should be doing’ (poorly defined processes/poor training).

    Ultimately the nature of key data we use to describe ourselves is changing as services and platforms evolve, which means that, from a Privacy and Security perspective, the quality of that information and associated processes may no longer be “fit for purpose”.

    As Matt Honan says in his Wired.com article:

    I bought into the Apple account system originally to buy songs at 99 cents a pop, and over the years that same ID has evolved into a single point of entry that controls my phones, tablets, computers and data-driven life. With this AppleID, someone can make thousands of dollars of purchases in an instant, or do damage at a cost that you can’t put a price on.

    And that can result in poor-quality outcomes for customers and, in Matt’s case, the loss of the record of a year of his child’s life (which, as a father myself, I would count as possibly the lowest-quality outcome of all).