Category: The Business of Information

  • Calling The Tweet Police

[updated 2012-12-27@17:11 to reflect comments from TJ McIntyre] [edited introductory paragraphs at 20:34 2012-12-27 reflecting feedback from Aoife below, fair comment made and responded to] [Note: This has been posted today because RTE are doing a thing about “social media regulation” which means that levers are being pulled that need to be red flagged]

I drafted this post on Christmas Eve morning 2012. The original post had the introduction below. One person (out of the 600+ who have read this post by now, a few hours after I posted it) felt that the opening was too hyperbolic. Perhaps it was, so I decided to tweak it. I did hope I wouldn’t have to publish the piece I’d drafted. But the fact that the opening item on the 6pm news on the 27th of December 2012 was a piece about the Chairman of the Dáil communications committee announcing that the committee would meet in the New Year to discuss regulating ‘Social Media’ meant that my misgivings about the approach of the Irish political classes to the use of Social Media were not entirely misplaced.

I’m writing this on Christmas Eve morning 2012. I dearly hope I never have to publish it. If I do it will be because the Government I helped elect will have abandoned any pretence of being a constitutional democracy and will have instead revealed its true insular, isolated, clientelist nature in a manner that will disgust and appal people. And this will be all the more disturbing as the Government will have used real personal tragedies to justify this abandonment of principles. But I am not hopeful. If this post sees the light of day something will have gone horribly wrong with the Irish Body Politick.

That the content of the media coverage today echoed the expectation I set out in the paragraphs below for the rationale of any review of regulation (“cyber bullying” and other misuses/abuses of social media) suggests that, perhaps, this post might contribute a useful counterpoint to a perspective that appears to dominate the mainstream.

    The Issue

I fully expect that within the early weeks of 2013 the Irish Government will propose regulations requiring that users of social media tweet or blog in an identifiable way. No more anonymous tweets, no more anonymous blogs. The stated reason will be to “combat cyber bullying”. Sean Sherlock TD is quoted in today’s Irish Times (2012/12/24) calling for action on anonymous posting. This is ominous. Others quoted in that article are calling for “support systems” to help TDs deal with the “venom” being targeted at them via social media. While the support systems suggested are to be welcomed, the categorisation of expressions of opinion by citizens as “venom” is, at best, unhelpful and, at worst, disingenuous. What seems to be in the pipeline is almost inevitably some form of requirement that people verify their identity in some way in blog posts or tweets. Remove the veil of anonymity, the reasoning will go, and this venom will go away. The “keyboard warriors” will put their weapons beyond use and step in line with the process of government and being governed. The fact that politicians are lumping Facebook in with these other platforms illustrates the tenuous grasp many have on the facts – Facebook already operates a “real identity” policy, which raises problems about what your real identity is and has been flagged as potentially in breach of EU law by at least one German Data Protection Authority.

    Why this is a bad idea

    In Orwell’s 1984 a shadowy figure of the State ultimately breaks the protagonist Smith, requiring him to give up on love and private intimacy and resubmit to a surveillance culture in which the Thought Police monitor the populace and the media tells everyone it is necessary to protect against the “enemy”. That shadowy figure is called O’Brien. My passion for data privacy is a reaction to my namesake, and from that perspective I can see three reasons why this is A VERY BAD IDEA.

Bad Idea Reason #1 – What is Identity?

Requiring people to post comments, write blogs, or tweet under their own identity creates a clear and public link between the public persona and the private individual. The supporters of any such proposal will argue that this is a deterrent to people making harsh or abusive comments. However, in a fair society that respects fundamental rights, it is important to think through who else might be impacted by a “real names” policy. There are quite a number of examples of this, the most famous recent one being Salman Rushdie having his Facebook account suspended because Facebook didn’t think he was who he said he was. Identity is a complex and multifaceted thing. We all, to borrow a phrase from T.S. Eliot, “prepare a face to meet the faces that we meet”. The GeekFeminism Wiki has an excellent list of scenarios where your “real name” might not be the name you are really known by. In Ireland, people who would be affected by a “real names” policy in social comment would include:

    • Public servants who cannot comment publicly on government policy but may be affected by it
    • Survivors of abuse
    • People with mental health concerns or problems
    • Whistleblowers
    • Celebrities.

A real names policy would mean that every time Bono tweets or blogs about Ireland, Irishness, or Irish Government policies he would have to do so under the name Paul David Hewson. And who the heck would be interested in an opinion expressed by Paul Crossan about epilepsy?

    Bad Idea Reason #2 – How will it work exactly?

It is one thing to say that you want people to post comments using their identity, but it is another thing entirely to get a system in place that actually works. Identity is a “flexible” thing, as outlined above. Facebook require evidence of your identity in the form of personal ID (passport/driver’s licence). They have the resources to process that data securely. But they still get it wrong (see the Salman Rushdie example cited above). If verifiable identities are required for comment, then how exactly would a small personal blog that is used to exercise my mental muscles outside of my work persona (domestic use) be expected to handle the overhead of verifying the identity of commenters? Would I be expected to get people to register with the blog and provide evidence of ID? Would I be able to get a grant to help implement secure processes to obtain and process copies of passports and driver’s licences? Or will the State just require that I shut up shop? Would the State indemnify me if this blog was compromised and data held on it about the identity of others was stolen? Every few years we used to hear similar calls about the registration of mobile phones. The argument in favour of registration usually goes: “If they have to register, bad people won’t use these phones”. That argument is bunkum. I’ve written about it at length here, but the short form:

    1. If people have to register and provide ID for verification, they will use fake ID (as is happening in China with their mobile phone registration requirement)
2. If the law requires registration, it is unlikely that this would bother criminals: by definition they find the law an inconvenience rather than a barrier.
3. If people are required to register without some form of identity verification then you’ll wind up with Mr D. Duck of The Pond owning a lot of phones. A pseudonym, so no more identifiable than a picture of an egg.

Apply this to a proposal for a “real names” policy for tweets, blogs, comments, and other social media discourse and we wind up with a situation where achieving the objective that the proposers of non-anonymised comment seem to be seeking would place a disproportionate burden on those of us who engage in debate on-line. Even then it would not be foolproof. And a non-verified identity is nothing more than another pseudonym. I could, for example, use the name of another person when “registering” to comment. Or a fictional duck. It is worth noting that South Korea is abandoning its “Real Names” policy for social media for a variety of reasons.

Bad Idea Reason #3 – The logical principle must be technology neutral

Blogging, tweeting, social media… these are all technologies for self-expression and social interaction that barely existed five years ago and were unheard of in the mainstream a decade ago. Therefore any regulation that requires identification of commenters must be framed in such a way as to anticipate new technologies or new applications of existing technology, or risk near instant obsolescence. Therefore the regulation would need to be technology neutral. Which means that, in order to avoid it being discriminatory and to ensure it has the fullest possible effect, it would need to be applicable to other forms of technology.

When debating this on Twitter with Harry McGee on the 22nd of December I asked him if he saw a difference between Twitter and a malicious phone call or an anonymous pamphlet. His response was that they were, in his opinion, the same. So, if tweets are the same as anonymous pamphlets, the logical extension of needing to be able to identify the tweeter is a need to be able to identify the pamphleteer. The State would want to be able to identify the author of a published thought. We have seen this before. In fact, the seeing of it before is one of the reasons that the EU has a right to personal Data Privacy (introduced in the Lisbon Treaty) and why the strictest interpretations of Data Protection laws in Europe tend to be in Germany and former Soviet bloc countries. Have we managed to forget that, within the lifetime of people now in their mid-thirties, governments in Eastern Europe required people to register their typewriters with the State so the State could identify the writers of letters, plays, pamphlets and other communications? As Mikko Hypponen of F-Secure (one of the world’s leading experts on information security) says in one of his many presentations:

    In the 1980s in the communist Eastern Germany, if you owned a typewriter, you had to register it with the government. You had to register a sample sheet of text out of the typewriter. And this was done so the government could track where text was coming from. If they found a paper which had the wrong kind of thought, they could track down who created that thought. And we in the West couldn’t understand how anybody could do this, how much this would restrict freedom of speech. We would never do that in our own countries. But today in 2011, if you go and buy a color laser printer from any major laser printer manufacturer and print a page, that page will end up having slight yellow dots printed on every single page in a pattern which makes the page unique to you and to your printer. This is happening to us today. And nobody seems to be making a fuss about it. And this is an example of the ways that our own governments are using technology against us, the citizens.

So, if we can uniquely identify the typewriter or the printer, shouldn’t we take the logical step and have the owner register it, just like in communist East Germany in the 1980s? So that when a pamphlet or letter is sent that has the wrong kind of thought, the relevant authorities can take action and immediately stop that kind of thing. But sure, we’d never do that in our own country. We’d just ask everyone to register their identity before blogging or tweeting. Totally different. The Government would never propose the creation of a register of printer owners. Would they? {update: here’s an article from EFF.org outlining their take (from the US) on why “real name” policies and regulation are a bad idea}

    Use the laws we have, don’t create crazy new ones

But something must be done!! This is an intolerable thing, this “cyberbullying”. And indeed it is. But let’s not get hung up on the label. It is not “cyberbullying”. That would be bullying by a fictional race (the Cybermen) from the TV show Doctor Who.

What this is, is inappropriate and/or malicious use of communications networks and technologies. It is no different from a smear poster campaign, a co-ordinated letter writing campaign, or a malicious calling campaign. And there are already laws a-plenty to combat this in a manner that is proportionate with the curtailment of freedoms of speech and rights to privacy. Bluntly: if your conduct on-line amounts to a criminal act or defamation, it is almost inevitable that your illusion of privacy will evaporate once the blow-torch of appropriate and existing laws is applied.

The power to pierce privacy in this case comes from the pursuit of a criminal investigation of what are deemed under the Communications (Retention of Data) Act 2011 to be serious offences. Any social media provider will provide information about users where a serious offence is being investigated. It’s in their terms and conditions (see Twitter’s here – Section 8). This would allow the identification of the IP address used at a given date and time for transmitting a message via Twitter, and could be used to compel a telecommunications provider to provide the name of the account holder and/or the location of the device at that time and at present. But it is done under a clear system of checks and balances. And it would be focussed just on the people who had done a bold thing that was complained about, not placing a burden on society as a whole just in case someone might do something naughty. I would ask the Government to use the laws we already have. Update them. Join them up. Standardise and future-proof their application. But do so in a technology neutral way that isn’t swiping at flies while ignoring larger concerns. And please don’t mandate non-anonymised comment – it simply doesn’t work.
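To make the mechanics concrete, here is a minimal sketch of the two-step lookup described above. Everything in it (the log entries, the account reference, the function itself) is invented for illustration; in reality this happens through formal requests under the 2011 Act and the providers’ disclosure processes, not through code anyone outside those organisations runs.

```python
from datetime import datetime

# Invented stand-ins for the records a social media provider and a telco might hold.
provider_log = {
    "tweet:12345": {"ip": "89.101.7.42", "sent_at": datetime(2012, 12, 27, 18, 5)},
}
telco_ip_assignments = [
    {"ip": "89.101.7.42",
     "assigned_from": datetime(2012, 12, 27, 17, 0),
     "assigned_to": datetime(2012, 12, 27, 19, 0),
     "subscriber": "Account A-991 (name and address held by the telco)"},
]

def trace_message(message_id):
    """Step 1: provider discloses IP + timestamp; Step 2: telco maps that pair to an account."""
    record = provider_log.get(message_id)
    if record is None:
        return None
    for assignment in telco_ip_assignments:
        if (assignment["ip"] == record["ip"]
                and assignment["assigned_from"] <= record["sent_at"] <= assignment["assigned_to"]):
            return assignment["subscriber"]
    return None

print(trace_message("tweet:12345"))  # -> "Account A-991 (name and address held by the telco)"
```

The point of the sketch is simply that each step starts from a specific message and a specific point in time: it is targeted at an identified complaint, not at the population as a whole.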

    The Risk

When proposing any course of action it is advisable to prepare for the unintended consequence. With this chatter of requiring comment to be identifiable comes the risk that, should it happen, the social media data of Irish citizens will become either more valuable (because marketers will be able to mine the “big data” more efficiently) or less valuable (because we switch off and there is less data to meaningfully mine). There is also the risk that our Government will, yet again, send a signal to the world that it just doesn’t understand On-Line, for all its bleating about a “Knowledge Economy”. And at that point we may become less attractive to the foreign new media firms who are setting up base here, like Twitter, LinkedIn, and Facebook.

    Conclusion

    Requiring identifiable comment is a dumb move and a silly non-solution to a non-problem. The problem is not anonymity. The problem is actually how we evolve our laws and culture to embrace new communication channels. We have always had anonymous comment or pseudonymous dispute. Satire thrives on it, art embraces it, and literature often lives through it. Just because every genius, wit, and idiot now has a printing press with a global reach does not mean we need to lock down the printing presses. It didn’t work in Stasi East Germany or other Soviet Bloc dictatorships. Other solutions, such as working the laws we already have, are preferable and are more likely to work. Educating users of social media that there are still social standards of acceptable behaviour is also a key part of the solution.

    Tagging the typewriters is NEVER the answer in a democracy. This O Brien stands firmly against this particular Thought Crime.

  • Europe v Facebook–a lesson in clarity

    I was on the news this afternoon. The radio. So the world was spared my visage. My words were quick in response to rapid fire questions about why Europe v Facebook had announced they were suing Facebook in Ireland and their comments about the Irish Data Protection Commissioner.

    To put some clarity on my comments (which I believe were reasonably balanced) I thought I’d write a short post here in my personal rant zone. Note I am not a lawyer but am renowned for my Matlock impressions.

    Europe v Facebook are suing?

    That’s nice. Who are they suing? Why?

Well, it would seem they want to sue Facebook in the Irish Courts for breaches of the Data Protection Acts. That’s nice. Section 7 of the Data Protection Acts allows for the Data Subject to sue for specific breaches of the Acts – the Duty of Care is contained in Section 7 and the Standard of Care is effectively Section 2 (and the level of specificity with which Accuracy as a test was defined in the recent Dublin Bus v DPC case would suggest that a strict interpretation would be applied by the Courts as to what the standard would be).

    But that is not Europe v Facebook suing. That’s a single punter. Or a series of single punters. Individually. Because we (as Europe v Facebook acknowledge) don’t have Class Actions here in Ireland. So each person rolls the dice and takes their chances in an area of law with little jurisprudence or precedent behind it in Ireland. Oh. And it would likely be a case taken at Circuit Court level unless the individuals wanted to risk large costs if they lost.

    Of course, Europe v Facebook could take a case against the State to the ECJ on the basis that the State hasn’t properly implemented the Directive. But as we basically photocopied it in a hurry that might be a long shot. The ECJ tends not to get directly involved in telling Member States how to spend money, particularly when the rest of the EU machinery is trying to get us to spend less money. But it is an option.

    Europe v Facebook itself can’t sue under Section 7. No duty of care is owed under the Data Protection Acts to a body corporate.

    What it could do is appeal a decision taken by the Data Protection Commissioner on foot of one of the 22 complaints the organisation has submitted. But apparently Europe v Facebook won’t state clearly what the specific complaint is so that a decision can be taken or what specific complaints they require decisions to be taken on, ergo there can be no decision from the DPC and ergo there is nothing to appeal against.

    But suing under Section 7 is entirely separate to any DPC investigation (just as suing someone for personal injuries arising from an assault is separate to a criminal investigation of assault). Just as the DPC Audit is a separate process from any investigation of a complaint.

    Why the focus on Ireland and the Irish DPC?

Well, Facebook have decided, for a variety of reasons, to set up shop in Ireland. (Europe v Facebook seem obsessed with tax breaks but there are other reasons multinationals come to Ireland. The scenery. The nice people. The multilingual skill sets. The cluster effect of other companies.)

In setting up Facebook Ireland Ltd, Facebook also decided that, for any Facebook user outside of the US and Canada, Ireland would be the country whose legislative and enforcement framework they would comply with.

    So the Irish DPC became responsible for policing the activities of Facebook globally.

    Hence Europe v Facebook are dealing with them.

    Dealing with the DPC

    Europe v Facebook are making some odd demands. They want the evidence from the investigation of their complaints before they will decide to proceed with their complaints. Nuts.

That’s like asking the gardaí for the Book of Evidence before deciding if you will press charges against a thief. Let’s ignore the fact that the ‘evidence’ might contain personal data of other individuals or may include commercially sensitive information or other confidential information. If Europe v Facebook believe they have valid complaints they should specify which ones they want to move to a decision on and then take the process on.

Personally and commercially I have found the DPC to be both a pleasure and a frustration to engage with. But the process is straightforward. Pissing around like a spoiled teenager is, frankly, in my opinion, just a waste of the limited time and resources of the DPC.

    Europe v Facebook have highlighted that they have the support of German Data Protection Authorities. For balance it is worth pointing out that they have the public support of one of FIFTEEN German Data Protection Authorities, not counting the Federal Data Protection Authority for Germany.

    It’s a bit like having the backing of Carlow County Council on a matter of Foreign Affairs policy. Great to have it but not conclusive until the Feds (who represent Germany at the A29 Working Group) back the position. Yes it is important and needs to be noted and considered, but it is not in and of itself decisive.

    Time and Resources

The audit of Facebook and subsequent reviews have taken up over 25% of the resources of the Office of the DPC. External technical support was provided pro bono by a UCD campus company. Europe v Facebook’s press release says they couldn’t find the company. They didn’t look very hard. All the details about the company and the qualifications of the person doing the work were in the first Audit Report.

Europe v Facebook does have a point though: the DPC has no “legally qualified” people. Now, that’s an interesting phrase. Do they mean a qualified solicitor or barrister entered on the Roll of the relevant professional society here, or do they mean someone with a legal qualification (such as a BBLS degree) who has not gone on to qualify? Frankly, if it is the latter I’m quids in… I’ve a legal qualification and I’m an internationally recognised expert on Data Governance practices.

They point out that the DPC is faced with armies of lawyers when dealing with companies. No shit. A policeman. Having to deal with lawyers. Who’d a thought it? The implication is that they are outclassed in the legal skillz department. And guess what… they are. And they will be forever. For the simple reason that the salary scale of a civil servant wouldn’t match that of the hired guns on retainer. The smarter people go where the money is. Just as the Attorney General and the DPP and Revenue and other high-skill arms of Government lose skilled resources to the private sector, so too would the DPC. I would be surprised if they haven’t already lost members of staff to law firms.

And frankly the focus on a tick-box skill set is narrow-minded in my view. Hiring people who understand how businesses use data, the kinds of technology that are out there, and the actual best practices in Governance is equally if not more important to driving compliance.

    The Upshot

    Max Schrems, the law student behind Europe v Facebook, will likely sue Facebook in Ireland. Likely at the Circuit Court level. The DPC will likely be called to give evidence, and they will submit the Audit Report. Facebook will probably be asked in discovery to provide information about their communications with the DPC.

    Europe v Facebook will do diddly squat, given they have no standing in the case. They might float a case up to the European Court re the effectiveness of the implementation of the Directive and the adequacy of resourcing and skills of the DPC. But the Directive is largely silent on those questions (as is the Regulation). Beyond that they can and will do nothing until they piss or get off the pot and tell the DPC what complaints they want decisions on. Then they are free to appeal the decisions.

The real upshot is that this kerfuffle and the commentary surrounding it should focus attention on the resourcing, training, skills, qualifications, and competence of the Data Protection Commissioner’s office. They are diligent, hard-working servants of the public who could probably benefit from upskilling in a variety of areas, either through hiring or training. They could also do with more resources, but the focus needs to be on brains not bodies.

    The continuing failure of the Courts to properly apply the criminal sanctions in the Acts should also be looked at. Having cases struck out as it is a “first offence” is feck all use when the DPC engagement model is to only prosecute after a second or third occurrence of an offence. I would consider the need for written judgements in DP cases to be important. I would also consider the need for a published archive of Enforcement notices and penalties, similar to the publications from the ICO in the UK, to be a useful step forward.

    I wish Europe v Facebook luck in their endeavours. A binding precedent on Data Protection compliance would be nice. But they would do well to remember that the Audit and the investigation of their complaints are two different processes and they need to engage with their process to bring the investigation leg to a close.

Only by specifying the complaints they require a decision on can Europe v Facebook bring the complaints investigation to a conclusion, either through findings they agree with or an appeal that is upheld.

The potential for legal action by a Data Subject under Section 7 is interesting and has already led to a number of key cases moving their way through the Irish Courts system at the moment. It would be a valuable contribution to Data Protection law here and elsewhere in Europe. But I can’t help but feel that the better approach would have been to engage positively with the Irish DPC and work towards clarity rather than calling the independence of the DPC into question and being confrontational.

    But maybe we are all just pixie heads.

  • Why (with due respect) Ian Elliott is mistaken

    Ian Elliott is the chairman of the National Board for Safeguarding Children in the Catholic Church. It is an agency of the Catholic Church in Ireland and is not a State agency. It is tasked with ensuring that the Catholic Church in Ireland follows and implements its own child protection guidelines, particularly with reference to allegations of clerical sexual abuse of children.

    It is a difficult job. It is an important job. And it is a function and role that we should be thankful someone is filling.

However Mr Elliott seems to be operating under the misapprehension that the Data Protection Acts are an impediment to the NBSCCC doing its job effectively. This is not the first time that this fig leaf has been trundled out. Similar issues raised their heads in 2011 when Bishops refused to cooperate with Mr Elliott on spurious Data Protection grounds that were dismissed by the Data Protection Commissioner. Given that the NBSCCC is in effect an agency of the Church, it was a bit odd seeing the middle management of the Church trying to wheedle out of cooperating with it.

In the present complaint about the Data Protection Acts, Mr Elliott cites the example that the Gardaí are not able to pass information to his organisation unless there is a risk of “imminent harm” to a child, which causes problems for the processes of safeguarding children. I believe Mr Elliott to be mistaken in his analysis of where the problem lies. Let’s look at this.

    An allegation that someone has committed a criminal offence is sensitive personal data. Information about an identified person contained in such an allegation is personal data. Therefore it can only be disclosed either with the consent of the Data Subject (and in this case the Data Subject is the individual about whom the allegation has been made) or where another exemption under Section 8 of the Data Protection Acts can be identified. The relevant condition that seems to be in dispute here is Section 8(d) which requires that the disclosure is

    Required urgently to prevent injury or other damage to the health of a person or serious loss of or damage to property.

In effect he is stating that the Gardaí (or possibly the Attorney General, who would likely have advised the Gardaí) are taking the view that where there is no imminent harm there is no lawful ground for onward disclosure. That does not mean that the Gardaí are not retaining the data and processing it themselves. Such processing, however, would fall under the protection of Section 62 of the Garda Siochana Act 2005, which places certain restrictions on the disclosure of data by members of An Garda Siochana, particularly related to investigations or other operational information. Breaches of this section of the Garda Siochana Act carry potentially significant penalties (and, as they result in a criminal conviction, could be at best career-limiting for members of the force).

As the NBSCCC is not a State body that investigates criminal offences, Section 8(b) does not apply to them. As child safety in the Church is not a matter of National Security, Section 8(a) does not apply. As there is no legal advice being sought (the Gardaí are not asking the NBSCCC for a legal opinion) and there are no legal proceedings, Section 8(f) doesn’t apply. And given that the subject of an allegation is unlikely to have consented to their data being disclosed, the Consent exemption cannot be relied on.

Which leaves us with Section 8(e). Section 8(e) is what I believe Mr Elliott was actually alluding to (but I may be mistaken). Section 8(e) allows for the disclosure of information where it is

    Required by or under any enactment or by rule of law or order of a court

    So the Data Protection Acts contain a provision which would enable the sharing of data by the Gardai with the NBSCCC in any or all circumstances Mr Elliott might wish. He just needs legislation to allow it. This could be either primary legislation or a Statutory Instrument. Primary legislation would have the added benefit of giving some scope to making the role of the NBSCCC more formal. Any form of legislation would potentially provide a framework for properly balanced sharing of information from other State Agencies.

The legislation would, of course, have to include some outline of the protocols, security controls, and limitations on processing that would be applied to the data, but that is simply good practice.

But (and here is the important bit) the Data Protection Acts would not need to be touched. The Junior Minister with responsibility for Children would simply need to legislate for some thought-through Child Protection rules that would enable balanced and appropriate sharing of information.

The risk in touching the Data Protection Acts is that you could create a situation where the Risk Committee of any employer could potentially seek disclosure from the Gardaí of reports of specific criminal offences or reports of possible offences committed by current or prospective employees (unless you write the NBSCCC specifically into the Acts themselves, which are derived from an EU Directive that is about to be replaced with a Regulation so… eh… not really possible). That is a dangerously broad and clumsy tool to apply. The law of unintended consequences is still on the metaphorical statute books after all.

Mr Elliott, I politely submit that your analysis – or perhaps the media’s one-sided interpretation and reporting of it – is flawed. Leave the Data Protection Acts alone. Government – legislate for a clear exemption under Section 8(e) and solve the problem the right way.

Of course, if the Data Protection Acts are going to be opened up, the logical thing to do – given the impending Data Protection Regulation – would be to legislate on the basis of the principles in the Regulation. Beating the rush, so to speak, and definitely putting a stamp on the Irish EU Presidency. And I’ve a shopping list of other things…

  • The Anti-Choice Robodialler–some thoughts

    The Intro

    Robodialling, autodialling, power dialling. Call it what you will. It is the use of computers and computer telephony integration to save the tired fingers of call centre workers and turn the job into a battery farm of talk… pause.. talk.

    I know. I’ve worked with them. Heck, I designed the backend data management and reporting processes for one of the first big installations of one in Ireland back in the late 1990s. It was fun.

    I also learned a lot about how they work and some of the technical limitations and capabilities of them. Such as the lag that can happen when there is no agent available to take a call so the person dialled hears noise and static. Or the fact that you can trigger the dump of a recorded message either as a broadcast or based on the machine’s interpretation of whether it’s hit an answering machine or not (at least on the snazzy RoboDial9000 we were putting in).

    And I also remember the grizzled CRM and Direct Marketing consultant who was helping advise on best practice for using it telling the management team:

    “Don’t. For the love of all that is sacred don’t. Doing that shit just gets our industry a really bad name because it freaks people out.”

    Today – Fallout and penalties

Today I’m trying to re-engage my brain after a night on Twitter helping to advise people on how to register their complaints about the use of a robodialler to push anti-choice messages to unsuspecting households. The DPC is now getting up to 3 complaints every 5 minutes on this.

    Each complaint could carry a €5000 penalty on summary conviction. That is the tricky bit as this requires evidence gathering etc. This could take time. But the DPC has time available to them to conduct investigations and bring prosecutions. And if it is a case that this is an individual acting on their own behalf, the DPC has the powers to enter domestic premises to conduct searches and can levy a significant personal penalty of up to €50,000.

    Oh.. and if the dialler is in the UK the maximum penalty per offence is £500k and the DPC and ICO do talk to each other. A lot. They’re co-hosting an event in Newry at the end of the month.

    The unintended consequences

    My thoughts now turn to the unexpected consequences this robodialling will have.

    1. All future market research or polling that may be done on this topic by phone is borked and broken. People will be suspicious, even when the nice man from the polling agency ticks all the boxes and explains who they are etc.
    2. There will be a wave of “false positive” complaints to the DPC arising from any phone polling on this topic (for the reason outlined above). This will tax the resources of the DPC, and will tax the resources of market research and polling organisations as they work to deal with complaints and investigations etc.

The impact of this on debate is that the published results of any polling will be distorted and will be potentially unreliable as barometers of public opinion. Face-to-face fieldwork results will likely be less tainted by the robodialler experience but will be a LOT more expensive and time consuming for media and other organisations to run. So there may be fewer of them.

    The dialler incident will tie up resources in the ODPC that would otherwise be spent dealing with the wide range of complaints they get every day, driving investigations, conducting audits, and managing the large number of existing open cases they are working through.

22 staff. In total. 25% of their staff regularly tied up dealing with Facebook alone. With a mandate that covers ANY non-domestic processing of personal data. (By comparison, the Financial Services Regulatory Authority has three times that number of staff at Director level alone.)

Another consequence of this is that we might get a little debate about how this is no different from the placard waving and leaflet shoving of the Anti-choice camp historically. But it is different. Disturbingly different. If I am walking on the street with my daughter and a leaflet or picture is thrust in her face, I can turn away, walk another route, or adopt some other strategy to help shield my daughter from disturbing imagery.

    Last night I read of parents whose small children or young tweenagers answered the call and listened and have been upset by the calls.

    The wrap up

I worked in a telemarketing business early in my career. Even then (nearly 2 decades ago) we were cautious about ringing people in the evenings. It is an invasion of the private family time of individuals, an abrupt interruption of what Louis Brandeis called “the right to be let alone”. No recorded messages were left. Human interaction was key to ensuring we only continued to encroach where welcomed, and requests to be removed from lists were treated respectfully. “Do Not Call in Evenings” was a call outcome code in the robodialler that prevented that number ever being called again (at least in theory, when the software worked correctly and the teams did their jobs right).

To tread on that right to be let alone, to ram a pre-recorded message into the ears of an unsuspecting and unidentified audience, betrays an arrogance and ignorance on the part of those who thought it would be a good idea to choose to commit a criminal offence to push their message, ignoring both the law and the choices people had made with respect to their own personal data privacy (a fundamental right of all EU citizens).

    _____

If you have received a call from a robodialler with an automated message, or where the caller did not identify themselves to you, you should register a complaint with the Data Protection Commissioner.

    Investigations can be complex and it may be impossible to verify who to prosecute, but by registering the complaint you can help build the case against people who are acting illegally.

Try to find the number that called you (in your phone’s call log). Note the date and time of the call. If the number is blocked, include that fact in your complaint. While a number may be blocked from being presented to you, the phone network will still know who called you, and having the date and time you received the call will potentially enable ComReg and the Data Protection Commissioner to request data from the telecommunications companies to trace calling numbers. They may subsequently require you to give consent to accessing your phone records as part of their investigation, but only to identify the number that phoned you on that date/time from the network call logs that are generated.

  • A little bit of root cause analysis (Web Summit)

One of the issues highlighted by Karlin Lillington in her article today was the fact that people who had not opted into mailings were receiving them, and that there was inconsistency between the format and content of mailings received, with some including an option to opt out and others not.

This is symptomatic of a disparate data architecture at the backend. Which is consultant speak for “they’ve got too many buckets”.

This is a classic Information Quality problem. My friend and colleague Dr Peter Aiken identifies the root cause of this as being the training received in Computer Science courses worldwide, which primes people to solve problems by building/buying another database.

Based on a very quick analysis conducted today with help from @orlacox (one of the new “women of IT” in Ireland whom I’ve discovered thanks to #dws), the following sources and tools for email communications were identified as being in use by Dublin Web Summit.

    1. Contact Form 7 plugin on the website (which is running on WordPress). This page captures email addresses in the contact form. No information is given about uses for the data you provide on this form and there is no option to opt-in to receiving marketing messages from DWS or its associates. So… if you fill in that form they should only be responding to your question and doing NOTHING else with your name and email address. [the use of contact form 7 was confirmed by inspecting page source for the form]
2. CreateSend. On the website there is an option to provide an email address to subscribe to their mailing list. This is processed using CreateSend. I’ll return to this later for another point. [the use of CreateSend was determined by an inspection of the page source]
3. MailChimp. @OrlaCox received an email from the organiser of the WebSummit, the headers of which confirm it was sent via MailChimp. [a rough sketch of how these kinds of fingerprints can be spotted follows this list]
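For anyone curious how that identification works in practice, here is a minimal illustrative sketch. It is not the analysis that was actually run, and the fingerprint patterns are assumptions based on markers these platforms commonly leave in page source and email headers; real-world markers vary by platform and version.

```python
import re

# Assumed fingerprints for illustration only; real markers vary.
PAGE_FINGERPRINTS = {
    "Contact Form 7": re.compile(r"wpcf7|contact-form-7", re.I),
    "CreateSend": re.compile(r"createsend\.com", re.I),
}
HEADER_FINGERPRINTS = {
    "MailChimp": re.compile(r"list-manage\.com|mcsv\.net|MailChimp Mailer", re.I),
}

def detect_page_tools(html):
    """Return names of tools whose markers appear in a page's HTML source."""
    return [name for name, pattern in PAGE_FINGERPRINTS.items() if pattern.search(html)]

def detect_mail_platform(raw_headers):
    """Return likely sending platforms based on raw email headers."""
    return [name for name, pattern in HEADER_FINGERPRINTS.items() if pattern.search(raw_headers)]

if __name__ == "__main__":
    sample_html = '<form class="wpcf7-form" action="/contact">...</form>'
    sample_headers = "X-Mailer: MailChimp Mailer\nList-Unsubscribe: <http://example.list-manage.com/unsub>"
    print(detect_page_tools(sample_html))        # ['Contact Form 7']
    print(detect_mail_platform(sample_headers))  # ['MailChimp']
```

This kind of fingerprinting only tells you which tools are in play; the Data Protection questions start from there.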

    Fair Obtaining

If anyone involved in Dublin WebSummit took contact details supplied via the contact form on the website and included them in commercial promotional email marketing, that is a breach of the Data Protection Acts 1988 and 2003 and SI 336, which require that:

    • Data be processed for a specified purpose and not for a purpose incompatible with the specified purpose
    • Marketing by email requires consent.

    It is not possible in this case to argue “soft opt-in” based on terms and conditions that are associated with booking for the event. There is no commercial relationship in this context that can be relied upon as “soft opt-in” consent.

[What would I suggest as a learning: If you have a contact form, ASK PERMISSION to add people to contact lists. Otherwise you HAVE NO CONSENT]

    The Two Bucket Problem

DWS appears to have been using two bulk email platforms. The technical term I use to describe that kind of data management strategy is TBSC (Totally Bat Shit Crazy). It invites variation in process (one platform having opt-outs built in to the message, the other not), inevitably leads to inconsistencies in data (people loaded to both platforms may wind up opted out on one but not on the other), and brings all the headaches of keeping data synchronised.

    It is symptomatic of the “jump in and get it done” culture that can be brilliant… if you have thought through the things that need to be done to get it done.

Information, like every other asset in an organisation, has a well-defined Asset Life Cycle. The acronym is POSMAD. This resource by my friend Danette McGilvray (who introduced me to the idea a number of years ago) explains it in detail.

    DWS seems to have jumped into the Obtain and Store phases without doing the Plan. So they wound up with two (or more) buckets within which they had to manage data.

    (As an aside, it would appear there may be a third bucket as the media registration appears to have been backed by Google Forms).

    [What would I suggest as a learning: This is MASTER DATA. You need to have a SINGLE BUCKET so you can control what data is coming in, consistently apply suppressions, consistently manage content and format of messages, and generally only have one ‘house’ you need to perform housekeeping on. Tools like MailChimp let you set up multiple lists that people can subscribe to. Use multiple lists. Not multiple tools. That way you have a “Single View of the truth” and won’t make an arse of managing your obligations under the ePrivacy Regulations and/or the Data Protection Acts]
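To make that concrete, here is a minimal sketch of the “single bucket” idea. It assumes nothing about how DWS or any particular tool actually works, and all names and data in it are invented; the point is simply that an opt-out is recorded once against a single master record rather than having to be propagated across separate platforms.

```python
from dataclasses import dataclass, field

@dataclass
class Contact:
    email: str
    consented: bool = False                        # did they opt in to marketing at all?
    lists: set = field(default_factory=set)        # e.g. {"newsletter", "sponsor-updates"}
    suppressed: bool = False                       # one global opt-out flag

class MasterContactStore:
    """Single 'bucket': every send is filtered through the same master record."""
    def __init__(self):
        self._contacts = {}

    def subscribe(self, email, list_name, consented):
        contact = self._contacts.setdefault(email, Contact(email=email))
        contact.consented = contact.consented or consented
        contact.lists.add(list_name)

    def opt_out(self, email):
        # Recorded once; applies to every list automatically.
        if email in self._contacts:
            self._contacts[email].suppressed = True

    def recipients(self, list_name):
        return [c.email for c in self._contacts.values()
                if list_name in c.lists and c.consented and not c.suppressed]

store = MasterContactStore()
store.subscribe("alice@example.com", "newsletter", consented=True)
store.subscribe("alice@example.com", "sponsor-updates", consented=True)
store.opt_out("alice@example.com")
print(store.recipients("sponsor-updates"))   # [] -- the opt-out holds everywhere
```

Subscribe the same address into a second list in a second tool and there is no equivalent of that single suppressed flag, which is exactly where the inconsistencies described above creep in.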

    [What I would strongly advise: Apply the POSMAD framework to the sketching out of the platform you will build to execute and deliver. It will help you resist the temptation to throw tech and tools at the strategy without having a strategy. It will prevent you from implementing things that are TBSC]

    Safety in Harbor – Remembering that Mail List tools are Data Processors

    Every time you use an external mailing list service you are engaging a Data Processor. As part of that a Data Controller needs to pay attention to a number of things. Among them is the thorny issue of whether the data is leaving the EEA at any point and whether there is actually any lawful basis for allowing that to happen.

    The DPA doesn’t prevent Cross Border transfers like this. And it doesn’t make using a Cloud Service or Outsourced service illegal. It makes doing it wrong and without attention to detail something that could constitute an offence.

    Mailchimp is a reasonably good tool. One good thing about it is that it is Safe Harbor registered. This means that a Data Controller in the EU can send data to Mailchimp in the US without being in breach of S11 of the Data Protection Acts.

CreateSend.ie is a company based in Co. Clare. However, CreateSend.com is the server that the data is written to if you register for a mailing list hosted by CreateSend. That server is hosted in Charlotte, North Carolina. So, data is going to the US. There may be a “chain of processors” in place here (CreateSend Ireland, CreateSend US). Either way, data is going out of the European Economic Area. So one would expect that one of the legal grounds for cross-border transfer would need to apply.

    • CreateSend does not appear to be registered for US Safe Harbor. (It may be that their registration is under a different name)

    A scan through the terms and conditions of CreateSend.ie indicates in Section 2.7 that the data provided to CreateSend is indeed passed to servers in the United States. But then it goes a little bit squirrely:

    you warrant that you have obtained the consent of the relevant individuals to the storage and transmission of their personal information in this manner.

In other words, any organisation that uses CreateSend as their email marketing platform has to get consent from their subscribers to transfer personal data to the United States. Not having that consent means any transfer is illegal under S11 of the Data Protection Acts.

    There is no notice of or consent sought for a transfer of personal data to the US when signing up for that mailing list. I know. I’ve done it. What I got was a lovely pdf telling me the name, department, and organisation of every attendee at the conference.

    So… to get a list of everyone at the conference I don’t even have to attend the conference, I just need to sign up to a mailing list. That’s TBSC strategy yet again.

    But I digress.

[A lesson to learn: When selecting an email marketing service provider, it pays to do due diligence and make sure that you have a clear lawful basis for the processing you are proposing to do. Safe Harbor is a good thing to look for. Relying on consent is allowed, but you have to get the consent]
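As a very rough sketch of that due diligence (my simplified reading of Section 11, not legal advice; the grounds listed are illustrative rather than exhaustive, and the vendor name is made up):

```python
from dataclasses import dataclass

@dataclass
class ProcessorCheck:
    """Simplified record of a cross-border transfer check for an email service provider."""
    name: str
    transfers_outside_eea: bool
    safe_harbor_registered: bool = False     # listed on the US Safe Harbor register
    model_clauses_signed: bool = False       # EU-approved model contract clauses in place
    explicit_consent_obtained: bool = False  # subscribers told about and consented to the transfer

def has_lawful_transfer_basis(check: ProcessorCheck) -> bool:
    if not check.transfers_outside_eea:
        return True  # no cross-border transfer arises, so this question doesn't bite
    return (check.safe_harbor_registered
            or check.model_clauses_signed
            or check.explicit_consent_obtained)

# Hypothetical vendor that writes data to US servers with no basis identified yet.
vendor = ProcessorCheck(name="ExampleMailTool", transfers_outside_eea=True)
print(has_lawful_transfer_basis(vendor))  # False -> don't load personal data into it yet
```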

    Conclusion

    Dublin Web Summit had too many buckets that were filled up without any apparent thought to Data Protection compliance and how to manage it.

A single email marketing platform, with a simple and compliant structure for transferring data outside the EEA if required, and a clearly defined strategy for using it effectively and in a compliant manner, would have saved a host of headaches.

    The approach that has been taken would raise questions about how prepared DWS would be if audited or investigated by the Data Protection Commissioner.

  • Dublin Web Summit, Data Protection, Data Quality, and Brand

The KoolAid is being quaffed in great quantities this week in Dublin. And, having run national and international conferences in the Data Protection and Data Quality fields, I have to respect the achievement of the organisers of the Dublin Web Summit in putting together an impressive event that showcases the level of innovation, thought leadership, and capability in web, data, and all things tech.

    Yes. About that “thought leadership”…

    Data Protection

    Today’s Irish Times Business Section carries a story by Karlin Lillington about things that have been happening with her personal data at the Web Summit. An event she is not attending and has not registered for but for which she:

• is registered as an attendee
    • is listed on the media attendees list
    • has had her contact details distributed to sponsors and companies attending the event
    • has had her details shared with a social networking application that has pulled data from her Facebook profile

    In addition, she highlights that a list of ALL attendees is being distributed by the organisers if you request it through their Facebook page, but there is no opt-out for being included on this list and nothing in your registration that informs you that this will be happening.

    Emails are being sent out without people having opted-in, and not every email that is being sent out has the required opt-out. And I suspect that that may be the tip of the iceberg.

    Karlin reports that there have been complaints filed with the ODPC. My twitter stream this morning confirms that there are a number of people who I follow who have complained about how their data has been used. Many of these people would be the kind of people who you’d like to see fronting the thought leadership and innovation in web and data stuff, and they are irked at how their data is being abused.

The DPC apparently has had previous complaints about Web Summit and has engaged with them in an “Advisory Capacity”. In my experience working with clients who have been subject to Data Protection complaints and have been investigated by the DPC, that is the Data Protection equivalent of “helping the police with their enquiries”. Web Summit has been handed rope. They have been guided and advised as to what needs to be done to be compliant (in keeping with the gummy tiger provisions of Section 10 of the Data Protection Acts, which require the DPC to seek amicable resolution first and to focus on encouraging compliance rather than punishing breaches).

Dublin Web Summit has chosen, whether through a deliberate decision or a series of ego-driven and ignorance-fuelled errors of judgement, to ignore the advice of the DPC and continues to act in a manner that flouts the Data Protection rules that (and here’s the kicker) are not ‘nice to have’ but are guaranteed under Article 16 of the TFEU and have been subject to a number of recent tests at Circuit Court and High Court level.

Basically this is a Data Protection cluster f*ck of the highest order that illustrates one of the key problems with the “Innovation culture” in Ireland and, on the part of Government, either a blatant hypocrisy or a sociopathic ability to hold multiple contradictory positions at once. We want to promote Ireland as a great place to do business with web and data. And we want to be seen to be a bastion of increasingly responsible governance and regulation (after all, we’ve learned the lessons of the financial services collapse, right? That one where we had a Regulatory regime of so light a touch it could earn extra pin money touting for trade along the canal.) But for feck’s sake, don’t let the LAW get in the way of the use of TECHNOLOGY.

Dublin Web Summit has almost certainly breached the Data Protection Acts in a variety of ways, and many of those breaches would appear to have occurred AFTER the DPC had given advice and guidance on what not to do. So the Web Summit organisers might want to check Section 29 of the Data Protection Acts (never used, but there’s always a first time).

    Data Quality

    Data Protection and Data Quality go hand in hand. Heck, the principles for Data Protection are referred to in Directive 95/46/EC (and a variety of other places) as “Principles for Data Quality”. But on a more practical level, the approach the Web Summit has taken to obtaining and gathering their data and putting it to use has created some Data Quality problems.

Take Karlin for example. Her contact details have been included on a media contact list for the event, touting her as someone from the media who is attending. A variety of sponsors and exhibitors at the event have apparently contacted her looking to meet at the conference. I’m guessing they’re a bit surprised when a leading tech journalist tells them she isn’t attending the event and won’t be able to meet with them.

    Also, eyeballing the “media list” I’ve found:

    • Duplicate entries (suggesting the list was created from multiple sources)
    • Organisations listed that might not be media organisations but are possibly service providers interfacing with media (new media/old media)… so VENDORS.

    The categorisation of organisations is hair splitting on my part, but the duplicate entries on a list that was being circulated to sponsors and exhibitors is indicative of a lazy and careless approach to managing data.

How many of the people on the list are actually attending? And if you are counting the number of people attending from an organisation, are you allowing for duplicate and triplicate entries? If you are a marketing manager from a company ringing all these media people, only to be told that they are either not attending or that they are not actually covering the tech aspects of the event but are (heaven forfend) exhibiting at the event themselves, how much will you trust this list next year? Will you be happy to pay for it?

    Never mind the quality, look at the tech!!

    Brand

And this is where we come to the brand aspect of all of this. The Web Summit has made basic mistakes in Data Protection compliance even when presented with advice and guidance from the DPC. With regard to their Presdo social networking application, there are examples of it being used in data protection compliant ways (Karlin cites the LeWeb conference, which used the same application but presented people with a code they could use to confirm their consent to their personal data being accessed and shared).

    But Dublin knows better. Dublin is the go-getter innovator. Rules schmules, Indians Schmindians.

Which is a mantra that has disturbing echoes in the recent history of the European economy. So it is a mantra we should, as thought leaders and innovators, be trying to distance ourselves from as much as possible – by showing how we can design privacy into everything we do in web and data, and by pushing the innovation envelope in ensuring balance.

    But here’s my fear. EI and the Government don’t get this. I am not aware of ANY EI incubator programme [Brian Honan informs me that Blanchardstown and Dundalk IT have had him in to talk to programmes] that provides training or briefings on Data Protection (Wayra does. I recently provided some content to help).

    My company has submitted proposals to various government backed training programmes for On-Line business, and I have got letters back telling me that Data Protection is not relevant.

    Everyone seems happy to touch the hem of the prophets of the Web and drink hungrily from the Kool Aid, repeating the mantra “Rules Schmules, Indians Schmindians”. But it is worth remembering the origins of the phrase “Drinking the Kool Aid” (hint: it didn’t work out well for the first group to do it).

The Data Protection world globally is in a state of rapid evolution. Those who ignore the help and advice of Regulators invite penalties and brand damage. It is time that the thought leaders of our web economy stepped back and actually thought about how they develop their brand and build trust in the personal data economy.

    Koolaid from the Floor [an update]

    I made the mistake of watching twitter streams from the Dublin Web Summit. The KoolAid was gushing. Lots of great ideas and interesting innovation but not a single person seemed to be addressing the gorilla in the room that is Data Protection and Privacy.

Yes, Social Engagement is important. Yes, it is important to build trust and engagement with your brand. But as W. Edwards Deming famously said:

    You can’t inspect quality into a product, it’s there from the beginning.

In other words, if you don’t start off by respecting your customers and their privacy rights, you will leave a bad taste in your customers’ mouths and sour your brand.

    That’s the weedkiller in your web branding koolaid. Drink with care.

  • Triskaidekaphobia Cars and Information Economics

So, the Irish Government has decided – based, it would seem, solely on the analysis and advice of the Society of the Irish Motor Industry – to introduce a revised licence plate system for Irish cars starting from January of next year.

    The reasoning put forward is that fear of the number 13 will hamper car sales (superstition) and people don’t like the current system because they don’t know for certain when a car was manufactured (snobbery).

    Snobbery

    To address the snobbery element first, according to comments from SIMI quoted in the Irish Independent:

    Even though 70pc of new cars are bought during the first four months of the year, some consumers believe that it doesn’t accurately reflect the real age of a new car since cars bought in January are obviously manufactured the previous year while those bought later in the year are actually made in the same year

    So. 70% of all new cars are purchased in the first four months of the year. That’s a good statistic. It means that, on average, 3.75% of all new cars are sold in each of the remaining 8 months of the year. From that a reasonable guesstimate of the value at risk in each month can be worked out.
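As a back-of-the-envelope illustration of that guesstimate (the annual unit volume and the average price below are placeholder assumptions, not real market figures):

```python
# The 70% figure comes from the SIMI comment quoted above; the rest are placeholders.
share_first_four_months = 0.70
monthly_share = (1.0 - share_first_four_months) / 8   # spread over the remaining 8 months
print(f"{monthly_share:.2%} of annual new-car sales in each remaining month")  # 3.75%

annual_new_car_sales = 80_000   # assumed unit volume for illustration
average_price = 25_000          # assumed euro per car for illustration
value_at_risk_per_month = annual_new_car_sales * monthly_share * average_price
print(f"~€{value_at_risk_per_month:,.0f} of sales nominally 'at risk' in any one later month")
```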

    What is not a good statistic is “some consumers”. Is that one consumer, one consumer and their friend from the gym, 1,000 consumers, or every consumer who buys a car in the first four months of the year? If it is the latter, it obviously doesn’t bother them that much, or they would simply wait until later in the year to buy.

    Surely a better and more cost-effective approach would be for the SIMI to educate purchasers about the manufacture and supply chain processes that apply to vehicles. Bluntly: car manufacturers don’t build cars in the hope they will sell them. That’s too expensive. They apply logistics principles to build just enough to meet forecast demand, and no more. So a car purchased in January will not have been sitting in a storage facility for a dozen months. It will be relatively recent.

    And does the fact that it was manufactured in the previous calendar year actually matter if the features, specifications, and price are the same in December 2012 versus January 2013? I know from experience that the announcement of a new model affects book value but, excluding a change of model for a moment, logistics need to be considered before we treat the year of manufacture as a real decision point for buyers. After all, a car manufactured in January 2013 will be using parts that were on hand at the end of December 2012, that were probably ordered at the start of December 2012, and that were probably being manufactured by the supplier further up the chain from October 2012 in anticipation of a glut of orders from car manufacturers over December 2012 and January 2013.

    The new iPhone isn’t due out for a while yet, but already there are rumours of supply chains having been ramping up for months… that’s how logistics works.

    And as the supply chain for vehicles is largely a pull supply chain (building to respond to demand), the easiest way to avoid having a car that was assembled in 2012 delivered to you as a new car in 2013 is to order it in Month 2 or 3 of 2013.

    But even then it doesn’t matter, as the actual age of the components going into the car will depend on the vagaries of supply chain management all the way down the line from the dealership to the nice man in Shenzhen whose company makes the screws that hold your sun visor in place.

    I can remember a few years ago looking to buy a particular model of car. The dealership didn’t have any in stock and when they (and this is the CSI moment) looked at the logistics system from the manufacturer they were able to tell me when the next one of the model I wanted would be manufactured. There was no great holding pen of stocks waiting for me to turn up and buy.

    So… I would really like to see some objective evidence that people actually give a rat’s ass about when their car is assembled, given that the majority of new cars are purchased in a period when the supply chain inputs to that car would logically have taken place in the previous year. The purchasing data doesn’t tally with the claimed concern.

    Superstition

    It’s a number. Currently there are vehicles on the roads in Ireland with the number 13 in their licence plate. Not in the year element, but in the other element of the plate.

    Surely insurance companies can provide data on the number of claims involving vehicles registered within the past 10 years with the number 13 in their licence plate, against which we can determine whether the superstition is borne out by evidence. If it is… brilliant, we can establish an economic value case for changing an otherwise logical and straightforward system.

    The National Vehicle database (where registration numbers come from) would likewise have data on how many cars currently have a 13 in their licence plate. If people are already avoiding it then the data will be there… lots of 12s, lots of 14s, no 13s.

    If not, then there’s no actual reason to change, other than a vague (and unquantified) assertion that people won’t buy new cars because they have a 13 in the licence plate.

    Reality

    This sounds like a simple change. But it isn’t. Many of the systems that your licence plate goes into are old and could require systems changes to accommodate the new format. Many of them belong to government departments. For example:

    • National Vehicle and Driver File (Dept of Transport) – reg number and registered owner
    • VRT tax systems (Revenue Commissioners)
    • Gardaí (PULSE system, asset registers for garda vehicles)
    • Insurers
    • Car park ticketing systems such as the Pay-by-SMS service in Dublin (Local Authorities)
    • Car clamping operator systems
    • CIE (they need to log buses)
    • Car Rental operators

    It would be interesting to know if the Government commissioned any form of economic impact assessment to offset the cost of catering to one industry lobby group (for a problem that would exist in one year) against the costs to the State and other private sector organisations of making systems changes to support the new format.

    Particularly given that the changes would need to be implemented before mid December to allow for them to be in place for cars being registered in January.

    The reality is that life is not like Star Trek and data is not well managed. I doubt the required metadata is even available to do a quick Impact Assessment on the change. At a minimum you would need to know the maximum field lengths for reg numbers in key systems. You would also need information on data transfers, batch processing functionality, and any edit checking that is applied, so that the full extent of the changes is understood and addressed and systems or process failures are avoided.
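
    To give a flavour of the kind of thing that breaks, here is a minimal sketch of a legacy edit check. It assumes, purely for illustration, that the year element of the plate grows from two digits to three; the exact new format and the regex below are my own assumptions, not anyone’s production code.

        import re

        # A hypothetical edit check as it might exist in a legacy system today:
        # two-digit year, one or two letter county code, up to six digit sequence.
        OLD_FORMAT = re.compile(r"^\d{2}-[A-Z]{1,2}-\d{1,6}$")

        def validate_reg(reg: str) -> bool:
            """Accept only registrations matching the pre-2013 format."""
            return bool(OLD_FORMAT.match(reg))

        print(validate_reg("12-D-12345"))    # True  - a current plate passes
        print(validate_reg("131-D-12345"))   # False - a hypothetical new-format plate is rejected

        # A fixed-width database column (say CHAR(10)) holding "12-D-12345"
        # would have the same problem: the new plate is one character longer.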

    I was involved in a lot of that kind of activity in Call Centre systems for Y2K in a former life. It is not easy if things aren’t documented. And they are never documented.

    My prediction: if this suggestion goes ahead without any rigorous impact assessment, there will be at least one major process failure in January/February 2013 arising from it. The idea may have merits, but it risks being rushed in without proper impact assessment or any examination of the costs of implementation across the public sector and the other private sector users of this information.

    In reality, there has been a tentative value case put forward with no corresponding assessment of the costs of delivering that value, and a horrendously ambitious timescale to make what is a deceptively complicated change.

  • Daisy (chain) cutters needed

    Brian Honan (@brianhonan on twitter) has been keeping me (and the omniverse) updated via Twitter about the trials and tribulations of Wired.com columnist Matt Honan who was the subject of a Social Engineering attack on his Amazon, Apple, Gmail, and ultimately twitter accounts which resulted in every photograph he had of his young daughter being deleted, along with a whole host of other problems.

    Matt writes about his experience in Wired.com today.

    Apart from the salutary lesson about Cloud-based back-up services (putting your eggs in their basket leaves you at the mercy of their ability to recover your data if something goes wrong), Matt’s story also raises some key points about Information Quality and Data Governance and the need to consider Privacy as a Quality Characteristic of data.

    Part of the success of the attack on Matt’s accounts hinged on the use of his credit card number for identity verification:

    …the very four digits that Amazon considers unimportant enough to display in the clear on the web are precisely the same ones that Apple considers secure enough to perform identity verification. The disconnect exposes flaws in data management policies endemic to the entire technology industry, and points to a looming nightmare as we enter the era of cloud computing and connected devices.

    So, Amazon views the last four digits as being useful to the customer (quality): they help the customer identify the different cards on their account, so they are displayed in the clear. But Apple considers that same short string of data to be sufficient to validate a person’s identity.

    This is a good example of what I call “Purpose Shift” in Information Use. Amazon uses the credit card for processing payments and needs to provide information to customers to help them select the right card. In Apple-land, however, the same string of data (the credit card number) is used both as a means of payment (for iTunes, iCloud etc.) and as a means of verifying your identity when you ring Apple Customer Support.

    This shift in purpose changes the sensitivity of the data and, with it, either (as sketched after this list):

    • the quality of its display in Amazon (it creates a security risk for other purposes), or
    • the risk of its being relied on by Apple as an identifier (there is no guarantee it has not been swiped, cloned, stolen, or socially engineered from Amazon).
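
    Here is a minimal sketch of that conflict. It is not Amazon’s or Apple’s actual code, just an illustration of one piece of data serving two incompatible purposes:

        # The same four digits serve two purposes: a display convenience in
        # one system and a shared secret in another. Illustration only.

        def mask_card_for_display(card_number: str) -> str:
            """Retailer-style display: show only the last four digits."""
            return "**** **** **** " + card_number[-4:]

        def verify_caller_identity(claimed_last_four: str, card_number: str) -> bool:
            """Support-desk-style check: treat the last four digits as a secret."""
            return claimed_last_four == card_number[-4:]

        card = "4111111111111111"                     # standard test card number
        print(mask_card_for_display(card))            # the "secret" is printed in the clear
        print(verify_caller_identity("1111", card))   # ...and is all a caller needs to "prove" identity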

    Of course, the same is true of the age old “Security Questions”, which a colleague of mine increasingly calls INsecurity questions.

    • Where were you born?
    • What was your first pet’s name?
    • Who was your favourite teacher?
    • What is your favourite book?
    • What is your favourite sport?
    • Last four digits of your contact phone number?

    In the past there would have been a reasonable degree of effort required to gather this kind of information about a person. But with the advent of social media it becomes easier to develop profiles of people and gather key facts about them from their interactions on Facebook, Twitter, etc. The very facts that were “secure” because only the person or their close friends would know it (reducing the risk of unauthorised disclosure) are now widely broadcast – often to the same audience, but increasingly in a manner less like quiet whispers in confidence and more like shouting across a crowded room.

    [update: Brian Honan has a great presentation where he shows how (with permission) he managed to steal someone’s identity. The same sources he went to would provide the data to answer or guess “security” questions even if you didn’t want to steal the identity. http://www.slideshare.net/brianhonan/knowing-me-knowing-you]

    The use and nature of the data have changed (which Tom Redman highlights in Data Driven as one of the Special Characteristics of Information as an Asset). Therefore the quality of that data, for the purpose of being secure, is not what it once may have been. Social media and social networking have enabled us to connect with friends and acquaintances and random cat photographers in new and compelling ways, but we risk people putting the pieces of our identity together like Verbal Kint creating the myth of Keyser Söze in The Usual Suspects.

    Building Keyser Söze

    Big Data is the current hype cycle in data management because the volumes of data we have available to process are getting bigger, faster, and more varied. And it is touted as a potential panacea. Add to that the fact that most of the tools are Open Source and it sounds like a silver bullet. But it is worth remembering that it is not just “the good guys” who take advantage of “Big Data”. The Bad Guys have access to the same tools and (whether by fair means or foul) often to the same data. So while they might not be able to get the exact answer to your “favourite book” question, they might be able to place you in a statistical population that likes “1984 by George Orwell” and make a guess.
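
    A minimal sketch of that kind of guess, using an entirely made-up “survey” of a population segment (none of this is real data):

        from collections import Counter

        # The attacker does not need YOUR answer, only the most common answer
        # in a population segment that resembles you. Made-up data throughout.
        favourite_books_in_segment = [
            "1984", "1984", "The Hobbit", "1984", "Pride and Prejudice",
            "1984", "The Hobbit", "1984",
        ]

        def best_guess(observations):
            counts = Counter(observations)
            answer, hits = counts.most_common(1)[0]
            return answer, hits / len(observations)

        guess, probability = best_guess(favourite_books_in_segment)
        print(f"Guess '{guess}' and you will be right about {probability:.0%} of the time")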

    Yes, it appears that some processes may not have been followed correctly by Apple staff (according to Apple), but ‘defence in depth’ thinking applied to security checks would help provide controls and mitigation against process ‘variation’. In all my time working with Call Centre staff (as an agent, Team Leader, Trainer, and ultimately as an Information Quality consultant), no staff member wanted to do a bad job… but they did want to do the quickest job (call centre metrics) or the ‘best job they thought they should be doing’ (poorly defined processes/poor training).

    Ultimately the nature of key data we use to describe ourselves is changing as services and platforms evolve, which means that, from a Privacy and Security perspective, the quality of that information and associated processes may no longer be “fit for purpose”.

    As Matt Honan says in his Wired.com article:

    I bought into the Apple account system originally to buy songs at 99 cents a pop, and over the years that same ID has evolved into a single point of entry that controls my phones, tablets, computers and data-driven life. With this AppleID, someone can make thousands of dollars of purchases in an instant, or do damage at a cost that you can’t put a price on.

    And that can result in poor quality outcomes for customers and, in Matt’s case, the loss of the record of a year of his child’s life (which, as a father myself, I would count as possibly the lowest quality outcome of all).

  • Quis Custodiet CAI?

    The CAI (@the_cai on twitter), not to be confused with that famous venture capital firm based in Langley, Virginia, the CIA, has today announced that it wants people who have been affected by the Ulster Bank IT outage in recent weeks to provide personal data to them for the purposes of starting a Class Action suit (http://thecai.ie/media-news/the-consumers-association-of-ireland-cai-ub-%E2%80%98class-action%E2%80%99-initiative/). Or Initiative. It’s not entirely clear which, for reasons to follow.

    Jebus.. where do I start on this one?

    1. I have the legal standing of a Matlock script (4 yrs UCD Law as a BBLSer but never qualified professionally) and even I know that there is NO SUCH THING AS A CLASS ACTION SUIT IN IRISH LAW. So to claim that you are going to initiate such a thing is false and misleading advertising. The CAI has stated that it is, in fact, a process to gather information to provide to the Dept of Finance and the Central Bank for the purposes of calculating the losses and impacts suffered.
    2. Should this campaign ever appear in print or media adverts as a “class action” I will be lodging a complaint with the Advertising Standards Authority on the basis that any such advert would be misleading as to a significant matter of fact
    3. Journalists covering the story should pay attention to the Press Council code of practice on accuracy in reporting – do not report something that CANNOT EVER HAPPEN as if it were about to happen. Talk to a lawyer about this and get a quote from one. Simon over in McGarr Solicitors is a good one, and Fergal Crehan BL is a frequent media commentator on legal issues whose surname is not McDermott.
    4. I think the use of the phrase “Class Action” in this context is just dumb as the average consumer doesn’t know that there is no such thing as a Class Action in Irish law, given that their legal training and skills are derived from reruns of LA Law and Boston Legal, and perhaps a few episodes of the Good Wife. Therefore I would suggest that the CAI needs to be very careful how they set and manage expectations here.

    Right, now that that bit is out of the way, it is worth considering the implications of what the CAI is proposing to do here.

    • Obtain Personal Data and potentially Personal Financial Data from individuals
    • For the stated purpose of doing
    • A thing that cannot be done without a legislative change that has been long-fingered… well, since I was a doe-eyed undergrad in UCD Law to be honest, unless the thing that is being done is just to forward the information on to the Central Bank and Dept of Finance.

    But the DPC is very clear that the expectation of the customer is important here – they should not be ‘surprised’ by the processing of their data. And what exactly will be presented to Government? Raw data or aggregated data? The former creates risk of ‘scope creep’ if data is left with Government and finds its way into other processes.

    I’ve put the words “Personal Data” and “Personal Financial Data” in Capitals because they are important Words of Power (to steal a term from Frank Herbert’s Dune saga, a story that has more chance of happening than the ‘Class Action’ the CAI is discussing).

    Personal Data is protected under the Data Protection Acts. It must be obtained for a specified and lawful purpose, be adequate and not excessive for the stated purpose, and it needs to be disposed of once that purpose has expired. And while you have it you have to keep it safe and secure.

    Personal Financial Data is a term that entered Irish Data Protection practice in July 2010 with the introduction of the Data Security Breach Code of Practice (which I was involved in consultation submissions on). Basically it is a surname and an account number, or an account number and data from which a surname could reasonably be inferred.

    Personal Financial Data needs to be kept safe and secure as well. And if there is even the suspicion that it has been lost, stolen, misplaced, accessed without authority, or otherwise tampered with, the Data Controller (in this case the CAI) has a very clear duty to notify the Data Protection Commissioner and the affected Data Subjects. It too must be obtained lawfully and fairly by the Data Controller.

    So, let’s run the rule over this shall we:

    • There is a specified purpose.
    • It is not one that can be achieved in law… (Oh… there’s a problem, if people think they’re entering a process to have their day in Court).
    • The purpose for which it is being obtained cannot come into being (if it is a “Class Action” as described), therefore the data should not be retained. (OK.. fill out a form, press send, clear form, don’t send any data).
    • Given that the stated purpose cannot actually be achieved (see my earlier point about Class Actions in Irish Law) then, by definition, any data obtained for that purpose is excessive and should not be captured or retained.

    So, in short…

    IN OPERATING A PROCESS TO OBTAIN AND RETAIN PERSONAL DATA AND PERSONAL FINANCIAL DATA FOR THE PURPOSES OF A CLASS ACTION WHICH CAN NEVER TAKE PLACE UNDER THE CURRENT LEGAL SYSTEM IN IRELAND CAI ARE ALMOST CERTAINLY ACTING IN BREACH OF THE DATA PROTECTION ACTS.

    Of course, we are in the wonderful world of branding and sound-bites and the phrase “Class Action” will doubtless wind its way into UB Headquarters where it will be bounced around meeting rooms like a tribble at a Star Trek convention (I used to freak out when people mis-used the term “Duty of Care” and talked about “Precedence” when they meant “Precedents”, all without a fricking clue what the terms ACTUALLY meant in a Legal & Regulatory context – but a man on Matlock kept saying them so they must be WORDS OF POWER they told me).

    This may spark some more serious introspection about the issues involved in the Bank but won’t actually get inside a Court, which could be a problem for CAI if people submitting information to them believe that a mass litigation is the end game with money at the end of the rainbow.

    So, CAI need to be a little more up-front and explicit about the SPECIFIC PURPOSE for the data they are processing. The branding isn’t helping that and could trigger problems under the Data Protection Acts. They also need to be clear about WHAT data is being presented to the Government and the Central Bank, about what will happen to it once the Government and Central Bank have been briefed, and about when it will be deleted. And they need to be darned sure that the submission of that data is secure (hint: email is not PCI-DSS compliant).
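
    For what it’s worth, being explicit about this is not hard. Here is a minimal sketch of the kind of processing record I have in mind; the field names and values are mine, invented for illustration, and are not drawn from anything the CAI has published:

        from dataclasses import dataclass
        from datetime import date

        # A hypothetical record of processing: purpose, data, recipients,
        # retention basis, and deletion date, stated up front. Illustration only.
        @dataclass
        class ProcessingRecord:
            specified_purpose: str
            data_categories: list
            recipients: list
            retention_basis: str
            delete_by: date

        ub_submission = ProcessingRecord(
            specified_purpose="Collate consumer impact data to brief Dept of Finance and Central Bank",
            data_categories=["name", "contact details", "description of loss"],
            recipients=["Dept of Finance", "Central Bank (aggregated figures only)"],
            retention_basis="Until the briefing has been delivered and acknowledged",
            delete_by=date(2012, 12, 31),
        )

        print(ub_submission.specified_purpose)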

    Yes.. lobby and campaign and organise on behalf of consumers. But in doing so don’t get so caught up in the branding, image, and soundbites of what you are doing that you forget about the rights of the, well… ummmm…, CONSUMER.

    (There is a way they can go about this without any of the problems outlined above, but it will mean:

    • Changing their branding and eating humble pie about the whole thing
    • Hiring me to be VERY VERY CLEVER on their behalf with some Smart Monkey Consulting™)

    Heck, if they want to have an independent Data Quality review of the end-to-end processes and impacts, I am a qualified Information Quality practitioner with years of experience and two books under my belt. (Hint… the key to it all is process and information flows.)

  • Olympic betting scandal and Data Protection

    An Irish athlete is under investigation less than 24hrs into the Olympics arising from allegations that they, in effect, bet against themselves.

    An anonymous source became aware of the pattern of betting and notified the authorities.

    This blog post is being written to help media commentators avoid either putting their feet in it or wasting the scarce time of the Data Protection Commissioner by raising spurious enquiries about whether the disclosure of the data in this case was legal.

    Bluntly – you don’t want to come out swinging against the bookies if they were acting correctly as you’ll look like a fool. And, if they were in the wrong, you don’t want to throw the Data Protection Act around like snuff at a wake as there’s enough bullshit out there about what it is and what it does to fertilise the Rose Gardens in St Anne’s Park until doomsday.

    First things first: we need to bone up on some of the law governing gambling, specifically section 11 of the Gaming and Lotteries Act 1956. That legislation makes it an offence to cheat.

    11.—Every person who by any fraud or cheat in promoting or operating or assisting in promoting or operating or in providing facilities for any game or in acting as banker for those who play or in playing at, or in wagering on the event of, any game, sport, pastime or exercise wins from any other person or causes or procures any person to win from another anything capable of being stolen shall be deemed guilty of obtaining such thing from such other person by a false pretence, with intent to defraud, within the meaning of section 10 of the Criminal Justice Act, 1951 (No. 2 of 1951), and on conviction shall be punished accordingly.

    That is important as Section 8 of the Data Protection Acts permits the disclosure of personal data where necessary to allow the prevention, detection, or investigation of a crime. In this case cheating.

    Note: I’m not saying that any cheating actually took place here, just that circumstances appear to exist which seem to require investigation of the possibility of such cheating.

    As the winning bets were actually drawn down, that might fit the bill under the Gaming Acts.

    I always advise clients to have at least two lawful processing conditions to rely on. In this case the bookmakers could probably argue the “Legitimate Interest” grounds… It is in their interest to red flag potential cheating in the placing of bets or rigging of events. And the remedy to that would be to alert the appropriate body who would in turn have a legitimate interest in ensuring the propriety of the Games.

    Of course, the complicating factor is that the information was sent to the OCI from an “anonymous email”. If the sender was an employee of the bookmakers and had permission from their employer to alert the OCI, then that might be an allowable disclosure. But if they aren’t an employee (for example, if they work with the police and came into possession of information relating to an investigation) or didn’t have permission to disclose the details of the athlete, then that could be a breach of the Data Protection Acts.

    So. Before we start chasing hares that aren’t there, let’s all step back and remember what the law actually is here. It is far more important to focus on Google and their ‘factual inexactitude’ on Street View, and on the paltry resources of our DPC.

    Thus endeth the rant