The Data Protection Commissioner has just published his annual report. It makes (as always) interesting reading. It has only been released in the last 30 minutes but there are elements of it that I will return to in detail in a post on my company website later this week (once digested).
Over here on my personal blog I thought I’d pick up on a broader question that bubbles up frequently in Data Governance and Compliance, not least in Data Protection. That is the importance of “tone at the top”.
The DPC’s Annual Report cites a number of breaches by way of case study and reports on an audit follow-up. The audit of An Garda Síochána’s PULSE system is mentioned in dispatches, which brings me to the troubling topic of this post.
The Minister for Justice and Defence has disclosed into the public domain information about another person (a Data Subject) relating to an alleged (and disputed) “stop and caution” which came into his possession, one must assume, in the course of his ministerial functions from some, as yet unconfirmed, source. The Minister sees nothing wrong with disclosing personal data, and in this case potentially sensitive personal data, for his own purposes. He has stated in his defence that he felt the disclosure was in the “Public Interest”. His Taoiseach has backed him in his actions.
It brings to mind the argument put forward by Nixon when challenged by David Frost about the legality of certain actions. Just because you can do something doesn’t mean you should – this is the long repeated mantra of Data Protection practitioners worldwide.
To extrapolate a little: a senior member of the executive management of an organisation has disclosed publicly information about another person that has come into their possession through the course of their professional activities. The disclosure is without a clear lawful purpose but the manager feels it is in the Public Interest to know the kind of person they are dealing with (“Public Interest” and “Of Interest to the Public” are two different concepts). The manager sees nothing wrong with this. His CEO sees nothing wrong with this and backs the manager.
If this was a private organisation the DPC would be investigating and the executive and CEO would potentially be facing personal liability under section 29 for their consent and connivance in the commission of an offence under the Data Protection Acts.
When the Minister for Justice and Defence, under whose Department the Data Protection Acts reside, cannot recognise where the political win that comes from dropping the other guy in it, because you have information about them others don’t have, runs into conflict with fundamental rights to Personal Data Privacy and the Data Protection Acts themselves, the tone at the top is resonating bum notes.
When the Taoiseach sees no problem with this, the bum notes become cacophonic.
If the Minister is to argue that business owners and public servants should respect the law, then his recent actions inject a diminished minor note into the fanfare he should have around Data Protection, what with him being the Minister currently charged in Europe with shepherding the revised Data Protection Regulation to a final text.
Why should an SME owner or CEO of a large corporate challenged with respecting the Data Protection Acts seek now to act in compliance with that legislation? The Minister can flout it, why not them? Why should a young garda officer on the beat, struggling to make the mortgage payment that month, respect the Data Protection Acts and their code of conduct under the Garda Siochana Act when offered money by external entities for information when the Minister can unthinkingly ignore the ethics and letter of the legislation in pursuit of political point scoring?
Over two years ago I wrote about the same issues arising in a political context. One key quote sticks out in the context of the current situation with regard to political leaders:
If they are promoting a “tough on regulation” policy platform, then they must lead with a clear “tone from the top” of Compliance and good Governance.
The Office of the Data Protection Commissioner is intended to be independent and is required under the TFEU to be so. However they operate under the auspices of the Department of Justice and Defence. In such a structure there is a significant risk that the clarinet solo of the Commissioner (still grossly under-resourced) will be drowned out by the cacophonic discord of the Tone at the Top.
The DPC has commented today that:
in general the public sector, including ministers, has a solemn duty to protect any personal data coming into its possession and may only disclose it when it has the consent of the individual concerned or under another basis laid down in law.
Should Deputy Mick Wallace make, or have made, a complaint it would be interesting to see how the Government and the Commissioner’s office would act to avoid the kind of enforcement actions related to a lack of independence of the Regulatory Authority (the DPC) that have arisen in Hungary. Certainly it would be a matter worthy of mention in the 2013 Annual Report of the DPC. Bluntly it would represent a clear test of the independence of the Office of the Commissioner (and for that matter their resourcing) if they were to have to investigate a Minister rather than just a Minister’s department or agency.
The Data Protection Acts owe their genesis and evolution to the actions of political forces in Europe in the early years of the 20th century. We should be very worried when the Minister responsible for internal and external State Security feels that encroaching on a right to privacy without a clear lawful purpose is an acceptable political tool.
The Tone at the Top is braying some bum notes this week and some conductor needs to bring the orchestra back in tune. Otherwise, like all great bands, musical differences may trigger the beginning of the end.
I had looked forward to the Minister’s statement today but am underwhelmed by his position that Deputy Wallace’s status as a public personality is a justification for the disclosure. Were this the case, the DPC audit of the Gardaí and Dept of Social Protection would not have been so concerned about access of data relating to celebrities. The disconnect and discord in the “tone at the top” is palpable!
(If the use of the phrase “tone at the top” in connection with Fine Gael and Data Protection is familiar to some, this post might refresh your memory)
So, David Hall is challenging the provisions of the Personal Insolvency Act regarding the publication of details on public registers. I’m quoted in this Irish Times article about it. My comments, which I expand on here as an update to my earlier post, were to the effect that:
- The publication of detailed personal data on a publicly accessible register would invite the risk of identity theft in the absence of any appropriate controls over the access to that data.
Examples of public registers where controls are in place are the Electoral Register (search one name and address at a time), the Companies Registration Office (find out the home addresses of Directors if you pay a small admin fee), and the list of Revenue Tax defaulters (publication only over a threshold, summary personal data published).
Public does not mean Open. Public means that it should be able to be accessed, subject to appropriate controls. The requirement to name people who are in an insolvency arrangement needs to be balanced against their right to personal data privacy and the risk of identity theft or fraud through the use of published personal data.
The mockup Register entries presented on the ISI website may do the organisation a disservice with the level of data they suggest would be included and I await the publication of further revisions and the implementation of a control mechanism to introduce balance between the requirement to publish a Register and the need to protect personal data privacy. But of course, Section 133 of the Personal Insolvency Act is silent as to what the actual content of the published Registers should be (at least as far as I can see). So there is scope for some haggling over the content of what the final Registers will be.
A key question to be considered here is what is the purpose of the Registers and what is the minimum data that would be adequate and relevant to be provided on a Register to meet that purpose.
Section 133(4) allows for the public to “inspect a Register at all reasonable times” and to take extracts or copies of entries, and even allows for a small fee to be charged (the “reasonable cost of making a copy”). So there is scope for some form of access control to be put in place either with a search mechanism like the electoral register and/or the operation of a paywall for the making of copies (e.g. generating a pdf report on headed paper, at €1 a go).
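To make the “public but not open” idea concrete, here is a minimal sketch of what a controlled register lookup might look like: exact single-name searches only (no bulk export), with a copy fee gating the issuing of extracts. All names, fields, and the €1 fee are illustrative assumptions, not taken from the Act or any actual ISI system.

```python
# Hypothetical sketch of a "public but not open" register lookup.
# Field names and the fee are invented for illustration.

COPY_FEE_EUR = 1.00  # the "reasonable cost of making a copy" under s.133(4)

# Minimal in-memory register holding summary data only
# (no date of birth, no full address)
REGISTER = {
    ("jane doe", "co. galway"): {"arrangement": "DSA", "commenced": "2013-05-01"},
}

def search_register(name, county):
    """Return at most one entry for an exact name + county match.

    Bulk export is deliberately impossible: the caller must already
    know who they are looking for, mirroring the electoral register
    model of one name-and-address search at a time.
    """
    return REGISTER.get((name.strip().lower(), county.strip().lower()))

def request_copy(name, county, fee_paid):
    """Issue a copy of a single entry only once the copy fee is paid."""
    if fee_paid < COPY_FEE_EUR:
        raise PermissionError("Copy fee not paid")
    entry = search_register(name, county)
    if entry is None:
        raise LookupError("No matching register entry")
    return dict(entry)
```

The design point is that the access-control layer, not the register content, is what keeps “public” from collapsing into “open”.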
- Section 186 of the Personal Insolvency Act needs to be interpreted and applied with care.
Section 186 of the Personal Insolvency Act purports to suspend the operation of Section 4 of the Data Protection Acts in certain circumstances. This is the section which allows a Data Subject to request a copy of their personal data. This is a basic right under the Acts.
However, the Data Protection Acts already contain, in Section 5, provisions which allow for the suspension of Section 4. Specifically, Section 5(1)(d) allows for an exclusion for data which is being processed in the performance of a statutory function intended
…to protect members of the public against financial loss occasioned by
i) dishonesty, incompetence, or malpractice on the part of persons concerned in the provision of banking, insurance, investment or other financial services or in the management of companies or similar organisations
ii) the conduct of persons who have at any time been adjudicated bankrupt
in any case where the application of that section would be likely to prejudice the proper performance of any of those functions.
The operation of the Insolvency Service of Ireland would appear to fall under this section. But rather than a blanket exclusion, Section 5 has a more nuanced approach – you can’t have your data if it will prejudice the proper performance of the ISI’s role. Of course, 5(1)(d) only kicks in if there has been dishonesty, incompetence, or malpractice on the part of a bank that has resulted in a financial loss or risk of financial loss to the Data Subject.
Section 5 gives a number of other grounds for exclusion from the operation of Section 4. Among them are:
- If disclosing the data is contrary to the interests of protecting the international relations of the State (which would raise an eyebrow I’m sure if cited in an insolvency situation).
- If legal privilege attaches to the records in the case of communications between clients and legal advisers.
If the restriction is on disclosure of personal data during the course of an investigation then this would likely be covered under Section 5(1)(a) and there is legislative precedent in the Property Services (Regulation) Act 2011 to extend that to an investigation undertaken by the PRA under that Act.
An explanation and clarification?
The ISI has similar powers of investigation and prosecution of offences (section 180 and Chapter 5 of the Personal Insolvency Act 2012). Therefore the exemption from disclosure under Section 5(1)(a) would apply. A “belt and braces” inclusion of an exemption from section 4 of the DPA for the investigation of offences would be consistent with the Acts.
However this would only be the case for the investigation of an offence. The processing of a general complaint would not fall within the scope of an offence under the Insolvency Act or other legislation.
Therefore a blanket opt out would not exist. If an offence is suspected Section 186 reinforces the existing provisions of the Data Protection Acts. But general complaints to the Complaints committee would (based on my reading) not, unless the complaint wound up in an offence being detected. Of course a Data Subject would only be entitled to their own data.
- Excessive Retention of Data on Public Registers is a concern.
This, of course, is another biggie from a Data Protection point of view. How long does this data need to be held for? In the UK similar schemes have the personal data removed from the public register 3 months after the debtor exits the scheme. Here…
Section 170 of the Personal Insolvency Act indicates that Personal Insolvency Practitioners will need to retain data for 6 years after the “completion of the activity to which the record relates”. This is consistent with the statute of limitations on a debt and makes sense – it would allow people who avail of an Arrangement to get access to information about their arrangement if required. However it is not the same as the Public Registers.
Section 133 sets out the provisions relating to the Registers of Insolvency Arrangements. It says nothing about the length of time a person’s data will be listed on a Register. Given the purpose is to maintain a searchable register of people who are in Insolvency Arrangements, the principle of not retaining data for longer than it is required for a stated purpose kicks in.
And, as is all too often the case in Irish legislation, we seem to be left looking to the UK for a benchmark period for retention: Duration of Arrangement plus 3 months… but that may be 3 months longer than required.
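If the UK benchmark of “duration of arrangement plus 3 months” were adopted, the purge date for a register entry reduces to a simple date calculation. The sketch below assumes 3 calendar months; the figure comes from the UK schemes discussed above, not from anything in the Personal Insolvency Act itself.

```python
# Illustrative "duration of arrangement plus 3 months" retention rule.
# The 3-month grace period is the UK benchmark, not an Irish statutory figure.
from datetime import date

def register_removal_date(exit_date, grace_months=3):
    """Date on which an entry should be purged from the public register,
    counted in calendar months from the debtor's exit from the scheme."""
    month = exit_date.month - 1 + grace_months
    year = exit_date.year + month // 12
    month = month % 12 + 1
    # Clamp the day for short months (e.g. 30 Nov + 3 months -> 28/29 Feb)
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    day = min(exit_date.day, days_in_month[month - 1])
    return date(year, month, day)
```

The point of writing it down is that a retention rule this simple could be built into the Register from day one, rather than left to later haggling.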
- Personal Solvency Practitioners acting as Data Processors, and the implications for security and awareness of obligations under the Data Protection Acts
This is a squeaky wheel issue in many respects. All too often organisations will outsource functions or engage people to perform functions on their behalf under a contract, which would set out the purposes of the processing and the role of the Processor and sanctions for breaching their obligations. The Personal Insolvency Act sets out how Personal Insolvency Practitioners will be appointed, empowers the ISI to set standards re: their level of education and skill, and imposes sanctions for breaches of the standards of conduct of the role.
The function of a PIP is one which could have been undertaken internally within the ISI but it has been decided to outsource it to these PIPs.
Therefore a PIP is likely to be viewed as a Data Processor acting on behalf of the Data Controller (ISI) [for more on this read here]. Therefore they need to be taking (at a minimum) appropriate security measures to prevent unauthorised access to data. The concern I expressed in the article was that it is an unknown quantity what level of understanding of their obligations under the Data Protection Acts a PIP will have and what training (if any) will be provided.
Section 161(c) of the Personal Insolvency Act 2012 provides a mechanism for this to be addressed through the prescribing of the completion of appropriate training from a qualified trainer with a proficiency in Data Protection as one of the training requirements for authorisation as a PIP.
[Disclosure: my company provides an extensive range of Data Protection compliance review and training services]
Coverage of some of the structures of the Insolvency Service of Ireland has been rattling through my ears while I work the past few days. What I’ve heard gives rise to an unsettling feeling that the architects of the scheme have decided that the insolvent are a form of Untermensch for whom some of the fundamental rights that EU citizens enjoy are either put on hold or entirely foregone.
Data protection is a fundamental right in Europe, enshrined in Article 8 of the Charter of Fundamental Rights of the European Union, as well as in Article 16(1) of the Treaty on the Functioning of the European Union (TFEU). As a fundamental right, according to the EU Commission it “needs to be protected accordingly”.
Some of what I have heard I can only hope is half-informed speculation, but I fear it may be grounded in reality.
- Publication of personal data including name, address, and date of birth on a public register of insolvents. This is problematic as it creates a risk of identity theft in my view. Also – what is the purpose for which this data is being published? How could the same objective be met without putting personal data privacy at risk of unauthorised access? How is this compatible with s2(d) of the Data Protection Acts which require appropriate measures to be taken to keep data safe and secure?
- Retention of data on the register after a scheme has been exited. It is rumoured that the details of people listed on the register mentioned above would have their details retained indefinitely. Why? How is this compatible with the requirement under the Data Protection Acts (and the underlying Directive) to retain data no longer than is necessary for the purpose? How would it be compatible with the requirement under the proposed General Regulation for Data Protection to give citizens of the EU a “Right to be forgotten”? What is the function/purpose of retaining information once the agreed scheme has been completed?
- Section 186 of the legislation purports to exempt the Agency from Section 4 of the Data Protection Acts. This is the section that allows individuals to get copies of information held about them by Data Controllers. It is a right that is derived from Directive 95/46/EC. While there are grounds under Article 13 of the Directive for a member state to limit subject access requests where it impacts the economic or financial interest of the State, I’m at a loss to see how a response to a Subject Access Request for a single person or class of people might impact our economic and financial interests as a State. The test is that the restriction must be necessary, not nice to have. Of course, if things are so precarious that a Subject Access Request will tip the economy into a death spiral, then perhaps the Irish people should be told this.
There is a significant imbalance in rights and duties emerging here. Particularly when compared with the secrecy of NAMA and the closeness with which the privacy of significant contributors to the exuberance of the Boom times has been guarded by that Agency. There is also a suggestion that Data Protection rights are optional extras that can be mortgaged as part of entering the process.
I really do hope I’m wrong about all of this and it is not the data black hole that it appears to be and that personal data privacy will continue to be respected as a fundamental right. After all, when you’ve lost everything else, things like that can be very important.
I’m addicted to the think. Every day, when not thoroughly occupied with the challenges of a client strategy or issue, I find myself drawn to hard thinking. Sometimes I even get people plying me with think.
Like this past few weeks. Lots of think.
One thing I’ve been asked to think about is the whole area of Bring Your Own Device, colloquially known as “BYOD”. I understand that this emerged as a term because people hoped that enterprise technology management would be a lot like a college house party. You’d bring a bottle and go home with two bottles of something better than you went with. Which in tech terms might be going with an Android JellyBean device and coming home with an iPhone and a Windows 8 slate.
But everyone is wrong. The focus is wrong. Because we have in effect focussed on the size, colour, shape, and label of the bottles in our BYOD/BYOB thinking. In doing so we’ve missed the importance of what is in those bottles. Which is important if you find out that you’ve arrived home from your party with two bottles of water when you had been expecting vodka.
From a process and governance perspective what we are actually dealing with is a classically simple issue that has just been obscured because:
- In the old days the company gave you your bottle and you were damn glad to have one (i.e. they provided the technology you used to do things)
- We entered the hooplah hype cycle at the time when everyone was jumping up and down like 5 year olds on Christmas morning when they find Santa has left them a bike – “YAY!!!! TOYS!!!!”
What we are actually dealing with is a problem not of how to allow people to use their devices but rather a problem of how to give people access to resources in a secure and controlled manner when we don’t own the bottles any more. This requires organisations to do some thinking. What can be done to ensure that people are given access to resources in the right way?
Some thoughts spring to mind:
- Define standards for the bottle (the device) you will let people bring to the barrel to be filled with yummy data/booze. Provide data in 1 litre chunks, or require 32GB capacity and perhaps limit the OS versions you’ll allow
- Put a bottle in their pocket: Implement a standard workspace that sits on the device that you can control the parameters of.
- Sell them the bottles (i.e. stick with only allowing approved company-issue devices).
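The first two options above boil down to an admission check at the point where a device asks for access: does the bottle meet the standard, and does it carry the controlled workspace? Here is a minimal sketch; the allowed OS versions, storage threshold, and field names are all invented for illustration, not drawn from any particular MDM product.

```python
# Hypothetical BYOD admission check: does the "bottle" meet the standard
# before it gets filled with company data? Thresholds are invented.

ALLOWED_OS = {("android", "4.1"), ("android", "4.2"), ("ios", "6.1")}
MIN_STORAGE_GB = 32

def admit_device(os_name, os_version, storage_gb, has_managed_workspace):
    """Admit a device only if it meets the device standard AND carries
    the controlled 'WORK' container that keeps the red wine of work
    data apart from the white wine of the personal world."""
    if (os_name.lower(), os_version) not in ALLOWED_OS:
        return False
    if storage_gb < MIN_STORAGE_GB:
        return False
    return has_managed_workspace
```

Note that the check says nothing about who owns the device – which is exactly the reframing: the policy is about access to resources, not about the bottle’s label.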
Of course, the world is a complicated place so when people start using their own device for work purposes it means there is a risk that the red wine you are giving them for work will be mixed with the white wine of their private personal world. That means the practice of giving them a bottle that is marked “WORK” would be sensible.
By reframing the thinking away from the device being brought to the party and towards how access to data, applications, and other resources will be provided to n variants of platform, the organisation can begin to think strategically without getting bogged down in detail.
It also gives a great branding opportunity for the strategy. This is a strategy for GIVING ACCESS TO OUR RESOURCES. Abbreviated it is a GATOR Strategy.
So, does your organisation have a GATOR strategy yet? If not, you should really get one. And make it snappy.
Yes. It is a pity that Guthrie cards will be destroyed. Yes, there is potentially valuable data held on them. But there is also a fundamental right to Personal Data Privacy under EU Treaties and there is that pesky thing called the Data Protection Acts/Data Protection Directive.
The DPC investigated the issue of heel prick cards. They negotiated with the HSE to determine a “best fit” solution that struck an uneasy and far from ideal balance between the desire to have a genetic databank and the need to have specific explicit informed consent for the processing of sensitive personal data in that way.
Comments today from Minister Kathleen Lynch that this needs to be looked at again and efforts are underway to prevent the destruction are baffling. “Efforts are underway”? So the Department is actively working to undermine the role and independence of the DPC? Is new legislation being prepared with retrospective effect that will be passed by the end of next week? Is data being anonymised (tricky with genetic data)? Is the HSE going to do a big push to get people to request the cards relating to them and/or their children from the HSE?
What needs to be looked at in my view is the culture and ethos around managing personal data that pervades some areas of political and civil society. For that is where the root and origin of this dismal scenario lies. (A scenario, as an aside, that has faced private sector organisations with their customer databases on a number of occasions: not obtained lawfully, not obtained for that purpose, destroy it.)
The reason the issue arises with the heel prick tests is that consent was obtained for the processing of blood samples for a very specific purpose – testing for metabolic disorders in neonatal contexts. The consent obtained was for that purpose. No other. Sensitive personal data must be processed on the basis of specific, explicit informed consent. There appears to have been no plan for maintaining the data associated with those samples or for managing the process of obtaining consent for future purposes (or enacting legislation to allow for future purposes without requiring consent). There appears to have been an assumption that these samples could be retained ad infinitum and used for purposes undisclosed, unimagined, or unavailable at the time the samples were originally taken. This was, and is, not the case under Data Protection law.
As an Information Quality practitioner, I am bemused by the optimism that is expressed that the heel prick data would be useable in all cases. What processes are in place to link the data on the Guthrie card to an identifiable individual? Do those processes take account of the person moving house, their parents marrying, divorcing, remarrying (and the name changes that ensue), or the family emigrating? If the Information Governance in the HSE is such that this is rock solid data then great. I’m running a conference and want good case studies… call me!
The quality of information angle is important as it raises a second Data Protection headache – adequacy of information. If the information associated with the actual blood tests is not accurate, up to date, and adequate then a further two principles of the Data Protection Acts come into play.
Yes the destruction of Guthrie cards is a problem (but as Ireland has been doing Guthrie tests since 1966, it has happened before). Yes it is an unsatisfactory situation (but one that appears unavoidable given the legal situation). But the root cause is not the Data Protection Acts or the DPC. The root cause is a failure in how we (as a society) think about information and its life cycle, particularly in Government and Public sector organisations. A root cause is a failure of governance and government to understand the legal, ethical, and practical trade offs that are required when processing personal data, particularly sensitive personal data. A root cause is the failure to anticipate the issues and identify potential solutions before a crisis.
RTE reports that the Minister describes the 12% awareness level of the right to have cards returned to families rather than destroyed as “telling”. But what does it tell us? Does it tell us people don’t care? Or does it tell us that the HSE awareness campaign was ineffective? I would go with the latter. Frankly the lack of information has been stunning and, as always in Irish life, there is now a moral panic in the fortnight before the deadline. And again, the governance of how we communicate about information and information rights is called into question here.
I haven’t seen any data on how often the Guthrie card data was being used for research purposes. I’m sure some exists somewhere. Those arguing for the records to be saved should go beyond anecdote and rhetoric and present some evidence of just how useful this resource has been. We need to move beyond sound-bite and get down to some evidence based data science and evidence driven policy making.
Storing the samples takes physical and economic resource, two things in short supply in the HSE. Storing them ad infinitum without purpose “just in case” creates legal issues. Legally the purpose for which the samples were originally taken has expired. By giving families the option of having the cards returned to them the HSE creates the opportunity for specific informed consent to future testing, while removing the other data protection compliance duties for those records from themselves.
The choice is not an easy one but the Data Protection mantra is “just because you can doesn’t mean you should”. And just because you have to doesn’t mean it is easy or without pain. But by clearly drawing a line in the sand between non-compliant and compliant practices the HSE avoids the risk of future processing being challenged either to the DPC or the ECJ (after all, this is a fundamental human right to data privacy we are dealing with).
Hard cases make bad law, as the old saying goes. However the corollary is that often good laws lead to hard cases where society needs to accept errors of the past, take short term pain, identify medium and long term solutions, and move on in a compliant and valid manner.
Rather than weeping and gnashing teeth over a decision that is done and past it would behove the Minister and our elected representatives more to focus their efforts on ensuring that the correct governance structures, mind-sets, knowledge, training, and philosophy are developed and put in place to ensure we never find ourselves faced with an unsatisfactory choice arising from a failure to govern an information asset.
Twitter is great. I found myself this evening discussing the psychology of alarms with Rob Karel of Informatica. He had tweeted that a car alarm outside his office had been going off for an hour but his brain had filtered it out. This is not an uncommon reaction to bells and alarms and is the reason why I have a monitored alarm system in my home, a fact I will return to later.
Our neuropsychological response to alarms is pretty much the same as our response to any alert to risk. It is influenced by the same basic flaws in information processing in the limbic system of the brain, our “lizard brain”. If the danger is not one we are familiar with and it is not immediate we discount it to the point of ignoring it.
An alarm going off is an alert that something is happening somewhere else to someone or something else. Without a hook to make it personal it is just noise and it fades into the background. In the absence of a direct effect on us we tune out the distraction so our lizard brain can focus on other immediate risks – to us. An alarm = someone else’s stuff at risk.
This is why a measure of data quality needs to have an action plan associated with it, so that the people in the organisation can tie the metric to a real effect and put a clear response plan into action. Just as when a fire alarm goes off we know to go to the nearest exit and leave belongings behind, or just as we know that if an oxygen mask drops in front of us on a plane we should tug hard and take care of our own mask first.
There is an alarm stimulus. There is a planned response that makes it personal to us. Alarm, something must be done, this is a thing, let’s do it.
But often Information Quality scorecards are left hanging. The measure of success is the success of measurement. Just as the measure of home security is often whether you have a house alarm. But a ringing alarm that calls no one to action serves no purpose.
My home has a monitored alarm. If one sensor is triggered I get a phone call to check on me and alert me. If a perimeter sensor and an internal sensor are triggered together I get a call to let me know that there are police en route. Each time, the alarm is responded to by a stranger with a planned response. My role is to cry halt at any time, gather data about the incident (was there someone calling to the house who forgot the alarm code? Is there a key holder on the way?), and generally coordinate the plan’s roll out.
What can we learn from this for how we design DG and IQ strategies? What is your planned response to an alarm bell ringing in your data?
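One minimal way to make that planned response explicit is to bind every data quality metric to a named response plan, and treat any breach without a plan as noise to be fixed. The metric names, thresholds, and plans below are invented for illustration:

```python
# Illustrative sketch: an alarm is only an alarm if a response plan
# exists; a metric breach with no plan is flagged as noise.
# All metric names, thresholds, and plans are invented.

RESPONSE_PLANS = {
    "customer_address_completeness": "Route to CRM team; block mailshot until >= 95%",
}

def check_metric(name, value, threshold):
    """Evaluate a data quality metric and say what happens next."""
    if value >= threshold:
        return {"status": "ok"}
    plan = RESPONSE_PLANS.get(name)
    if plan is None:
        # A ringing alarm that calls no one to action serves no purpose
        return {"status": "noise", "action": "define a response plan for this metric"}
    return {"status": "alarm", "action": plan}
```

The design choice mirrors the monitored home alarm: the trigger and the planned response are defined together, before anything rings.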
It was reported yesterday that the Irish Government has issued a “discussion paper” on the proposed administrative sanctions under the new Data Protection Regulation.
EDRI has criticised the proposals with reference to the “warning/dialogue/enforcement” approach taken by the Irish DPC. Billy Hawkes has, in the past, been at pains to clarify that the Irish DPC uses dialogue to encourage compliance and also seeks to encourage organisations to raise questions and issues with the DPC to avoid breaches. There is a belief that the “brand impact” of even being spoken to by the DPC about an issue can prompt “road to Damascus” conversions in organisations.
That is all well and good, but my experience working with organisations is that this can result in management playing a game of “mental discounting” (I’ve written about this before in response to the original draft DP Regulation). If there is a perception that the probability of an actual penalty is low, there is little leverage in appealing to intrinsic motivation of a business manager when his extrinsic drivers for behaviour are pushing the decision towards a “suck it and see” approach.
Having re-read the discussion paper and EDRI’s response to it, I can’t help feeling that EDRI may be over-stating the “ask” being made here a small bit. They describe the proposals as the “destruction of the right to privacy”, citing the Irish DPC’s own experiences with the Garda Pulse system, which has been plagued by reports of Data Protection breaches since its introduction, despite the Gardaí having a statutory Code of Practice for Data Protection. In 2010 the DPC reported that that Code of Practice was not being implemented in the Gardaí.
However, this says as much to me about the attitude to Data Protection in some (but not all) parts of the Irish Public Service as it does about the merits of the Data Protection Commissioner’s approach to encouraging compliance or the specifics of anything that might be discussed on foot of this discussion paper. Furthermore it raises questions for me about the capability and resources that the Data Protection Commissioner has to execute their function effectively in Ireland, and even suggests that there may be informal barriers to the effective operation of their function in the public sector which need to be urgently considered (given that the Office of the DPC is supposed to be independent).
Given the extent of the negative findings in the interim report on the 2012 audit of the PULSE system I personally would hope that there would be some level of penalty for the Garda Siochana for failing to follow their own code of practice. But that is a different issue to what the Discussion paper actually raises.
What is being discussed (and what would I like them to consider?)
The Discussion Paper that was circulated invites Ministers at an Informal Council meeting to consider (amongst other things):
- If wider provision should be made for warnings or reprimands, making fines optional or at least conditional upon a prior warning or reprimand;
- If supervisory authorities should be permitted to take other mitigating factors, such as adherence to an approved code of conduct or a privacy seal or mark, into account when determining sanctions.
It flags the fact that the Regulation, as drafted, allows for no discretion in terms of the levying of a penalty. What is proposed is a discussion of whether warnings, or making fines optional, would be a better mechanism than scaring the bejesus out of people with massive fines. This in itself doesn’t kill the right to Privacy, but it does potentially create the environment where the fundamental Right to Privacy will die, starved of any oxygen of effective enforcement.
Bluntly – when faced with a toothless framework of warnings and vague threats, businesses and public sector bodies will (and currently do) play a game of mental discounting where the bottom line impact (in terms of making money or achieving a particular goal) outweighs the other needs and requirements of society. So an organisation may choose to obtain information unfairly or process it for an undisclosed secondary purpose because it will hit its target in this quarter and the potential monetary impact won’t emerge for many more months or years, after an iterative cycle of warnings. The big penalty will be seen as something “far away” that can be worried about later. After everyone’s got their bonuses or their promotions etc.
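The mental discounting at work here can be sketched with some purely hypothetical numbers (the gain, fine, probability, and discount rate below are all illustrative assumptions, not figures from the Regulation or any real case):

```python
# A manager weighs an immediate gain against a penalty that is perceived
# as improbable and, thanks to the cycle of warnings, years away.
immediate_gain = 100_000        # value of hitting this quarter's target (hypothetical)
penalty = 250_000               # headline fine if eventually sanctioned (hypothetical)
probability_of_penalty = 0.05   # perceived chance a fine is ever levied
years_until_penalty = 3         # warnings come first, so any money is years away
discount_rate = 0.10            # the manager's time preference

# Expected cost = probability-weighted fine, discounted back to today.
expected_cost = (penalty * probability_of_penalty) / (1 + discount_rate) ** years_until_penalty
print(f"Expected discounted cost of breach: €{expected_cost:,.0f}")
print(f"Immediate gain from breach:         €{immediate_gain:,.0f}")
# With these numbers the gain dwarfs the expected cost,
# so "suck it and see" wins every time.
```

With any plausible inputs the conclusion is the same: as long as the penalty is improbable and deferred, the discounted cost of non-compliance is trivial next to the immediate gain.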
If strict statutory liability is the model that is being proposed, and the discussion is to look at watering it down to a stern talking-to as a matter of formal policy in the Regulation, I must despair of the wingnuts in my government who thought it would be a good idea to even suggest this. But I do agree that tying the hands of the Regulators to the big ticket monetary penalties might not work in their interests or in the interests of encouraging compliance with the legislation.
What is needed is a middle ground. A mechanism whereby organisations can make errors of judgement and be warned, but where the warning carries some sanction with it. The sanction needs to be non-negotiable. But it needs to be transparent and obvious that this is what will happen if you ignore DP rules. It needs to be easily enforced and managed. There should be a right of appeal, but appealing the non-negotiable fixed penalty should carry with it the risk of greater penalties. And the ability of an organisation to benefit from iterative small penalties should be removed if they are a recidivist offender.
There is a system that operates like this in most EU countries – it is the Penalty Points system for motoring offences. Hopefully the discussion will move to looking at how a similar system might be implemented for Data Protection offences. The penalties could be tiered (e.g. no cookies notification: €150 fine and 2 points on first offence, €500 and 4 points on second; failure to document processing: €500 fine and 6 points on first offence). The points could be cumulative, with the “optionality” of higher sanctions being removed if you were, for example, an organisation with 100 points against you (congratulations, you’ve failed to up your game and now you are being prosecuted for the full tariff). Organisations bidding for public sector contracts could be required to have a “Data Protection Points” score below a certain level.
This system could be devised in a way that would take account of mitigating factors. If a code of practice was entered into, and was successfully audited against by an appropriate body, then points could be removed from the “scorecard” at the end of a 12 month period. If there were mitigating factors, a lower level category of offence might actually apply (I’ll admit I’m not sure how that might work in practice and need to think it through myself a little). Perhaps self-notification to the DPC, engagement in codes of practice, mitigating factors or actions etc. would carry a “bonus points” element which could be used to offset the points total being carried by a Data Controller (e.g. “adopted code of practice and passed audit: minus 3 points; introduced training and has demonstrated improved staff knowledge: minus 3 points”).
Certain categories of breach might be exempt from mitigation, and certain categories of offence, just like with motoring offences, might be a permanent black mark on the organisation’s Data Protection record (e.g.: Failure to engage with DPC in an investigation, failing to take actions on foot of an audit/investigation).
The scheme could be administered at an EU level by the EDPB, with the points accumulated by organisations operating in multiple member states either being cumulative or averaged based on a standardised list of key offences. Member States could be free to add additional offences to this list locally, within the spirit and intent of the Regulation.
That would be an innovative idea, based on a model that has been proven to have an influence on compliance behaviour in motoring. And it would provide a transparent mechanism that would ensure that warnings could be given, advice could be sought, and positive engagement could be entered into by Micro Enterprises, SMEs, and large corporates. It would provide a relatively low impact mechanism for levying and collecting penalties from organisations who are in breach (penalties could potentially be collected as part of annual tax returns as a debt owed to the State), and it could be used to reward organisations who are taking positive actions (“bonus points”).
Finally, it would give the basis of a transparent scorecard for organisations seeking to evaluate data processors or other service providers (in the same way as Insurance providers use penalty points data for motoring to assess driver risk), and it would give a clear escalation path to the full sanctions in the Regulation (e.g. 100 points and you go straight to full penalties).
What it does not give is a death spiral of warnings that don’t amount to penalty and as a result give a platform for organisations to ignore the Right to Privacy. It is an evolution of the conciliatory approach to encouraging compliance but one that is given teeth in a manner that can be transparent, easily explained, and standardised across the EU27.
An opinion piece by Joe Humphreys in the Irish Times on the 9th of January (which I can link to here thanks to the great work of McGarr Solicitors) discusses anonymous comment on-line. In doing so he presents an argument that would appear to suggest that persons taking a nom de plume in debate are in some way sinister and not trustworthy.
He suggests three actions that can be taken to challenge “trolling”. I’ve previously addressed this topic on this blog (27th December 2012 and previously), so I thought I’d examine each of Mr Humphreys’ suggestions in turn and provide agreement or counter-argument as appropriate.
1. Publicly condemn it. Overall I agree with this. However who or what should be condemned? The pseudonymous comment or the pseudonymous commenter? Should you ‘play the man or the ball’, to borrow a metaphor from sports? The answer is that, in an open society the correct course of action is to either ignore the argument or join the argument. Anything else leads to a downward spiral of tit-for-tat trolling and abuse, one of the very behaviours that has sections of our body politic and mainstream media crying “Down with this sort of thing!”
2. “Develop ways of discriminating against it… … by technology that helps to authenticate people’s identities”. In my blog post of the 27th of December I address this under the heading of “Bad Idea #1”. The concept of identity is incredibly fluid. As Mr Humphreys appears fond of citing scientists and philosophers, I’m sure he is familiar with Descartes’ writings on the concept of identity.
The idea of an “identity register” is one that raises significant technical, philosophical, and legal issues. South Korea has recently abandoned its attempt to impose a “Real Names” policy on the use of social media due to these issues, and “Real Name” policies in social media have been criticised on Data Protection grounds in Europe. In China, where a “real names” policy is in place for social media, people use fake ID to register, and the Chinese government has failed to get a significant majority of internet users to comply with its law.
Describing anonymity as a “market failure” to be fixed by enforced identification equates identity with a tradable commodity. This is, ironically, the business model of Facebook, which Mr Humphreys describes as “an invention of Orwellian proportions”.
3. “Challenge the anonymous to explain why they are hiding themselves. I’ve yet to hear a good excuse…” In my post of the 27th of December I link to an excellent resource (the GeekFeminism Wiki) which lists a number of reasons why people might not be able to use their real names in on-line comment. Time taken to research this: 30 seconds on Google. They include: survivors of abuse, whistleblowers, law enforcement personnel, and union activists.
The implication made by Mr Humphreys that people choose to comment anonymously because they don’t want their employer to know they are on social media all day is disingenuous to say the least and belies a biased view of those of us who are active users of modern technologies for communication, discussion, and debate.
Finally, history has a litany of examples of people who, for various reasons, have used pen names to hide themselves. From Leslie Charles Bowyer-Yin (Leslie Charteris, author of The Saint) to Samuel Langhorne Clemens (Mark Twain), to François-Marie Arouet (Voltaire), to Eric Blair (George Orwell), there is a long tradition of preparing “a face to meet the faces that you meet” (to borrow a line from T.S. Eliot), for a variety of reasons. See http://en.wikipedia.org/wiki/List_of_pen_names for more examples.
The Official Twitter Account of the Irish EU Presidency (@eu2013ie) tweeted earlier today about recipes.
That gave me a little food for thought given the subject matter I posted on yesterday.
- Ireland will hold the Presidency of the EU in the first half of 2013.
- Part of what we will be tasked with is guiding the Data Protection Regulation through the final stages of ratification
- Viviane Reding has been very vocal about the role Ireland will play and the importance of strengthening enforcement of rights to Personal Data Privacy in the EU.
- World wide media and our European peers will be looking at Ireland and our approach to Data Protection.
In that context I would hope that any Dáil Committee would keep the importance of the right to Privacy (as enshrined in EU Treaties and manifested in our current Data Protection Acts and the forthcoming Data Protection Regulation) in mind when reviewing legislation and regulation around Social Media.
While I don’t think that the recipes being tweeted about by the @eu2013ie account contained any Chinese recipes, the news today about changes in the Chinese Social Media regulatory environment is disturbing in the context of the rights to privacy and free speech. One interesting point about China’s approach to control of on-line comment from the FT article linked to above is this:
It has also tried to strengthen its grip on users with periodical pushes for real name registration. But so far, these attempts have been unsuccessful in confirming the identity of most of China’s more than 500m web users
Food for thought.
[edited introductory paragraphs at 20:34 2012-12-27 reflecting feedback from Aoife below – fair comment made and responded to]
[Note: This has been posted today because RTE are doing a thing about “social media regulation” which means that levers are being pulled that need to be red flagged]
I drafted this post on Christmas Eve morning 2012. The original post had the introduction below. One person (out of the 600+ who have read this post by now, a few hours after I posted it) felt that the opening was too hyperbolic. Perhaps it was, so I decided to tweak it. I did hope I wouldn’t have to publish the piece I’d drafted. But the fact that the opening item on the 6pm news on the 27th of December 2012 was a piece about the Chairman of the Dáil communications committee announcing that the committee would meet in the New Year to discuss regulating ‘Social Media’ meant that my misgivings about the approach of the Irish political classes to the use of Social Media were not entirely misplaced.
I’m writing this on Christmas Eve morning 2012. I dearly hope I never have to publish it. If I do it will be because the Government I helped elect will have abandoned any pretence of being a constitutional democracy and will have instead revealed its true insular, isolated, clientelist nature in a manner that will disgust and appal people. And this will be all the more disturbing as the Government will have used real personal tragedies to justify this abandonment of principles.
But I am not hopeful. If this post sees the light of day something will have gone horribly wrong with the Irish Body Politick.
That the content of the media coverage today echoed the expectation I set out in the paragraphs below for the rationale of any review of regulation (“cyber bullying” and other misuses/abuses of social media) suggests that, perhaps, this post might contribute a useful counterpoint to a perspective that appears to dominate the mainstream.
I fully expect within the early weeks of 2013 for the Irish Government to propose regulations requiring that users of social media be required to tweet or blog in an identifiable way. No more anonymous tweets, no more anonymous blogs. The stated reason will be to “combat cyber bullying”.
Sean Sherlock TD is quoted in today’s Irish Times (2012/12/24) calling for action on anonymous posting. This is ominous. Others quoted in that article are calling for ‘support systems’ to help TDs deal with the “venom” being targeted at them via social media. While the support systems suggested are to be welcomed, the categorisation of expressions of opinion by citizens as “venom” is, at best, unhelpful and, at worst, disingenuous.
What seems to be in pipeline to be proposed to stem this tide is almost inevitably going to be some form of requirement that people verify their identity in some way in blog posts or tweets. Remove the veil of anonymity, the reasoning will go, and this venom will go away. The “keyboard warriors” will put their weapons beyond use and step in line with the process of government and being governed.
The fact that politicians are lumping Facebook in with these other platforms illustrates the tenuous grasp many have on the facts – Facebook already operates a ‘real identity’ policy, which raises problems about what your real identity is, and has been flagged as potentially in breach of EU law by at least one German Data Protection Authority.
Why this is a bad idea
In Orwell’s 1984 a shadowy figure of the State ultimately breaks the protagonist Smith, requiring him to give up on love and private intimacy and resubmit to a surveillance culture in which the Thought Police monitor the populace and the media tells everyone it is necessary to protect against the “enemy”. That shadowy figure is called O’Brien. My passion for data privacy is a reaction to my namesake, and from that perspective I can see three reasons why this is A VERY BAD IDEA.
Bad Idea Reason #1 – What is Identity?
Requiring people to post comments, write blogs, or tweet under their own identity creates a clear and public link between the public persona and the private individual. The supporters of any such proposal will argue that this is a deterrent to people making harsh or abusive comments. However, in a fair society that respects fundamental rights, it is important to think through who else might be impacted by a “real names” policy. There are quite a number of examples of this, the most famous recent example being Salman Rushdie having his Facebook account suspended because Facebook didn’t believe he was who he said he was.
Identity is a complex and multifaceted thing. We all, to borrow a phrase from T.S Eliot, “prepare a face to meet the faces that we meet”. The GeekFeminism Wiki has an excellent list of scenarios where your “real name” might not be the name you are really known by. In Ireland, people who would be affected by a “real names” policy in social comment would include:
- Public servants who cannot comment publicly on government policy but may be affected by it
- Survivors of abuse
- People with mental health concerns or problems
A real names policy would require that every time Bono tweets or blogs about Ireland, Irishness, or Irish Government policies he would have to do it under the name Paul David Hewson. And who the heck would be interested in an opinion expressed by Paul Crossan about epilepsy?
Bad Idea Reason #2 – How will it work exactly?
It is one thing to say that you want people to post comments using their identity, but it is another thing entirely to get a system in place that actually works. Identity is a ‘flexible’ thing, as outlined above. Facebook require evidence of your identity in the form of personal ID (passport/driver’s licence). They have the resources to process that data securely. But they still get it wrong (see the Salman Rushdie example cited above). If verifiable identities are required for comment, then how exactly would a small personal blog that is used to exercise my mental muscles outside of my work persona (domestic use) be expected to handle the overhead of verifying the identity of commenters?
Would I be expected to get people to register with the blog and provide evidence of ID? Would I be able to get a grant to help implement secure processes to obtain and process copies of passports and drivers’ licenses? Or will the State just require that I shut up… shop? Would the State indemnify me if this blog was compromised and data held on it about the identity of others was stolen?
Every few years we used to hear similar calls about the registration of mobile phones. The argument in favour of registration usually goes: “If they have to register, bad people won’t use these phones”. That argument is bunkum. I’ve written about it at length here but the short form:
- If people have to register and provide ID for verification, they will use fake ID (as is happening in China with their mobile phone registration requirement)
- If the law requires registration, it is, strangely enough, unlikely that this would bother criminals… by definition they find the law an inconvenience rather than a barrier.
- If people are required to register without some form of identity verification then you’ll wind up with Mr D. Duck of “The Pond” owning a lot of phones. A pseudonym – so no more identifiable than a picture of an egg.
Apply this to a proposal for a ‘real name’ policy for tweets, blogs, comments and other social media discourse and we wind up with a situation where achieving the objective that the proposers of non-anonymised comment seem to be seeking would place a disproportionate burden on those of us who engage in debate on-line. Even then it would not be foolproof. And a non-verified identity is nothing more than another pseudonym. I could, for example, use the name of another person when “registering” to comment. Or a fictional duck.
It is worth noting that South Korea is abandoning its “Real Names” policy for social media for a variety of reasons.
Bad Idea Reason #3 – The logical principle must be technology neutral
Blogging, tweeting, social media… these are all technologies for self-expression and social interaction that barely existed five years ago and were unheard of in the mainstream a decade ago. Therefore any regulation that requires identification of commenters must be framed in such a way as to anticipate new technologies or new applications of existing technology, or risk near instant obsolescence. Therefore the regulation would need to be technology neutral. Which means that, in order to avoid it being discriminatory and to ensure it has the fullest possible effect, it would need to be applicable to other forms of technology.
When debating this on Twitter with Harry McGee on the 22nd December I asked him if he saw a difference between Twitter and a malicious phone call or an anonymous pamphlet. His response was they were, in his opinion, the same.
So, if tweets are the same as anonymous pamphlets, the logical extension of needing to be able to identify the tweeter is a need to be able to identify the pamphleteer. The State would want to be able to identify the author of a published thought. We have seen this before. In fact, the seeing of it before is one of the reasons that the EU has a right to personal Data Privacy (introduced in the Lisbon Treaty) and why the strictest interpretations of Data Protection laws in Europe tend to be in Germany and former Soviet bloc countries. Have we managed to forget that, within the lifetime of people now in their mid thirties, governments in Eastern Europe required people to register their typewriters with the State so the State could identify the writers of letters, plays, pamphlets and other communications?
As Mikko Hypponen of F-Secure (one of the world’s leading experts on information security) says in one of his many presentations:
In the 1980s in the communist Eastern Germany, if you owned a typewriter, you had to register it with the government. You had to register a sample sheet of text out of the typewriter. And this was done so the government could track where text was coming from. If they found a paper which had the wrong kind of thought, they could track down who created that thought. And we in the West couldn’t understand how anybody could do this, how much this would restrict freedom of speech. We would never do that in our own countries.
But today in 2011, if you go and buy a color laser printer from any major laser printer manufacturer and print a page, that page will end up having slight yellow dots printed on every single page in a pattern which makes the page unique to you and to your printer. This is happening to us today. And nobody seems to be making a fuss about it. And this is an example of the ways that our own governments are using technology against us, the citizens.
So… if we can uniquely identify the typewriter or the printer shouldn’t we take the logical step and have the owner register it, just like in communist East Germany in the 1980s? So that when a pamphlet or letter is sent that has the wrong kind of thought the relevant authorities can take action and immediately stop that kind of thing. But sure, we’d never do that in our own country.
We’d just ask everyone register their identity before blogging or tweeting. Totally different. The Government would never propose the creation of a register of printer owners. Would they?
Use the laws we have, don’t create crazy new ones
But something must be done!! This is an intolerable thing, this “cyberbullying”.
And indeed it is. But let’s not get hung up on the label. It is not “cyberbullying”. That would be bullying by the Cybermen, a fictional race from the TV show Doctor Who.
What this is is inappropriate and/or malicious use of communications networks and technologies. It is no different from a smear poster campaign, a co-ordinated letter writing campaign, or a malicious calling campaign. And there are already laws a-plenty to combat this in a manner that is proportionate with the curtailment of freedoms of speech and rights to privacy.
Bluntly: if your conduct on-line amounts to a criminal act or defamation, it is almost inevitable that your illusion of privacy will evaporate once the blow-torch of appropriate and existing laws is applied.
- RateMySolicitor.com’s owner tried to argue that he couldn’t take down defamatory content because he’d ‘lost the keys’ to the system. The High Court issued an order taking the site down on foot of a civil action.
- The Post Office (Amendment) Act 1951 contains prohibitions on the use of the “telephone” to “send any message” which is indecent, menacing, or false for the purpose of causing “needless anxiety” (see Section 13). Note the section talks of using a “telephone”, but does not specify PSTN voice communications and, I would suggest, could arguably be interpreted in line with the Communications (Retention of Data) Act 2011 definition of a “telephone service”, which is quite broad. Also, as it refers to ‘sending’ a message, it is arguably not limited to voice communications applications. And people have to connect to Twitter and Facebook somehow to use those platforms to distribute messages. Of course, that line of argument would need to overcome the problem of precedent in a case of abusive messages posted on Bebo, where the meaning of “telephone” was interpreted tightly. This would suggest to me that one option open to Government is to review existing legislation rather than introduce new legislation. [Updated to reflect comments and link provided by TJ McIntyre of Digital Rights Ireland]
- The Non-Fatal Offences against the Person Act 1997 contains a number of provisions that would be of relevance here.
The power to pierce privacy in this case comes from the pursuit of a criminal investigation of what are deemed under the Communications (Retention of Data) Act 2011 as serious offences. Any social media provider will provide information about users where a serious offence is being investigated. It’s in their terms and conditions (see Twitter’s here – Section 8). This would allow the identification of the IP address used at a date and time for transmitting a message via twitter and could be used to compel a telecommunications provider to provide the name of the account holder and/or the location of the device at the time and at present.
But it is done under a clear system of checks and balances. And it would be focussed just on the people who had done a bold thing that was complained about, not placing a burden on society as a whole just in case someone might do something naughty.
I would ask the Government to use the laws we already have. Update them. Join them up. Standardise and future proof their application. But do so in a technology neutral way that isn’t swiping at flies while ignoring larger concerns. And please don’t mandate non-anonymised comment – it simply doesn’t work.
When proposing any course of action it is advisable to prepare for the unintended consequence. With this chatter of requiring comment to be identifiable comes the risk that, should it happen, the social media data of Irish citizens will become either more valuable (because marketers will be able to mine the ‘big data’ more efficiently) or less valuable (because we switch off and there is less data to meaningfully mine).
There is also the risk that our Government will, yet again, send a signal to the world that it just doesn’t understand On-Line, for all its bleating about a “Knowledge Economy”. And at that point we may become less attractive to the foreign new media firms who are setting up base here. Like Twitter, LinkedIn, Facebook, etc.
Requiring identifiable comment is a dumb move and a silly non-solution to a non-problem. The problem is not anonymity. The problem is actually how we evolve our laws and culture to embrace new communication channels. We have always had anonymous comment or pseudonymous dispute. Satire thrives on it, art embraces it, and literature often lives through it. Just because every genius, wit, and idiot now has a printing press with a global reach does not mean we need to lock down the printing presses.
It didn’t work in Stasi East Germany or other Soviet Bloc dictatorships.
Other solutions, such as working the laws we already have, are preferable and are more likely to work. Educating users of social media that there are still social standards of acceptable behaviour is also a key part of the solution.
Tagging the typewriters is NEVER the answer in a democracy. This O’Brien stands firmly against this particular Thought Crime.