Category: Information Quality

  • The Electoral Register (Here we go again)

    The Irish Times today carries a story on page five which details a number of proposed changes to the management of the Electoral Register arising from the kerfuffle of the past two years about how totally buggered it is. For those of you who don’t know, I’ve written a little bit about this in the past (earning an Obsessive Blogger badge in the process donchaknow). It was just under two years ago that I opened this blog with a post on this very topic…

    A number of points raised in the article interest me, if for no other reason than they sound very familiar – more on that anon. Others interest me because they still run somewhat counter to the approach that is needed to finally resolve the issue.

    I’ll start with the bits that run counter to the approach required. The Oireachtas Committee has been pretty much consistent in its application of the boot to Local Authorities as regards the priority they give to the management of the Electoral Register. According to the Irish Times article, the TDs and Senators found that:

    “Running elections is not a core function of local authorities. Indeed, it is not a function that appears to demand attention every year. It can, therefore, be questioned if it gets the priority it warrants under the array of authorities”

    I must humbly agree and disagree with this statement. By appearing to blame Local Authorities for the problem and for failing to prioritise the management of the Electoral Register, the Committee effectively absolves successive Ministers for the Environment and other elected officials of their failure to ensure that this ‘information asset’ was properly maintained. Ultimately, all Local Authorities fall under the remit of the Minister for the Environment, Heritage and Local Government. As the ‘supreme being’ in that particular food chain, the Minister (and their department) is in a position to set policy, establish priorities and mandate adequate resourcing of any Local Authority function, from Water Services to Electoral Franchise.

    The key issue is that the Franchise section was not seen as important by anyone. A key information asset was not managed, and no ongoing plans were put in place for the acquisition or maintenance of the information. Only when there were problems applying the information did anyone give a darn. This, unfortunately, is a problem that is not confined to Local Government and Electoral data – a large number of companies worldwide have felt the pain of failing to manage the quality of their information assets in recent times.

    Failing to acknowledge that the lack of management priority was systemic and endemic within the entire hierarchy of Central and Local Government means that a group of people who probably tried to do their best with the resources assigned to them are likely to feel very aggrieved. “The Register is buggered. It’s your fault. We’re taking it away from you” is the current message. Rather, it should be “The system we were operating is broken. Collectively there was a failure to prioritise the management of this resource. The people tried to make it work, but best efforts were never enough. It needs to be replaced.”

    W. Edwards Deming advised people seeking to improve quality to ‘drive out fear’. A corollary of that is that you should not engage in blame when a system is broken unless you are willing to blame all actors in the system equally.

    However, I’m equally guilty, as I raised this issue (albeit not in as ‘blaming’ a tone) back in… oh, 2006:

    Does the current structure of Local Authorities managing Electoral Register data without a clear central authority with control/co-ordination functions (such as to build the national ‘master’ file) have any contribution to the overstatement of the Register?

    Moving on to other points that sound very familiar…

    1. Errors are due to a “wide variety of practices” within Local Authorities. Yup, I recall writing about that as a possible root cause back in 2006. Here and here and here and here and here in fact.
    2. The use of other data sources to supplement the information available to maintain the Register is one suggestion. Hmmm… does this sound like it covers the issue?
    3. Could the Electoral Register process make use of a data source of people who are moving house (such as An Post’s mail redirection service or newaddress.ie)? How can that be utilised in an enhanced process to manage & maintain the electoral register? These are technically surrogate sources of reality rather than being ‘reality’ itself, but they might be useful.

      That’s from a post I wrote here on the 24th April 2006.

      And then there’s this report, which was sent to Eamon Gilmore on my behalf and which ultimately found its way to Dick Roche’s desk while he was still the Minister in the DOELG. Pages 3 to 5 make interesting reading in light of the current proposals. Please note the negatives that I identified with the use of data from 3rd party organisations that would need to be overcome for the solution to be entirely practicable. These can be worked around with sound governance and planning, but bumbling into a solution without understanding the potential problems that would need to be addressed will lead to a less than successful implementation.

    4. The big proposal is the creation of a ‘central authority’ to manage the Electoral Register. This is not new. It is simply a variation on a theme put forward by Eamon Gilmore in a Private Member’s Bill which was debated back in 2006 and defeated at the Second Stage (The Electoral Registration Commissioner Bill, 2005). This is a proposal that I also critiqued in the report that wound its way to Dick Roche… see pages 3 to 5 again. I also raise issues of management and management culture on page 11.
    5. The use of PPS numbers is being considered, but there are implications around Data Protection. Hmm… let’s see… I mentioned those issues in this post and in this post.
    6. And it further assumes that the PPS Identity is always accurate (it may not be, particularly if someone is moving house or has moved house. I know of one case where someone was receiving their Tax Certs at their Dublin address, but when they went to claim something, all the paperwork was sent to their family’s home address down the country where they hadn’t lived for nearly 15 years.)

      In my report in 2006 (and on this blog) I also discussed the PPS Number and, given the range of documents a PPS number can be printed on, the potential for fraud if it is not linked to some form of photographic ID. This exact point was referenced by Senator Camillus Glynn at a meeting of the Committee last week:

      “I would not have a difficulty with using the PPS card. It is logical, makes sense and is consistent with what obtains in the North. The PPS card should also include photographic evidence. I could get hold of Deputy Scanlon’s card. Who is to say that I am not the Deputy if his photograph is not on the card? Whatever we do must be as foolproof as possible.”

      This comment was supported by a number of other committee members.

    So, where does that leave us? Just under two years since I started obsessively blogging about this issue, we’ve moved not much further than when I started. There is a lot of familiarity about the sound-bites coming out at present – to put it another way, there is little on the table at the moment (it seems) that was not contained in the report I prepared or on this blog back in 2006.

    What is new? Well, for a start they aren’t going to make Voter Registration compulsory. Back in 2006 I debated this briefly with Damien Blake… as I recall Damien had proposed automatic registration based on PPS number and date of birth. I questioned whether that would be possible without legislative changes or if it was even desirable. However, the clarification that mandatory registration is now off the table is new.

    The proposal for a centralised governance agency and the removal of responsibility for Franchise /Electoral Register information from the Local Authorities sounds new. But it’s not. It’s a variation on a theme that simply addresses the criticism I had of the original Labour Party proposal. By creating a single agency the issues of Accountability/Responsibility and Governance are greatly simplified, as are issues of standardisation of forms and processes and information systems.

    One new thing is the notion that people should be able to update their details year round, not just in a narrow window in November. This is a small but significant change in process and protocol that addresses a likely root cause.

    What is also new – to an extent – is the clear proposal that this National Electoral Office should be managed by a single head (one leader), answerable to the Dail and outside the normal Civil Service structures (enabling them to hire their own staff to meet their needs). This is important as it sets out a clear governance and accountability structure (which I’d emphasised was needed – Labour’s initial proposal was for a Quango to work in tandem with Local Authorities… a recipe for ‘too many cooks’ if ever I’d heard one). That this head should have the same tenure as a judge to “promote independence from government” is also important, not just because of the independence and allegiance issues it gets around, but also because it sends a very clear message.

    The Electoral Register is an important Information Asset and needs to be managed as such. It is not a ‘clerical’ function that can be left to the side when other tasks need to be performed. It is serious work for serious people with serious consequences when it goes wrong.

    Putting its management on a totally independent footing with clear accountability to the Oireachtas and the Electorate rather than in an under-resourced and undervalued section within one of 34 Local Authorities assures an adequate consistency of Governance and a Constancy of Purpose. The risk is that unless this agency is properly funded and resourced it will become a ‘quality department’ function that is all talk and no trousers and will fail to achieve its objectives.

    As much of the proposals seem to be based on (or eerily parallel) analysis and recommendations I was formulating back in 2006, I humbly put myself forward for the position of Head of the National Elections Office 😉

  • Final post and update on IBTS issues

    OK. This is (hopefully) my final post on the IBTS issues. I may post their response to my queries about why I received a letter and why my data was in New York. I may not. So here we go…

    First off, courtesy of a source who enquired about the investigation, the Data Protection Commissioner has finished their investigation and, in the eyes of the DPC, the IBTS seems to have done everything as correctly as they could with regard to managing risk and tending to the security of the data. The issue of why the data was not anonymised seems to be dealt with on the grounds that the fields with personal data could not be isolated in the log files. The DPC finding was that the data provided was not excessive in the circumstances.

    [Update: Here’s a link to the Data Protection Commissioner’s report. ]

    This suggests to me that the log files effectively amounted to long strings of text which would have needed to be parsed to extract given name/family name/telephone number/address details, or else the fields in the log tables are named strangely and unintuitively (not as uncommon as you might think) and the IBTS does not have a mapping of the fields to the data that they contain.

    In either case, parsing software is not that expensive (in the grand scheme of things) and a wide array of data quality tools provide very powerful parsing capabilities at moderate cost. I’m thinking of Informatica’s Data Quality Workbench (a product originally developed in Ireland), Trillium Software’s offerings or the nice tools from Datanomic.

    Many of these tools (or others from similar vendors) can also help identify the type of data in fields so that organisations can identify what information they have where in their systems. “Ah, field x_system_operator_label actually has names in it!… now what?”.

    If the log files effectively contained totally unintelligible data, one would need to ask what the value of it for testing would be, unless the project involved the parsing of this data in some way to make it ‘useable’? As such, one must assume that there was some inherent structure/pattern to the data that information quality tools would be able to interpret.

    Given that, according to the DPC, the NYBC were selected after a public tender process to provide a data extraction tool, this would suggest that there was some structure to the data that could be interpreted. It also (for me) raises the question as to whether any data had been extracted in a structured format from the log files.

    Also, the “the data is secure because we couldn’t figure out where it was in the file so no-one else will” defence is not the strongest plank to stand on. Using any of the tools described above (or similar ones that exist in the open source space, or can be assembled from tools such as Python or TCL/TK or put together in Java) it would be possible to parse out key data from a string of text without a lot of ‘technical’ expertise (OK, if you are ‘home rolling’ a solution using TCL or Python you’d need to be up to speed on techie things, but not that much). Some context data might be needed (such as a list of possible firstnames and a list of lastnames), but that type of data is relatively easy to put together. Of course, it would need to be considered worth the effort, and the laptop itself was probably worth more than the Irish data would be to a New York criminal.
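
    To illustrate just how low the bar is, here is a minimal sketch of the idea (the sample log line, field names and name lists are my own inventions for illustration – I have no knowledge of the actual IBTS log format):

    ```python
    import re

    # Hypothetical context data -- real lists of common first/last names are easy to source.
    FIRST_NAMES = {"daragh", "mary", "john", "aoife"}
    LAST_NAMES = {"murphy", "o'brien", "kelly", "byrne"}

    PHONE_PATTERN = re.compile(r"\b0\d{1,2}[- ]?\d{5,7}\b")  # rough Irish phone-number shape

    def extract_personal_data(log_line: str) -> dict:
        """Pull candidate names and phone numbers out of an otherwise unstructured log line."""
        tokens = re.findall(r"[A-Za-z']+", log_line)
        names = [t for t in tokens if t.lower() in FIRST_NAMES or t.lower() in LAST_NAMES]
        phones = PHONE_PATTERN.findall(log_line)
        return {"names": names, "phones": phones}

    # Invented example line -- not real data.
    print(extract_personal_data("2007-08-01 UPDATE donor=Mary Kelly phone=01 2345678 status=OK"))
    ```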

    The response from the DPC that I’ve seen doesn’t address the question of whether NYBC failed to act in a manner consistent with their duty of care by letting the data out of a controlled environment (it looks like there was a near-blind reliance on the security of the encryption). However, that is more a fault of the NYBC than the IBTS… I suspect more attention will be paid to physical control of data in future. While the EU model contract arrangements regarding encryption are all well and good, sometimes it serves to exceed the minimum standards set.

    The other part of this post relates to the letter template that Fitz kindly offered to put together for visitors here. Fitz lives over at http://tugofwar.spaces.live.com if anyone is interested. I’ve gussied up the text he posted elsewhere on this site into a word doc for download ==> Template Letter.

    Fitz invites people to take this letter as a starting point and edit it as they see fit. My suggestion is to edit it to reflect an accurate statement of your situation. For example, if you haven’t received a letter from the IBTS then just jump to the end and request a copy of your personal data from the IBTS (it will cost you a few quid to get it); if you haven’t phoned their help-line, don’t mention it in the letter, etc. Keep it real to you rather than looking like a totally formulaic letter.

    On a lighter note, a friend of mine has received multiple letters from the Road Safety Authority telling him he’s missed his driving test and will now forfeit his fee. Thing is, he passed his test three years ago. Which begs the question (apart from the question of why they are sending him letters now)… why does the RSA still have his application details, given that data should only be retained for as long as it is required for the stated purpose for which it was collected? And why has the RSA failed to maintain the information accurately (it is wrong in at least one significant way)?

  • IBTS… returning to the scene of the crime

    Some days I wake up feeling like Lt. Columbo. I bound out of bed assured in myself that, throughout the day I’ll be niggled by, or rather niggle others with, ‘just one more question’.

    Today was not one of those days. But you’d be surprised what can happen while going about the morning ablutions. “Over 171,000 (174,618 in total) records sent to New York. Sheesh. That’s a lot. Particularly for a sub-set of the database reflecting records that were updated between 2nd July 2007 and 11th October 2007. That’s a lot of people giving blood or having blood tests, particularly during a short period. The statistics for blood donation in Ireland must be phenomenal. I’m surprised we can drag our anaemic carcasses from the leaba and do anything; thank god for steak sandwiches, breakfast rolls and pints of Guinness!”, I hummed to myself as I scrubbed the dentition and hacked the night’s stubble off the otherwise babysoft and unblemished chin (apologies – read Twenty Major’s book from cover to cover yesterday and the rich prose rubbed off on me).

    “I wonder where I’d get some stats for blood donation in Ireland. If only there was some form of Service or agency that managed these things. Oh.. hang on…, what’s that Internet? Silly me.”

    So I took a look at the IBTS annual report for 2006 to see if there was any evidence of back slapping and awards for our doubtlessly Olympian donation efforts.

    According to the IBTS, “Only 4% of our population are regular donors” (source: Chairperson’s statement on page 3 of the report). Assuming the population in 2006 (pre census data publication) was around 4.5 million (including children), this would suggest a maximum regular donor pool of 180,000. If we take the CSO data breaking out population by age, and make a crude guess on the percentage of 15-24 year olds that are over 18 (we’ll assume 60%), then the pool shrinks further… to around 3.1 million adults, giving a regular donor pool of approximately 124,000.

    Hmm… that’s less than the number of records sent as test data to New York based on a sub-set of the database. But my estimations could be wrong.

    The IBTS Annual Report for 2006 tells us (on page 13) that

    The average age of the donors who gave blood in 2006 was 38 years and 43,678 or 46% of our donors were between the ages of 18 and 35 years.

    OK. So let’s stop piddling around with assumptions based on the 4% of population hypothesis. Here’s a simpler sum to work out… If X = 46% of Y, calculate Y.

    (43,678 / 46) × 100 = 94,952 people giving blood in total in 2006. Oh. That’s even less than the other number. And that’s for a full year. Not a sample date range. That is <56% of the figure quoted by the IBTS. Of course, this may be the number of unique people donating rather than a count of individual instances of donation… if people donated more than once the figure could be higher.
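
    For anyone who wants to check the sums, here they are as a trivial calculation (the population and percentage figures are the rough assumptions stated above, not official statistics):

    ```python
    # Rough estimate 1: 4% of the adult population are regular donors.
    adult_population = 3_100_000            # crude estimate of over-18s, as above
    regular_donor_pool = adult_population * 0.04
    print(round(regular_donor_pool))        # ~124,000

    # Rough estimate 2: 43,678 donors were 46% of all donors in 2006.
    donors_18_to_35 = 43_678
    total_donors_2006 = donors_18_to_35 / 0.46
    print(round(total_donors_2006))         # ~94,952
    ```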

    The explanation may also lie with the fact that transaction data was included in the extract given to the NYBC (and a record of a donation could be a transaction). As a result there may be more than one row of data for each person who had their data sent to New York (unless in 2007 there was a magical doubling of the numbers of people giving blood).

    According to the IBTS press release:

    The transaction files are generated when any modification is made to any record in Progesa and the relevant period was 2nd July 2007 to 11th October 2007 when 171,324 donor records and 3,294 patient blood group records were updated.

    (the emphasis is mine).

    The key element of that sentence is “any modification is made to any record”. Any change. At all. So, the question I would pose now is: what modifications are made to records in Progesa? Are, for example, records of SMS messages sent to the donor pool kept associated with donor records? Are, for example, records of mailings sent to donors kept associated? Is an audit trail of changes to personal data kept? If so, why and for how long? (Data can only be kept for as long as it is needed.) Who has access rights to modify records in the Progesa system? Does any access of personal data create a log record? I know that the act of donating blood is not the primary trigger here… apart from anything else, the numbers just don’t add up.

    It would also suggest that the data was sent in a ‘flat file’ structure with personal data repeated in the file for each row of transaction data.

    How many distinct person records were sent to NYBC in New York? Was it

    • A defined subset of the donors on the Progesa system who have been ‘double counted’ in the headlines due to transaction records being included in the file? …or
    • All donors?
    • Something in between?

    If the IBTS can’t answer that, perhaps they might be able to provide information on the average number of transactions logged per unique identified person in their database during the period July to October 2007?
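
    The distinction between rows and people would be easy to check if you have the extract. Here is a minimal sketch of that check (the file name and the donor_id column are invented for illustration – the real layout of the Progesa extract is unknown to me):

    ```python
    import csv
    from collections import Counter

    # Invented file and column names -- purely illustrative.
    with open("progesa_transaction_extract.csv", newline="") as f:
        donor_ids = [row["donor_id"] for row in csv.DictReader(f)]

    transactions_per_person = Counter(donor_ids)

    print("transaction rows:", len(donor_ids))
    print("distinct people:", len(transactions_per_person))
    print("average transactions per person:",
          round(len(donor_ids) / len(transactions_per_person), 2))
    ```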

    Of course, this brings the question arc back to the simplest question of all… while production transaction records might have been required, why were ‘live’ personal details required for this software development project and why was anonymised or ‘defused’ personal data not used?

    To conclude…
    Poor quality information may have leaked out of the IBTS as regards the total numbers of people affected by this data breach. The volume of records they claim to have sent cannot (at least by me) be reconciled with the statistics for blood donations. They are not even close.

    The happy path news here is that the total number of people could be a lot less. If we assume ‘double dipping’ as a result of more than one modification of a donor record, then the worst case scenario is that almost their entire ‘active’ donor list has been lost. The best case scenario is that a subset of that list has gone walkies. It really does boil down to how many rows of transaction information were included alongside each personal record.

    However, it is clear that, despite how it may have been spun in the media, the persons affected by this are NOT necessarily confined to the pool of people who may have donated blood or had blood tests performed between July 2007 and October 2007. Any modification to data about you in the Progesa system would have created a transaction record. We have no information on what these modifications might entail or how many modifications might have occurred, on average, per person during that period.

    In that context the maximum pool of people potentially affected becomes anyone who has given blood or had blood tests and might have a record on the Progesa system.

    That is the crappy path scenario.

    Reality is probably somewhere in between.

    But, in the final analysis, it should be clear that real personal data should never have been used and providing such data to NYBC was most likely in breach of the IBTS’s own data protection policy.

  • So what did the IBTSB do right?

    In the interests of a bit of balance, and prompted by some considered comment by Owen O’Connor on Simon’s post over on Tuppenceworth, I thought it might be worth focussing for a moment on what the IBTSB did right.

    1. They had a plan that recognised data security as a key concern.
    2. They specified contract terms to deal with how the data was to be handled. (these terms may have been breached by the data going on an unexpected tour of New York)
    3. They made use of encryption to protect the data in transit (there is no guarantee however that the data was in an encrypted state at all times)
    4. They notified promptly and put their hands up rather than ignoring the problem and hoping it would go away. That alone is to be commended.

    So they planned relatively well and responded quickly when shit hit the fan. The big unknown in all of this is whether the data has been compromised. If we assume the happy path, then the individual who took the data home, despite their organisation’s contractual obligation to protect its security, at least kept the data encrypted on the laptop. This may indeed be the case.

    It could also be the case that this person didn’t appreciate the obligations owed and precautions required and, apart from removing the data from a controlled and securable environment, had decrypted the data to have a poke around at it. That is the crappy path.

    Ultimately it is a roll of the dice as to which you put your trust in.

    In previous posts I have asked why production data was being used for a test event and why it had not been anonymised or tweaked to reduce its ability to identify real individuals. In his comment over on Tuppenceworth, Owen O’Connor contends that

    the data being examined was to do with the actual usage and operation of the IBTS system

    If the data that was being examined was log files for database transactions then one might query (no pun intended) why personal identifying data was included. If there was no option but to send sample records (perhaps for replication of transaction events?) then this might actually be in accordance with the IBTSB’s data protection policy. But if the specifics of names etc. were not required for the testing (ie if it was purely transactional processing that was being assessed and not, for example, the operation of parsing or matching algorithms) then they should have and could have been mangled to make them anonymous without affecting the validity of any testing that was being done.

    If a sound reason for using real data exists that is plausible and warrants the level of risk involved then (having conducted similar testing activities myself during my career) I’d be happy that the IBTSB had done pretty much everything they could reasonably have been asked to do to ensure security of the data. The only other option I would possibly have suggested would be remote access to data held on a server in Ireland, which would have certainly meant that no data would have been on a laptop in New York (but latency on broadband connections etc. might have militated against accurate test results perhaps).

    In the Dail, the IBTSB has come in for some stick for their sloppy handling. Owen O’Connor is correct however – the handling of the spin has been quite good and most of the risk planning was what would be expected. If anyone is guilty of sloppy handling it is the NYBC who acted in breach of their agreement (most likely) by letting the data out of the controlled environment of their offices.

    So, to be clear, I feel for the project manager and team in the IBTSB who are in the middle of what is doubtless a difficult situation. But for the grace of god (and a sense of extreme paranoia in the planning stages of developer test events) go I. The response was correct. Get it out in the open and bring in the Data Protection Commissioner as soon as possible. The planning was at least risk-aware. They learned from Nixon (it’s the cover-up that gets you).

    However, if there was not a compelling reason for real data about real people being used in the testing that could not have been addressed with either more time or more money, then I would still contend that the use of the production data was ill-advised and in breach of the IBTSB’s own policies.

  • More thoughts on the IBTS data breach

    One of the joys of having occasional bouts of insomnia is that you can spend hours in the dead of night pondering what might have happened in a particular scenario based on your experience and the experience of others.

    For example, the IBTS has rushed to assure us that the data that was sent to New York was encrypted to the 256-bit AES standard. To a non-technical person that sounds impressive. To a technical person, that sounds slightly impressive.

    However, a file containing 171,000+ records could be somewhat large, depending on how many fields of data it contained and whether that data contained long ‘free text’ fields etc. When data is extracted from a database it is usually dumped to a text file format which has delimiters to identify the fields, such as commas or tab characters, or defined field widths etc.

    When a file is particularly large, it is often compressed before being put on a disc for transfer – a bit like how we all try to compress our clothes in our suitcase when trying to get just one bag on Aer Lingus or Ryanair flights. One of the most common software tools used (in the Microsoft Windows environment) is called WinZip. It compresses files but can also encrypt the archive file so that a password is required to open it. When the file needs to be used, it can be extracted from the archive, so long as you have the password for the compressed file. [WinZip encryption screenshot]
    So, it would not be entirely untrue for the IBTS to say that they had encrypted the data before sending it and it was in an encrypted state on the laptop if all they had done was compressed the file using Winzip and ticked the boxes to apply encryption. And as long as the password wasn’t something obvious or easily guessed (like “secret” or “passw0rd” or “bloodbank”) the data in the compressed file would be relatively secure behind the encryption.

    However, for the data to be used for anything it would need to be uncompressed and would sit, naked and unsecured, on the laptop to be prodded and poked by the application developers as they went about their business. Were this to be the case then, much like the fabled emperor, the IBTS’s story has no clothes. Unencrypted data would have been on the laptop when it was stolen. Your unencrypted, non-anonymised data could have been on the laptop when it was stolen.

    The other scenario is that the actual file itself was encrypted using appropriate software. There are many tools in the market to do this, some free, some not so free. In this scenario, the actual file is encrypted and is not necessarily compressed. To access the file one would need the appropriate ‘key’, either a password or a keycode saved to a memory stick or similar that would let the encryption software know you were the right person to open the file.

    However, once you have the key you can decrypt the file and save an unencrypted copy. If the file was being worked on for development purposes it is possible that an unencrypted copy might have been made. This may have happened contrary to policies and agreements because, sometimes, people try to take shortcuts to get to a goal and do silly things. In that scenario, personal data relating to Irish blood donors could have wound up in an unencrypted state on a laptop that was stolen in New York.
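
    As a rough illustration of that distinction (encrypted at rest versus decrypted so it can be worked on), here is a sketch using Python’s cryptography library. The file names are invented, and Fernet is not the 256-bit AES the IBTS describes, but the principle is the same:

    ```python
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # whoever holds this key can decrypt the file
    cipher = Fernet(key)

    # Encrypt the extract before it travels: in this form the stolen file is just noise.
    with open("donor_extract.csv", "rb") as f:
        ciphertext = cipher.encrypt(f.read())
    with open("donor_extract.enc", "wb") as f:
        f.write(ciphertext)

    # To actually work on the data it must be decrypted again. If the plaintext is
    # written back to disk for convenience, an unencrypted copy now sits on the laptop.
    with open("donor_extract.enc", "rb") as f:
        plaintext = cipher.decrypt(f.read())
    with open("donor_extract_working_copy.csv", "wb") as f:
        f.write(plaintext)
    ```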

    [Update] Having discussed this over the course of the morning with a knowledgeable academic who used to run his own software development company, it seems pretty much inevitable that the data was actually in an unencrypted state on the laptop, unless there was an unusual level of diligence on the part of the New York Blood Centre regarding the handling of data by developers when not in the office.

    The programmer takes data home of an evening/weekend to work on some code without distractions or to beat a deadline. To use the file he/she would need to have decrypted it (unless the software they were testing could access encrypted files… in which case does the development version have ‘hardened’ security itself?). If the file was decrypted to be worked on at home, it is not beyond possibility that the file was left unencrypted on the laptop at the time it was stolen.

    All of which brings me back to a point I made yesterday….

    Why was un-anonymised production data being used for a development/testing activity in contravention to the IBTS’s stated Data Protection policy, Privacy statement and Donor Charter and in breach of section 2 of the Data Protection Act?

    If the data had been fake, the issue of encryption or non-encryption would not be an issue. Fake is fake, and while the theft would be embarrassing it would not have constituted a breach of the Data Protection Act. I notice from Tuppenceworth.ie that the IBTSB were not quick to respond to Simon’s innocent enquiry about why dummy data wasn’t used.

  • Fair use/Specified purpose and the IBTS

    I am a blood donor. I am proud of it. I have provided quite a lot of sensitive personal data to the IBTS over the years that I’ve been donating.

    The specific purposes for which I believed I was providing the information were to allow the IBTS to administer communications with me as a donor (so I know when clinics are on so I can donate), to allow the IBTS to identify me and track my donation patterns, and to alert IBTS staff to any reasons why I cannot donate on a given occasion (donated too recently in the past, I’ve had an illness etc.). I accepted as implied purposes the use of my information for internal reporting and statistical purposes.

    I did not provide the information for the purposes of testing software developed by a 3rd party, particularly when that party is in a foreign country.

    The IBTS’s website (www.ibts.ie) has a privacy policy which relates to data captured through their website. It tells me that

    The IBTS does not collect any personal data about you on this website apart from information which you volunteer (for example by emailing us or by using our on line contact forms). Any information which you provide in this way is not made available to any third parties, and is used by the IBTS only for the purpose for which you provided it.

    So, if any information relating to my donor record was captured via the website, the IBTS is in breach of their own privacy policy. If you register to be a donor using this link… http://www.ibts.ie/register.cfm?mID=2&sID=77 …then that information is covered by their Privacy Policy and you would not be unreasonable in assuming that your data wouldn’t wind up on a laptop in a crackhouse in New York.

    In the IBTS’s Donor Charter, they assure potential Donors that:

    The IBTS guarantees that all personal information about donors is kept in the strictest confidence

    Hmm… so no provision here for production data to be used in testing. Quite the contrary.

    However, it gets even better… in the Donor Information Leaflet on the IBTS’s website, in the Data Protection section (scroll down… it’s right at the bottom), the IBTS tells current and potential donors that (emphasis is mine throughout):

    The IBTS holds donor details, donation details and test results on a secure computerised database. This database is used by the IBTS to communicate with donors and to record their donation details, including all blood sample test results. It is also used for the proper and necessary administration of the IBTS. All the information held is treated with the strictest confidence.

    This information may also be used for research in order to improve our knowledge about the blood donor population, and for clinical audit, to assess and improve the quality of our service. Wherever possible, all such information will be anonymised.

    Right.. so from their policy and their statement of fair use and specified purposes we learn that:

    1. They can use it for communication with donors and for tracking donation details and results of tests (as expected)
    2. They can use it for necessary administration. Which covers internal reporting but, I would argue, not giving it to other organisations to lose on their behalf.
    3. They can use it for research about the blood donor population, auditing clinical practices. This is OK… and expected.
    4. They are also permitted to use the data to “improve the quality of [their] service”. That might cover the use of the data for testing…

    Until you read that last bit… the data would be anonymised wherever possible. That basically means the creation of dummy data as described towards the end of my last post on this topic.

    So, the IBTS did not specify at any time that they would use the information I had provided to them for the purposes of software development by 3rd parties. It did specify a purpose for using the information for the improvement of service quality. But only if it was anonymised.

    Section 2 of the Data Protection Act says that data can only be used by a Data Controller for the specific purposes for which it has been gathered. As the use of un-anonymised personal data for the purposes of software development by agencies based outside of the EU (or in the EU for that matter) was not a specified use, the IBTS is, at this point, in breach of the Data Protection Act. If the data had been anonymised (ie if ‘fictional’ test data had been used or if the identifying elements of the personal data had been muddled up before being transferred) there would likely be no issue.

    • Firstly, the data would have been provided in a manner consistent with the specified use of the data
    • Secondly, there would have been no risk to personal data security as the data on the stolen laptop would not have related to an identifiable person in the real world.

    Of course, that would have cost a few euros to do, so it was probably de-scoped from the project.

    If I get a letter and my data was not anonymised I’ll be raising a specific complaint under Section 2 of the Data Protection Act. If the data was not anonymised (regardless of the security precautions applied) then the IBTS is in breach of their specified purposes for the collection of the data and in breach of the Data Protection Act.

    Billy Hawkes, if you are reading this I’ve just saved your team 3 weeks work.

  • Irish Blood Transfusion Service loses data..

    Why is it that people never learn? Only months after the debacle of HMRC sending millions of records of live confidential data whizzing around in the post on 2 CDs (or DVDs), the Irish Blood Transfusion Service (IBTS) has had 171,000 records of blood tests and blood donors stolen.

    The data was on a laptop (bad enough from a security point of view). The data was (apparently) secured with 256-bit AES encryption (happy days if true). The laptop was taken in a mugging (unfortunate). The mugging took place in New York (WTF!?!?)

    Why was the data in New York?
    It would seem that the IBTS had contracted with the New York Blood Centre (NYBC) for the customisation of some software that the NYBC had developed to better manage information on donors and blood test results. To that end the IBTS gave a copy of ‘live’ (or what we call in the trade ‘production’) data to the NYBC for them to use in developing the customisations.

    So, personal data, which may contain ‘sensitive’ data relating to sexual activity, sexual behaviour, medical conditions etc., was sent to the US. But it was encrypted, we are assured.

    A quick look at the Safe Harbor list of the US Dept of Commerce reveals that the NYBC is not registered as being a ‘Safe Harbor’ for personal data from within the EU. Facebook is however (and we all know how compliant Facebook is with basic rules of data protection).

    Apparently the IBTS relied on provisions of their contract with the NYBC to ensure and assure the security of the data relating to REAL people. As yet no information has come to light regarding whether any audits or checks were performed to ensure that those contractual terms were being complied with or were capable of being complied with.

    How did the data get to New York?
    From the IBTS press release it is clear that the data got to New York in a controlled manner.
    An employee of NYBC took the disc back from Ireland and placed it in secure storage.

    Which is a lot better than sticking two CDs in the post, like the UK Revenue services did not so long ago.

    What about sending the data by email? Hmmm… nope, not secure enough, and the file sizes might be too big. A direct point-to-point FTP between two servers? That would work as well, assuming that the FTP facilities were appropriately secured by firewalls and a healthy sense of paranoia.

    Why was the data needed in New York?
    According to the Irish Times

    The records were in New York, the blood service said, “because we are upgrading the software that we use to analyse our data to provide a better service to donors, patients and the public service”.

    Cool. So the data was needed in New York to let the developers make the necessary modifications to code.

    Nice sound bite. Hangs together well. Sounds reasonable.

    Unfortunately it is total nonsense.

    For the developers to make modifications to an existing application, what was required in New York was

    • A detailed specification of what the modifications needed to be to enable the software to function for Irish datasets and meet Irish requirements. Eg. if the name/address data capture screens needed to change, they should have been specified in a document. If validation routines for zip codes/postcodes needed to be turned off, that should have been specified. If base data/reference data needed to be changed – specify it in a document. Are we seeing a trend here?
    • Definition of the data formats used in Ireland. By this I mean the definition of the formats of data such as “social security number”. We call it a PPSN and it has a format nnnnnnnA, as opposed to the US format which has dashes in the middle. A definition of the data formats that would be used in Ireland and a mapping to/from the US formats would possibly be required… this is (wait for it) another document, NOT THE DATA ITSELF (see the short sketch after this list for the kind of format rule I mean).
    • Some data for testing. Ok, so this is why all 171000+ records were on a laptop in New York. ehh… NO. What was required was a sample data set that replicates the formats and patterns of data found in the IBTS production data. This does not mean a cut of production data. What this means is that the IBTS should have created dummy data that was a replica of production data (warts and all – so if there are 10% of their records that have text values in fields where numbers would be expected, then 10% of the test data should reflect this). The test data should also be tied to specific test cases (experiments to prove or disprove functionality in the software).
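
    By way of illustration, here is the kind of format rule that belongs in a document rather than in a data extract: a rough structural check of the ‘nnnnnnnA’ pattern against the dashed US format. This is a sketch only – it is not the official PPSN validation rule (which involves a check character), and the function name is my own invention:

    ```python
    import re

    # Rough structural checks only -- the real rules belong in the project documentation.
    PPSN_SHAPE = re.compile(r"^\d{7}[A-Z]{1,2}$")      # Irish PPSN: e.g. 1234567A
    US_SSN_SHAPE = re.compile(r"^\d{3}-\d{2}-\d{4}$")  # US SSN: e.g. 123-45-6789

    def national_id_format(value: str) -> str:
        """Classify which national-ID shape a value follows, for mapping purposes."""
        if PPSN_SHAPE.match(value):
            return "PPSN shape (Irish)"
        if US_SSN_SHAPE.match(value):
            return "SSN shape (US)"
        return "unrecognised"

    print(national_id_format("1234567A"))     # PPSN shape (Irish)
    print(national_id_format("123-45-6789")) # SSN shape (US)
    ```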

    At no time was production data needed for development or developer testing activities in New York. What was needed was clear project specification and requirements documentation, documents about data formatting and ‘meta-data’ (data about data), Use Cases (walk-throughs of how the software would be used in a given process – like a movie script) and either a set of dummy sample data that looks and smells like your production data or a ‘recipe’ for how the developer can create that data.

    But the production data would be needed for Acceptance testing by IBTS?
    eh… nope. And even if it was it would not need to be sent to New York for the testing.

    User Acceptance testing is a stage of testing in software development AFTER the developer swears blind that the software works as it should and BEFORE the knowledge workers in your organisation bitch loudly that the software is buggered up beyond all recognition.

    As with all testing, the use of production data is not required, and indeed is often a VERY BAD IDEA (except in certain extreme circumstances such as the need for volume stress testing or testing of very complex software solutions that need data that is exactly like production to be tested effectively… eg. a complex parsing/matching/loading process on a multi-million record database – and even at that, key data not relevant to the specific process being tested ought to be ‘obscured’ to ensure data protection compliance).

    What is required is that your test environment is as close a copy to the reality you are testing for as possible. So, from a test data point of view, creating test data that looks like your production data is the ideal. One way is to do data profiling, develop an understanding of the ‘patterns’ and statistical trends in your data and then hand carve a set of test data that looks and smells like your production data but is totally fake and fraudulent and safe. Another approach is to take a copy of your production data and bugger around with it to mix names and addresses up, replace certain words in address data with different words (e.g. “Park” with “Grove” or “Leitrim” with “Carialmeg” or “@obriend.info” with “obriend.fakedatapeople” – whatever works). So long as the test data is representative of the structure and content of your production data set and can support the test scenarios you wish to perform then you are good to go.
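
    Here is a minimal sketch of that second approach (the file name, column names and word swaps are invented for illustration; a real anonymisation job would be planned against the actual extract layout):

    ```python
    import csv
    import random

    WORD_SWAPS = {"Park": "Grove", "Leitrim": "Carialmeg"}  # swaps of the kind described above

    with open("production_extract.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    # Shuffle the identifying columns independently so no row describes a real person,
    # while the 'shape' of the data (formats, lengths, oddities) is preserved.
    for column in ("first_name", "last_name", "phone"):
        values = [row[column] for row in rows]
        random.shuffle(values)
        for row, value in zip(rows, values):
            row[column] = value

    # Replace recognisable address words and neutralise email domains.
    for row in rows:
        for old, new in WORD_SWAPS.items():
            row["address"] = row["address"].replace(old, new)
        row["email"] = row["email"].split("@")[0] + "@example.invalid"

    with open("test_extract.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)
    ```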

    So, was the production data needed in New York – Nope. Would it be needed for testing in a test event for User Acceptance testing? Nope.

    And who does the ‘User Acceptance testing’? Here’s a hint… what’s the first word? User Acceptance testing is done by representatives of the people who will be using the software. They usually follow test scripts to make sure that specific functionality is tested for, but importantly they can also highlight where things are just wrong.

    So, were there any IBTS ‘users’ (knowledge workers/clerical staff) in New York to support testing? We don’t know. But it sounds like the project was at the software development stage so it is unlikely. So why the heck was production data being used for development tasks?

    So… in conclusion
    The data was stolen in New York. It may or may not have been encrypted (the IBTS has assured the public that the data was encrypted on the laptop… perhaps I am cynical but someone who takes data from a client in another nation home for the weekend might possibly have decrypted the data to make life easier during development). We’re not clear (at this point) how the data got to New York – we’re assuming that an IBTS employee accompanied it to NY stored on physical media (the data, not the employee).

    However, there is no clear reason why PRODUCTION data needed to be in New York. Details of how the IBTS’s current data formats might map to the new system, details of requirements for changes to the NYBC’s current system to meet the needs of the IBTS, details of the data formats in the IBTS’s current data sets (both field structures and, ideally, a ‘profile’ of the structure of the data and any common errors that occur) and the DUMMY data that might be required for design, development and developer testing are all understandable. Production data is not.

    There is no evidence, other than the existence of a contractual arrangement, that the NYBC had sufficient safeguards in place to ensure the safety of personal data from Ireland. The fact that an NYBC employee decided to take the data out of the office into an unsecure environment (downtown New York) and bring it home with them would evidence that, perhaps, there is a cultural and procedural gap in NYBC’s processes that might have meant they either couldn’t comply or didn’t understand what the expectations of the clauses in those contracts actually meant.

    For testing, what is required is a model of production. A model. A fake. A facsimile NOT PRODUCTION. The more accurate your fake is the better. But it doesn’t need to be a carbon copy of your production data with exactly the same ‘data DNA’… indeed it can be a bad idea to test with ‘live’ data. Just like it is often dangerous to play with ‘live’ grenades or grab a ‘live’ power line to see what will happen.

    The loss of our IBTS data in New York evidences a failure of governance, a ‘happy path’ approach to risk planning, and a lack of appreciation of the control of software development projects needed to ensure the protection of live data.

    As this was a project for the development of a software solution there was no compelling reason that I can identify for production data to have been sent from Ireland to New York when dummy data and project documentation would have sufficed.

    The press release from the IBTS about this incident can be found here.

    [Update: Simon over at Tuppenceworth has noted my affiliation to the IAIDQ. Just to clarify, 99% of this post is about basic common sense. 1% is about Information Management/Information Quality Management. And as this post is appearing here and not on the IAIDQ’s website, it goes without saying that my comments here may not match exactly the position of the IAIDQ on this issue. I’m also a member of the ICS, who offer a Data Protection certification course which I suspect will be quite heavily subscribed the next time it runs.]

    [Update 2: This evening RTE News interviewed Dr David Gray from DCU who is somewhat of an expert on IT security. The gist of Dr Gray’s comments were that software controls to encrypt data are all well and good, but you would have to question the wisdom of letting the information wander around a busy city and not having it under tight physical control… which is pretty much the gist of some of my comments below. No one has (as yet) asked why the hell production data rather than ‘dummy’ data was being used during the development phase of a project.]

  • Facebook & Data Protection

    The Younger McGarr (Simon that is) has a very detailed and well written post on the data protection issues that arise (and seemingly are ignored) by Facebook. It can be found over at the McGarr Solicitors website. He has already picked up some complimentary comments, including one from Thomas Otter (who has written on these issues previously). (Surely a reply from Robert Scoble is only a mouse-click away?)

    I’ve been scratching away on some notes for a post on Facebook myself (never one to miss a rolling bandwagon me). Expect more on this soon. (ie as soon as I’ve written the buggering thing).

  • Getting back to my Information Quality agenda

    One or two of the comments (and emails) I received after the previous post here were enquiring about some stuff I’d written previously (2006 into 2007) about the state of the Irish Electoral Register.

    It is timely that some people visited those posts as our Local Elections are coming up in less than 18 months (June 2009) and frankly, unless there is some immense effort going on behind the scenes that I haven’t heard of, the Register is still in a poor state.

    The issue isn’t the Register per se but the processes that surround it, and the apparent lack of a culture where the leadership take the quality of this information seriously enough to make the necessary changes to address the cultural, political and process problems that have resulted in it being buggered.

    There are a few consolidating posts knocking around on this blog as I’ve pulled things together before. However a quick search for “Electoral Register” will pull all the posts I’ve done on this together. (If you’ve clicked the link all the articles are presented below).

    I’ve also got a presentation on the subject over at the IQNetwork website, and I did a report (which did go to John Gormley’s predecessor) which can be found here, and I wrote a Scrap and Rework article that I submitted to various Irish newspapers at the time to no avail, but which has been published internationally (in print and on-line).

    At this stage, I sense that as it doesn’t involve mercury-filled CFLs or carbon taxes, the state of the electoral register and the legislative framework that surrounds it (a lot of the process issues require legislative changes to address them) has slipped down the Minister’s list of priorities.

    However, with Local Elections looming it is important that this issue be addressed.

  • Information Quality in 2008…

    So yet another year draws to a close. Usually around this time of year I try to take a few hours to review how things went, what worked and what still needs to be worked on in the coming year. In most cases that is very personal appraisal of whether I had a ‘quality’ year – did I meet or exceed my own expectations of myself (and I’m a bugger for trying to achieve too much too quickly).

    Vincent McBurney’s Blog Carnival of Data Quality has invited submissions on the theme “Happy New Year”, so I thought I’d take a look back over 2007 and see what emerging trends or movements might lead to a Happy New Year for Information Quality people in 2008.

    Hitting Mainstream
    In 2007 Information Quality issues began to hit the mainstream. It isn’t quite there yet, but 2007 saw the introduction of taught Master’s degree programmes in Information Quality at the University of Arkansas at Little Rock, and there have been similar developments mooted in at least one European university. If educators think they can run viable courses that will make money then we are moving out of the niche towards being seen as a mainstream discipline of importance to business.

    The IAIDQ’s IDQ Conference in Las Vegas was a significant success, with numbers up on 2006 and a wider mix of attendees. I did an unofficial straw poll of people at that conference and the consensus from the delegates and other speakers was that there were more ‘Business’ people at the conference than at previous Information Quality conferences they’d attended, a trend that has been growing in recent years. The same was true at the European Data Management and Information Quality Conference(s) in London in November. Numbers were up on previous years. There were more ‘Business’ people in the mix, up even on last year – this of course is all based on my unofficial straw poll and could be wrong.

    The fact that news stories abounded in 2007 about poor quality information, and that the initial short sharp shock of Compliance and SOx etc. has started to give rise to questions of how to make Compliance a value-adding function (hint – it’s the INFORMATION, people), may help. But the influence of bloggers such as Vincent, and the adoption of blogs as communications tools by vendors and by professional associations such as the IAIDQ, is probably as big an influence, if not bigger, IMHO.

    Also, and I’m not sure if this is a valid benchmark, I’ve started turning down offers to present at conferences and write articles for people on IQ issues, because a) I’m too busy with my day job and with the IAIDQ (oh yeah… and with my family) and b) there are more opportunities arising than I’d ever have time to take on.

    Unfortunately, much of the ‘mainstream’ coverage of Information Quality issues either views it as a ‘technology issue’ (most of my articles in Irish trade magazines are stuck in the ‘Technology’ section) or fails to engage with the Information Quality aspects of the story fully. The objective of IQTrainwrecks.com is to try to highlight the Information Quality aspects of things that get into the media.

    What would make 2008 a Happy Year for me would be to have more people contributing to IQ Trainwrecks, to have some happy path stories to tell, and for there to be better analysis of these issues in the media.

    Community Building
    There is a strong sense of ‘community’ building amongst many of the IQ practitioners I speak with. That has been one of the key goals of the IAIDQ in 2007 – to try and get that sense of Community triggered, to link like-minded people and help them learn from each other. This has started to come together. However it isn’t happening as quickly as I’d like, because I have a shopping list of things I want yesterday!

    What would make 2008 a happy new year for me would be for us to maintain the momentum we’ve developed in connecting the Community of Information/Data Quality professionals and researchers. Within the IAIDQ I’d like us to get better at building those connections (we’ve become good… we need to keep improving).

    I’d like to see more people making contact via blogs like Vincent’s or mine or through other social networking facilities so we can build the Community of Like Minded people all focussing on the importance of Information Quality and sharing skills, tips, tools, tricks and know-how about how to make it better. I’d be really happy if, at the end of 2008, a few more people have made the transition from thinking they are the ‘lonely voice’ in their organisation to realising they are part of a very large choir that is singing an important tune.

    Role Models for Success
    2007 saw a few role models for success in Information Quality execution emerging. All of these had similar stories and similar elements that made up their winning plan. It made a change from previous years when people seemed afraid to share – perhaps because it is so sensitive a subject (for example admitting you have an IQ problem could amount to self-incrimination in some industries)? In the absence of these sort of ‘role models’ it is difficult to sell the message of data quality as it can come across as theoretical.

    I’d be very happy at the end of 2008 if we had a few more role models of successful application of principles and tools – not presented by vendors (no offence to vendors) but emerging from within the organisations themselves. I’d be very happy if we had some of these success stories analysed to highlight the common Key Success Factors that they share.

    Break down barriers
    2007 saw a lot of bridges being built within the Information Quality Community. 2006 ended with a veritable bloodbath of mergers and acquisitions amongst software vendors. 2007 saw the development of networks and mutual support between the IAIDQ (as the leading professional organisation for IQ/DQ professionals) and MIT’s IQ Programme. In many businesses the barriers that have prevented the IQ agenda from being pursued are also being overcome for a variety of reasons.

    2008 should be the year to capitalise on this as we near a significant tipping point. I’d like to see 2008 being the year where organisations realise that they need to push past the politics of Information Quality to actually tackle the root causes. Tom Redman is right – the politics of this stuff can be brutal, because to solve the problems you need to change thinking and remould governance, all of which is a dangerous threat to traditional power bases. The traditional divide between “Business” and “IT” is increasingly anachronistic, particularly when we are dealing with information/data within systems. If we can make that conceptual leap in 2008, to the point where everyone is inside the same tent peeing out… that would be a good year.

    Respect
    For most of my professional life I’ve been the crazy man in the corner telling everyone there was an elephant in the room that no-one else seemed able to see. It was a challenge to get the issues taken seriously. Even now I have one or two managers I deal with who still don’t get it. However most others I deal with do get it. They just need to be told what they have. 2007 seems to be the year that the lights started to go on about the importance of the Information Asset. Up to now, people spoke about it but didn’t ‘feel’ it… but now I don’t have trouble getting my Dept Head to think in terms of root causes, information flows etc.

    2008 is the year of Respect for the IQ Practitioner…. A Happy New Year for me would be to finish 2008 with appropriate credibility and respect for the profession. Having role models to point to will help, but also having certification and accreditation so people can define their skillsets as ‘Information Quality’ skill sets (and so chancers and snake-oil peddlers can be weeded out).

    Conclusion
    2007 saw discussion of Information Quality start to hit the mainstream and the level of interest in the field is growing significantly. For 2008 to be a Happy New Year we need to build on this, develop our Community of practitioners and researchers and then work to break down barriers within our organisations that are preventing the resolution of problems with information quality. If, as a community of Information/Data Quality people we can achieve that (and the IAIDQ is dedicated to that mission) and in doing so raise our standards and achieve serious credibility as a key management function in organisations and as a professional discipline then 2008 will have been a very Happy New Year.

    2008 already has its first Information Quality problem though…. looks like we’ve got a bit of work to do to make it a Happy New Year.