Category: IAIDQ

  • A game changer – Ferguson v British Gas

    Back in April I wrote an article for the IAIDQ’s Quarterly Member Newsletter picking up on my niche theme, Common Law liability for poor quality information – in other words, the likelihood that poor quality information and poor quality information management practices will result in your organisation (or you personally) being sued.

    I’ve written and presented on this theme many times over the past few years and it always struck me how people started off being in the “that’s too theoretical” camp but by the time I (and occasionally my speaking/writing partner on this stuff, Mr Fergal Crehan) had finished people were all but phoning their company lawyers to have a chat.

    To an extent, I have to admit that in the early days much of this was theoretical, taking precedents from other areas of law and trying to figure out how they fit together in an Information Quality context. However, in January 2009 a case was heard in the Court of Appeal in England and Wales which has significant implications for the Information Quality profession and which has had almost no coverage (other than coverage via the IAIDQ and myself). My legal colleagues describe it as “ground breaking” for the profession because of the simple legal principle it creates regarding complex and silo’d computing environments and the impact of disparate and plain crummy data. I see it as a clear rallying cry that makes it crystal clear that poor information quality will get you sued.

    Recent reports (here and here) and anecdotal evidence suggest that in the current economic climate, the risk to companies of litigation is increasing. Simply put, the issues that might have been brushed aside or resolved amicably in the past are now life and death issues, at least in the commercial sense. As a result there is now a trend to “lawyer up” at the first sign of trouble. This trend is likely to accelerate in the context of issues involving information, and I suspect, particularly in financial services.

    A recent article in the Commercial Litigation Journal (Frisby & Morrison, 2008) supports this supposition. In that article, the authors conclude:

    “History has shown that during previous downturns in market conditions, litigation has been a source of increased activity in law firms as businesses fight to hold onto what they have or utilise it as a cashflow tool to avoid paying money out.”

    The Case that (should have) shook the Information Quality world

    The case of Ferguson v British Gas was started by Ms. Ferguson, a former customer of British Gas who had transferred to a new supplier but to whom British Gas continued to send invoices and letters with threats to cut off her supply, start legal proceedings, and report her to credit rating agencies.

    Ms Ferguson complained and received assurances that this would stop but the correspondence continued. Ms Ferguson then sued British Gas for harassment.

    Among the defences put forward by British Gas were the arguments that:

    (a) correspondence generated by automated systems did not amount to harassment, and (b) for the conduct to amount to harassment, Ms Ferguson would have to show that the company had “actual knowledge” that its behaviour was harassment.

    The Court of Appeal dismissed both these arguments. Lord Justice Breen, one of the judges on the panel for this appeal, ruled that:

    “It is clear from this case that a corporation, large or small, can be responsible for harassment and can’t rely on the argument that there is no ‘controlling mind’ in the company and that the left hand didn’t know what the right hand was doing.”

    Lord Justice Jacob, in delivering the ruling of the Court, dismissed the automated systems argument by saying:

    “[British Gas] also made the point that the correspondence was computer generated and so, for some reason which I do not really follow, Ms. Ferguson should not have taken it as seriously as if it had come from an individual. But real people are responsible for programming and entering material into the computer. It is British Gas’s system which, at the very least, allowed the impugned conduct to happen.”

    So what does this mean?

    In this ruling, the Court of Appeal for England and Wales has effectively dismissed a ‘silo’ view of the organization when a company is being sued. The courts will attribute to the company the full knowledge it ought to have had if the left hand knew what the right hand was doing. Any future defence grounded on the siloed nature of organizations will likely fail. If a company will not break down barriers to ensure that its conduct meets the reasonable expectations of its customers, the courts will do so on its behalf.

    Secondly, the Court clearly had little time or patience for the argument that correspondence generated by a computer was any less weighty or worrisome than a letter written by a human being. Lord Justice Jacob’s statement places the emphasis on the people who program the computer and the people who enter the information. The faulty ‘system’ he refers to includes more than just the computer system; arguably, it also encompasses the human factors in the systemic management of the core processes of British Gas.

    Thirdly, the Court noted that perfectly good and inexpensive avenues to remedy in this type of case exist through the UK’s Trading Standards regulations. Thus from a risk management perspective, the probability of a company being prosecuted for this type of error will increase.

    British Gas settled with Ms Ferguson for an undisclosed amount and was ordered to pay her costs.

    What does it mean from an Information Quality perspective?

    From an Information Quality perspective, this case clearly shows the legal risks that arise from (a) disconnected and siloed systems, and (b) inconsistencies between the facts about real world entities that are contained in these systems.

    It would appear that the debt recovery systems in British Gas were not updated with correct customer account balances (amongst other potential issues).

    Ms. Ferguson was told repeatedly by one part of British Gas that the situation was resolved, while another part of British Gas rolled forward with threats of litigation. The root cause here would appear to be an incomplete or inaccurate record or a failure of British Gas’ systems. The Court’s judgment implies that poor quality data isn’t a defence against litigation.

    Likewise significant is the ruling’s emphasis on the importance of people in the management of information: programming computers (which can be interpreted to include the IT tasks involved in designing and developing systems) and inputting data (which can be interpreted as defining the data that the business uses, and managing the processes that create, maintain, and apply that data).

    Clearly, an effective information quality strategy and culture, implemented through people and systems, could have avoided the customer service disaster and litigation that this case represents. The court held the company accountable for not breaking down barriers between departments and systems so that the left hand of the organization knows what the right hand is doing.

    Furthermore, it is now more important than ever that companies ensure the accuracy of information about customers, their accounts, and their relationship with the company, as well as ensuring the consistency of that information between systems. The severity of impact of the risk is relatively high (reputational loss, cost of investigations, cost of refunds) and the likelihood of occurrence is also higher in today’s economic climate.

    Given the importance of information in modern businesses, and the likelihood of increased litigation during a recession, it is inevitable: poor quality information will get you sued.

  • The Risk of Poor Information Quality #nama

    I thought it timely to add an Information Quality perspective to the debate and discussion on NAMA. So, for tweeters the hashtag is #NAMAInfoQuality.

    The title of this post (less the hashtag) is, coincidentally, the title of a set of paired conferences I’m helping to organise in Dublin and Cardiff in a little over a week.

    It is a timely topic given the contribution that poor quality information played in the sub-prime mortgage collapse in the US. While a degree of ‘magical thinking’ is also to blame (“what, I can just say I’m a CEO with €1million and you’ll take my word for it?”), ultimately the risks that poor quality information posed to downstream processes and decisions were not effectively managed, even if they were actually recognised.

    Listening to the NAMA (twitter hash-tag #nama) debate on-line yesterday (and following it on the excellent liveblog.ie) I couldn’t help but think about the “Happy Path” thinking that seems to be prevailing and how similar it is to the Happy Path thinking that pervaded the CRM goldrush of the late 1990s and early 2000s, and the ERP and MDM bandwagons that have trundled through a little place I call “ProjectsVille” in the intervening years.

    (A note to people checking the Wikipedia links above: Wikipedia, in its wisdom, seems to class CRM, ERP and MDM as “IT” issues. That’s bullshit frankly, and doesn’t reflect the key lessons learned from painful failures over the years in many companies around the world. While there is an IT component to implementing solutions and executing projects, these are all fundamentally part of core business strategy and are a business challenge.)

    But I digress….

    Basically, at the heart of every CRM project, ERP project or MDM project is the need to create a “Single View of Something”, be it this bizarre creature called a “Customer” (they are like Yeti.. we all believe they exist but no-one can precisely describe or define them), or “Widget” or other things that the Business needs to know about to, well… run the business and survive.

    This involves taking data from multiple sources and combining them together in a single repository of facts. So if you have 999 separate Access databases and 45,000 spreadsheets with customer data in them and data about what products your customers have bought, ideally you want to be boiling them down to one database of customers and one database of products, with links between them that tell you that Customer 456 has bought 45,000 of Widget X in the last 6 months, likes to be phoned after 4:30pm on Thursdays and prefers to be called ‘Dave’ instead of “Mr Rodgers”, oh… and they haven’t got around to paying you for 40,000 of those widgets yet.
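    A minimal sketch of that consolidation step, in Python. Everything here is hypothetical (the records, field names, and the deliberately naive "first non-empty value wins" survivorship rule) – real single-view projects use far more careful matching and survivorship logic.

```python
# Fold customer fragments from several hypothetical sources into one
# master record per customer, then link purchases by customer id.
from collections import defaultdict

# Fragments as they might arrive from separate Access databases and
# spreadsheets (all field names and values are invented).
sources = [
    {"customer_id": 456, "name": "Mr Rodgers", "preferred_name": None},
    {"customer_id": 456, "name": "Dave Rodgers", "preferred_name": "Dave"},
]
purchases = [
    {"customer_id": 456, "product": "Widget X", "qty": 45000, "paid": False},
]

def consolidate(records):
    """Merge source records per customer, keeping the first non-empty
    value seen for each field (a deliberately naive survivorship rule)."""
    merged = defaultdict(dict)
    for rec in records:
        master = merged[rec["customer_id"]]
        for field, value in rec.items():
            if master.get(field) in (None, "") and value not in (None, ""):
                master[field] = value
    return dict(merged)

customers = consolidate(sources)
for p in purchases:  # link purchases to the single customer record
    customers[p["customer_id"]].setdefault("purchases", []).append(p)

print(customers[456]["preferred_name"])  # Dave
```

    Even in this toy version, the survivorship rule is a business decision (which source do you trust for which field?), which is exactly why these are business projects rather than “IT” projects.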

    (This is the kind of thing that Damien Mulley referred to recently as a “Golden Database”.)

    NAMA proposes to basically take the facts that are known about a load of loans from multiple lenders, put them all together in a “Single View of Abyss” (they’d probably call it something else) and from that easily and accurately identify under-performing and nonperforming loans and put the State in the position where it can ultimately take the assets on which loans were secured or for which loans were acquired if the loans aren’t being repaid.

    Ignoring the economists’ arguments about the merits and risks of this approach, this sounds very much like a classic CRM/MDM problem where you have lots of source data sets and want to boil them down to three basic sets of facts, in this case:

    • Property or other assets affected by loans (either used as security or purchased using loans)

    • People or companies who borrowed those monies

    • Information about the performance of those loans.
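    Those three fact sets suggest a small relational model: a loan references a borrower and the assets securing it, and performance data hangs off the loan. A sketch of that shape (the class and field names are illustrative assumptions, not NAMA’s actual design):

```python
# Toy relational model of the three fact sets: assets, borrowers,
# and loan performance, with loans tying them together.
from dataclasses import dataclass

@dataclass
class Borrower:
    borrower_id: int
    name: str

@dataclass
class Asset:
    asset_id: int
    description: str

@dataclass
class Loan:
    loan_id: int
    borrower_id: int
    secured_on: list   # asset_ids used as security or bought with the loan
    performing: bool

borrowers = {10: Borrower(10, "Developer X")}
loans = [
    Loan(1, 10, secured_on=[100, 101], performing=False),
    Loan(2, 10, secured_on=[102], performing=True),
]

# "What loans does Developer X (borrower 10) have, and which are
# non-performing?"
developer_loans = [l for l in loans if l.borrower_id == 10]
non_performing = [l.loan_id for l in developer_loans if not l.performing]
print(non_performing)  # [1]
```

    The model is trivial; the hard part, as the rest of this post argues, is whether the ids and facts that populate it are consistent across the source systems in the first place.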

    Ideally then you should be able to ask the magic computermebob to tell you exactly what loans Developer X has, and what assets are those loans secured on. Somewhere in that process there is some magic that happens that turns crud into gold and the Irish taxpayer comes out a winner (at least that’s the impression I’m getting).

    This is Happy Path.

    The Crappy Path

    Some statistics now to give you an insight into just how crappy the crappy path can be.

    • Various published studies have found that over 70% of CRM implementations failed to deliver on the promised “Single View of Customer”.

    • In 2007 Bloor Research found that 84% of all ERP data migrations fail (either run over time, over budget or fail to integrate all the data) because of problems with the quality of the data.

    • As recently as last month, Gartner Group reported that 75% of CFOs surveyed felt that poor quality information was a direct impediment to achieving business goals.

    • A study by IBM found that the average “knowledge worker” can spend up to 30% of their time rechecking information and correcting errors.

    Translating this to NAMA’s potential Information Management Challenge:

    1. The probability of the information meeting expectations is about the same as the discount that has been applied on the loans (30%).
    2. The probability of the migration and consolidation of information happening on time, on budget and to the level of quality required is slightly better than the forecast growth rate in property prices once the economy recovers (16% versus 10%).
    3. Around 30% of the time of staff in NAMA will likely be spent checking errors, seeking information, correcting and clarifying facts etc.

    There is a whole lot more to this than just taking loans and pressing a button on a money machine for the banks.

    Ultimately the loans are described in the abstract by Information, the assets which were used as security or which were purchased with those loans are defined by data, and the people and businesses servicing those loans (or not as the case may be) are represented by facts and attributes like “Firstname/LastName” and “Company Registration Number”. Much as we taxpayers might like it, Liam Carroll will not be locked in a dungeon in the basement of Treasury Buildings while NAMA operates. However, the facts and attributes that describe the commercial entity “Liam Carroll” and the businesses he operated will be stored in a database (which could very well be in…)

    This ultimate reliance on ephemeral information brings with it some significant risks across a number of areas, all of which could signpost the detour from Happy Path to Crappy Path.

    Rather than bore readers with a detailed thesis on the types of problems that might occur (I’ve written it and it runs to many words), I’ve decided to run a little series over the next few days, drawing on some of the topics I and other speakers will be covering at the IAIDQ/ICS IQ Network Conference on the 28th of September.

    Examples of problems that might occur (Part 1)

    Address Data (also known as “Postcode postcode wherefore art thou postcode?”)

    Ireland is one of the few countries that lacks a postcode system. This means that postal addresses in Ireland are, for want of a better expression, fuzzy.

    Take for example one townland in Wexford called Murrintown. Only it’s not. It has been Murrintown for centuries as far as the locals are concerned, but according to the Ordnance Survey and the Place Names Commission the locals don’t know how to spell: all the road signs have “Murntown”.

    Yes, An Post has the *koff* lovely */koff* Geodirectory system, which is the nearest thing to an address standard database we have in Ireland. Of course, it is designed and populated to support the delivery of letter post. As a result, many towns and villages have been transposed around the country, as their “Town” from a postal perspective is actually their nearest main sorting office.

    Ballyhaunis in County Mayo is famously logged in Geodirectory as being in Co. Roscommon. This results in property being occasionally misfiled.

    There are also occasional typographical errors and transcription errors in data. For example, some genius put an accented character into the name of the development I live in in Wexford, which means that Google Maps, satnavs and other cleverness can’t find my address unless I actually screw it up on purpose.
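    The accented-character problem is easy to demonstrate: an exact string match fails, while Unicode normalisation plus accent-stripping recovers a comparable form. The placename below is invented for the example; this is a sketch of one common workaround, not how any particular satnav actually works.

```python
# Show how a stray accent defeats exact-match address lookup, and how
# NFD normalisation plus dropping combining marks makes the two forms
# comparable again.
import unicodedata

def strip_accents(text: str) -> str:
    """Decompose accented characters (NFD) and drop the combining marks."""
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

stored = "Cluain Mhuiré"   # as (mis)entered in the reference data
queried = "Cluain Mhuire"  # as a satnav user would type it

print(stored == queried)                                # False
print(strip_accents(stored) == strip_accents(queried))  # True
```

    Of course, folding the accent away loses information too, which is why fixing the record at source beats patching around it at query time.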

    Of course, one might assume that the addresses given in the title deeds to properties would be accurate and correct (and, for the most part, they are I believe). However there is still the issue of transcription errors and mis-reading of handwriting on deeds which can introduce small and insidious errors.

    It is an interesting fact that the Land Registry has moved to computerised registers in recent years but the Property Registration Authority trusts still to the trusty quill and only recently moved to put the forms for registering property deeds on-line. Please let me know what you think of the layout of their web form.

    I am Spartacus (No, I am Spartacus. No I’m Brian Spartacus).

    Identity is a somewhat fluid thing. When seeking to build their consolidated view of borrowings, NAMA will need to create a “single view of borrower”. This will require them to match names of companies to create a single view of businesses who have borrowed (and then that will likely need to have some input from the CRO to flag where such companies have a change in status such as being wound up or bought).

    The process will also likely need to have a Single View of Borrower down to the level of a person a) because some loans may be out to people and b) because the link between some borrowings and various companies who would have borrowed will likely turn out to be an individual.

    Now. Are these people the same:

    • Daragh O Brien

    • Dara O’Brien

    • Daire O Brian

    • Daragh Ó Briain

    • Dara Ó Briain

    • Darach O Brien

    • D Patrick O Brien

    • Pat O’Brien

    The answer is that they are all the same person. They are variations in the spelling of my name, one possible variation in use of my middle name, and the last one is what a basketball coach I had decades ago called me because he couldn’t pronounce Daragh.

    However, the process of matching personal data is very complex and requires great care be taken, particularly given the implications under the Data Protection Act of making an error.

    The challenge NAMA potentially faces is identifying whether Joe Bloggs in Bank A is Joseph Bloggs in Bank B or J.P. Bloggs in Bank C, or both, or neither. Recommended practice is to have name plus at least two other ‘facts’ to feed your matching processes. Even then, the process inevitably requires human review.
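    A sketch of that “name plus at least two other facts” rule: fuzzy-compare the names, then only accept a match when at least two supporting facts also agree. The records, thresholds and field names are illustrative assumptions, and real record-linkage tools are considerably more sophisticated than stdlib string similarity.

```python
# Fuzzy name matching with corroboration: a candidate pair is only a
# match if the names are similar AND at least two other facts agree.
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Crude case-insensitive similarity ratio between two names."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def likely_same_person(rec_a: dict, rec_b: dict,
                       name_threshold: float = 0.8) -> bool:
    if name_similarity(rec_a["name"], rec_b["name"]) < name_threshold:
        return False
    supporting = ["dob", "address", "phone"]
    agreeing = sum(
        1 for f in supporting
        if rec_a.get(f) and rec_a.get(f) == rec_b.get(f)
    )
    return agreeing >= 2  # name alone is never enough

bank_a = {"name": "Joe Bloggs", "dob": "1970-01-01",
          "address": "1 Main St", "phone": "086-000"}
bank_b = {"name": "Joseph Bloggs", "dob": "1970-01-01",
          "address": "1 Main St", "phone": None}

print(likely_same_person(bank_a, bank_b))  # True
```

    Note that even this toy version surfaces the hard cases: near-threshold name scores and partially agreeing facts are precisely the pairs that end up in the human review queue.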

    However, the problem is relatively solvable if you invest in the right tools and are willing to invest in the people necessary to use the tools and (unfortunately) take care of the manual review and approval that would be required.

    A related risk is not having the customer’s name correct. Simply put, where that happens the lender or controller of the loans effectively hands the borrower a “get out of jail” card, as they can simply claim that the named person is not them. Courts are pedantically persnickety about the accuracy of information in these types of matters.

    A corollary of this is where the lender or controller of the loans (in this case NAMA) starts chasing the wrong person for the payment of a loan due to a mismatch of data about people. Here, the aggrieved party could ultimately sue the controller of the loans for libel if they publish to anyone the allegation that the person owed money and wasn’t paying it back.

    While each of these is a risk that the banks individually manage on their own at the moment, the simple fact of pulling all this information together under NAMA increases the risk factor. Each bank may individually have had problems with errors and mismatching, but resolved them quietly and locally. The root causes of those errors may not be addressed, and may not be possible to address, in a bulk data migration. Therefore, the data may get muddled again, leading to the issues outlined above.

    Conclusion (for Part 1)

    NAMA will not be managing physical loans or physical assets. What NAMA will work with is the information about those things, the facts and figures that describe the attributes of those things in the abstract.

    To assume that individually Irish banks have levels of data quality that are as pure as the driven snow is naive. To assume that you can take a load of crummy data from multiple sources and mix it together to create a new “Single View” of anything, without first understanding the quality of that information and without steps in place to manage and mitigate the risks posed by non-quality information, is to risk failure rates of between 70% and 84%.

    To put it another way, there is only between a 16% and 30% chance that NAMA will be able to deliver the kind of robust management of all loans out to individual property developers and property development companies that would be necessary to ensure that the risk to the taxpayer is properly managed.

    The key to managing that risk of failure will be discussed in my next post and at the IAIDQ/ICS IQ Network conference on the 28th of September.

  • IAIDQ Festival del IDQ Bloggers – Episode #2

    Right – I’m opening with an apology. This should have gone out hours ago but it’s a Bank Holiday in Ireland, the sun is (uncharacteristically) shining so I took off to the beach with my wife and lost track of time… but better late than never.

    As some of you may know, I’m a member of the IAIDQ, an international not-for-profit dedicated to developing the profession of Information Quality Management (a profession that spans both business and IT, and a host of professional disciplines from Compliance to Risk Management, to Legal, to Marketing, to Sales/CRM… Basically, if you need good quality information to succeed in a role, you need good quality information quality management).

    This year the IAIDQ is 5 years old and is having a series of rolling celebrations, the Blog Carnival “Festival del IDQ Bloggers” being one of the strands of those celebrations. I’m honoured to be counted among the cadre of IDQ Bloggers (people who blog about Information Quality issues) and take immense pride in presenting to you, dear reader, the Roll of Honour for IDQ Bloggers from May 2009.

    Entry #1 Steve Sarsfield

    Steve Sarsfield of the Data Governance and Data Quality Insider kicks off with this great post about Data Quality/Data Governance as a Movie. In it, he casts the “heroes” of the Data Governance/Data Quality profession battling (à la Neo or John McClane) to eliminate the “bad guys” of poor quality information and sloppy or ineffective data governance.

    Personally, I’d have added Kelly’s Heroes to the mix here, but then those of you who know me would say that I’d try and add Kelly’s Heroes to anything.

    Steve Sarsfield is a data quality evangelist and author of the book the Data Governance Imperative.  His blog covers the world of data integration, data governance, and data quality from the perspective of an industry insider.

    Entry #2: Bob Lambert

    In this thought-provoking post, Bob Lambert shares his insights into why Project Sponsors aren’t blind, they just need glasses. In it, he highlights an all too common problem in poorly aligned IT projects and ‘re-engineering’ efforts, where the project hits a “speed bump” of poor quality information and missed data integration requirements which leads to an inevitable project failure. Bob argues that the project team should be given the mandate to have a checkpoint for the Project Sponsors to reality-test the project costs and business case before blindly tilting at windmills trying to make the project work.

    This one should be mandatory reading for anyone working in an IT/Business interface role who is staring down the barrel of a “rationalisation” programme or a “next generation business/systems architecture” programme. 

    Bob Lambert is an IT professional interested in information management, business analysis, databases, and projects, and how IT and business get together to plan, build, and maintain business value. His blog at RobertLambert.net is about “aligned IT”: IT integrated with business to create business value, implying on-time, on-budget projects that meet their goals and motivated professionals working together to solve problems.

    Entry #3 Jim Harris

    Jim “the Gentleman” Harris returns this month with yet another amusing and thought provoking post on how the path to poor quality data is often paved with good intentions. In his post “The Nine Circles of Data Quality Hell“, Jim collates a number of factors (explored in earlier posts on his blog) which can lead to the Hell of Poor Quality data.

    While a few commenters on Jim’s blog have suggested a few more, I think Jim has done a very admirable job documenting the common pitfalls that leave poor data quality managers everywhere facing yet another day pushing boulders up hills.

    Jim Harris is an independent consultant, speaker, writer and blogger with over 15 years of professional services and application development experience in data quality. His blog, OCDQBlog.com is an independent blog offering a vendor-neutral perspective on data quality.

    Entry #4 William Sharp

    Entry number four comes from the “new kid” on the Information Quality blogging block, William Sharp. His post “Begin at the End – Ensuring Data Quality Success” elegantly sums up one of the challenges in developing, presenting, and implementing information quality improvement – the Value Proposition. William very nicely spells out the need to link your data quality project to clear business objectives in order to sell the value because, unlike ‘traditional’ IT projects, the impact of an information quality project is not as immediately apparent.

    A great post from a promising new arrival to the Community.

    William’s blog, the “DQ Chronicle”, is an attempt to capture the opportunities and challenges that exist as part of the various data quality initiatives encountered in the enterprise environment. He tries to keep the topics in as easy-to-digest and direct a format as possible, side-stepping profound pronouncements on Information Quality theory in favour of more direct content aimed at newcomers to the profession and people wanting to learn more.

    William is a skilled business professional with 12 years’ experience in client partnering. He is based in the US.

    Entry #5 Tuppenceworth.ie

    Tuppenceworth.ie is one of the leading blogs in the Irish Blogging community. Earlier this month they ran a post about poor quality information in one of the leading Irish banks and its impact on customers – a touching “real world” story of a real customer impact (I blogged about it myself and it was picked up by IQTrainwrecks.com).

    Read the post here

    Founded in 2001, initially as a static HTML site before morphing into its current blog format in recent years, Tuppenceworth.ie has become a noted fixture in the Irish blogging community. Members of its writing team have featured on Irish media discussing blogs, blogging and bloggers (amongst other things). With themes ranging across media, arts, culture, politics and legal issues, Tuppenceworth is an eclectic read.

    Tuppenceworth.ie is the brainchild of Simon McGarr and Fergal Crehan, with frequent guest contributions.

    Entry #6 IQTrainwrecks.com 

    IQTrainwrecks.com posted a story in May about a banking error by a bank in New Zealand which left a young couple with a massive overdraft facility, which they proceeded to drain before absconding. What IQTrainwrecks pointed out, and the mainstream media missed, was that this was not the first time this particular bank had made an error of this kind.

    Read: Antipodean Bankers Sheepish over Overdraft Bungle (again)

    Since 2006, IQTrainwrecks.com, which is a community blog provided and administered by the International Association for Information and Data Quality (IAIDQ), has been serving up regular doses of information quality disasters from around the world.

    Entry #7 The DoBlog.

    Despite having a busy month in work, I found time to put one post up that was inspired by the Tuppenceworth post.

    In “Software Quality, Information Quality, and Customer Service”  I let a picture from a recent Dilbert strip do the talking for me (eventually). 

    Perhaps if the Pointy Haired Boss had someone explaining the value of Information to his objectives (à la William’s post), and if the project team had the mandate to cry “Halt” when things stopped making sense (as Bob suggests), then the team and customers wouldn’t find themselves descending the 9 Circles of Data Quality Hell, and the organisation wouldn’t need to cast around for a hero (see Steve’s post) to fix the inevitable IQTrainwreck.

    Wrap up

    Thanks to everyone who submitted a post for the June-published, May-reflecting edition of the IDQ Blog Carnival. Steve Sarsfield is the host for the next edition, hitting the Internet on or just before the 1st of July, covering Information/Data Quality blog posts published in the month of June (no cheating, people – if you have a really good one from January… update it and submit it).

    Literally within seconds of writing the first draft of this, I spotted a few more new Information Quality bloggers joining the fray. Welcome to them and I hope they submit a post or three.

    If you want to submit a post for that edition, please visit the IAIDQ’s Blog Carnival page for details on how to submit your post.

    Keep blogging!

  • Certified Information Quality Professional

    Recent shenanigans around the world have highlighted the importance of good quality information. Over at idqcert.iaidq.org I’ve written a mid-sized post explaining why I am a passionate supporter of the IAIDQ’s Certified Information Quality Practitioner certification.

    Basically, there is a need for people who are managing information quality challenges to have a clear benchmark that sets them and their profession apart from the ‘IT’ misnomer. A clear code of ethics for the profession (a part of the certification, as I understand it) is also important. My reading of the situation, particularly in at least one Irish financial institution, is that people were more concerned with presenting the answer that was wanted rather than the answer that was needed, and there appears to have been some ‘massaging’ of figures to present a less than accurate view of things – resulting in investors making decisions based on incomplete or inaccurate information.

    Hopefully the CIQP certification will help raise standards and the awareness of the standards that should be required for people working with information in the information age.

  • Information Quality Train Drivers

    The IAIDQ is working to develop an industry standard certification/accreditation programme for Information/Data Quality Professionals (similar to the PMI for Project Managers). This is a valuable and significant initiative that will (hopefully) lead to a reduction in the types of issues we see over at IQTrainwrecks.com.

    The IAIDQ has set up a blog over at idqcert.iaidq.org to share news and feedback from the Certification development project. Currently there are some good posts there about the first international workshop that was held in October in North Carolina to thrash out the ‘knowledge areas’ that needed to be addressed. That workshop was a key input into the next stage of the project – a detailed Job Analysis study.

    Of course, industry-defining initiatives like this need to be funded, and the IAIDQ is eager that this be a community-led project “by IQ Professionals for IQ Professionals”, rather than being driven by the objectives of vendors (although vendors are good and the IAIDQ is looking for vendor sponsorship to help this initiative as well). To make this a ‘community’ initiative it was felt that individuals might like to ChipIn a few quid. If you are in the US it is tax-deductible due to the legal status of the IAIDQ (a 501(c)(3) not-for-profit). The rest of us might just need to be less generous.

    I personally think this is a great initiative that will raise standards and objectivity in the field of Information Quality. Please give generously.

  • An IQ Trainwreck…

    From Don Carlson, one of my IAIDQ cronies in the US comes this YouTube vid from Informatica (a data quality software tool vendor) that sums up a lot of why Information Quality matters.

    Of course, I could get snooty and ask what gave them the idea to juxtapose Information Quality and Trainwrecks…. gosh, I’d swear I’ve seen that somewhere before

  • Cripes, the blog has been name-checked by my publisher…

    TwentyMajor isn’t the only blogger in the pay of a publisher (I’m conveniently ignoring Grandad and the others as Irish bloggers are too darned fond of publishing these days. If you want to know who all the Irish bloggers with publishers are then Damien Mulley probably has a list)!

    I recently wrote an industry report for a UK publisher on Information Quality strategy. The publisher then swapped all my references to Information Quality to references to Data Quality as that was their ‘brand’ on the publication. I prefer the term Information Quality for a variety of reasons.

    As this runs to over 100 pages of A4 it has a lot of words in it. My fingers were tired after typing it. Unlike Twenty’s book, I’ve got pictures in mine (not those kind of pictures, unfortunately, but nice diagrams of concepts related to strategy and Information Quality. If you want the other kind of pictures, you’ll need to go here.)

    In the marketing blurb and bumph that I put together for the publisher I mentioned this blog and the IQTrainwrecks.com blog. Imagine my surprise when I opened a sales email from the publisher today and found this blog name-checked in it (yes, they included me on the sales mailing list… the irony is not lost on me… information quality, author, not likely to buy my own report when I’ve got the four drafts of it on the lappytop here).

    So, for the next few weeks I’ll have to look all serious and proper in a ‘knowing what I’m talking about’ kind of way to encourage people to buy my report. (I had toyed with some variation on booky-wook but it just doesn’t work – reporty-wort… no thanks, I don’t want warts).

    So things I’ll have to refrain from doing include:

    1. Engaging in pointless satirical attacks on the government or businesses just for a laugh, unless I can find an Information Quality angle
    2. Talking too loudly about politics
    3. Giving out about rural/urban digital divides in Ireland
    4. Parsing and reformatting the arguments of leading Irish opinion writers to expose the absence of logic or argument therein.
    5. Engaging in socio-economic analysis of the fate of highstreet purveyors of dirty water parading as coffee.
    6. Swearing

    That last one is a f***ing pain in the a**.

    If any of you are interested in buying my ‘umble little report, it is available for sale from Ark Group via this link. This link will make them think you got the email they sent to me, and you can get a discount, getting the yoke for £202.50 including postage and packing (normally £345 + £7.50 p&p). (Or click here to avoid the email campaign software…)

    And if any of you would like to see the content that I’d have preferred the link in the sales person’s email to send you to (coz it highlights the need for good quality management of your information quality) then just click away here to go to IQTrainwrecks.com.

    Thanks to Larry, Tom, Danette and the wifey for their support while I was writing the report, and to Stephanie and Vanessa at Ark Group for their encouragement to get it finished by the deadline.

  • Information Quality in 2008…

    So yet another year draws to a close. Usually around this time of year I try to take a few hours to review how things went, what worked and what still needs to be worked on in the coming year. In most cases that is a very personal appraisal of whether I had a ‘quality’ year – did I meet or exceed my own expectations of myself? (And I’m a bugger for trying to achieve too much too quickly.)

    Vincent McBurney’s Blog Carnival of Data Quality has invited submissions on the theme “Happy New Year”, so I thought I’d take a look back over 2007 and see what emerging trends or movements might lead to a Happy New Year for Information Quality people in 2008.

    Hitting Mainstream
    In 2007 Information Quality issues began to hit the mainstream. It isn’t quite there yet, but 2007 saw the introduction of taught Master’s degree programmes in Information Quality at the University of Arkansas at Little Rock, and similar developments have been mooted in at least one European university. If educators think they can run viable courses that will make money, then we are moving out of the niche towards being seen as a mainstream discipline of importance to business.

    The IAIDQ’s IDQ Conference in Las Vegas was a significant success, with numbers up on 2006 and a wider mix of attendees. I did an unofficial straw poll of people at that conference, and the consensus from the delegates and other speakers was that there were more ‘Business’ people at the conference than at previous Information Quality conferences they’d attended, a trend that has been growing in recent years. The same was true at the European Data Management and Information Quality Conferences in London in November: numbers were up on previous years, and there were more ‘Business’ people in the mix, up even on last year – this of course is all based on my unofficial straw poll and could be wrong.

    The fact that news stories about poor quality information abounded in 2007, and that the initial short sharp shock of Compliance and SOx etc. has started to give rise to questions of how to make Compliance a value-adding function (hint – it’s the INFORMATION, people), may help. But the influence of bloggers such as Vincent, and the adoption of blogs as communications tools by vendors and by professional associations such as the IAIDQ, is probably as big an influence if not bigger, IMHO.

    Also, and I’m not sure if this is a valid benchmark, I’ve started turning down offers to present at conferences and write articles for people on IQ issues, because a) I’m too busy with my day job and with the IAIDQ (oh yeah… and with my family) and b) there are more opportunities arising than I’d ever have time to take on.

    Unfortunately, much of the ‘mainstream’ coverage of Information Quality issues either views it as a ‘technology issue’ (most of my articles in Irish trade magazines are stuck in the ‘Technology’ section) or fails to engage fully with the Information Quality aspects of the story. The objective of IQTrainwrecks.com is to try to highlight the Information Quality aspects of things that get into the media.

    What would make 2008 a Happy Year for me would be to have more people contributing to IQTrainwrecks.com, to have some happy-path stories to tell, and to see better analysis of these issues in the media.

    Community Building
    There is a strong sense of ‘community’ building amongst many of the IQ practitioners I speak with. That has been one of the key goals of the IAIDQ in 2007 – to trigger that sense of Community, linking like-minded people and helping them learn from each other. This has started to come together. However, it isn’t happening as quickly as I’d like, because I have a shopping list of things I want yesterday!

    What would make 2008 a happy new year for me would be for us to maintain the momentum we’ve developed in connecting the Community of Information/Data Quality professionals and researchers. Within the IAIDQ I’d like us to get better at building those connections (we’ve become good… we need to keep improving).

    I’d like to see more people making contact via blogs like Vincent’s or mine, or through other social networking facilities, so we can build the Community of Like Minded people all focussing on the importance of Information Quality and sharing skills, tips, tools, tricks and know-how about how to make it better. I’d be really happy if, by the end of 2008, a few more people had made the transition from thinking they are the ‘lonely voice’ in their organisation to realising they are part of a very large choir that is singing an important tune.

    Role Models for Success
    2007 saw a few role models for success in Information Quality execution emerging. All of these had similar stories and similar elements that made up their winning plan. It made a change from previous years when people seemed afraid to share – perhaps because it is so sensitive a subject (for example, admitting you have an IQ problem could amount to self-incrimination in some industries). In the absence of this sort of ‘role model’ it is difficult to sell the message of data quality, as it can come across as theoretical.

    I’d be very happy at the end of 2008 if we had a few more role models of successful application of principles and tools – not presented by vendors (no offence to vendors) but emerging from within the organisations themselves. I’d be very happy if we had some of these success stories analysed to highlight the common Key Success Factors that they share.

    Break down barriers
    2007 saw a lot of bridges being built within the Information Quality Community. 2006 ended with a veritable bloodbath of mergers and acquisitions amongst software vendors. 2007 had a development of networks and mutual support between the IAIDQ (as the leading professional organisation for IQ/DQ professionals) and MIT’s IQ Programme. In many Businesses the barriers that have prevented the IQ agenda from being pursued are also being overcome for a variety of reasons.

    2008 should be the year to capitalise on this as we near a significant tipping point. I’d like to see 2008 being the year where organisations realise that they need to push past the politics of Information Quality to actually tackle the root causes. Tom Redman is right – the politics of this stuff can be brutal, because to solve the problems you need to change thinking and remould governance, all of which is a dangerous threat to traditional power bases. The traditional divide between “Business” and “IT” is increasingly anachronistic, particularly when we are dealing with information/data within systems. If we can make that conceptual leap in 2008 to the point where everyone is inside the same tent peeing out… that would be a good year.

    Respect
    For most of my professional life I’ve been the crazy man in the corner telling everyone there was an elephant in the room that no-one else seemed able to see. It was a challenge to get the issues taken seriously. Even now I have one or two managers I deal with who still don’t get it. However, most others I deal with do get it; they just need to be told what they have. 2007 seems to be the year that the lights started to go on about the importance of the Information Asset. Up to now, people spoke about it but didn’t ‘feel’ it… but now I don’t have trouble getting my Dept Head to think in terms of root causes, information flows etc.

    2008 is the year of Respect for the IQ Practitioner…. A Happy New Year for me would be to finish 2008 with appropriate credibility and respect for the profession. Having role models to point to will help, but also having certification and accreditation so people can define their skillsets as ‘Information Quality’ skill sets (and so chancers and snake-oil peddlers can be weeded out).

    Conclusion
    2007 saw discussion of Information Quality start to hit the mainstream and the level of interest in the field is growing significantly. For 2008 to be a Happy New Year we need to build on this, develop our Community of practitioners and researchers and then work to break down barriers within our organisations that are preventing the resolution of problems with information quality. If, as a community of Information/Data Quality people we can achieve that (and the IAIDQ is dedicated to that mission) and in doing so raise our standards and achieve serious credibility as a key management function in organisations and as a professional discipline then 2008 will have been a very Happy New Year.

    2008 already has its first Information Quality problem though…. looks like we’ve got a bit of work to do to make it a Happy New Year.

  • The evolution of Information Quality

    I was googling today (or doing some googlage) for blogs that deal with Information and Data Quality topics. Needless to say, yours truly did appear reasonably high in the search results. One post that I came across that really made me think a bit was this one from Andrew Brooks, currently a Senior Consultant with Cap Gemini in the UK.

    In his post he asks if we are at a ‘tipping point’ for Information Quality where

    organisations are starting to move from ‘unconscious incompetence’ to ’conscious incompetence’ and see the need to spend money in this area (hence the growing number of vendors and consultancies) which are feeding off the back of this.

    He mentions that he gets calls from recruiters looking for Data Quality Management roles to be filled and wonders when we will reach the stage of ‘Conscious Competence’.

    My personal feeling is that we are at a very large tipping point. Those organisations that truly make the leap will gain significant advantage over those that don’t. Those that make the leap half-heartedly by putting a few job titles and tools in the mix with no commitment or plan will limp along, but the pressure of competing with lean and efficient opposition (those who jump in wholeheartedly) will squeeze on these organisations. Those that don’t leap at all will fall foul of Darwinian evolution in the business context.

    The danger that we face at this juncture is that when the ship is sinking any bandwagon looks like a lifeboat. The risk that we face is that we will not have learned the lessons of the CRM adoption age when organisations bought ‘CRM’ (ie software) but didn’t realise the nature of the process and culture changes that were required to successfully improve the management of Customer Relationships. Tools and job titles do not a success make.

    The same was true of Quality management in manufacturing. As Joseph Juran said:

    “They thought they could make the right speeches, establish broad goals, and leave everything else to subordinates… They didn’t realize that fixing quality meant fixing whole companies, a task that cannot be delegated.”

    So, what can be done?

    The International Association for Information and Data Quality was founded in 2004 by Tom Redman and Larry English (both referenced in Mr Brooks’ article) to promote and develop best practices and professionalism in the field of Information and Data Quality.

    As a vendor-neutral organisation, part of the Association’s mission is to cut through the hype and sales pitches to nail down, clarify and refine the core fundamental principles of Information Quality Management, and to support Information/Data Quality professionals (I use the terms interchangeably; some people don’t…) in developing and certifying their skills so that (for example) the recruiter looking for a skilled Data Quality Manager has some form of indicator as to the quality of the resource being evaluated.

    The emergence of such an organisation, and the work that is being done to develop formal vendor-independent certification and accreditation, evidences the emergence of the ‘early adopters’ of the ‘Conscious Commitment’ that Mr. Brooks writes about. As an Information Quality professional I am conscious that there is a lot of snake-oil swilling around the market, but also a lot of gems of wisdom. I am committed to developing my profession and developing the professional standards of my profession (vocation might be another word!).

    Having a rallying point where interested parties can share and develop sound practices and techniques will possibly accelerate the mainstreaming of the Conscious Commitment… IQ/DQ professionals (and researchers… mustn’t forget our colleagues in academia) need no longer be isolated or reinvent the wheel on their own.

    Let me know what you think….

  • Conferences and me for the end of 2007…

    Conference season is upon us in the Information Quality Community…

    At the end of September I’m off to Las Vegas to deliver a presentation at the IAIDQ’s North American conference the IDQ 2007 Conference.

    At the end of October I’m off to sunny London for the IRMUK Data Management and Information Quality Conferences. This will be my sixth year at this conference and my fourth as a presenter. This year I hit the ‘big leagues’ with a 3-hour tutorial on some of the legal aspects of Information Quality, going head to head with Larry English (amongst others) on the timetable.

    Then in November the Irish CoP of the IAIDQ, the IQ Network, will be hosting our IQ Forum… we’re planning it to coincide with World Quality Day on the 8th of November, to tie in with some IAIDQ events that will be taking place worldwide.

    Who knows, maybe I’ll meet somebody from Dell at one of those conferences who might be able to fix my laptop problem before Christmas. 😉
    That would be nice.