Category: Business

A top level category for posts on business issues such as Web2.0 tools and trends, customer service issues etc.

  • Who then is my customer?

    Two weeks ago I had the privilege of taking part in the IAIDQ’s Ask the Expert Webinar for World Quality Day (or as it will now be known, World Information Quality Day).

    The general format of the event was that a few of the IAIDQ Directors shared stories from their personal experiences or professional insights and extrapolated out what the landscape might be like in 2014 (the 10th anniversary of the IAIDQ).

    A key factor in all of the stories that were shared was the need to focus on the needs of your information customer, and the fact that the information customer may not be the person who you think they are. More often than not, failing to consider the needs of your information customers can result in outcomes that are significantly below expectations.

    One of my favourite legal maxims is Lord Atkin’s definition of the ‘neighbour’ to whom you owe legal duties of care. He describes your ‘neighbour’ as anyone you should reasonably have in your mind when undertaking any action, or deciding not to take any action. While this defines a ‘neighbour’ from the point of view of litigation, I think it is also a very good definition of your “customer” in any process.

    Recently I had the misfortune to witness first hand what happens when one part of an organisation institutes a change in a process without ensuring that the people who they should have reasonably had in their mind when instituting the change were aware that the change was coming.

    My wife had a surgical procedure and a drain was inserted for a few days. After about 2 days, the drain was full and needed to be changed. The nurses on the ward couldn’t figure out how to change my wife’s drain because the drain that had been inserted was a new type which the surgical teams had elected to go with but which the ward nurses had never seen before.

    For a further full day my wife suffered the indignity of various medical staff attempting to figure out how to change the drain. Two problems compounded the situation:

    1. There was no replacement drain of that type available on the ward. The connections were incompatible with the standard drain that was readily available to staff on the ward and which they were familiar with.
    2. When a replacement drain was sourced and fitted, no-one could figure out how to actually activate the magic vacuum function of it that made it work. The instructions on the device itself were incomplete.

    When the mystery of the drain fitting was eventually solved, the puzzle of how to actually read the amount of fluid being drained presented itself. This mattered because the surgeon had left instructions that the drain was to be removed once the output had dropped below a certain amount. The device itself presented misleading information, appearing to be filled to one level but, when emptied out, in fact containing a lesser amount (an information presentation quality problem, one might say).

    The impacts of all this were:

    • A distressed and disturbed patient increasingly worried about the quality of care she was receiving.
    • Wasted time and resources pulling medical staff from other duties to try and solve the mystery of the drain.
    • A very peeved and increasingly irate quality management blogger growing more annoyed at the whole situation.
    • Medical staff feeling and looking incompetent in front of a patient (and the patient’s family).

    Eventually the issues were sorted out and the drain was removed, but the outcome was a decidedly sub-optimal one for all involved. And it could have been easily avoided had there been proper communication about the change to the ward nurses and the doctors in the department from the surgical teams when they changed their standard. Had the surgical teams asked the question of who should they have in their minds to communicate with when taking an action, surely the post-op nurses should have featured in there somewhere?

    I would be tempted to say “silly Health Service” if I hadn’t seen exactly this type of scenario play out in day to day operations and flagship IT projects during the course of my career. Whether it is changing the format of a spreadsheet report so it can’t be loaded into a database or filtered, changing a reporting standard, changing meta-data or reference data, or changing process steps, each of these can result in poor quality information outcomes and irate information customers.

    So, while information quality is defined from the perspective of your information customers, you should take the time to step back and ask yourself who those information customers actually are before making changes that impact on the downstream ability of those customers to meet the needs of their customers.

  • Bank of Ireland – again

    The Irish Times today reports that Bank of Ireland are again investigating incidents of double charging of customers who use LASER cards.

    I wrote about this last month (see the archives here), picking up on a post from Tuppenceworth.ie earlier in the summer. I won’t be writing anything more about the issue (at least not for now).

    Looking back through my archives I found the picture below in a post that I’d written back in May when Simon on Tuppenceworth first raised his issue with BOI’s Laser Cards.

  • A game changer – Ferguson v British Gas

    Back in April I wrote an article for the IAIDQ’s Quarterly Member Newsletter picking up on my niche theme, Common Law liability for poor quality information – in other words, the likelihood that poor quality information and poor quality information management practices will result in your organisation (or you personally) being sued.

    I’ve written and presented on this theme many times over the past few years and it always struck me how people started off being in the “that’s too theoretical” camp but by the time I (and occasionally my speaking/writing partner on this stuff, Mr Fergal Crehan) had finished people were all but phoning their company lawyers to have a chat.

    To an extent, I have to admit that in the early days much of this was theoretical, taking precedents from other areas of law and trying to figure out how they fit together in an Information Quality context. However, in January 2009 a case was heard in the Court of Appeal in England and Wales which has significant implications for the Information Quality profession and which has had almost no coverage (other than coverage via the IAIDQ and myself). My legal colleagues describe it as “ground breaking” for the profession because of the simple legal principle it creates regarding complex and silo’d computing environments and the impact of disparate and plain crummy data. I see it as a clear rallying cry that makes it crystal clear that poor information quality will get you sued.

    Recent reports (here and here) and anecdotal evidence suggest that in the current economic climate, the risk to companies of litigation is increasing. Simply put, the issues that might have been brushed aside or resolved amicably in the past are now life and death issues, at least in the commercial sense. As a result there is now a trend to “lawyer up” at the first sign of trouble. This trend is likely to accelerate in the context of issues involving information, and I suspect, particularly in financial services.

    A recent article in the Commercial Litigation Journal (Frisby & Morrison, 2008) supports this supposition. In that article, the authors conclude:

    “History has shown that during previous downturns in market conditions, litigation has been a source of increased activity in law firms as businesses fight to hold onto what they have or utilise it as a cashflow tool to avoid paying money out.”

    The Case that (should have) shook the Information Quality world

    The case of Ferguson v British Gas was brought by Ms Ferguson, a former customer of British Gas who had transferred to a new supplier but to whom British Gas continued to send invoices and letters with threats to cut off her supply, start legal proceedings, and report her to credit rating agencies.

    Ms Ferguson complained and received assurances that this would stop but the correspondence continued. Ms Ferguson then sued British Gas for harassment.

    Among the defences put forward by British Gas were the arguments that:

    (a) correspondence generated by automated systems did not amount to harassment, and (b) for the conduct to amount to harassment, Ms Ferguson would have to show that the company had “actual knowledge” that its behaviour was harassment.

    The Court of Appeal dismissed both these arguments. Lord Justice Breen, one of the judges on the panel for this appeal, ruled that:

    “It is clear from this case that a corporation, large or small, can be responsible for harassment and can’t rely on the argument that there is no ‘controlling mind’ in the company and that the left hand didn’t know what the right hand was doing.”

    Lord Justice Jacob, in delivering the ruling of the Court, dismissed the automated systems argument by saying:

    “[British Gas] also made the point that the correspondence was computer generated and so, for some reason which I do not really follow, Ms. Ferguson should not have taken it as seriously as if it had come from an individual. But real people are responsible for programming and entering material into the computer. It is British Gas’s system which, at the very least, allowed the impugned conduct to happen.”

    So what does this mean?

    In this ruling, the Court of Appeal for England and Wales has effectively indicated a judicial dismissal of a ‘silo’ view of the organization when a company is being sued. The courts attribute to the company the full knowledge it ought to have had if the left hand knew what the right hand was doing. Any future defence argument grounded on the silo nature of organizations will likely fail. If the company will not break down barriers to ensure that its conduct meets the reasonable expectations of its customers, the courts will do it for them.

    Secondly, the Court clearly had little time or patience for the argument that correspondence generated by a computer was any less weighty or worrisome than a letter written by a human being. Lord Justice Jacob’s statement places the emphasis on the people who program the computer and the people who enter the information. The faulty ‘system’ he refers to includes more than just the computer system; arguably, it also encompasses the human factors in the systemic management of the core processes of British Gas.

    Thirdly, the Court noted that perfectly good and inexpensive avenues to remedy in this type of case exist through the UK’s Trading Standards regulations. Thus from a risk management perspective, the probability of a company being prosecuted for this type of error will increase.

    British Gas settled with Ms Ferguson for an undisclosed amount and was ordered to pay her costs.

    What does it mean from an Information Quality perspective?

    From an Information Quality perspective, this case clearly shows the legal risks that arise from (a) disconnected and siloed systems, and (b) inconsistencies between the facts about real world entities that are contained in these systems.

    It would appear that the debt recovery systems in British Gas were not updated with correct customer account balances (amongst other potential issues).

    Ms Ferguson was told repeatedly by one part of British Gas that the situation was resolved, while another part of British Gas rolled forward with threats of litigation. The root cause here would appear to be an incomplete or inaccurate record or a failure of British Gas’ systems. The Court’s judgment implies that poor quality data isn’t a defence against litigation.

    The ruling’s emphasis on the role of people in the management of information is likewise significant: ‘programming computers’ can be read to include the IT tasks involved in designing and developing systems, and ‘inputting data’ can be read to cover defining the data that the business uses and managing the processes that create, maintain, and apply that data.

    Clearly, an effective information quality strategy and culture, implemented through people and systems, could have avoided the customer service disaster and litigation that this case represents. The court held the company accountable for not breaking down barriers between departments and systems so that the left hand of the organization knows what the right hand is doing.

    Furthermore, it is now more important than ever that companies ensure the accuracy of information about customers, their accounts, and their relationship with the company, as well as ensuring the consistency of that information between systems. The severity of impact of the risk is relatively high (reputational loss, cost of investigations, cost of refunds) and the likelihood of occurrence is also higher in today’s economic climate.

    Given the importance of information in modern businesses, and the likelihood of increased litigation during a recession, it is inevitable: poor quality information will get you sued.

  • Finding Red Herrings or Missing a Trick?

    This post is written by Colin Boylan, an independent market research professional based in Wicklow, Ireland with extensive experience in Market Research in pharma and other industries in the UK and Ireland. In this post, Colin explains how the quality of the population sample used in a market research study can have significant effects on the quality of the findings. His post was inspired by recent posts here and here about “Golden Databases”. I’m glad to give Colin a chance to try out his blogging chops and I hope visitors here enjoy reading his insights into information quality and market research.

    Finding Red Herrings or Missing a Trick?

    For most businesses there are major advantages to investing money in doing direct research with your customer base. In theory it’s a ready-built list of people who are familiar with your business – so they can speak with authority on their experience as your customer.

    The value of customer research to business should by now be fairly obvious, but there’s an old saying in research (and elsewhere) – “garbage in, garbage out”. The insights built off the data generated from your customer list are only as relevant as the list of people you ask to participate in the research.

    However if, for example, they are lapsed customers then researching them is going to give you a picture of what your past customers wanted from you (unless these people are the focus of your research, of course). Is this the same as what your present customers want? And if you are looking for why past customers stopped dealing with you and use a list full of current customers, you end up with either few people able to answer the questions you set or, worse, data from people who shouldn’t have answered the question – which leads to another scenario.

    Picture an important piece of research done with a list of past and present customers mixed in together with no way to tell who is who. Do current and ex-customers differ in their wants and needs from your business? I don’t know – and neither do you. So how useful are any insights generated from this research? Not being able to separate these two groups gives rise to two potential scenarios. Either the excess numbers in there are throwing up ‘clear’ results that are not applicable to your current customers or the combination of both bodies is adding noise which stops you uncovering real insights about the customers you’re interested in – you’re either finding red herrings or you’re missing a trick!

    I’ve used just one scenario here to make a point that can be applied to lots of customer data stored by companies – be it incorrect regional information, incorrect gender, you can add whatever block of data is relevant to your own company here and the story is the same. If the data is not accurate then any use it is put to suffers.
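
    To make Colin’s point concrete, here is a small, purely illustrative simulation (the proportions and preference figures are invented for the example, not drawn from any real study). When lapsed customers who want something different are mixed into the list with no way to separate them, the blended result points you away from what your current customers actually want:

    ```python
    import random

    random.seed(42)

    # Hypothetical survey question: 1 = "would use the new loyalty scheme", 0 = "would not".
    # Assume current customers mostly want it and lapsed customers mostly don't.
    current = [1 if random.random() < 0.70 else 0 for _ in range(600)]
    lapsed = [1 if random.random() < 0.30 else 0 for _ in range(400)]

    mixed = current + lapsed  # the list with no way to tell who is who

    def pct(xs):
        return 100 * sum(xs) / len(xs)

    print(f"Current customers in favour: {pct(current):.0f}%")  # the answer you need
    print(f"Blended list in favour:      {pct(mixed):.0f}%")    # the answer you get
    ```

    The blended figure drifts towards the middle, so the clear signal from current customers is drowned out: exactly the red herring, or missed trick, Colin describes.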

  • The Risk of Poor Quality Information (2) #nama

    84% fail. Do you remember that statistic from my previous post?

    In my earlier post on this topic I wrote about  how issues of identity (name and address) can cause problems when attempting to consolidate data from multiple systems into one Single View of Master Data. I also ran through the frightening statistics relating to the failure rate of these types of projects, ranging up to 84%.

    Finally, I plugged two IAIDQ conferences on this topic. One is in Dublin on the 28th of September. The other is in Cardiff (currently being rescheduled for the end of next month).

    A key root cause of these failure rates has been identified: a failure to profile the data being consolidated, and so to understand the risks and issues lurking within it.

    So, if we assume that the risk that is NAMA is a risk the Government will take, surely it then behoves the Government and NAMA to ensure that they take the necessary steps to mitigate the risks posed to their plan by poor quality information, and to reduce the probability of failure because of data issues from around 8 in 10 to something more palatable (Zero Defects, anyone?).

    Examples of problems that might occur (Part 2)

    Last time we talked about name and address issues. This time out we talk a little about more technical things  like metadata and business rules. (You, down the back… stay awake please).

    Divided by a Common Language (or the muddle of Metadata)

    Oscar Wilde is credited with describing America and Britain as two countries divided only by a common language.

    When bringing data together from different systems, there is often an assumption that if the fields are called the same thing or a similar thing, then they are likely to hold the same data. This assumption in particular is the mother of all cock-ups.

    I worked on a project once where there were two systems being merged for reporting purposes. System A had a field called Customer ID. System B had a field called Customer Number. The data was combined and the resulting report was hailed as something likely to promote growth, but only in roses. In short, it was a pile of manure.

    The root cause was that System A’s field was a field that uniquely identified customer records with an auto-incrementing numeric value (it started at 1, and added 1 until it was done). The Customer Number field in System B, well it contained letters and numbers and, most importantly, it didn’t actually identify the customer.

    ‘Metadata’ (and I sense an attack by Metadata puritans any moment now) is basically defined as “data about data” which helps you understand the values in a field and helps you make correct assumptions about whether Tab A can actually be connected to Slot B in a way that will make sense. It ranges from the technical (this field has only numbers in it for all the data) to the conceptual (e.g. “A customer is…”).

    And here is the rub. Within individual organisations, there are often (indeed I would say inevitably) differences of opinion (to put it politely) about the meaning of the metadata within that organisation. Different business units may have different understandings of what a customer is. Software systems that have sprung up as silo responses to an immediate tactical (or even strategic) need often have field names that are different for the same thing (synonyms) or the same for different things (homonyms). Either can cause serious problems in the quality of consolidated data.
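
    A simple bit of column profiling would have caught the Customer ID / Customer Number mix-up before the manure report was ever built. The sketch below is a minimal illustration (the field names and sample values are invented): it checks whether two supposedly equivalent identifier fields are actually numeric and actually unique before anyone tries to join on them.

    ```python
    def profile_identifier(values):
        """Profile a candidate identifier column: is it numeric, and does it identify rows?"""
        non_null = [v for v in values if v not in (None, "")]
        return {
            "populated":   len(non_null),
            "all_numeric": all(str(v).isdigit() for v in non_null),
            "unique":      len(set(non_null)) == len(non_null),
        }

    # Invented sample data standing in for the two systems in the story.
    system_a_customer_id = [1, 2, 3, 4, 5]                      # auto-incrementing surrogate key
    system_b_customer_no = ["AB1", "AB1", "C402", "", "C402"]   # mixed, repeating, sometimes blank

    print("System A Customer ID:    ", profile_identifier(system_a_customer_id))
    print("System B Customer Number:", profile_identifier(system_b_customer_no))
    # Radically different profiles are a warning that the two fields do not mean the same thing.
    ```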

    Now NAMA, it would seem, will be consolidating data from multiple areas from within multiple banks. This is a metadata problem squared, which increases the level of risk still further.

    Three Loan Monty (or “One rule to ring them all”)

    One of the things we learned from Michael Lynn (apart from how lawyers and country music should never mix) was that property developers and speculators occasionally pull a fast one and give the same piece of property as security to multiple banks for multiple loans. The assumption that they seemed to have made in the good times was:

    1. No one would notice (sure, who’d be pulling all the details of loans and assets from most Irish Banks into one big database)
    2. They’d be able to pay off the loans before anyone would notice

    Well, number 1 is about to happen and number 2 has stopped happening in many cases.

    To think about this in the context of a data model or a set of business rules for a moment:

    • A person (or persons) can be the borrowers on more than one Loan
    • One loan can be secured against zero (unsecured), one (secured), or more than one (secured) assets.

    What we saw in the Lynn case broke these simple rules.

    An advantage of NAMA is that it gives an opportunity to actually get some metrics on how frequently this was allowed to happen. Once there is a Single View of Borrower it would be straightforward to profile the data and test for the simple business rules outlined above.
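
    As a rough sketch of what that profiling could look like (the data structures and figures below are invented for illustration, not a description of NAMA’s or any bank’s actual systems), the simplified rule “one asset should not be pledged as security against more than one loan” boils down to a grouping exercise once the data is in one place:

    ```python
    from collections import defaultdict

    # Invented records: (loan_id, borrower, asset pledged as security)
    security_register = [
        ("L001", "Developer A", "ASSET-7"),
        ("L002", "Developer A", "ASSET-7"),   # same asset pledged a second time
        ("L003", "Developer B", "ASSET-9"),
        ("L004", "Developer A", "ASSET-7"),   # and a third time
    ]

    loans_by_asset = defaultdict(list)
    for loan_id, borrower, asset_id in security_register:
        loans_by_asset[asset_id].append(loan_id)

    # Simplified business rule: flag any asset securing more than one loan for review.
    for asset_id, loans in loans_by_asset.items():
        if len(loans) > 1:
            print(f"Rule breach: {asset_id} pledged against {len(loans)} loans: {', '.join(loans)}")
    ```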

    The problem arises if incidents like this are discovered where three or four loans are secured against the same asset and one of them has a fixed or crystallised charge over the asset while the others have some sort of impairment on their security (such as paperwork not being filed correctly and the charge not actually existing).

    If the loan with the charge is the smallest of the four, NAMA winds up with three expensive unsecured loans, as the principle in Commercial Law is that first in time prevails – in other words, the first registered charge is the one that secures the asset.

    It may very well be that the analysis of the banks’ loan books has already gone into the detail here and there is a cunning plan to address this type of problem as it arises. I’d be interested to see how such a plan would work.

    Unfortunately, I would fear that the issues uncovered in the Michael Lynn debacle haven’t gone away and remain lurking under the surface.

    Conclusion (for now)

    Information is ephemeral and intangible. Banks (and all businesses) use abstract facts stored in files to describe real-world things.

    Often the labels and attributes associated with those facts are not aligned, or are created and defined in “silos” which create barriers to effective communication within an organisation. Such problems are multiplied manifold when you begin to bring data from multiple independent entities together into one place and try to make it into a holistic information asset.

    Often things get lost or muddled in translation.

    Furthermore, where business rules that should govern a process have been broken or not enforced historically, there are inevitably going to be ‘gaps of fact’ (or ‘chasms of unknowingness’ if there are a lot of broken rules). Those ‘gaps of fact’ can undermine critical assumptions in processes or data migrations.

    When we talk of assumptions I am reminded of the old joke about how the economist who got stranded on a desert island was eventually rescued. On the sand he wrote “First, assume a rescue party will find me”.

    Where there is a ‘gap of fact’, it is unfortunately the case that there is a very high probability (around 84%) that the economist would be found, but found dead.

    Effective management of the risk of poor quality information requires people to set aside assumptions and act on fact and mind the gap.

  • Bank of Ireland Overcharging – another follow up

    Scanning the electronic pages of the Irish Independent this morning I read that

    1. They claim to have had the scoop on this story (no, it was Tuppenceworth.ie and IQTrainwrecks.com)
    2. They have “experts” (unnamed ones) who tell them that the actual number of impacted customers over the weekend could be up to 200,000.
    3. “Some other banks admitted there have been cases where Laser payments have mistakenly gone through on the double. But they said they have not had any serious problems.” (BOI had that angle on the issue back in June).
    4. The bank cannot guarantee that it won’t happen again.

    I’ll leave points one to three for another time and focus at this point (as my bus to Dublin leaves soon) on the matter of the Bank of Ireland not being able to guarantee that it won’t happen again.

    The Nature of Risk

    Fair play to BOI for admitting that they can’t guarantee that this problem won’t happen again. It has happened before (in May), it has happened now, it is only prudent to say that it may happen again.

    But are they unable to guarantee that it won’t happen again because they understand the causes of this problem, have properly mapped the process and information flows, understand the Happy Path and Crappy Path scenarios and the factors that trigger them, and have established robust detective and preventative controls on the information processes, and are simply hedging their bets against the occurrence of idiots? In that case, they have a managed process which will (hopefully) have effective governance structures around it to embed a quality culture that promotes early warning and prompt action to address the incidence of idiots which inevitably plagues foolproof processes.

    Or are they unable to guarantee it won’t happen again because they lack some or all of the above?

    Again I am forced to fall back on tortured analogies to explain this I fear.

    A few years ago I had an accident in my car. I am unable to guarantee that for the rest of my driving life I won’t have another accident. Hence I have taken out insurance. However, I have also taken a bit of time to understand how the first accident occurred and modified my driving to improve my ability to control the risk of accident. Hence I am able to get insurance.

    Had I not modified my driving, the probability of the same type of accident occurring would have been high, and as a result the cost of my insurance would be higher (no no-claims bonus, for example).

    However, because I understand the “Happy Path” I want to travel on when driving, and also understand the Crappy Path that I can wind up on if I don’t take the appropriate care and apply the appropriate controls (preventative and detective) on how I drive, I haven’t had an accident since I reversed into the neighbour’s car many moons ago.

    I can’t guarantee it won’t happen again, but that is because I understand the nature of the risk and the extent to which I can control it, not because I am blissfully unaware of what is going on when I’m driving.

    Information Quality and Trust

    What does this idea of Information Quality and Trust mean? Well, the Indo put it very well this morning:

    Revelations about the Laser card glitch, disclosed in yesterday’s Irish Independent, have shaken confidence in banks’ payments systems at a time when people are nervous about all financial transactions.

    As I have said elsewhere, information is the key asset that fuels business and trade and is a key source of competitive advantage. In a cashless society it is not money that moves between bank accounts when you buy something, it is bits of information. Even when you take money from an ATM all you are really doing is turning the electronic fact into the physical thing it describes – €50 in your control to spend as you will.

    When the quality of information is called into question there is an understandable destruction of trust. “The facts don’t stack up”…. “the numbers don’t add up”… these are common exasperated comments one can often hear, usually accompanied by a reduction in trust in what you are being told or a reluctance to make a decision on that information.

    Somewhat ironically, it is the destruction of trust in the information around sub-prime mortgage lending, and in the bundled loan products that banks started trading to help spread their risk in mortgage lending, that has contributed to the current economic situation.

    In the specific case of Bank of Ireland and the Laser card problems, the trust vacuum is compounded by

    • The bank’s failure to acknowledge the extent or timescale of the issue
    • The bank’s apparent lack of understanding of how the process works or where it is broken. [Correction & Update: Yesterday’s Irish Daily Mail says that the Bank does know what caused the problem and is working on a solution. The apparent cause is very similar to the hypotheses I set out in the post  previous to this one.]

    This second one isn’t helped unfortunately by the fact that these issues can sometimes be complex and the word count available to a journalist is not often amenable to a detailed treatise on the finer points of batch processing transactions and error handling in complex financial services products.

    That’s why it is even more important for the bank to be communicating effectively here in a way that is customer-focussed, not directed towards protecting the bank.

    To restore trust, Bank of Ireland (and the other banks involved in Laser) needs to

    1. Demonstrate that they know how the process works… give a friendly graphic showing the high level process flow to the media (or your friendly neighbourhood voice of reason blogger). Heck, I’d even draw it for them if they would talk to me.
    2. From that simple diagram work out the Happy Path and Crappy Path scenarios. This may require a more detailed drill down than they might want to publish, but it is necessary. (they don’t need to publish the detail though).
    3. Once the Happy and Crappy paths are understood, identify what controls you currently have in place to keep things on the Happy Path. Test these controls. Where controls are lacking or absent, invest ASAP in robust controls.

    The key thing now is that the banking system needs to be able to demonstrate that it has a handle on this to restore Trust. The way to do this is to ensure that the information meets the expectations of the customer.

    I am a BOI customer. I expect to only pay for my lunch once. Make it so.

  • Bank of Ireland Double Charging – a clarifying post

    Having spent the day trading IMs and talking to journalists about the Bank of Ireland Laser Card double charging kerfuffle, I thought it would be appropriate to write a calmer piece which spells out a bit more clearly my take on this issue, the particular axe I am grinding, and what this all means. I hope I can explain this in terms that can be clearly understood.

    What is LASER?

    For the benefit of people reading this who aren’t living and working in Ireland I’ll very quickly explain what LASER card is.

    LASER is a debit card system which operates in Ireland. It is in operation in over 70,000 businesses in Ireland. It is operated by Laser Card Services Ltd. Laser Card Services is owned by seven of Ireland’s financial services companies (details here) and three of these offer merchant services to Retailers (AIB, Bank of Ireland, and Ulster Bank). In addition to straightforward payment services, LASER allows card holders to get “cashback” from retailers using their card.

    There are currently over 3 million Laser cardholders nationwide, who generated more than €11.5 billion in retail sales in 2008. On average, over 300 Laser card transactions are recorded per minute in Ireland.

    How it works (or at least the best stab I can get at it)

    As Jennifer Aniston used to say in that advert… “now for the science bit”. Children and persons of a sensitive disposition should look away now.

    One problem I’ve encountered here is actually finding any description of the actual process that takes your payment request (when you put your card in the reader and enter your PIN), transfers the money from you to the retailer, and then records that transaction on your bank statement. Of course, there are valid security reasons for that.

    So, I’ve had to resort to making some educated guesses based on my experience in information management and some of the comments in the statement I received from Bank of Ireland back in June. If I have any of this wrong, I trust that someone more expert than me will provide the necessary corrections.

    1. The card holder presents their card to the retailer and puts it in the card reader. The card reader pulls the necessary account identifier information for the card holder for transmission to the LASER processing system (we’ll call this “Laser Central” to avoid future confusion).
    2. The retailer’s POS (point of sale) system passes the total amount of the transaction, including any Cashback amount and details of the date, time, and retailer, to the Laser card terminal.  Alternatively, the Retailer manually inputs the amount on the Laser POS terminal.
    3. This amount and the details of the transaction are transmitted to the Laser payment processing systems.
    4. ‘Laser Central’ then notifies the cardholder’s bank which places a “hold” on an amount of funds in the customer’s account. This is similar in concept to the “pre-authorisation” that is put on your credit card when you stay in a hotel.
    5. At a later stage, ‘Laser Central’ transmits a reconciliation of transactions which were actually completed to the Laser payment processing system. This reconciliation draws down against the “hold” that has been put on funds in the card holder’s account, which results in the transaction appearing on the card holder’s bank statement.

    Point 5 explains why it can sometimes take a few days for transactions to hit your account when you pay with your laser card.
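
    Since I’m already guessing at the process, here is the same strawman expressed as a toy model in code. The class, names, and numbers are mine for illustration only and are not Laser’s actual design; it exists purely to make the “hold, then draw down” sequence concrete.

    ```python
    class StrawmanAccount:
        """Toy model of the hold / draw-down sequence guessed at above. Not Laser's real design."""

        def __init__(self, balance):
            self.balance = balance
            self.holds = {}        # hold_ref -> (amount reserved but not yet posted, retailer)
            self.statement = []    # posted transactions, as they would appear to the customer

        def authorise(self, hold_ref, amount, retailer):
            # Steps 1-4: the authorisation request arrives and funds are put on hold.
            self.holds[hold_ref] = (amount, retailer)

        def settle(self, hold_ref):
            # Step 5: the reconciliation draws down against the hold and posts it to the statement.
            amount, retailer = self.holds.pop(hold_ref)
            self.balance -= amount
            self.statement.append((retailer, amount))


    acct = StrawmanAccount(balance=200.00)
    acct.authorise("H-1001", 50.00, "Clarks O'Connell St")
    acct.settle("H-1001")
    print(acct.balance, acct.statement)   # 150.0 [('Clarks O'Connell St', 50.0)]

    # In this toy model a second settle("H-1001") would fail because the hold has already been
    # cleared; the reported problem is, in effect, a second draw-down getting through anyway.
    ```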

    The Problem

    The problem that has been reported by Bank of Ireland today, and which was picked up on by Simon over at Tuppenceworth.ie in May, is that customers are being charged twice for transactions. In effect, the “hold” is being called on the double.

    Back in May, Bank of Ireland explained this as being (variously):

    • A problem caused by a software upgrade
    • A problem caused by retailers not knowing how to use their terminals properly
    • A combination of these two

    The Software Upgrade theory would impact on steps 3,4, and 5 of the “strawman” Laser process I have outlined above. The Retailer error theory would impact on steps 1 and 2 of that process, with potentially a knock on onto step 5 if transactions are not voided correctly when the Retailer makes an error.

    But ultimately, the problem is that people are having twice as much money deducted from their accounts, regardless of how it happens in the course of this process. And as one of the banks that owns and operates Laser Card Services, Bank of Ireland has the ability to influence the governance and control of each step in the process.

    The Risk of Poor Information Quality

    Poor quality information is one of the key problems facing businesses today. A study by The Data Warehousing Institute back in 2002 put the cost to the US economy at over US$600 billion. Estimated error rates in databases across all industries and from countries around the world range between 10% and 35%. Certainly, at the dozens of conferences I’ve attended over the years, no-one has ever batted an eyelid when figures like this have been raised. On a few occasions delegates have wondered who the lucky guy was who only had 35% of his data of poor quality.

    The emerging Information Quality Management profession world wide is represented by the International Association for Information & Data Quality (IAIDQ).

    Information Quality is measured on a number of different attributes (some writers call these Dimensions). The most common attributes include:

    • Completeness (is all the information you need to have in a record there?)
    • Consistency (do the facts stack up against business rules you might apply – for example, do you have “males” with female honorifics? Do you have multiple transactions being registered against one account within seconds of each other or with the same time stamp?)
    • Conformity (again, a check against business rules – does the data conform to what you would expect? Letters in a field you expect to contain just numbers are a bad thing)
    • Level of duplication (simply put… how many of these things do you have two or more of? And is that a problem?)
    • Accuracy (how well does your data reflect the real-world entity or transaction that it is supposed to represent?)

    In models developed by researchers at MIT there are many more dimensions, including “believability”.
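
    As a rough illustration of how a few of these attributes turn into testable rules (the records and the rules themselves are invented for the example), completeness, conformity, and consistency checks can be expressed in a handful of lines:

    ```python
    # Invented customer records standing in for a slice of a bank's master data.
    records = [
        {"account": "ACC1", "honorific": "Mr",  "gender": "M", "balance": "1500.00"},
        {"account": "ACC2", "honorific": "Mrs", "gender": "M", "balance": "abc"},     # two problems
        {"account": "",     "honorific": "Ms",  "gender": "F", "balance": "75.20"},   # completeness
    ]

    def check(rec):
        issues = []
        if not rec["account"]:
            issues.append("completeness: missing account number")
        if not rec["balance"].replace(".", "", 1).isdigit():
            issues.append("conformity: balance is not numeric")
        if (rec["honorific"], rec["gender"]) in {("Mr", "F"), ("Mrs", "M"), ("Ms", "M")}:
            issues.append("consistency: honorific and gender disagree")
        return issues

    for rec in records:
        for issue in check(rec):
            print(rec["account"] or "<blank>", "->", issue)
    ```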

    In Risk Management there are three basic types of control:

    • Reactive (shit, something has gone wrong… fix it fast)
    • Detective (we’re looking out for things that could go wrong so we can fix them before they become a problem that has a significant impact)
    • Preventative (we are checking for things at the point of entry and we are not letting crud through).

    Within any information process there is the risk that the process won’t work the way the designers thought/hoped/planned/prayed (delete as appropriate) it would.  In an ideal world, information would go in one end (for example the fact that you had paid €50 for a pair of shoes in Clarks on O’Connell Street in Dublin on a given day) and would come out the other end either transformed into a new piece of knowledge through the addition of other facts and contexts (Clarks for example might have you on a Loyalty card scheme that tracks the type of shoes you buy) or simply wind up having the desired outcome… €50 taken from your account and €50 given to Clarks for the lovely pair of loafers you are loafing around in. This is what I term the “Happy Path Scenario”.

    However lurking in the wings like Edwardian stage villains is the risk that something may occur which results in a detour off that “Happy Path” on to what I have come to call the “Crappy Path”. The precise nature of this risk can depend on a number of factors. For example, in the Clarks example, they may have priced the shoes incorrectly in their store database resulting in the wrong amount being deducted from your account (if you didn’t spot it at the time). Or, where information is manually rekeyed by retailers, you may find yourself walking out of a shop with those shoes for a fraction of what they should have cost if the store clerk missed a zero when keying in the amount (€50.00 versus €5.00).

    Software upgrades or bugs in the software that moves the bits of facts around the various systems and processes can also conspire to tempt the process from the Happy Path. For example if, in the Laser card process, it was to be found that there was a bug that was simply sending the request for draw down of funds against a “hold” to a bank twice before the process to clear the “hold” was called, then that would explain the double dipping of accounts.

    However, software bugs usually (but not always) occur in response to a particular set of real-world operational circumstances.  Software testing is supposed to bring the software to as close to real-world conditions as possible. At the very least the types of “Happy Path” and “Crappy Path” scenarios that have been identified need to be tested for (but this requires a clear process focus view of how the software should work). Where the test environment doesn’t match the conditions (e.g. types of data) or other attributes (process scenarios) of the “real world” you wind up with a situation akin to what happened to Honda when they entered Formula 1 and spent buckets of cash on a new wind tunnel that didn’t come close to matching actual track conditions.

    This would be loosely akin to giving a child a biscuit and then promising them a second one if they tidied their room, but failing to actually check if the room was tidied before giving the biscuit. You are down two bikkies and the kid’s room still looks like a tip.

    In this case, there is inconsistency of information. The fact of two “draw downs” against the same “hold” is inconsistent. This is a scenario that software checks on the bank’s side could potentially detect and flag for review before processing. I am assuming of course that there is some form of reference for the “hold” that is placed on the customer’s account so that the batch processing knows to clear it when appropriate.
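
    Below is a minimal sketch of that kind of detective control, assuming (as above) that each draw-down carries a reference to the hold it is meant to clear. The record layout and values are invented for the example.

    ```python
    from collections import Counter

    # Invented settlement batch: each draw-down references the hold it is meant to clear.
    draw_downs = [
        {"hold_ref": "H-1001", "account": "ACC1", "amount": 50.00},
        {"hold_ref": "H-1002", "account": "ACC2", "amount": 23.50},
        {"hold_ref": "H-1001", "account": "ACC1", "amount": 50.00},  # second call against the same hold
    ]

    counts = Counter(d["hold_ref"] for d in draw_downs)
    flagged = [d for d in draw_downs if counts[d["hold_ref"]] > 1]

    # A detective control would route these for review instead of posting them to the account.
    for d in flagged:
        print("Hold", d["hold_ref"], "drawn down more than once for account", d["account"])
    ```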

    In the case of my horrid analogy, you just need to check within your own thought processes whether the possession of two biscuits is consistent with an untidy room. If not, then the second biscuit should be held back. This is a detective control. Checking the room and then chasing the kid around the house to get the biscuit back is a reactive control.

    Another potential risk that might arise is that the retailer may have failed to put a transaction through correctly and then failed to clear it correctly before putting through a second transaction for the same amount. This should, I believe, result in two “holds” for the exact same amount being placed on the customer’s account within seconds of each other. One of these holds would be correct and valid and the process should correctly deduct money and clear that hold. However it may be (and please bear in mind that at this point I am speculating based on experience not necessarily an in-depth insight into how Laser processing works) that the second hold is kept active and, in the absence of a correct clearance, it is processed through.

    This is a little more tricky to test for with reactive or detective controls. It is possible that I liked my shoes so much that I bought a second pair within 20 seconds of the first pair. Not probable, but possible. And with information quality and risk management ultimately you are dealing with probability. Because, as Sherlock Holmes says, when you have eliminated the impossible, what remains, no matter how improbable, is the truth.

    Where the retailer is creating “shadow transactions” the ideal control is to have the retailer properly trained to ensure consistent and correct processes are followed at all times. However, if we assume that a person validly submitting more than one transaction in the same shop for the same amount within a few moments does not conform with what we’d expect to happen, then one can construct a business rule that can be checked by software tools to pick out those types of transaction and prevent them going through to the stage of the process that takes money from the cardholder’s account.
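
    Such a rule is straightforward to express as a check over an authorisation feed. The sketch below is illustrative only; the field names, the 60-second window, and the data are all my own assumptions.

    ```python
    from datetime import datetime, timedelta
    from itertools import combinations

    WINDOW = timedelta(seconds=60)  # assumed threshold: same card, merchant, amount within a minute

    # Invented authorisation requests: (card, merchant, amount, timestamp)
    auths = [
        ("CARD1", "Clarks O'Connell St", 50.00, datetime(2009, 9, 21, 13, 5, 10)),
        ("CARD1", "Clarks O'Connell St", 50.00, datetime(2009, 9, 21, 13, 5, 40)),  # likely shadow txn
        ("CARD2", "Clarks O'Connell St", 50.00, datetime(2009, 9, 21, 13, 6, 0)),
    ]

    suspect = [
        (a, b) for a, b in combinations(auths, 2)
        if a[:3] == b[:3] and abs(a[3] - b[3]) <= WINDOW
    ]

    # A preventative control would park the second authorisation for confirmation rather than
    # letting it draw funds; the occasional genuine repeat purchase can be released on review.
    for a, b in suspect:
        print("Possible duplicate:", a[0], a[1], a[2], "at", a[3].time(), "and", b[3].time())
    ```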

    Quite how these errors are then handled is another issue, however. Some of them (very few, I would suggest) would be valid transactions. And this again is where there is a balance between possibility and probability. It is possible that the transaction is valid, but it is more probable that it is an error. The larger the amount of the transaction, the more likely that it is an error (although I’ve lost track of how many times I’ve bought a Faberge egg on my Laser card only to crave another nanoseconds later).

    Another key area of control of these kinds of risk is, surprisingly, the humble call centre. Far too often organisations look on call centres as being mechanisms to push messages to customers. When a problem might exist, often the best way to assess the level of risk is to monitor what is coming into your call centres. Admittedly it is a reactive control once the problem has hit, but it can be used as a detective control if you monitor for “breaking news”, just as the Twitter community can often swarm around a particular hashtag.

    The Bank of Ireland situation

    The Bank of Ireland situation is one that suggests to me a failure of information governance and information risk management at some level at least.

    1. It seems that Call Centre staff were aware in May of a problem with double dipping of transactions. This wasn’t communicated to customers or the media at the time.
    2. There was some confusion in May about what the cause was. It was attributed variously to a software upgrade or retailers not doing their bit properly.
    3. Whatever the issue was in May, it was broken in the media in September as an issue that was only affecting recent transactions.

    To me, this suggests that there was a problem with the software in May and a decision was taken to roll back that software change.

    • Where was the “detective” control of Software Testing in May?
    • If the software was tested, what “Crappy Path” scenarios were missed from the test pack or test environment that exposed BOI customers (and potentially customers of the other banks who are part of Laser) to this double dipping?
    • If BOI were confident that it was Retailers not following processes, why did they not design effective preventative controls or automated detective controls to find these types of error and automatically correct them before they became front page news?

    Unfortunately, if the Bank’s timeline and version of events are taken at face value, the September version of the software didn’t actually fix the bug or implement any form of effective control to prevent customers being overcharged.

    • What is the scenario that exists that eluded Bank of Ireland staff for 4 months?
    • If they have identified all the scenarios… was the software adequately tested, and is their test environment a close enough model of reality that they get “Ferrari” performance on the track rather than “Honda” performance?

    However, BOI’s response to this issue would seem to suggest an additional level of contributory cause which is probably more far reaching than a failure to test software or properly understand how the Laser systems are used and abused in “the wild” and ensure adequate controls are in place to manage and mitigate risks.

    A very significant piece of information about this entire situation is inconsistent for me. Bank of Ireland has stated that this problem arose over the past weekend and was identified by staff immediately. That sounds like a very robust control framework. However it is inconsistent with the fact that the issue was raised with the Bank in May by at least one customer, who wrote about it in a very popular and prominent Irish blog. At that time I also wrote to the Bank about this issue asking a series of very specific questions (incidentally, they were based on the type of questions I used to ask in my previous job when an issue was brought to our attention in a Compliance context).

    I was asked today if Simon’s case was possibly a once off. My response was to the effect that these are automated processes. If it happens once, one must assume that it has happened more than once.

    In statistical theory there is a formula called Poisson’s Rule. Simply put, if you select a record at random from a random sample of your data and you find an error in it, then there is a 95% probability that there will be other errors. Prudence would suggest that a larger sample be taken and further study be done before dismissing that error as a “once off”, particularly in automated structured processes. I believe that Simon’s case was simply that random selection falling into my lap and into the lap of the bank.
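
    I won’t labour the statistics, but a hedged back-of-the-envelope illustration makes the point (the error rates below are assumptions for the example, not measurements): once errors occur at even a modest rate, the chance that a problem seen once is genuinely a “once off” shrinks very quickly as you look at more transactions.

    ```python
    # If a fraction p of transactions are affected, the chance that n further random
    # transactions contain at least one more error is 1 - (1 - p) ** n.
    for p in (0.001, 0.01, 0.05):          # assumed error rates: 0.1%, 1%, 5%
        for n in (100, 1000):              # size of a follow-up sample
            prob = 1 - (1 - p) ** n
            print(f"error rate {p:.1%}, sample of {n}: P(at least one more error) = {prob:.0%}")
    ```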

    Ultimately, I can only feel now that Simon and I were fobbed off with a bland line. Perhaps it was a holding position while the Bank figured out what was going on and did further analysis and sampling of their data to get a handle on the size of the problem. However, if that was the case I would have expected the news reports today to have talked about an “intermittent issue which has been occurring since May of this year”, not a convenient and fortuitous “recent days”.

    Unfortunately this has the hallmark of a culture which calls on staff to protect the Bank and to deny the existence of a problem until the evidence is categorically staring them in the face. It is precisely this kind of culture which blinkers organisations to the true impact of information quality risks. It is precisely this kind of culture which was apparent from the positions taken by Irish banks (BOI included) in the run up to the Government Bank Guarantee Scheme and which continues to hover in the air as we move to the NAMA bailout.

    This kind of culture is anathema to transparent and reliable management of quality and risk.

    Conclusion

    We will probably never know exactly what the real root cause of the Bank of Ireland double dipping fiasco is. The Bank admitted today in the Irish Times that they were not sure what the cause was.

    Given that they don’t know what the cause was, and that there are differences of record between the Bank and its own customers as to when this issue first raised its head, it is clear that there are still further questions to ask and have answered about Bank of Ireland’s response to this issue. In my view it has been a clear demonstration of “mushroom management” of risk and information quality.

    Ultimately, I can only hope that other banks involved in Laser learn from BOI’s handling of this issue which, to my mind, has been poor. What is needed is:

    • A clear and transparent definition of the process by which a laser transaction goes from your fingers on the PIN number pad to your bank account. This should not be technical but should be simple, business process based, ideally using only lines and boxes to explain the process in lay-person’s terms.
    • This can then form the basis in Banks and audit functions for defining the “Happy Path” and “Crappy Path” scenarios as well as explaining to all actors involved what the impact of their contribution is to the end result (a customer who can pay their mortgage after having done their shopping for example)
    • Increased transparency and responsiveness on the part of the banks to reports of customer over charging. Other industries (and I think of telecommunication here) have significant statutory penalties where it is shown that there is systemic overcharging of customers. In Telco the fine is up to €5000 per incident and a corporate criminal conviction (and a resulting loss in government tendering opportunities). I would suggest that similar levels of penalties should be levied at the banks so that there is more than an “inconvenience cost” of refunds but an “opportunity cost” of screwing up.
    • A change in culture is needed, away from protecting the bank and towards ensuring that the customer is protected from risk. I am perfectly certain that individual managers and staff in the banks in question do their best to protect the customer from risk, but a fundamental change in culture is required to turn those people from heroes in the dark hours into simply good role models of “how we do things here”.

    There is a lot to be learned by all from this incident.

  • Bank of Ireland Double Charging

    I read with interest a story on the Irish Times website this morning about Bank of Ireland double charging customers for Laser transactions in “recent days”. What interested me is that this was not something that happened in “recent days”. Far from it.

    Back in May 2009, Simon over on Tuppenceworth.ie reported this problem to Bank of Ireland and blogged about his customer service experience. On foot of what Simon had written, I emailed Bank of Ireland to try and get details on the issue before I wrote it up over at IQTrainwrecks.com.

    The response I received from Bank of Ireland on the 4th of June was:

    When BoI receives an authorisation request from a retailer, a ‘hold’ is placed on those funds until the actual transaction is presented for payment. The transaction is posted to the customer’s account on receipt from the retailer.

    Relative to the number of transactions processed there are a very small number of instances where a transaction may appear twice. For example these may occur if the retailer inputted the wrong amount and then re-input the correct amount or the transaction is sent in error twice for authorisation. These types of transactions are not errors or a system issue created by the Bank. The Bank receives an authorisation request and subsequently places a hold on those funds. These types of transactions are not unique to Bank of Ireland.

    Bank of Ireland responds to all customer queries raised in connection with above.

    (I have the name and contact details of the Bank of Ireland Press Office person who sent me that response).

    So. Basically the response in June was “those kind of things happen. They happen in all banks. If customers complain to us we sort it out on a case by case basis”.

    These are the questions I submitted to BoI in June. The quote above was the response I received to these detailed questions. (more…)

  • Golden Databases – another quick return

    I just received an email from an information quality tool vendor. It was sent to an email address I had provided to them in my capacity as a Director of the IAIDQ as part of registering for events they had run.

    The opening line of the email reads:

    I’m writing to you as a follow-up your recent telephone conversation with an [name of company deleted] representative.

    Two small problems

    1. I haven’t had a telephone conversation with any representative from this company regarding any survey or anything else recently. (I did meet one of their Dublin based team for lunch about 6 weeks ago – does that count?)
    2. The personal data I provided to them was not provided for the purpose of being emailed about surveys. (But at least they have an opt out).

    I’m going to take a look at the survey but I bet you the €250 raffle prize for participants that my responses will be statistically irrelevant.

    For a start, the survey is about the importance of the investment in data in IT planning. I’ve never worked in the IT organisation of any business. I have been on the business side interacting with IT as a provider of services to me.

    Also, as a Director of the IAIDQ and as someone trying to set up an SME business, I am basically hands-on in all aspects of the business and implementation of systems (I was working at 2am this morning doing a facelift on my company website and working on a web project for the IAIDQ). So, my responses will be misleading from a statistical point of view.

    It looks like the company in question:

    • Had an email address for me.
    • Knew that my former employer was a customer (customer #1 for this particular company’s DQ offering).
    • Forgot that I’d told their Dublin team that I’d left.
    • Had failed to update their information about me.
    • Have recorded a contact with me but have recorded it incorrectly.

    Should I respond to the survey?  Will my responses be meaningful or just pointless crap that reduces the quality of the study?

    I’m going to ask a friend of mine to write a guest post here on survey design for market research and the importance of Information Quality.

  • Golden Databases – a slight return

    Last week I shared a cautionary note about companies relying on their under-touched and under-loved Customer databases to help drive their business as we hit the bottom of the recessionary curve. The elevator pitch synopsis… Caveat emptor – the data may not be what you think it is and you risk irritating your customers if they find errors about them in your data.

    Which brings me to Vodafone Ireland and the data they hold about me. I initially thought that the poor quality information they have about me existed only in the database being used to drive their “Mission Red” campaign. For those of you who aren’t aware, “Mission Red” is Vodafone Ireland’s high profile customer intimacy drive where they are asking customers to vote for their preference of add-on packages. Unfortunately, what I want isn’t listed under their options.

    What I want is for Vodafone Ireland to undo the unrequested gender reassignment they’ve subjected me to. (more…)