Retail Banking Archives – Datactics

Part 2: Self-service data improvement is the route to better data quality (8 October 2020)

The route to better data quality – It’s easy to say that planning a journey has been made far simpler since the introduction of live traffic information to navigation apps. You can now either get there faster, or at the very least phone ahead to explain how long you’ll be delayed.


It’s just as easy to say that we wouldn’t think of ignoring this kind of data. Last week’s blog looked at the reasons why measuring data quality matters for retail banks, but unless there is a strategy in place to act on the results, measurement is arguably meaningless.

Internal product owners and risk and compliance teams all need specific, robust data measurements: for analytics and innovation; to identify and serve customers; and to comply with the reams of rules and regulations handed down by regulatory bodies. Having identified a way of scoring the data, it would be equally bizarre to ignore the results.

However, navigating a smooth path in data management is hampered by a landscape that is vast, uncharted and increasingly archaic. Many executives of incumbent banks are rightly worried about the stability of their ageing systems and are finding themselves ill-equipped for a digital marketplace that is evolving with ever-increasing speed.

Key business aims – using data to achieve necessary cost savings and to grow revenues through intelligent analytics – snarl up against the sheer volume of human and financial resource needing to be ploughed into these systems, in an effort to keep them running, reduce the customer impact, and stave off regulatory pressure and painful bad press.

Meanwhile, for those who have them, data metrics are revealing quality problems, and fixing these issues tends to find its way into a one-off project that relies heavily on manual rules and even more manual re-keying into core systems. Very often, such projects have no capacity to continue that analysis, remediation or augmentation into the future, so over time data that was fixed at huge cost starts to decay again and the same cycle emerges.

But if your subject matter experts (SMEs) – your regulatory compliance specialists, product owners and marketing analytics professionals – could have cost-effective access to their data, it would put perfecting data in the hands of those who know what the data should look like and how it can be fixed.

If you install a targeted solution that can access external reference data sources, internal standards such as your data dictionary, and user and department-level information to identify the data owner, you can self-serve to fix the problems as they arise.

This can be done via a combination of SME review and machine learning technology that evolves to apply remedial activities automatically, because the rules created by correcting broken records contain the information required to fix other records that fail the same checks.
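
As a minimal sketch of that idea – illustrative only, with invented names and a hypothetical phone-format rule, not Datactics’ actual implementation – a correction captured from one SME review can become a reusable rule that automatically repairs other records failing the same check:

```python
import re
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Rule:
    """A data quality check plus any remedial action learned from SME corrections."""
    name: str
    check: Callable[[dict], bool]                 # True if the record passes
    fix: Optional[Callable[[dict], dict]] = None  # learned remediation, if any

# Hypothetical rule: UK phone numbers should be stored as +44 plus ten digits.
def phone_ok(record: dict) -> bool:
    return bool(re.fullmatch(r"\+44\d{10}", record.get("phone", "")))

def fix_phone(record: dict) -> dict:
    """Remediation captured from an SME correcting one broken record:
    strip whitespace and rewrite a leading 0 as +44."""
    digits = re.sub(r"\s+", "", record.get("phone", ""))
    if digits.startswith("0"):
        record["phone"] = "+44" + digits[1:]
    return record

rules = [Rule("uk_phone_format", phone_ok, fix_phone)]

def remediate(records: list[dict]) -> list[dict]:
    # Apply each learned fix to every record failing the same rule;
    # anything still failing afterwards goes back to SME review.
    for record in records:
        for rule in rules:
            if not rule.check(record) and rule.fix:
                rule.fix(record)
    return records

print(remediate([{"phone": "028 9099 9999"}]))  # [{'phone': '+442890999999'}]
```

In practice the learned fixes would be far richer – reference-data lookups, address standardisation and the like – but the loop of correct-once, apply-many is the essence of it.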

It might sound like futuristic hype – because AI is so hot right now – but this is a very practical example of how new technology can address a real and immediate problem, and in doing so complement the bank’s overarching data governance framework.

It means that the constant push towards optimised customer journeys and propositions, increased regulatory compliance, and IT transformation can rely on regularly-perfected data at a granular, departmental level, rather than lifting and dropping compromised or out-of-date datasets.

Then the current frustration at delays in simply getting to use data can be avoided, and cost-effective, meaningful results for the business can be delivered in days or weeks rather than months or years.

Head over to the next part, ‘Build vs Buy – Off-the-shelf or do-it-yourself?’, or click here for Part 1 of this blog, covering the need for data quality metrics in retail banking.


Matt Flenley is currently plying his trade as chief analogy provider at Datactics. If your data quality is keeping you awake at night, check out Self-Service Data Quality™, our award-winning interactive data quality analysis and reporting tool, built to be used by business teams who aren’t necessarily programmers.

Part 3: Build vs Buy – Off-the-shelf or do-it-yourself? (8 October 2020)

Build vs Buy? The 1970 space mission Apollo 13 famously featured one of the finest pieces of patched-together engineering ever seen on – or indeed above – this planet: making a square peg fit a round hole.

In Houston, legions of expert NASA engineers worked alongside the reserve crew to improvise a way of fitting a square type of carbon dioxide filter into a system designed for circular cartridges, using only what the astronauts had on board: space suit hoses, pieces of card, plastic bags and lots of silver duct tape.

Less famously, this fascinating, life-saving demonstration of ingenuity burned up in the atmosphere shortly after the crew left it behind on their safe return to Earth.

Now, retail banks might never be faced with genuine life and death situations, but they are frequently challenged by problems which draw teams of engineers and operational staff into War Room settings, amid a search for a fix, workaround or hack that will save the day.

When faced with a new regulation or demand on the vast reams of data held by the bank, the temptation can be to follow the same pattern and assemble the consultants, engineers and specialists to try and figure out how to build a solution from the available knowledge, systems and parts.

But what if the answer to the problem could be bought instead – off-the-shelf and ready to go? What if it was a simple case of plugging in an existing, scoped and developed solution?

The pros and cons of build vs buy

It’s fair to say that any decision on building or buying a data quality solution needs to satisfactorily answer the following questions:

1. Will this do what I need, or just what I can do?

Your internal programmers and developers are well able to adapt your systems to add new data fields, system requirements or processes, but will these deliver the results the business wants to achieve, and to a known level of accuracy? For instance, if your existing data records are not cross-referenced against external data sources to aid de-duplication and augmentation, will it be cost-effective or indeed possible to develop that capability internally, or should a plug-and-play option be considered instead? Then, once you have the deployment in place, can your end users adapt the solution themselves, or will every change have to go back to IT?

2. Will it deploy correctly?

Many IT changes are delayed because existing systems are rarely fully understood by everyone. Since a do-it-yourself build adapts core (and very frequently old) systems, it will require a significant level of scoping to understand how the processes currently work and to establish that no downstream impacts will occur. A bought solution designed for a specific task can sit alongside core systems and interface to the desired level; that way, differences in how data is used between completely different parts of the business need not impact customers or regulatory reporting further downstream.

3. Will it be possible to measure what is happening?

Internal teams will be able to assess the requirements and deliver a solution. After all, that’s why you continue to hire them! But can they measure the quality of the data, advise on its condition and conformity to your own or external standards, and give you guidance as to how to fix anything that doesn’t fit? If you can’t view or report on whether the data is improving or deteriorating over time, the objective becomes pretty meaningless. (The first blog looked at why data quality measurement is something that cannot be ignored in retail banking.)

4. What training and support will be required?

Leveraging existing systems should mean little training is required, as users are familiar with the current setup. But how will the end user actually use the solution, and through what interface? What level of knowledge or interpretative skill will be required? What process manuals need to be written, tested and maintained? Would buying a solution prove more practical if training manuals are already part of the offering on the table? Will your existing development teams also provide support, or does the external vendor offer this as part of their deployment?

5. What budget is available?

Build-it-yourself can leverage staff already hired into key roles, as well as knowledge of operating systems. This can make the build option seem more cost-effective, but if that were truly the case, it’s arguable that the fintech world wouldn’t exist: banks would simply do all their own development.

The temptation can be to try and find one solution to solve all problems at an enterprise level, but being specific about one critical issue and running a proof of concept can lead to a better understanding of the problem, and of the relative merits and demerits of the built and bought options. The budget ought also to include the cost of what could be incurred if things go wrong: how readily can the change be backed out, and what customer impact could occur?

In summary, the build vs buy decision needs to take into account not just the time it will take to investigate and deploy the change, but also the subsequent time in maintenance and updates for development teams and downstream users.

If your internal solution delivers only what you can do and not what you need to do, then some level of manual workaround – interrogating data or running queries – may well still be required, and the risk is that your people end up doing what the machine should be doing rather than being freed up for activities that grow the business.

When it came to planning Apollo 14, NASA didn’t install the duct-tape workaround; instead, they started afresh with a redesigned system. Life or death may not be on the cards for a bank, but profitability, capability and compliance always are; against that backdrop, the choice to build vs buy is rarely simple, but always critical.

Click here for Parts 1 and 2 of this blog, taking an in-depth look at the need for data quality metrics in retail banking.

What is driving the need for Data Quality Metrics in Retail Banking? (1 June 2020)

Retail Banking Part 1 – Marathons present a titanic challenge that has long dominated the human psyche: the will to finish, to get over the line, to win. Whether in-person or virtual, the London Marathon is no different!


In previous years, the official app has contained a feature to help friends, family and supporters locate their chosen runner by name or vest number, and so track their progress at every stage of their torturous journey of 26.2 miles.

It’s said that many runners hit “the wall” at around 22 miles, so very often loved ones wait at that marker to help spur them on to the finish a little over four miles away. But imagine for a moment that there are no distance markers, no Fitbits and no stopwatches: over 38,000 runners crowding the starting line with no way of knowing critical data dimensions such as where they are, the pace at which they’re running, or how long it will take to finish.

It seems absurd to think that anyone would choose that way of running such a race, yet this is the decision being made by any retail bank not currently using metrics to measure its customer and financial data quality. Understanding the condition, accuracy, quality and maturity of datasets across the vast array of products and service channels is impossible if these elements aren’t actually being measured.

At this point, retail banks can quite rightly point to four major issues confronting them, ones very often seen as barriers to that holy grail of data management: a timely, accurate and complete single view of a personal banking customer.

1. Size and scale

For starters, the amount of customer data to be measured is usually vast. One data record for a personal current account customer might include first name, middle name, last name, current address (five lines), previous address (another five lines), phone number, mobile number, email, employer and their address (five more lines), National Insurance number, date of birth, dependants, spouse… and that’s before their transactional data, banking IDs, credit card accounts and so on are included! The opportunities for data quality issues to exist are tremendous.
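
To make the scale concrete, here’s a small, purely illustrative sketch (the field names are invented) of how many attributes a single record carries before transactional data is even considered, with a naive completeness score across them:

```python
# Illustrative only: invented field names for a simplified personal current
# account record, and a naive completeness score across them.
CUSTOMER_FIELDS = (
    ["first_name", "middle_name", "last_name"]
    + [f"current_address_{i}" for i in range(1, 6)]
    + [f"previous_address_{i}" for i in range(1, 6)]
    + ["phone", "mobile", "email", "employer"]
    + [f"employer_address_{i}" for i in range(1, 6)]
    + ["ni_number", "date_of_birth", "dependants", "spouse"]
)

def completeness(record: dict) -> float:
    """Fraction of expected fields that are actually populated."""
    return sum(1 for f in CUSTOMER_FIELDS if record.get(f)) / len(CUSTOMER_FIELDS)

print(len(CUSTOMER_FIELDS))  # 26 fields before any transactional data is counted
```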

2. IT infrastructure

Further complicating matters, many retail banks operate systems which are decades old and highly resistant to change. New products and services demand features old systems can’t deliver; mergers and acquisitions bring in whole new datasets; and separate, siloed systems rarely categorise data in exactly the same way. Yet the IT department is frequently held responsible for owning data quality, without specialist knowledge of what the data is, what it seeks to represent and what purposes the bank has for it. IT has no authority or budget to change or improve the data, so even when it does report on it, usually all it can say is that the data is deteriorating.

3. Operational processes

A single view of the customer is fine in theory but continually compromised in reality: even if a customer’s data is fine today and entered into the best system money can buy, if it isn’t being measured and referenced regularly there’s no way of telling whether it’s better or worse than any of the rest of the data in the warehouse.

4. Regulation and the marketplace

On top of that, ‘big data’ remains big news and subject to never-ending scrutiny. Since the financial crisis, the importance of measuring data quality has swung from a nice-to-do to a must-do, with key dependencies increasingly assigned to compliance and risk functions. In retail banking, teams managing risk and compliance have seen their numbers increase by thousands of percent as they are increasingly regulated by the Financial Conduct Authority and Prudential Regulation Authority.

This is in stark contrast with the 3,303 bank branches closed in the UK in the past five years, whilst challenger banks start to pursue the opposite strategy. Put simply, if getting on top of big data is seen as too big and broad a problem, investment in data quality solutions quickly becomes a major, centralised IT infrastructure purchase with a correspondingly hefty price tag attached – and this multi-million, multi-year outlay makes it much harder to justify getting it done at all.


Getting tactical

This is where targeted, tactical data quality metrics can provide genuine and demonstrable insight. Quick wins that utilise industry-standard definitions of data quality (such as the Enterprise Data Management Council’s Data Management Capability Assessment Model, DCAM™) work alongside enterprise data management routines to solve specific data problems, meet changing regulations and free up resources to help the bank develop market-leading customer propositions.

Instead of being hamstrung by an unwieldy data warehouse, implementing data quality metrics that visualise critical quality dimensions such as conformity, completeness, and integrity of datasets can not only enhance compliance with regulatory obligations, but also yield an accurate picture of the current landscape and its progression over time.

It moves the needle from simply understanding whether a data element is right or wrong, to intelligent analysis of how right or wrong it is, and whether its quality is improving or decreasing. Insights like these are critical for commercial banks (as well as other financial services such as investment banks and wealth management) to identify poor data quality and also to solve real world problems.
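
As a rough illustration of that shift – the rules, fields and scores here are invented for the example, not a description of any particular product – a per-dimension snapshot can be computed for each day’s extract and tracked over time:

```python
import re
from datetime import date

# Invented conformity rules for two critical data elements.
RULES = {
    "email":    lambda v: bool(re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", v or "")),
    "postcode": lambda v: bool(re.fullmatch(r"[A-Z]{1,2}\d[A-Z\d]? ?\d[A-Z]{2}", v or "")),
}

def score_snapshot(records: list[dict]) -> dict:
    """Per-field completeness and conformity for one day's extract."""
    scores = {}
    for field, conforms in RULES.items():
        values = [r.get(field) for r in records]
        populated = [v for v in values if v]
        scores[field] = {
            "completeness": len(populated) / len(values),
            "conformity": sum(conforms(v) for v in populated) / max(len(populated), 1),
        }
    return scores

# Keeping one snapshot per day turns a right/wrong check into a trend:
# is each dimension improving or decaying over time?
history = {
    date(2020, 6, 1): score_snapshot([
        {"email": "a@b.com", "postcode": "BT1 1AA"},
        {"email": "not-an-email", "postcode": None},
    ]),
}
print(history)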

Thanks to ongoing data measurement, our marathon runners know where they are, how far they have to go, and how long it’s taken them. They know how they stack up against their competitors, updated to the second through constant data analysis and review. Being just as diligent and demanding detailed metrics on the condition of vital datasets may seem daunting when it comes to retail banking data, but it has to be the cornerstone of any rigorous approach to data quality management.

This blog is Part 1 of a series looking at how data quality in consumer banking can be reviewed, monitored and remediated. The next part will cover how banks can utilise their SME knowledge to adopt a self-service approach to data quality improvement.

Click here for the latest news from Datactics, or find us on social media.

5AMLD & Data Quality: new regulation, same problems? /blog/marketing-insights/5amld-data-quality-new-regulation-same-problems/ Mon, 27 Jan 2020 14:28:53 +0000 /5amld-data-quality-new-regulation-same-problems/ With the EU’s fifth Anti-Money Laundering directive (5AMLD) having gone live on the 10th January, Matt Flenley took some time with Alex Brown, CTO at ĢƵ, to find out what implications there are when it comes to data quality. Firstly, what do you think are the biggest impacts for firms? Naturally we’re going to focus […]


With the EU’s fifth Anti-Money Laundering Directive (5AMLD) having gone live on the 10th January, Matt Flenley took some time with Alex Brown, CTO at Datactics, to find out what implications there are when it comes to data quality.

Firstly, what do you think are the biggest impacts for firms?

Naturally we’re going to focus on data quality, and from that perspective the biggest challenge concerns ultimate beneficial owners, or UBOs. Being able to stand over the accuracy of the information a firm holds on those who have significant control of a company, trust or other legal entity is a massive challenge in itself; any one of input error, out-of-date information or intentional misdirection by bad actors – or a combination of all three – could lead to significant variances between what a firm thinks is accurate and what the truth really is. It really undermines a firm’s capacity to combat money laundering and comply with all associated regulations.

I read that member states must have beneficial ownership registers. Can’t regulated firms just check their records against what’s held there?

Yes, member states will be expected to have beneficial ownership registers that are publicly searchable, holding adequate, accurate and current information on corporate and other legal entities such as trusts. However, while some countries already have these in place, they can’t be seen as the golden source of accuracy and truth. As UK Companies House explains: “The fact that the information has been placed on the public record should not be taken to indicate that Companies House has verified or validated it in any way.” Clearly, there’s a significant imperative for data quality validation and verification in central records, and it won’t be enough just to compare what you have against what the Companies House record says.

What sort of approaches are you seeing firms taking to meet the data quality requirements of 5AMLD, and fight money laundering?

The options usually taken are to outsource, build or buy. Outsourcing due diligence activities to third parties definitely feels like the quickest fix, especially when a new regulation comes along; then it’s just down to managing SLAs between parties. Ultimately, though, there’s a risk that on its own it’s a sticking plaster that does nothing about the quality of the underlying data held by the firm. Much of what outsource partners need to do involves manual lookups of entity information and cross-referencing against multiple sources of data to determine the truth; it can be accurate, but it’s extremely time-consuming and costly as a result.

Building the technology stack is favoured by tech-heavy leaders who have invested significantly in their own IT capabilities. That approach can yield the data quality improvement needed but often the timescales needed to deliver all high-priority infrastructure projects simply won’t align with regulatory demands. Often this leaves teams relying on overtime to complete audit work manually via spreadsheets, and even with the best robotic processes to update data it can lead to a spiralling cost of compliance.

In the “Regtech” era, many providers offer parts of the compliance journey that can be bought off-the-shelf, though in reality this isn’t the normal pathway firms are taking. Whether that’s a cultural thing of simply needing to “get it done” or a reluctance to onboard more solutions, it can mean firms miss out on game-changing capabilities offered by Regtech startups and scaleups.

That’s true. I recently saw a demo of how ING Bank has developed a platform to “orchestrate” together a number of Regtech solution providers to help it with compliance. Do you see this as the way forward for 5AMLD?

It’s certainly one way of approaching it, though clearly ING has invested significantly in that platform. In the meantime, when it comes to getting the data right, we’ve already been asked to help firms resolve entity data duplication in their core systems and in those they have access to, including Companies House. Fuzzy matching is key to resolving these sorts of discrepancies and reducing manual workloads, and was central to an award-winning entry last year. It’s something we’ve been working on across some pretty massive regulated datasets for well over fifteen years, so for us of course it’s good to see the industry being switched on to the possibilities. Elsewhere, the FCA’s focus on preventing “phoenixing” is something that scalable fuzzy-match technology can really help with.

Where can people go to find out more about what Datactics does in this space?

Well, of course we’d be delighted to provide a demo, for which people can simply contact our sales team to set one up.

If you are looking at 5AMLD, there are a number of areas we can help with, particularly:

  • Entity data quality – both measurement and remediation – ensuring your entity data is up to scratch;
  • Matching entities in disparate data silos with AI-powered human-in-the-loop entity resolution (for which we recently hosted a webinar).

We’ve also developed some publicly-available showcases of our software around matching for sanctions screening; it’s not 5AMLD reporting, but it clearly demonstrates how multiple records for sanctioned individuals can be mistyped, out of date or intentionally obscured – yet still be fuzzy-matched on metadata, with an accompanying confidence score.
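
For a flavour of the idea, here’s a deliberately crude sketch using only the Python standard library – nothing like a production screening engine – in which a similarity ratio over normalised, token-sorted names gives each candidate a confidence score:

```python
from difflib import SequenceMatcher

def normalise(name: str) -> str:
    # Lower-case, drop punctuation, and sort tokens so "Sidorov, Ivan"
    # and "Ivan Sidorov" compare equal (a crude token-sort approach).
    cleaned = name.lower().replace(".", " ").replace(",", " ")
    return " ".join(sorted(cleaned.split()))

def match_confidence(candidate: str, watchlist_entry: str) -> float:
    """Similarity between a customer record and a sanctions-list entry, 0.0-1.0."""
    return SequenceMatcher(None, normalise(candidate), normalise(watchlist_entry)).ratio()

# Mistyped or reordered variants still score well against the canonical
# entry, while an unrelated name scores low.
entry = "Ivan Petrovich Sidorov"
for candidate in ["Ivan P. Sidorov", "Sidorov, Ivan", "Joan Smith"]:
    print(f"{candidate!r}: {match_confidence(candidate, entry):.2f}")
```

Real engines use far more sophisticated normalisation, phonetics and metadata, but the confidence-scored output is the same shape.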

Additionally, our LEI Match Engine does a similar job for entities, fuzzy-matching to GLEIF’s list of Legal Entity Identifier information. Both are free to use.

You are what you eat (6 February 2018)

Congratulations, blog fans! We’ve made it to the end (well almost, you have to read this one first).

Now, if you’ve ever worked with or for a bank, you’ll know that over the years a bank’s data collection becomes huge. There’s loads of it. An amount I believe is scientifically referred to as “oodles of data”. It’s all stored somewhere, whether in data stores locally or offshore, or master data management systems in a private cloud, or a combination of most of the above.

Where’s your bank’s data? Probably a combination of all of the above – and it’s in loads of different formats.

This is because making data consistent is hard, and matching data is hard; implementing new policies around the collection and storage of data is harder still.

So, when the bank eventually sets up a major IT project to replace its thirty data management systems with a new one, in time what tends to happen instead is that the new system becomes number thirty-one.

But for any of our banks who really want to make New Year’s Resolutions they can achieve, it is time to stop thinking of data quality as something for IT to sort, and to realise that data quality is a business asset.

If the data quality is poor, questionable or unknown, it should be considered equally as damaging as, say, credit risk would be.

The theoretical jump to this mindset is reasonably easy; bringing it into reality is trickier. IT departments are typically tasked with storing and securing the data, yet business units want to use, analyse and evolve it – opposing views which can cause tension.

It’s the same idea as committing to eat calorie-controlled lunches, but then getting home and eating the cream-and-deliciousness dinner your flatmate or partner had left over from the night before: unless the policy applies across the board, compromises will ensue and goals will be missed.

If the bank treats its data as an asset and makes perfecting data a responsibility of every business unit and individual, it means that everyone is focusing on addressing the core problem and making the data the best it can be. But in order to do this, the business units are going to need tools they can use themselves, without expecting developers or programmers to do it for them: after all, banks can hardly be expected to roll out a developer per team to code and script bespoke rule sets!

These tools have to be able to measure the underlying data in terms of absolute quality, and also be tailored to each team’s individual requirements. In that respect, it’s no different to the importance of knowing precisely how bad one of those delicious doughnuts is going to be in terms of sugar, fat and carbohydrates: everything must be measurable if a beneficial and life-changing adjustment is going to be made.

In the EU there are regulations requiring food packaging to display key nutritional information per set quantities (100g or 100ml, roughly 3.5oz or fl oz), to enable better-informed decisions to be made. Banks seeking an equivalent measure for data quality can use something like the DCAM™ standard to give clear definitions of data quality, so that a baseline measurement of the underlying health of the data at hand can be established.

This way, the bank can monitor what it’s eating and what it’s already eaten; it applies equally to measuring the data already in the system and the new information the bank hoovers up each day.
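
A minimal sketch of that principle, with invented rules: the same rule set scores the data already in the warehouse (the nightly batch) and flags each new record as it arrives:

```python
# Invented rules; the point is that one rule set serves two jobs.
RULES = {
    "sort_code": lambda v: bool(v) and v.replace("-", "").isdigit()
                               and len(v.replace("-", "")) == 6,
    "email":     lambda v: bool(v) and "@" in v,
}

def failures(record: dict) -> list[str]:
    """Names of fields that fail their rule for a single record."""
    return [field for field, ok in RULES.items() if not ok(record.get(field))]

def nightly_batch(store: list[dict]) -> float:
    """Data already eaten: fraction of stored records passing every rule."""
    return sum(not failures(r) for r in store) / len(store)

def on_ingest(record: dict) -> dict:
    """Data being eaten today: flag problems before the record lands."""
    record["dq_failures"] = failures(record)
    return record

store = [{"sort_code": "98-76-54", "email": "c@d.com"},
         {"sort_code": "12345", "email": "no-at-sign"}]
print(nightly_batch(store))                                  # 0.5
print(on_ingest({"sort_code": "11-22-33", "email": "e@f.com"}))
```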

Lastly, these tools need to connect just as easily with the bank’s data systems, because “rip and replace” is an extremely costly activity to get right. Fast plug-in connectivity also means that, when plugged into a remediation loop, a natural virtuous circle is set up that shores up and improves the bank’s data assets.

For this and all our resolutions – whether perfecting data quality or eating more healthily – all that remains to achieve these in the near term is the classic final line from the bottom of any diet recommendation:

Willpower Required!

If you’ve read something here that’s piqued your interest – even if it’s just in doughnuts, biscuits or The Crown – get in touch and we’ll have a chat.

Danske Bank selects Datactics self-service data quality for regulatory compliance (18 January 2018)

New contract delivers optimised single customer view and aids compliance with Financial Services Compensation Scheme (FSCS)


Belfast, Dublin, London, January 2018 – Following a highly successful Proof of Concept, Danske Bank UK has chosen Datactics’ award-winning data quality software tools to deliver enhancements to its single customer view processes for customers in Northern Ireland.

Note: We are delighted to add that this implementation won the Bobsguide Partnership Award for Best Regulatory Implementation in 2019.

The new deal sees Danske Bank implementing Datactics’ Self-Service Data Quality solution to provide greater assurance of compliance with FSCS requirements for accuracy of customer data, and better customer experiences through higher levels of data quality.

Self-Service Data Quality is a simple add-on to Danske Bank’s systems, running each night against a rule set extracted from the FSCS regulation, delivering actionable insights into the health of its customer data records and enabling the Bank to cleanse records that fail data quality rules.

Danske Bank, part of the Danske Bank Group, is one of the leading banks in Northern Ireland. Datactics is focused purely on data quality, especially within the financial services sector and particularly for the purpose of regulatory requirements, making this a logical move for both parties.

Stuart Harvey (L), CEO of Datactics, and Marion Rybnikar (R), Head of Data at Danske Bank UK

“Self-Service Data Quality is fast and easy to use, allowing us to properly cleanse and match our data to one single customer view ahead of submission to the FSCS,” says Marion Rybnikar, Danske’s Head of Data. “On top of this, its usability means our SMEs can now develop data quality rules themselves – extending the functionality to multiple regulatory requirements and broader data quality and governance applications – and automatically generate meaningful interactive reports in Tableau, our house reporting tool. It’s the quality and accuracy of these outputs that ensure we can save time, improve our data and enhance the efficiency of our processes.

“Through our partnership with ĢƵ, we are maintaining best practice, and improving our data records to aid customer care and retention.”

Stuart Harvey, Datactics CEO, adds: “We are delighted to be embarking on a new relationship with Danske Bank. Our Self-Service Data Quality software tool has been specifically designed to help financial organisations to ensure their business is able to meet existing regulations such as FSCS, as well as future ones. We know from our existing customers that it enables them to build deeper insights through the use of optimised data and clear, fine-level graphic dashboards employing the Enterprise Data Management Council’s DCAM standard for data quality dimensions, such as completeness, accuracy, timeliness and duplication.

“This new deal with Danske Bank is a clear demonstration of the value and efficiency our powerful data solutions bring to financial institutions, reducing the risk of financial penalties to banks by enabling compliance across the spectrum of regulation, from FSCS and Section 17 to MiFID II, BCBS 239 and beyond.”

Click to read Data Management Review’s interview with Stuart and Marion, to see the story on Banking Technology’s site, and for more information on how this will benefit Danske’s customer experiences.

Click here for the latest news from Datactics, or find us on social media.
