AI Data Readiness: A Privacy Day special /blog/ai-data-readiness-a-privacy-day-special/ Thu, 29 Jan 2026

The post AI Data Readiness: A Privacy Day special appeared first on Datactics.

“This convenience is appealing, but the uncertainty is risky. When AI is everywhere, it becomes harder to recognise when you’re passing sensitive data to a system that may not be the right place for the task.”

Hubert Składanowski, Machine Learning Manager at Datactics, recently penned an article for Techerati on Data Privacy Day. Click the link below the graphic to read the article in full.

Read the article.

AI Unhappy? Stuart Harvey on data’s critical path to success /blog/ai-unhappy-stuart-harvey-on-datas-critical-path-to-success/ Tue, 27 Jan 2026

The post AI Unhappy? Stuart Harvey on data’s critical path to success appeared first on Datactics.

The IoT Insider has shared insight from Datactics CEO Stuart Harvey on how “More than 80% of AI projects fail, not because the models are flawed, but because the data isn’t ready.”

Head on over to IoT Insider to read the full article.

Coverage in i-Invest Online: FSCS Data Readiness /blog/coverage-in-i-invest-online-fscs-data-readiness/ Tue, 27 Jan 2026

The post Coverage in i-Invest Online: FSCS Data Readiness appeared first on Datactics.

Head of Product and Marketing, Matt Flenley, recently penned an article on the data foundations behind financial services compliance.

Read the article in full here.

And to explore the benefits that Datactics FSCS Data Readiness can offer, head to /get-data-readiness-fscs/.

Press Release: Digital Data Validation Sandbox /press-releases/press-release-digital-data-validation-sandbox/ Mon, 14 Apr 2025

The post Press Release: Digital Data Validation Sandbox appeared first on Datactics.

Datactics supports digital transformation for Investec in the UK with FSCS digital data validation sandbox

London, 14th March 2025

Datactics has developed a digital data validation sandbox for the Financial Services Compensation Scheme (FSCS) designed to help assess data systems and processes prior to regulatory audits.

Investec, a leading international financial services provider, adheres to multiple regulatory standards, including those set by the FSCS, which protects depositors by guaranteeing savings of up to £85,000 per person if a regulated firm is unable to meet claims.

The Prudential Regulation Authority (PRA) conducts regular assessments of Investec’s compliance with the FSCS rules, which necessitates quick access to accurate data.

The Datactics sandbox is designed to help streamline Investec’s data management processes, focusing on data quality to prepare for these assessments. The sandbox has features such as rule matching, data categorisation, and data de-duplication to support compliance with FSCS regulations.

Cherie Lefever, FSCS Coordinator in Investec’s UK Central Compliance Team, has developed rules and exceptions to expedite data management and meet the PRA’s auditing requirements efficiently.

Cherie Lefever stated: “Streamlined data is crucial for complying with the PRA’s Depositor Protection Rules. The collaboration with Datactics has further simplified our data management processes, allowing us to ensure we maintain high standards of data quality and integrity.”

Stuart Harvey, CEO of Datactics, commented: “Financial institutions like Investec manage vast amounts of client and internal data, necessitating robust data management, governance, and quality capabilities. With the FSCS requiring regular audits, access to streamlined data management tools enhances banks’ readiness for these assessments. Our digital sandbox solution empowers firms to evaluate their data systems in preparation for regulatory audits.”

some data points and a cube representing the Data Validation Sandbox

Datactics Snaps Up Award For Regulator-Ready Data /blog/datactics-snaps-up-award-for-regulator-ready-data/ Thu, 21 Nov 2024

The post Datactics Snaps Up Award For Regulator-Ready Data appeared first on Datactics.

New York, Nov 21st 2024 

Datactics has secured the ‘Most Innovative Technology for Regulatory Compliance’ award at this year’s A-Team Group RegTech Insight Awards USA.

The Belfast-based firm, which has made its name specializing in innovative solutions to challenging data problems, developed its Data Readiness solution in response to the prevalence of data-driven regulations across both sides of the Atlantic. 

Data Readiness grew out of Datactics’ close work with banking customers in the UK, primarily around data used to identify and report on customer deposits in line with stringent regulation from the Financial Conduct Authority (FCA) and Prudential Regulation Authority (PRA).

In 2024, Datactics developed Data Readiness as a defined solution to measure and prepare data to specific regulatory standards, offering it with a range of deployment options, including as-a-service.

Behind the solution

Matt Flenley, Head of Marketing at Datactics, noted, “This award is brilliant news for all our talented developers and engineers back home. We’ve a long record of working alongside customers to help them with specific problems they’re encountering, and for which the risk of bad data affecting their ability to demonstrate compliance is often significant.

“With our experience in developing rule-sets for customers seeking to comply with UK depositor protection regulations, alongside international standards such as BCBS 239, we felt the time was right to offer this solution as its own piece of regulatory compliance technology.

“We’d like to thank the A-Team panel and all those who saw fit to recognise the approach we’ve taken here – thank you all so much!” 

Angela Wilbraham, CEO at A-Team Group, and host of the 4th annual RegTech Insight Awards USA 2024, commented, “Congratulations to Datactics for winning the Most Innovative Technology for Regulatory Compliance award in this year’s A-Team Group Awards USA 2024.

“These awards celebrate providers of leading RegTech solutions, services and consultancy and are uniquely designed to recognise both start-up and established providers who are creatively finding solutions to help with regulatory challenges, and span a wide range of regulatory requirements.

“Our congratulations for their achievement in winning this award in a highly competitive contest.” 

For more on Datactics, visit www.datactics/get-data-readiness

For more on A-Team Group, visit   

Nightmare on LLM Street: How To Prevent Poor Data Haunting AI /blog/nightmare-on-llm-street/ Fri, 25 Oct 2024 Why risk taking a chance on poor data for training AI? If it’s keeping you awake at night, read on for a strategy to overcome the nightmare scenarios!

The post Nightmare on LLM Street: How To Prevent Poor Data Haunting AI appeared first on Datactics.

How to prevent poor data haunting AI

It’s October, the Northern Hemisphere nights are drawing in, and for many it’s the time of year when things take a scarier turn. But for public sector leaders exploring AI, that fright need not apply to your data. It definitely shouldn’t be something that haunts your digital transformation dreams.

With a reported £800m budget unveiled by the previous government to address ‘digital and AI’, UK public sector departments are keen to be the first to explore the sizeable benefits that AI and automation offer. The change of government in July 2024 has done nothing to indicate that this drive has lessened in any way; in fact, the Labour manifesto included the commitment to a “single unique identifier” to “better support children and families”[1].

While we await the first Budget of this Labour government, it’s beyond doubt that there is an urgent need to tackle this task amid a cost-of-living crisis, with economies still trying to recover from the economic shock of COVID and dealing with energy price hikes against a backdrop of several sizeable international conflicts.

However, like Hollywood’s best Halloween villains, old systems, disconnected data, and a lack of standardisation are looming large in the background.

Acting First and Thinking Later

It’s completely understandable that the pressures would lead us to this point. Societal expectations from the emergence of ChatGPT, among others, have only fanned the flames, swelling the sense that technology should just ‘work’ and leading to an overinflated belief in what is possible.

Recently, LinkedIn attracted some consternation[2] by automatically including members’ data in its AI models without seeking express consent first. For whatever reason, the possibility that people would not simply accept this change was overlooked. It took the UK’s Information Commissioner’s Office, the ICO, to intervene for the change to be withdrawn – in the UK, at least.

A dose of reality is the order of the day. Government systems lack integrated data, and clear consent frameworks of the type that LinkedIn actually possesses seldom exist in one consistent form. Already short of funds, the public sector needs to act carefully, and mindfully, to prevent its AI experiments (which is, after all, what they are) from leading to inaccuracies and wider distrust among the general public.

One solution is for Government departments to form one, holistic set of consents concerning use of data for AI, especially Large Language Models and Generative AI – similar to communication consents under the General Data Protection Regulation, GDPR.

The adoption of a flexible consent management policy, one which can be updated and maintained for future developments and tied to an interoperable, standardised single view of citizen (SCV), will serve to support the clear, safe development of AI models into the future. The risks of building models now, on shakier foundations, will only serve to erode public faith. The evidence of the COVID-era exam grades fiasco[3] demonstrates the risk that these models present to real human lives.

Of course, it’s not easy to do. Many legacy systems contain names, addresses and other citizen data in a variety of formats. This makes it difficult to be sure that when more than one dataset includes a particular name, that name actually refers to the same individual. Traditional solutions to this problem use anything from direct matching technology to the truly awful exercise of humans manually reviewing tens of thousands of records in spreadsheets. This is one recurring nightmare that society really does need to stop having.

Taking Refuge in Safer Models

Intelligent data matching uses a variety of matching algorithms and well-established machine learning techniques to reconcile data held in old systems, new ones, documents, even voice notes. Such approaches could help the public sector to streamline their SCV processes, managing consents more effectively. The ability to understand who has opted in, marrying opt-ins and opt-outs to demographic data is critical. This approach will help model creators to interpret the inherent bias in the models built on those consenting to take part, to understand how reflective of society the predictive models are likely to be – including whether or not it is actually safe to use the model at all.
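As a rough illustration of the idea (a sketch, not Datactics’ actual matching technology, with invented names and an invented similarity threshold), a minimal fuzzy-matching pass over citizen names can be written with Python’s standard-library `difflib`:

```python
import difflib

def normalise(name: str) -> str:
    """Lower-case, strip punctuation and sort tokens so that
    'SMITH, John' and 'john smith' compare as equal."""
    cleaned = "".join(ch if ch.isalnum() or ch.isspace() else " " for ch in name.lower())
    return " ".join(sorted(cleaned.split()))

def match_score(a: str, b: str) -> float:
    """Similarity between two names, in the range [0, 1]."""
    return difflib.SequenceMatcher(None, normalise(a), normalise(b)).ratio()

def best_match(name: str, candidates: list, threshold: float = 0.85):
    """Return the best-scoring candidate above the threshold, else None."""
    score, winner = max((match_score(name, c), c) for c in candidates)
    return winner if score >= threshold else None
```

In practice a single-view-of-citizen pipeline would combine several such algorithms (exact, phonetic, numeric) and route borderline scores to human review rather than relying on one string metric.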

It’s probable that this transparency of process could also lead to greater willingness among the general public to take part in data sharing in this way. In the LinkedIn example, the news that data was being used without explicit consent raced around like wildfire on the platform itself. This sort of outcome cannot be what LinkedIn anticipated, which in and of itself raises a concern about the mindset of the model creators.

It Doesn’t Have to Be a Nightmare

It’s a spooky enough season without adding more fear to the bonfire; certainly, this article isn’t intended as a reprimand. The desire to save time and money to deliver better services to a country’s citizens is a major part of many a civil servant’s professional drive. And AI and automation offer so many opportunities for much better outcomes! For just one example, NHS England’s AI tool already uses image recognition to detect heart disease up to 30 times faster than a human[4]. Mid and South Essex (MSE) NHS Foundation Trust used a predictive analytical machine learning model called Deep Medical to reduce the rate at which patients either didn’t attend appointments or cancelled with short notice (referred to as Did Not Attend, or DNA). Its pilot project identified which patients were more likely to fall into the DNA category, developed personalised reminder schedules, and through identifying frail patients who were less likely to attend an appointment, highlighted them to relevant clinical teams.[5]

The time for taking action is now. Public sector organisations, government departments and agencies should focus on the need to develop systems that will preserve and maintain trust in the AI-led future. This blog has shown that better is possible, through a dedicated effort to align citizen data with consents to contact. In a society where people have trust in, and transparency over, the ways their data will be used to train AI, the risk of nightmare scenarios can be averted and we’ll all sleep better at night.


[1]

[2]

[3]

[4]

[5]



Datactics Awards 2024: Celebrating Customer Innovation /blog/datactics-awards-2024-celebrating-customer-innovation/ Tue, 24 Sep 2024

The post Datactics Awards 2024: Celebrating Customer Innovation appeared first on Datactics.

In 2024, our customers have been busy delivering data-driven return on investment for their respective organisations. We wanted to recognise and praise their efforts in our first-ever Datactics Customer Awards!

The winners of the Datactics awards gather for a photograph.
Datactics Customer Awards winners 2024 gather for a group photo.
(From L to R: Erikas Rimkus, RBC Brewin Dolphin; Rachel Irving, Daryoush Mohammadi-Zaniani, Nick Jones and Tony Cole, NHS BSA; Lyndsay Shields, Danske Bank UK; Bobby McClung, Renfrewshire Health and Social Care Partnership). Not pictured: Solidatus.

The oak-panelled setting of historic Toynbee Hall provided the venue for the 2024 Datactics Summit, which this year carried a theme of ‘Data-Driven Return on Investment.’

Attendees gathered for guest speaker slots covering:

  • Danske Bank UK’s Lyndsay Shields presenting a ‘Data Management Playbook’ covering the experiences of beginning with a regulatory-driven change for FSCS compliance, through to broader internal evangelisation on the benefits of better data;
  • Datactics’ own data engineer, Eugene Coakley, in a lively discussion on the data driving sport, drawing from his past career as a professional athlete and Olympic rower with Team Ireland;
  • and Renfrewshire HSCP’s Bobby McClung explaining how automation, saving person-hours or even days in data remediation, was having a material impact on the level of care the organisation is now able to deliver to citizens making use of its critical services.

The Datactics Customer Awards in full

In recent months, the team at Datactics has worked to identify notable achievements in data over the past year. Matt Flenley, Head of Marketing at Datactics, presented each winner with a specific citation, quoted below.

Data Culture Champion of the Year – Lyndsay Shields, Danske Bank UK
Data Culture Champion Award graphic

“We’re delighted to be presenting Lyndsay with this award. As one of our longest-standing customers, Lyndsay has worked tirelessly to embed a positive data culture at Danske Bank UK. Her work in driving the data team has helped inform and guide data policy at group level, bringing up the standard of data management across Danske Bank.

“Today’s launch of the Playbook serves to showcase the work Lyndsay and her team have put into driving the culture at Danske Bank UK, and the wider culture across Danske Bank.”

Data-Driven Social Impact Award – Renfrewshire Health and Social Care Partnership
Data Driven Social Impact Award graphic

“Through targeted use of automation, Renfrewshire Health and Social Care Partnership has been able to make a material difference to the operational costs of local government care provision.

“Joe Deary’s early vision and enthusiasm for the programme, and the drive of the team under and alongside Bobby, has effectively connected data automation to societally-beneficial outcomes.”

Data Strategy Leader of the Year – RBC Brewin Dolphin
Data Strategy Leader of the Year Award graphic

“RBC Brewin Dolphin undertook a holistic data review towards the end of 2023, culminating in a set of proposals to create a rationalised data quality estate. The firm twinned this data strategy with technology innovation, including being early adopters of ADQ from Datactics. They overcame some sizeable hurdles, notably supporting Datactics in our early stages of deployment. Their commitment to being an ambitious, creative partner makes them stand out.

“At Datactics we’re delighted to be giving the team this award and would also like to thank them for being exemplars of patience in the way they have worked with us this year in particular.”

Datactics Award for Partner of the Year – Solidatus
Partner of the Year award graphic

“Solidatus and Datactics have been partners for the last two years, but it’s really in 2023-2024 that this partnership took off.

“Ever since we jointly supported Maybank, in Malaysia, in their data quality and data lineage programme, we have worked together on joint bids and supported one another in helping customers choose the ‘best of breed’ option in procuring data management technology. We look forward to our next engagements!”

Datactics Data Champion of the Year – NHS Business Services Authority
Data Champion of the Year Award graphic

“For all the efforts Tony, Nick and team have made to spread the word about doing more with data, we’d like to recognize NHS Business Services Authority with our Datactics Data Champion of the Year award.

“As well as their advocacy for our platform, applying it to identify opportunities for cost savings and efficiencies across the NHS, the team has regularly presented their work to other Government departments and acted as a reference client on multiple occasions. Their continued commitment to the centrality of data as a business resource is why they’re our final champions this year, the Datactics Data Champion 2024.”

Lyndsay Shields of Danske Bank celebrates winning her award.
Lyndsay from Danske Bank UK
Bobby McClung from Renfrewshire HSCP celebrates winning their award.
Bobby from Renfrewshire HSCP
Clive Mawdesley and Erikas Rimkus from RBC Brewin Dolphin celebrate winning their award
Erikas and Clive from RBC Brewin Dolphin
Winners from NHS BSA celebrate winning their award.
Tony, Rachel, Nick and Daryoush from NHS BSA

Toasting success at Potter & Reid

The event closed with its traditional visit to Shoreditch hot spot Potter & Reid. Over hand-picked canapés and sparkling drinks, attendees networked and mingled to share in the award winners’ achievements in demonstrating what data-driven culture and return on investment look like in practice. Keep an eye out for a taster video from this year’s event!

What is Data Quality and why does it matter? /glossary/what-is-data-quality/ Mon, 05 Aug 2024

The post What is Data Quality and why does it matter? appeared first on Datactics.


What is Data Quality and why does it matter?

Data Quality refers to how fit your data is for serving its intended purpose. Good quality data should be reliable, accurate and accessible.

What is Data Quality?

Good quality data allows organisations to make informed decisions and ensure regulatory compliance. Bad data should be viewed as at least as costly as any other type of debt. For highly regulated sectors such as government and financial services, achieving and maintaining good data quality is key to avoiding data breaches and regulatory fines.

As data is arguably the most valuable asset of any organisation, it pays to improve data quality through a combination of people, processes and technology. Data quality issues can include data duplication, incomplete fields or manual input (human) error. Identifying these errors by human eyes alone can take a significant amount of time. Utilising technology to automate data quality monitoring improves operational efficiency and reduces risk.

How to measure Data Quality

Data quality is typically measured against six main data quality dimensions: Accuracy, Completeness, Uniqueness, Timeliness, Validity (also known as Integrity) and Consistency.

These dimensions apply regardless of where the data physically resides and whether measurement is conducted on a batch or real-time basis (also known as scheduling or streaming). They help provide a consistent view of data quality across data lineage platforms and into data governance tools.

Accuracy

Data accuracy is the extent to which data represents the real-world scenario and conforms with a source that can be independently verified. For example, an email address incorrectly recorded in an email list can lead to a customer not receiving information, and an inaccurate birth date can deprive an employee of certain benefits. The accuracy of data is linked to how the data is preserved through its journey. Data accuracy can be supported through successful data governance and is essential for highly regulated industries such as finance and banking.

Completeness

Completeness measures whether the data can sufficiently guide and inform future business decisions: the proportion of required values that are actually reported. This dimension covers not only mandatory fields but, in some circumstances, optional values too.
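As a concrete illustration of the definition above (not the Datactics profiler itself, and with invented field names and records), completeness can be computed as the share of required values that are populated:

```python
def completeness(records, required_fields):
    """Percentage of required values that are populated (non-empty)."""
    total = len(records) * len(required_fields)
    if total == 0:
        return 100.0
    filled = sum(
        1
        for rec in records
        for field in required_fields
        if str(rec.get(field) or "").strip()  # treat None/"" as missing
    )
    return 100.0 * filled / total

customers = [
    {"name": "A N Example", "email": "a@example.com", "postcode": "BT1 1AA"},
    {"name": "B Example", "email": "", "postcode": None},
]
score = completeness(customers, ["name", "email", "postcode"])  # 4 of 6 values present
```

A real profiler would also distinguish mandatory from optional fields and report per-column breakdowns rather than a single figure.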

Uniqueness

Uniqueness means that a given entity exists just once in a dataset. Duplication is a huge issue and is common when integrating various data sets; the way to combat it is to ensure that the correct rules are applied when unifying candidate records. A high uniqueness score implies minimal duplicates, which in turn builds trust in data and analysis. Data uniqueness can improve data governance and speed up compliance.

Timeliness

Timeliness means data is updated frequently enough to meet business requirements. It is important to understand how often data changes and, consequently, how often it will need to be updated. Timeliness should be understood in terms of volatility.

Validity

Validity refers to whether data conforms to the expected type, range, format or precision; it is also referred to as data integrity. Any invalid data will affect the completeness of the data, so it is key to define rules that ignore or resolve invalid values in order to ensure completeness.
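Validity rules are usually expressed as format checks. The two rules below are deliberately simplified illustrations (real email and UK postcode validation is considerably stricter, and production tools use curated reference libraries rather than two regexes):

```python
import re

# Deliberately simplified format rules, for illustration only.
RULES = {
    "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
    "uk_postcode": re.compile(r"^[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}$", re.IGNORECASE),
}

def is_valid(field: str, value) -> bool:
    """True when the value matches the format rule defined for the field."""
    return bool(RULES[field].match(str(value or "")))
```

Records failing a rule like this would typically be routed to a remediation queue rather than silently dropped, so that validity problems do not quietly erode completeness.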

Consistency

Inconsistent data is one of the biggest challenges facing organisations, because inconsistent data is difficult to assess and requires planned testing across numerous data sets. Data consistency is often linked with another dimension, data accuracy; any data set scoring high in both will be a high-quality data set.
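Cross-source consistency can be tested by joining two datasets on an identifier and counting agreements on a shared field. The sources, field names and normalisation below are all hypothetical, a minimal sketch of the idea:

```python
def consistency(source_a, source_b, id_field, compare_field):
    """Percentage of shared records whose compare_field agrees across sources."""
    norm = lambda v: str(v or "").strip().lower()
    b_by_id = {rec[id_field]: rec for rec in source_b}
    shared = [rec for rec in source_a if rec[id_field] in b_by_id]
    if not shared:
        return 100.0
    agree = sum(
        1
        for rec in shared
        if norm(rec[compare_field]) == norm(b_by_id[rec[id_field]][compare_field])
    )
    return 100.0 * agree / len(shared)

crm = [{"id": 1, "city": "Belfast"}, {"id": 2, "city": "London"}]
billing = [{"id": 1, "city": " belfast "}, {"id": 2, "city": "Londonderry"}]
score = consistency(crm, billing, "id", "city")  # id 1 agrees, id 2 does not
```

This is why consistency testing has to be planned across datasets: the check only means something once you know which source, if either, is accurate.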

How does Datactics help with measuring Data Quality?

Datactics is a core component of any data quality strategy. The Self-Service Data Quality platform is fully interoperable with off-the-shelf business intelligence tools such as Power BI, MicroStrategy, Qlik and Tableau. This means that data stewards, Heads of Data and Chief Data Officers can rapidly integrate the platform to provide fine-detail dashboards on the health of data, measured to consistent data standards.

The platform enables data leaders to conduct a data quality assessment, understanding the health of data against business rules and highlighting areas of poor data quality against consistent data quality metrics.

These business rules can relate to how the data is to be viewed and used as it flows through an organisation, or at a policy level. For example, a customer’s credit rating or a company’s legal entity identifier (LEI).

Once a baseline has been established, the Datactics platform can perform data cleansing, with results over time displayed in data quality dashboards. These help data and business leaders to build the business case and secure buy-in for their overarching data management strategy.

What part does Machine Learning play?

Datactics uses Machine Learning (ML) techniques to propose fixes to broken data and uncover patterns and rules within the data itself. The approach Datactics employs is “fully-explainable” AI, ensuring humans in the loop can always understand why or how an AI or ML model has reached a specific decision.

Measuring data quality in an ML context therefore also refers to how well an ML model is monitored. This means that in practice, data quality measurement strays into an emerging trend of Data Observability: the knowledge at any point in time or location that the data – and its associated algorithms – is fit for purpose.

Data Observability, as a theme, has been explored further by Gartner and others, offering deeper insights into the overlap between these two subjects.

What Self-Service Data Quality from Datactics provides

The Datactics Self-Service Data Quality tool measures the six dimensions of data quality and more, including: Completeness, Referential Integrity, Correctness, Consistency, Currency and Timeliness.

Completeness – The DQ tool profiles data on ingestion and gives the user a report on percentage populated, along with data and character profiles of each column to quickly spot any missing attributes. Profiling operations to identify non-conforming code fields can be easily configured by the user in the GUI.

Referential Integrity – The DQ tool can identify links and relationships across sources with sophisticated exact, fuzzy, phonetic and numeric matching against any number of criteria, and check the integrity of fields as required.

Correctness – The DQ tool has a full suite of pre-built validation rules to measure against reference libraries or defined format/checksum combinations. New validation rules can easily be built and re-used.

Consistency – The DQ tool can measure data inconsistencies via many different built-in operations such as validation, matching, and filtering/searching. The rule outcome metadata can be analysed inside the tool to display the consistency of the data measured over time.

Currency – Measuring the difference between dates and finding inconsistencies is fully supported in the DQ tool. Dates in any format can be matched against each other or converted to POSIX time and compared against historical dates.
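The conversion described here, parsing dates held in mixed formats to POSIX time so they can be compared, can be sketched in a few lines. This is an illustration, not the DQ tool's implementation, and the list of supported formats is an assumption:

```python
from datetime import datetime, timezone

# Illustrative set of accepted input formats.
DATE_FORMATS = ("%Y-%m-%d", "%d/%m/%Y", "%d %b %Y")

def to_posix(text: str) -> float:
    """Parse a date in any supported format to POSIX time (UTC midnight)."""
    for fmt in DATE_FORMATS:
        try:
            return datetime.strptime(text, fmt).replace(tzinfo=timezone.utc).timestamp()
        except ValueError:
            continue
    raise ValueError(f"unrecognised date: {text!r}")

def same_day(a: str, b: str) -> bool:
    """True when two differently formatted dates refer to the same day."""
    return to_posix(a) == to_posix(b)
```

Converting everything to a single numeric representation first is what makes mixed-format comparison, and therefore inconsistency detection, tractable.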

Timeliness – The DQ tool can measure timeliness by utilising the highly customisable reference library to insert SLA reference points and comparing any action recorded against these SLAs with the powerful matching options available.
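Comparing recorded actions against SLA reference points reduces to a time-window check. A minimal sketch of the idea (the 24-hour SLA and function names are invented for illustration):

```python
from datetime import datetime, timedelta

SLA_WINDOW = timedelta(hours=24)  # hypothetical service-level agreement

def within_sla(event_time: datetime, action_time: datetime,
               sla: timedelta = SLA_WINDOW) -> bool:
    """True when the recorded action happened inside the SLA window."""
    return timedelta(0) <= (action_time - event_time) <= sla

def sla_compliance(pairs, sla: timedelta = SLA_WINDOW) -> float:
    """Percentage of (event, action) pairs meeting the SLA."""
    if not pairs:
        return 100.0
    met = sum(1 for event, action in pairs if within_sla(event, action, sla))
    return 100.0 * met / len(pairs)
```

Tracked over time, a figure like this becomes a timeliness metric suitable for the data quality dashboards described above.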

Our Self-Service Data Quality solution empowers business users to self-serve for high-quality data, saving time, reducing costs, and increasing profitability. It helps ensure accurate, consistent, compliant and complete data, enabling businesses to make better-informed decisions.

And for more from Datactics, find us on our social channels.

Got three minutes? Get all you need to know on ADQ! /blog/adq-in-three-minutes/ Wed, 17 Apr 2024

The post Got three minutes? Get all you need to know on ADQ! appeared first on Datactics.

To save you scrolling through our website for the essential info on ADQ, we’ve created this handy infographic.

Our quick ADQ in three minutes guide can be downloaded from the button below the graphic. Happy reading! As always, don’t hesitate to get in touch if you’re looking for an answer that you can’t find here. Simply hit ‘Contact us’ with your query and let us do the rest.

adq in three minutes part one: augmented data quality process from datactics - connect to data, profile data, leverage AI rule suggestion, configure controls.
adq in three minutes part 2: measure data health; get alerts and remediations; generate AI powered insights, and work towards a return on investment.

Wherever you are on your data journey, we have the expertise, the tooling and the guidance to help accelerate your data quality initiatives. From connecting to data sources, through rule building, measuring and into improving the quality of data your business relies on, let ADQ be your trusted partner.

If you would like to read some customer stories of how we’ve already achieved this, head on over to our Resources page where you’ll find a wide range of customer case studies, white papers, blogs and testimonials.

To get hold of this infographic, simply hit Download this! below.

Shaping the Future of Insurance: Insights from Tia Cheang /blog/shaping-the-future-of-insurance-with-tia-cheang/ Tue, 02 Apr 2024

The post Shaping the Future of Insurance: Insights from Tia Cheang appeared first on ĢƵ.

]]>

Tia Cheang, Director of IT Data and Information Services at Gallagher, recently gave an interview to Tech-Exec magazine, drawing on her knowledge and experience in shaping the future of the insurance industry at one of the world’s largest insurance brokers. You can read the article .

Tia is also one of DataIQ’s Most Influential People (congratulations, Tia!). We took the opportunity to ask Tia a few questions of our own, building on some of the themes from the Tech-Exec interview.

In the article with Tech-Exec, you touched on your background, your drive and ambition, and what led you to your current role at Gallagher. What are you most passionate about in this new role?

In 2023, I started working at Gallagher after having an extensive career in data in both public and private sectors. This job was a logical next step for me, as it resonates with my longstanding interest in utilising data in creative ways to bring about beneficial outcomes. I was eager to manage a comprehensive data transformation at Gallagher to prepare for the future, aligning with my interests and expertise.

I am responsible for leading our data strategy and developing a strong data culture. We wish to capitalise on data as a route to innovation and strategic decision-making. Our organisation is therefore creating an environment where data plays a crucial role in our business operations, to allow us to acquire new clients and accomplish significant results rapidly. The role offers an exciting opportunity to combine my skills and lead positive changes in our thinking towards data and its role in the future of insurance.

The transition to making data an integral part of business operations is often challenging. How have you found the experience? 

At Gallagher, our current data infrastructure faces the typical challenges that arise when a firm is expanding. Our data warehouses collect data from many sources, which mirrors the diverse aspects of our brokerage activities. These encompass internal systems, such as customer relationship management (CRM), brokerage systems, and other business applications. We handle multiple data types in our data estate, ranging from structured numerical data to unstructured text. The vast majority of our estate is currently hosted on-premise using Microsoft SQL Server technology; however, we also manage various other departmental data platforms such as QlikView.

“…we want data capabilities that provide flexibility and agility, to enable us to quickly react to new market opportunities.”

A key challenge we face is quickly incorporating new data sources obtained through our mergers and acquisitions activity. These problems affect our data management efforts in terms of migration, seamless data integration, maintaining data quality, and providing data accessibility.
To overcome this, we want data capabilities that provide flexibility and agility, to enable us to quickly react to new market opportunities. Consequently, we are implementing a worldwide data transformation to update our data technology, processes, and skills to provide support for this initiative. This transformation will move Gallagher data to the cloud, using Snowflake to leverage the scalability and elasticity of the platform for advanced analytics. Having this flexibility gives us a major advantage, offering computational resources where and when they are required.

How does this technology strategy align with your data strategy, and how do you plan to ensure data governance and compliance while implementing these solutions, especially in a highly-regulated industry like insurance?

Gallagher’s data strategy aims to position us as the leader in the insurance sector. By integrating our chosen solutions within the Snowflake platform, we strive to establish a higher standard in data-driven decision-making. 

This strategy involves incorporating data management tools such as Collibra, CluedIn, and ĢƵ into our re-platforming efforts, with a focus on ensuring the compatibility and interoperability of each component. We are aligning each tool’s capabilities with Snowflake’s powerful data lake functionality with the support of our consulting partners to ensure that our set of tools function seamlessly within Snowflake’s environment.

“…we are contemplating upcoming AI and automation regulations and considering how to futureproof our products and approaches…”

We are meticulously navigating the waters of data governance and compliance. We carefully plan each stage to ensure that all components of our data governance comply with the industry regulations and legislation of the specific region. For example, we are contemplating upcoming AI and automation regulations and considering how to futureproof our products and approaches to comply with them.

The success of our programme requires cooperation across our different global regions, stakeholders, and partners. We are rethinking our data governance using a bottom-up approach tailored to the specific features of our global insurance industry. We review our documentation and test the methods we use to ensure they comply with regulations and maintain proper checks and balances. We seek to understand the operational aspects of a process in real-world scenarios and evaluate its feasibility and scalability.

Could you expand on your choice of multiple solutions for data management technology? What made you go this route over a one-stop shop for all technologies?

We have selected “best of breed” solutions for data quality, data lineage, and Master Data Management (MDM), based on a requirement for specialised, high-performance tools. We concentrated on high-quality enterprise solutions for easy integration with our current technologies. Our main priorities were security, scalability, usability, and compatibility with our infrastructure.

By adopting this approach, we achieve enhanced specialisation and capabilities in each area, providing high-level performance. This strategy offers the necessary flexibility within the organisation to establish a unified data management ecosystem. This aligns with our strategic objectives, ensuring that our data management capability is scalable, secure, and adaptable.

Regarding the technologies we have selected, Collibra increases data transparency through efficient cataloguing and clear lineage; CluedIn ensures consistent and reliable data across systems; and ĢƵ is critical for maintaining high-quality data.

“As we venture into advanced analytics, the importance of our data quality increases.”

In ĢƵ’ case, it provides data cleansing tools that ensure the reliability and accuracy of our data, underpinning effective decision-making and strategic planning. The benefits of this are immense, enhancing operating efficiency, reducing errors, and enabling well-informed decisions. As we venture into advanced analytics, the importance of our data quality increases. Therefore, ĢƵ was one of the first technologies we started using.

We anticipate gaining substantial competitive advantages from our strategic investment, such as improved decision-making capabilities, operational efficiency, and greater customer insights for personalisation. Our ability to swiftly adapt to market changes is also boosted. Gallagher’s adoption of automation and AI technologies will also strengthen our position, ensuring we remain at the forefront of technological progress.

On Master Data Management (MDM), you referred to the importance of having dedicated technology for this purpose. How do you see MDM making a difference at Gallagher, and what approach are you taking?

Gallagher is deploying Master Data Management to provide a single customer view. We expect substantial improvements in operational efficiency and customer service when it is completed. This will improve processing efficiency by removing duplicate data and offering more comprehensive, actionable customer insights. These improvements will benefit the insurance brokerage business and will enable improved data monetisation and stronger compliance, eventually enhancing client experience and increasing operational efficiency.

Implementing MDM at Gallagher is foundational to our ability to enable global analytics and automation. To facilitate it, we need to create a unified, accurate, and accessible data environment. We plan to integrate MDM seamlessly with our existing data systems, leveraging tools like CluedIn to manage reference data efficiently. This approach ensures that our MDM solution supports our broader data strategy, enhancing our overall data architecture.

“By including data quality activities in our approach, we anticipate significant benefits from the MDM initiative.”

Data quality is crucial in Gallagher’s journey to achieve this, particularly in establishing a unified consumer view via MDM. Accurate and consistent data is essential for consolidating several client data sources into a master profile; without good data quality, the benefits of our transformation will be reduced. By including data quality activities in our approach, we anticipate significant benefits from the MDM initiative. We foresee a marked improvement in data accuracy and consistency throughout all business units. We want to empower users across the organisation to make more informed, data-driven decisions to facilitate growth. Furthermore, a single source of truth enables us to streamline our operations, leading to greater efficiencies by removing manual processes. Essentially, this strategic MDM implementation transforms data into a valuable asset that drives innovation and growth for Gallagher.

Looking to the future of insurance, what challenges do you foresee in technology, data and the insurance market?

Keeping up with the fast speed of technology changes can be challenging. We are conducting horizon scanning on new technologies to detect emerging trends. We wish to include new tools and processes that will complement and improve our current systems as they become ready.

“We prioritise the security of our data assets and our clients’ privacy because it is essential for our reputation and confidence in the market.”

Next is ensuring robust data security and compliance, particularly when considering legislation changes about AI and data protection. Our approach is to continuously strengthen our data policies as we grow and proactively manage our data. We prioritise the security of our data assets and our clients’ privacy because it is essential for our reputation and confidence in the market.

Finally, we work closely with our technology partners to leverage their expertise. This collaborative approach ensures that we take advantage of new technologies to their maximum capacity while preserving the integrity and effectiveness of our current systems.

Are there any other technologies or methodologies you are considering for improving data management in the future beyond what you have mentioned?

Beyond the technologies and strategies already mentioned, at Gallagher, we plan to align our data management practices with the principles outlined in DMBOK (the Data Management Body of Knowledge). This framework will ensure that our data management capabilities are not just technologically advanced but also adhere to the best practices and standards in the industry.

In addition to this, we are always on the lookout for emerging technologies and methodologies that could further enhance our data management. Whether it’s advancements in AI, machine learning, or new data governance frameworks, we are committed to exploring and adopting methodologies that can add value to our data management practices.

For more from Tia, you can find her on .



The post Shaping the Future of Insurance: Insights from Tia Cheang appeared first on ĢƵ.

]]>
What are Large Language Models (LLM) and GPTs? /glossary/what-are-large-language-models-llm-and-gpt/ Tue, 05 Mar 2024 10:57:56 +0000 /?p=24807 Data remediation: Identifying and correcting errors, inconsistencies and inaccuracies in data to ensure quality and accuracy.

The post What are Large Language Models (LLM) and GPTs? appeared first on ĢƵ.

]]>

What are Large Language Models (LLMs) and GPTs?

In today’s rapidly evolving digital landscape, two acronyms have been making waves across industries: LLMs and GPTs. But what do these terms really mean, and why are they becoming increasingly important?

an image depicting a road with a data management superhighway heading towards a future nexus point

As the digital age progresses, two terms frequently emerge across various discussions and applications: LLMs (Large Language Models) and GPTs (Generative Pre-trained Transformers). Both are at the forefront of artificial intelligence, driving innovations and reshaping human interaction with technology.

Large Language Models (LLMs)

LLMs are advanced AI systems trained on extensive datasets, enabling them to understand and generate human-like text. They can perform tasks such as translation, summarisation, and content creation, mimicking human language understanding with often remarkable proficiency.

Generative Pre-trained Transformers (GPT)

GPT, a subset of LLMs developed by OpenAI, demonstrates exactly what can be done with the capabilities of these models in processing and generating language. Through training on a wide range of internet text, GPT models are capable of understanding context, emotion, and information, making them invaluable for various applications, from automated customer service to creative writing aids.

The Intersection of LLMs and GPTs

While GPTs fall under the umbrella of LLMs, their emergence has spotlighted the broader potential of language models. Their synergy lies in their ability to digest and produce text that feels increasingly human, pushing the boundaries of machine understanding and creativity.

The Risks of LLMs and GPTs

Quite apart from the data quality-specific risks of LLMs, which we go into below, there are a number of risks and challenges facing humans as a consequence of Large Language Model development, and in particular the rise of GPTs like ChatGPT. These include:

  • A low barrier to adoption: The incredible ease with which humans can generate plausible-sounding text has created a paradigm shift. In this new age, anyone from a school-age child to a business professional or even their grandparents can produce human-sounding answers on a wide range of topics, meaning that distinguishing fact from fiction will become increasingly difficult.
  • Unseen bias: Because GPTs are trained on a specific training set of data, any existing societal bias in that data is baked into the programming of that GPT. This can be acceptable when, for example, developing a training manual for a specific program or tool, but it is riddled with risk when attempting to make credit decisions or provide insight into society, if the biases lie undetected in the training dataset. This was already a problem with machine learning before LLMs came into being; their ascendancy has only amplified the risk.
  • Lagging safeguards and guardrails: The rapid path from idea to mass adoption for these technologies, especially with regard to OpenAI’s ChatGPT, has occurred much faster than company policies can adapt to prevent harm, let alone regulators acting to create sound legislation. As of August 2023, one report noted that ‘75% of businesses are implementing or considering bans on ChatGPT.’ Simply banning the technology doesn’t help either; the massive benefits of such innovation will not be reaped for some considerable time. Striking a balance between risk and reward in this area will be crucial.

The Role of Data Quality in LLMs and GPTs

High-quality data is the backbone of effective LLMs and GPTs. This is where ĢƵ’ Augmented Data Quality comes into play. By leveraging advanced algorithms, machine learning, and AI, Augmented Data Quality ensures that the data fed into these models is accurate, consistent, and reliable. This is crucial because the quality of the output is directly dependent on the quality of the input data. With ĢƵ, businesses can automate data quality management, making data more valuable and ensuring the success of LLM and GPT applications.

Risks of Do-It-Yourself LLMs and GPTs in Relation to Data Quality

Building your own LLMs or GPTs presents several challenges, particularly regarding data quality. These challenges include:

  • Inconsistent data: Variations in data quality can lead to unreliable model outputs.
  • Bias and fairness: Poorly managed data can embed biases into the model, leading to unfair or skewed results.
  • Data privacy: Ensuring the privacy of the data used in training these models is crucial, especially with increasing regulatory scrutiny.
  • Complexity in data management: The sheer volume and variety of data needed for training these models can overwhelm traditional data management strategies.
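The data-side risks above can be screened for with even basic automated checks before any data reaches a model. A minimal illustrative sketch in plain Python (this is not ĢƵ's implementation; the field names and sample records are invented):

```python
from collections import Counter

def profile_training_records(records, required_fields):
    """Flag basic quality issues in candidate training records."""
    issues = {"incomplete": 0, "duplicates": 0}
    texts = Counter()
    for rec in records:
        # Completeness: every required field must be present and non-empty
        if any(not rec.get(field) for field in required_fields):
            issues["incomplete"] += 1
        texts[rec.get("text", "")] += 1
    # Exact duplicates inflate some documents' influence on the model
    issues["duplicates"] = sum(n - 1 for n in texts.values() if n > 1)
    return issues

sample = [
    {"text": "Policy wording A", "source": "crm"},
    {"text": "Policy wording A", "source": "crm"},  # duplicate
    {"text": "", "source": "web"},                  # incomplete
]
print(profile_training_records(sample, ["text", "source"]))  # → {'incomplete': 1, 'duplicates': 1}
```

Real pipelines go much further (fuzzy deduplication, bias audits, PII detection), but even counts like these make data readiness measurable rather than assumed.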

Conclusion

The development and application of LLMs and GPTs are monumental in the field of artificial intelligence, offering capabilities that were once considered futuristic. As these technologies continue to evolve and integrate into various sectors, the importance of underlying data quality cannot be overstated. With ĢƵ’ Augmented Data Quality, organisations can ensure their data is primed for the demands of LLMs and GPTs, unlocking new levels of efficiency, innovation, and engagement while mitigating the risks associated with data management and quality.

And for more from ĢƵ, find us on social media.

The post What are Large Language Models (LLM) and GPTs? appeared first on ĢƵ.

]]>
Net Promoter Score: H1 2024 /blog/net-promoter-score-h1-2024/ Mon, 15 Jan 2024 16:22:32 +0000 /?p=24393 Please find our latest Net Promoter Score below. Not seeing a form? Please contact your account manager for more information.

The post Net Promoter Score: H1 2024 appeared first on ĢƵ.

]]>
Please find our latest Net Promoter Score below. Not seeing a form? Please contact your account manager for more information.

The post Net Promoter Score: H1 2024 appeared first on ĢƵ.

]]>
Preparing Your Data for AI: Register for our webinar with BigID /webinar/preparing-your-data-for-ai-register-for-our-webinar-with-bigid/ Thu, 30 Nov 2023 12:23:19 +0000 /?p=24071 The impact of Large Language Models (LLMs) cannot be overstated. In just twelve months, the use of generative AI in everyday life has become commonplace, with GPT-aided processes embedded in apps and websites enhancing and augmenting everything from school essays to startup pitch decks. In their respective roles as Chief Data Officer at BigID, and […]

The post Preparing Your Data for AI: Register for our webinar with BigID appeared first on ĢƵ.

]]>

The impact of Large Language Models (LLMs) cannot be overstated.

In just twelve months, the use of generative AI in everyday life has become commonplace, with GPT-aided processes embedded in apps and websites enhancing and augmenting everything from school essays to startup pitch decks.

In their respective roles as Chief Data Officer at BigID, and Chief Technology Officer at ĢƵ, Peggy Tsai and Fiona Browne are passionate advocates for the responsible adoption of AI practices. Revisit this conversational webinar from December 12th 2023 to learn:

  • The current state of play in LLMs and GPTs – and why data readiness isn’t simply a given
  • Why data quality, data privacy and data governance are critical to successful LLM integration
  • The questions to ask to ensure ethics, bias and fairness are central in LLM deployment

If you are already using LLMs, or considering them in your role or organisation, this webinar will be crucial in shaping your approach.

The post Preparing Your Data for AI: Register for our webinar with BigID appeared first on ĢƵ.

]]>
5 Steps to Build the Case for Data & Analytics Governance /blog/good-data-culture/5-steps-to-build-the-case-for-data-analytics-governance/ Fri, 06 Oct 2023 10:20:22 +0000 /?p=23746 The task of creating a compelling business case for data and analytics governance is often complex and daunting. In this blog, we break down the big picture into five simple steps to follow and turbocharge your data management strategy. Why is data and analytics governance important? Data and analytics governance is crucial for organisations […]

The post 5 Steps to Build the Case for Data & Analytics Governance appeared first on ĢƵ.

]]>

5 Steps to Build a Business Case for Data and Analytics Governance

The task of creating a compelling business case for data and analytics governance is often complex and daunting. In this blog, we break down the big picture into five simple steps to follow and turbocharge your data management strategy.

Why is data and analytics governance important?

Data and analytics governance is crucial for organisations in a number of ways.

  • It ensures that the quality of information is maintained to internally agreed standards for accuracy, consistency, and reliability, and that these measures are consistent across the business;
  • It means that the business knows which data can be relied upon for making informed decisions, building trust in data-driven insights, and helping the business to grow;
  • It helps organisations comply with regulations and industry standards, ensuring proper data handling practices and minimising potential legal and reputational risks;
  • It enables better data access and sharing practices, fostering collaboration across departments and promoting a data-driven culture within the organisation.

Overall, it plays a vital role in maximising the value of data assets, mitigating risks, and driving better business outcomes.

What does a successful business case for data and analytics governance look like?

“Data and analytics leaders often struggle to develop a compelling business case for governance. Those who succeed directly connect data and analytics governance with specific business outcomes — and then sell it.”

– Gartner®, ‘5 Steps to Build a Business Case for Data and Analytics Governance That Even Humans Will Understand’, by Saul Judah, 3 January 2023.

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.

Making a business case simple and relatable should be the top priority of data leaders in developing their data strategy. Keeping it up-to-date and relevant should also be top of mind.

This blog explores the five key steps set out by Gartner® in their research, “5 Steps to Build a Business Case for Data and Analytics Governance That Even Humans Will Understand”, looking at the areas that data leaders need to consider and providing suggestions on how to implement them.

Step 1: Identify Which Business Outcomes Cannot Be Delivered Because of Poor or No Data and Analytics Governance

Business and data leaders will find their tasks much easier if they can demonstrate the pain that the business will encounter if data and analytics governance is not implemented. Quantifying this can be both persuasive and informative.

While stories that focus on the negative impacts can be considered a high-risk strategy, they are highly effective in highlighting the urgency of the need to act. Identifying these pain points and challenges associated with unmanaged data and analytics is a key part of building a successful business case.

Step 2: Connect Business Performance With Information Value Through Key Metrics

Data-driven organisations require metrics to demonstrate the value of data and analytics governance. These organisations heavily depend on metrics, key performance indicators (KPIs), and data to make informed business decisions. Budgets held by senior executives are under constant pressure; data and analytics governance leaders will need business-critical metrics that drive actions to demonstrate why data and analytics governance deserve a bigger slice of the pie.

Starting with metrics that matter to the business will help to ensure that the data is speaking the same language.
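As a concrete illustration, a data-quality KPI such as field completeness can be computed directly and tracked alongside business metrics. A short sketch in Python (the customer records and field names here are invented for illustration):

```python
def completeness(records, field):
    """Percentage of records with a non-empty value for the given field."""
    filled = sum(1 for record in records if record.get(field) not in (None, ""))
    return round(100 * filled / len(records), 1)

customers = [
    {"name": "A Ltd", "postcode": "BT1 1AA"},
    {"name": "B plc", "postcode": ""},          # missing postcode
    {"name": "C Inc", "postcode": "SW1A 1AA"},
]
print(completeness(customers, "postcode"))  # → 66.7
```

Tying a number like this to a business outcome ("incomplete postcodes delay 1 in 3 renewals") is what turns a technical measure into a governance metric executives will fund.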

Step 3: Outline the Scope and Footprint for Governance Through the People, Process and Technology Needed to Deliver the Prioritized Outcomes

Nobody likes ‘scope creep’, and data and analytics governance is no different. A clear scope will help drive the budget to the areas that will make the biggest difference, in prioritising programs that address clear business outcomes.

This will help identify the roles needed to deliver these outcomes, including specialists, engineers, owners of processes, and those responsible for the metrics that have already been identified.

It follows that after identifying the scope, roles, and people, you are now in a position to start specifying the technology needed to help deliver it.

Step 4: Define the Approach, Outputs, Timescales and Outcomes

Many IT projects and programmes have a reputation for being over-large and taking too long. Creating a business case at this stage is the perfect opportunity to challenge that myth.

There are multiple methodologies for implementing data governance, and assessing a variety of models will help immensely. Equally, seeking guidance from specialist consultancies (there are many!) will be of great use. At ĢƵ, we have hand-picked a few consultancies in the UK and USA who we trust to offer great recommendations. You can find more details on our ‘Partners’ pages.

From a timing perspective, it’s important to balance your desire for quick results with the discipline to deliver; a healthy bias for action will serve you well at this early stage.

Step 5: Complete the Financials for Your Proposal

When it comes down to it, senior executives will expect a price to be paid for the work you are proposing. You can create a compelling business case for data and analytics governance by incorporating:

  • Demonstrations of cost saving, risk mitigation, and new business opportunities arising from better quality data;
  • Total cost of ownership for the proposed technological solution(s);
  • Return on investment;
  • A “Do-Nothing” option – with financial implications included.
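The arithmetic behind these elements is straightforward. An illustrative sketch with made-up figures (not a template for any real proposal), comparing the investment against the "do-nothing" option:

```python
def roi_percent(total_benefit, total_cost):
    """Simple return on investment, expressed as a percentage."""
    return round(100 * (total_benefit - total_cost) / total_cost, 1)

# Hypothetical three-year figures
tco = 250_000              # total cost of ownership: licences, delivery, running costs
benefit = 400_000          # cost savings, risk mitigation, new business
do_nothing_cost = 150_000  # estimated losses if data quality stays unmanaged

print(roi_percent(benefit, tco))                    # → 60.0
print(roi_percent(benefit + do_nothing_cost, tco))  # → 120.0 (counting avoided losses)
```

Showing the do-nothing cost as an avoided loss, as in the second line, is often the most persuasive figure in the proposal.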

The post 5 Steps to Build the Case for Data & Analytics Governance appeared first on ĢƵ.

]]>
Typical deployment architecture /deployment/typical-deployment-architecture/ Tue, 25 Jul 2023 16:36:13 +0000 /?p=23683 Our platform is highly configurable and supported by our expert DevOps team.    

The post Typical deployment architecture appeared first on ĢƵ.

]]>
Our platform is highly configurable and supported by our expert DevOps team.

 

architecture diagram showing how to deploy ĢƵ

 

     

The post Typical deployment architecture appeared first on ĢƵ.

]]>
Additional support /deployment/additional-support/ Tue, 25 Jul 2023 16:23:52 +0000 /?p=23681 Alongside our Deployment options we provide a range of support.

The post Additional support appeared first on ĢƵ.

]]>
Alongside our Deployment options we provide a range of support.
  • Security best practices.
  • Fully documented REST APIs.
  • Secure file sharing.

The post Additional support appeared first on ĢƵ.

]]>
Solution configuration and building /deployment/solution-configuration-and-building/ Tue, 25 Jul 2023 15:34:34 +0000 /?p=23677 Leverage our team’s expertise to support your delivery.

The post Solution configuration and building appeared first on ĢƵ.

]]>
Leverage our team’s expertise to support your delivery.
  • With their deep knowledge and expertise, our team can build and configure robust and scalable solutions that seamlessly integrate with your existing infrastructure.

The post Solution configuration and building appeared first on ĢƵ.

]]>
Dashboarding /deployment/dashboarding/ Tue, 25 Jul 2023 15:32:10 +0000 /?p=23674 We offer multiple options for dashboarding and BI.

The post Dashboarding appeared first on ĢƵ.

]]>
We offer multiple options for dashboarding and BI:
  • Visualise and analyse DQ results using your preferred dashboarding platform, or leverage our ready-to-use templates for popular tools like Power BI to gain valuable insights from your data.
  • Additional built-in dashboards are a feature of Augmented Data Quality, delivering insights into data health, trends and further analysis.
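If you are wiring DQ results into a BI tool yourself, a flat file of rule-level pass rates is often enough to start. A minimal illustrative sketch (the rule names and file layout are invented for the example, not ADQ's actual output format):

```python
import csv

# Hypothetical rule-level results from a data quality run
results = [
    {"rule": "postcode_format", "passed": 9800, "failed": 200},
    {"rule": "email_not_null", "passed": 9500, "failed": 500},
]

with open("dq_results.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["rule", "passed", "failed", "pass_rate"])
    writer.writeheader()
    for row in results:
        total = row["passed"] + row["failed"]
        # Pass rate as a percentage, ready to chart in Power BI or similar
        writer.writerow({**row, "pass_rate": round(100 * row["passed"] / total, 2)})
```

A file like this can be refreshed on a schedule and pointed at from any dashboarding platform, keeping the visualisation layer decoupled from the DQ engine.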

     

The post Dashboarding appeared first on ĢƵ.

]]>