CEO Vision - Archives from Stuart Harvey - Datactics

Using Self-Service Data Quality to Gain an Edge
Mon, 29 Nov 2021


Amidst ever-changing regulatory requirements and hype around the potential of data-driven technologies, demand for better quality data in the financial industry has never been higher. Stuart Harvey, CEO of Datactics, writes that a self-service approach could be the key that unlocks significant competitive advantage for financial firms.

Demand for higher data quality in the financial industry has exploded in recent years. A tsunami of regulations such as BCBS 239, MiFID and FATCA stipulates exacting standards for data and data processes, causing headaches for compliance teams.

At the same time, financial firms are trying to seize the benefits of becoming more data- and analytics-driven. They are embracing technologies such as artificial intelligence (AI) and machine learning (ML) to get ahead of their competitors.

Through their attempts at meeting regulatory requirements and gaining meaningful insights from these technologies, they are coming to the realisation that high quality, reliable data is absolutely critical – and extremely difficult to achieve.

But there is an evolution underway. At the core of this evolution is the establishment of ‘self-service’ data quality whereby data owners have ready access to robust tools and processes to measure and maintain data quality themselves, in accordance with data governance policies. This not only simplifies the measuring and maintenance of data quality; it can help turn it into a competitive advantage.

High quality data in demand

As the urgency for regulatory compliance and competitive advantage escalates, so too does the urgency for high data quality. But it’s not plain sailing: a variety of disciplines is required to measure, enrich and fix data.

Legacy data quality tools were traditionally owned by IT teams because, by its very nature, digital data can require significant technical skill to manipulate. However, this created a bottleneck: maintaining data also requires significant knowledge of its content – what good and bad data look like, and what its context is – and that knowledge resides with those who use the data, rather than with a central IT function.

Each data set will have its own users within a business who have the special domain knowledge required to maintain the data. If a central IT department is to maintain data quality correctly, it must liaise with many of these business users to implement the required controls and remediation. This creates a huge drain on IT resources and a slow-moving backlog of data quality change requirements within IT that simply can’t keep up.

Because this method doesn’t scale, many organisations have realised it isn’t the answer and have started moving data quality operations away from central IT and back into the hands of business users.

This move can accelerate information measurement, improvement and onboarding processes, but it isn’t without flaws. It can be difficult and expensive for business users to meet the technical challenges of this task, and unless there is common governance around data quality, there is the risk of a ‘wild west’ scenario where every department manages data quality differently across the business.

Utilising self-service data quality platforms

Financial firms are maturing their data governance processes and shifting responsibility away from IT to centralised data management functions. These Chief Data Officer functions are seeking to centralise data quality controls while empowering data stewards who know and understand the business context of the data best. 

As part of this shift, they require tooling that matches the skills and capabilities of different profiles of user at each stage of the data quality process. And this is where self-service data quality platforms come into a league of their own. However, not all are made equal, and there are a few key attributes to look for.

For analysts and engineers, a self-service data quality platform needs to be able to provide a profiling and rules studio that enables the rapid profiling of data and configuring/editing of rules in a GUI. It must also offer a connectivity and automation GUI to enable DataOps to automate the process.
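By way of illustration (this is a hypothetical sketch in Python, not the Datactics rules studio), the core of such a profiling step can be thought of as a set of named rules, each a predicate over a record, with profiling reporting the pass rate of every rule:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    check: Callable[[dict], bool]  # True means the record passes the rule

def profile(records: list[dict], rules: list[Rule]) -> dict[str, float]:
    """Return the pass rate (0.0-1.0) of each rule across all records."""
    return {
        r.name: sum(r.check(rec) for rec in records) / len(records)
        for r in rules
    }

# Two toy rules over security records (field names are invented for the example)
rules = [
    Rule("isin_present", lambda rec: bool(rec.get("isin"))),
    Rule("price_positive", lambda rec: rec.get("price", 0) > 0),
]

records = [
    {"isin": "US0378331005", "price": 172.5},
    {"isin": "", "price": -1.0},
]
# profile(records, rules) -> {"isin_present": 0.5, "price_positive": 0.5}
```

In a real platform the predicates would be configured in the GUI rather than written as lambdas, but the measurement model is the same.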

For business users, it needs to offer an easy-to-understand visualisation or dashboard so they can view the quality of data within their domain, and an interface so they can remediate records which have failed a rule or check.

Agility is key to quickly onboarding new datasets and meeting the changing data quality demands of end consumers such as AI and ML algorithms.

It should be flexible and open, so it integrates easily with existing data infrastructure investment without requiring changes to architecture or strategy, and advanced enough to make pragmatic use of AI and machine learning to minimise manual intervention.

This goes way beyond the scope of most stand-alone data prep tools and ‘home grown’ solutions that are often used as a tactical one-off measure for a particular data problem.

Gaining an edge

The move towards a self-service oriented model for data quality is a logical way to keep up with the expanding volumes and varieties of information being discovered, accessed, stored or made available. However, data platforms need to be architected carefully to support the self-service model in accordance with data governance to avoid the ‘wild west’ scenario.

Organisations that successfully embrace and implement such a platform are more likely to benefit from actionable data, resulting in deeper insight and in more efficient compliance, in turn unlocking significant competitive advantage.


It was good to be in London to take part in NI Business and Innovation with Datactics!
Fri, 17 Sep 2021


It was good to be in London to take part in NI Business and Innovation with my colleagues at Datactics where we celebrated 100 years of NI entrepreneurship and business success.

I grew up in Belfast. I remember attending the 1971 festival which marked Northern Ireland’s first fifty years. I was eight years old that summer. My parents allowed me to walk the three miles from home with my older friends, to Botanic Gardens, where the event took place. The country had gone decimal earlier that year and my Dad gave me two 50p coins which I made last all day at the funfair. I had my first ride on a helter-skelter that day. I think that it cost just 5p.

It’s worth taking a moment to reflect on how much NI has changed since that time and how far we’ve come as an economy and as a society. I work at a tech company where we have thirteen nationalities on the staff. We are of all faiths and none. The median age at Datactics is 28.

We employ a wide range of staff in software and data science – from young apprentices just starting their technical education to PhDs. Our clients are mostly outside Ireland and this year we’ve won new customers in places like London, New York, and Singapore. Belfast has become a world-class location for #fintech and #regtech with 7000+ people working in the sector.

At breakfast, the Prime Minister spoke about how NI, for such a small place, always punched well above its weight in terms of technology and innovation. He talked about the creativity and engineering prowess of NI – from Ferguson’s tractor, through the Wrightbus Routemaster replacement, and up to Axial3D’s ability to 3D print an exact replica of a human heart.

Like the helter-skelter ride, the history of NI has had many twists and turns. Yesterday was a day to celebrate how far we’ve come and look forward to our future with hope and confidence.

Self-Service Data Quality for DataOps
Tue, 05 May 2020

At the recent A-Team Data Management Summit Virtual, Datactics CEO Stuart Harvey delivered a keynote on “Self-Service Data Quality for DataOps – Why it’s the next big thing in financial services.” The keynote can be read below, with slides from the keynote included for reference. Should you wish to discuss the subject with us, please don’t hesitate to contact Stuart, or Kieran Seaward, Head of Sales.

I started work in banking in the 90s as a programmer, developing real-time software systems written in C++. In those good old days, I’d be given a specification, I’d write some code, test and document it. After a few weeks it would be deployed on the trading floor. If my software broke or the requirements changed, it would come back to me and I’d start this process all over again. This ‘waterfall’ approach was slow and, if I’m honest, apart from the professional pride of not wanting to create buggy code, I didn’t feel a lot of ownership for what I’d created.

In the last five years a new methodology in software engineering has changed all that – it’s called DevOps, and it brings a very strategic and agile approach to building new software.

More recently, DevOps had a baby sister called DataOps, and it’s this subject that I’d like to talk about today.

Many Chief Data Officers (CDOs) and analysts have been impressed by the increased productivity and agility their Chief Technology Officer (CTO) colleagues are seeing through the use of DevOps. Now they’d like to get in on the act. In the last few months at Datactics we’ve been talking a lot to CDO clients about their desire to have a more agile approach to data governance and how DataOps fits into this picture.

In these conversations we’ve talked a great deal about the ownership of data. A key question is how to associate the measurement and fixing of a piece of broken data with the person most closely responsible for it. In our experience the owner of a piece of data usually makes the best data steward. These are the people who can positively affect business outcomes through accurate measuring and monitoring of data; enabling them to do so is typically a CDO’s role.

We have seen a strong desire to push data science processes, including data governance and the measurement of actual data quality (at a record level), into the processes and automation that exist in a bank.

I’d like to share some simple examples of what we are doing with our investment bank and wealth management clients. I hope this shows that a self-service approach to data quality (with appropriate tooling) can empower highly agile data quality measurement for any company wishing to implement the standard DataOps processes of validation, sorting, aggregation, reporting and reconciliation.

Roles in DataOps and Data Quality

We work closely with the people who use the Datactics platform, the people that are responsible for the governance of data and reporting on its quality. They have titles like Chief Data Officer, Data Quality Manager, Chief Digital Officer and Head of Regulation. These data consumers are responsible for large volumes of often messy data relating to entities, counterparties, financial reference data and transactions. This data does not reside in just one place; it transitions through multiple bank processes. It is sometimes “at rest” in a data store and sometimes “in motion” as it passes via Extract, Transform, Load (ETL) processes to other systems that live upstream of the point at which it was sourced.

For example, a bank might download counterparty information from Companies House to populate its Legal Entity Master. This data is then published out to multiple consuming applications for Know Your Customer (KYC), Anti-Money Laundering (AML) and Life Cycle Management. In these systems the counterparty records are augmented with information such as a Legal Entity Identifier (LEI), a Bank Identifier Code (BIC) or a ticker symbol.
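An LEI can in fact be structurally validated before ever consulting a reference source: under ISO 17442 an LEI is 20 alphanumeric characters whose last two digits are MOD 97-10 check digits, so a mistyped code can be caught locally. A minimal sketch of such a check in Python (illustrative, not the Datactics implementation):

```python
def _digits(s: str) -> str:
    # Letters expand to two digits: A=10 ... Z=35 (the base-36 value of the char).
    return "".join(str(int(c, 36)) for c in s.upper())

def lei_is_valid(lei: str) -> bool:
    """ISO 17442 structural check: 20 alphanumeric chars, MOD 97-10 remainder 1."""
    if len(lei) != 20 or not lei.isalnum():
        return False
    return int(_digits(lei)) % 97 == 1

def lei_check_digits(base18: str) -> str:
    """Compute the two check digits appended to an 18-character LEI prefix."""
    return f"{98 - int(_digits(base18 + '00')) % 97:02d}"
```

A record failing this check can be rejected immediately; one that passes it can still be wrong, which is why revalidation against GLEIF remains necessary.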

This ability to empower subject matter experts and business users who are not programmers to measure data at rest and in motion has led to the following trends:

  • Ownership: Data quality management moves from being the sole responsibility of a potentially remote data steward to all of those who are producing and changing data, encouraging a data-driven culture.
  • Federation: Data quality becomes everyone’s job. Let’s think about end-of-day pricing at a bank. The team that owns the securities master will want to test the accuracy and completeness of data arriving from a vendor. The analyst working upstream, who takes an end-of-day price from the securities master to calculate a volume-weighted average price (VWAP), will have different checks relating to the timeliness of information. Finally, the data scientist upstream of this, who uses the VWAP to create predictive analytics, will want to build their own rules to validate data quality.
  • Governance: A final trend that we are seeing is tighter integration with standard governance tools. To be effective, self-service data quality and DataOps require tight integration with the existing systems that hold data dictionaries, metadata, and lineage information.
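The VWAP mentioned in the federation example is simply total traded notional divided by total traded volume; a minimal sketch of the calculation those downstream checks would sit on:

```python
def vwap(trades: list[tuple[float, int]]) -> float:
    """Volume-weighted average price over (price, quantity) trades:
    sum(price * quantity) / sum(quantity)."""
    notional = sum(price * qty for price, qty in trades)
    volume = sum(qty for _, qty in trades)
    return notional / volume

# Three trades: 100 @ 10.00, 300 @ 10.10, 100 @ 10.05
trades = [(10.00, 100), (10.10, 300), (10.05, 100)]
# round(vwap(trades), 2) -> 10.07
```

A stale or missing end-of-day price feeding this calculation silently skews every analytic built on top of it, which is exactly why each consumer wants its own quality checks.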

Here’s an illustration of how we see the Datactics Self-Service Data Quality (SSDQ) Platform integrating with DataOps in a high-impact way that you might want to consider in your own data strategy.

1. Data Governance Team

First off, we offer a set of pre-built dashboards for Power BI, Tableau and Qlik that allow your data stewards rapid access to the data quality measurements which relate just to them. A user in the London office might be enabled to see data for Europe or, perhaps, just data in their department. Within just a few clicks, a data steward for the Legal Entity Master system could identify all records that are in breach of an accuracy check where an LEI is incorrect, or a timeliness check where the LEI has not been revalidated in the Global LEI Foundation’s (GLEIF) database inside 12 months.
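A timeliness rule like the GLEIF one above reduces to a date comparison; a sketch of the idea (the function and field names here are my own, hypothetical ones):

```python
from datetime import date, timedelta

def lei_is_stale(last_validated: date, as_of: date, max_age_days: int = 365) -> bool:
    """Timeliness check: has this LEI gone more than 12 months
    without being revalidated against the GLEIF database?"""
    return as_of - last_validated > timedelta(days=max_age_days)

# Revalidated 13 months ago -> breaches the rule; 6 months ago -> passes.
# lei_is_stale(date(2020, 11, 1), date(2021, 11, 29)) -> True
# lei_is_stale(date(2021, 6, 1), date(2021, 11, 29)) -> False
```

The value of the dashboard is not the rule itself but scoping its failures to the steward who can actually act on them.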


2. Data Quality Clinic: Data Remediation

Data Quality Clinic extends the management dashboard by allowing a bank to return broken data to its owner for fixing. It effectively quarantines broken records and passes them to the data engineer in a queue, improving data pipelines and overall data governance and data quality. Clinic runs in a web browser and is tightly integrated with information relating to data dictionaries, lineage and third-party sources for validation. Extending our LEI example, I might be the owner of a group of entities which have failed an LEI check. Clinic would show me the records in question and highlight the fields in error. It would connect to GLEIF as the source of truth for LEIs and provide me with hints on what to correct. As you’d expect, this process can be enhanced by machine learning to automate this entity resolution process under human supervision.
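Conceptually, the quarantining step routes each failing record, tagged with the names of the rules it failed, to a remediation queue for its owner. A simplified sketch (illustrative only, not how Clinic is implemented):

```python
def quarantine(records: list[dict], rules: list[tuple]) -> tuple[list, dict]:
    """rules: (name, predicate) pairs. Returns (clean_records, queues),
    where queues maps each record owner to their failed records
    together with the names of the rules those records failed."""
    clean, queues = [], {}
    for rec in records:
        failed = [name for name, check in rules if not check(rec)]
        if failed:
            owner = rec.get("owner", "unassigned")
            queues.setdefault(owner, []).append({"record": rec, "failed": failed})
        else:
            clean.append(rec)
    return clean, queues

rules = [("has_lei", lambda rec: bool(rec.get("lei")))]
records = [
    {"id": 1, "lei": "529900EXAMPLE0000000", "owner": "emea-data"},
    {"id": 2, "lei": "", "owner": "emea-data"},
]
clean, queues = quarantine(records, rules)
# clean holds record 1; queues["emea-data"] holds record 2 with failed=["has_lei"]
```

Routing by owner is the point: the record lands with the steward who has the domain knowledge to fix it, not with a central IT queue.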


3. FlowDesigner Studio: Rule creation, documentation, sharing

FlowDesigner is the rules studio in which the data governance team of super users build, manage, document and source-control rules for the profiling, cleansing and matching of enterprise data. We like to share these rules across our clients, so FlowDesigner comes pre-loaded with rules for everything from name and address checking to CUSIP or ISIN validation.
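ISIN validation is a good example of a rule worth sharing, because it is a fixed standard: an ISO 6166 ISIN is 12 characters, letters expand to two digits (A=10 … Z=35), and the resulting digit string must satisfy the Luhn checksum. A minimal sketch of that check:

```python
def isin_is_valid(isin: str) -> bool:
    """ISO 6166 check: 12 alphanumeric chars; expand letters to two digits
    via their base-36 value, then apply the Luhn checksum."""
    if len(isin) != 12 or not isin.isalnum():
        return False
    digits = "".join(str(int(c, 36)) for c in isin.upper())
    total = 0
    for i, d in enumerate(reversed(digits)):
        n = int(d)
        if i % 2 == 1:      # double every second digit from the right
            n *= 2
            if n > 9:
                n -= 9      # same as summing the digits of the product
        total += n
    return total % 10 == 0

# isin_is_valid("US0378331005") -> True   (Apple Inc.)
```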


4. Data Quality Manager: Connecting to data sources; scheduling, automating solutions

This part of the Datactics platform allows your technology team to connect to data flowing from multiple sources and schedule how rules are applied to data at rest and in motion. It allows for the sharing and re-use of rules across all parts of your business. We have many clients solving big data problems involving hundreds of millions of records using Data Quality Manager across multiple different environments and data sources, on-premises or in public (or, more typically, private) cloud.


Summary: Self-Service Data Quality for DataOps

Thanks for joining me today as I’ve outlined how self-service data quality is a key part of successful DataOps. CDOs need real-time data quality insights to keep up with business needs, while technical architects require a platform that doesn’t need a huge programming team to support it. If you have any questions about this topic, or how we’ve approached it, then we’d be glad to talk with you. Please get in touch below.

