Scalable Data Provision for Standardised Client Reporting

Author: Andy Wood, Liqueo Senior Consultant

This article explores the creation of a scalable data delivery platform for internal and external client reporting.

Migration to a cloud-based SaaS data model changes how that new enterprise data set is extracted, stored and distributed. This is often an afterthought, not considered thoroughly enough from the outset, which creates architectural technical debt that must be unwound post-implementation - increasing costs and significantly extending delivery time.

To remedy this disconnect between your old processes and your new dataset, a scalable data model can be created. By building a Data Platform using Microsoft Azure Data Lake Store, you can benefit from Data Products that give different areas of the business access to pull their own source data from a centralised and governed source. This is a big improvement on each area building, sourcing and maintaining its own bespoke solution.

As you can imagine, this type of programme can be a lengthy and costly resolution to the problem. But the need for a single source of accurate data that can scale with rapid business growth makes it a necessary one.

Centralised Data Lake Store 

Following a successful implementation of a new front-to-back office solution, the user base has a single hub for its investment management needs. Inevitably, different areas of the business will find they must replace their legacy data sources with the new IBOR (Investment Book of Record) data, typically via direct downloads into Excel, to provide continuity to their internal or external clients.

Taking a snapshot of data via downloads into Excel means it is not validated or governed centrally and could change rapidly over the course of a business day. This can yield many back-dated data points that would, in some cases, significantly change the outcomes of the reporting provided.

A validation and governance process must be put in place and a centralised Data Platform created to collect all sources, internal and external - such as Security Master, Transactions and Valuations. All of this data would be fed into the Lake Store and Data Products created on top. This provides consumers with accurate, time-sliced Datasets that can be rolled forwards and backwards to understand the impact of transactions on the position sets.

An example of this is the IBOR: an end-of-day or start-of-day view of all positions held against every Fund/Portfolio within the system at that point in time. An ABOR (Accounting Book of Record), provided by each Fund Accountant, could sit alongside this.
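To make the idea of a time-sliced, roll-forward/roll-backward Dataset concrete, here is a minimal sketch in Python with pandas. The transaction log, column names and positions_as_of helper are all illustrative assumptions, not part of any actual product: the point is that an IBOR-style snapshot for any date can be rebuilt by replaying the transaction log up to that date.

```python
from datetime import date
import pandas as pd

# Hypothetical transaction log: one row per trade affecting a position.
transactions = pd.DataFrame({
    "fund": ["F1", "F1", "F1"],
    "security": ["XS123", "XS123", "US456"],
    "quantity": [1000, -250, 500],
    "trade_date": pd.to_datetime(["2023-01-03", "2023-01-05", "2023-01-04"]),
})

def positions_as_of(txns: pd.DataFrame, as_of: date) -> pd.DataFrame:
    """Rebuild the position set as of a date by replaying all
    transactions up to and including that date."""
    in_scope = txns[txns["trade_date"] <= pd.Timestamp(as_of)]
    return in_scope.groupby(["fund", "security"], as_index=False)["quantity"].sum()

# Rolling the view backwards or forwards is just a different as-of date.
print(positions_as_of(transactions, date(2023, 1, 4)))  # before the sale
print(positions_as_of(transactions, date(2023, 1, 5)))  # after the sale
```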

In order to maintain the completeness and accuracy of the distributed position set, a full peer-to-peer reconciliation process would need to be built between the internally valued positions and the external Fund Accountants' data. This opens the possibility of an internal Accounting module that could be enabled to provide an ABOR proxy position set as an additional Data Product.
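As an illustration of that reconciliation, here is a minimal sketch under the same assumptions (hypothetical column names and position sets): a full outer join between the internal IBOR positions and the Fund Accountant's ABOR extract, flagging quantity breaks as well as positions missing from either side.

```python
import pandas as pd

# Hypothetical position sets: internal IBOR valuations vs. the external
# Fund Accountant's ABOR extract, keyed on fund and security.
ibor = pd.DataFrame({
    "fund": ["F1", "F1", "F2"],
    "security": ["XS123", "US456", "DE789"],
    "quantity": [750, 500, 200],
})
abor = pd.DataFrame({
    "fund": ["F1", "F2", "F2"],
    "security": ["XS123", "DE789", "FR000"],
    "quantity": [750, 200, 90],
})

# Full outer join so positions missing on either side surface as breaks too.
recon = ibor.merge(abor, on=["fund", "security"], how="outer",
                   suffixes=("_ibor", "_abor"), indicator=True)
recon["break"] = (recon["_merge"] != "both") | (
    recon["quantity_ibor"] != recon["quantity_abor"]
)
print(recon[recon["break"]])  # the exceptions an operations team would work
```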
 

Aggregation & Custom Compute

It’s all very well having these source Data Products, but a level of aggregation will always be required to use the data properly for reporting.

A Data Vault / Operational Database could be constructed to take in a collection of the Data Platform products. A Compute Engine could also be built to run the aggregation and the joining of Positions / Security Master / Benchmark Data together.

The data projections from this would be pushed via pipelines into the Operational Database where a reporting schema can be devised.
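As a sketch of what the Compute Engine's join-and-aggregate step might produce (all table shapes and names here are illustrative assumptions, not the actual pipeline), positions can be enriched from the Security Master, weighted within each Fund, and rolled up into a sector projection alongside benchmark weights:

```python
import pandas as pd

# Hypothetical extracts from three Data Platform products.
positions = pd.DataFrame({
    "fund": ["F1", "F1"],
    "security": ["XS123", "US456"],
    "market_value": [1_000_000, 3_000_000],
})
security_master = pd.DataFrame({
    "security": ["XS123", "US456"],
    "sector": ["Financials", "Utilities"],
})
benchmark = pd.DataFrame({
    "sector": ["Financials", "Utilities"],
    "benchmark_weight": [0.30, 0.10],
})

# Enrich positions with reference data, then weight within each fund.
enriched = positions.merge(security_master, on="security")
enriched["weight"] = (
    enriched["market_value"]
    / enriched.groupby("fund")["market_value"].transform("sum")
)

# The sector projection that would be pushed into the Operational Database.
sector_view = (
    enriched.groupby(["fund", "sector"], as_index=False)["weight"].sum()
    .merge(benchmark, on="sector")
    .assign(active_weight=lambda d: d["weight"] - d["benchmark_weight"])
)
print(sector_view)
```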

The downside of this could be that hundreds of different views end up existing on this Operational Database for differing purposes.

Here are some examples:

  • Credit Rating Breakdown

  • Country Breakdown

  • Currency Breakdown

  • Region Breakdown

  • Sector Breakdown

However, the scalability of these views can quickly break down. This is because, although the projected output may look the same for any Fund, the underpinning calculation method or aggregation could be slightly different for each client. A good example of this is the Credit Rating breakdown - where one client may require the average of Moody’s, S&P and Fitch, whereas another client may want the lowest of just Moody’s and S&P.

A Front-End user tool should be created to allow self-service configuration of each Fund and its reporting requirements. For example, the user could apply the correct Credit Rating averaging methodology or Sector Classification scheme and level.

This allows the Data Provision team to create a single Credit Rating Breakdown view on the database, with the results in the output driven by how the user has configured each Fund.
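A minimal sketch of how one view can serve both clients, assuming a hypothetical per-Fund configuration captured by the front-end tool and a simplified numeric rating scale (real scales carry notches such as AA-, and a production mapping would be richer):

```python
import pandas as pd

# Simplified numeric rating scale (lower = better); letter grades must be
# mapped to numbers before they can be averaged or compared.
RATING_SCALE = {"AAA": 1, "AA": 2, "A": 3, "BBB": 4, "BB": 5}

holdings = pd.DataFrame({
    "security": ["XS123", "US456"],
    "moodys": ["AA", "BBB"],
    "sp": ["A", "BBB"],
    "fitch": ["AA", "BB"],
})

# Hypothetical per-Fund configuration captured by the front-end tool.
fund_config = {
    "F1": {"method": "average", "agencies": ["moodys", "sp", "fitch"]},
    "F2": {"method": "lowest", "agencies": ["moodys", "sp"]},
}

def composite_rating(df: pd.DataFrame, fund: str) -> pd.Series:
    """One view, many methodologies: the Fund's configuration drives the calc."""
    cfg = fund_config[fund]
    numeric = df[cfg["agencies"]].apply(lambda col: col.map(RATING_SCALE))
    if cfg["method"] == "average":
        return numeric.mean(axis=1).round()
    if cfg["method"] == "lowest":
        return numeric.max(axis=1)  # lowest rating = worst = highest number
    raise ValueError(f"Unknown method: {cfg['method']}")

print(composite_rating(holdings, "F1"))
print(composite_rating(holdings, "F2"))
```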

[Diagram: high-level architectural view of the data flow]

Distribution Data Pools 

An efficient way of surfacing data to each area of the business or function is the creation of Data Pools. For some of the more technologically sophisticated consumers, creating APIs avoids putting any unnecessary strain on the Operational Database. This means no outside queries, over which you have no control, run directly against the Database.
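As a sketch of what such an API might look like - FastAPI is used here purely as an illustrative choice, and the pool path, view and endpoint are hypothetical - client requests are served from a published extract rather than from the Operational Database:

```python
import pandas as pd
from fastapi import FastAPI, HTTPException

app = FastAPI()

# Hypothetical: serve from a published Data Pool extract, loaded once at
# startup, so client queries never touch the Operational Database.
SECTOR_VIEW = pd.read_parquet("pools/client_reporting/sector_view/latest.parquet")

@app.get("/funds/{fund}/sector-breakdown")
def sector_breakdown(fund: str) -> list[dict]:
    rows = SECTOR_VIEW[SECTOR_VIEW["fund"] == fund]
    if rows.empty:
        raise HTTPException(status_code=404, detail=f"Unknown fund: {fund}")
    return rows.to_dict(orient="records")
```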

Based on a publishing trigger, you can push Datasets into separate Databases or Pools and provide these to the business to use as they see fit - whether through their own Power BI front end over the data or even a simple Excel query.

Each Data Pool can then be owned by the team it was provided to and is supported centrally by the operational framework which populates the Pool daily. This leads to a scalable function where you can add additional views into the Pools or extend existing Datasets to include additional Attributes without creating a breaking change.
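A minimal sketch of that daily publish step, under stated assumptions (the pool layout, view names and trigger wiring are all hypothetical): each approved view is snapshotted into the owning team's Pool as a dated file, so new views or additional Attributes are additive rather than breaking changes.

```python
from datetime import date
from pathlib import Path
import pandas as pd

# Hypothetical mapping of each team's Pool to the views it receives.
POOLS = {
    "client_reporting": ["sector_view", "credit_rating_view"],
    "performance": ["sector_view"],
}

def publish(views: dict[str, pd.DataFrame], pool_root: Path, run_date: date) -> None:
    """On the daily trigger, snapshot each approved view into the owning
    team's Pool as a dated file; existing files are never rewritten."""
    for pool, view_names in POOLS.items():
        for name in view_names:
            target = pool_root / pool / name / f"{run_date:%Y-%m-%d}.parquet"
            target.parent.mkdir(parents=True, exist_ok=True)
            views[name].to_parquet(target)
```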

Conclusion

As an organisation, you now have external client reporting driven off validated, governed and correctly aggregated data, with that same data provided back to internal business areas. If that data is then used by client-facing teams in presentations, they can be assured they are showing the client the same information that was provided on their monthly or quarterly report.

Here at Liqueo, we provide organisations with the skills to implement programmes successfully through our flexible workforce model, tailoring solutions for our clients’ strategic goals. We deliver an exceptional, bespoke service to every client via a dynamic and agile framework. If you are interested in how we can help you implement successful programmes or want more information, contact us.
