
Breaking the Bad - the case for better exception management (Part 1)
Aug 22, 2024
4 min read

Exceptions spiralling out of control
The purchase and implementation of a reconciliation tool is almost always driven by a regulatory requirement, a cost-reduction initiative or a transformation program. While all reconciliation systems produce the desired match rates at the point of go-live, it is a little-known fact that automated match rates deteriorate over time. The rise in unmatched transactions is often overlooked because it is attributed to increased complexity or growing record volumes.
In reality, data leakage occurs when subtle (or not-so-subtle) pattern changes in the upstream data cause barely noticeable decreases in match rates. The problem is exacerbated by staffing changes, inadequate handover documentation and a lack of training in the process and data flows behind the reconciliation.
All of these factors contribute to a significant increase in recurring breaks over time: breaks that the day-to-day reconcilers know well but cannot trace to a root cause. The result is usually manual workarounds and growing team headcount.
To understand why this occurs, we need to examine the initial steps that were taken at the beginning of the project. In my many years in the reconciliation industry I have seen first-hand how reconciliation projects are staffed and managed.
Crew
The project team is often a mix of professional project managers, business analysts, technologists and vendor implementation experts. In most cases only the vendor consultants have reconciliation subject-matter expertise, and they are relied on to deliver a robust implementation. Yet the vendor's focus is typically on the technical installation rather than business optimisation, so the positive outcomes set out by the business during the PDD phase are not achieved. Moreover, the vendor consultants are engaged only at the point of system delivery and are not consulted during the overall project initiation, analysis and design phases.
Leadership
Reconciliations usually form part of a larger program of work to implement another system or a process transformation. As a result, the reconciliation sits within complex data flows that need controls, validation and reporting. As outlined above, the vendor team is involved only during implementation and is limited to its own workstream. It therefore lacks a full view of the data architecture, the data flows and, more importantly, the data behaviours and potential pattern changes.
Chaotic energy of data
The end result is often a reconciliation system that is not fully optimized for the organization's specific data flows and processes. This can lead to several issues:
1. Incomplete data mapping: Without a comprehensive understanding of the entire data architecture, crucial data points may be overlooked or incorrectly mapped in the reconciliation process.
2. Inflexible rules: The reconciliation rules may be too rigid, unable to adapt to subtle changes in data formats or structures that occur over time.
3. Limited exception handling: The system may not be equipped to handle exceptions that are unique to the organization's data flows, leading to an increase in manual interventions.
4. Inadequate reporting: The reporting capabilities may not align with the organization's specific needs, making it difficult to identify and address recurring issues.
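To make the second issue concrete, here is a minimal, hypothetical sketch of the difference between a rigid exact-match rule and a tolerant one. The field names, normalisation steps and amount tolerance are illustrative assumptions for this example, not any vendor's actual API:

```python
from decimal import Decimal

def exact_match(ledger_row, statement_row):
    # Rigid rule: every field must match exactly, so a harmless
    # upstream change (extra whitespace, a new reference casing)
    # silently breaks matching and inflates the exception queue.
    return (ledger_row["ref"] == statement_row["ref"]
            and ledger_row["amount"] == statement_row["amount"])

def tolerant_match(ledger_row, statement_row,
                   amount_tolerance=Decimal("0.01")):
    # Flexible rule: normalise references and allow a small amount
    # tolerance, absorbing benign upstream pattern changes.
    ref_a = ledger_row["ref"].strip().upper()
    ref_b = statement_row["ref"].strip().upper()
    diff = abs(ledger_row["amount"] - statement_row["amount"])
    return ref_a == ref_b and diff <= amount_tolerance

ledger = {"ref": " inv-1001 ", "amount": Decimal("250.00")}
statement = {"ref": "INV-1001", "amount": Decimal("249.99")}

print(exact_match(ledger, statement))    # False
print(tolerant_match(ledger, statement)) # True
```

The same pair of records is a break under the rigid rule and an automated match under the tolerant one, which is exactly how small upstream format drifts translate into a slowly deteriorating match rate.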
Breaking the cycle of bad data
To mitigate these problems and maintain high match rates over time, organizations should consider the following best practices:
1. Holistic approach: Involve reconciliation subject-matter experts in the early stages of the project, including the analysis and design phases, so they understand the upstream and downstream dependencies. More often than not, an upgrade or re-implementation simply carries bad behaviours from the old model into the new one. Early involvement ensures a more comprehensive understanding of the data architecture and flows.
2. Continuous monitoring: Implement a system, report or process for ongoing monitoring of match rates and exception trends. This allows for early detection of deteriorating performance.
3. Regular review and optimization: Schedule periodic reviews of the reconciliation rules and processes to ensure they remain aligned with evolving data structures and business needs.
4. Knowledge transfer: Develop comprehensive documentation and training programs to ensure that knowledge is retained as team members change. An effective way of doing this is with short, three-minute “quick tip” videos.
5. Root cause analysis: Establish a process for investigating recurring breaks and identifying their root causes rather than relying on manual workarounds, holding internal counterparts accountable so that issues are resolved at the root.
6. Adaptive rules engine: Implement a flexible rules engine with AI or ML capabilities to learn from repetitive manual matching patterns and deliver increased match rates.
7. Cross-functional collaboration: Foster ongoing communication between IT, business units, and the reconciliation team to ensure all stakeholders are aware of changes that might impact the reconciliation process.
8. Data quality initiatives: Implement upstream data quality controls to minimize the introduction of errors that could impact reconciliation performance.
9. Change management: Ensure a full change-management function aligned to the business, so that continuous improvements are evaluated, prioritised and delivered. Don’t get left behind!
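As a small illustration of the continuous-monitoring practice above, the sketch below flags a deteriorating match rate by comparing a recent moving average against a baseline. The window size, drop threshold and sample rates are invented for this example; a real implementation would source daily rates from the reconciliation platform's reporting layer:

```python
from statistics import mean

def match_rate(matched, total):
    """Daily automated match rate as a percentage."""
    return 100.0 * matched / total if total else 0.0

def flag_deterioration(daily_rates, window=5, drop_threshold=2.0):
    # Compare the recent moving average with the baseline average;
    # a sustained drop beyond the threshold flags silent data leakage
    # long before reconcilers notice the exception queue growing.
    if len(daily_rates) < 2 * window:
        return False  # not enough history to compare
    baseline = mean(daily_rates[:window])
    recent = mean(daily_rates[-window:])
    return (baseline - recent) >= drop_threshold

# Simulated fortnight: rates slide from ~98% to ~94% after an
# upstream format change goes unnoticed.
rates = [98.1, 98.0, 97.9, 98.2, 98.0,
         97.0, 96.4, 95.8, 95.1, 94.3]
print(flag_deterioration(rates))  # True
```

A check like this, run daily against exception trends as well as match rates, turns the "barely noticeable decrease" described earlier into an explicit alert that triggers root-cause analysis.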
Our experts at ReconIQ can assist your company in adopting all of these practices, to improve and maintain high match rates over time, reduce manual interventions, and ensure that your reconciliation systems continue to provide value long after the initial implementation. This approach not only improves operational efficiency but also enhances your organization's ability to meet regulatory requirements and achieve cost reduction goals.
