As one of my business contacts is fond of saying, “if it gets measured, we change it,” and this is very relevant to Mike Despo’s recent blog An Ounce of Prevention is Worth a Pound of Cure. Any business function should understand the quality of service it delivers and the cost of delivery, then use this information to identify areas for improvement, which may well lie upstream rather than within the function itself.
As an illustration, some years ago I ran several business functions at an asset manager, including the team responsible for the configuration of monthly and quarterly client reports. After every month-end and quarter-end reporting cycle we went through a detailed review process.
The purpose of the process was not “naming and shaming,” but rather to prevent recurrence. It was noticeable that in a significant number of instances the cause of a re-run or inaccurate data was not errors or mistakes by that team, but upstream systems, departments or even external market conditions. Finding the right things to measure is critical, because if we measure something we change it: it is human nature to work to improve one’s performance against a benchmark. In physics this is known as the Observer Effect, which Wikipedia summarises as “the mere observation of a phenomenon inevitably changes that phenomenon”.
As an example, at a company I worked for, the IT department had an objective to fix every PC problem within 24 hours and had to produce monthly statistics on its performance. I arrived at work one Monday to find my PC would not start up, so I called the help desk, who raised a ticket and sent someone to fix it. That person arrived with a preconception of what the problem was; when that proved to be false, they recorded this on the ticket, closed it and raised a new one for their next idea of what the problem might be. The next day someone else turned up and tried the second fix, which also proved not to work. They updated and closed that ticket and raised a new one for someone to address on the third day…
As their client, I had a PC that did not work for three days, yet their measurements showed they had addressed three tickets, each within the 24-hour deadline of its being raised (great results, no?). An extreme example of changing the process to meet the measured objective, but a real one.
The key thing is that whatever the measures are, they must measure the right process and provide information that can be used to identify where effort is required to achieve improvement. In my first example there were a couple of occasions where reports had to be re-run because corporate actions had missed a dividend from the previous month. On the face of it, the corporate actions team had messed up, but when we looked into the issue we found that the issuer of the security concerned had announced the dividend in the current month, back-dated it to the previous month, and nobody in the market had known about it until afterwards. It turned out this was a regular occurrence in that particular market and could be addressed accordingly.
The actual cause of the “problem” was therefore a feature of the markets the account was invested in rather than errors within corporate actions. This was communicated to the individual asset manager for that portfolio, who then reviewed the report production timescales with the client and agreed new deadlines. Production of the client reports was moved back to allow sufficient time for late announcements to be captured and processed, thus preventing the need for re-runs and repeated manual checking.
In complex asset management organizations, there are nearly limitless ways to improve operations through measurement. It’s up to the organization to determine priorities, and up to operations and technology leaders to find the right ways to measure against those priorities and drive improvement. Beating the benchmark isn’t just for the front office; it should be the ethos across any competitive firm.