How do we create meaningful, collectable indicators and sensible measurement frameworks to assess financial inclusion? What role can transactional data play, particularly with respect to payments?

Traditionally, financial inclusion has been measured with face-to-face, nationally representative demand-side surveys. While this method has provided useful data to inform financial inclusion strategies, it suffers from some shortcomings. Firstly, it is expensive: a robust, nationally representative survey typically costs between USD 225,000 and USD 315,000. And that’s just the fieldwork cost!

Secondly, survey data is prone to both sampling and non-sampling errors. Sampling errors occur when the sample chosen for the research does not accurately reflect the population. Non-sampling errors cover everything else: coverage errors (e.g. interviewers skipping households or duplicating interviews), non-response or response errors (i.e. respondents refusing to answer questions or providing inaccurate information), interviewer errors and processing errors. Researchers are well aware that non-sampling errors can be significant; financial behaviours and attitudes are deeply private matters. Respondents may prefer not to answer certain questions or may be more frugal with the truth than they are with their money.

These methods of measuring financial inclusion focused on one-dimensional access or take-up indicators, but there is now a shift to using indicators aligned with people’s financial usage and needs. With this new focus, the level of detail required from survey respondents will continue to increase. Responses may be less accurate – particularly when discussing high-frequency transactions. And the time it takes to ask and answer detailed questions on needs, as well as on the tools customers use to meet those needs, can be considerable. As survey instruments shift to accommodate needs-based and use-based approaches, they will need to forgo some questions to maintain a reasonable length. This may create more pressure at the survey design stage. For more information on this process, please read about our approach to SMS surveys here.


Shifting towards more meaningful measurement of financial inclusion indicators

Other data, specifically transactional data, can potentially augment the more traditional demand- and supply-side data used to monitor use-based financial inclusion indicators. Financial service providers (FSPs), as well as messaging platforms or payment switches, produce this “big data” – high-velocity, high-volume data generated as a by-product of the millions of transactions processed in any month. Already, many central banks are reporting on the number and value of electronic transactions, while switches report on the number and value of the transactions they process. The challenge is to reframe the unit of analysis away from volumes and values of transactions to accounts, or better yet, to customers. This reframing is increasingly common as FSPs leverage customer data to drive loyalty. Many banks already actively measure and report on retention and cross-sell by using a unique customer identifier to link accounts to specific customers. 
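As a rough illustration of this reframing, the sketch below (in Python) rolls a transaction-level extract up to customer-level usage indicators using a unique customer identifier. The field names (customer_id, txn_date, txn_value, channel) and the "3+ transactions a month" threshold are illustrative assumptions, not any provider's actual schema or definition of active use.

```python
# A minimal sketch: moving the unit of analysis from transactions to customers.
# Field names and thresholds are hypothetical, not a real FSP or switch schema.
import pandas as pd

# Hypothetical monthly extract: one row per processed transaction.
txns = pd.read_csv("transactions_month.csv", parse_dates=["txn_date"])

# Traditional supply-side reporting: volumes and values.
total_volume = len(txns)
total_value = txns["txn_value"].sum()

# Reframed unit of analysis: the customer, linked via a unique identifier.
per_customer = txns.groupby("customer_id").agg(
    txn_count=("txn_value", "size"),
    txn_value=("txn_value", "sum"),
    channels_used=("channel", "nunique"),
)

indicators = {
    "active_customers": per_customer.shape[0],                        # customers transacting at all
    "median_txns_per_customer": per_customer["txn_count"].median(),
    "share_regular_users": (per_customer["txn_count"] >= 3).mean(),   # illustrative 3+ txns/month cut-off
}
print(f"Volume: {total_volume}, value: {total_value}")
print(indicators)
```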

Less common is the application of similar approaches to data generated by payment switches – the infrastructure that enables transactions between FSPs. This is often for the simple reason that switches have account numbers but typically would not have access to other unique customer identifiers. A rare exception is Nigeria, where all bank customers must register for a bank verification number (BVN), a centralised, biometric identity issued by the banking sector to mitigate fraud risk. The BVN, together with the initiating and terminating account numbers and the channel used to initiate the transaction, is passed through the switch with each transaction. The switch also retains information on failed transactions, together with the reasons for failure.

While the data is by no means complete, it provides a valuable additional resource that can be mined to develop replicable usage indicators.
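To give a flavour of what such mining could look like, the hedged sketch below assumes a switch extract with hypothetical fields (bvn, initiating_account, channel, status, failure_reason). It uses the BVN to count usage per person rather than per account, and summarises failed transactions by reason. The field names are illustrative assumptions and do not describe NIBSS's actual data model.

```python
# A minimal sketch of replicable usage indicators from switch-level records.
# All field names are hypothetical assumptions, not the switch's real schema.
import pandas as pd

switch = pd.read_csv("switch_records.csv")

successful = switch[switch["status"] == "SUCCESS"]

# Because the BVN links accounts held at different banks to one person,
# usage can be counted per customer rather than per account.
usage_per_person = successful.groupby("bvn").agg(
    txn_count=("status", "size"),
    accounts_used=("initiating_account", "nunique"),
    channels_used=("channel", "nunique"),
)

# Failed transactions are retained with a reason, supporting reliability indicators.
failure_rate_by_reason = (
    switch[switch["status"] == "FAILED"]["failure_reason"]
    .value_counts(normalize=True)
)

print(usage_per_person.describe())
print(failure_rate_by_reason.head())
```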


To read more about recent analysis of NIBSS data, see our second blog in this series here.