Social protection responses were opaque, lacked confidentiality
* All opinions expressed in this opinion piece are those of the author and not of the Thomson Reuters Foundation.
Many social protection programs lacked fundamental human rights considerations from inception to implementation, leaving the most vulnerable excluded
Nuno Guerreiro de Sousa is a technologist at Privacy International @privacyint
The impact of the Covid-19 pandemic around the world has been seismic, not only in obvious health terms, but also in the devastation it has caused to the economies and livelihoods of hundreds of millions of people.
Many governments around the world have accelerated emergency relief programs, undoubtedly protecting large numbers of people from the worst hardships. But Privacy International research shows that many of these programs lacked fundamental human rights considerations from design to implementation, leaving the most vulnerable excluded. Because "in technology we trust" is becoming a sort of absolutist philosophy of governance, those affected have had little or no opportunity to challenge poor decisions.
While the digitalization and automation of social protection processes predates the pandemic, their adoption has accelerated in the face of rising poverty. The difficulty of identifying beneficiaries, combined with containment measures such as lockdowns and the need for individual isolation, has given governments the perfect opportunity to experiment with ever more data-intensive methods of reaching people in need. This increased digitalization of social benefits is flagged in the latest report of the UN Special Rapporteur on extreme poverty and human rights as a driver of exclusion, a finding confirmed by our research.
Many of these welfare programs were shrouded in opacity, with their eligibility criteria denied to the very population they sought to serve.
Colombia's "Ingreso Solidario" program, documented by our partner Fundación Karisma, is a prime example. In this case, the government implemented an unconditional cash transfer system targeting 3 million citizens in just under two weeks. That may sound impressive, but the system drew on all sorts of undisclosed administrative records and data held by private and public actors to make automated decisions about who was or was not eligible, raising serious questions.
Although the scoring criteria were made public, Colombians were not informed of the data used to assign the scores or what the scores actually meant. It was later revealed that nearly 17,000 records with inconsistencies were automatically flagged as ineligible. Because the system is a "black box", it was – and remains – impossible to know how many people were unfairly excluded by the cross-checks between the different databases.
Transparency is necessary, but not sufficient. In Mozambique, we learned that the government decided to identify priority geographic areas using multidimensional poverty index mapping, which combined social and economic indicators with census data and high-resolution satellite imagery of urban poverty. Here the eligibility criteria were made public, but not in a way people could really understand: according to independent observers, only 61% of beneficiaries knew why they had been registered.
Other cases we have studied show a blatant disregard for the privacy of beneficiaries. You might assume that information about whether you qualify for benefits would be private. But under Paraguay's emergency social protection program, for example, the list of beneficiaries was published in an Excel spreadsheet on the government's website, accessible to anyone, making public the names, identification numbers and districts of all beneficiaries.
These are not just teething problems in otherwise laudable state interventions. They are inevitable consequences of data-intensive social protection systems, which leave huge swaths of people – often those already the most marginalized and vulnerable – outside the system.
No social benefit system will be perfect in terms of coverage. However, much can be done to mitigate some of the serious failings we found, including ensuring transparency and clarity about eligibility criteria and the data used to make welfare decisions, as well as recourse mechanisms.
No matter how good the technology, some people will always fall through the cracks. If we believe that progressive social protection systems are a cornerstone of our humanity, then those systems must be more humane.