AI for social protection: pay attention to people

The technology that allowed passengers to ride elevators without an operator was tested and ready for deployment in the 1890s. But it wasn’t until the elevator operators’ strike of 1946 – which cost New York $100 million – that automated elevators began to be installed. It took over 50 years to persuade people that they were as safe and convenient as elevators operated by humans. The promise of radical change from new technologies has often overshadowed the human factor that ultimately determines if and when those technologies will be used.

Interest in artificial intelligence (AI) as an instrument for improving efficiency in the public sector is at an all-time high. This interest is motivated by the ambition to develop neutral, scientific and objective techniques of governmental decision-making (Harcourt 2018). As of April 2021, the governments of 19 European countries had launched national AI strategies. The role of AI in achieving the Sustainable Development Goals has recently caught the attention of the international development community (Medaglia et al. 2021).

Advocates argue that AI could radically improve the efficiency and quality of public service delivery in education, health, social welfare and other sectors (Bullock 2019; Samoili et al. 2020; de Sousa 2019; World Bank 2020). In the area of social protection, AI could be used to assess eligibility and need, make enrollment decisions, deliver benefits, and monitor and manage benefit delivery (ADB 2020). Given these benefits, and the fact that AI technology is readily available and relatively inexpensive, why has AI not been widely used in social protection?

Large-scale applications of AI in social protection have been limited. A study by Engstrom et al. (2020) of 157 public sector uses of AI by 64 US government agencies found only seven welfare-related cases, in which AI was used primarily for predictive risk screening of referrals to child welfare agencies (Chouldechova et al. 2018; Clayton et al. 2019).

Only a handful of evaluations of AI in social protection have been conducted, including evaluations of homelessness assistance (Toros and Flaming 2018), unemployment benefits (Niklas et al. 2015) and child protective services (Hurley 2018; Brown et al. 2019; Vogl 2020). Most were based on proofs of concept or pilots (ADB 2020). Examples of successful pilots include the automation of Swedish social services (Ranerup and Henriksen 2020) and the government of Togo’s experiment with machine learning, using mobile phone metadata and satellite imagery to identify the households most in need of social assistance (Aiken et al. 2021).
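The Togo approach, as Aiken et al. (2021) describe it, trains machine-learning models on features extracted from mobile phone metadata to predict which subscribers are likely to be poorest, then directs cash transfers to those below a predicted-poverty cutoff. Below is a minimal sketch of that kind of targeting pipeline; the feature set, data and thresholds are invented for illustration and are far simpler than the actual system.

```python
# Sketch of ML-based poverty targeting in the spirit of Aiken et al. (2021).
# All features, data and thresholds are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1_000

# Hypothetical per-subscriber features aggregated from call-detail records.
X = np.column_stack([
    rng.poisson(30, n),       # calls per month
    rng.exponential(5, n),    # mean call duration (minutes)
    rng.poisson(10, n),       # number of distinct contacts
    rng.uniform(0, 50, n),    # mobile-money transactions per month
])
# Synthetic "ground truth" consumption, as if from a small phone survey.
y = 0.5 * X[:, 0] + 2.0 * X[:, 3] + rng.normal(0, 5, n)

# Train on the surveyed subsample, then predict for everyone else.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# Rank subscribers by predicted consumption and target the poorest 20%.
predicted = model.predict(X_test)
cutoff = np.quantile(predicted, 0.20)
eligible = predicted <= cutoff
print(f"{eligible.sum()} of {len(predicted)} subscribers flagged for assistance")
```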

Some debacles have eroded public confidence. In 2016, Services Australia, the Australian government agency that delivers social, health and child support services and payments, launched Robodebt, an AI-based system designed to calculate overpayments and issue debt notices to social assistance recipients by comparing social security payment system data with Australian Taxation Office income data. The new system mistakenly sent debt notices amounting to $900 million to more than 500,000 people (Carney 2021). The failure of the Robodebt program damaged public perception of the use of AI in social security administration.
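The mechanism behind those errors, as later inquiries documented, was income averaging: annual tax-office income was spread evenly across fortnights and compared with the fortnightly earnings recipients had actually reported, so anyone with irregular income could be flagged as overpaid. The sketch below illustrates that failure mode; the taper parameters, figures and function names are hypothetical simplifications, not the actual system’s rules.

```python
# Illustrative sketch of the income-averaging logic reported to be at the
# heart of Robodebt's errors. All figures and names are hypothetical.
FORTNIGHTS_PER_YEAR = 26

def averaged_overpayment(annual_ato_income: float,
                         reported_fortnightly: list[float],
                         income_free_area: float = 300.0,
                         taper_rate: float = 0.5) -> float:
    """Infer a 'debt' by spreading annual income evenly across fortnights,
    instead of using the actual fortnightly earnings a recipient reported."""
    averaged = annual_ato_income / FORTNIGHTS_PER_YEAR
    debt = 0.0
    for actual in reported_fortnightly:
        # Benefit reduction implied by each income figure (simplified taper).
        reduction_actual = max(0.0, actual - income_free_area) * taper_rate
        reduction_averaged = max(0.0, averaged - income_free_area) * taper_rate
        # Averaging overstates income in low-earning fortnights, so the
        # system concludes the person was paid too much benefit then.
        debt += max(0.0, reduction_averaged - reduction_actual)
    return debt

# A seasonal worker: earned $13,000, all of it in 10 fortnights.
earnings = [1300.0] * 10 + [0.0] * 16
print(averaged_overpayment(13_000.0, earnings))  # spurious positive "debt"
```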

In the United States, the Illinois Department of Children and Family Services stopped using predictive analytics in 2017, after warnings from staff that poor data quality and concerns about the procurement process made the system unreliable. The Los Angeles Office of Child Protection terminated its AI-based project, citing the algorithm’s “black box” nature and high incidence of errors. Similar data quality issues plagued a data-driven approach to identifying vulnerable children in Denmark (Jørgensen 2021), where the project was halted in less than a year, before it was ever fully implemented.

The human factor in the adoption of AI for social protection

Research on the use of AI in social protection offers at least five cautionary tales about the risks involved and the consequences for people’s lives of algorithmic biases and errors.

The problem of accountability and “explainability”: Public officials are often required to explain their decisions to citizens – for example, why someone was denied benefits (Gilman 2020). However, many AI-based results are opaque and not fully explainable because they integrate many factors into multi-step algorithmic processes (Selbst et al. 2018). A key consideration in advancing AI in social protection is how AI-driven discretion fits into the system’s regulatory, transparency, grievance and accountability frameworks (Engstrom 2020). The broader risk is that, without adequate grievance redress systems, automation can disempower citizens – especially minorities and disadvantaged people – by treating them as analytical data points.

Data quality: The quality of administrative data profoundly affects the effectiveness of AI. In Canada, poor data quality produced errors that led to inappropriate foster care placements and failures to remove children from unsafe environments (Vogl 2020). The tendency to favor legacy systems can undermine efforts to improve data architecture (Mehr et al. 2017).

Misuse of integrated data: Applications of AI in social protection require a high degree of data integration, which relies on data sharing across agencies and databases. In some cases, the use of data can turn into the exploitation of data. For example, the Florida Department of Children and Families collected multidimensional data on students’ education, health and home environments. These data were later matched with Sheriff’s Office records to identify and maintain a database of minors deemed at risk of becoming prolific offenders. In such cases, data integration creates new opportunities for controversial overreach, departing from the purposes for which the data were originally collected (Levy 2021).

Response from public officials: Plans to adopt AI should not assume that welfare caseworkers can easily shift from processing claims and making decisions to managing AI systems (Ranerup and Henriksen 2020; Brown et al. 2019). Officials may fail to take the recommendations of predictive algorithms into account, or may use this information in ways that harm the performance of the system and violate assumptions about its accuracy (Garvie 2019).

Public response and public trust: Using AI to make decisions and judgments about the provision of benefits could exacerbate inclusion and exclusion errors because of data-driven biases, and raises ethical concerns about accountability for decisions that change lives (Ohlenburg 2020). Building trust in AI is therefore key to scaling up its use in social protection. However, a survey of Americans shows that almost 80% of respondents lack confidence in the ability of government organizations to manage the development and use of AI technologies (Zhang and Dafoe 2019). These concerns are fueling growing efforts to counter the potential threats that AI-based systems pose to people and communities. For example, AI-based risk assessments have been challenged on due process grounds, as in cases involving the denial of housing and public benefits in New York (Richardson 2019). Mikhaylov, Esteve and Campion (2018) argue that for governments to use AI in public services, they must foster public acceptance of it.

The future of AI in social protection

Too few studies have been conducted to suggest a clear path for expanding the use of AI in social protection. But it is clear that system design must take the human factor into account. Successful use of AI in social protection requires explicit institutional redesign, not just the adoption of AI as a narrowly technical IT tool. Effective use of AI requires coordination and evolution of the legal, governance, ethical and accountability components of the system. Fully autonomous AI may not be appropriate; a hybrid system, in which AI is used in conjunction with traditional processes, may be more effective at reducing risk and encouraging adoption (Chouldechova et al. 2018; Ranerup and Henriksen 2020; Wenger and Wilkins 2009; Sansone 2021).
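One plausible shape for such a hybrid – a hypothetical design sketch, not drawn from any of the cited pilots – is a system where the model only triages applications: it automates only decisions favorable to the applicant, while adverse or uncertain cases are routed to a human caseworker, preserving explainability and grievance channels.

```python
# Hypothetical human-in-the-loop triage: the model never issues an adverse
# decision on its own; uncertain or negative predictions go to a caseworker.
# The threshold value is an assumed policy parameter, not from the text.
from dataclasses import dataclass

AUTO_APPROVE_THRESHOLD = 0.90

@dataclass
class Decision:
    outcome: str   # "approve" or "review"
    reason: str

def triage(eligibility_score: float) -> Decision:
    """Route a benefit application based on a model's eligibility score."""
    if eligibility_score >= AUTO_APPROVE_THRESHOLD:
        # Automation is limited to decisions favorable to the applicant.
        return Decision("approve", f"score {eligibility_score:.2f} above threshold")
    # Everything else -- including confident model 'denials' -- goes to a
    # human caseworker, keeping accountability with a person.
    return Decision("review", f"score {eligibility_score:.2f} needs a caseworker")

for score in (0.95, 0.55, 0.10):
    print(triage(score))
```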

International development institutions could help countries address the people-centred challenges that public sectors face in adopting new technologies – this is their comparative advantage over the technology sector. Investments in research on the bottlenecks to using AI for social protection could yield high development returns.

Joel C. Hicks