UN Special Rapporteur warns of the dangers of a ‘digital welfare state’

While developments in technologies such as facial recognition and artificial intelligence (‘AI’) used to deliver social welfare and public services have been widely welcomed, the UN Special Rapporteur on extreme poverty and human rights, Professor Philip Alston, states in his latest report that many nations are at risk of “stumbling zombie-like into a digital welfare dystopia” that will punish the poorest in society.

There is a wealth of support for developments in facial recognition and AI. For example, UN Under-Secretary-General Armida Salsiah Alisjahbana argued recently at an urban conference in Penang, Malaysia, that these developments “hold promise to reimagine how the public sector can better serve sustainable needs… fast-evolving technologies have the potential to transform the traditional way of doing things across all government functions and domains”.

According to a report jointly prepared by the United Nations Economic and Social Commission for Asia and the Pacific (‘ESCAP’) and Google called ‘Artificial Intelligence in the Delivery of Public Services’, AI, which includes machine learning, autonomous systems and data processing systems, is currently being used in crime prevention, trademark applications and to improve crop yields. The report presents these as illustrations of AI’s wide-ranging potential applications across public services.

However, Professor Philip Alston argues that “there are a great many cheerleaders extolling the benefits, but all too few counselling sober reflection on the downsides”. His report shines a light on how developments in facial recognition and AI can entrench bias and interfere with security and privacy.

Alston writes that “crucial decisions to go digital have been taken by government ministers without consultation, or even by departmental officials without any significant policy discussions taking place”. As a result of this absence of accountability, “digital technologies are employed in the welfare state to surveil, target, harass and punish beneficiaries, especially the poorest and most vulnerable among them”. In other words, such developments introduce automated decision-making and remove human discretion.

Alston says that these systems are being adopted hastily because of the “allure of digital systems that offer major cost savings along with personnel reductions, greater efficiency, and fraud reduction, not to mention the kudos associated with being at the technological cutting edge”.

He argues that the predictable consequences are being ignored. He says the future is one in which “unrestricted data matching is used to expose and punish the slightest irregularities in the record of welfare beneficiaries; evermore refined surveillance options enable around the clock monitoring of beneficiaries; conditions are imposed on recipients that undermine individual autonomy and choice in relation to sexual and reproductive choices, and in relation to food, alcohol and drugs and much else; and highly punitive sanctions are able to be imposed on those who step out of line”.

One of the negative impacts Alston explores is the bias that predictive analytics, algorithms and other forms of AI can impose on key groups and individuals seeking social protection. He argues that such individuals “need to be able to understand and evaluate the policies that are buried deep within the algorithms”, which would require transparency and inclusion in decision-making.

Alston ends his report by arguing that the “real digital welfare state revolution” would involve using these developments to ensure a higher standard of living for the vulnerable and disadvantaged and to stop “obsessing” over fraud, cost savings, sanctions and market-driven definitions of efficiency.

Click here for the report.