AI & the New Age of Social Control

Posted by Walter Pasquarelli on Nov 30, 2018 4:56:32 PM

By Henry Ajder & Walter Pasquarelli

 

Social control, the mechanism for maintaining social order within communities, is as old as humanity itself. Without mechanisms for ensuring some degree of social control, communities would have been difficult to organise and maintain. Artificial intelligence (AI) is a game changer in this field, reflecting fundamental shifts in the contemporary technological and socio-political landscape. This piece explores these shifts, along with the unique ethical questions that accompany AI in the new age of social control.

A (Very) Brief History of Social Control

 

Throughout history, dominant classes have developed systems to maintain social order and sanction 'deviant' behaviour. To this end, police and other 'enforcement' entities have acted as social regulators, countering destabilising or criminal behaviours deemed a threat to social order. However, social control has frequently been used to viciously consolidate power and prevent social change. The Spanish Inquisition, the German Gestapo, and Snowden's NSA revelations are all examples of state authorities crossing the line between enforcing social order and using mass surveillance systems to crack down on 'dissenting behaviour'. Nonetheless, these enforcement agencies have historically functioned as extensions of a single, centralised state entity, which held a monopoly on the means of exercising social control.

 

AI and the Commodification of Social Control

 

Rapid commercial developments in AI have fundamentally changed this centralised state monopoly on the means of social control. This shift is intimately connected to the skyrocketing production of personal data, particularly through digital consumer technologies. Constantly connected smartphones and the Internet of Things, among others, provide companies with an abundance of customer data like never before. By feeding this mass of data through machine learning and predictive analytics programmes, businesses can influence individual purchasing behaviour, for example through bespoke ads or personalised discounts. Corporations have long sensed the immense profitability of the analytics market, where personal data is traded between companies and governments; this is reflected in the market's valuation of over $200bn USD as of 2017.

 

These developments have led to the commodification of social control, where digital tools are commercially developed or repurposed to aid social control initiatives. Cambridge Analytica provided a prime example of how commercial data analytics tools can easily be repurposed to build voter profiles in order to influence political campaigns. Alternatively, Palantir's 'predictive policing' programme illustrates how commercial entities are developing AI-powered tools explicitly for state implementation. By collecting and mining personal data, Palantir is building 'a constantly expanding surveillance database that's fully accessible without a warrant', allowing law enforcement to 'predict' who may be about to commit a crime. The result is that, in the age of AI, social control is no longer a singular, state-led activity: it is a joint venture between public and private sector agents.

 

AI and Social Control in Practice  

 

China has long been considered the prime example of how AI is used for social control. Most infamously through its social credit system, the Chinese government has used technology to surveil its citizens and condition their behaviour. Since the introduction of the social credit system, however, China has made giant leaps forward. First, it announced plans to increase the number of CCTV cameras in the country to 626m by 2020. Additionally, the South China Morning Post reported initiatives by the Chinese government to develop pigeon-like drones equipped with facial recognition software to surveil 'problematic' areas. Potential infringements would then feed into the credit system, creating a vicious surveillance circle. The Chinese company Baidu has been at the forefront of these technological developments, working conjointly with the Chinese government.

Nevertheless, business-to-government (B2G) cooperation for social control is not exclusive to China. Palantir has repeatedly come under public scrutiny for collecting massive amounts of personal data and selling it to its customers. Much like the Baidu-government cooperation, Palantir won an $876m USD contract to replace the US Army's Distributed Common Ground System, the US military's data and intelligence software. Furthermore, several European countries have announced plans to expand the application of facial recognition software in public spaces.

 

So What’s Left of the Ethics?

 

The application of AI to social control systems creates a unique set of ethical problems. These are not exclusive to a particular use-context; they arise in all contexts where AI may contribute to structures of social control, whether in authoritarian regimes or liberal democracies.

 

Data Privacy and Threats to Civil Liberties

 

One crucial point is whether using AI to collect and analyse public data is morally permissible. Mass collection and mining of personal data could be seen as an affront to civil liberties and an abuse of power. The collection of biometric data that enables face or gait identification, in particular, presents unprecedented means of profiling and surveilling individuals. Granted, this may enhance the police's ability to track and identify wanted criminals. Yet it may also be viewed as a pernicious overreach of state power that violates citizens' right to basic privacy. Certain legislation, such as the EU's General Data Protection Regulation, has moved to prevent the standardised misuse of personal data. Yet the effectiveness of this legislation in combating data collection for government surveillance and other forms of social control is far from clear.

 

Algorithmic Bias and Accountability

 

Secondly, there are ethical issues with AI itself. One of these stems from the threat of algorithmic bias, where algorithms reflect the biases of their human programmers or produce inaccurate analysis based on incomplete source data. This is especially problematic given the current tendency, as Cathy O'Neil identified, to place 'blind faith' in AI and its outputs. In the previously mentioned predictive policing systems, algorithms simply reinforced police biases that ethnic minorities were most likely to offend. This was in part due to incomplete and racially biased police data providing a disproportionate number of profiles of ethnic minority criminals to train the algorithms. If these outputs are dogmatically followed, AI-informed social control strategies may become reproduction mechanisms for biases and prejudice against individuals or groups within society.
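To make this mechanism concrete, consider a minimal, purely illustrative sketch in Python. It is not based on any real policing system; the synthetic data, the district labels, and all the numbers are hypothetical assumptions. It shows how a classifier trained on arrest records, rather than on actual offending, scores a more heavily patrolled district as 'riskier' even when underlying offence rates are identical.

```python
# A minimal sketch (not any real predictive-policing system) of how
# skewed training data makes a classifier reproduce enforcement bias.
# All data is synthetic and the district setup is hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# The true underlying offence rate is identical (5%) in both districts.
district = rng.integers(0, 2, size=n)   # 0 = district A, 1 = district B
offended = rng.random(n) < 0.05

# But district B is patrolled far more heavily, so offences there are
# four times more likely to end up in the arrest records used for training.
detection_rate = np.where(district == 1, 0.8, 0.2)
arrested = offended & (rng.random(n) < detection_rate)

# Train on arrests (what the records show), not offences (the ground truth).
X = district.reshape(-1, 1)
model = LogisticRegression().fit(X, arrested)

risk_a, risk_b = model.predict_proba([[0], [1]])[:, 1]
print(f"Predicted 'risk' in district A: {risk_a:.3f}")
print(f"Predicted 'risk' in district B: {risk_b:.3f}")
```

Under these assumptions the model scores district B roughly four times 'riskier' than district A, purely because its residents were arrested more often for the same behaviour: the skew in the records becomes the skew in the predictions, and dogmatically acting on them would direct yet more patrols to district B.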

 

Finally, whereas previously these strategies could be traced back to analysis and decisions made by human agents, the automation of these processes by AI makes assigning responsibility highly ambiguous. What follows is the question of who is liable for the decisions AI makes. Considered alongside the aforementioned commodification of social control, it is not clear which parties should be held accountable for miscalculations or for the sanctioning of AI-generated recommendations. Here it may be argued that some decisions and actions must be treated as necessarily human tasks, in order to establish a chain of liability leading back to a human agent. However, if AI can routinely deliver more accurate results or analysis on average, it is debatable whether we should value human liability over the improved average accuracy of AI when implementing social control systems.

 

Conclusion

Whilst the ethical questions we have considered are by no means easy to resolve, it is undeniable that AI will continue to have a profound impact on the practice of social control. The general trajectory identified in this article indicates that social control will become increasingly commodified, spearheaded by joint public and private sector initiatives. These initiatives will likely harness future advancements in AI, whilst adapting in response to changing socio-political trends. The challenge will be to ensure that AI is responsibly applied, establishing the right balance between social order and civil liberties.

 

Walter Pasquarelli

 

Walter Pasquarelli is a recent Politics (MPhil) graduate from Queens' College, Cambridge, and founder of machine-kind.com, a digital platform publishing research on the impact of AI on society. He has published several articles and collaborated with various research institutes, such as Boston Consulting Group's CPI, Harvard's Future Society, and the Future of Life Institute.

 

Topics: Social control