The adoption of AI technologies into society will disproportionately affect different sectors and raise ethical questions that need to be addressed
By Liu Zhiyi, Digital Economist/Executive Director of the Center for Computational Law and AI Ethics, and Distinguished Researcher of the AI and Marketing Research Center of Antai College of Economics and Management, Shanghai Jiaotong University
Artificial intelligence (AI) has undergone dramatic development since it first emerged, growing from algorithmic models in labs into powerful industrial applications, and it is now the core driver of the next industrial revolution, facilitating major transformations across many industries. The development of intelligent algorithms has been propelled by deep learning, Big Data technology and the expansion of supercomputing capabilities such as cloud computing. All of these have been crucial to the development of AI and are now widely used across multiple industries and fields of study. AI has also heavily influenced global economic and social structures, and will no doubt have an even greater impact on the development of human society in the future.
Robotics, natural language processing (NLP), speech recognition and autonomous driving are just a few examples of the practical applications of AI. Each has gradually grown in importance, with constant breakthroughs in related technical fields and a steady stream of application innovations.
These innovations are, in the short-term at least, mostly concentrated in specific regions of the world—such as China, the US and parts of Europe—and only in certain industries. But looking towards the future, as AI and robotics systematically enter social and economic systems they will alter the fundamental logic of how everything operates from the bottom up. Such dramatic changes pose many challenges and questions that must be addressed to ensure a smooth integration of AI into our world.
It is well understood that AI is bringing about major changes to the economies of all societies in which it is utilized, and the most obvious change is in employment. One common narrative holds that the introduction of AI will create new jobs by raising productivity and real income levels, while at the same time replacing many older jobs, and that the key is to find a balance between creation and replacement. It is predicted that AI technologies will replace around 30% of existing jobs in China over the next 20 years, but are unlikely to create anywhere near as many replacement roles, and this disparity will impact many industries and sectors.
Such effects will manifest in two ways: through “substitution” and through “income effects.” Substitution is the replacement of workers with AI and robots, leading to a reduction in jobs for humans, particularly in industries such as agriculture and manufacturing.
Income effects, by contrast, arise when companies need to hire more people to meet the new demand created by the arrival of AI technology. This will be common in service sectors such as health care, where medical robots will need supervision from those trained in the use of AI technologies, an example of a newly created role.
In countries with large service sector-driven economies, the level of “substitution” will be lower and the impact of the introduction of AI on employment will be less marked. However, global predictions of the long-term impact on employment of these technological advances, which are based on technical characteristics and development trends, are mostly pessimistic.
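The interplay of substitution and income effects described above can be sketched as simple arithmetic. The function and the figures below are purely illustrative assumptions (only the ~30% substitution estimate comes from the article; the workforce size and job-creation rate are hypothetical):

```python
# Toy model (hypothetical numbers): net employment change equals
# jobs created via the income effect minus jobs lost to substitution.

def net_employment_change(workforce, substitution_rate, income_effect_rate):
    """Return the net change in jobs for a given workforce size.

    substitution_rate: fraction of existing jobs replaced by AI/robots.
    income_effect_rate: new jobs created by AI-driven demand,
    expressed as a fraction of the same workforce.
    """
    jobs_lost = workforce * substitution_rate
    jobs_created = workforce * income_effect_rate
    return jobs_created - jobs_lost

# Using the article's ~30% substitution figure and an assumed,
# smaller 18% creation rate for a 100-million-person workforce:
print(net_employment_change(100_000_000, 0.30, 0.18))  # -12000000.0
```

A negative result corresponds to the pessimistic long-term predictions the article cites: creation lagging replacement yields a net loss of jobs.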
With the development of AI technology, traditional methods of teaching, learning, research and governance are all being challenged. From a macro perspective, adoption of AI technologies will have an impact on four main areas of education: education management and supply, with AI services being used to supply corresponding education management content; learning and assessment, such as the grading of tests; the empowerment of teachers to improve teaching quality; and meeting the increasing requirement for lifelong learning. Within each of these areas there are a host of related ethical issues that need to be considered, and the US and European Union are already pushing to make sure that AI risks are accounted for.
In China, AI in education has been used mostly to create digital, interactive classroom tools using Big Data networks, which in turn significantly improve educational capacity and performance. A typical example is the introduction of tablets in schools, which can be used as educational “chat bots” and interactive sources of information, whether through standardized questions or through AI, natural language processing and machine learning. Such programs also exist in other countries.
An advantage of this is that it helps reduce the costs of education and teaching, as much of the content can be handled without teacher involvement. However, an AI application was recently used to monitor students’ level of attention in class, track attendance and predict teachers’ performance, all of which raise ethical questions about efficiency-centric attempts to optimize human behavior.
AI must be implemented in a way which allows for sustainable development, takes into account the need for human dignity and values, and is credible—it needs to work. Each of these requirements needs to be seriously addressed before the widespread adoption of an AI technology.
In addition to changes to education and employment, it should be recognized that the content and delivery methods of health care systems will change as a result of the introduction and development of AI technology. This includes a vast array of different sub-sectors such as surgical and health care robots, intelligent public health and epidemic prevention systems based on Big Data, intelligent transportation facilities, and precise allocation systems for various public service resources.
Given that the health care industry, whether that be clinical care, biomedical research or medical device manufacturing, has a direct impact on people’s lives and well-being, the potential knock-on impact of AI development is huge. China’s medical industry currently faces many endemic issues, but at the same time it is digitizing rapidly and the development of AI technology continues to deepen, with medical robots, intelligent consultation and other medical AI applications all emerging.
Insurance giant Ping An has an app, Ping An Good Doctor, which is a so-called “AI Doctor.” This can function as a diagnostic tool, and just as with teacherless education systems, the app can respond with standardized comments without a human being present. AI-based diagnostic tools can also be used to predict clinical conditions and optimize systems, as has been seen in Shanghai, and China more broadly, during the COVID-19 pandemic. With the vast amount of information collected on each individual, there is the possibility that an entire chain of virus transmission can be traced incredibly quickly and the virus located and contained.
But at the same time, this raises questions related to people’s privacy and control over their own personal data—an issue that is being addressed worldwide. These data sovereignty concerns are particularly relevant when it comes to the intensely personal nature of health care, but they also spread into social governance as a whole.
The connection between commercial organizations and public administrations on technology platforms has greatly increased the interaction between governance subjects and governance objects. While this can provide greater convenience in many cases, it has also increased the risk of rights-boundary violations and information security breaches, and may further widen the digital divide. Biometric information recognition is used by public service institutions and government departments when they deliver services to the public, and facial recognition and virtual digital imaging are becoming a growing part of public infrastructure.
But, based on data from the China Internet Network Information Center, there are still over 100 million people in China who do not have access to the internet, and a large proportion of those are vulnerable groups such as the elderly and the disabled. This means that commercial organizations and public administration organizations may not be able to connect with them effectively on technology platforms, resulting in possible denial of access to social public services and guarantees. Such an expansion of the digital divide may lead to the further division of rights.
Each individual is supposed to have the right to decide whether their private data, such as biometric information, will be used for other purposes. Yet choosing not to share this data, in order to reduce technological risks or protect their privacy, may deprive citizens of access to certain societal benefits. A refusal to allow new technologies to become deeply ingrained in their lives can impinge on their rights to basic social public services and infrastructure. Balancing convenience and security requires a concerted effort from the legal system to recognize and enforce the moral standards and norms of social life among members of various social groups, including families, to ensure social harmony and sustainable development. The means of social governance must therefore become more diverse and intelligent, through a process of active adaptation by governments, social organizations, participating businesses and individual citizens.
From a global perspective, national AI regulators are mostly composed of political entities, and they need to form effective linkages with communities of expertise when addressing the challenges of new topics such as AI. Regulators should therefore treat individuals with specialized knowledge as the cornerstone of governance, building focused governance teams that cover AI-related mathematics, computer science and other disciplines to provide the knowledge reserves and intellectual support that regulation requires.
What we need to make clear is that the principles, subjects and tools used for governing AI differ from traditional governance frameworks, meaning there is a need for greater agility in AI governance. It is necessary to continuously improve technological approaches, optimize the governance mechanisms, detect and solve any possible risks in a timely manner, and continuously carry out research and due diligence on the potential risks of more advanced AI. Only if countries around the world jointly seek effective paths and explore solutions to global problems can AI better benefit humanity in the near or even distant future.