Global Initiatives to Support AI Governance and Ethics

Any new technology that changes our businesses or society for the better often has a potential dark side that is viewed with suspicion and mistrust. The media, especially on the Internet, is eager to play on our fears and invoke a dystopian future in which technology has spun out of control or is used for nefarious purposes. For examples of how technology can be used in unexpected and unethical ways, one need only look at science fiction films, AI-to-AI chatbot conversations, autonomous killer robots, facial recognition used for mass surveillance, or the writings of Sci-Fi authors such as Isaac Asimov and Iain M. Banks, which portray grim uses of technology.

This situation is only exacerbated by social media and the prevalence of “fake news” that can quickly propagate incorrect, unscientific or unsubstantiated rumours.

As AI evolves, it raises new ethical and legal questions. AI works by analysing the data that is fed into it and drawing conclusions based on what it has learned or been trained to do. Though it has many benefits, it can pose a threat to individuals and to data privacy, and the outcomes of its decisions can cause harm. To reduce the chances of such outcomes, organisations and policymakers are crafting recommendations to ensure the responsible and ethical use of AI. Governments are taking it a step further, developing principles and drafting laws and regulations, while tech developers are attempting to self-regulate their AI capabilities.
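To make that point concrete, here is a minimal, hypothetical Python sketch (using scikit-learn): a model's "conclusions" are entirely a product of whatever data it was trained on, so any gap or bias in that data flows straight into its decisions. The data and the loan-approval framing below are invented purely for illustration.

```python
# A minimal sketch: a model's output reflects only its training data.
from sklearn.linear_model import LogisticRegression

# Toy training data: feature vectors and labels (e.g. deny=0 / approve=1
# a loan). Entirely invented for illustration.
X_train = [[0.2, 1.0], [0.4, 0.8], [0.9, 0.1], [0.8, 0.3]]
y_train = [0, 0, 1, 1]

model = LogisticRegression()
model.fit(X_train, y_train)

# The prediction reflects only the patterns present in X_train/y_train;
# any bias or gap in that data is reproduced in the output.
print(model.predict([[0.85, 0.2]]))  # -> [1]
```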

Amit Gupta, CEO of Ecosystm, interviewed Matt Pollins, a Partner at the renowned law firm CMS, to discuss the implementation of regulations for AI.

To maximise the benefits of science and technology for society, the World Economic Forum (WEF) – an independent international organisation for Public-Private Cooperation – announced in May 2019 the formation of six separate Fourth Industrial Revolution councils in San Francisco.

The goal of the councils is to work at a global level on new technology policy guidance, best policy practices and strategic guidelines, and to help regulate technology across six domains – AI, precision medicine, autonomous driving, mobility, IoT and blockchain. Over 200 industry leaders from organisations such as Microsoft, Qualcomm, Uber, Dana-Farber, the European Union, the Chinese Academy of Medical Sciences and the World Bank are participating, to address concerns around the absence of clear, unified guidelines.

Similarly, the Organisation for Economic Co-operation and Development (OECD) created a global reference point for AI adoption, with principles and recommendations for governments across the world. The OECD AI Principles are called “values-based principles” and are clearly envisioned to endorse AI “that is innovative and trustworthy and that respects human rights and democratic values.”

Likewise, in April 2019, the European Union published a set of guidelines on how companies and governments should develop ethical applications of AI, addressing issues that might affect society as AI is integrated into sectors such as healthcare, education and consumer technology.

The Personal Data Protection Commission (PDPC) in Singapore presented the first edition of its Proposed Model AI Governance Framework (Model Framework) – an accountability-based framework to help chart the language and frame the discussions around harnessing AI in a responsible way. We can see several organisations coming forward on AI governance: NEC released the “NEC Group AI and Human Rights Principles”, Google has created AI rules and objectives, and the Partnership on AI was established to study and formulate best practices on AI technologies.

 

What could be the real-world challenges around the ethical use of AI?

Progress in the adoption of AI has produced some incredible cases benefitting various industries – commerce, transportation, healthcare, agriculture, education – and offering efficiency and savings. However, AI developments are also expected to disrupt several legal frameworks, owing to concerns about AI implementation in high-risk areas. The challenge today is that several AI applications have been used by consumers or organisations only for them to realise later that the project was not ethically sound. One example is the development of fully autonomous AI-controlled weapon systems, which is drawing criticism from nations across the globe and from the UN itself.

“Before an organisation embarks on a project, it is vital for a regulation to be in place right from the beginning. This enables the vendor and the organisation to reach a common goal and understanding of what is ethical and right. With such practices in place, bias, breaches of confidentiality and ethical lapses can be avoided,” says Ecosystm Analyst, Audrey William. “Apart from working with the AI vendor and a service provider or systems integrator, it is highly recommended that the organisation consult a specialist such as the Foundation for Responsible Robotics, Data & Society or the AI Ethics Lab, which help examine the parameters of ethics and bias before project deployment.”

Another challenge arises from a data protection perspective, because AI models are fed data sets for their training and learning. This data is often obtained from usage history and data tracking, and it may compromise an individual’s identity. The use of this information can breach user rights and privacy, leaving an organisation facing legal, governance and ethical consequences.
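As a minimal illustration of one basic safeguard, the hypothetical Python sketch below pseudonymises a direct identifier before a usage-history data set is handed to a training pipeline. The column names and records are invented, and hashing alone is not full anonymisation – it is just one of the measures a data protection review would typically require.

```python
# A minimal sketch: pseudonymise direct identifiers before model training.
import hashlib
import pandas as pd

# Hypothetical usage-history data; column names invented for illustration.
df = pd.DataFrame({
    "user_id": ["alice@example.com", "bob@example.com"],
    "pages_viewed": [12, 7],
    "purchases": [1, 0],
})

# Replace the raw identifier with a salted hash so records can still be
# linked for training without exposing the underlying identity.
SALT = "replace-with-a-secret-salt"
df["user_id"] = df["user_id"].map(
    lambda v: hashlib.sha256((SALT + v).encode()).hexdigest()[:16]
)

print(df.head())
```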

Another area that receives too little attention is racial and gender bias. Phone manufacturers have been criticised in the past over racial and gender bias in facial recognition, where the fewest identification errors occur with light-skinned males. This has opened conversations about how the technology performs on people of different races and genders.
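The kind of audit that surfaced these gaps can be sketched in a few lines: compare a model’s error rate across demographic groups and flag any large disparity. The Python sketch below uses invented results purely for illustration.

```python
# A minimal sketch of a per-group error-rate audit for a classifier.
from collections import defaultdict

# (group, true_label, predicted_label) from a hypothetical face classifier.
results = [
    ("lighter_male", 1, 1), ("lighter_male", 1, 1), ("lighter_male", 0, 0),
    ("darker_female", 1, 0), ("darker_female", 1, 1), ("darker_female", 0, 1),
]

errors = defaultdict(lambda: [0, 0])  # group -> [wrong, total]
for group, truth, pred in results:
    errors[group][0] += int(truth != pred)
    errors[group][1] += 1

# A large gap between groups is the signal that the model is biased.
for group, (wrong, total) in errors.items():
    print(f"{group}: error rate {wrong / total:.0%}")
```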

San Francisco recently banned the use of facial recognition by the police and other agencies, arguing that the technology may pose a serious threat to civil liberties. “Implementing AI technologies such as facial recognition solutions means organisations have to ensure that there are no racial bias and discrimination issues. Any inaccuracies or glitches in the data can make the machines untrustworthy,” says William.

Given what we know about existing AI systems, we should be very concerned that breaches of humanitarian law by this technology are more likely than not.

Could strong governance restrict the development and implementation of AI?

The disruptive potential of AI poses looming risks around ethics, transparency and security, hence the need for greater governance. AI will be used safely only once governance frameworks and policies mandating its responsible use are in place.

William thinks that, “AI deployments have positive implications for creating better applications in health, autonomous driving and smart cities – and eventually a better society. Worrying too much about regulations will impede the development of AI. A fine line has to be drawn between the development of AI and ensuring that development does not cross the boundaries of ethics, transparency and fairness.”

 

While AI as a technology has some way to go before it matures, it is currently the responsibility of both organisations and governments to strike a balance between technology development and use on the one hand, and regulations and frameworks in the best interest of citizens and civil liberties on the other.
