Your Organisation Needs an AI Ethics Policy TODAY!

It is not hyperbole to state that AI is on the cusp of having significant implications for society, business, economies, governments, individuals, cultures, politics, the arts, manufacturing, customer experience… I think you get the idea! We cannot overstate the impact that AI will have on society. In times gone by, businesses tested ideas, new products, or services with small customer segments before they went live. But with AI, we are all part of the experiment on its impact on society – its benefits, use cases, weaknesses, and threats.

What seemed preposterous just six months ago is not only possible but EASY! Do you want a virtual version of yourself, a friend, your CEO, or your deceased family member? Sure – just feed it the data. Will succession planning be more about recording all conversations and interactions with an executive so their avatar can make the decisions when they leave? Why not? How about turning the thousands of hours of recorded customer conversations with your contact centre team into a virtual contact centre team? Your head of product can present in multiple countries and languages at the same moment, tailored to the relevant customer segments, industries, geographies, or business needs.

AI has the potential to create digital clones of your employees; it can spread fake news as easily as real news; it can be used for deception as easily as for benefit. Is your organisation prepared for the social, personal, cultural, and emotional impacts of AI? Do you know how AI will evolve in your organisation?

When we focus on the future of AI, we often interview AI leaders, business leaders, futurists, and analysts. I haven’t seen enough focus on psychologists, sociologists, historians, academics, counsellors, or even regulators! The Internet and social media changed the world more than we ever imagined – at this stage, it looks like these two were just a rehearsal for the real show: Artificial Intelligence.

Lack of Government or Industry Regulation Means You Need to Self-Regulate 

These rapid developments – and the notable silence from governments, lawmakers, and regulators – make the requirement for an AI Ethics Policy for your organisation urgent! Even if you have one, it probably needs updating, as the scenarios that AI can operate within are growing and changing literally every day.  

  • For example, your customer service team might want to create a virtual customer service agent from a real person. What is the policy on this? How will it impact the person? 
  • Your marketing team might be using ChatGPT or Bard for content creation. Do you have a policy specifically for the creation and use of content using assets your business does not own?  
  • What data is acceptable to be ingested by a public Large Language Model (LLM)? Are you governing data at creation and publication to ensure these policies are met? 
  • With the impending public launch of Microsoft’s Copilot AI service, what data can be ingested by Copilot? How are you governing the distribution of the insights that come out of that capability? 

If policies are not put in place, data tagged, and staff trained before using a tool such as Copilot, your business is likely to break privacy or employment laws – on the very first day!
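To make the “data tagged” point concrete, here is a minimal sketch of what a pre-ingestion policy gate might look like. It assumes a hypothetical sensitivity label (for example, public, internal, confidential) attached to each document at creation; the label names, the blocked set, and the check_before_llm_ingestion function are illustrative assumptions, not any vendor’s actual API.

```python
# Minimal sketch of a policy gate applied before content is sent to a public LLM.
# The labels, policy sets, and function names are illustrative assumptions, not a real API.

from dataclasses import dataclass

# Sensitivity labels assumed to be assigned when a document is created or published
ALLOWED_FOR_PUBLIC_LLM = {"public"}
BLOCKED_FOR_PUBLIC_LLM = {"internal", "confidential", "personal-data"}

@dataclass
class Document:
    title: str
    sensitivity: str  # e.g. "public", "internal", "confidential", "personal-data"
    content: str

def check_before_llm_ingestion(doc: Document) -> bool:
    """Return True only if the document's label permits sending it to a public LLM."""
    if doc.sensitivity in BLOCKED_FOR_PUBLIC_LLM:
        return False
    return doc.sensitivity in ALLOWED_FOR_PUBLIC_LLM

# Usage: filter a batch of documents before they reach an external AI tool
docs = [
    Document("Published product brochure", "public", "..."),
    Document("Customer call transcript", "personal-data", "..."),
]
safe_to_send = [d for d in docs if check_before_llm_ingestion(d)]
print([d.title for d in safe_to_send])  # only the brochure passes the gate
```

The point of a gate like this is that the decision happens automatically at publishing or sharing time, rather than relying on each employee to remember the policy.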

What do the LLMs Say About AI Ethics Policies? 

So where do you go when looking for an AI Ethics Policy? ChatGPT and Bard, of course! I asked both for a modern AI Ethics Policy.

You can read what they generated in the graphic below.

[Image slider: the AI Ethics Policy drafts generated by ChatGPT and Bard, slides 1-3]

I personally prefer the ChatGPT-4 version as it is more prescriptive. At the same time, I would argue that MOST of the AI tools your business has access to today don’t meet all of these principles. And while they are tools, and ethics should dictate the way tools are used, with AI you cannot always separate the process and outcome from the tool itself.

For example, a tool that is designed to learn an employee’s character, style, or mannerisms cannot be unbiased if what it learns from is itself biased (and humans have biases!).

LLMs take data, content, and insights created by others and give them to their customers to reuse. Are you happy with your website being used to train a startup on the opportunities in the markets and customers you serve?

By making content public, you acknowledge the risk of others using it. But at least they visited your website or app to consume it. Not anymore… 

A Policy is Useless if it Sits on a Shelf 

Your AI Ethics Policy needs to be more than a published document. It should be the beginning of a conversation across the entire organisation about the use of AI. Your employees need to be trained in the policy. It needs to be part of the culture of the business – particularly as low- and no-code capabilities push these AI tools, practices, and capabilities into the hands of many of your employees.

Nearly every business leader I interview mentions that their organisation is an “intelligent, data-led business.” What is the role of AI in driving this intelligent business? If being data-driven and analytical is in the DNA of your organisation, soon AI will also be at the heart of your business. You might think you can delay your investments to get it right – but your competitors may be ahead of you.

So, as you jump head-first into the AI pool, start to create, improve, and/or socialise your AI Ethics Policy. It should guide your investments, protect your brand, empower your employees, and keep your business resilient and compliant with existing and new legislation and regulations.

Going Green: The Impact of COVID-19 on ESG Investing

Environmental, social, and governance (ESG) ratings have become a popular investment criterion for potential investors evaluating companies in which they might want to invest. As younger investors and others have shown an interest in investing based on their personal values, brokerage firms and mutual fund companies have begun to offer exchange-traded funds (ETFs) and other financial products that follow specifically stated ESG criteria. Robo-advisors such as Betterment and Wealthfront have also used ESG criteria in their passive investing offerings to appeal to this group.

The disruption caused by the pandemic has highlighted for many of us the importance of building sustainable and resilient business models based on multi-stakeholder considerations. It has also created growing investor interest in ESG.

ESG signalling for institutional investors

The increased interest in climate change, sustainable business investments, and ESG metrics is partly a reaction by society to assist the global transition to a greener and more humane economy in the post-COVID era. Efforts to create ESG standards for risk measurement will benefit and support that transition.

A recent study of asset managers by the investment arm of Institutional Shareholder Services (ISS) showed that more than 12% of respondents reported heightened importance of ESG considerations in their investment decisions or stewardship activities compared to before the pandemic.

Hedge funds have also seen increased demand for ESG-integrated investments since the start of COVID-19, according to 50% of respondents to a survey by BNP Paribas Corporate and Institutional Banking of 53 hedge fund firms with combined assets under management (AUM) of at least USD 500B.

ESG criteria may have a practical purpose beyond any ethical concerns, as they can help investors avoid companies whose practices could signal risk. As ESG gains traction, investment firms such as JPMorgan Chase, Wells Fargo, and Goldman Sachs have published annual reports that highlight and review their ESG approaches and the bottom-line results.

But even with more options, the need for clarity and standards on ESG has never been more important. In my opinion, there must be an enhanced effort to standardise and harmonise ESG rating metrics.

How are ESG ratings made?

ESG ratings require both quantitative and qualitative/narrative disclosures by companies in order to be calculated. And if no data is disclosed or available, rating providers fall back on estimates.

No global standard has been defined for what is included in a given company’s ESG rating. Attempts at standardising the list of ESG topics to consider include the materiality map developed by the Sustainability Accounting Standards Board (SASB) and the reporting standards created by the Global Reporting Initiative (GRI). But most ESG rating providers have been defining their own materiality matrices to calculate their scores.

Can ESG scoring be automatically integrated?

Just this month, Morningstar equity research analysts announced they will employ a globally consistent framework to capture ESG risk across over 1,500 stocks. Analysts will identify valuation-relevant risks for each company using Sustainalytics’ ESG Risk Ratings, which measure a company’s exposure to material ESG risks, then evaluate the probability that those risks materialise and the associated valuation impact. ESG rating firms such as MSCI, Sustainalytics, RepRisk, and ISS use a rules-based methodology to identify industry leaders and laggards according to their exposure to ESG risks, as well as how well they manage those risks relative to peers.

Sustainalytics’ ESG Risk Ratings, for example, measure a company’s exposure to industry-specific material ESG risks and how well the company is managing those risks. This approach combines the concepts of management and exposure to arrive at an assessment of ESG risk – the ESG Risk Rating – which should be comparable across all industries. But some critics feel this form of approach is still too subjective and too industry-specific to be relevant. This criticism matters when you understand that ESG ratings and their underlying scores may in future inform asset allocation. How might this be better automated and controlled? Perhaps adding some AI might be useful to address this?
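As a thought experiment on automation, here is a minimal sketch of how an exposure-and-management style score could be computed programmatically. The scale, formula, and the esg_risk_score function are illustrative assumptions for this article, not Sustainalytics’ actual methodology.

```python
# Illustrative sketch: combine "exposure" and "management" into a single ESG risk score.
# The formula and scale are assumptions for illustration, not any provider's real model.

def esg_risk_score(exposure: float, managed_fraction: float) -> float:
    """
    exposure: total material ESG risk the company faces (e.g. 0-100, higher = more risk)
    managed_fraction: share of that exposure the company actively manages (0.0-1.0)
    Returns the unmanaged risk that remains, comparable across companies on the same scale.
    """
    if not 0.0 <= managed_fraction <= 1.0:
        raise ValueError("managed_fraction must be between 0 and 1")
    unmanaged_risk = exposure * (1.0 - managed_fraction)
    return round(unmanaged_risk, 1)

# Example: two companies with the same exposure but different management quality
print(esg_risk_score(exposure=60.0, managed_fraction=0.7))  # 18.0 - lower residual risk
print(esg_risk_score(exposure=60.0, managed_fraction=0.3))  # 42.0 - higher residual risk
```

Even in this toy form, the critics’ point is visible: the choice of exposure scale and the judgement of how much risk is “managed” remain subjective inputs, however automated the arithmetic is.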

In one example, Deutsche Börse has recently led a USD 15 million funding round in Clarity AI, a Spanish FinTech firm that uses machine learning and big data to help investors understand the societal impact of their investment portfolios. Clarity AI’s proprietary tech platform performs sustainability assessments covering more than 30,000 companies, 198 countries, 187 local governments, and over 200,000 funds. Where companies like Cooler Future are working on an impact investment app for everyday individual users, Clarity AI has attracted a client network representing over USD 3 trillion of assets and funding from investors such as Kibo Ventures, Founders Fund, Seaya Ventures and Matthew Freud.

What about ESG Indices? What do they tell us about risk?

Core ESG indexing is the use of indices designed to apply ESG screening and ESG scores to recognised indices such as the S&P 500®, S&P/ASX 200, or S&P/TSX Composite. SAM, part of S&P Global, annually conducts a Corporate Sustainability Assessment, an ESG analysis of over 7,300 companies. Core ESG indices can then become actionable components of asset allocation when a fund or separately managed account (SMA) provider tracks the index.
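As a simple illustration of what “applying ESG screening to a recognised index” can mean in practice, here is a minimal sketch that filters hypothetical index constituents by an ESG score threshold and re-weights the survivors. The constituent data, threshold, and function name are made up for illustration; real index providers use far richer methodologies.

```python
# Illustrative sketch of core ESG index screening: drop low-scoring constituents,
# then re-weight the remainder by market cap. All data here is hypothetical.

constituents = [
    {"ticker": "AAA", "market_cap": 500.0, "esg_score": 72},
    {"ticker": "BBB", "market_cap": 300.0, "esg_score": 41},
    {"ticker": "CCC", "market_cap": 200.0, "esg_score": 65},
]

ESG_THRESHOLD = 50  # assumed minimum score to remain in the screened index

def screen_and_weight(members, threshold):
    """Keep constituents at or above the ESG threshold and re-weight by market cap."""
    kept = [m for m in members if m["esg_score"] >= threshold]
    total_cap = sum(m["market_cap"] for m in kept)
    return {m["ticker"]: m["market_cap"] / total_cap for m in kept}

print(screen_and_weight(constituents, ESG_THRESHOLD))
# {'AAA': 0.714..., 'CCC': 0.285...} - BBB is screened out for a low ESG score
```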

Back in 2017, the Swiss Federal Office for the Environment (FOEN) and the State Secretariat for International Finance (SIF) made it possible for all Swiss pension funds and insurance firms to measure the environmental impact of their stocks and portfolios for free. Currently, these federal bodies are testing use cases with banks and asset managers. The initial activities will be recorded in an action plan, which is due to be published in Spring 2021.

How can having a body of sustainable firms help create ESG metrics?

Creating ESG standard metrics and methodologies will be easier when there is a network of sustainable companies to analyse, which leads us to green fintech networks (GFNs) – groups of companies interested in exploring how their own technology investments can support ESG objectives. Switzerland is setting up a Green Fintech Network to help the country take advantage of the “great opportunity” presented by sustainable finance. The network has been launched by SIF alongside industry players, including green FinTech companies, universities, and consulting and law firms. Stockholm also has a Green Fintech Network that enables collaboration towards sustainability goals.

Concluding Thought

We should be curious about how ESG can provide decision-oriented information about intangible assets and non-financial risks and opportunities. More information and data from ESG data providers like SAM, combined with automation or AI tools, can potentially provide a more complete picture of how to measure the long-term sustainable performance of equity and fixed income asset classes.

Singapore FinTech Festival 2020: Investor Summit

For more insights, attend the Singapore FinTech Festival 2020: Investor Summit, which will cover topics tied to 2021 investor priorities, fundraising, and exit strategies.

Telstra using AI for Recruitment

In 2018, DBS Bank came together with AI start-up impress.ai to implement Jim – Job Intelligence Maestro – a chatbot that helps the bank shortlist candidates for positions in its wealth planning team, primarily screening for entry-level positions. Apart from process efficiency, the introduction of AI in the recruitment process is also aimed at eliminating bias and objectively finding the right candidate for the right job. The DBS chatbot uses cognitive and personality tests to assess candidates, as well as answering candidates’ frequently asked questions. The scores are then passed on to human recruiters who continue with the rest of the recruitment process. DBS claims that this has cut the initial assessment time for each applicant by an average of 22 minutes.

While some organisations have started evaluating the use of AI in their HR function, it has not reached the mass market yet. In the global Ecosystm AI study, we find that nearly 88% of global organisations do not involve HR in their AI projects. However, the use cases of AI in HR are many, and the function should be an active stakeholder in AI investments, particularly in customer-focused industries.

Telstra Employs AI to Vet Applicants

Last month, Australia’s biggest telecommunications provider Telstra announced plans to hire 1,000 temporary contact centre staff in Australia to meet the surge in demand amidst the global pandemic. In response, Telstra received an overwhelming 19,000 applications to go through and filter with a limited workforce. To make the recruitment process more efficient, the company has been using AI to filter the applications – and has been able to make initial offers within two weeks of screening. The AI software takes the candidates’ inputs and processes them to find the right match for the required skills. The candidates are also presented with cognitive games that generate their assessment scores.

Ecosystm Principal Advisor Audrey William speaks about the pressure on companies such as Telstra to hire faster for their contact centres. “Several organisations need to replace agents in their offshore locations and hire agents onshore. Since this is crucial to the customer experience they deliver, speed is of the essence.” However, William warns that the job does not stop with recruiting the right number of agents. “HR teams will need to follow through with a number of processes, including setting up home-based employees, training them adequately for the high volume of voice and non-voice interactions, compliance, and so on.”

The Future of AI in HR

William sees more companies adopting AI in their HR practices in the Workplace of the Future – and the role of AI will not be restricted to recruitment alone. “A satisfied employee will go the extra mile to deliver better customer experience and it is important to keep evaluating how satisfied your employees are. AI-driven sentiment analysis will replace employee surveys which can be subjective in nature. This will include assessing the spoken words and the emotions of an individual which cannot be captured in a survey.”

In the future, William sees an intelligent conversational AI platform as an HR feedback and engagement platform for staff to engage on what they would like to see, what they are unhappy about, their workplace issues, what they consider their successes and so on. This will be actionable intelligence for HR teams. “But for a conversational AI platform to work well and to encourage users within the organisation to use it, it must be designed well. While it has to be engaging to ensure employee uptake, the design does not stop at user experience. It must include a careful evaluation of the various data sets that should be assessed and how the AI can get easy access to that data.”

AI and Ethics

With the increased use of AI, the elephant in the room is always ethical considerations. While the future may see HR practices using conversational AI platforms, how ethical is it to evaluate your employees constantly, and what will be the impact on them? How will the organisation use that data? Will it end up giving employers a ready justification to reduce manpower at will? These and allied issues are areas where stricter government mandates are required.

Going back to AI-assisted recruitment, William warns, “Bias must be assessed from all angles – race, education, gender, voice, accents. Whilst many platforms claim that their solution removes bias, the most important part of getting this right is to make sure that the input data is right from the start. The outcomes desired from the process must be tested – and tested in many different ways – before the organisation can start using AI to eliminate bias. There is also the added angle of the ethical use of the data.”

Global Initiatives to Support AI Governance and Ethics

Any new technology that changes our businesses or society for the better often has a potential dark side that is viewed with suspicion and mistrust. The media, especially on the Internet, is eager to prey on our fears and invoke a dystopian future where technology has gotten out of control or is used for nefarious purposes. For examples of how technology can be used in unexpected and unethical ways, one can look at science fiction movies, AI-vs-AI chatbot conversations, autonomous killer robots, facial recognition for mass surveillance, or the writings of sci-fi authors such as Isaac Asimov and Iain M. Banks that portray a grim use of technology.

This situation is only exacerbated by social media and the prevalence of “fake news” that can quickly propagate incorrect, unscientific or unsubstantiated rumours.

As AI evolves, it is raising new ethical and legal questions. AI works by analysing the data that is fed into it and drawing conclusions based on what it has learned or been trained to do. Though it has many benefits, it can pose threats to human safety and data privacy, and raise concerns about the outcomes of its decisions. To curb the chances of such outcomes, organisations and policymakers are crafting recommendations to ensure the responsible and ethical use of AI. Governments are taking it a step further, working on the development of principles and drafting laws and regulations. Tech developers are also trying to self-regulate their AI capabilities.

Amit Gupta, CEO of Ecosystm, interviewed Matt Pollins, Partner at renowned law firm CMS, to discuss the implementation of regulations for AI.

To maximise the benefits of science and technology for society, in May 2019 the World Economic Forum (WEF) – an independent international organisation for Public-Private Cooperation – announced the formation of six separate Fourth Industrial Revolution councils in San Francisco.

The goal of the councils is to work at a global level on new technology policy guidance, best policy practices, and strategic guidelines, and to help regulate technology across six domains – AI, precision medicine, autonomous driving, mobility, IoT, and blockchain. Over 200 industry leaders from organisations such as Microsoft, Qualcomm, Uber, Dana-Farber, the European Union, the Chinese Academy of Medical Sciences, and the World Bank are participating, to address the concerns around the absence of clear, unified guidelines.

Similarly, the Organisation for Economic Co-operation and Development (OECD) created a global reference point for AI adoption principles and recommendations for governments across the world. The OECD AI Principles are called “values-based principles” and are clearly envisioned to endorse AI “that is innovative and trustworthy and that respects human rights and democratic values.”

Likewise, in April, the European Union published a set of guidelines on how companies and governments should develop ethical applications of AI to address the issues that might affect society as we integrate AI into sectors like healthcare, education, and consumer technology.

The Personal Data Protection Commission (PDPC) in Singapore presented the first edition of its Proposed Model AI Governance Framework (Model Framework) – an accountability-based framework to help chart the language and frame the discussions around harnessing AI in a responsible way. We can also see several organisations coming forward on AI governance. As examples, NEC released the “NEC Group AI and Human Rights Principles“, Google has created AI rules and objectives, and the Partnership on AI was established to study and formulate best practices on AI technologies.

 

What could be the real-world challenges around the ethical use of AI?

Progress in the adoption of AI has produced some incredible cases benefitting various industries – commerce, transportation, healthcare, agriculture, education – and offering efficiency and savings. However, AI developments are also anticipated to disrupt several legal frameworks owing to the concerns of AI implementation in high-risk areas. The challenge today is that several AI applications have been used by consumers or organisations, only for them to later realise that the project was not ethically fit. An example is the development of fully autonomous, AI-controlled weapon systems, which are drawing criticism from various nations across the globe and the UN itself.

“Before an organisation embarks on a project, it is vital for a regulation to be in place right from the beginning. This enables the vendor and the organisation to reach a common goal and understanding of what is ethical and right. With such practices in place, bias and breaches of confidentiality and ethics can be avoided,” says Ecosystm Analyst Audrey William. “Apart from working with the AI vendor and a service provider or systems integrator, it is highly recommended that the organisation consult a specialist – such as the Foundation for Responsible Robotics, Data & Society, or the AI Ethics Lab – to help look into the parameters of ethics and bias before the project deployment.”

Another challenge arises from a data protection perspective, because AI models are fed with datasets for their training and learning. This data is often obtained from usage history and data tracking that may compromise an individual’s identity. The use of this information may lead to a breach of user rights and privacy, which can leave an organisation facing legal, governance, and ethical consequences.

One other area that is often overlooked is racial and gender bias. Phone manufacturers have been criticised in the past on matters of racial and gender bias, when identification errors were found to be lowest for light-skinned males. This opened conversations on how the technology works on people of different races and genders.

San Francisco recently banned the use of facial recognition by the police and other agencies, on the grounds that the technology may pose a serious threat to civil liberties. “Implementing AI technologies such as facial recognition solutions means organisations have to ensure that there are no racial bias and discrimination issues. Any inaccuracy or glitches in the data can make the machines untrustworthy,” says William.

Given what we know about existing AI systems, we should be very concerned that breaches of humanitarian laws by this technology are more likely than not.

Could strong governance restrict the development and implementation of AI?

The disruptive potential of AI poses looming risks around ethics, transparency, and security, hence the need for greater governance. AI will be used safely only once governance and policies have been framed to govern its use.

William thinks that, “AI deployments have positive implications for creating better applications in health, autonomous driving, and smart cities, and eventually a better society. Worrying too much about regulations will impede the development of AI. A fine line has to be drawn between the development of AI and ensuring that the development does not cross the boundaries of ethics, transparency, and fairness.”

 

While AI as a technology has a way to go before it matures, at the moment it is the responsibility of both organisations and governments to strike a balance between technology development and use, and regulations and frameworks in the best interest of citizens and civil liberties.
