
Ethics, Artificial Intelligence & Human Rights

May 7th 2019

Although Artificial Intelligence (AI) has existed for over 70 years, it was previously the domain of universities and tech companies. Today AI is regarded as the most powerful transformation to affect business and society since the invention of electricity, so it is now the domain of the Board and Executive team. Frameworks, governance models, organisational design, and human rights and ethics considerations for AI are all topics that leadership should currently be discussing.

AI refers to any technique that enables computers to mimic human intelligence, including logic, if-then rules, decision trees and machine learning. The main types of AI that leaders should be aware of include:

  1. Machine Learning – used for Modelling: Predictive, prescriptive, fraud, recommendations
  2. Computer Vision – used for Recognition: Image analysis, facial detection, sensors
  3. Conversational Platforms – used for Engagement: Virtual Assistants, chatbots, translations
  4. Autonomous Machines – used for Motion: Self-driving cars, drones, robotic delivery

According to analysts, AI is the fastest growing tech sector in the world (50% CAGR). Current investment of $7.3bn per annum is predicted to surge twelve-fold to $89bn over the next five years (JP Morgan, 2018). Forrester (2017) states that AI-driven companies will take $1.2 trillion from competitors by 2020, and Gartner notes that AI will generate $2.9 trillion in business value and recover 6.2 billion hours of worker productivity by 2021.

Although AI is clearly one of the most powerful transformational forces affecting humankind, there are surprisingly few regulations, laws or guiding frameworks to set up AI to do good. Most of us leading the discussions and strategies around AI are about 50/50 on whether AI will advance humankind, by enhancing business performance or the physical, mental and emotional wellbeing of humans, or whether it will power negative outcomes, such as automated weapons and biased decision making.

Hence, there is a tremendously important role for leaders to play: making sure that AI is not coded with bias and does not have 'bad' algorithms; that it is used for 'good' and does no harm; and that the people who will be negatively impacted by AI, such as through job loss from automation (at this stage predicted to be mainly women and minority groups), are well protected.

As concerning as it is that there are as yet no substantial ethics and human rights frameworks in common use, this also provides an opportunity for progressive leaders and organisations to take a stand and collaborate to create AI for Good, and to implement frameworks within organisations that guide the development of AI so that it improves human experience as well as business performance.

In fact, the Australian government has committed $29 million to developing a Responsible Innovation Organisation to begin creating frameworks for emerging technologies such as AI. In the US, the founders of eBay, LinkedIn, the Knight Foundation, Harvard and MIT, among others, formed the $27 million Ethics and Governance of Artificial Intelligence fund to help solve AI challenges.

Topics you may consider when thinking about AI in your organisation include:

  • What is our company’s AI strategy?
  • Who will be the Head of AI and who will co-ordinate all types of AI being introduced?
  • How will you ensure the algorithms that are being used by your organisation are accurate and the right models for your business?
  • Who has trained or is training the algorithms?
  • Are the data sets being used to train the algorithms clean, and have they been checked for bias?
  • How will you ensure that the algorithms have not been trained with gender, diversity or other bias?
  • What are your company’s security requirements around AI?
  • Given AI will replace or augment human jobs, what will be your future organisational structure that will include Digital Labour positions as well as HAVA (Human Assisted Virtual Assistants) or HAMA (Human Assisted Machine Assisted) roles?
  • How do you develop a culture that embraces Human-Machine workplaces in order for the technology to amplify legacy system capability and augment human capability?
  • How will your company use AI for good?
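One of the governance questions above, whether training data has been checked for bias, can be made concrete with a simple audit. The sketch below is a minimal illustration in plain Python: the column names ("gender", "loan_approved") and the four-fifths warning threshold are illustrative assumptions, not something prescribed by this article.

```python
# Minimal sketch of a training-data bias audit.
# Assumes row-style records with hypothetical columns
# "gender" (group) and "loan_approved" (outcome).
from collections import defaultdict

def approval_rates(rows, group_col, outcome_col):
    """Return the positive-outcome rate for each group in the dataset."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for row in rows:
        group = row[group_col]
        counts[group][1] += 1
        if row[outcome_col]:
            counts[group][0] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def disparate_impact(rates):
    """Ratio of lowest to highest group rate.

    A ratio well below 0.8 (the commonly cited 'four-fifths rule')
    is a warning sign that the data may encode bias.
    """
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi if hi else 1.0

# Illustrative data only.
rows = [
    {"gender": "F", "loan_approved": True},
    {"gender": "F", "loan_approved": False},
    {"gender": "M", "loan_approved": True},
    {"gender": "M", "loan_approved": True},
]
rates = approval_rates(rows, "gender", "loan_approved")
print(disparate_impact(rates))  # 0.5 here, well below the 0.8 warning level
```

A check like this is only a starting point; it flags imbalance in historical outcomes but does not by itself explain or correct the cause.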

So ultimately the question to be answered in the field of AI is, “How do we ensure that the algorithms we code and the machines we train do not perpetuate and amplify the same human biases that plague humankind?”

It’s time to answer this.

For further information on Dr. Catriona Wallace, or to enquire about making a booking for your next conference or event, please contact the friendly ODE team.

Asia/Pacific

  • +61 2 9818 5199

United States

  • +1 877 950 5633