THE ETHICS OF ARTIFICIAL INTELLIGENCE ALGORITHMS
WHAT’S GOING ON
Artificial intelligence algorithms are becoming increasingly important in the development of product functionality.
Despite enabling greater accessibility, flexibility and personalisation, these algorithms have received little ethical scrutiny.
Recently, there have been warning signs that AI algorithms have the capacity to replicate our own unequal society and history of systemic biases.
Last month, Stanford University researchers built an AI algorithm that works as a ‘gaydar’. When given photos of gay white men and straight white men from dating sites, the algorithm could identify which man was gay more accurately than the human participants in the study.
Although the researchers say the algorithm was developed to protect gay people, the public outcry suggests it may do the opposite. Critics have labelled the study ‘flawed research’ and warn that the tool could be used to identify and persecute gay people.
The episode exposes the lack of transparent ethical guidelines in the AI age, leaving researchers to make ad hoc rules as they go.
Other examples include:
- A Google image recognition program accidentally labelled the faces of several black people as ‘gorillas’
- A LinkedIn advertising program showed a preference for male names in searches
- A Microsoft chatbot learnt from Twitter and generated multiple anti-Semitic messages
Although these incidents have been dismissed as “gaffes”, they signal a larger problem: AI systems are reproducing the systemic biases we have spent decades crusading against.
DeepMind, Google’s AI research lab, has just opened a new unit that focuses on the ethical and societal problems that could be embedded in artificial intelligence software.
The aim of the new research unit is “to help technologists put ethics into practice, and to help society anticipate and direct the impact of AI so that it works for the benefit of all”.
GROWTH MANTRA’S PREDICTION
- Future development of AI will need to be more human-centred.
- Although a few watchdog organisations such as ‘AI Now’ have recently emerged, the large-scale impact of AI will mean considerable change in the world of work. We anticipate new roles such as ‘bias detectors’ and ‘algorithm analysts’, whose job is to ensure an AI system is as unbiased as possible.
- It will be impossible to remove all forms of bias and build truly objective machines while humans are the creators. It will therefore be important to be as self-reflexive as possible in this process, and to make algorithms as transparent as they can be.
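To give a concrete sense of what a ‘bias detector’ might actually measure, here is a minimal, hypothetical sketch of one common fairness check: the demographic parity gap, i.e. the largest difference in positive-outcome rates between groups. The function name and the data are illustrative assumptions, not part of any real product.

```python
# Hypothetical sketch of one check a 'bias detector' might run:
# the demographic parity gap between groups in a model's decisions.

def demographic_parity_gap(outcomes, groups):
    """Return the largest gap in positive-outcome rates between groups.

    outcomes: list of 0/1 decisions made by a model
    groups:   list of group labels, one per decision
    """
    counts = {}
    for outcome, group in zip(outcomes, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + outcome)
    rates = [positives / total for total, positives in counts.values()]
    return max(rates) - min(rates)

# Illustrative data: a model that approves group A far more often than B
outcomes = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(demographic_parity_gap(outcomes, groups))  # gap of about 0.6
```

A gap near zero suggests the model treats groups similarly on this one metric; a large gap is a signal to investigate. No single metric proves fairness, which is exactly why the human self-reflexivity above matters.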