With the rise of big data, companies have shifted their focus toward automation and data-driven decision-making across their organizations. Automating tasks can be a cost-saving benefit for many companies, as it increases efficiency and saves time on manual work that a machine can do. While the intention is usually to improve business outcomes, businesses are experiencing unforeseen consequences in some of their AI applications, particularly due to poor upfront research design and biased datasets. And while many AI applications are built with good intentions, many are not. Regardless of the reasoning, the tech world finds itself questioning the morality of AI use cases more and more.
Below are explanations surrounding ethics and the use of AI.
Credit: University of Buffalo Artificial Intelligence LibGuide
Wikipedia defines algorithmic bias as "systematic and repeatable errors that create unfair outcomes, such as privileging one arbitrary group of users over others."
Algorithmic bias can present itself in many ways. One example, provided by the Brookings Institution, is bias in online recruitment tools. Online retailer Amazon, whose global workforce is 60 percent male and where men hold 74 percent of the company’s managerial positions, recently discontinued use of a recruiting algorithm after discovering gender bias. The data that engineers used to create the algorithm were derived from the resumes submitted to Amazon over a 10-year period, which were predominantly from white males. The algorithm was taught to recognize word patterns in the resumes, rather than relevant skill sets, and these data were benchmarked against the company’s predominantly male engineering department to determine an applicant’s fit. As a result, the AI software penalized any resume that contained the word “women’s” in the text and downgraded the resumes of women who attended women’s colleges, resulting in gender bias.
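The mechanism described above can be sketched in a few lines. The example below is a hypothetical, heavily simplified illustration (not Amazon's actual system): a word-frequency model is "trained" on invented resumes labeled by past hiring outcomes, and because the historical data skews male, a term like "women's" ends up with a negative score even though it says nothing about skill.

```python
from collections import Counter
from math import log

# Hypothetical training data standing in for a decade of resumes labeled
# by historical hiring outcomes. The skew in past decisions, not any real
# signal about ability, is what the model will learn.
hired = [
    "chess club captain python developer",
    "python developer chess club member",
    "software engineer python java",
]
rejected = [
    "women's chess club captain python developer",
    "women's college graduate software engineer",
]

def word_counts(docs):
    counts = Counter()
    for doc in docs:
        counts.update(doc.split())
    return counts

hired_counts = word_counts(hired)
rejected_counts = word_counts(rejected)

def log_odds(word):
    # Log-odds of a word appearing in "hired" vs "rejected" resumes,
    # with add-one smoothing so unseen words don't cause division by zero.
    p_hired = (hired_counts[word] + 1) / (sum(hired_counts.values()) + 2)
    p_rejected = (rejected_counts[word] + 1) / (sum(rejected_counts.values()) + 2)
    return log(p_hired / p_rejected)

print(log_odds("python"))    # positive: correlated with being hired
print(log_odds("women's"))   # negative: the model penalizes the word
```

Nothing in the code tells the model that "women's" is irrelevant to job fit; it simply reproduces the correlation baked into the historical labels, which is exactly how a biased dataset produces a biased algorithm.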
Chatbots are trained through Natural Language Processing (NLP). IBM defines Natural Language Processing as
"A branch of computer science—and more specifically, the branch of artificial intelligence or AI—concerned with giving computers the ability to understand text and spoken words in much the same way human beings can. NLP combines computational linguistics—rule-based modeling of human language—with statistical, machine learning, and deep learning models. Together, these technologies enable computers to process human language in the form of text or voice data and to ‘understand’ its full meaning, complete with the speaker or writer’s intent and sentiment."
This means that the more conversations a chatbot processes, the more it learns and the smarter it gets. Training AI bots is a practice that has the potential to be very exploitative.
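To make "the more conversations it processes, the more it learns" concrete, here is a toy sketch (far simpler than the statistical and deep learning models IBM describes): a bigram model counts which word follows which in a handful of invented chat lines, then predicts the most frequent continuation. The corpus and predictions are illustrative assumptions, not real chatbot training data.

```python
from collections import defaultdict, Counter

# Hypothetical toy corpus standing in for chat transcripts.
conversations = [
    "how are you today",
    "how are you doing",
    "how is the weather today",
]

# "Train" a bigram model: for each word, count the words that follow it.
follows = defaultdict(Counter)
for line in conversations:
    words = line.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def predict(word):
    """Return the continuation seen most often in training, if any."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict("how"))  # "are" (seen twice, vs "is" once)
print(predict("are"))  # "you"
```

Every additional conversation added to the corpus refines the counts, which is the (vastly scaled-up) reason companies want large volumes of real human dialogue to train on.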
An investigative report by Time revealed that OpenAI, the company behind ChatGPT, outsourced work to Kenyan laborers who earned less than $2 per hour. OpenAI is one of the most valuable AI companies; it has been in talks with investors to raise funds at a $29 billion valuation.
Another element of the "ethical grey area" of training AI chatbots is the use of public websites, like Reddit, to analyze and interpret conversations. The New York Times said, "In recent years, Reddit’s array of chats also have been a free teaching aid for companies like Google, OpenAI and Microsoft. Those companies are using Reddit’s conversations in the development of giant artificial intelligence systems that many in Silicon Valley think are on their way to becoming the tech industry’s next big thing."
Several elements of facial recognition technology are widely viewed as unethical due to their potential for abuse and misuse.
The development of artificial intelligence systems involves combing through existing works such as books, music, movies, and more. Some artists, actors, and writers have protested that their work is being used to train AI systems without permission or compensation.