Many people are concerned that artificial intelligence (AI) may replace human workers at some point in the future, perhaps within the next decade or two. This question remains debatable in part because multiple studies have reached findings that vary from one to another. The term 'artificial intelligence' was coined in 1955 by computer scientist John McCarthy, who organized the 1956 Dartmouth workshop on the subject with colleagues including Marvin Minsky; early pioneers such as Herbert A. Simon and Allen Newell also attended. Artificial intelligence refers to the development of computer systems able to perform tasks that traditionally require human intelligence, including language translation, speech recognition, and visual perception. AI technology has existed for many decades and continues to grow today. Although the chances of artificial intelligence replacing human employees are quite slim, if not very unlikely, and although AI technology offers many benefits in the workplace, there are several drawbacks both employers and employees should take into consideration. This leads to the following question: what would the overall consequences be if artificial intelligence were to replace human employees and take over the workforce completely?
The similarities between two of the sources lie in their main argumentative standpoints regarding the use of AI technology in the workplace. In his article "How to Be 'Smart' About Using Artificial Intelligence," Kevin D. Finley highlights the possibility that using AI technology can lead to workplace discrimination against employees. In another journal article, "Artificial Intelligence and the Challenges of Workplace Discrimination and Privacy," Pauline T. Kim and Matthew T. Bodie address similar issues concerning discriminatory practices of AI as well as privacy violations. Finley warns of cases in which AI is used unjustly to target job postings, hire candidates, or assist in recruiting decisions, where systems purposefully blackball or unfavorably affect protected groups (22). According to Kim and Bodie, since residence correlates closely with race in a number of cities, an algorithm that sorts job applicants by zip code could discriminate against minority groups. In a 2023 journal article, "Future Regulation of AI and Employment Law Considerations," Karen Farrell discusses regulatory practices involving artificial intelligence in the workplace. By contrast, Charlotte Stix examines the controversial term "trustworthy AI" in her 2022 article "Artificial Intelligence by Any Other Name: A Brief History of the Conceptualization of 'Trustworthy' Artificial Intelligence." The two authors share a few argumentative points because both discuss the negative aspects of artificial intelligence in terms of reliability. Natali Belusic and Jasminka Samardzija highlight certain positive aspects of AI in their article "Artificial Intelligence and Employment: Less Need for Blue-Collar or White-Collar Jobs?"
AI has a strong influence on lower-skilled employees, who generally do not work in settings susceptible to technological change (Belusic & Samardzija, 82). Chen Guang also discusses the positive side of artificial intelligence and its use in blue-collar employment. According to the journal article "Development of Migrant Workers in Construction Based on Machine Learning and Artificial Intelligence" (2021), mobile phones automatically create mobile databases using machine-learning classification algorithms, building an action-recognition model for intelligent reconnaissance and tracking of staff operations (Guang, 6631). According to Frances Flanagan and Michael Walker, chatbots should be deliberately programmed in a way that situates 'facts' inside narratives of power and that highlights the potential power flowing from labor 'expertise,' as well as from the collective networks of active communities of organized workers (171). Both authors focus primarily on the disadvantages of AI. Sonal Pandey, however, discusses the advantages of artificial intelligence. According to her article "Intelligent Collaboration of AI and Human Workforce," AI technology would never eliminate the human workforce; instead, it may help bring about improvement in the workplace (Pandey, 20). Another journal article, "Artificial Intelligence and the Future of Work," discusses the sociology of expectations (SE) relating to AI technology. According to this source, "Expectations concerning the diffusion of a technology often neglect the fact that cultural change can take place, whilst the expectations themselves often reflect current cultural beliefs" (Vicsek, 6). Guanglu Xu and Ming Xue take a different standpoint regarding the risks of AI technology.
Workers may fear unemployment and perceive their career-development goals as threatened if organizations begin using AI technology to replace certain jobs (Xu & Xue).
In his article, Finley highlights five best practices employers can follow to manage risk when using AI tools: (1) knowing the data; (2) disclosing the topics and methodology; (3) considering a bias audit; (4) implementing human oversight; and (5) reviewing vendor agreements. He also notes that the NLRB general counsel proposed a plan to "crack down on electronic monitoring in the workplace based upon concerns of infringement upon employees' rights to engage in protected concerted activity" (Finley, 23). If successful, this new framework could slow the growth of "smart" workplaces across America. In essence, the goal is to restrict electronic monitoring in order to protect employees' right to engage in such activities within or outside the workplace. Farrell highlights one key disadvantage of AI on the job: "potential bias in algorithmic decision-making, rooted in the adaptivity and autonomy of AI, may result in discriminatory outcomes" (Farrell, 54). Such bias in AI software raises concerns over possible violations of both the Equality Act 2010 and the Human Rights Act 1998. Kim and Bodie likewise criticize the use of artificial intelligence in employment settings. For instance, "if an employer is using a biased algorithm because it wants to screen out members of a protected group, that is clearly a form of intentional discrimination prohibited under disparate treatment theory" (Kim & Bodie). Disparate treatment theory bans unfavorable decisions made on the basis of race, color, sex, religion, or another protected class. Belusic and Samardzija argue that workers need to adapt to certain changes when AI is used in the workplace. When artificial intelligence began taking over mechanical jobs, people became more focused on thinking jobs.
According to the authors, now that artificial intelligence is successfully taking over thinking jobs, "people tend to find feeling jobs as artificial intelligence is still not developed enough to complete these" (Belusic & Samardzija, 84). Research has shown that certain jobs would focus on feelings and that workers would be "people-oriented" rather than data-oriented. Chen Guang conducted research on AI technology in construction jobs. In terms of safety on the job, unsafe physical conditions and unsafe human behavior, together with their interaction, are the direct causes of safety incidents; and because the number of temporary structures and facilities is large, conditions on the ground change significantly. On top of that, physical insecurity in the working environment occurs throughout the system. To a large extent, "construction insecurity is also caused by the insecurity of people, especially managers, such as unsafe construction schemes, unreasonable safety management methods or safety measures" (Guang, 6631). In essence, the purpose of incorporating AI technology in the construction industry is to improve safety levels for its workers. Frances Flanagan and Michael Walker address the problem of employees giving sensitive data to platforms like Facebook and/or third parties. Far from being worker-owned or worker-controlled, "information on Facebook was subjected to surveillance, advertising and was non-portable and vulnerable to being shut down or modified as a consequence of unilateral changes of rules and/or functionality" (Flanagan & Walker, 164). This suggests that employees should be very careful about where they share their information, because sensitive data should never be entrusted to insecure platforms. Sonal Pandey addresses the perspective of individuals who doubt that jobs for human workers will remain as automation grows.
According to Pandey, "the higher the organizational degree and focus on workforce and technology collaborations, the higher the productivity would be" (22). For instance, production increased by 85% at a BMW factory thanks to close collaboration between human workers and machines, according to the MGI Report.
Lilla Vicsek discusses agency, an analytical component of fictional expectations. According to her argument, "agency is awarded to AI technology from both a Positive Effects and Negative Effects perspective, whilst reduced agency is often only attributed to humans, governments, or other organizations (in relation to getting ready for responding to changes caused by technology)" (Vicsek, 8). The author is pointing out the difference between the agency attributed to artificial intelligence and the agency attributed to people, governments, or other entities. Charlotte Stix addresses the meaning behind the term "trustworthy AI": "In order for an AI system to count as 'trustworthy,' (1) it must be lawful; that is, adhering to all legal obligations which are binding and required at that time, (2) it should be ethical; that is, adhering to and fulfilling all ethical key requirements that have been put forward in the Ethics Guidelines for Trustworthy AI, and (3) it should be robust, both from a technical and a social perspective" (Stix). In its final form, so-called "trustworthy" AI is thus defined by these three components. Lastly, research has produced descriptive statistics and correlations regarding AI. Researchers reported that "AI application was not significantly correlated with the perception of unemployment risk" (Xu & Xue, 5). This result may reflect the fact that AI application has both positive and negative effects on employees. Nevertheless, risks leading to unemployment should never be taken lightly.
Karen Farrell covers the legal risks involved in using artificial intelligence in the employment sector. In her view, considering the risks associated with AI may help reduce the likelihood of workplace bias and/or discrimination. Kevin Finley holds a similar view: he suggests that employers work with their counsel to ensure their policies and practices are up to date when it comes to using AI. Pauline T. Kim and Matthew T. Bodie argue that AI should be regulated prior to its use in the workplace, so that future employees would not have to worry about facing unfair treatment.
Artificial intelligence still has a long way to go in terms of reliability. Although AI can be beneficial for performing certain tasks in the workplace, the technology comes with many drawbacks, as discussed earlier. More research still needs to be conducted, because AI is, in fact, growing and changing almost every day. Readers should take these issues with AI technology seriously, and conduct research of their own, before pursuing a future career.
Works Cited

Belušić, Natali, and Jasminka Samardžija. “Artificial Intelligence and Employment: Less Need for Blue-Collar or White-Collar Jobs?” FEB Zagreb International Odyssey Conference on Economics & Business, vol. 5, no. 1, June 2023, pp. 81–90. EBSCOhost, research.ebsco.com/linkprocessor/plink?id=3d5521bf-92e9-3505-b287-8395f3e8ac1b.
Farrell, Karen. “Future Regulation of AI and Employment Law Considerations.” Employee Relations Law Journal, vol. 49, no. 2, Sept. 2023, pp. 52–55. EBSCOhost, research.ebsco.com/linkprocessor/plink?id=ea68e311-f319-3452-831e-d0482eae4cd7.
Finley, Kevin D. “How to Be ‘Smart’ About Using Artificial Intelligence in the Workplace.” Employee Relations Law Journal, vol. 49, no. 1, June 2023, pp. 21–24. EBSCOhost, research.ebsco.com/linkprocessor/plink?id=058763b5-d0a3-3687-b199-fccd36d48e13.
Flanagan, Frances, and Michael Walker. “How Can Unions Use Artificial Intelligence to Build Power? The Use of AI Chatbots for Labour Organising in the US and Australia.” New Technology, Work & Employment, vol. 36, no. 2, July 2021, pp. 159–76. EBSCOhost, https://doi-org.ezproxy.solano.edu/10.1111/ntwe.12178.
Guang, Chen. “Development of Migrant Workers in Construction Based on Machine Learning and Artificial Intelligence Technology.” Journal of Intelligent & Fuzzy Systems, vol. 40, no. 4, Apr. 2021, pp. 6629–40. EBSCOhost, https://doi-org.ezproxy.solano.edu/10.3233/JIFS-189499.
Kim, Pauline T., and Matthew T. Bodie. “Artificial Intelligence and the Challenges of Workplace Discrimination and Privacy.” ABA Journal of Labor & Employment Law, vol. 35, no. 2, June 2021, pp. 289–315. EBSCOhost, research.ebsco.com/linkprocessor/plink?id=a64a5a1d-0b01-3922-b823-ab73e42c2045.
Pandey, Sonal. “Intelligent Collaboration of AI and Human Workforce.” Aweshkar Research Journal, vol. 27, no. 2, Sept. 2020, pp. 20–26. EBSCOhost, search.ebscohost.com/login.aspx?direct=true&AuthType=ip,cpid,uid&custid=s4302453&db=bth&AN=159373200&site=ehost-live.
Stix, Charlotte. “Artificial Intelligence by Any Other Name: A Brief History of the Conceptualization of ‘Trustworthy’ Artificial Intelligence.” Discover Artificial Intelligence, vol. 2, no. 1, 2022, p. 26. ProQuest, http://ezproxy.solano.edu/login?url=https://www.proquest.com/scholarly-journals/artificial-intelligence-any-other-name-brief/docview/2756518018/se-2, doi:https://doi.org/10.1007/s44163-022-00041-5.
Vicsek, Lilla. "Artificial Intelligence and the Future of Work – Lessons from the Sociology of Expectations." The International Journal of Sociology and Social Policy, vol. 41, no. 7, 2021, pp. 842-861. ProQuest, http://ezproxy.solano.edu/login?url=https://www.proquest.com/scholarly-journals/artificial-intelligence-future-work-lessons/docview/2543921315/se-2, doi:https://doi.org/10.1108/IJSSP-05-2020-0174.
Xu, Guanglu, and Ming Xue. “Unemployment Risk Perception and Knowledge Hiding under the Disruption of Artificial Intelligence Transformation.” Social Behavior & Personality: An International Journal, vol. 51, no. 2, Feb. 2023, pp. 1–12. EBSCOhost, https://doi-org.ezproxy.solano.edu/10.2224/sbp.12106.