IJRASET Journal for Research in Applied Science and Engineering Technology
Authors: Jana Alzahid, Munira Mall, Muath Salah Al-Yalak
DOI Link: https://doi.org/10.22214/ijraset.2023.50149
Automation bias stemming from deep-rooted computer algorithms is often blamed for false results and misinterpretations; however, algorithmic biases in machine learning models are shaped by user interpretation and input, and the machine's output may therefore not correspond with the intent behind that input. This stems from the variety of information fed into machine learning models and from possible faults within the computers themselves.
I. INTRODUCTION
Automation bias is a well-documented pattern of error in human psychology: the tendency, when making decisions, to treat automated results as far more reliable than human analysis. Although automated output is correct in many situations, it is difficult to claim that automated decisions are always correct. Here, the biased view does not stem from the computer algorithm itself, but rather from how individuals interpret and use the output information.
Computer algorithms are built on complex building blocks and databases that often take years to complete, and are therefore viewed as more accurate. The argument many people raise is that algorithms themselves have no opinion, whereas humans do. This chain of difficult decisions appears in many departments and workplaces across our local community. Computer algorithms rely on many techniques and systems, such as the Decision Support System (DSS), to support the reliability of their output. A DSS is a computerized program that offers clear judgments on courses of action in organizations, helping to manage conflicts that may be present across a wide range of systems. Such systems elevate the perceived reliability of algorithms, and humans are more prone to depend on them today than ever before.
Multiple past studies have proposed a relevant theory regarding the trust that human operators place in DSS systems. Although many researchers have focused on the idea that humans often underestimate the reliability of such systems, it was concluded that the development of trust is mainly influenced by the elements of the decision systems themselves (Obermeyer, 2021).
Because of the wide variety of environments in which DSSs are used, it cannot be determined whether an operator's trust stems from a single cause; rather, it depends on the dynamic, active environment the system is built into. Despite the high capability of artificial intelligence and machine learning in detecting errors and acquiring skills, their efficiency and effectiveness vary with the factors at play in the environment in which they operate. Detection systems are characterized as systems that work well when they target somewhat vague problems and solve them with simple technical routines built on traditional databases and retrieval functions. Given these characteristics, it can be inferred that using a DSS to resolve a complex issue, one with multiple factors that contradict one another, can limit the effectiveness of the results. Using detection systems in sensitive environments such as ICUs may therefore produce false indicators that alarm medical professionals, who in turn face a difficult decision-making situation because of the automation bias phenomenon.
II. DISCUSSION
To form a clear picture of the consequences of this phenomenon, a solid understanding of the psychology behind it is necessary. An interview was therefore conducted via Zoom on 12 November 2022 with a certified psychologist working at King Fahad Military Medical Complex in Dhahran (AlSafra, 2022). The interview made clear that "automation bias" is indeed a real problem present in nearly every field, especially with the sudden surge in technology and automation. She provided examples of situations she has encountered in her profession, and of how such an error-prone field that heavily depends on human interaction still relies on technology in many respects. Detection devices used to diagnose ADHD patients with a specific subcategory showed frequent errors in her clinic, as she often noticed contradictions between what she had observed during visits and the data displayed by the computer. She mentioned that most people would doubt their own examinations, and that in order to rule out any possible error margins she would re-examine the patient through the computerized tests. Although this may be the case for this professional only, she noted that in many fields, especially STEM-based ones, having access to double-testing is a privilege, and individuals without it are often left with dissatisfaction and doubt, which is exactly what automation bias is characterized by.
Despite the satisfying results of the previous interview, it was still necessary to contact a medical professional who had faced automation bias themselves, in order to understand the complete picture and the behaviors they are confronted with. An investigation (Rose, 2021) with a private nurse, who worked as a physiotherapist in the past but now cares for an elderly patient with progressive Parkinson's disease and dementia who is heavily dependent on machines and monitors, was sufficient to understand that perspective. Her experience and knowledge made her a strong candidate for automation bias, as she is regularly exposed to sensitive, life-dependent decisions between human error and computerized-system bias. Through the interview, the struggles that come with caring for machine-dependent patients became clearer, as she mentioned having to choose between manual and battery-powered devices for basic measurements throughout the day. She expressed a preference for manual devices, as battery-powered devices tend to produce more errors. This supports the idea that doubt arises from the possibility of mistakes and biases; considering her experience with automation bias, she displayed the opposite behavior, which can be read as a reaction to the doubt and dissatisfaction once directed at her own examinations but now revealed to lie in the machine itself.
A feedback loop is a system that, as the name suggests, repeats an action over and over again. It is a method that programs use to generate uniform responses to the users of a specific piece of software. Feedback loops use the input the system was originally built on to generate all future outputs. This signals a possible threat to the diversity of data, and a possibly biased result stemming from this limited dataset. There are two main types of feedback loops: positive and negative. Negative feedback loops are highly regulated and are usually preferred for constrained programs with specific boundaries that cannot be crossed. They are not found only in complex code; they also appear in simple instruments and systems such as thermostats and controllers. Unlike negative feedback loops, positive feedback loops are less restricted, as they are open to additions, extensions, and changes to the original database. Positive feedback loops are dynamic, actively reacting and responding to the challenges and changes in their virtual environment, as illustrated in the sketch below.
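To make the amplification effect concrete, the following is a minimal sketch, not taken from any cited system, of a positive (closed) feedback loop in which the content a user is shown next is drawn in proportion to what they have already engaged with; the topic names, weights, and update rule are assumptions for illustration only.

```python
import random

# Minimal sketch (assumption, not from any cited system): a positive, closed
# feedback loop in which what a user is shown next is drawn in proportion to
# what they have already engaged with, so small initial leanings get amplified.

def simulate_feedback_loop(rounds=200, seed=0):
    rng = random.Random(seed)
    topics = ["sports", "politics", "music", "science"]  # hypothetical topics
    weights = {t: 1.0 for t in topics}  # start from a nearly uniform profile

    for _ in range(rounds):
        total = sum(weights.values())
        # The platform recommends in proportion to past engagement (the loop).
        shown = rng.choices(topics, weights=[weights[t] / total for t in topics])[0]
        # Each impression feeds straight back into the profile it came from.
        weights[shown] += 1.0

    return weights

if __name__ == "__main__":
    print(simulate_feedback_loop())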
III. RESULTS
From the previous insights, we can infer that almost all social media systems feed on positive feedback loops. Although these are positive feedback loops, they are still classified as closed loops: information can be inserted into the platform's database, where it remains and exerts a relatively large effect on the type of output produced. Any outliers that make their way into a platform's dataset are not going anywhere. Many data scientists have therefore concluded that social media platforms are prone to producing biased information and conclusions that harm the user. Developers do try to make their apps and programs as inclusive as they can, but programs begin producing biased results the moment they are opened to the public. The loops become vulnerable because developers offer the false assurance that these closed loops can be fed without yielding a biased product; as long as developers stick to closed loops, bias is always present, since the same information circulates around and around. The benefits of an open-ended system are valuable, as more individuals have a space to express their ideas and beliefs, but it still has its drawbacks. What is really interesting is that open-ended platforms were created to embrace collaboration and unity despite differences between individuals and groups in communities, yet social media often does the opposite by amplifying discriminatory fundamentals and principles. For example, a popular platform among "woke" individuals is Twitter, which is infamously known for the heated arguments that take place on it. As part of a study conducted at Indiana University (Kroges, 2016), five neutral bots were released into the wild on Twitter, and after four months of feeding on and engaging with local trends and conversations, the bots were examined.
Upon examination, it was found that these bots were no longer politically neutral; although they had engaged only in "neutral spaces" on Twitter, they were all inclined toward the political right. These findings suggest that truly neutral spaces on social media do not exist; bias is always present in a closed loop.
Collaborative Filtering (CF) is a type of recommender system that social media platforms regularly adopt. It uses previously gathered data about users and the interactions between them to recommend personalized content. It is considered one of the most popular tools because most social media apps are free and can only generate money through adverts; and how do these platforms create such successful adverts? By collecting data that allows them to recommend relevant products to consumers. CF comes into play whenever we are faced with the choices and pathways offered by the platforms we are addicted to, including the many checks that apps like TikTok and Instagram run to see whether a user is interested in certain content, as sketched below.
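As an illustration of the underlying mechanism, here is a minimal, assumed sketch of user-based collaborative filtering over a toy interaction matrix; the matrix values, similarity measure, and ranking rule are simplifications for illustration, not a description of any particular platform's recommender.

```python
import numpy as np

# Toy interaction matrix (assumed data): rows are users, columns are items,
# entries are engagement scores; 0 means the user never interacted with the item.
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
    [1, 0, 4, 5],
], dtype=float)

def cosine_sim(a, b):
    """Cosine similarity between two user vectors (0 if either is all zeros)."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def recommend(user, k=2):
    # Weight every other user's engagement by how similar they are to this user.
    sims = np.array([cosine_sim(ratings[user], ratings[u]) for u in range(len(ratings))])
    sims[user] = 0.0                      # ignore the user's own row
    scores = sims @ ratings               # predicted interest per item
    scores[ratings[user] > 0] = -np.inf   # only recommend unseen items
    ranked = np.argsort(scores)[::-1]
    return [int(i) for i in ranked if np.isfinite(scores[i])][:k]

print(recommend(0))  # items user 0 has not seen, ranked by similar users' engagement
```

Because the recommendations are computed only from engagement the platform has already recorded, this is exactly the kind of closed loop described earlier: what users see next is a function of what similar users have already seen.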
According to a survey (Alzahid, 2022) conducted on a random sample of 108 frequent social media users based in the Eastern Province of Saudi Arabia, the vast majority of respondents consider the ads they see on social media relevant to their preferences and interests. However, the majority is also uncomfortable with the idea that social media platforms hold a large database of user data as a means of generating profit through third-party sponsors. After analysing the data, it is clear that common ground in this long-running debate is nowhere to be found, as users settle on one extreme and companies apply strategies that align with the opposite extreme.
Stopping the investigation at this point would be insufficient, as it would not answer the questions and concerns that have been raised. Having collected enough responses to establish the overall stance that community members take toward the systems and strategies social platforms use, we must provide a legitimate judgment on whether such strategies form a real threat to personal privacy and security. I reached out to a local tech startup based in Saudi Arabia, "Future Tech Solutions", to address one of the main concerns people have: whether open feedback loops, like the ones used in social media platforms, are harmful or useful to both the consumer and the producer. Although the startup itself is somewhat biased, since most of its profits come from developing projects built on feedback loops, it provided an interesting view of the positive aspects of open feedback systems. Open feedback systems are more sincere and personalized for each user; they are a tailored version of the internet, meeting everyone's interests and likes quite literally. Hitting a like or follow button adds to your personal profile at major social media companies, but it benefits both the platform, by monetizing published content, and the user, by surfacing opportunities and products they are probably interested in. It is a win-win situation, and, after all, sharing personal data and cookies might not be regarded as a negative consequence of using social media.
Lack of diversity is a problem present in many fields, including technology. It is not a "new surprise" to the world, as even the most prestigious tech companies have skewed diversity figures (Chakravorti, 2020). Looking back at the 2020 diversity report published by Google, only about 35% of its employees are women, 6.6% identify as Latinx, and only 5.5% identify as Black. These statistics are alarming, as Google is regarded as one of the most diverse workplaces and has appeared to champion inclusion and diversity throughout the last few decades.
When it comes to local companies and tech start-ups, males, once again, make up the majority. These start-ups tend to hire university graduates from three major American hubs: New York, Massachusetts, and Los Angeles, which are home to high-profile academic institutions. The problem is not the justified preference that successful ventures have for high-profile individuals, but rather the socio-economic background these individuals come from: the most represented group among Ivy League and top-20 university students tends to come from far more privileged and wealthier origins. We cannot completely disregard the contributions of these passionate graduates, but we can still be more inclusive toward those who have the ability to make a change yet lack the well-rounded profile to reach positions that would allow them to expand their potential.
The largest gender gaps in entrepreneurship today are in the Information Technology sector, where aspiring male entrepreneurs are approximately twice as likely as aspiring female entrepreneurs to lead a venture. The current gender gap in the tech community must be bridged, and more women involved in computer science, whether through study or work, should consider a career in tech entrepreneurship (Wilson, 2022).
This is critical for developing technology that suits a wider population, meets more needs, and reaches more potential consumers. In fact, switching to a more diverse workspace within start-ups sets these entrepreneurs up for success, as they produce a product that will hopefully appeal to a greater proportion of the community. The global community is a fluid mosaic composed of people from different tax brackets, cultures, and experiences, and a one-size-fits-all product is not the way to succeed in the technological field.
According to a study presented at the Technical Symposium on Computer Science Education by a group of female engineers, the root cause of the underrepresentation of women in tech seems to be "the fear they [women] do not belong or are not 'smart enough', resulting in women switching to a different major" (Rheingans, 2018). This subliminal fear arises from stereotypes and gender biases that have circulated in co-ed spaces ever since they were established. Women are labeled as "try-hards" when they attempt to excel in their subjects, yet they are dismissed as "diversity admits" the moment they fall off an academic pedestal that no male is held to. To produce more inclusive products, change has to start in the classroom, as education is the foundation of success in tech, and so far only insignificant progress has been made.
In a controlled experiment designed to detect gender bias, participants considered two applicants to an engineering program and decided which applicant was a better fit. The applicants' academic profiles were equivalent, differing only in name and gender: one was named Manuel and the other María. Participants were asked to assess the student's mathematical ability and advise him or her on whether or not to pursue a career in engineering. As expected, participants tended to favor the male applicant, as men are often stereotyped as having stronger math and technical skills and therefore as a better fit for the major. If society starts discriminating against women and hindering their dreams this early in their careers, no substantial solution will be reached.
Another major issue in technology is racial diversity. Only 2.1% of the tech jobs at Facebook are held by people of color, and the typical workforce at "Big Tech" companies is dominated by Asian and White employees. Prejudice against people of color is also common in the tech industry: 42% of Hispanic workers and 62% of Black workers reported having faced discrimination at work (Wooll, 2021).
This can include receiving less support from senior leaders, being passed over for growth and development opportunities, or being underpaid compared with a White or Asian coworker doing the same job. Race-based discrimination is also widely present at the board level: workplaces in general are becoming better at inclusivity, but board rooms are hardly changing at all. Companies can go on and on about having a diverse workplace, but nothing will change in terms of real inclusivity unless women are given power and leadership positions. Hiring more women and people of color is useless unless it genuinely changes the methodology of product lines and discriminatory policies.
To conclude, the tech industry is notably one of the fastest growing industries, changing the world in every aspect possible; however, it remains stuck in old stereotypes and biases that hinder its progression. Automation and machine learning are by no means terrible or useless tools, despite their evident biases. When discussing the validity of an opinion or result produced by an automated program, it is important that we recognize the consequences that unfold once an open, regulated space is created to feed these powerful machines and programs. After all, programmers and engineers work on creating products that are worthy of sharing and use.
Unfortunately, the main aim of such individuals is to generate revenue, and no engineer will design a program that will be received poorly by the local or global community; instead, they design an open machine that listens to what consumers themselves ask for. This scene is present locally in Saudi Arabia: Ahmed Alzahid, an engineer and entrepreneur who works in the upstream reservoirs division at Aramco, denounces the internal and external discouragement that women face at such firms, while still emphasizing the positive measures that the women's empowerment movement in Saudi Arabia has put in place (Alzahid, 2023).
Biased solutions are attractive and unattractive at the same time: a more inclusive product line seems like the moral choice, yet judging from their biased inputs into various systems and media, consumers themselves do not appear ready for such lines. This challenge takes us back to the 1990s, when mobile companies produced touch-screen phones that sat on store shelves for months on end because consumers were not ready for them.
Racial and gender discriminatory practices have been around far longer than tech startups and their products, and this alone marks a great challenge for programmers and change advocates to overcome.
[1] “Data Versioning & Feedback Loop Best Practices.” Picsellia, https://www.picsellia.com/post/feedback-loops-and-versioning-in-computer-vision.
[2] Reardon, Jayne. “Can We Avoid the Feedback Loop of Social Media?” 2Civility, 5 Oct. 2021, https://www.2civility.org/avoid-feedback-loop-social-media/.
[3] Mansoury, Masoud, et al. “Feedback Loop and Bias Amplification in Recommender Systems.” arXiv, 25 July 2020, https://arxiv.org/abs/2007.13019.
[4] “Decision Support System (DSS).” Corporate Finance Institute, 27 Oct. 2022, https://corporatefinanceinstitute.com/resources/management/decision-support-system-dss/.
[5] Jacob, Daniel. “Cross-Fitting and Averaging for Machine Learning Estimation of Heterogeneous Treatment Effects.” arXiv, 26 Aug. 2020, https://arxiv.org/abs/2007.02852.
[6] Obermeyer, Ziad, et al. “Algorithmic Bias Playbook.” The University of Chicago Booth School of Business, 2021, https://www.chicagobooth.edu/research/center-for-applied-artificial-intelligence/research/algorithmic-bias/playbook.
[7] “Similarities and Differences between Human–Human and Human–Automation Trust: An Integrative Review.” Taylor & Francis, https://www.tandfonline.com/doi/full/10.1080/14639220500337708.
[8] Google. “Google's Diversity Report.” Google Inc., 2020, https://kstatic.googleusercontent.com/files/25badfc6b6d1b33f3b87372ff7545d79261520d821e6ee9a82c4ab2de42a01216be2156bc5a60ae3337ffe7176d90b8b2b3000891ac6e516a650ecebf0e3f866. Accessed 15 Mar. 2023.
[9] Chakravorti, Bhaskar. “To Increase Diversity, U.S. Tech Companies Need to Follow the Talent.” Harvard Business Review, 2020, https://hbr.org/2020/12/to-increase-diversity-u-s-tech-companies-need-to-follow-the-talent. Accessed 15 Mar. 2023.
[10] Del Pozo-García, E. “Whether Your Name Is Manuel or María Matters: Gender Biases in Recommendations to Study Engineering.” Taylor & Francis, https://www.tandfonline.com/doi/abs/10.1080/09589236.2020.1805303. Accessed 15 Mar. 2023.
[11] Rheingans, Penny, et al. “A Model for Increasing Gender Diversity in Technology.” Proceedings of the 49th ACM Technical Symposium on Computer Science Education, Feb. 2018, https://dl.acm.org/doi/abs/10.1145/3159450.3159533. Accessed 15 Mar. 2023.
[12] Sings, M. “Diversity in Tech: Closing the Gap in the Modern Industry.” BetterUp, 2018, https://www.betterup.com/blog/diversity-in-tech. Accessed 15 Mar. 2023.
Copyright © 2023 Jana Alzahid, Munira Mall, Muath Salah Al-Yalak. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET50149
Publish Date : 2023-04-06
ISSN : 2321-9653
Publisher Name : IJRASET