Don’t Sacrifice your HI for AI
According to Forbes, the Artificial Intelligence (AI) market is projected to reach a staggering $1,339 billion by 2030, experiencing substantial growth from its estimated $214 billion revenue in 2024. The primary reason behind this rapid adoption of AI, especially GenAI models, lies in its phenomenal ability to combine automation, intelligence, and prediction to improve daily life and business operations. It addresses increasing demands for faster results, better data analysis, and productivity. Yet, alongside its undeniable benefits, AI presents a spectrum of risks and concerns that must be addressed with vigilance. The desired relationship between Human Intelligence (HI), the creator of AI, and AI, is of mutualism where both parties benefit from each other, and not of parasitism where one thrives at the cost of the other. Like the mutual funds market, AI solutions should also be accompanied by a disclaimer statement, maybe something like, “Irresponsible usage of AI is subject to diverse risks, please be sufficiently aware before using.”
GenAI
A category of AI that generates content based on natural language prompts in the form of text, images, or audio is known as Generative AI. All widely used large language models (LLMs) like ChatGPT, Gemini, Claude, Copilot, Llama, etc. fall into this category of AI. These models are trained on vast datasets using self-supervised learning techniques to operate on the principles of deep learning, leveraging neural network architectures to process and understand human languages. They are enabled to engage in natural linguistic conversations with humans, basically to behave as humans. The end goal of these models is to develop human-level cognition and completely mimic human intelligence and behaviour.
Generative AI learns from teaching, training, and experiences, thus making decisions based on stored memories and thought processes which are generated as per the datasets it got trained upon. It can remember and perceive its surroundings too which allows it to develop and study patterns and continuously keep learning from them, opening a doorway to numerous risks.
Risk and Concerns
- Misinformation:
All LLMs are prone to developing a fake sense of intuition wherein they perceive patterns that do not exist and have no real-world context, hence generating nonsensical results, popularly termed “AI Hallucinations.” It will just make things up which often arises due to incorrect or incomplete training data. Research shows that over 75% of consumers are concerned about misinformation from AI.
This raises serious concerns about reliability and reinforces that one must not solely depend on the results provided by these tools. Human oversight is critical in this case, and all the outcomes must be thoroughly cross verified before using.
- Inaccuracy:
Although these frequently used models are trained on huge petabytes of datasets, this does not guarantee them to be error-free. There are recorded instances of models struggling to perform basic mathematics tasks, providing content with numerous spelling errors, and generating variable inconsistent outputs, largely due to misinterpretation of training data. These incorrect results if used as it is without monitoring, might lead to a domino effect of inaccurate reports.
It’s vital to ensure a human is always in the loop to double-check outcomes.
- Lack of Transparency:
A wide variety of information is now just a prompt away, but users are often unaware of how these systems collect, process, save, and use their data leading to major violations of privacy expectations. No information is shared whether a proper data life cycle management process/policy is in place or not which raises concerns of trust issues with sensitive personal and company data. We must practice asking ourselves a simple question before uploading any data on these AI platforms – I am providing too much information?
In addition to this, we are also unable to see how deep learning models arrive at their decisions not allowing us to trace their thinking process, thus making it a mysterious “black box”. This opaqueness prevents users from making well-informed decisions and utilizing these tools with the much-required scrutiny. Drawing necessary boundaries while using AI is essential to maintain trust and accountability.
- Surveillance Risks:
While personalization is useful in industries such as healthcare and hospitality, AI-driven hyper-personalization can be alarming. We all must have experienced receiving product/service recommendations based on our past search history, combined with AI and real-time consumer data, this process is made even more granular by studying consumer moods and patterns to influence decision making.
This type of unconsented user profiling done on the bedrock of intrusive surveillance, data analytics and constant monitoring is a threat to human agency and conscious decision making. Strong oversight mechanisms and ethical guidelines must be adopted at organization and individual levels to effectively mitigate this risk.
- Security and Privacy Threats:
GenAI solutions are often referred to as “double-edged sword” as on one hand they offer exemplary innovation, while on the other hand they also expose users to massive cybersecurity and privacy risks. Owing to its lack of data usage transparency and black box effect, confidential and personally identifying information is susceptible to unwanted disclosures. In addition to this, these widely used platforms are vulnerable to prompt-injection attacks, wherein ill-meaning inputs may cause large language model chatbots to expose data that should be kept private or perform other actions that lead to data compromises. One needs to be extremely careful before uploading files which contain vital information about self or organization as privacy is truly a myth in the age of AI.
Thorough risk assessment must be conducted to ensure that chosen AI solutions are in alignment with organizational needs and comply with data regulation policies. Organizations like Apple and Samsung, for instance, have banned the internal use of ChatGPT, especially by the software development team after realizing that potentially sensitive code had been uploaded to the platform, risking the loss of confidential company information.
Regular audits will help mitigate privacy risks and protect personal and confidential data. Practices like anonymizing and encrypting sensitive user/customer data before uploading it onto the open AI platforms should be followed to prevent privacy breaches and unauthorized access.
Just like the evolving nature of this technology, these mitigating strategies must also evolve along with creating awareness, GenAI demystifying, and training opportunities for all.
- Legal Concerns:
Prompt-based LLMs seem magical, creating lengthy reports, essays, stunning images, etc. at the blink of an eye. But who owns these outcomes? Several reports suggest that these responses include copyrighted content without permission raising concerns of data ownership and intellectual property rights infringement. Moreover, when AI systems err, determining responsibility becomes a murky territory.
Implementation of robust data verification and data protection policies is the need of the hour, more than ever.
- Misuse:
Numerous videos are circulating on social media showcasing public figures speaking about something, which they never did! These are known as “Deepfakes” wherein hyper-realistic fabricated media is created using AI, mimicking someone to spread false or misleading information that can influence public opinion. These videos not only discredit individual identity but also are a major threat to public safety due to their persuasive nature. In addition to this, these tools can also be used to create ill-motived harmful content. Additionally, the weaponization of AI in the form of autonomous drones or military systems raises fears of an arms race where machines, not humans, dictate critical decisions. Such developments could lead to catastrophic consequences if left unchecked.
The application of human values and ethical principles is vital to protect our social fabric from such incidents.
- Biased Outcomes:
AI systems often inherit biases embedded in their training data, called as “AI Bias”. These biases can result in discriminatory outcomes which can skew decisions towards a particular direction. For instance, algorithms might unfairly disadvantage certain groups based on race, gender, or socioeconomic status, perpetuating systemic inequities. This highlights serious ethical and moral concerns associated with this technology.
The outcomes generated should always be overlaid by human expertise for objective and fair decision-making.
Human Factor
Although AI offers astounding benefits, overdependence on it can erode humans of the very traits that make them human. Essential skills like creativity, communication, language competency, critical thinking, and problem-solving are under threat due to the dopamine kick generated by instant results from these machine systems. Additionally, excessive dependence on digital assistants and automated systems could lead to a society where human effort and ingenuity are undervalued.
Unrestricted interaction with AI systems also risks fostering emotional detachment, as artificial ones replace human connections. This raises existential questions about the role of AI in shaping human identity and relationships. Just like how a low-calorie diet is followed to achieve certain health goals, it is important to follow a “low prompt diet” to protect ourselves from “prompt addiction.” Also, it must be noted that emotional intelligence is the core of human intelligence that enables collaboration, effective leadership, adaptability, better customer service, and ethical decision-making. No matter how advanced AI becomes, and no matter the safeguards or lack thereof, it can never “truly” replicate or match humanity’s ability to feel, experience, and express emotions.
HI should not be mortgaged to harness the advantages of AI. Thorough checks and balances must be implemented to ensure that we are the ones using the technology and not getting used by it instead.
Way Forward
While Artificial Intelligence holds the promise of revolutionizing every aspect of our lives, human intelligence must remain the guiding force for striking the right balance. Used responsibly, guided by ethics and a sense of purpose, it surely is a powerful tool that can be used to augment human capabilities. By proactively addressing risks and concerns, we can harness AI’s potential safely, thus creating a future where technology empowers humanity without overshadowing it. Emphasizing collaboration between humans and machines can maximize benefits while minimizing risks. The eventual goal must be a harmonious coexistence where AI serves as an ally, not a master.