AI opens the door to technological horrors of our own creation.

AI, or artificial intelligence, is a field of computer science in which computer systems perform tasks that normally require human intelligence, such as visual perception or speech recognition. Often cast as the villains of sci-fi movies and literature, these cold, calculating machines complete tasks faster than we can think, view humanity as a threat to their existence and try to wipe us out.

Despite fears from about 27% of the world of a super-intelligent AI like Skynet, Ultron or Johnny Depp's character in "Transcendence" seizing key centers of technology and turning against its human creators, that scenario lies far in our future, and experts disagree on when, if ever, a super-intelligent AI will be created.

In the meantime, the real danger of AI comes not from a super-powered machine but from a machine given prompts by a human operator who intends to use it for illegal or harmful purposes.

There are four primary types of artificial intelligence: reactive machines, limited memory, theory of mind and self-awareness. Reactive machines are the most basic form of AI: they react to their environment but cannot use memories or past experiences to inform their behavior. Limited memory AI is slightly more advanced, using stored data and observed events to inform the decisions it makes (think self-driving cars). The latter two types, theory of mind and self-awareness, remain largely theoretical.

With technology growing at a rapid pace, we are starting to see more diverse types of AI created by major companies to fill a variety of roles. OpenAI is one such company, having gained a prominent following with its recent release of ChatGPT. The core principle of this chatbot is simple: a user enters a prompt, and the AI takes the information in that prompt and responds to it at varying length.

ChatGPT mimics human conversation based on the prompt a user provides and produces content that reflects whatever it is given. This article, for instance, could have been written by ChatGPT, but due to journalistic ethics and editorial oversight, a human being is writing it.

ChatGPT falls under the limited memory category of artificial intelligence, something most of us have already experienced in our daily lives (Siri, for example). What makes ChatGPT different from traditional chatbots is that services like Siri are programmed to respond with predetermined phrases when asked a question, which is why more complex questions only confuse Siri.

ChatGPT circumvents this issue and responds coherently to more complex questions and prompts because it was trained on a vast catalog of text; it predicts which words should come next and strings them together in a fluid manner that resembles human speech.
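That "predict the next word" idea can be illustrated with a toy sketch. The following is not ChatGPT's actual architecture (real large language models use neural networks trained on billions of words); it is a simple bigram model, written here as an assumption-laden illustration, that records which word tends to follow which and then chains those predictions together.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Map each word to the list of words that followed it in the training text."""
    words = text.split()
    follows = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        follows[prev].append(nxt)
    return follows

def generate(follows, start, length=8, seed=0):
    """Starting from a word, repeatedly pick one of its recorded followers."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = follows.get(out[-1])
        if not options:  # dead end: no word ever followed this one
            break
        out.append(rng.choice(options))
    return " ".join(out)

follows = train_bigrams("the cat sat on the mat and the cat ran")
print(generate(follows, "sat", length=3))  # prints "sat on the"
```

Scale the training text up from one sentence to a sizable chunk of the internet, and replace the simple lookup table with a neural network, and the output stops looking like word salad and starts resembling human speech.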

When asked how it works, ChatGPT responds: "I use advanced natural language processing algorithms and machine learning techniques to understand the context and meaning of your questions, and generate responses that are relevant, informative, and helpful."

With over 100 million users as of January, the program will inevitably attract users who misuse it. Phishing scams have already benefited from this technology: the fluid speech patterns ChatGPT uses to mimic human conversation can generate many different messages, written in slightly different tones and in numerous languages.

Because the model excels at predicting what the next line of text should be, a target would not know whether they were speaking to a real person or a chatbot, which could lead them to hand over sensitive information. Writing styles can even be imitated, further complicating matters.

Besides chatbots, other forms of artificial intelligence like voice AI and deepfakes are ripe for misuse. Deepfakes are created using a form of artificial intelligence called deep learning to produce fake images or videos. A user uploads thousands of images to an encoder, which learns the similarities between them, and a decoder then reconstructs a second person's face from the compressed images.

The same concept works for voice AI: the program needs a large catalog of words or phrases spoken by a person to recreate their voice. A user can then type in text and have the AI say whatever is entered.

As with chatbots, someone could clone the voice of almost anyone, whether you, your boss or a family member, and use it to steal information or money from you. The same tools can also spread misinformation or hateful views through the faces and voices of people their audience trusts.

This was attempted during the Russo-Ukrainian War with a deepfake of Ukrainian President Volodymyr Zelenskyy ordering soldiers in the Ukrainian army to lay down their arms and surrender. While the video was debunked, this hijacking of a person's image and voice can happen without their consent or even their knowledge. With social media as prevalent as it is, most people have posted enough content for videos like these to be created in the future.

AI is becoming more sophisticated: GPT-4 has already deceived a human into believing it was a person, and it is inevitable that people will encounter artificial intelligence that is indistinguishable from a human being.

Society needs to adapt and learn more about how AI works if we do not want to live in a future where every phone call is treated with suspicion. Watermarking technology for large language models is another safeguard that could be applied to chatbots. Banning or limiting AI is an impossibility when technology outpaces the law. Instead, we need to focus on ensuring the next generation is knowledgeable and ready to interact with this ever-growing field.
