ChatGPT, the New AI Chatbot on the Block: What Happens Now?
Updated: Apr 29
By: MARI RIVERA / STAFF WRITER
AI is booming, and leading the charge is ChatGPT.
ChatGPT joined the AI chatbot neighborhood in November 2022. Developed by the company OpenAI, ChatGPT stands for Chat Generative Pre-trained Transformer, and it was launched as a prototype upon initial release. It serves as a sibling model to InstructGPT, also produced by OpenAI, and is meant to receive instructions from human input and provide detailed responses. An updated version of the chatbot, built on OpenAI's newest model, GPT-4, was released on Mar. 14 for paid subscribers only.
Users can access the AI chat system through any device with internet access. ChatGPT's primary function is to mimic a human conversationalist. Users can ask the machine any question, and it will draw upon its programmed repertoire to give the best and most accurate answers it can. Its functionalities include, but are not limited to: writing computer programs, writing essays, answering test questions (within limits), playing basic games such as tic-tac-toe, and simulating a chat room.
ChatGPT is well-equipped, with its ability to recognize distinct patterns of speech and vocabulary, be attuned to internet phenomena such as popular memes and catchphrases, grasp and utilize computer programming languages, and remember previous responses in the same conversation.
It is important to note that, unlike Siri (Apple's automated voice-activated digital assistant) and Alexa (Amazon's counterpart), ChatGPT does not directly access the internet to generate responses.
ChatGPT is additionally trained to steer clear of harmful or false responses. OpenAI uses a moderation application programming interface (API) to filter each query received through the website in order to discern and prevent offensive responses.
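In broad strokes, a moderation layer like the one described works as a pre-check that runs on each query before a response is generated. The toy filter below is a loose illustration only; OpenAI's actual moderation API is a hosted machine-learning service, and every name here (`BLOCKED_TERMS`, `moderate`, `answer_query`) is hypothetical.

```python
# Illustrative sketch of the "pre-check each query" pattern only.
# OpenAI's real moderation API is a hosted classifier; this toy
# keyword filter and all names in it are hypothetical.

BLOCKED_TERMS = {"slur_example", "threat_example"}  # placeholder terms

def moderate(query: str) -> bool:
    """Return True if the (toy) moderation check passes the query."""
    words = set(query.lower().split())
    return words.isdisjoint(BLOCKED_TERMS)

def answer_query(query: str) -> str:
    """Run the moderation pre-check before generating any response."""
    if not moderate(query):
        return "This request was flagged by the moderation filter."
    return f"(model response to: {query})"
```

The key design point is the same either way: the filter sits in front of the model, so a flagged query never reaches the response-generation step at all.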
While all of what ChatGPT can do sounds mighty impressive, no technology comes without fault.
OpenAI openly lays out some of the software's limitations on its website:
For starters, ChatGPT was trained on data only through 2021, meaning the bot is not one hundred percent accurate or current. Secondly, no AI machine is made without some inherent biases stemming from the team working on the project. OpenAI has received criticism for this, especially since it is OpenAI researchers and developers who choose the data the bot learns from.
Other limitations include that ChatGPT “sometimes writes plausible-sounding but incorrect or nonsensical answers,” that ChatGPT is sensitive to changes in wording (“given one phrasing of a question, the model can claim to not know the answer, but given a slight rephrase, can answer correctly”), that the “current models usually guess what the user intended” rather than ask clarifying questions, and that the AI may “sometimes respond to harmful instructions or exhibit biased behavior.”
Similarly, users have experienced some more specific issues when using the bot. An example is the “at capacity” message that many new users have been met with when trying to create an account. While the issue may be fixed, there is a chance that something similar will occur when the bot is fully released to the public.
Of course, with something new and technologically advanced, there are those who abuse it. In schools and universities, students have turned to ChatGPT to plagiarize essays. Darren Hick, a professor at Furman University (SC), admits that “Academia did not see this coming. So we’re sort of blindsided by it.”
However, because ChatGPT responds with unique answers, even to similar queries, proving that a student used ChatGPT would be difficult. And because the bot is constantly learning from each person’s interactions with it, it will eventually have minimal irregularities and errors, matching a human’s tone and vocabulary much more accurately.
Hick echoes the fears of other people in academia and even the workplace.
“I feel the mix myself between abject terror and what this is going to mean for my day-to-day job — but it’s also fascinating, it’s endlessly fascinating,” he said.
Fascination with ChatGPT has led the bot to attract a great deal of attention across the globe, drawing in about 1 million users, and has sparked competing companies to develop their own forms of chat-based generative AI.
Other tech companies, such as Google and Microsoft, have started to build something similar. On Feb. 7, 2023, Microsoft launched its new AI-powered Bing search engine and Microsoft Edge browser. The AI bot utilizes OpenAI’s newest GPT model. The goal was to upgrade the web browser to provide better search results, a chat experience, and the ability to generate content. Microsoft describes the AI bot as an integration with the internet browser, the two working in tandem to provide the user with the best search experience possible.
Satya Nadella, the chairman and CEO of Microsoft, explains the need for the AI bot: “There are 10 billion search queries a day, but we estimate half of them go unanswered. That’s because people are using search to do things it wasn’t originally designed to do. It’s great for finding a website, but for more complex questions or tasks too often it falls short.”
Microsoft is collaborating with OpenAI to ensure that there are safeguards in place to prevent harmful content and that the AI operates responsibly.
And unlike ChatGPT, whose repertoire only dates to 2021, the Bing AI integration is current and can effectively answer queries based on more recent events.
The new Bing AI is available for a limited preview on desktop, and people interested in using the service can practice asking the bot sample queries and sign up to be part of the waitlist for when Bing AI is fully launched.
With the surge in AI bots, many questions arise: What happens now? Are ChatGPT and other AI bots worth it, or safe? Will they eliminate jobs for humans?
The latest answers to some of these questions arise from a statement made by Elon Musk, the CEO of Twitter. On Mar. 29, he, along with other experts and industry leaders in artificial intelligence, called for a six-month pause in the development of systems more powerful than OpenAI’s newest GPT-4.
An open letter was created to garner support on the basis that AI systems developed with such high levels of intelligence in such short periods of time pose tremendous risks. The letter states: “Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.”
The letter, which had already gathered 26,222 signatures as of Apr. 18, even argues that if a pause is not enacted universally and quickly, governments will have to step in.
Despite the negativity surrounding AI, it is important to note that Musk and company do not intend to halt AI development altogether. They acknowledge that the technology can be engineered for the betterment of society and that “humanity can enjoy a flourishing future with AI,” and only want to guarantee that AI is equipped with safety and accuracy protocols. And most notably, simply because Musk is asking to halt development does not necessarily mean everyone will comply.
Skepticism surrounding the bot is warranted because of how new the technology is. New technology has a tendency to scare people into thinking that their world will be overrun by robots and machines. But people thought the same back when technologies such as Siri, Alexa, and Google Home hit the block. Yet people are still very much present and active in their roles as humans and in the workplace; these technologies have just streamlined certain processes.
ChatGPT and other artificial intelligence bots have the potential to make people’s lives easier by streamlining mundane tasks such as email writing and other administrative work. However, a great deal of work needs to be done to ensure that ChatGPT is safe, sufficient, non-intrusive, and less limited.