I. Introduction
In recent years, Artificial Intelligence (AI) has sparked a new revolution within the broad field of Information and Communication Technology (ICT), replacing older technologies that powered many applications [1], [2]. These advances in AI have brought many benefits to humans, animals, and the environment. One of the major developments in AI that has begun to make a revolutionary impact is the Large Language Model (LLM) [3]–[5]. An LLM is a deep learning model capable of performing an array of Natural Language Processing (NLP) tasks, including text generation and classification, answering questions in a human-like manner, and translating text from one language to another [6]. One of the major recent applications of LLMs is the Generative Pre-trained Transformer (GPT) [3]. GPT is a framework that uses the transformer architecture to produce human-like text through deep learning techniques [7]. The most popular GPT implementation is ChatGPT, a chatbot developed by OpenAI and launched in 2022 [8]. Similar chatbots released by other organizations include Bing AI by Microsoft, Bard by Google, Jasper Chat by Jasper, Claude 2 by Anthropic, Llama 2 by Meta, HuggingChat by HuggingFace, Chatsonic by Writesonic, YouChat by You.com, and Copilot X by GitHub [9], [10]. Like the two sides of a coin, advances in any technology bring both advantages and disadvantages. In the same vein, the advances in AI have also created serious problems, especially in the area of LLMs [11]. Any technology used with good intentions can yield many positive outcomes; the same technology, if abused, can harm the very people it is expected to benefit. In this article, the authors take a deep look at WormGPT, an LLM implementation targeted at users with malicious intentions.