Artificial Intelligence: Ally or Adversary?

Noman Qureshi, Cyber Security Lead at Emirates Group

As artificial intelligence reshapes the digital world, Noman Qureshi, Cyber Security Lead at Emirates Group, delves into its growing impact on cybersecurity—both as a powerful tool for defense and a potential threat in the wrong hands.

AI is fast becoming the mother tongue of modern business: it can carry out complex tasks with an accuracy and efficiency that often surpass human performance, so organizations must learn to speak this language to stay competitive. Artificial Intelligence has broken the barriers of how conventional computers used to operate and has made everyday life easier, but alongside its numerous advantages there is an immense need to understand AI-generated cyber attacks, because the cybersecurity landscape has shifted dramatically. Two years ago, if you wanted to create or purchase malware, you needed to be a developer or to have access to the Dark Web. That is no longer the case. Today, tools like ChatGPT can write malicious code for attackers.

Nevertheless, AI is, at heart, an architecture of powerful computing that lets machines perform many human-like tasks seamlessly. AI algorithms can process and analyse enormous volumes of complex data, identifying patterns, trends, and anomalies that would otherwise be missed, which leads to more informed, data-driven decisions. The benefits are many. AI-powered systems run continuously around the clock without breaks, automating mundane, routine, and high-volume tasks and thereby increasing efficiency and productivity. Because AI systems do not suffer from fatigue, distraction, or emotional bias and consistently follow defined rules, they improve accuracy and reduce human error. AI algorithms can also analyse user behaviours, preferences, and interactions to deliver highly personalized experiences, recommendations, and services across domains such as marketing, e-commerce, and content delivery. Finally, AI delivers major cost savings and scalability by optimizing resource allocation, reducing waste, and improving efficiency.

In contrast, AI’s role in cybercrime has seen a notable surge because AI agents present an attractive option for malicious actors: they are more cost-effective than professional hackers and can orchestrate widespread attacks far faster than humans. The most profitable ransomware attacks are currently infrequent because they demand significant human skill, yet artificial intelligence is dramatically accelerating the pace of cyberattacks, with “breakout times” far shorter than those of attacks crafted by humans alone. Cybercriminals are leveraging AI tools, from generating highly persuasive phishing emails, fraudulent websites, and deepfake videos to injecting malicious prompts or code, to construct customized, authentic-looking messages and tactics. This allows them to circumvent conventional security defences and operate at an unparalleled scale.

Accordingly, the discussion is incomplete without understanding the now-widespread concept of Generative AI (GenAI). Until recently, conversations about artificial intelligence typically revolved around machine learning models designed to make predictions from data. GenAI, in contrast, is a class of machine learning models trained to produce new data rather than to predict outcomes from existing datasets: GenAI systems can generate fresh content that mirrors the data they were trained on. This technology is poised to revolutionize businesses by creating sophisticated content, writing code in diverse programming languages, and even designing product mock-ups. Increasingly, these AI tools are being integrated into enterprise systems, where they interact with users to automate tasks, streamline workflows, and boost organizational efficiency. GenAI has advanced significantly and now outperforms narrow AI in many domains. A prime example is OpenAI’s ChatGPT, which can generate thorough articles, persuasive marketing copy, and engaging creative writing that sometimes surpasses human output. Software engineers are also weaving AI into their workflows, using it to assist with routine code; AI tools now automate multiple phases of the development lifecycle, leading to better code quality, faster project delivery, and a quicker path to market. In the creative arts, AI applications are producing high-calibre visual content, progressing from basic AI-generated images to highly detailed product mock-ups, architectural designs, and videos. This advancement enhances the creative process, enabling businesses to bring their visions to fruition with greater efficiency.
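
To make the predict-versus-generate distinction concrete, here is a minimal Python sketch, purely illustrative and not drawn from any tool named in this article, that contrasts a small predictive classifier with an off-the-shelf text-generation model. The scikit-learn and Hugging Face transformers libraries, the toy data, and the gpt2 model choice are all assumptions made for the example.

```python
# Minimal sketch: predictive vs. generative models.
# Assumes scikit-learn and Hugging Face transformers are installed;
# neither library is prescribed by the article -- they are illustrative choices.

from sklearn.linear_model import LogisticRegression
from transformers import pipeline

# 1) Predictive (discriminative) model: learns to map inputs to known labels.
X = [[0.1, 0.9], [0.8, 0.2], [0.2, 0.8], [0.9, 0.1]]   # toy feature vectors
y = [1, 0, 1, 0]                                        # known outcomes
clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.15, 0.85]]))   # predicts a label for new, similar data

# 2) Generative model: produces new content resembling its training data.
generator = pipeline("text-generation", model="gpt2")
print(generator("Generative AI can help security teams by",
                max_new_tokens=30)[0]["generated_text"])
```

The first model can only assign labels it has already seen; the second invents new text, which is exactly the capability businesses are now embedding into content creation, coding, and design workflows.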

While the blessings of GenAI are quite clear, organizations often encounter significant hurdles in implementing it effectively. Key challenges include guaranteeing data quality and quantity, addressing ethical concerns around intellectual property and the spread of misinformation, and integrating AI seamlessly with current business systems. Yet some leaders overlook a crucial element: how well their employees adapt to and use these tools. Integration can be frustrating if the interaction between users and GenAI tools is not intuitive. Many GenAI applications demand carefully worded prompts or commands, and without clear guidance on how to communicate with them effectively, users may struggle to achieve satisfactory results, breeding a sense of mistrust. Employees should have a clear grasp of what this technology can and cannot do, including its potential downsides. Collaborating with AI-generated content also raises key questions about who owns the work, who is responsible for it, and where human creativity fits in. To help employees adapt, business leaders should focus on training them to use these GenAI tools effectively. It is also vital to gather their feedback and use it to keep improving how the tools work. On top of that, an open and collaborative culture will encourage better communication, prompting employees to share their experiences, insights, and any challenges they face.
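
To illustrate why prompt clarity matters, the short Python sketch below contrasts a vague request with a structured one. The prompts and the send_prompt placeholder are hypothetical and not tied to any specific GenAI product; the point is only that stating role, task, audience, tone, and constraints tends to produce far more usable output.

```python
# Illustrative only: the prompts and the send_prompt() helper are hypothetical;
# substitute whichever GenAI tool or API your organisation has approved.

def send_prompt(prompt: str) -> str:
    """Placeholder for a call to the organisation's chosen GenAI service."""
    raise NotImplementedError("Wire this up to your approved GenAI tool.")

vague_prompt = "Write something about our security policy."

structured_prompt = (
    "You are drafting internal guidance for non-technical staff.\n"
    "Task: summarise our email security policy in 5 bullet points.\n"
    "Audience: new employees.\n"
    "Tone: plain, friendly, no jargon.\n"
    "Constraints: under 120 words; do not include confidential details."
)

# Hypothetical usage:
# draft = send_prompt(structured_prompt)
```

Training employees to write prompts of the second kind, and to review the output critically, addresses much of the frustration and mistrust described above.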

Moreover, GenAI is proving to be a double-edged sword in cybersecurity. While it offers powerful capabilities for defence, it also significantly bolsters cybercriminal activity. Its strengths, such as analysing complex patterns and automating tasks, are being exploited maliciously, and criminals weaponizing GenAI have driven a marked surge in online dangers. This powerful AI is being used to generate highly convincing, personalized phishing messages and complex social engineering tactics to deceive targets. We are also seeing the rise of deepfakes, created with GenAI, to impersonate people for manipulation or advanced trickery. GenAI also helps create evolving malware that can bypass standard detection methods and aids in pinpointing system weaknesses for precise attacks. It can automate elements of hacking, facilitating broader and more intricate attacks that are difficult to trace. AI models are even being trained to bypass security features such as biometrics or CAPTCHAs by imitating human actions. Fundamentally, GenAI is quickly making the cyber threat environment far more perilous.

A sound cyber security strategy should build its defence mechanisms around maximum use of AI to identify and neutralize cyber threats. Deep learning can simulate advanced attack scenarios, which is crucial for testing and strengthening security systems against known and emerging dangers, and GenAI should automate routine security tasks so that cybersecurity teams can focus on complex challenges. Security professionals should also use AI to enhance threat detection and response, building sophisticated models that predict and identify the unusual patterns indicative of cyber threats and allow faster, more effective responses than traditional methods; because these models adapt to new threats, detection mechanisms stay ahead of attackers and breach risks are reduced. AI can further customize security protocols by analysing vast amounts of data to predict and enforce effective measures, detect complex phishing attacks by analysing communication patterns, and support data masking through synthetic data creation, preserving privacy without compromising training needs. Finally, teams should use AI to automate security policy generation and to revolutionize incident response by generating automated actions and simulating mitigation strategies.
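
As one deliberately simplified sketch of the anomaly-detection idea described above, the Python example below trains an Isolation Forest, one common choice among many, on toy login-telemetry features and flags outliers. The feature set, the contamination rate, and the simulated data are assumptions for illustration, not a production detection model.

```python
# Minimal anomaly-detection sketch for security telemetry.
# Assumptions: scikit-learn is available; the features (login hour, failed-login
# count, MB transferred) and the contamination rate are illustrative only.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" behaviour: daytime logins, few failures, modest transfers.
normal = np.column_stack([
    rng.normal(10, 2, 500),    # login hour (around 10:00)
    rng.poisson(1, 500),       # failed login attempts
    rng.normal(50, 15, 500),   # MB transferred
])

# A few suspicious events: 3 a.m. logins, many failures, large transfers.
suspicious = np.array([
    [3, 12, 900],
    [2, 20, 1500],
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns -1 for anomalies and 1 for inliers.
print(model.predict(suspicious))   # expected: [-1 -1]
print(model.predict(normal[:3]))   # mostly [1 1 1]
```

In practice such a model would be trained on the organisation's own telemetry, tuned against known incidents, and paired with analyst review before any automated response is triggered.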

It is crucial for companies to have a clear picture of how deeply they plan to integrate AI. That understanding should directly guide how they allocate their budget, not just for getting the AI up and running but also for putting strong rules and oversight in place to manage its effects properly. Cybersecurity professionals should therefore ensure that AI-powered cybersecurity solutions are incorporated into their overarching cyber security strategy. This strategic incorporation requires strong endorsement from the company’s Senior Executives and Board Members, so that their organizations can offer the safest possible environments for both their clients and the nation. It is becoming increasingly clear that AI presents a dual reality: it can be an invaluable ‘Ally’ for companies, or a formidable ‘Enemy’ of those very same organizations, depending on how it is used.
