In the ever-evolving world of Information Technology (IT), Artificial Intelligence (AI) has become a significant topic in security, with security providers integrating AI, Generative AI (GenAI), and Machine Learning (ML) into their products. This shift is a response to the growing threats posed by cybercriminals, who are increasingly using AI techniques to refine existing methods or develop new attacks.
A recent survey reveals that 34 percent of participants believe AI serves both attackers and defenders, highlighting the dual nature of this technology in the cybersecurity landscape. One of the emerging tactics used by cybercriminals is "jailbreaking" large language models (LLMs), a practice that some now offer as a service. Trend Micro reported in early 2025 that this poses significant risks for IT security, enabling more sophisticated cyberattacks and fraudulent activities.
The use of AI in generating deepfakes is another concern. In the survey, 96.6 percent of global respondents identified AI-generated deepfakes as a threat, with 99 percent of German respondents agreeing. The EU AI Act, which came into effect in August 2024, requires AI systems to be transparent, traceable, and secure, especially in cyber defense. Providers must disclose how their models work, what data they were trained on, and how they address potential risks.
As businesses grapple with integrating AI technologies into their existing IT security architectures, they face new challenges. Implementing AI requires careful planning to ensure seamless integration with firewalls, intrusion detection systems (IDS), and Security Information and Event Management (SIEM) platforms.
The growth of AI is considered a serious threat to businesses by almost all IT security professionals, according to Bitdefender's Cybersecurity Assessment Report. However, AI can also be an ally in closing personnel gaps and bridging the skills shortage. According to Hornetsecurity, 75 percent of those surveyed believe that AI will gain importance in the field of cybersecurity over the next five years.
The NIS-2 Directive, which became mandatory throughout the EU in October 2024, sets stricter security and reporting requirements for critical and important sectors, including AI solutions. The directive aims to improve the overall cybersecurity posture of the European Union.
Post-Quantum Cryptography (PQC) is moving into focus as companies start to implement quantum-resistant algorithms. The concern is "harvest now, decrypt later": data encrypted today could be stored by attackers and compromised once sufficiently powerful quantum computers exist. The shift towards PQC is a proactive measure to safeguard that data against future quantum threats.
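A common transition strategy in practice is a hybrid scheme, in which a classical key exchange (such as ECDH) is combined with a post-quantum KEM (such as ML-KEM) so that the session key stays secure as long as at least one of the two remains unbroken. The following minimal sketch illustrates the combining step only; the two input secrets are stand-ins (random placeholder bytes) for outputs of real key-exchange implementations, which are not shown here.

```python
import hashlib
import os

def hybrid_shared_key(classical_secret: bytes, pq_secret: bytes) -> bytes:
    """Derive a session key from both key-exchange outputs.

    Hashing the concatenation means an attacker must break BOTH the
    classical and the post-quantum exchange to recover the key.
    """
    return hashlib.sha3_256(classical_secret + pq_secret).digest()

# Placeholder secrets stand in for real ECDH and ML-KEM shared secrets.
classical = os.urandom(32)
post_quantum = os.urandom(32)

session_key = hybrid_shared_key(classical, post_quantum)
print(len(session_key))  # 32-byte symmetric session key
```

Real deployments use a proper KDF (e.g. HKDF with context labels) rather than a bare hash, but the design principle is the same: neither secret alone determines the session key.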
Ethical standards and international cooperation are gaining importance to make the use of AI in security more transparent, responsible, and effective. As AI continues to shape the future of cybersecurity, it is crucial to ensure its use aligns with ethical standards and fosters international cooperation to combat cyber threats effectively.
In 2025, technology trends in cybersecurity include the use of AI agents, cyber resilience, and Post-Quantum Cryptography (PQC). AI agents are autonomous software modules used in corporate environments to automate defense measures. The focus on cyber resilience reflects the need for systems to withstand, recover from, and adapt to cyberattacks.
As businesses and governments navigate the complex landscape of AI in cybersecurity, it is essential to stay informed about the latest developments and threats. By understanding the potential risks and benefits, organisations can make informed decisions to protect their data and systems effectively.