Automotive Industry Cybersecurity: Potential Threats from GenAI Models to the Supply Chain

In the rapidly evolving world of technology, GenAI models have become an integral part of software-defined vehicles. These intelligent systems, which learn independently and act autonomously, have the potential to revolutionize the automotive industry. However, they also pose new cybersecurity challenges that need to be addressed.

One of the concerns is the potential exploitation of GenAI models by cybercriminals. Manipulated GenAI models can trigger unintended or harmful behaviour, posing a significant risk to safety and privacy.

To mitigate this risk, automotive cybersecurity companies such as VicOne recommend that GenAI models supplied by third-party providers be tested for cybersecurity and traceability before deployment, and that they be treated like traditional supply chain components so the associated risks can be managed effectively.

However, most GenAI models are trained, optimized, and updated by external partners using data, processes, and tools that Original Equipment Manufacturers (OEMs) have not seen or cannot control. This lack of visibility makes it challenging to ensure the security of these models.

The Imprompter attack demonstrated this vulnerability. In this attack, seemingly random gibberish hid malicious prompts that exfiltrated personal data: security researchers transformed a data collection prompt for personally identifiable information (PII) into a concealed suffix of random-looking characters and used it to steal sensitive information such as names, ID numbers, and payment details.
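Obfuscated suffixes of this kind are not ordinary language, and that difference can be measured. The sketch below is a minimal illustrative heuristic (not any vendor's actual defence) that flags prompts whose trailing characters show much higher character entropy than natural text; the function names and the threshold value are assumptions:

```python
import math
from collections import Counter

def shannon_entropy(text: str) -> float:
    """Bits per character of the character distribution in `text`."""
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_like_adversarial_suffix(prompt: str, tail_len: int = 40,
                                  entropy_threshold: float = 4.5) -> bool:
    """Heuristically flag prompts whose trailing characters look like gibberish.

    Obfuscated suffixes such as those used in the Imprompter attack tend to
    have far higher character entropy than natural language, which usually
    stays well below ~4 bits per character for a short sample.
    """
    tail = prompt[-tail_len:]
    if len(tail) < tail_len:
        return False
    return shannon_entropy(tail) > entropy_threshold
```

A heuristic like this can only raise suspicion, not prove intent; attackers can also craft suffixes that mimic natural-language statistics, so it would be one signal among several.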

The behaviour of GenAI models depends on the data they are trained on and the continuous development of their learning process. This makes it impossible for OEMs to fully test or lock these models using known methods. Therefore, it is crucial to consistently control and monitor these models to ensure security against potential cyber risks.
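Continuous monitoring can include scanning a model's outputs before they leave the vehicle or backend. As a toy illustration only (real deployments would use robust, locale-aware PII detectors rather than these assumed regex patterns), such a check might look like:

```python
import re

# Illustrative patterns only; these toy regexes stand in for the
# locale-specific PII detectors a production system would use.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_model_output(text: str) -> list:
    """Return the names of PII patterns found in a model response.

    A non-empty result would block or quarantine the response before
    it is spoken, displayed, or transmitted.
    """
    return [name for name, pattern in PII_PATTERNS.items()
            if pattern.search(text)]
```

Because the model itself cannot be fully locked down, this kind of output-side gate shifts part of the control to a component the OEM does own and can test.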

The cybersecurity risk is distributed across every phase of a GenAI model's lifecycle, and the points of vulnerability are numerous: the source of the training data often cannot be verified, and the identity of the model's creator may remain undisclosed.

Moreover, cyberattackers can manipulate AI models by tracing the exact sequence of guardrail tokens and crafting malicious suffixes that bypass security checks. The lack of an AI-SBOM (Software Bill of Materials for AI components) can lead to the unintentional deployment of outdated or untrustworthy proxy versions, creating new attack vectors.
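To make the idea of an AI-SBOM concrete, the sketch below shows one hypothetical record for a deployed model. The field names are assumptions for illustration and are not drawn from any particular SBOM standard:

```python
from dataclasses import dataclass
import hashlib

@dataclass
class AIComponentRecord:
    """One entry in a hypothetical AI-SBOM (field names are illustrative)."""
    model_name: str
    model_version: str
    supplier: str
    weights_sha256: str        # hash of the deployed weight artifact
    training_data_source: str  # provenance statement; "unknown" if unverified
    last_security_test: str    # date of the most recent assessment; "never" if none

def record_for(model_name: str, version: str, supplier: str, weights: bytes,
               data_source: str = "unknown", tested: str = "never") -> AIComponentRecord:
    """Build an AI-SBOM record, hashing the weight bytes for later verification."""
    return AIComponentRecord(
        model_name=model_name,
        model_version=version,
        supplier=supplier,
        weights_sha256=hashlib.sha256(weights).hexdigest(),
        training_data_source=data_source,
        last_security_test=tested,
    )
```

Even this minimal record makes two of the gaps named above explicit: a "training_data_source" of "unknown" or a "last_security_test" of "never" is now a visible, auditable fact rather than a silent omission.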

If such techniques were applied to speech assistants or in-vehicle infotainment systems (IVI), cyberattackers could issue false navigation commands, secretly record private voice interactions, or trigger other unauthorized actions, threatening safety and privacy.

To prevent these cyber threats at the model level, it is essential to treat integrated AI like any other external data provider, applying appropriate governance and risk controls. OEMs should implement practices such as creating an SBOM for AI, conducting robust security tests, and integrating AI models into their cybersecurity governance.
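One such control can be sketched as a deployment gate that refuses to load a model whose artifact hash does not match its SBOM entry. The record format here is an assumption; only a single 'weights_sha256' field is relied on:

```python
import hashlib

def verify_model_artifact(weights: bytes, sbom_entry: dict) -> bool:
    """Deployment gate: accept a model only if its weight hash matches the SBOM.

    `sbom_entry` is a hypothetical record; the only field assumed here is
    'weights_sha256'. A mismatch means the artifact was swapped, re-trained,
    or tampered with somewhere in the supply chain, so it must not be loaded.
    """
    actual = hashlib.sha256(weights).hexdigest()
    return actual == sbom_entry.get("weights_sha256")
```

The check is deliberately dumb: it cannot say whether a model is safe, only whether it is the exact artifact that was tested and recorded, which is precisely the traceability guarantee an SBOM is meant to provide.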

In conclusion, while GenAI models offer tremendous potential for the automotive industry, they also present new cybersecurity challenges. By treating these models like any other supply chain component, implementing robust governance practices, and consistently controlling and monitoring them, we can ensure that these innovative technologies are used safely and securely.
