The Role and Risks of Generative AI in Engineering and Manufacturing


According to the 2024 PwC Global CEO Survey, 70 per cent of business leaders believe that generative AI will significantly change the way their business creates, delivers and captures value. However, many are concerned about the variety of risks associated with generative AI, including cybersecurity threats and the need for strict regulation of its use in the manufacturing industry. Here, Ian Gardner, for Advanced Engineering, explains the great potential of generative AI while also highlighting the significant risks it poses to manufacturers.

Generative AI is a subset of artificial intelligence that involves algorithms capable of creating new designs and solutions, rather than simply analysing existing data or making predictions based on it. 

In manufacturing and engineering, this means it holds considerable potential for firms to increase efficiency, whether by optimising design processes, enhancing productivity or reducing the time needed to bring new products to market.

However, such speed brings risk. Extensive reliance on data can significantly increase the risk of breaches and cyberattacks, potentially exposing sensitive information. AI tools can also produce faulty designs or disrupt production processes.

With that in mind, I believe that it’s important to understand the benefits of generative AI alongside the need for robust risk management strategies, which requires collaboration between companies. 

Prescriptive maintenance 

BlueScope, an Australian steel manufacturer, is among the first to adopt the new generative AI functionality in Siemens' predictive maintenance solution, Senseye Predictive Maintenance. This allows the platform not only to analyse machine and maintenance data but also to generate new models of machine and worker behaviour.

Typically, machine and maintenance data are analysed by machine learning algorithms and the platform delivers notifications to users as static, self-contained cases. The generative AI feature in Senseye Predictive Maintenance allows manufacturers to go beyond simply extracting knowledge from machines and systems, by giving maintenance engineers the most effective course of action for increased efficiency.

The platform does not require high-quality data to provide actionable insights either. By scanning and grouping cases, it derives an optimised maintenance strategy that considers problems based on previously trained data, all while ensuring security within a private cloud.

As a result, it is facilitating a move to the next generation of maintenance strategy. Following predictive maintenance, which forecasts when maintenance should be performed based on historical data, prescriptive maintenance will become a reality thanks to generative AI. This means the technology will not only make predictions but go one step further and prescribe the optimal actions for engineers to take.
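To make that distinction concrete, here is a minimal sketch in Python. It is purely illustrative rather than Senseye's implementation: the asset name, sensor readings, thresholds, failure modes and action playbook are all assumptions for demonstration.

```python
# Minimal, illustrative sketch of the predictive-to-prescriptive step.
# Not Senseye's implementation: asset names, thresholds, failure modes
# and the action playbook below are all assumptions for demonstration.

from dataclasses import dataclass


@dataclass
class Forecast:
    asset_id: str
    days_to_failure: float      # predictive output: when is failure expected?
    likely_failure_mode: str    # e.g. "bearing_wear"


# Hypothetical playbook mapping failure modes to maintenance actions
ACTION_PLAYBOOK = {
    "bearing_wear": "schedule bearing replacement at the next planned stop",
    "lubricant_degradation": "top up lubricant and re-sample in 7 days",
}


def predict(vibration_history: list[float]) -> Forecast:
    """Predictive step: a toy trend check standing in for a trained model."""
    recent = sum(vibration_history[-5:]) / 5
    baseline = sum(vibration_history[:5]) / 5
    trend = max(recent - baseline, 1e-6)
    return Forecast(
        asset_id="line-3-roller",
        days_to_failure=round(10.0 / trend, 1),
        likely_failure_mode="bearing_wear",
    )


def prescribe(forecast: Forecast) -> str:
    """Prescriptive step: turn the forecast into a concrete course of action."""
    action = ACTION_PLAYBOOK.get(forecast.likely_failure_mode, "inspect manually")
    if forecast.days_to_failure < 3:
        return f"URGENT {forecast.asset_id}: {action} within 24 hours"
    return f"{forecast.asset_id}: {action} ({forecast.days_to_failure} days of margin)"


if __name__ == "__main__":
    history = [0.8, 0.8, 0.9, 0.8, 0.9, 1.1, 1.4, 1.9, 2.4, 3.1]
    print(prescribe(predict(history)))
```

The value lies in the separation of the two steps: the predictive step estimates when an asset is likely to fail, while the prescriptive step turns that estimate into a concrete, prioritised instruction for the engineer.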

Control and cyber security 

Generative AI, while offering substantial benefits, also poses significant risks in the absence of effective governance. Without stringent regulations, there’s a higher likelihood of negative outcomes such as privacy breaches, biased decision-making and intellectual property infringements.

PwC argues that the regulatory solutions that already exist aren't expansive enough and that lawmakers will have to work to make generative AI safer for all. As of writing, PwC states that existing regulation, including the General Data Protection Regulation and equal employment opportunity laws, already applies to aspects of the technology.

Other laws, like the EU's proposed AI Act, will need to be introduced to address generative AI specifically. In addition, new laws will be needed to close the regulatory gaps revealed by generative AI use cases.

Governments and regulatory bodies are now in the process of formulating guidelines and regulations to govern the use of AI in manufacturing. In the meantime, companies must introduce internal policies and procedures to address the risks associated with generative AI, along with staying up to date with regulations to ensure compliance. 

That's before mentioning the cyber security threats. As well as being used internally by manufacturers, generative AI is often used by hackers to break into manufacturing systems. According to research analysis conducted by Darktrace, there has been a staggering 135 per cent rise in social engineering attacks driven by generative AI.

Cybercriminals are exploiting these AI-powered tools to hack passwords, expose sensitive data and deceive users across various platforms. With manufacturers vulnerable to attacks, especially those that rely on outdated machinery, it’s essential for plant managers to put sufficient processes in place to address cybersecurity concerns. 

Like the evolution of traditional technologies, the adoption of generative AI presents huge opportunities for efficiency and innovation, while offering the potential to address some of the key challenges we currently face, like skills shortages, costs and higher customer expectations. In effect, it's replacing and augmenting existing activities.

At IBM, we believe that introducing effective governance to your AI rollout can help to reduce many risks and potential impacts. We know that AI can introduce issues including hallucinations, where a model produces plausible but inaccurate outputs; drift, where model accuracy degrades as production data diverges from the training data; bias, where outputs are distorted by biases in the training data or model; and adversarial AI, where bad actors manipulate the output of AI by tweaking input data.

Where trust in AI models is compromised, it can throttle investment, adoption and reliance on them for their intended purpose, turning AI's potential into liability and sunk costs.
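Of those risks, drift is the most straightforward to monitor in code. The sketch below is purely illustrative and not part of any IBM product: it compares a production feature against its training-time distribution using a population stability index (PSI), with made-up data and a rule-of-thumb threshold, all of which are assumptions.

```python
# Minimal, illustrative drift check. Not part of any vendor product:
# the feature, data and threshold below are assumptions for demonstration.

import numpy as np


def population_stability_index(train: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """PSI between a training-time sample and a live (production) sample.

    Bin edges are taken from the training data, so live values outside the
    training range simply fall out of the histogram; a production check
    would add overflow bins.
    """
    edges = np.histogram_bin_edges(train, bins=bins)
    train_pct = np.histogram(train, bins=edges)[0] / len(train)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    # Avoid log(0) and division by zero for empty bins
    train_pct = np.clip(train_pct, 1e-6, None)
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - train_pct) * np.log(live_pct / train_pct)))


if __name__ == "__main__":
    rng = np.random.default_rng(seed=0)
    # Assumed feature: spindle temperature at training time vs. in production
    training_feature = rng.normal(loc=50.0, scale=5.0, size=5_000)
    production_feature = rng.normal(loc=54.0, scale=6.0, size=5_000)

    psi = population_stability_index(training_feature, production_feature)
    # Common rule of thumb: PSI above 0.2 suggests drift worth investigating
    verdict = "investigate drift" if psi > 0.2 else "stable"
    print(f"PSI = {psi:.3f} -> {verdict}")
```

In practice, a check like this would run continuously against live plant data and feed a governance dashboard, but the principle is the same: compare what the model was trained on with what it now sees.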

IBM's framework for securing generative AI has three levels: securing the data, the model development and the usage. The watsonx.governance product offers a solution to govern generative AI models built on any platform and deployed in the cloud or on premises.

Shared strategies 

Research from Capgemini has revealed that 55 per cent of manufacturers are actively exploring generative AI’s potential, with another 45 per cent having moved to pilot projects.

However, I believe that collaboration is the key to companies fully realising the potential of generative AI, whether that means developing new relationships or continuing pre-existing partnerships.

Writing in Forbes, Sylvain Duranton, the head of BCG X, reinforced that message, saying that “companies should actively curate ecosystems of internal developers, external vendors, researchers and consultants to successfully navigate GenAI’s uncertainties.”

Advanced Engineering, the UK's annual gathering of engineering and manufacturing professionals, is the ideal setting for those wanting to meet trusted partners and learn more about generative AI.

The technology's high-risk, high-reward nature requires companies to come together to overcome the challenges and seize the opportunities posed by the evolving technology. A host of manufacturing, engineering and cyber security firms will be in attendance, where exhibitors and visitors alike can learn more. To confirm your attendance at the Advanced Engineering trade show, held at the NEC in Birmingham on October 30 and 31, register for a visitor pass today.

