
A WUN project is investigating the impacts of artificial intelligence (AI) on workers in the manufacturing sector. Like many industries, manufacturing is increasingly shaped by AI innovations: tasks that once provided stable income to a wide swathe of the population are now handed over to computerised and mechanised technologies.
While these efficiencies are a boon to the industry itself, they have the potential to make once-crucial employees irrelevant.
These ethical concerns are the focus of a project that brings together the University of Sheffield and early-career researchers from the University of Auckland and the University of York to examine these issues and devise potential solutions.
“We talk about safety and sustainability but the human side of the process—how AI is going to impact ethics and equity—is often missing. We really don’t think about it systematically,” says Mohammad Zandi, Professor of Chemical Engineering at the University of Sheffield. He leads the project, titled Empowering the Manufacturing Workforce for the Future: Democratisation of Digitalisation and Ethical Integration of AI.
To remedy that, the WUN project team aims to assess the impacts of AI on workers’ rights and the democratisation of digital skills, so that workers who stand to be displaced can advance along with the technology that might otherwise render them obsolete.
Their ultimate goal is to develop a framework for implementing AI in manufacturing that carefully considers how biases and discrimination created by the technology will affect human workers. It will draw on perspectives from a variety of stakeholders, from ethicists and academic observers to the industry leaders who will spearhead these changes. These insights could then be built into future AI systems at the design stage.
The researchers found fewer than 300 academic papers analysing these issues, underscoring the critical nature of their work: despite the immediate risks to workers, the topic remains under-discussed.
At their first meeting in early 2024, the team began the challenging work of assessing the problems inherent in ethical applications of AI and how the technology might best be implemented. Among their conclusions: AI should be introduced on a small scale to gauge its impacts before being deployed across an organisation’s operations, and it needs to be supervised and scrutinised at all stages to mitigate potential problems.
“AI may be smarter than us, but it is not wiser,” Zandi says.
“The framework, as a way of navigating the AI landscape, is very important,” he adds. “It’s a mindset. It should become a part of the culture, like the way we do safety assessment.”