Trade unions push for AI legislation to safeguard workers

AI and automation are already changing the world of work

The Trades Union Congress (TUC) has proposed a new law to safeguard workers from the "risks and harms" associated with the deployment of AI systems in the workplace.

The draft legislation, titled the Artificial Intelligence (Employment and Regulation) Bill, aims to translate ethical principles into concrete rights and obligations for employers and employees alike.

The TUC is the coordinating body for 48 trade unions, representing approximately 5.5 million members.

The proposed bill comes as AI and automation are beginning to have an impact on the UK workforce, raising concerns among unions about job security, industry disruption, and the overall transformation of work itself.

The government has shown reluctance to regulate the development and rollout of AI models, fearing it could hamper innovation.

"AI is rapidly transforming our society and the world of work, yet there are no AI-related laws in place in the UK, nor any current plans to legislate soon," said Mary Towers, TUC policy officer, for employment rights.

"Urgent action is needed to ensure that people are protected from the risks and harms of AI-powered decision making in the workplace, and that everyone benefits from the opportunities associated with AI at work," she added.

The TUC is particularly concerned about "high-risk" AI decisions that significantly impact workers' lives, such as hiring, performance evaluation, and disciplinary actions.

The bill proposes a mandatory "Workplace AI Risk Assessment" (WAIRA) for employers, requiring them to thoroughly evaluate potential risks before and after deploying AI systems.

Additionally, employers would be obligated to create registers of their AI decision-making tools.

A key feature of the bill is the reversal of the burden of proof in discrimination cases. Currently, the onus falls on the employee to prove bias in AI-driven decisions. The TUC's bill would shift this burden to employers, making it easier for workers to challenge discriminatory outcomes.

The proposed legislation also stresses transparency and worker participation. It mandates clear explanations of how AI makes high-risk decisions about individuals, and grants workers the right to human review of automated decisions as well as access to information about how these systems function.

The TUC acknowledges the government's cautious approach to AI regulation. However, it argues that existing laws such as the GDPR fall short in addressing issues like algorithmic bias, data ownership, and the lack of worker involvement.

The bill represents a collaborative effort, shaped by a committee with diverse stakeholders, including technology experts, worker rights advocates, and policymakers.

Last month, the UK government released a white paper that outlined a regulatory approach to AI based on harms rather than risks. The government said a "patchwork of legal regimes" is holding firms back from using AI to its full potential, causing confusion and both financial and administrative burdens.

Instead of a "heavy-handed" approach, the government noted that it would pursue light-touch regulation, handing responsibility to existing regulators rather than establishing a new body.

The European Union has already implemented its AI Act, the world's first legislation specifically designed to address the risks associated with AI. It includes provisions on biometric categorisation and the manipulation of human behaviour, as well as stricter rules for the introduction of generative AI.