US lawmakers introduce a bill to require algorithms to be checked for bias

2 min read

Algorithmic Accountability Act would require US tech firms to audit their algorithms before deployment

Lawmakers in the US have drafted a bill that would require technology companies to ensure that their machine learning algorithms are free of gender, race and other biases before deployment.

The bill, called the Algorithmic Accountability Act, was introduced in both the Senate and House of Representatives this week.

Representative Yvette Clarke introduced the bill in the lower house, while Senators Cory Booker and Ron Wyden did the same in the Senate.

The bill is likely to be heard first by the Senate Commerce Committee in the coming months.

If passed, the bill would ask the Federal Trade Commission (FTC) to create guidelines for assessing the "highly sensitive" automated systems. Companies would be required to evaluate whether the algorithms powering their systems are discriminatory or biased, and whether they pose a security or privacy risk to consumers.

If a company finds that an algorithm poses such a risk, it would be required to take corrective action to fix anything in the algorithm that is "inaccurate, unfair, biased or discriminatory".


The mandate would apply only to companies with annual revenues of $50 million or more, or those holding data on more than one million people or devices. Data brokers that buy and sell consumer data would also come under the new law.

According to Senator Ron Wyden, the bill is needed because of the ever-increasing involvement of computer algorithms in the daily lives of people.

Wyden said that instead of abolishing bias, these algorithms often depend on biased data or assumptions that can reinforce prejudice.

Recently, a number of technology companies have faced scrutiny over their use of automated systems that decide which users will see what content, such as particular job or housing advertisements.

Earlier this year, Massachusetts Institute of Technology researchers tested Amazon's facial recognition platform, Rekognition, and found that it was less accurate at identifying women and people with darker skin. Separately, an experimental Amazon recruiting tool reportedly favoured male job applicants over female ones.

And last month, the US Department of Housing and Urban Development sued Facebook over allegations that the social media giant allowed housing ads on its platform to be targeted by gender and race.

However, some industry groups have criticised the proposed law.

"To hold algorithms to a higher standard than human decisions implies that automated decisions are inherently less trustworthy or more dangerous than human ones, which is not the case," Daniel Castro, a spokesman for the Information Technology and Innovation Foundation, told the BBC.

According to Castro, the law would "stigmatise" artificial intelligence technology and ultimately discourage its use.
