US lawmakers introduce a bill to require algorithms to be checked for bias

Some tech companies have been challenged recently over biases in their algorithms. Image via Pixabay

Algorithmic Accountability Act would require US tech firms to audit their algorithms before deployment

Lawmakers in the US have drafted a bill that would require technology companies to ensure that their machine learning algorithms are free of gender, racial, and other biases before deployment.

The bill, called the Algorithmic Accountability Act, was introduced in both the Senate and House of Representatives this week.

Representative Yvette Clarke introduced the bill in the lower house, while Senators Cory Booker and Ron Wyden did the same in the Senate.

The bill is likely to be heard first by the Senate Commerce Committee in coming months.

If passed, the bill would direct the Federal Trade Commission (FTC) to create guidelines for assessing "highly sensitive" automated systems. Companies would be required to evaluate whether the algorithms powering their systems are discriminatory or biased, and whether they pose a security or privacy risk to consumers.

If a company finds that an algorithm poses a risk of privacy loss, it would have to take corrective action to fix anything that is "inaccurate, unfair, biased or discriminatory" in the algorithm.
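The bill leaves the details of such evaluations to the FTC, but one common fairness check that an audit might include is demographic parity: comparing how often a model produces a positive outcome for different groups. The sketch below is purely illustrative, not anything specified in the bill; the data and the metric choice are assumptions for the example.

```python
# Illustrative sketch of one simple bias check an audit might include.
# "Demographic parity difference" compares positive-outcome rates across
# groups; the loan-approval data below is hypothetical.

def demographic_parity_difference(outcomes, groups):
    """Largest gap in positive-outcome rate between any two groups.

    outcomes: list of 0/1 model decisions
    groups:   list of group labels, same length as outcomes
    """
    counts = {}  # group -> (total, positives)
    for out, grp in zip(outcomes, groups):
        total, positives = counts.get(grp, (0, 0))
        counts[grp] = (total + 1, positives + out)
    rates = [p / t for t, p in counts.values()]
    return max(rates) - min(rates)

# Hypothetical decisions: group A approved 3/4, group B approved 2/4
decisions = [1, 1, 0, 1, 1, 0, 0, 1]
group = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(f"Demographic parity gap: {demographic_parity_difference(decisions, group):.2f}")
# prints "Demographic parity gap: 0.25"
```

A gap of zero would mean both groups receive positive outcomes at the same rate; real audits would combine several such metrics, since no single number captures fairness.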

The mandate would apply only to companies with annual revenues above $50 million, or those that hold data on more than one million people or devices. Data brokers that buy and sell consumer data would also fall under the new law.

According to Senator Ron Wyden, the bill is needed because of the ever-increasing involvement of computer algorithms in the daily lives of people.

Wyden said that instead of abolishing bias, these algorithms often depend on biased data or assumptions that can reinforce prejudice.

Recently, a number of technology companies have faced scrutiny over their use of automated systems that decide which users will see what content, such as particular job or housing advertisements.

Earlier this year, Massachusetts Institute of Technology researchers tested Amazon's facial recognition platform, Rekognition, and found it was less accurate at identifying some races and genders. Separately, an Amazon recruiting system reportedly preferred male applicants over female ones.

And last month, the US Department of Housing and Urban Development sued Facebook over allegations that the social giant allows ads on its platform to be served to particular genders and races.

However, some industry groups have criticised the proposed law.

"To hold algorithms to a higher standard than human decisions implies that automated decisions are inherently less trustworthy or more dangerous than human ones, which is not the case," Daniel Castro, a spokesman for the Information Technology and Innovation Foundation, told the BBC.

According to Castro, this law will "stigmatise" artificial intelligence technology and eventually discourage its use.

