We need structured public feedback to better understand the risks, says red teamer Rumman Chowdhury
Model reached biased and harmful conclusions despite 'good' training data
Bug bounties, which pay ethical hackers to find software flaws, are common in the infosec space. Can the same approach be applied to AI?
SageMaker Clarify can discover potential bias during data preparation and after training, says AWS
Prof Noel Sharkey says existing decision algorithms are "infected with biases"