
Amber's Sources

The Register: “Harmed by a decision made by a poorly trained AI? You should be able to sue for damages, says law professor” 

https://www.theregister.com/2021/02/09/legal_fines_ai/ 

Biases in datasets carry forward into the performance of machine learning models, which are often less accurate and less effective for women or for people with darker skin, for example. In the worst case, patients could be misdiagnosed or overlooked entirely.

 

Forbes: “Is Artificial Intelligence (AI) A Threat To Humans?” 

https://www.forbes.com/sites/bernardmarr/2020/03/02/is-artificial-intelligence-ai-a-threat-to-humans/?sh=7a796de1205d 

Covers the negative current and future consequences of AI: changes to the jobs humans do and job automation; political, legal, and social ramifications; AI-enabled terrorism; social manipulation and AI bias; AI surveillance; and deepfakes.

 

Towards Data Science: “Don’t blame the AI, it’s the humans who are biased” 

https://towardsdatascience.com/dont-blame-the-ai-it-s-the-humans-who-are-biased-d01a3b876d58 


1) The data and language used to train AI systems may be biased (e.g., a model can pick up sub-patterns and discriminate against those who don't match them); 2) AI needs large datasets to be accurate, but some populations are always underrepresented; 3) while human programmers may be biased, the reasoning behind an AI's decisions is not very clear either. A small sketch of points 1 and 2 follows below.
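A minimal, hypothetical sketch of that mechanism, using synthetic data, invented group labels, and scikit-learn (nothing here comes from the article itself): a group that is underrepresented in the training set can end up with a noticeably worse error rate even when overall accuracy looks fine.

```python
# Illustration with synthetic data: a model trained on a dataset where one
# group is underrepresented tends to perform worse for that group, even though
# overall accuracy looks acceptable.  Groups, shifts, and sizes are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate 2-feature samples for one group; each group's true decision
    boundary differs slightly (via `shift`), mimicking group sub-patterns."""
    X = rng.normal(size=(n, 2)) + shift
    y = (X[:, 0] + 0.5 * X[:, 1] + shift[0] > 0).astype(int)
    return X, y

# Group A: 95% of the data.  Group B: 5% (underrepresented).
Xa, ya = make_group(9500, shift=np.array([0.0, 0.0]))
Xb, yb = make_group(500, shift=np.array([1.5, -1.0]))

X = np.vstack([Xa, Xb])
y = np.concatenate([ya, yb])
group = np.array(["A"] * len(ya) + ["B"] * len(yb))

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0, stratify=group)

model = LogisticRegression().fit(X_tr, y_tr)

for g in ["A", "B"]:
    mask = g_te == g
    acc = model.score(X_te[mask], y_te[mask])
    print(f"Group {g}: accuracy = {acc:.2%}  (n = {mask.sum()})")
# Typically, the majority group A scores noticeably higher than group B,
# because the single model is fit mostly to group A's pattern.
```

The design point is simply that a single model fit overwhelmingly to the majority group's pattern serves the minority group poorly; the same dynamic appears whenever a sub-population is scarce in the training data.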

 

https://onpassive.com/blog/ai-revolutionizing-social-media/ 

https://blog.reputationx.com/guest/ai-for-social-media 

 

BBC: “The real risks of artificial intelligence” 

https://www.bbc.com/future/article/20161110-the-real-risks-of-artificial-intelligence 

AIs could have knock-on effects that we have not prepared for, ranging from changing our relationship with doctors to changing the way our neighborhoods are policed. The real risk is that we put too much trust in the smart systems we are building. A system trained to learn which pneumonia patients had a higher risk of death inadvertently classified patients with asthma as being at lower risk.

 

KDD 2015 paper (hosted at Columbia DBMI): “Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission” 

https://people.dbmi.columbia.edu/noemie/papers/15kdd.pdf 

Study showing that the most accurate models are often unintelligible to humans, and presenting intelligible models that reach comparable accuracy for pneumonia risk and 30-day readmission; it discusses the asthma example in the BBC entry above (see the sketch below).
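The asthma finding is a confounding story: asthma patients historically received unusually aggressive care, so the recorded data show them dying less often, and a model fit purely to outcomes “learns” that asthma lowers risk. A minimal synthetic sketch of how that spurious pattern can arise (all probabilities below are invented purely for illustration):

```python
# Synthetic illustration of the pneumonia/asthma finding: asthma patients are
# routinely given more aggressive care, which lowers their observed death rate
# in the recorded data, so a purely predictive model treats asthma as
# protective.  Every probability here is made up for the illustration.
import random

random.seed(0)

def simulate_patient():
    has_asthma = random.random() < 0.15
    # Untreated risk: asthma actually RAISES the danger of pneumonia.
    base_risk = 0.20 if has_asthma else 0.10
    # Clinicians treat asthmatic pneumonia patients far more aggressively,
    # which sharply reduces the risk that ends up recorded in the data.
    aggressive_care = has_asthma or random.random() < 0.2
    realized_risk = base_risk * (0.25 if aggressive_care else 1.0)
    died = random.random() < realized_risk
    return has_asthma, died

patients = [simulate_patient() for _ in range(100_000)]

def death_rate(flag):
    outcomes = [died for asthma, died in patients if asthma == flag]
    return sum(outcomes) / len(outcomes)

print(f"observed death rate, asthma:    {death_rate(True):.3f}")
print(f"observed death rate, no asthma: {death_rate(False):.3f}")
# The asthma group shows a LOWER observed death rate, so a model trained only
# on (features -> outcome) would score asthmatics as lower-risk.
```

Because asthmatics in the simulated records die less often, a purely predictive model would score them as lower risk, which is exactly why the paper argues for models whose learned rules humans can inspect and correct.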

 

BBC: “How maths can get you locked up” 

https://www.bbc.com/news/magazine-37658374 

On the Loomis case, in which the proprietary COMPAS risk-assessment algorithm was used in sentencing 

 

Built In: “6 Dangerous Risks of Artificial Intelligence” 

https://builtin.com/artificial-intelligence/risks-of-artificial-intelligence 

On the ramifications and risks of AI, including automation-spurred job loss, privacy violations, deepfakes, algorithmic bias caused by bad data, socioeconomic inequality, and autonomous weapons 

 

Forbes: “Is Artificial Intelligence Dangerous? 6 AI Risks Everyone Should Know About” 

https://www.forbes.com/sites/bernardmarr/2018/11/19/is-artificial-intelligence-dangerous-6-ai-risks-everyone-should-know-about/?sh=64b19a862404 

Discusses discrimination, invasion of privacy and social grading, misalignment between our goals and the machine’s, autonomous weapons, and social manipulation 

 

Brookings: “Artificial intelligence primer: What is needed to maximize AI’s economic, social, and trade opportunities” 

https://www.brookings.edu/research/artificial-intelligence-primer-what-is-needed-to-maximize-ais-economic-social-and-trade-opportunities/ 

Suggests domestic and international policy agendas for maximizing AI's economic, social, and trade opportunities 

 

Robin Hauser TED Talk: https://youtu.be/eV_tx4ngVT0 

 

Kai-Fu Lee TED Talk: https://youtu.be/ajGgd9Ld-Wc
