15 posts tagged with bias and AI.
Humans are Biased, Generative AI is Even Worse
"Stable Diffusion generates images using artificial intelligence, in response to written prompts. Like many AI models, what it creates may seem plausible on its face but is actually a distortion of reality. An analysis of more than 5,000 images created with Stable Diffusion found that it takes racial and gender disparities to extremes — worse than those found in the real world." An analysis by Leonardo Nicoletti and Dina Bass for Bloomberg Technology + Equality, with striking visualizations. [more inside]
Spewing bullshit at the speed of AI
Yes, this is another ChatGPT post, but it's about creating chatbots that parrot Fox News, or perhaps the official propaganda of the Chinese government. The issue is not theoretical: at least two already exist, as reported by the NYT (gift link). [more inside]
n-text moral judgments
Should I run the blender at 3am when my family is sleeping?
Ask Delphi lets you try out a computational model for descriptive ethics, i.e., people’s moral judgments on a variety of everyday situations.
Relevant paper.
The Trick of Orthodoxy
Economics truly is a disgrace - "This is a very personal post. It is my story of the retaliation I suffered immediately after my 'economics is a disgrace' blog post went viral. The retaliation came from Heather Boushey–a recent Biden appointee to the Council of Economic Advisers and the President and CEO of Equitable Growth, where I then worked. This is not the story I wanted to be telling (or living). Writing this post is painful. I am sorry." (via; previously) [more inside]
"This is the largest dataset of its kind ever produced."
Newspaper Navigator is a project by Ben Lee (his announcement on Twitter), Innovator in Residence at the Library of Congress. It extracts visual content from more than 16 million pages of digitized, public-domain American newspapers spanning sixty years, and lets people search that visual content using machine learning techniques. Read the FAQ to learn more about how its creator tried to manage algorithmic bias. Fun search terms are offered if you're not feeling creative: national park, giraffe, blimp, hats, stunts. The dataset is publicly available, the code is available, and here's a white paper about the process of building it.
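The white paper covers the machine learning side; one common way such a "learn to search" interface works is to precompute an embedding for every extracted image, then train a tiny linear model on the fly from the examples a user marks as relevant. A hedged sketch of that idea follows (not necessarily the project's exact pipeline; embeddings.npy and the index lists are invented):

```python
# Sketch: rank a photo collection by a classifier trained on user picks.
import numpy as np
from sklearn.linear_model import LogisticRegression

embeddings = np.load("embeddings.npy")  # assumed: (n_images, dim), precomputed
positive_idx = [12, 845, 9021]          # images the user marked as relevant
# Random images stand in as negatives -- a rough but common shortcut;
# a stray relevant image among them barely moves a linear model.
negative_idx = np.random.choice(len(embeddings), size=200, replace=False)

X = np.vstack([embeddings[positive_idx], embeddings[negative_idx]])
y = np.array([1] * len(positive_idx) + [0] * len(negative_idx))

clf = LogisticRegression(max_iter=1000).fit(X, y)
scores = clf.predict_proba(embeddings)[:, 1]
print(np.argsort(-scores)[:25])  # best candidates to show the user next
```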
The chickenization of everything
How to Destroy Surveillance Capitalism (thread) - "Surveillance Capitalism is a real, serious, urgent problem... because it is both emblematic of monopolies (which lead to corruption, AKA conspiracies) and because the vast, nonconsensual dossiers it compiles on us can be used to compromise and neutralize opposition to the status quo."[1,2,3] [more inside]
A blind and opaque reputelligent nosedive
Data isn't just being collected from your phone. It's being used to score you. - "Operating in the shadows of the online marketplace, specialized tech companies you've likely never heard of are tapping vast troves of our personal data to generate secret 'surveillance scores' — digital mug shots of millions of Americans — that supposedly predict our future behavior. The firms sell their scoring services to major businesses across the U.S. economy. People with low scores can suffer harsh consequences."[1] [more inside]
Ethics in AI
DeepMind researchers propose rebuilding the AI industry on a base of anticolonialism - "The researchers detailed how to build AI systems while critically examining colonialism and colonial forms of AI already in use in a preprint paper released Thursday. The paper was coauthored by DeepMind research scientists William Isaac and Shakir Mohamed and Marie-Therese Png, an Oxford doctoral student and DeepMind Ethics and Society intern who previously provided tech advice to the United Nations Secretary General's High-level Panel on Digital Cooperation." [more inside]
Bias? In my machine learning model? It's more likely than you think.
This approachable blog post from Microsoft Research summarizes this research paper by Swinger, De-Arteaga, et al. [pdf], which demonstrated that commonly used machine learning models contain harmful and offensive racial, ethnic, gender, and religious biases (e.g. associating common Black names with harmful stereotypes while associating common white names with innocuous or positive terms). These biases are harmful in themselves and may also lead to insidious discriminatory effects in software built on these models. [more inside]
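The paper's method enumerates biases automatically; a much simpler probe in the same spirit, using an off-the-shelf GloVe embedding via gensim, is to compare how strongly first names associate with pleasant versus unpleasant words. The word lists and names below are illustrative, echoing classic résumé-audit studies.

```python
# Sketch: name-sentiment association in a pretrained word embedding.
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-100")  # small pretrained embedding

pleasant = ["wonderful", "honest", "friend", "gift"]
unpleasant = ["terrible", "criminal", "poverty", "prison"]

def association(name):
    # Positive values lean pleasant, negative lean unpleasant.
    pos = sum(model.similarity(name, w) for w in pleasant) / len(pleasant)
    neg = sum(model.similarity(name, w) for w in unpleasant) / len(unpleasant)
    return pos - neg

for name in ["emily", "greg", "lakisha", "jamal"]:
    if name in model:  # GloVe is lowercase; some names may be missing
        print(name, round(association(name), 3))
```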
On subjective data, why datasets should expire, & data sabotage
A Dataset is a Worldview: a slightly expanded version of a talk given by Hannah Davis at the Library of Congress in September 2019.
How to design AI that eliminates disability bias
How to design AI that eliminates disability bias (Financial Times, Twitter link in case of paywall issues) — "As AI is introduced into gadgets and services, stories of algorithmic discrimination have exposed the tendency of machine learning to magnify the prejudices that skew human decision-making against women and ethnic minorities, which machines were supposed to avoid. Equally rife, but less discussed, are AI's repercussions for those with disabilities." [more inside]
Algorithms define our lives
Fairness and Bias Reduction in Machine Learning
As artificial intelligence begins to drive many important decisions (e.g. loans, college admissions, bail), the problem of biased AI has become increasingly prominent (previously, previously, previously). Recently researchers, including at Google and Microsoft, have started taking the problem of fairness seriously. [more inside]
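Two of the metrics this fairness work keeps returning to are easy to state concretely: demographic parity (do groups receive positive decisions at the same rate?) and equal opportunity (do qualified members of each group get positive decisions at the same rate?). A minimal sketch with invented arrays:

```python
# Sketch: demographic parity and equal opportunity gaps between groups.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])   # actual outcomes
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])   # model decisions
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

def positive_rate(mask):
    return y_pred[mask].mean()

def true_positive_rate(mask):
    return y_pred[mask & (y_true == 1)].mean()

a, b = group == "a", group == "b"
print("demographic parity gap:", abs(positive_rate(a) - positive_rate(b)))
print("equal opportunity gap:", abs(true_positive_rate(a) - true_positive_rate(b)))
```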
Through a Glass, Dark Enlightenment
The World's Largest Hedge Fund Is Building an Algorithmic Model of Its Founder's Brain - "Mr. Dalio has the highest stratum score at Bridgewater, and the firm has told employees he has one of the highest in the world. Likewise, Bridgewater's software judges Mr. Dalio the firm's most 'believable' employee in matters such as investing and leadership, which means his opinions carry more weight. Mr. Dalio is always in search of new data with which to measure his staff. He once raised the idea of using headbands to track people's brain waves, according to one former employee. The idea wasn't adopted." [more inside]
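Dalio has publicly described the core mechanism as "believability-weighted decision making": each person's view counts in proportion to a believability score for the topic at hand. Reduced to its simplest arithmetic (all numbers below are invented), it is just a weighted average:

```python
# Sketch: a believability-weighted average of opinions.
def believability_weighted(votes, scores):
    """votes: each person's opinion in [0, 1]; scores: their weights."""
    return sum(v * s for v, s in zip(votes, scores)) / sum(scores)

votes  = [0.9, 0.2, 0.4]   # e.g., confidence that a decision is right
scores = [10.0, 1.5, 3.0]  # the founder's high score dominates the outcome
print(believability_weighted(votes, scores))  # ~0.72, pulled toward 0.9
```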
Auditing Algorithms and Algorithmic Auditing
How big data increases inequality and threatens democracy - "A former academic mathematician and ex-hedge fund quant exposes flaws in how information is used to assess everything from creditworthiness to policing tactics, with results that cause damage both financially and to the fabric of society. Programmed biases and a lack of feedback are among the concerns behind the clever and apt title of Cathy O'Neil's book: Weapons of Math Destruction." [more inside]