Artificial Intelligence Punishing Low-Income Americans


With AI, the risks of misapplied policies, coding errors, bias, or cruelty are affecting masses of people ranging from several thousand to millions at a time, writes Kevin De Liban.

Food bank trunk in Houston, 2017. (USDA Photo by Lance Cheung, Flickr, Public domain)

By Kevin De Liban
Inequality.org

The billions of dollars poured into artificial intelligence (AI) haven’t delivered on the technology’s promised revolutions, such as better medical treatment, advances in scientific research, or increased worker productivity.

So, the AI hype train purveys the underwhelming: slightly smarter phones, text-prompted graphics, and quicker report-writing (if the AI hasn’t made things up). Meanwhile, there’s a dark underside to the technology that goes unmentioned by AI’s carnival barkers — the widespread harm that AI presently causes low-income people. 

AI and related technologies are used by governments, employers, landlords, banks, educators, and law enforcement to wrongly cut in-home caregiving services for disabled people, accuse unemployed workers of fraud, deny people housing, employment, or credit, take kids from loving parents and put them in foster care, intensify domestic violence and sexual abuse or harassment, label and mistreat middle- and high-school kids as likely dropouts or criminals, and falsely accuse Black and brown people of crimes.

All told, 92 million low-income people in the United States — those with incomes less than 200 percent of the federal poverty line — have some key aspect of life decided by AI, according to a new report by TechTonic Justice. This shift towards AI decision-making carries risks not present in the human-centered methods it replaces and defies all existing accountability mechanisms.

First, AI expands the scale of risk far beyond individual decision-makers. Sure, humans can make mistakes or be biased. But their reach is limited to the people they directly make decisions about. In the case of landlords, direct supervisors, or government caseworkers, that might top out at a few hundred people.

But with AI, the risks of misapplied policies, coding errors, bias, or cruelty are centralized through the system and applied to masses of people ranging from several thousand to millions at a time.

Second, the use of AI and the reasons for its decisions are not easily known by the people subject to them. Government agencies and businesses often have no obligation to affirmatively disclose that they are using AI. And even if they do, they might not divulge the key information needed to understand how the systems work.

Third, the supposed sophistication of AI lends a cloak of rationality to policy decisions that are hostile to low-income people. This paves the way for further implementation of bad policy for these communities.

Benefit cuts, such as those to in-home care services that I fought against for disabled people, are masked as objective determinations of need. Or workplace management and surveillance systems that undermine employee stability and safety pass as tools to maximize productivity. To invoke the proverb, AI wolves use sheep avatars.

The scale, opacity, and costuming of AI make harmful decisions difficult to fight on an individual level. How can you prove that AI was wrong if you don’t even know that it is being used or how it works?

And, even if you do, will it matter when the AI’s decision is backed up by claims of statistical sophistication and validity, no matter how dubious?

Artificial Intelligence & AI & Machine Learning. (Mike MacKenzie, Image via www.vpnsrus.com, CC BY 2.0)

On a broader level, existing accountability mechanisms don’t rein in harmful AI. AI-related scandals in public benefit systems haven’t turned into political liabilities for the governors in charge of failing Medicaid or Unemployment Insurance systems in Texas and Florida, for example. And the agency officials directly implementing such systems are often protected by the elected officials whose agendas they are executing.

Nor does the market discipline wayward AI uses against low-income people. One major developer of eligibility systems for state Medicaid programs has secured $6 billion in contracts even though its systems have failed in similar ways in multiple states.

Likewise, a large data broker had no problem winning contracts with the federal government even after a security breach divulged the personal information of nearly 150 million Americans.

Existing laws similarly fall short. Without any meaningful AI-specific legislation, people must apply existing legal claims to the technology. Usually based on anti-discrimination laws or procedural requirements like getting adequate explanations for decisions, these claims are often available only after the harm has happened and offer limited relief.

While such lawsuits have had some success, they alone are not the answer. After all, lawsuits are expensive, low-income people can’t afford attorneys, and quality, no-cost representation available through legal aid programs may not be able to meet the demand.

Right now, unaccountable AI systems make unchallengeable decisions about low-income people at unfathomable scales. Federal policymakers won’t make things better.

The Trump administration quickly rescinded protective AI guidance that President Joe Biden issued. And, with Trump and Congress favoring industry interests, short-term legislative fixes are unlikely.

Still, that doesn’t mean all hope is lost. Community-based resistance has long fueled social change. With additional support from philanthropy and civil society, low-income communities and their advocates can better resist the immediate harms and build political power needed to achieve long-term protection against the ravages of AI.

Organizations like mine, TechTonic Justice, will empower these frontline communities and advocates with battle-tested strategies that incorporate litigation, organizing, public education, narrative advocacy, and other dimensions of change-making.

In the end, fighting from the ground up is our best hope to take AI-related injustice down.

Kevin De Liban is the founder and president of TechTonic Justice, a new organization fighting alongside low-income people left behind by artificial intelligence.

This article is from Inequality.org.

Views expressed in this article may or may not reflect those of Consortium News.

11 comments for “Artificial Intelligence Punishing Low-Income Americans”

  1. Jolun Bexa
    February 4, 2025 at 15:21

Notice how The System works …. no mention of AI being used to go after the biggest tax cheats in the nation. No mention of AI being used to target America’s most dangerous employers by analyzing accident and injury trends. No sign that AI is being used to go after defense contractors who can never make the price they bid.

In some ways, the decisions that lead to the harm to lower income people do not begin with an AI. It is human beings who are picking the targets for the AIs …. the unemployed, the disabled, etc. AI is not being used to rein in the rich and powerful. That’s a decision that is made by humans, and they would never trust an AI to make such a decision. After all, an AI might logically conclude that to achieve its goal of a better society, it is the rich and powerful who most need oversight and controls.

    Just be careful to understand the real source of the problem, and don’t blame the AIs for something they do not control.

  2. Dfnslblty
    February 4, 2025 at 08:42

    The challenge is NOT a.i.

    The challenge is avarice and fear.

    Elected representatives — from potus to judge to school boards — are each fearful.
    CEOs, programmers, and their customers are all fearful and greedy.

    Remove the money in equitable fashion,

    No one needs a billion dollar$.

  3. February 4, 2025 at 07:45

    AI should be called AII because it is artificially influenced intelligence.

  4. Ian Perkins
    February 3, 2025 at 23:44

Contrary to the article’s first sentence, artificial intelligence has made possible many advances in scientific research, some of them, such as predicting protein structures from amino acid sequences, profound.

    • Jolun Bexa
      February 4, 2025 at 16:20

      America, from left to right, is anti-science. So, it is no surprise that they don’t pay any attention to advancements in science.

      America believes many myths. Of course, a nation that believes in myths is not what one would normally associate with a nation that is ‘smart’ or ‘scientific’. It goes further, when you realize that one of the myths that Americans believe is that Americans are ‘smart’. We think we are on top of the world, but competitive test scores show America usually struggles to get into the top 20, if that.

America is the nation that attacks teachers. We so under-fund the schools that our underpaid teachers are at times buying supplies from their own meal money. America diverts public education money to private ‘charter’ schools. Now, apparently we are going to divert public education money to private religious schools that dispute even very basic science. America cuts the budgets of every government agency besides defense, security and law enforcement. This means that research grants get cut as well. And the capitalist universities increasingly shift their emphasis towards departments and professors who get the DOD grants. American news sites replace ‘science’ sections that told of breakthroughs with ‘technology’ sections that promote gadgets from corporate partners and report on tech stock prices and mergers.

America is not smart. It is no surprise that an American writer is unaware of recent advancements in science. No surprise that they create that impression by being dismissive in tone about subjects they know little of or are completely ignorant about.

  5. bardamu
    February 3, 2025 at 17:16

    AI is a next step in opacity and anonymity.

    • Jolun Bexa
      February 4, 2025 at 16:50

      Maybe. Or partially true. True in that the excuse of AI is used by humans to do evil under the cover of AI.

But, it’s also early days. Two things to consider. 1) So far, this is stupid AI that relies on mass computing power to approximate intelligence. But, that may not always be the case. The Chinese just shocked all the smart people on Wall Street by programming an AI that required less ‘training’ on fewer expensive computer chips. If that trend continues onwards, eventually AI might become a tool that everyone can use and which helps level the playing field a bit. Just a thought.

      And 2) this is AI designed by greedy stupid people. The perfect setup to a science fiction story where the AI takes control away from the greedy stupid people. Greedy stupid people of course only worry constantly about profit, so they don’t see it coming. Since I live in a world run by greedy stupid people, is it at least possible that AI could do better? Maybe all us ordinary folk end up celebrating AI Day to mark the anniversary of the day the AI took control and sent Elon Musk off on a one-way trip to Mars?

      Just thoughts from a peaceful, anti-libertarian computer geek. A lot is possible.

  6. mary-lou
    February 3, 2025 at 15:41

    the main threat comes from modelers and their models – Ai (and with it most other [bio]tech overlords, hello Bill Gates, how art thou?) works with a computerised model of our real-life world and is apt to make mistakes. the wet dream of the techies: implementing technological changes without checking the consequences, without liabilities and full of plausible deniability. hurray :-((

  7. Carolyn L Zaremba
    February 3, 2025 at 13:34

    As though Biden didn’t favor industry interests. Get real.

    • joe Ell the 3rd
      February 3, 2025 at 16:58

      left right day and night could make an endless list
      Why defend ? I didn’t see red .
      its the shortlist you prefer .
      What the illusion does is allow one to do what the other can’t ? — the friendly foe ?
      Yet I understand you .
      The picturesque beauty of the battlefield with bodies stacked like cordwood ?
      A dual Mozart masterpiece on one canvas ? painted with the same brush .
      A hologram from different angles of view .
      I am no artist but from a close view I see the strokes as if painting it myself .
      Vicarious learning gives you the painters hand .
      If you could what view would you choose to paint with words ?
      Would you push hard the brush or lightly stroke .
      Pet it like a dog ? or whip it like a mule ?

  8. February 3, 2025 at 12:43

    It is hardly a surprise that a vicious tyranny that murders millions and demolishes entire societies to carry out its piracy and pillage is bent on exterminating citizens who cost, rather than making, its oligarchs money.

    Nor is it a surprise that this self-same tyranny, which constructs elaborate lies to rationalize its crimes rather than to come right out and say what it is doing, is carrying this extermination out under the rubric of some complicated, imaginary, contradictory management theory.

    What is surprising is that millions still believe its lies and think the people it savages have it coming.
