Algorithms are simplifying humanity
Source: John Light


For better or worse, America is in the midst of a silent revolution. Many of the decisions that humans once made are being handed over to mathematical formulas. With the correct algorithms, the thinking goes, computers can drive cars better than human drivers, trade stocks better than Wall Street traders and deliver to us the news we want to read better than newspaper publishers.

But with this handoff of responsibility comes the possibility that we are entrusting key decisions to flawed and biased formulas. In a recent book that was longlisted for the National Book Award, Cathy O’Neil, a data scientist, blogger and former hedge-fund quant, details a number of flawed algorithms to which we have given incredible power — she calls them “Weapons of Math Destruction.” We have entrusted these WMDs to make important, potentially life-altering decisions, yet in many cases they embed human biases around race and class; in other cases, they don’t function at all.
Among other examples, O’Neil examines a “value-added” model New York City used to decide which teachers to fire, even though, she writes, the algorithm was useless, functioning essentially as a random number generator and arbitrarily ending careers. She looks at recidivism-scoring models that judges use in sentencing, which turned out to be racially biased. And she looks at how algorithms are contributing to American partisanship, allowing political operatives to target voters with information that plays to their existing biases and fears.
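
The mechanism behind examples like these, where bias enters a model through seemingly neutral inputs, can be sketched in a few lines of code. The snippet below is purely illustrative, built on synthetic data with hypothetical feature names; it is not any system O’Neil audited. A scoring model is never shown the protected attribute, yet it reproduces historically biased labels through a correlated proxy feature.

```python
# Illustrative sketch (synthetic data, hypothetical features): a model
# trained on biased historical labels discriminates by proxy, even
# though the protected attribute is never used as an input.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)                       # protected attribute (hidden from the model)
neighborhood = (group + (rng.random(n) < 0.2)) % 2  # proxy feature, ~80% correlated with group
skill = rng.normal(0, 1, n)                         # genuinely predictive signal

# Historical outcomes reflect past human bias against group 1.
p_favorable = 1 / (1 + np.exp(-(skill - 1.5 * group)))
label = (rng.random(n) < p_favorable).astype(int)

# Train only on the "neutral" features: skill and neighborhood.
X = np.column_stack([skill, neighborhood])
model = LogisticRegression().fit(X, label)

# The model's scores still split cleanly along the protected attribute.
scores = model.predict_proba(X)[:, 1]
print("mean score, group 0:", round(scores[group == 0].mean(), 3))
print("mean score, group 1:", round(scores[group == 1].mean(), 3))
```

Dropping the protected column does not remove the bias; it only hides it, which is exactly the kind of failure that auditing a model’s outcomes, as ORCAA proposes to do, is meant to surface.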

A veteran of Occupy Wall Street, O’Neil has now founded a company, O’Neil Risk Consulting & Algorithmic Auditing, or ORCAA (complete with whale logo), to “audit” these privately designed secret algorithms — she calls them “black boxes” — that hold public power. “It’s the age of the algorithm,” the company’s website warns, “and we have arrived unprepared.” She hopes to put her mathematical knowledge to use for the public good, taking apart algorithms that affect people’s lives to see if they are actually as fair as their designers claim. It’s a role she hopes a government agency might someday take on, but in the meantime it falls to crusading data scientists like O’Neil.

We spoke with her recently about how America should confront these WMDs. This interview has been edited for length and clarity.

John Light: What do you hope people take away from reading this book? What did you set out to do when you were writing it?

Cathy O’Neil: The most important goal I had in mind was to convince people to stop blindly trusting algorithms and assuming that they are inherently fair and objective. I wanted to prevent people from giving them too much power. I see that as a pattern. I wanted that to come to an end as soon as possible.

Light: So, for example, what’s the difference between an algorithm that makes racist decisions and a person at the head of a powerful institution or in a place of power making those racist decisions?

O’Neil: I would argue that one of the major problems with our blind trust in algorithms is that we can propagate discriminatory patterns without acknowledging any kind of intent.

Let’s look at it this way: We keep the responsibility at arm’s length. So, if you had an actual person saying, “Here’s my decision. And I made a decision and it was a systematically made decision,” then they would clearly be on the hook for how they made those decisions.

But because we have an algorithm that does it, we can point to the algorithm and say, “We’re trusting this algorithm, and therefore this algorithm is responsible for its own fairness.” It’s kind of a way of distancing ourselves from the question of responsibility.

Light: Most Americans may not even understand what algorithms are, and if they do, they can’t really take them apart. As a mathematician, you are part of this class of people who can access these questions. I’m curious why you think you were the one to write this book, and also why we haven’t been having this discussion more broadly before now?

O’Neil: That second question is a hard one to answer, and it gets back to your first question: Why is it me? And it’s simply me because I realized that the entire society should be having this conversation. But I guess most people were not in a place where they could see it as clearly as I could. And the reason I could see it so clearly is that I know how models are built, because I build them myself, so I know that I’m embedding my values into every single algorithm I create and projecting my agenda onto those algorithms. And it’s very explicit for me. I should also mention that a lot of the people I’ve worked with over the years don’t think about it that way.

Light: What do you mean? They don’t think about it what way?

O’Neil: I’m not quite sure, but there’s less of a connection for a lot of people between the technical decisions we make and the ethical ramifications we are responsible for.

For whatever reason — maybe it’s something about my childhood — I have never separated the technical from the ethical. When I think about whether I want to take a job, I don’t just think about whether it’s technically interesting, although I do consider that. I also consider whether it’s good for the world.

So much of our society as a whole is gearing us to maximize our salary or bonus. Basically, we just think in terms of money. Or, if not money, then, if you’re in academia, it’s prestige. It’s a different kind of currency. And there’s this unmeasured dimension of all jobs, which is whether it’s improving the world. [laughs] That’s kind of a nerdy way of saying it, but it’s how I think about it.

The training one receives when one becomes a technician, like a data scientist — we get trained in mathematics or computer science or statistics — is entirely separated from any discussion of ethics. Most people don’t make any association in their minds between what they do and ethics. They think they’ve somehow moved past questions of morality or values or ethics, and that’s something I’ve never believed to be true.

Especially from my experience as a quant in a hedge fund — I naively went in there thinking that I would be making the market more efficient and then was like, oh my God, I’m part of this terrible system that is blowing up the world’s economy, and I don’t want to be a part of that.

And then, when I went into data science, I was working, for the most part, with people who didn’t have that experience in finance and who felt that because they were doing cool new technology stuff they must be doing good for the world. In fact, it’s kind of a dream. It’s a standard thing you hear from startup people — that their product is somehow improving the world. And if you follow the reasoning, you will get somewhere, and I’ll tell you where: to a description of what happens to the winners under the system they’re building.

Every system using data separates humanity into winners and losers. They’ll talk about the winners. It’s very consistent. An insurance company might say, “Tell us more about yourself so your premiums can go down.” When they say that, they’re addressing the winners, not the losers. Or a peer-to-peer lender site might say, “Tell us more about yourself so your interest rate will go down.” Again, they’re addressing the winners of that system. And that’s what we do when we work in Silicon Valley tech startups: We think about who’s going to benefit from this. That’s almost the only thing we think about.

Light: Silicon Valley hasn’t had a moment of reckoning, but the financial world has: In 2008, its work caused the global economy to fall apart. This was during the time that you were working there. Do you think that, since then, Wall Street, and the financial industry generally, has become more aware of its power and more careful about how that power is used?

O’Neil: I think what’s happened is that the general public has become much more aware of the destructive power of Wall Street. The disconnect I was experiencing was that people hated Wall Street, but they loved tech. That was true when I started this book; I think it’s less true now — it’s one of the reasons the book had such a great reception. People are starting to be very skeptical of the Facebook algorithm and all kinds of data surveillance. I mean, Snowden wasn’t a thing when I started writing this book, but people felt like they were friends with Google, and they believed in the “Don’t be evil” thing that Google said. They trusted Google more than they trusted the government, and I never understood that. For one thing, the NSA buys data from private companies, so the private companies are the source of all this stuff.

The point is that there are two issues. One issue is the public perception — and I thought, look, the public trusts big data way too much. We’ve learned our lesson with finance because they made a huge goddamn explosion that almost shut down the world. But the thing I realized is that there might never be an explosion on the scale of the financial crisis happening with big data. There might never be that moment when everyone says, “Oh my God, big data is awful.”

The reason there might never be that moment is that by construction, the world of big data is siloed and segmented and segregated so that successful people, like myself — technologists, well-educated white people, for the most part — benefit from big data, and it’s the people on the other side of the economic spectrum, especially people of color, who suffer from it. They suffer from it individually, at different times, at different moments. They never get a clear explanation of what actually happened to them because all these scores are secret and sometimes they don’t even know they’re being scored. So there will be no explosion where the world sits up and notices, like there was in finance. It’s a silent failure. It affects people in quiet ways that they might not even notice themselves. So how could they organize and resist?

Light: Right, and that makes it harder to create change. Everybody in Washington, DC, could stand up and say they were against the financial crisis. But with big data, it would be much harder to really get politicians involved in putting something like the Consumer Financial Protection Bureau in place, right?

O’Neil: Well, absolutely. We don’t have the smoking gun, for the most part, and that was the hardest part about writing this book, because my editor, quite rightly, wanted me to show as much evidence of harm as possible, but it’s really hard to come by. When people are not given an option by some secret scoring system, it’s very hard to complain, so they often don’t even know that they’ve been victimized.

If you think about the examples of the recidivism scores or predictive policing, it’s very, very indirect. Again, I don’t think anybody’s ever notified that they were sentenced to an extra two years because their recidivism score had been high, or notified that this beat cop happened to be in their neighborhood checking people’s pockets for pot because of a predictive policing algorithm. That’s just not how it works. So evidence of harm is hard to come by. It’s exactly what you just said: It will be hard to create the Consumer Financial Protection Bureau around this crisis.

Light: If the moderator of one of the presidential debates were to read your book, what would you hope that it would inspire him or her to ask?

O’Neil: Can I ask a loaded question?

Light: Sure.

O’Neil: You could ask, “Can you comment on the widespread use of algorithms to replace human resources within companies, and what do you think the consequences will be?”

Light: Occupy Wall Street didn’t even exist in 2008 when Obama was elected, and this year, as we’re electing his successor, I think Occupy’s ideas have been a big part of the dialogue. You were a member of Occupy Wall Street — have you been struck by the ways in which they’ve contributed to the national conversation?

O’Neil: I still am a member. I facilitate a group every Sunday at Columbia. I know, for one thing, that I could not have written this book without my experiences at Occupy. Occupy provided me a lens through which to see systemic discrimination. The national conversation around white entitlement, around institutionalized racism, the Black Lives Matter movement, I think, came about in large part because of the widening and broadening of our understanding of inequality. That conversation was begun by Occupy. And it certainly affected me very deeply: Because of my experience in Occupy — and this goes right back to what I was saying before — instead of asking the question, “Who will benefit from this system I’m implementing with the data?” I started to ask the question, “What will happen to the most vulnerable?” Or “Who is going to lose under this system? How will this affect the worst-off person?” Which is a very different question from “How does this improve certain people’s lives?”

