Joshua B. Hoe interviews Chelsea Barabas of MIT about criminal justice monitoring, surveillance and algorithms

Full Episode

My Guest – Chelsea Barabas

Chelsea Barabas is a Ph.D. candidate in Media Arts and Sciences at MIT and her work focuses on examining the spread of algorithmic decision-making tools in the US criminal legal system. She’s a technology fellow at the Carr Center for Human Rights Policy at the Harvard Kennedy School of Government. Formerly she was a research scientist for the AI Ethics and Governance initiative at the MIT Media Lab. 


Notes from Episode 89: Chelsea Barabas

On August 24th, Nation Outside and Safe & Just Michigan will be hosting a webinar called “Business Beyond Barriers: Formerly Incarcerated CEOs Pave the Way.”

It will include:

Marcus Bullock, CEO of Flikshop

Richard Bronson, CEO of 70 Million Jobs

Gabriel Blauer, Founding partner of Catastrophic Creations

The panel will be moderated by Troy Rienstra and convened by Tarra Simmons

You can Register HERE

Chelsea has been involved in a lot of published projects on this topic, including:

Just a few months ago, she was a co-author of this sign-on letter against the tech-to-prison pipeline.

Chelsea Barabas. 2020. Beyond Bias: Re-imagining the Terms of “Ethical AI” in Criminal Law, 12 Geo. J. L. Mod. Critical Race Persp. 2.

Chelsea Barabas et al. 2020. “Studying up: reorienting the study of algorithmic fairness around issues of power.” In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAccT ’20). Association for Computing Machinery, New York, NY, USA, 167–176.

Chelsea Barabas et al. 2018. Interventions Over Predictions: Reframing the Ethical Debate for Actuarial Risk Assessment. In Proceedings of the 2018 Conference on Fairness, Accountability, and Transparency (FAT* ‘18). Association for Computing Machinery, New York, NY.

Chelsea Barabas. 2019. Contextualizing AI Within the History of Exploitation and Innovation in Medical Research. Invited talk at NeurIPS.

There was also this sign-on letter about pretrial risk assessments.

The ProPublica story Chelsea refers to during our interview is “How Machines Learn To Be Racist.”

You can find more of her work on her website.

Transcript

A full PDF transcript of episode 89 of the Decarceration Nation Podcast

Joshua B. Hoe

0:04

Hello and welcome to Episode 89 of the Decarceration Nation Podcast, a podcast about radically reimagining America’s criminal justice system. I’m Josh Hoe, and among other things, I’m formerly incarcerated, a freelance writer, a criminal justice reform advocate, and the author of the book Writing Your Own Best Story: Addiction and Living Hope.

We’ll get to my interview with Chelsea Barabas, a research scientist at the Massachusetts Institute of Technology, in just a minute, but first, the news. 

I don’t have too much to share this week. On the 24th we’ll be hosting a webinar bringing together a panel of formerly incarcerated CEOs in hopes that we can share a different kind of story about formerly incarcerated people while sharing their experiences with other formerly incarcerated brothers and sisters and allies, and allow folks to ask questions about what worked for them when they came back from their own incarceration. I will get everyone more details on this very important webinar as soon as I have more to share.

And just a few days ago, I celebrated a birthday. If you missed it, that’s okay. Getting older is not my favorite part of life, but if you want to know, every year I eat at the same restaurant for my birthday, and I was able to follow that protocol again this year while remaining socially distant. 

Okay, let’s get to my interview with Chelsea Barabas. 

Chelsea Barabas is a Ph.D. candidate in Media Arts and Sciences at MIT and her work focuses on examining the spread of algorithmic decision-making tools in the US criminal legal system. She’s also a Technology Fellow at the Carr Center for Human Rights Policy at the Harvard Kennedy School of Government. Formerly she was a research scientist for the AI Ethics and Governance initiative at the MIT Media Lab. 

I probably should also mention that our fathers are really old friends, but oddly enough, until today, we’ve never actually talked to each other. So on that note, welcome to the Decarceration Nation Podcast Chelsea.

Chelsea Barabas

Thanks, Josh, for having me.

Joshua B. Hoe

2:03

I always ask some version of the same first question: how did you get from wherever you started to working on algorithms as they relate to the criminal punishment system?

Chelsea Barabas

2:13

Sure. I first got interested in civil rights issues related to algorithmic decision-making back when I was doing my master’s degree from 2013 to 2015. And at that time, I was examining the role that algorithmic recommendation systems were playing in the US labor market. I was particularly interested in how the tech sector was using data to address a growing concern about the lack of diversity for their technical staff, their engineers, and things like that. So I did some work looking at different data-driven platforms that were designed to help tech companies identify and recruit more diverse talent into their workforce. And after that work, I was offered an opportunity to join this AI Ethics and Governance Initiative. That was in 2017, right in the wake of a pretty big exposé that was carried out by ProPublica in 2016. They investigated the presence of racial bias in pretrial risk assessments. This investigative report basically showed that, first off, these algorithms are not very accurate; they were hovering around the 60% accuracy level. But that inaccuracy is disproportionately borne by people of color. So if you were a black person being evaluated by these risk assessments, you were twice as likely to be misidentified as high risk as a white person was. And if you were a white person, in contrast, you were twice as likely to be misidentified as low risk by these tools. And so this sparked a big wave of concern, both in the broader public, but also particularly in the academic community. Because this is one of the early examples of a high-stakes environment where people’s lives were going to be greatly impacted by a tool that exhibited really serious racial biases in the way it developed its predictions. So it was at that point in time that I was asked to do some initial investigations specifically into how these algorithms were impacting the culture of the courtroom. How did judges integrate this information into their decision-making, or how did they resist the recommendations that were given to them by algorithms? And that was the beginning of my work in this area. Since then it’s really gone in directions I never could have predicted when I started. My work has generally evolved away from working closely with governments and major nonprofits, which is where I started a lot of my work, and moved more and more towards working with community organizations who are levelling much more fundamental critiques of these systems and also trying to develop alternatives, alternative proposals for how we might use data to actually support much more transformational change within the criminal punishment system.
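
To make the kind of disparity described above concrete, here is a minimal, hypothetical sketch of a group-level error-rate comparison, the sort of analysis behind claims like “twice as likely to be misidentified as high risk.” The function, field names, and toy records below are invented for illustration; this is not ProPublica’s data or code.

```python
# Illustrative sketch (hypothetical data): compare false positive and false
# negative rates of a risk tool across groups.

def error_rates(records):
    """records: list of dicts with keys 'group', 'predicted_high_risk', 'rearrested'."""
    by_group = {}
    for r in records:
        g = by_group.setdefault(r["group"], {"fp": 0, "neg": 0, "fn": 0, "pos": 0})
        if r["rearrested"]:
            g["pos"] += 1
            if not r["predicted_high_risk"]:
                g["fn"] += 1  # labeled low risk, but was rearrested
        else:
            g["neg"] += 1
            if r["predicted_high_risk"]:
                g["fp"] += 1  # labeled high risk, but was not rearrested
    return {
        group: {
            "false_positive_rate": g["fp"] / g["neg"] if g["neg"] else None,
            "false_negative_rate": g["fn"] / g["pos"] if g["pos"] else None,
        }
        for group, g in by_group.items()
    }

# Hypothetical toy records, arranged to show the asymmetry described above:
# one group absorbs the false positives, the other the false negatives.
toy = [
    {"group": "black", "predicted_high_risk": True,  "rearrested": False},
    {"group": "black", "predicted_high_risk": True,  "rearrested": False},
    {"group": "black", "predicted_high_risk": False, "rearrested": False},
    {"group": "black", "predicted_high_risk": True,  "rearrested": True},
    {"group": "white", "predicted_high_risk": False, "rearrested": True},
    {"group": "white", "predicted_high_risk": False, "rearrested": True},
    {"group": "white", "predicted_high_risk": True,  "rearrested": True},
    {"group": "white", "predicted_high_risk": False, "rearrested": False},
]
print(error_rates(toy))
```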

Joshua B. Hoe

5:40

I’m somewhat deep in the weeds on this stuff, and you’re really deep in the weeds on this stuff. So let’s start from a basic place for the listeners who aren’t necessarily where we are. How would you define AI and algorithmic systems and data-driven decision making?

Chelsea Barabas

6:00

Yeah, that’s a good question. I think this question of what exactly AI is, is a really good one, because a lot of things get branded as AI; it’s a really diverse group of things. I like to say that AI is actually a brand first and foremost. Under this rubric of AI, you see people talking about things like algorithmic risk assessments, which for the most part are based on statistical methods that have been around for decades now. They aren’t particularly advanced in what they do, but in the current moment we’re living in, they have been rebranded as AI. But there are also other technologies that five years ago weren’t possible, technologies that basically thrive off of massive amounts of data that are generated in our current digitally-mediated world. These are technologies like facial recognition algorithms that are able to identify anybody based on their face. It also includes predictive policing algorithms, which are based on trying to identify “crime hotspots” that can be used to inform where police deploy their officers. I think by and large what we mean when we talk about these things are basically tools that use statistics, or computer science methods such as machine learning, to process and identify trends and patterns in increasingly large amounts of data.

Joshua B. Hoe

7:49

So your work seems to start with the notion that there’s a rapid proliferation of these processes and systems. Can you talk about the ways that you see this happening rapidly? And are there more systems that you’re particularly concerned about – I assume you probably already named them – but just to check?

Chelsea Barabas

8:13

Sure. For the last few years, I’ve been pretty deeply engaged with the conversation around pretrial risk assessments, which are basically assessments that claim to be able to identify and predict a person’s likelihood of what people within the government call pretrial failure. That includes things like failure to appear in court, being re-arrested for another crime, or being re-arrested for a violent crime. I’ve been really interested in risk assessments because they have been framed as a bipartisan vehicle for pretrial reform, and until the ProPublica exposé that came out in 2016, they were considered this promising new reform that would help us decrease our jail populations. I’m happy to talk more about the limitations of that if that’s useful; there are limitations in addition to the racial biases that I mentioned earlier. What we’ve seen pan out within courtrooms is that these things have more or less not had a major impact on the way judges make decisions. I can go into more depth about why that is, if that’s of interest. Other tools that I find really interesting/terrifying are things like the use of biometric data. This could be things like somebody’s face, somebody’s voice, somebody’s walk, their gait. Those are all things that are recorded in some sort of way. So for example, when people in prison call their families on the outside, that is increasingly predicated on the incarcerated individual agreeing, or being coerced to agree, to their voice being recorded by a company like Securus, which is then able to take that audio recording and do all kinds of analytics on it, to identify who the speaker is, as well as make other types of claims that I think of as kind of pseudoscience: whether or not this person is lying, or whether or not this person is likely exhibiting aggression, or likely to commit a crime in the future. So there’s a whole new wave of research looking at capturing data from our bodies, and the way we look or walk or talk, which is then being used to analyze and criminalize people in various ways. So that’s another really worrying set of AI technologies.

Joshua B. Hoe

11:03

As a follow-up, the Securus program that you’re talking about seems to have another problem with it, this idea of voice printing that they use to identify not just the voice of the person in prison talking, but also the person on the other end of the phone. Is that so?

Joshua B. Hoe

11:26

That seems fairly problematic to me, too, that it’s not just the agreement of the person in prison, but it’s also the agreement of the person who is on the phone with the person in prison. And they end up getting in some ways surveilled as well, correct?

Chelsea Barabas

11:45

Yes, absolutely. And I think that’s one of these major trends we’ve seen with more and more digital technologies being introduced to carceral spaces: a massive net-widening of surveillance, not just of the individual who’s directly impacted, but of the broader family and community that supports them. So that’s true. It’s certainly true for incarcerated individuals. It’s also true for individuals who have been subjected to new forms of what people call “e-carceration”, whereby they might not be in a brick-and-mortar jail or prison, but they’re required to wear a device that is surveilling or monitoring them almost constantly, whether that be an ankle monitor or a mobile phone. A major concern with these types of technologies is that they’re not just collecting information about the individual who’s being targeted, but also about anybody that they live with or who is in their environment. Yeah, it opens up a potential for much, much broader surveillance and criminalization.

Joshua B. Hoe

12:55

I saw a speech you gave in which you quoted Einstein talking about the importance of how we formulate first questions. And so I thought of a few foundational questions that seemed, at least in the research I’ve read, to undergird a lot of your work, and one of them seems to be: is the whole notion of trying to study at-risk populations problematic?

Chelsea Barabas

13:22

I think so, absolutely. I think it’s problematic for a number of reasons. One of those reasons is that often when people try to measure or evaluate risk, they center it squarely within an individual and think about risk in highly individualized terms. So they’ll talk about things like somebody’s anti-social tendencies or their aggressive pathologies, or things that basically amount to either some sort of internalized anti-social behavior or abnormal ways of thinking, while at the same time completely ignoring any structural or environmental factors which might be leading to the outcomes that are being evaluated. And so it basically strips away context, any context outside just an individual’s basic thought patterns and belief systems, when evaluating risk. I think that does a lot of work in erasing the violence of structural racism, poverty, mental health issues, and things like that. So that’s one thing. The other big thing, though, is that a lot of the problems that we’re facing today with an ever-growing prison industrial complex are about the system itself growing and expanding, in spite of the fact that we haven’t seen increases in crime or public safety threats over the last several decades. But rather than use data to hold a mirror up to the system and ask, okay, how is the system broken, and how is the system harming individuals and communities, risk-based discourse is always used to shift the focus and attention back onto the individuals who are bearing the burden of the carceral state. And I think doing that deflects blame back onto the people who are the victims and survivors of the system, as opposed to the system itself.


Joshua B. Hoe

15:48

Another kind of foundational question is the whole notion of crime statistics. Too problematic for us to be using?

Chelsea Barabas

15:56

Yes. I don’t think crime data that is developed or collected by our criminal legal system is completely useless. But I think it has to be radically reframed to be interpreted as the byproducts of the policies and the decisions of the people who are in power and make decisions in the carceral state. So a great example of this is the work we’ve done with pretrial reform. What we’ve seen is a massive increase in the number of people who are detained before their trial date. We have more people in jail today than the entire incarcerated population of individuals who had been convicted of crimes in 1980. That’s a massive increase, and that’s the byproduct of a cultural shift within the courtrooms. So it seems weird to me that the solution to that problem is to use this data to try to model and predict the behaviors of individuals who are charged with crimes, as opposed to trying to model and predict the behaviors of the judges who have been making these decisions over the last 20 to 30 years. But this is a pattern that we fall into all the time: we call this data criminal history data, as opposed to the history of criminalization, which is what it really is. The carceral state has left behind a lot of data crumbs that we could use to basically chart out the gross racial disparities in the way that police and judges and prosecutors deal with individuals who are impacted by the system. It’s a major missed opportunity, because what we end up doing is just using this data to continue to perpetuate the same justifications for the same bad behavior by those powerful actors.

Joshua B. Hoe

18:05

So, to put it a different way, can I use data that are produced in these kinds of systems to accurately predict if people might commit a crime? And if I can, does that conclusion say more about the society and bias that made the prediction, or about the person the data suggests would be likely to commit the crime?

Chelsea Barabas

18:31

I don’t think there’s any data from the system itself that can help you predict who is going to commit a crime. And the reason for that is that this data is incredibly partial. We know that there are some crimes that are pursued much more than other crimes, for example, white-collar crime, which arguably has a much larger negative impact on society at large than, say, petty street crimes or extremely petty crimes like driving on a suspended license. But what we see is a gross over-representation of those crimes, basically crimes of poverty, right? So when we’re talking about crime prediction, we really have to account for that major skew in the data. And recognize that there’s both an issue of under-representation of certain types of crimes, and an issue of over-representation in terms of the over-policing and the over-criminalization, especially of racial minorities and low socioeconomic status individuals. And I think that those issues of under- and over-representation are irreconcilable; we can’t fix them with the data that we’ve got. And so I think it’s impossible to predict crime. You can predict arrest. And I think if you want to predict arrest, that will only be useful if you contextualize it within an understanding of the way that police operate in racially discriminatory ways. And take that category of arrest not as an indication of criminality on the part of the individual arrested, but more as an indication of the choices and decisions that police officers make when they’re patrolling the streets.
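
To make the arrest-versus-crime distinction concrete, here is a minimal simulation sketch. The numbers, neighborhood names, and function are all hypothetical; the point is only that a model learning from arrest records will score a heavily policed group as “riskier” even when the underlying behavior is identical.

```python
# Hypothetical simulation: two neighborhoods with the SAME underlying offense
# rate but different policing intensity produce very different "risk scores"
# for a model that simply learns the historical arrest rate per group.

import random

random.seed(0)
OFFENSE_RATE = 0.10                      # identical true behavior in both places
ARREST_GIVEN_OFFENSE = {"heavily_policed": 0.80, "lightly_policed": 0.20}

def simulate(neighborhood, n=10_000):
    arrests = 0
    for _ in range(n):
        offended = random.random() < OFFENSE_RATE
        if offended and random.random() < ARREST_GIVEN_OFFENSE[neighborhood]:
            arrests += 1
    return arrests / n                   # what an "arrest predictor" would learn

for hood in ARREST_GIVEN_OFFENSE:
    print(hood, round(simulate(hood), 3))
# The heavily policed neighborhood looks roughly 4x "riskier" even though
# behavior is identical; the score reflects enforcement, not criminality.
```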

Joshua B. Hoe

20:36

That’s an interesting distinction. How would you suggest that we tell that story differently? Because you know, the way people hear it is definitely that people are more likely to commit a crime. I like this notion of more likely to be arrested. Can you flesh that out a little bit more?

Chelsea Barabas

20:54

Maybe you can ask me a little bit more specifically what you’d like me to flesh out?

Joshua B. Hoe

21:04

Well, it seems that one of the problems that we really are facing is that people tend to vote on fear of crime, and they tend to change policies based on fear of crime, and based on statistics, and based on predictions. And so it seems to me that the way you just reconfigured that suggests that there’s a different problem, that it’s not necessarily the risk of an individual harming society, as much as problems with the way that society manages its problems. Am I getting that correct? 

Chelsea Barabas

21:45

Yeah, and I think one conversation that needs to be had a lot more is around unpacking the emotional stakes of these conversations. I’ve done some work interviewing judges to really try to understand how they go about making decisions when they’re setting bail for folks. And the biggest thing that comes up when we start to talk about the steady rise in jail populations is that judges have a fear of releasing somebody into the community who’s going to go out and commit some horrific crime and make headlines, where a journalist says, “Judge So-and-So released this person yesterday, when we could have prevented this. Now they’ve been charged with another crime.”

Joshua B. Hoe

22:38

So we saw that all over New York after the bail reform. 

Chelsea Barabas

22:41

Right. There’s this deep fear of violence and danger, and there’s emotional baggage around that. When you really look at the numbers on the rate of violent crime, you start to realize, wow, it’s an exceedingly small group when you look at the overall data around arrests. And by exceedingly small, I mean between 1% and 4% of rearrests, particularly around pretrial. So my follow-on question to these guys, these judges, is often: Okay, well, how do you gauge how much of a potential threat somebody is? And by and large the main way that people gauge that is by how many other times they’ve been arrested in the past. You know, what’s their history in the system? Which is a terrible proxy for things like danger. That’s much more of an indicator of what kind of neighborhood somebody grew up in and how the police interact with that neighborhood. And so, I think there’s a lot of important work to be done to unpack what it means to evaluate a public safety threat, and what the most effective policies are for keeping our community safe. If that is top of mind here, then what are the interventions or things that we could do to actually reduce gun violence or reduce drunk driving or other things that pose physical harm to people? Because right now, the default unspoken strategy is just prevention through detention, which down the line actually creates the environment for [detention] by tearing up the thread of communities; and detention is much more of a threat to the well-being of communities than the actual threat of violence perpetrated by an individual.

Joshua B. Hoe

24:54

Let’s stick with pretrial for just a second. This is a Hobson’s choice that I have fallen into myself a few times; I’m not sure that I have a great way out, but I’m hoping maybe you can help me. I know it rarely works out this way. But let’s assume you have the choice between judicial discretion alone (which has been shown to be quite biased) or the results of an algorithm. I understand that in most cases it’s not an either/or, it’s both. But, assuming that sometimes judges alone can theoretically be worse than the result of the algorithm, how do we get out of that box? Or can I even conceive of it as a box in the first place?

Chelsea Barabas

25:40

So you’re kind of talking about this whole man-versus-machine question of who can predict things more accurately?

Joshua B. Hoe

25:46

Well, who can be less biased? I don’t think either of them is particularly predictive. I think that in these situations, we have judges who are often very discriminatory against black and brown people. We have algorithms that are also often biased against black and brown people. And frequently we’re left in a situation where we have to choose one for some reason. I understand that the better option is not to have the box. 

Chelsea Barabas

26:13

Right. So one of my favorite jokes about algorithms is: if you asked an algorithm what it would do if it saw that its friends had all jumped off a bridge, would it jump off the bridge too, the algorithm would always say yes, right? It’s kind of like one of those things dads or somebody will say to us: oh, if all your friends did this stupid thing, would you do it too? And the answer with algorithms is most certainly yes. Because all an algorithm is good at is identifying historical patterns and trends based on things that have happened in the past. So it seems crazy to me that we would hope to somehow transcend human biases by developing some sort of fancy algorithmic model that’s based on data that was generated from biased human decisions; there’s no way that an algorithm can transcend those biases. Because that’s not what they’re good at. And so what I think these things actually end up doing is providing this veneer of science and objectivity that helps us justify the way we’ve been making decisions forever, instead of transforming the way we make those decisions.

Joshua B. Hoe

27:26

That’s interesting. Let me dig just a little bit deeper into that, because I know during the First Step battle one of the arguments that I made pretty frequently was that there are ways to scrub data, and obviously, you’re calling into question the possibility that data can ever be disaggregated from its biases. What I think people have said in the past, people who are experts in your field, and I’m obviously not one, is that you could make the assumptions of the datasets public and allow people to continue to test and adjust for bias within the datasets and the results. Is that totally impossible?

Chelsea Barabas

28:18

Sure. I think that for some subset of issues, you can start to de-bias data, and that can help to a certain extent. But I think the bigger issue, particularly in this context, is the way that we define the outcomes that we care about, and the way that those outcomes mask the violence of the system itself. So sticking with this pretrial example. There are pretty strong legal protections against pretrial detention; it’s outlined in our federal documents that there’s a strong presumption of release. The only reason somebody should be detained is if we think they’re a flight risk, or we think that they pose a public safety risk. So that’s important to keep in mind when we talk about risk assessment. It’s true that there is bias in the way that these algorithms predict their outcomes. And I use that argument, often, as the first argument against these things. But I think a more fundamental critique is to say, hey, these risk assessments do not predict what matters in this situation. So, for example, when we’re talking about failure or flight risk, what people actually mean when they’re looking at the data is any time in which somebody has a default in court: they didn’t make it on time to court, or didn’t make their first hearing, or something like that. That’s really different from the initial intent of the law, which was really to identify people who are going to flee the country or flee the county and abscond from justice. Even more egregious, though, is when we look at public safety risk. For most of the risk assessments that are out there today, what they mean by public safety risk is any arrest in between the time you were first arrested and your trial. You could be picked up for driving on a suspended license; that’s hardly a public safety risk. But the reason that risk assessments define their outcomes in that way is because it’s extremely challenging to actually predict violence in the future, because it’s so rare. But rather than abandon the endeavor of predicting violence, what they’ve done is massively expand the category of what they define as a public safety risk to mean anything, anything for which somebody gets arrested. So that’s not something that you can fix by de-biasing the data. That’s something that you have to fix by radically rethinking the categories that you’re using to define your goals and your outcomes down the line.
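
To illustrate why the outcome definition, and not just the data, is the problem, here is a hypothetical sketch of how a “pretrial failure” label is often constructed versus something closer to what the legal standard actually targets. The field names and functions below are invented for illustration and are not drawn from any specific tool.

```python
# Hypothetical sketch: the breadth of the outcome label, not just bias in the
# data, shapes what a pretrial risk assessment actually predicts.

def broad_failure_label(case):
    """The kind of label many deployed tools use: ANY missed date or ANY rearrest."""
    return case["missed_any_court_date"] or case["rearrested_for_anything"]

def narrow_failure_label(case):
    """Something closer to the legal intent: fleeing the jurisdiction or violence."""
    return case["fled_jurisdiction"] or case["rearrested_for_violent_charge"]

case = {
    "missed_any_court_date": True,           # e.g., no ride to the courthouse
    "rearrested_for_anything": False,
    "fled_jurisdiction": False,
    "rearrested_for_violent_charge": False,
}
print(broad_failure_label(case))   # True  -> counted as a "pretrial failure"
print(narrow_failure_label(case))  # False -> not a flight or safety risk
```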

Joshua B. Hoe

31:09

So are there any instances where you want to try to do both, change the idea of what you’re looking at? And also try to de-bias data? I mean, should you default to trying to de-bias data as a general rule?

Chelsea Barabas

31:22

Sure. I think it’s a good first step. I think it just can’t be the last step. For things like consumer technologies, thinking about bias in this way is really important. But it can’t be where we stop. And I think that a really interesting example of this is when we think about facial recognition algorithms. One of my colleagues at MIT, Joy Buolamwini, did a study a few years ago where she discovered that facial recognition algorithms perform significantly worse on dark-skinned female faces. And the reason for that was that dark-skinned women are not very well represented within the datasets that are used to train facial recognition algorithms. So that’s an issue in and of itself, especially when you’re thinking about things like self-driving cars. We don’t want the cars that are out on the street to be unable to identify a dark-skinned female human as well as they could identify a white male. However, when this report first came out, there were a number of thinkers of color who said, Hey, listen, we know that bias is an issue. But we don’t want to be included in these datasets. Because although we know this stuff can be used for things like opening our smartphones, how it’s more likely going to be used and show up in our lives is through law enforcement technology and technology that’s going to be used to criminalize us and oppress us in various ways. And we don’t want that technology to be more accurate. We don’t want that technology to be more efficient. We want that technology gone. So, we have to really think about the use cases and the contexts in which these things are being deployed. And that requires us to think beyond just issues of bias and accuracy, but also to think about the impact.
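
The kind of audit described here can be sketched as reporting accuracy per demographic subgroup rather than one overall number. The function and toy results below are hypothetical placeholders, not Buolamwini’s benchmark or data.

```python
# Bare-bones sketch of a subgroup audit: compare a classifier's accuracy across
# intersectional groups instead of reporting a single aggregate figure.

from collections import defaultdict

def accuracy_by_group(results):
    """results: list of (group, predicted_label, true_label) tuples."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, pred, truth in results:
        total[group] += 1
        correct[group] += int(pred == truth)
    return {g: correct[g] / total[g] for g in total}

# Hypothetical toy results for a gender classifier, just to show the shape.
toy_results = [
    ("darker-skinned female", "female", "female"),
    ("darker-skinned female", "male", "female"),   # misclassified
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
]
print(accuracy_by_group(toy_results))
```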

Joshua B. Hoe

33:17

I think that’s a really good bridge to some of the things I wanted to talk about next. The first one, I always call the Minority Report problem. I’m not really sure that people realize that there are people locked up right now in this country who’ve never committed a crime, [locked up] based mostly on assessment of risk. There are laws called civil commitment laws that allow this. Do you see things that worry you about predictions being used prior to crime to segregate or incarcerate?

Chelsea Barabas

33:52

Absolutely. I actually just collaboratively wrote a letter of concern about a publication that was coming out a few months ago, in which the researchers claimed to be able to identify the criminal propensities of an individual just based on that individual’s face. And this is part of a growing new wave of research that uses biometric information, which is basically information we can’t change about ourselves; we can’t change the way our face looks very easily, we can’t change the way our voice sounds very easily. And people are using these kinds of intimate details about us to make claims such as, oh, this is somebody who has a high propensity for crime, regardless of the actions this person has taken. Within law enforcement particularly, we’re moving towards a “prevention-based model” of trying to predict things before they happen, and using that as a justification for intervention in various ways. Another really interesting example comes from Chicago, where researchers developed an algorithm to try to identify youth who might be involved in gun violence down the line, or in the near future. Now, involved could mean either a victim or a perpetrator of gun violence. And that’s important because it’s key to understanding the nature of gun violence, or just violence in general: people who perpetrate violence are often people who have been victims of violence themselves before that happens. So this binary between victim and perpetrator is a really blurry line. So this algorithm was developed, and it was framed as a public health intervention: let’s identify these at-risk youth, and then we can provide them with supports and services to try to prevent their involvement in these things down the line. What these things ended up being used for, though, was actually as a lever to threaten youth with more severe charges and more severe penalties whenever they became involved in the system in some sort of way. So when a police officer arrested somebody, like a kid on a street corner, if they were able to run their name through a database and show that they were high-risk using this algorithm, they could then take that to a judge and be like, hey, this is a public safety threat, this kid is not somebody who should be out on the streets. And that was used to really heighten the stakes for the individual trying to navigate the system. So as we move towards these prediction-oriented approaches, what I think we will see happening is very similar to what we see happening when things like mandatory minimums get instituted: that becomes a point of leverage for prosecutors to drive plea bargains, drive submission, because the stakes are so high for the individual.

Joshua B. Hoe

37:13

It seems very similar, just off the top of my head, to kind of like bad magic and phrenology. I feel like I could probably predict a lot by the zip code someone lives in. You know if they’ve seen violence or something like that. Is there any reason to believe that any of that is even remotely accurate? It seems like strange magic to me. 

Chelsea Barabas

37:42

It might be accurate that if you’re from Beverly Hills 90210, you’re going to be less likely to be involved in gun violence than if you’re from downtown LA. 

Joshua B. Hoe

37:54

That’s kind of what I meant, is that, in a sense, you don’t need to have a fancy algorithm to figure out that some people are more likely to see violence than others, you know? 

Chelsea Barabas

38:06

And I think you’re touching on a key thing, which is: what real purpose do these algorithms serve? I think when you really look at this, they serve as a means of justifying or legitimizing the kind of common sense notions that law enforcement uses to make decisions. So we’re not necessarily revealing anything new. But what we’re doing is enshrining this common sense knowledge under this brand of science. And when we’re doing that, we’re also porting over all these harmful interpretations of what that common sense knowledge means.

Joshua B. Hoe

38:50

When I had James Kilgore on the podcast, we were discussing the dystopian possibility that combinations of these technologies, active cameras, facial recognition, open access to driver’s licenses, and social media information, could create functional Exclusion Zones. And I feel like now, after becoming more familiar with what’s happening with Project Greenlight in Detroit, it is actually happening. Do you have any thoughts about this whole notion of these combinations of technology creating functional Exclusion Zones, or areas where people can no longer be in public space? 

Chelsea Barabas

39:33

Yeah, that’s a big one. So I think what we’re really seeing with this pandemic that we’re all living in now is a massive normalization of surveillance in mainstream circles, as well as a heightened level of surveillance for people impacted by the system. And part of that surveillance, a key function of it, is to continue to regulate the mobility, particularly, of marginalized populations and populations who are quote-unquote “risky” in a variety of ways. So I know that part of this concern around COVID and stuff like that is that we all have different levels of risk of exposure to the virus, and that could end up translating into different levels of access and mobility in our cities, which could be regulated through the growing network of CCTV cameras, license plate readers, monitors in public transportation, and things like that, to see who’s moving in and out of different spaces. Yeah, James is thinking much more about this stuff than me, so I’m not sure I’m answering your question. 

Joshua B. Hoe

41:13

You’re doing fine. I think even if it isn’t the system’s job to exclude people, people could start dissociating from those social spaces, particularly because they’re constantly under surveillance. So they may serve as functional Exclusion Zones, even if they’re not official Exclusion Zones, in some ways. I don’t know how familiar you are with Project Greenlight, but a lot of stores agreed to put cameras in the lights and things around them. And then the Detroit Police Department uses that and a lot of other things so that they can do what they call “virtual police presence”. So I feel like there’s a lot happening here that hasn’t been really thoroughly thought through. Or at least not in critical spaces, unfortunately. And these things move very quickly. So, sometimes asking sort of sci-fi questions may seem a little crazy, but I think there is some actual practical application that’s happening.

Chelsea Barabas

42:29

Yeah, absolutely. And I know that James also draws a lot of parallels between the rise of electronic monitoring and the apartheid state in South Africa, thinking about how there’s a growing number of people who are now being released on electronic monitors with specific geo-fenced areas being the only places they’re allowed to go, or with well-defined Exclusion Zones where they’re not allowed to go anymore, and how this creates this dynamic where people who’ve been criminalized now have their mobility circumscribed to a very small area, and they’re not allowed to go into more mainstream spaces. And that’s regulated through these technologies. What we’re seeing is that electronic monitoring is being used further and further upstream in the system for people who have not even been convicted of crimes, folks who’ve just had a brush with the system. And so that’s definitely a trend to keep watching. And I think that those types of parallels with things like apartheid systems are really apt and things to be taken seriously. It’s not just sci-fi. It’s also kind of a throwback to the past in some ways.
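
As a rough technical illustration of how geofencing of this kind can work, here is a minimal sketch that checks a GPS fix against circular exclusion zones. The coordinates, radius, and zone name are made up; real monitoring systems are proprietary and considerably more complex.

```python
# Hypothetical sketch: flag when a GPS location falls inside a defined
# exclusion zone, using great-circle distance to each zone's center.

import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def check_exclusion_zones(lat, lon, zones):
    """zones: list of dicts with 'name', 'lat', 'lon', 'radius_m'. Returns violated zones."""
    return [z["name"] for z in zones
            if haversine_m(lat, lon, z["lat"], z["lon"]) <= z["radius_m"]]

# Made-up zone and GPS fix, roughly in downtown Detroit, for illustration only.
zones = [{"name": "downtown exclusion zone", "lat": 42.3314, "lon": -83.0458,
          "radius_m": 800}]
print(check_exclusion_zones(42.3320, -83.0450, zones))  # inside the radius -> flagged
```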

Joshua B. Hoe

43:44

You also document a lot of growing connections between corporations, police, prosecutors, and prisons around these systems. Do you want to talk about that a little bit? 

Chelsea Barabas

44:00

Sure, I’m trying to think of what specifically to talk about. There’s always been a for-profit sector for this work. I think one of the things that has been most interesting for me to see is the branding of for-profit companies in the carceral state. A great example: I’ve been tracking the rise of different electronic monitoring companies and the way they’re branding themselves during the pandemic. I would say the phrase that I hear most often when I’m watching the webinars and different things that these companies provide is that they promise to enable law enforcement, probation, and parole officers to, quote-unquote, “do more with less”. So in this time, when it might be a lot more challenging to physically keep tabs on an individual because of the health risks that poses, these technologies and companies are offering an opportunity for agents of the state to keep closer tabs on people through these new products that they’re developing. And they frame these products as a more humane approach to social control and containment. So I think that’s really interesting. It’s that branding of a humanitarian discourse for new forms of social control and containment. I don’t know if you have other more specific things you’d like to talk about with that?

Joshua B. Hoe

45:34

Oh, I just saw in some of your work that you’d been talking about that. I know part of your project is trying to address these problems within your own discipline. Have you seen much success with that? How are you approaching getting people in tech who are developing this stuff to see the problems with the systems that they’re involved in creating?

Chelsea Barabas

45:54

It’s a pretty big challenge to engage with computer scientists about the social impacts of their work. I think, generally speaking, computer scientists have seen themselves as sort of neutral, apolitical actors who, full of good intentions, are trying to solve problems in objective, neutral ways. And I think that framing in general is really harmful, because there is no such thing as neutrality. And when you say what you’re doing is neutral, it by default means you’re supporting the status quo. And so a big part of my work has been trying to help computer scientists see the political nature of their work and become more comfortable with engaging with the political stakes of their work. For a lot of computer science work, part of the appeal is that it’s framed as having applications across a variety of different contexts and domains. So I can develop an algorithm for identifying somebody’s face. And that can be used to, you know, monitor engagement in a classroom setting, like online with Zoom. It can also be used in Project Greenlight to identify who’s coming and going from a business; it could be used in a variety of different contexts. And what I want to do is help researchers become more comfortable with resisting and challenging the use and abuse of their technologies in contexts where that’s harmful. Because I think that academics and the builders of these tools actually have a lot more power than they give themselves credit for, in terms of dictating the social norms about how this technology gets used. But to date, they’re pretty reticent to engage in those kinds of conversations.

Joshua B. Hoe

47:51

And have you had some success though, with converting some people to believe that they have more power?

Chelsea Barabas

47:57

Sure. I think there’s a growing community of people who are committed to not only resisting this stuff, but actually having real skin in the game. I’m inspired by the tech workers in Silicon Valley who are speaking out against companies like Google and Amazon and Microsoft, and their entanglements with the Department of Defense. And I’m really inspired by the ones who quit their jobs or are fired for speaking out. I think that’s also true within academia. I think the unspoken, and perhaps even unacknowledged, personal or internal reason some people are scared to speak out is that there could be repercussions for doing so; you could be seen as a troublemaker, you could be seen as somebody who isn’t worth the hassle. And within academia, I think that’s the fear: if you end up taking a proactive stance in these kinds of conversations, you’re going to be branded as a troublemaker or somebody who is misusing their platform to be a social justice warrior. But I think more and more we’re seeing that happen, and we’re seeing people challenge that framing and say, no, this isn’t about being some sort of self-righteous liberal, this is about us taking on the responsibility of the work, because we’ve been given a lot of power in the world that we live in. And I guess a specific example from my own work, as I mentioned earlier, is this open letter that I collaboratively wrote with four other people, specifically challenging research around predicting, quote-unquote, “criminality” using somebody’s face. We had over 2,500 academics sign on to the letter, and the publication was actually removed from the publication pipeline as a result of that work. So I’m really encouraged by that kind of mass positive response from within the academic community, and I hope for more of that.

Joshua B. Hoe

50:13

Earlier in the discussion, you gave hope for ways that maybe data can be used, or these tools can be used for good in different ways, where we could reconceive how we look at all of this stuff. So this is the Decarceration Nation podcast, and this year I’m asking people if they have any interesting ideas for substantially reducing incarceration. What are ways that we could conceive of data better, or change data, or use different data in different ways, in ways that could maybe even be decarceral? Do you have any ideas in that area?

Chelsea Barabas

50:51

Sure. I think we could use data to build accountability for people who really have power, to build accountability for judges and prosecutors. I’ve been really inspired in Massachusetts by the work of the Massachusetts Bail Fund; they’ve launched a campaign to do what they call court watching, which is basically training lay people to go into courtrooms, observe the proceedings of the court, and actually collect data about the way those proceedings went down. So they might collect data about whether or not, for example, a judge inquired about an individual’s ability to pay before setting bail, which is something that they’re supposed to do but often don’t. They collect data about the bail amounts that judges set. And what they’ve done is take that data and turn it into pretty fast accountability campaigns for judges and prosecutors. So in Suffolk County, which is where Boston is, when Rachael Rollins was elected, the Massachusetts Bail Fund did a First 100 Days campaign where they tried to hold her office accountable to some specific commitments that she made on the campaign trail about specific types of things she was going to decline to prosecute, things like larceny and stuff like that. So they collected that data and then they turned the data into social media campaigns that let Rachael Rollins know that, hey, people are watching, and we’re really going to hold you to the promises you made when you were on the trail, and I think that had a real impact. So yeah, those are some inspiring examples that I’d love to see more of.
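
A court-watching effort like the one described above essentially turns structured observations into per-judge metrics. Here is a minimal hypothetical sketch of that aggregation step; the observation format, field names, judge names, and values are invented for illustration and are not the Massachusetts Bail Fund’s actual data or tooling.

```python
# Hypothetical sketch: aggregate court-watching observations into simple
# accountability metrics for each judge.

from statistics import median

observations = [
    {"judge": "Judge A", "asked_ability_to_pay": False, "bail_amount": 5000},
    {"judge": "Judge A", "asked_ability_to_pay": True,  "bail_amount": 500},
    {"judge": "Judge B", "asked_ability_to_pay": True,  "bail_amount": 0},
]

def judge_report(rows):
    judges = {}
    for r in rows:
        judges.setdefault(r["judge"], []).append(r)
    return {
        judge: {
            "cases_observed": len(cases),
            "asked_ability_to_pay_rate": sum(c["asked_ability_to_pay"] for c in cases) / len(cases),
            "median_bail": median(c["bail_amount"] for c in cases),
        }
        for judge, cases in judges.items()
    }

print(judge_report(observations))
```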

Joshua B. Hoe

52:27

I love that example because I’ve had Rachael on the podcast. 

To conclude: I always ask the same last question. What did I mess up? What questions should I have asked but did not? 

Chelsea Barabas

I don’t think you messed up at all, I think you asked great questions!

Joshua B. Hoe

I always love that answer. But I’m sure there was something I could have done differently. My failed attempt at humility there, I guess. Thanks so much for doing this; it’s really nice to finally get to talk, and to talk about some really interesting stuff.

Chelsea Barabas

52:57

Yeah, thank you so much for having me. This was really enjoyable and I’m sure our dads are gonna be really excited.

Joshua B. Hoe

53:03

I think they both will be, since my dad was the one who emailed me that you were doing this work. Anyway, thanks. Thanks again.

Chelsea Barabas

Thank you, Josh.

Joshua B. Hoe

53:14

And now my take.

I myself was on electronic monitoring for two years. Let me tell you a few stories from my own experiences from when I was on monitoring. First, to foreground the issue, I was allowed to be out from 8:30 am until 3:00 pm, Monday through Friday, and on weekends, I was not allowed out of the house at all. Every day, in order to stay in compliance and keep the monitor working, I had to plug myself into the wall for several hours in order to recharge my ankle monitor.

I remember that right after I was first put on monitoring, my parole officer showed up and ran into me at the grocery store and later at the bookstore, just to remind me that she knew where I was at all times. When my parole officer met me at the bookstore, she came with several other parole officers in what seemed to be a show of force. But its real impact seemed to be to shame me, and mark me as different and alien when I was in social spaces. I also remember at least five times when my parole officer would call to yell at me about not being at home when I was literally at mandatory therapy sessions. She would do this despite the fact that all she had to do was look at her phone to see where I was.

Later, when it was only a few months from the finish of my parole sentence, the parole office decided to change vendors on monitoring companies. So they called me in to change out my ankle monitor. My parole officer cut off my old monitor and then put on the new monitor so tight that it physically hurt for me to walk. Obviously, I complained and asked her to loosen the strap, to which she replied: “It’s supposed to hurt. This is punishment.” I’ll never forget her saying that. I had to wear that monitor extra tight for the several months it was still on my ankle. When it was finally cut off, I had a band of indented flesh there that remained for weeks. I really will never, ever forget how uncomfortable that was.

On a regular basis during my two years with an electronic monitor, the ankle monitor would go off and I would have to stop whatever I was doing, or leave wherever I was, and go stand outside, often for as much as 10 minutes, until the monitor regained a connection with the satellite. For so many people on parole, this embarrassing experience happens every few days while they are at work, which means that maybe their coworkers didn’t all know about their criminal past before, but they sure do after they have to go out and stand in the parking lot for 10 minutes, waiting for their monitor to reconnect to the satellite. And to add insult to injury, I was charged for monitoring and am expected to pay a large amount of money for the privilege of having been through monitoring for those two years.

And let me conclude with this. To my knowledge, there is no evidence that monitoring actually increases security. In point of fact, since the system is passive, anyone who wanted to commit a new crime would simply cut off the monitor before leaving to commit that new crime. Electronic monitoring seems to me to be more of a public safety placebo than it is a meaningful protection. And the costs are massive. It would be like charging someone with a chronic illness, especially if they were poor and couldn’t afford it, thousands of dollars to give them a bunch of sugar pills. Someone always responds to these conversations with some glib statements about how things like monitoring are just deserts. But I’m more of the opinion that liberty should never be restrained or constrained without a damn good reason.

And as near as I can tell, there’s no damn good reason for electronic monitoring. I will always go along with the notion that people should be able to choose electronic monitoring if it’s their only way out of incarceration. But that doesn’t make electronic monitoring okay. This is a huge, massive surveillance system that’s incredibly costly to people who can’t afford it.

As always, you can find the show notes or leave us a comment at DecarcerationNation.com. If you want to support the podcast directly, you can do so at patreon.com/decarcerationnation; all proceeds go to running the podcast and supporting our volunteers. For those of you who prefer a one-time donation, you can now go to our website and give a one-time donation using the tab there. You can also support us in other, non-monetary ways by leaving a five-star review on iTunes or by liking us on Stitcher or Spotify. Special thanks to Andrew, who does the editing and post-production for me, and to Kate Summers, who is still running our website and helping with our Instagram and Facebook pages. Make sure to add us on Twitter, Instagram, and Facebook and share our posts across your networks. Also, thanks to my employer, Safe and Just Michigan, for helping to support the Decarceration Nation Podcast. Thanks so much for listening; see you next time!

Decarceration Nation is a podcast about radically re-imagining America’s criminal justice system. If you enjoy the podcast, we hope you will subscribe and leave a rating or review on iTunes. We will try to answer all honest questions or comments that are left on this site. We hope fans will help support Decarceration Nation by supporting us on Patreon.
