Data & Society

Lawgorithms: Everything Poverty Lawyers Need to Know About Tech, Law, and Social Justice

Episode Summary

Michele Gilman joins Professor Meredith Broussard in a conversation on enhancing the digital literacy of poverty lawyers to better advocate for social justice and the low-income communities they serve.

Episode Notes

Automated decision-making systems make decisions about our lives, and those with low socioeconomic status often bear the brunt of the harms these systems cause. Poverty Lawgorithms: A Poverty Lawyer's Guide to Fighting Automated Decision-Making Harms on Low-Income Communities is a guide by Data & Society Faculty Fellow Michele Gilman to familiarize fellow poverty and civil legal services lawyers with the ins and outs of data-centric and automated decision-making systems so that they can clearly understand the sources of the problems their clients are facing and effectively advocate on their behalf.

Episode Transcription

Michele Gilman: Hello, welcome to Databite #138, “Lawgorithms: Everything Poverty Lawyers Need to Know about Tech, Law, and Social Justice.” My name is Michele Gilman. I'm a Professor of Law at the University of Baltimore School of Law and a former Data & Society faculty fellow. I will be your host this afternoon, alongside CJ, Rigo, Eli, and Angie.

Data & Society is an independent research institute studying the social implications of data and automation. We produce original research and regularly convene multidisciplinary thinkers to challenge the power and purpose of technology in society. You can learn more about us through our website at datasociety.net

And now I'd like to offer a digital land acknowledgment. Data & Society began in Lënapehóking, a network of rivers and islands in the Atlantic Northeast now known as New York City. Today we are connected via the internet, a vast array of data servers and devices. This system sits on stolen land acquired under the logic of white settler colonial expansion. As an organization, we uplift the sovereignty of Indigenous people across the network, and commit to dismantling ongoing practices of colonialism and all its material implications on our digital world. 

Today, I have the pleasure of sharing this space with Meredith Broussard, associate professor at the Arthur L. Carter Journalism Institute of New York University and the author of Artificial Unintelligence: How Computers Misunderstand the World (MIT Press, April 2018). 

Meredith Broussard: Michele, it is such a pleasure to be here. Thank you so much. 

Michele Gilman: Thank you, Meredith, for joining me for today’s adventure. Today, we are highlighting a new report that I authored for Data & Society that focuses on all the different ways that data-centric technologies are harming low-income people. And it's designed to help lawyers who represent poor people identify these technologies and to strategize in order to combat those harms. But the report on “Poverty Lawgorithms” is not just for lawyers—it's for anyone interested in social justice. Because anyone interested in social justice needs to understand the tech-driven barriers facing low-income people. 

I am a clinical law professor, meaning I work with law students to represent low-income people in Baltimore who could not otherwise afford an attorney. As poverty lawyers, we help our clients overcome legal barriers to obtain housing, food, medical care, employment, education, and family stability. The impetus behind this report is that through my lawyering work I'm seeing my clients trapped in all sorts of data webs that they can't escape. 

Let me give two specific examples from my clinic's docket that highlight the sort of technologically driven harms I'm concerned about. Right now in the clinic we're representing a family that is facing eviction for nonpayment of rent. You won't be surprised to hear that they have fallen behind on rent because they lost their jobs and had reduced hours due to the pandemic. And, for a while, the parents in the family were each sick with COVID themselves. Trial is coming up in a few weeks and we actually have some good defenses in this case, based on the unsafe condition of the home. But—regardless of whether we win or lose—the very fact that an eviction case was filed against this family has been scraped by tenant screening companies. These companies produce reports that a majority of landlords rely on in selecting tenants. The reports give potential tenants a score, sort of like a credit score, rating a person's desirability as a tenant. They generally combine information about a person's residential history, civil and criminal case history, credit history, and the like. However, the sources of the data and the formulas these companies use to arrive at these scores are proprietary. Meanwhile, tenants usually have no knowledge they're being screened by these algorithms and they don't have opportunities to look at the data that is fed into them or to make any corrections for incomplete or erroneous data. Housing lawyers know that the data is rotten. The data scraped from public court dockets is full of mismatched records due to similar names, incomplete records that fail to show who won a case, obsolete data, and I could go on and on. And yet every day people are being denied housing based on these reports. 

Here's my second example: In another case that the clinic handled, we were representing an elderly disabled woman who relied on home healthcare aides funded through our state's Medicaid program to meet her basic needs. Her family came to us when the state reduced her hours of home healthcare assistance by half. We couldn’t figure it out: our client was getting sicker with age, not better; she needed more care, not less. We couldn't get an answer to the cut in hours until we were before an administrative law judge in a due process hearing. At that point, the expert for the state—who was a nurse—revealed the cut in hours was due to an algorithm that the state had purchased and adopted from an outside vendor. And the expert—nice as she was—couldn't answer our questions about the algorithm, the data it was fed, the factors it uses, how it weighs the factors. It's very hard to cross examine an algorithm and the state can hide behind it to disclaim responsibility for the care of its neediest citizens. 

So, these are only two examples—the Lawgorithms report is filled with dozens and dozens of ways that data-centric technologies are undermining economic justice in this country. So with these examples in mind, I'd love to turn now to Meredith to help us better understand the dynamics of these systems. So, Meredith, I've been throwing around some terms so far—like algorithms, automated decision-making, artificial intelligence, machine learning. What does all this mean? 

Meredith Broussard: That's such a good question because when we're talking about, “What are the harms that these systems cause and how do we remediate these harms? How do we address them in court?” We really need to start with the fundamentals and we need to acknowledge that, often, many of us don't have a total grasp of the fundamentals. So I like to start with a couple of basic definitions: artificial intelligence is a branch of computer science, the same way that algebra is a branch of mathematics, and inside artificial intelligence we've got other subfields—like deep learning or neural nets or natural language generation or machine learning—but machine learning is the one that's really popular nowadays. So often, say, when a tenant screening company says they're using artificial intelligence, what they actually mean is they're using machine learning. And this is confusing because it sounds like there's a little brain inside the computer—and there's not. It's just math. Right? It's a poorly chosen name. So, machine learning is just math. It's actually statistics—it’s very, very complicated statistics, the math is very beautiful—but it's accessible [and] we can talk about it the way we talk about normal things. So machine learning is just math. And it's important not to get it confused with kind of Hollywood ideas of artificial intelligence or “machines that think” and what have you. 
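
To illustrate that point, here is a minimal sketch in Python, using scikit-learn and invented toy numbers rather than data from any real system: the "learning" amounts to fitting a statistical model, and the trained model is nothing more than a few coefficients.

```python
# A minimal sketch of "machine learning as statistics."
# The numbers below are invented toy data, not drawn from any real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy features: [months at current address, number of late payments]
X = np.array([[36, 0], [2, 5], [60, 1], [6, 3], [24, 0], [1, 4]])
y = np.array([1, 0, 1, 0, 1, 0])  # invented labels: 1 = paid rent on time, 0 = did not

model = LogisticRegression().fit(X, y)

# There is no "little brain" inside: the fitted model is just these numbers.
print(model.coef_, model.intercept_)
```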

Then we go on to algorithms and automated decision-making: an algorithm is a procedure for solving a problem. It's a step-by-step itemization of what you want the computer to do in order to solve a problem. The way that computers work is they are machines that do math, and so when we use a computer to make a social decision—like “Who is the highest-scoring tenant?” so that we can then pick the highest-scoring potential tenant in order to give them a lease—the computer needs to follow a set of steps. It needs to follow an algorithm. So, the algorithm describes the steps that the computer goes through. Automated decision-making is the process of using a machine to make the kinds of decisions that were previously made by humans. 
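
As a concrete illustration of an algorithm in this sense, here is a hypothetical tenant-scoring procedure written out as explicit steps. The factors and weights are invented for illustration only; as Michele notes, real tenant screening formulas are proprietary and hidden from applicants.

```python
# A hypothetical, purely illustrative tenant-scoring algorithm: a step-by-step
# procedure the computer follows. The factors and weights are invented, not
# taken from any real screening product.
def tenant_score(record: dict) -> int:
    score = 100
    score -= 20 * record.get("eviction_filings", 0)    # step 1: penalize filings (win or lose)
    score -= 10 * record.get("late_payments", 0)        # step 2: penalize late rent
    if record.get("years_of_rental_history", 0) >= 3:   # step 3: reward a long rental history
        score += 5
    return max(score, 0)

# Automated decision-making: the machine, not a person, applies the rule.
applicants = [
    {"name": "A", "eviction_filings": 1, "late_payments": 2},
    {"name": "B", "late_payments": 0, "years_of_rental_history": 5},
]
best = max(applicants, key=tenant_score)
print(best["name"], tenant_score(best))
```

Note that in this invented rule an eviction filing lowers the score whether or not the tenant ultimately won the case, which is exactly the kind of data problem Michele described in her eviction example.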

[10:27]

Michele Gilman: Thank you. That is very helpful in breaking down some concepts that can be intimidating if, like me, you didn't study computer science in school. So here’s a follow-up question: I have appeared before judges who believe that automated decision-making is superior to any other evidence that I can present to them. This sounds like technochauvinism to me. Can you define that term and tell me: are the judges right? 

Meredith Broussard: That is absolutely technochauvinism. So “technochauvinism” is a belief that technology or technological solutions are superior to other solutions. And what I would argue is, instead of assuming the computer is always right and the computer is always better, that we should ask, "What is the right tool for the task?" Because sometimes the right tool for the task is a computer. Absolutely, whenever it’s a mathematical calculation—yes, use a computer. But other times, the right tool for the task is something simple—like a book in the hands of a child sitting on its parents' lap. And it’s not a competition; one is not inherently better than the other. 

So, in the case of something like recidivism scores—so probably all of you know about the COMPAS algorithm that was used to generate a score to predict whether somebody would be a repeat offender. And it was mathematically determined that this COMPAS algorithm was biased against Black people. It was more likely to wrongly predict that Black defendants would reoffend, and more likely to wrongly predict that white defendants would not reoffend: the COMPAS algorithm was biased. 

And if somebody really believes in technochauvinism, it's very hard to convince them otherwise. It's very interesting to me that technochauvinism has become the default over the past couple of decades. One of the things I did in my book was look into the historical roots of technochauvinism, and it turns out that the ideas that we have about technological supremacy actually come from a very small and homogeneous group of people who are mostly Ivy League-educated white male mathematicians. And there’s nothing wrong with being an Ivy League-educated white male mathematician—but they do have blind spots just the same way all of us have blind spots. And so we have to consider—when we're pushing back against technochauvinism—we just have to consider the amount of time that people spent convincing judges that technology was superior. And we have to ask: “Why?” 

Michele Gilman: So how do you respond to someone who says: "Human beings are just full of prejudices and stereotypes and biases, and so if we can turn over some decision-making to computers, they'll be unbiased. They're purely objective. It seems more desirable than leaving a decision in the hands of a person.” What's your response to that, knowing what we know about COMPAS and these other algorithms? 

Meredith Broussard: It would be really nice if it were possible to build a machine that would get us away from all of the messy problems of being human. But that is merely a fantasy. It comes from science fiction—which is really fascinating to think about—but it's not real. We have to just keep in mind what's real and what's imaginary about artificial intelligence—and we have to look at the evidence in front of us. Lawyers are great at doing this. So, when somebody comes to you and says: "Oh, the machine is more objective or more unbiased," that's a clue they're stuck in this one kind of outmoded thinking. So there are lots of great resources for moving beyond technochauvinism nowadays. I mean, obviously, I think my book is a good resource. 

Michele Gilman: And I agree. Here's my trusty copy! 

Meredith Broussard: Oh, fantastic! That's so exciting. I like Cathy O’Neil’s book Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (Crown, September 2016), Ruha Benjamin’s book Race After Technology: Abolitionist Tools for the New Jim Code (Polity, June 2019), and Safiya Umoja Noble’s book Algorithms of Oppression: How Search Engines Reinforce Racism (NYU Press, February 2018). The arguments against technochauvinism are really well understood and they're laid out in the literature. 

Michele Gilman: Yeah. I agree with those recommended books. They're also all fantastic. At the end of the Poverty Lawgorithms report I included a list of recommended books and readings (including all of those that are mentioned) that give such an accessible grounding to this area. Most people would probably be happy to hear that lawyers have an ethical duty of competence and, increasingly, meeting that ethical duty of competence requires understanding how these systems work—even for many lawyers who, like I said before, didn't study computer science [or] went to law school to avoid STEM topics. We need to understand these systems to represent our clients competently and as zealous advocates. So poor people are disproportionately people of color and women. How does the fact that the tech industry is dominated by white men impact the outcomes that I am seeing in my dockets? 

Meredith Broussard: I think we can draw a straight line between the biases—conscious and unconscious—of the tech industry, and the biases embedded in automated systems. One of the interpretive things you can do as a poverty lawyer looking at these automated systems is to look at the people who are making the systems and ask: "Okay, what do I know about the people making the systems? What do I know their blind spots are?" 

So, for example, the Apple Watch when it launched was touted as having all of these great health features. There were going to be these health tracking features. And then it launched—and it did not have a period tracker. Now, if there had been more women on the Apple development team, then somebody might have pointed out: “Hey, if we're going to launch a wearable that has health tracking, it actually might make more sense to have a period tracker on as the default and then you have to take it off if you don't need it.” So we can think about the blind spots of the people making the systems and use those to predict the blind spots of the systems. 

Michele Gilman: Why is it that tech is so dominated by certain perspectives? How has that pipeline been created? 

Meredith Broussard: It's really just about the history of racism, the history of institutionalized sexism, and people not examining problems inside their fields. So, computer science is a descendant of mathematics. We tend to think of tech as ubiquitous nowadays—it truly is—but computer science as a discipline started in the 1950s and it was started by mathematicians, and mathematicians are, you know, mostly brilliant—but they have a couple of issues. And, as a field, they have a big problem with gender. So, for example, the American Mathematical Society, which is the major organization of professional mathematicians, has never had more than 20% women in its membership. Ever. I mean, we're in 2020. 

If you look at graduates of the most prestigious computer science departments or math departments: there are very few women, there are very few people of color. Mathematics as a field has never reckoned with its history of racism and sexism, in part because they've been able to get away with this snobbery of saying: “Oh, we're so superior because we're mathematicians, that we don't need to worry about the same kinds of things that, you know, the peasants worry about.”

And, I mean, it's just not a productive way to exist in the modern world. So one of the things that we can do in order to combat this is we can have more diverse teams of people working to make technology. And, on the legal side, we can work harder to help poverty lawyers, and help lawyers of all stripes, to understand the underlying technology. I mean, it's hard. I'm not going to lie—it's hard. There's a lot of stuff out there that is designed to make you think that you can learn tech overnight; that you can learn to code, that “everybody should learn to code and you should just go to boot camp and your life will be transformed.” A lot of that is empty promises. This stuff is hard. But also—law school was hard. Understanding the law is really hard. I'm a computer scientist—I am used to reading really arcane sets of documents and parsing things out and interpreting them—and, even for me, reading law is hard. Lawyers already know how to do this incredibly difficult thing. So it's about transferring those kinds of skills to this new computational domain. 

[21:15]

Michele Gilman: And, I think, it's about working in partnership—lawyers are very familiar with working with different types of experts in litigating our cases. And so this is just yet another area of expertise that we need to master to meet our clients' needs. So, shifting the focus to the lawyering side, Meredith, what questions do you have for me? 

Meredith Broussard: Well, I want to ask about class because we know that automated systems, algorithmic systems, are notorious for discriminating by race, discriminating by gender. But one of the things that’s not talked about very much is how these systems discriminate by class, so I was wondering if you could talk about that from a lawyers' perspective. 

Michele Gilman: Sure. The one factor that all my clients—who are a very diverse range of people—share in common is that they are materially poor. Social class is the binding factor for my clients and what I'm really concerned about is how data-centric technologies are increasingly performing a gatekeeping function in peoples' ability to access core life necessities. We're turning these decisions over to automated decision-making systems and yet, as I mentioned earlier, we don't really know how these systems work. The people who are impacted have no say in these systems, in their design, or whether they should be used in the first place—and it's very hard to hold anybody to account for their unfair outcomes. 

We have a robust vision of ourselves, in America, as a meritocracy. But this view of social mobility is a myth. The largest determinant of your economic success as an adult is who your parents were: the birth lottery. 40% of kids who are born in the lowest economic quintile will stay there as adults, and that same stickiness exists at the top of the ladder as well. And so I'm worried that the technologies we're talking about today will cement this stickiness, because of the ways these systems segment and sort people into minute categories, making it harder and harder for people to climb the economic ladder. 

At the same time, when you represent poor people one of the realities is that poverty is not a protected class in antidiscrimination law: it is perfectly acceptable to discriminate against the poor as a legal matter. And so we don't have all the tools we could use to push back against the targeting of poor people and the exclusion of poor people. 

So, I've talked a bit today about the ways poor people get excluded—such as from housing or public benefits—but I also see patterns of them being targeted because they are poor. So payday lenders swooping in and targeting them with exploitative financial products, or for-profit colleges targeting, through the internet, very vulnerable groups to lure them into college programs that leave them with massive amounts of debt and very few educational opportunities on the back end. So, there are these patterns of both targeting and exclusion because people are poor. 

Meredith Broussard: So what do we do about that? What is the appropriate remedy? 

Michele Gilman: Well, there's no silver bullet here. We really need to use every tool in our toolbox to resist what's happening. In the Poverty Lawgorithms report, I definitely tried to highlight ways in which lawyers and their clients are pushing back successfully. So we need litigation—that's one tool—there have been some successful class actions brought against state public benefit systems that were wrongfully denying people benefits to which they were entitled, and those lawyers really had to unpack and understand the algorithms to achieve success in those cases. But litigation isn't the only tool. We need community organizing: a fantastic example was a group of tenants in Brooklyn who fought back against their landlord's proposal to adopt facial recognition technology in the building where they lived, and through organizing and working collaboratively with legal services lawyers, they were able to get the landlord to back down from that plan. And then we also need legislative reform: we need to change the laws and enact good new laws. Some jurisdictions, for instance, require that outcomes in certain housing cases be sealed or masked so that tenant screening companies can't pick them up and embed them in reports that then lead to housing inaccessibility. So I think it's a mix. It's litigation, it's legislative reform, it's community organizing—it’s all hands on deck. There truly is no silver bullet. But we can't even begin to fight back if we don't have an awareness that these systems are operating and an understanding of how they're working. 

Meredith Broussard: One of the things that I like about your report is that you start with the basics of: “How do these things work?” But, also, you cover: “What are the different categories where lawyers can expect that automated systems are being used?” And: “What do you need to know about how these systems screw up?”

Because it seems like this is the missing piece of knowledge. This is the thing that is going to allow lawyers to take action about these automated systems. And I want to ask: What are some things that make these particular systems invisible? Why has it been so hard for poverty lawyers to see inside the black boxes of these algorithmic systems? 

Michele Gilman: Yeah. Good question. So, there are a few reasons: One is that a lot of these systems operate invisibly in the background. So I may have a client come to me who's struggling to find housing and not sure why—and they have no idea that the landlords they are applying to are using tenant screening reports, right? These screenings often happen in the background. The same happens in employment and in my public benefits cases. A lot of these systems just operate without the public knowing that they are being used. 

There's also a hurdle in that a lot of poverty lawyers don't have the training or background to identify or interrogate these systems. Again, that’s the point of us talking here together today and the report: there's nothing to be scared of here, we can do this. But we also have to recognize that lawyers who work in civil legal services are in extremely busy practices with hundreds and hundreds of cases and an overwhelming amount of need. There was a study showing that there's a huge civil justice gap in this country. So in a criminal case—you're probably aware that a criminal defendant has a right to a lawyer. But the right to a lawyer does not exist on the civil side—so when we're talking about housing, food, family rights, and the like. And so 86% of civil legal needs in this country go unmet every year because there just aren't enough lawyers in this space to meet the need. And so the lawyers who are working on behalf of low-income clients are just overwhelmed. So it's also a matter of providing the training and the resources and the breathing room for people to get their hands around these issues that are in their dockets. But we have to be realistic about that. 

[30:15]

Meredith Broussard: And it sounds like the pandemic is probably making this worse. Because you mentioned before that tenant screening reports are full of bad data. And landlords are increasingly using tenant screening reports to evaluate prospective tenants. Clients don't know that these systems are being used, and also there aren't enough poverty lawyers for civil actions like housing discrimination. And I'm also mindful of work that The Markup has been doing lately about housing discrimination and the long-term impact on people who are being evicted as a result of losing their jobs because of the pandemic. And it seems like this is a case where it might be useful, as you mentioned before, to seal certain elements of someone's housing history. 

Michele Gilman: Yes. And some jurisdictions have laws in place to do that but they are in the minority. And it's just going to take a lot of advocacy among impacted communities and their allies to ensure that these laws spread nationwide. And, you know, there's a lot of “devil’s in the details” here because we want laws that automatically seal eviction records and don't require affirmative steps on the part of tenants to do that—again, because there's such a lack of lawyers to help folks with this. So even in the realm of good laws to combat tenant screening, there are better ones and worse ones, and we want to identify the best practices—the most protective laws—and use those as models as we try to expand them to new jurisdictions. 

Meredith Broussard: Do you think that should happen at the federal level or should it happen at the state level? 

Michele Gilman: Yeah. Well, ideally both.

 [Laughter]

Some of these issues are just traditionally considered more matters of state law, such as housing. And some could definitely be addressed at the federal level. So, I think it's a mix, especially if we have laws at the federal level that are a floor, not a ceiling, and allow the states to be even more protective of their citizens if they choose to do so. One thing that would be very useful at the federal level would be to have a comprehensive data privacy law that gives all Americans more control over their personal data. Right now, we don't have such a law. In Europe, there is such a law, the General Data Protection Regulation (GDPR), which gives European citizens more control over their data. In the United States, our legal regime is along the lines of notice-and-consent, meaning that when you access the internet and certain webpages or different apps, you are consenting to your data being gathered and used. Our legal regime puts the onus on us as individuals to protect our privacy—rather than giving responsibility to the companies that profit from our data or the government agencies that use the data to surveil and control us in different ways. And that's just a complete imbalance of how things should be: so, at the federal level, I would definitely love to see something comprehensive that extends to all Americans to give us more control of our data. That would be a start to combating some of the problems I'm talking about in the report. 

Meredith Broussard: I love that idea. Where can poverty lawyers start to understand some of the successes that people have had so far in combating automated systems? 

Michele Gilman: Yeah. I definitely would point to the report, the Poverty Lawgorithms report—it’s just chock full of footnotes and citations to other helpful resources and work by incredible lawyers around the country who have taken these systems head on. And even in the acknowledgments section of the report—there were so many lawyers who shared their expertise with me—and all of them are experts in all these different areas. Part of the goal of the report is to build a network of folks interested in these ideas and sharing ideas and sharing strategies and working together. It can be challenging because we're all in our individual jurisdictions and then we have different specialties—housing lawyers and family lawyers and consumer lawyers. But this is an issue that stretches across all jurisdictions and all practice areas, and so we really need a national movement among lawyers who practice in this area to combat these harms. 

Meredith Broussard: Clearly collaborations are key and I wonder if the field of public interest technology is a place for lawyers to go? So I've been getting increasingly involved in public interest technology. There's something called the Public Interest Technology University Network that New America runs. It seems like, increasingly, that's where we're having conversations about technology and the public good and what we need to do about law, about policy, about the mechanisms of these systems. 

Michele Gilman: I love that idea. We definitely need to get out of our silos and work together because we all have a lot of the same goals—increasing equality, increasing access to life opportunities—and we'll work better together than apart. So this might be a good chance to go over to the Q&A and let's see if there are some questions for Meredith and I that we can answer. 

Meredith Broussard: There are so many good questions here. 

Michele Gilman: There are. And you can vote for the questions you most would like us to pay attention to. And here’s one that I think would be good for you, Meredith. One of our questioners asks: “How impenetrable is the black box? How should we think about subpoenaing machine learning algorithms? And can biased algorithms be salvaged by conscientiously cleaned data?” 

And I think that’s such a good question because—I could ask for computer code in discovery; the other side would probably resist it, claiming trade secrecy and different doctrines that give them proprietary rights to their data. But if I could clear that hurdle, I would be left with a lot of computer code that I can't interpret on my own, and would have to work collaboratively with someone like Meredith to understand it and to litigate around it. So that takes us back to the core question here: “What happens when you open the black box and try and peer inside?”

Meredith Broussard: I've been waiting for years for a lawyer to show up in my office and say: "Here's some computer code, can you read this for me and tell me what is going wrong?" Because I would like to do that. So lawyers who are listening: please get in touch when you need somebody to read computer code. 

And, I think that we can actually do a lot—even without reading the code. And I'm convinced of that because we can use journalistic snooping. You can use FOIA and you can read the documentation to find out how these automated systems work. So, when a state agency contracts with a firm to buy enterprise software, there's a contract. You can FOIA the contract, or FOIL it. You can read the terms of the contract: it says a bunch of things about how the system works and where the data is stored. Right. You can read the manuals for the system to find out: What are the data points that it collects? What are the functions that it performs? And you can extrapolate from that. So if something is rendering a score about somebody's creditworthiness, it has to calculate that score from something. It probably says—in the documentation—how the score is calculated. So there are plain-language explanations, and you can actually figure it out without reading the code. 
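
When the documentation obtained this way does spell out a formula, one can often re-implement it and sanity-check the scores without ever seeing the vendor's code. Here is a hypothetical sketch; the field names, weights, and base score are invented for illustration and stand in for whatever a real manual would disclose.

```python
# Hypothetical sketch: re-implementing a scoring formula described in
# plain-language documentation (e.g., obtained through a records request)
# to sanity-check the scores a system produces. All field names, weights,
# and numbers here are invented for illustration.
DOCUMENTED_WEIGHTS = {"payment_history": 0.5, "debt_ratio": -0.3, "account_age_years": 2.0}
BASE_SCORE = 500

def documented_score(applicant: dict) -> float:
    """Apply the formula exactly as the (hypothetical) manual describes it."""
    return BASE_SCORE + sum(weight * applicant.get(field, 0)
                            for field, weight in DOCUMENTED_WEIGHTS.items())

# Compare the reconstruction against the score the vendor reported for a client;
# a mismatch is a signal that the system is not doing what its manual says.
client = {"payment_history": 80, "debt_ratio": 40, "account_age_years": 6}
print(documented_score(client))
```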

In terms of getting your hands on the code, I don't know. It seems like that's really complicated. I know people fight about it a lot. Do you know of any cases where somebody has successfully subpoenaed code? 

Michele Gilman: Yeah. There have been cases—more on the civil side than on the criminal side— where courts have ordered that it be turned over in discovery because the litigants have a due process right to notice and a hearing when a government takes action against them. So under that norm of due process, courts have ordered that it be released. And sometimes it's with various protections so that it won't become public. There are various restraints a court can put on the code when it's turned over—but, yes, there have been successes in this area—and there's a lot of strategizing going on around that. 

Meredith Broussard: That is really good news. There's a paper about information fiduciaries—that I’ve always found really fascinating—the idea being that we need to create an information fiduciary who has a responsibility to keep code secret: a company would be able to upload its code to the information fiduciary, knowing its intellectual property is protected, and then we would have a mechanism for auditing the code. Because we do need to audit these algorithms for fairness. We have abundant evidence now that automated decision-making systems violate people's civil rights, that they are arbitrary, that they just reinforce biases about race and class and gender. And we need some kind of mechanism for appealing these decisions, for auditing the algorithms, for saying: "Hey, if the algorithm is unfair, if it's not working, let's not use it." 
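
One concrete form such an audit can take is comparing error rates across groups, which is essentially the kind of analysis that exposed the disparities in COMPAS and in facial recognition systems. Here is a minimal sketch with invented records, assuming the auditor has both the system's predictions and the observed outcomes.

```python
# A minimal fairness-audit sketch: compare false positive rates across groups.
# The records below are invented; a real audit would use the system's actual
# decisions alongside observed outcomes.
from collections import defaultdict

records = [
    # (group, predicted_high_risk, actually_reoffended)
    ("A", True, False), ("A", False, False), ("A", True, True), ("A", True, False),
    ("B", False, False), ("B", True, True), ("B", False, False), ("B", False, True),
]

false_positives = defaultdict(int)  # flagged high risk but did not reoffend
non_reoffenders = defaultdict(int)  # everyone who did not reoffend

for group, predicted, outcome in records:
    if not outcome:
        non_reoffenders[group] += 1
        if predicted:
            false_positives[group] += 1

# A large gap between groups is the kind of disparity an auditor would flag.
for group in sorted(non_reoffenders):
    rate = false_positives[group] / non_reoffenders[group]
    print(f"group {group}: false positive rate = {rate:.2f}")
```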

[41:30]

Michele Gilman: I like the sound of that.

 [Laughter]

So, one question here says: “Well, what about the Federal Trade Commission (FTC) and the Consumer Financial Protection Bureau (CFPB)? Are they watching over this? How are they performing? How should these issues be regulated?” 

So the FTC is the primary privacy law enforcer in the United States and they have done some really foundational work on helping uncover and explain the data broker industry. They do not have the funding or staffing to solve all the data privacy problems facing Americans, to say the least. And, of course, agencies are subject to political vagaries that can make them more vigorous about protecting consumers under certain administrations than others. Also, when the FTC works to protect our data privacy as consumers, it is bringing enforcement actions against companies that break their promises to consumers, but that doesn't get to the core question of whether companies should be doing certain things with our data in the first place. And so we can't only sit back and think: “Oh, the FTC will take care of that for us,” because that's just not their jurisdiction. But if we had a comprehensive data privacy law that included strong enforcement mechanisms and funding for enforcement, that would go a long way towards at least building transparency around these systems and building lines of accountability. And, ideally, with more information about how these systems are operating, we could then engage in discussions as a society about what should be allowed and what should just be banned. Some of the things that I'm seeing and talking about—all the regulation in the world isn't going to solve the problem—it just shouldn't happen. 

 We're seeing that kind of discussion now around facial recognition technology. There are cities and jurisdictions around the United States that have decided: "We just don't want it. This is not how we're going to treat our citizens. It's too dangerous." And we need to be having more of those discussions about other technologies: what should just be banned in the first place and not just regulated? At the end of the day—you need both—but we're not engaging in enough of those conversations. 

Meredith Broussard: I would love it if we had more of those conversations. And, particularly, around facial recognition—and facial recognition in policing. There's a new documentary out that I highly recommend. It's called Coded Bias (2020). And it follows Joy Buolamwini, who is the MIT researcher who discovered that facial recognition systems are biased: that they are better at recognizing men than women; they're better at recognizing light skin than dark skin—and they perform worst of all on dark-skinned women. So facial recognition systems are biased. And most people would look at this problem and say: "Oh, well, you know, the problem is the training data was not representative," and think that we can go in and put in more diverse training data, and then we'll make the algorithm better, and then it will make everything better.

And Joy's work goes one step further. She says: "Listen, it's not enough to improve the algorithms. These kinds of facial recognition technologies are disproportionately weaponized against communities of color. They're deployed against poor communities and the solution is not to make them better at persecuting poor people and poor people of color—but it's actually not to use them at all in policing." 

So Joy’s organization is called The Algorithmic Justice League. As I said, the film, Coded Bias, follows Joy through one era of her fight against facial recognition. And then, the film also covers one of the situations that you mentioned earlier—the tenants in Brooklyn who organized against facial recognition—and that case was so fascinating because the landlord wanted to put in this facial recognition system so that people would be able to unlock their doors with their faces. Which just boggles my mind because a key works perfectly well. The key has worked for a really long time. It doesn't require any power. It's very simple. Facial recognition to unlock your door is a totally technochauvinist idea. But it was even more foolish because we know which populations facial recognition works poorly on, and so the idea you would put in technology that makes it harder for people to get into their homes is completely absurd to me. So I'm really glad that the tenants won their case against the landlords who wanted to put in facial recognition locks. 

Michele Gilman: Yeah. It's very inspiring. I think it's also a good example that the tenants of that property recognized right away all the ways that facial recognition technology would be used to provide grounds for eviction, to surveil their comings and goings, to harm them. Whereas there may be residents who are wealthy in a very upscale building who see facial recognition technology as a key to access their apartments, as a "gee whiz" technology. A new tech toy—something cool and exciting. And, I think that just goes to show that the impacts of different technologies on different people can vary—and we can't assume just because one group likes a technology or it's working for them, that that applies to everyone. We can't essentialize our discussions around privacy and surveillance to assume that these products impact everyone the same way, because they don't, and that's one reason why I feel so strongly that we need to have more public participation in the adoption and design of these programs so we get more perspectives. Because, as you pointed out earlier today, the perspectives of the folks who design these products are narrow and limited. The teams working on them have very shared experiences. 

Meredith Broussard: You know, there's a question here. Someone asks: “How can we expect people to go into poverty law when it doesn't pay?” Which leads me to think about the earlier question about the FTC: “Why don't we have more people who really understand technology working at the FTC and working in enforcement?” And it feels like those are the same question. And it feels like it's about economic inequality. The salaries you can get in the tech industry are so wildly beyond the salaries that you can get as a poverty lawyer or as a regulator or as somebody working on a government pay scale at the FTC. What do you think about this? 

Michele Gilman: Well, I do think it's a problem that pay can be low for legal services lawyers—but the bigger problem, actually, is the debt burden that lawyers graduate with. So I have many students who would love to join the legal aid bureau after graduation or to work in this space, but because of the cost of education now, it becomes prohibitive. So, I definitely think part of social justice now really has to involve focusing on educational debt so that people have opportunities to live their life's passion in terms of their work. There are some loan repayment programs at different law schools and at the federal government level that give people more flexibility to pursue these sorts of jobs, but it needs to be much more widespread. When you look at the actual salaries, they're not so awful. It's just when you think of them in the context of the debt burden, then it can seem insurmountable. 

[50:49]

Meredith Broussard: That makes a lot of sense. 

Michele Gilman: And I understand that's true for many fields today, not just lawyers. But it's definitely a dynamic I see with my own law students. So we need more debt forgiveness, loan repayment programs, ways to ease this path. Because it's such meaningful work. It's such important work. It's so satisfying to be able to work one-on-one jointly with clients to help them achieve their goals. 

Meredith Broussard: So, it sounds like, in addition to more knowledge about automated decision-making systems, more knowledge about algorithms, more knowledge about AI, we also need to draw connections to other social problems—like the student debt crisis or the need for comprehensive data privacy law. 

Michele Gilman: Yes. It's all connected. 

Meredith Broussard: It's all connected. 

Michele Gilman: And we need to see these connections and work together across disciplines, across jurisdictions. I couldn't agree more. And I think that's one of the takeaways that we have for today's session. I'll give you a few of my other takeaways and then let Meredith share some of her thoughts. One thing that I think is more and more important, which I've mentioned a few times, is having stakeholders participate in the adoption and oversight of automated decision-making systems. We need to put more democracy in action, particularly when these algorithms are being adopted by government agencies. I'd love to see more partnerships, such as you just mentioned, between tech experts and poverty lawyers and low-income communities and community organizers around these issues. Really centering these issues and being proactive in strategizing against them rather than just reactive when they pop up in a case. 

There are great pro bono opportunities, here, for large law firms who have more resources and who regularly partner with legal services organizations to serve disenfranchised people and this would be such a great area for more of that. We need more tech education for lawyers—starting in law schools—I know there are law schools teaching coding for lawyers and classes of that sort and we need more of that. We need to see it as fundamental. And then, for those of us who have already graduated—for those whom it’s too late—more programs like this one and legal education and training opportunities, which is something that Meredith and I are thinking about kicking off. And so if anyone is interested in that, please contact us. 

Again, I hope you'll take a look at the Poverty Lawgorithms report if you’re interested in this, because there are so many resources in there that, based on the questions today, I really think our listeners would find incredibly useful. Meredith, what do you have to add? 

Meredith Broussard: I'm just going to echo what Michele said. The Lawgorithms report is a great place to get started. My general outlook is: What could possibly go wrong? Well, everything. And so, it's just a matter of looking for where it has gone wrong—because it is inevitable that in these automated systems, things are going to go wrong—that the biases that we already know exist in the real world are going to be replicated inside these systems. And so I use Ruha Benjamin's framework—that discrimination is the default inside computational systems—instead of the technochauvinistic view that these are going to save us and bring us into a new digital utopia. Actually, these systems are just magnifying all the existing problems. So where can you start in understanding these things? Where can you start in finding legal remedies for the benefit of humanity? I think you can start with my book. I think that's a good starting place. Also, all the other resources that we mentioned earlier. And, I know this sounds so boring, but I'm going to say it anyway. You can read the documentation. Read the manuals. Programmers have this expression “RTFM,” which stands for "Read the [bleep] Manual." 

Michele Gilman: Good advice!

 [Laughter]

Meredith Broussard: And we're all guilty of this. But when you're trying to understand the automated system: go to the source, read the plain-language documentation about how the thing works, and, if you need to, talk to a tech expert, collaborate with people to have someone interpret it for you, if you need that. Or, interpret it yourself. It’s a little impenetrable—and it's hard—but it is not totally inaccessible; it just takes time. 

Michele Gilman: Yeah. And the amazing lawyers who have won cases against state public benefits systems that I mentioned earlier—they read the manual—and that is how they were able to identify where the algorithms had miscoded state law and incorrectly kicked people off. You buckle down and you do the work. That’s what lawyers do, day in and day out, and we can do it here too. So, with that, we will close for today. We want to thank you so much for joining us and I can't thank Meredith enough for sharing her expertise with us. 

You can continue this conversation using the hashtag #Lawgorithms. And please stay tuned for upcoming events on our website, datasociety.net/events. We also ask that you fill out a short three-question survey at the end of this event. We're sorry we couldn't get to all the amazing questions that were put up in the chat today, but feel free to email either of us, jointly or separately, any time. We'd love to engage in further dialogue with you. 

With that, I will say take care and please have a wonderful weekend. Thanks to all. 

Meredith Broussard: Thanks, Michele. Thanks, Data & Society. And, thanks, everybody, it was such a pleasure.