Data & Society

Becoming Data Episode 3: Data, AI & Automation

Episode Summary

Deb Raji and Arthur Gwagwa discuss AI and automation across different geopolitical contexts.

Episode Notes

Researchers Arthur Gwagwa and Deb Raji join our host, Natalie Kerby, to discuss data, AI, and automation, and the different ways they operate across geopolitical contexts such as the US and Africa. The episode covers not only the harms that can result from these systems, but also how we might address and prevent those harms.

Arthur Gwagwa (@arthurgwagwa) is a researcher at Utrecht University’s Ethics Institute in the Department of Philosophy.

Deb Raji (@rajiinio) is a fellow at Mozilla and works closely with the Algorithmic Justice League.

"Becoming Data" is co-produced by Data & Society and Public Books.

Episode Transcription

Data and Automation (with Arthur Gwagwa and Deb Raji) 

Annie Galvin (AG): Hello, and welcome back to Public Books 101, a podcast that turns a scholarly eye to a world worth studying. I’m Annie Galvin, an editor and producer at Public Books, which is a magazine of arts, ideas, and scholarship that is free and online. You can read the magazine at www.publicbooks.org

Natalie Kerby (NK): And I’m Natalie Kerby, digital content associate at Data & Society. Data & Society is a research institute that studies the social implications of data-centric technologies and automation. You can learn about our work at https://datasociety.net/

AG: This is the third season of our podcast, so if you’re listening for the first time, I invite you to subscribe to Public Books 101 in your podcast feed and listen back to season 1, which was about the internet, and season 2, about the novel in the 21st century. This season, we are excited to partner with Data & Society to explore the past, present, and future of human life being quantified as data. Natalie is your host this season, so I’ll let her take it from here. Thanks for listening.

NK: In this season, “Becoming Data,” my guests and I are considering a few main guiding questions. How long has human life been quantified as data, and in what contexts? What are some major implications of humans being quantified or measured as data? How are people pushing back against the datafication of human life, work, health, and citizenship, among other things?

Today, my guests are Deb Raji and Arthur Gwagwa. We’ll be discussing data, AI, and automation and the different ways they operate across geopolitical contexts, most notably, in the U.S. and Africa. We speak not only about the harms that can result from these systems, but also the ways that we might address and prevent those harms. Deb emphasizes how important it is to listen directly to impacted communities.

Alright, let’s start today’s conversation about data, automation, and AI.

 

NK: Alright, thank you both so much for being here today. To start, if you could just say your name and tell our listeners a little bit about the work that you do, that would be great. So, Arthur, why don't we start with you.

Arthur Gwagwa (ArG): My name is Arthur Gwagwa, and I’m currently in Zimbabwe, but I work for Utrecht University’s Ethics Institute in the Department of Philosophy and Religious Studies. I am a researcher looking at the impact of socially disruptive technologies on democratic norms, democratic processes, and society. Prior to that, I was an AI researcher with University College London, working on artificial intelligence development in specific sectors in Africa, like health and agriculture.

NK: Great, thank you so much. Deb, how about you?

Deb Raji (DR): Hi, my name is Deb Raji. I'm currently a fellow at Mozilla, and I work very closely with the Algorithmic Justice League, thinking about how we can evaluate and understand deployed systems: AI systems and algorithms that have not just been constructed in a lab, but thrown into the real world, affecting real people.

NK: Great. So, this podcast series is about data. To start us off, I want to ask both of you: what is data? I think it is really interesting to see how different people answer this question. So Deb, why don't you go first.

DR: I see data as representations of real people and real human subjects, in many cases. A lot of the systems we think about at the Algorithmic Justice League are systems that try to take representations of people, through data, and make decisions about those people's lives, so the connection between the information about the person and the person behind that information is, for me, very clear. I see data as people, and I see the way we interact with and manipulate data as really involving the manipulations of their lives that end up happening due to these systems.

NK: Yeah, I love that, data as people. So Arthur, how about for you in your work, what is data? How do you define data?

ArG: I think the generic definition is one about facts, about statistics, about attributes, but in the context of my work, data is about facts, statistics, and attributes relating not only to individuals but to groups of people, collected over time. And then we end up calling it big data. We know that data about one person may not really mean much, but if you combine that data with data from other people within a particular community, say, a town, a city, or a neighborhood, then it tells a story; it becomes amenable to analysis.

NK: Yeah, I like that you emphasize that it is not always about the one data point, but the patterns across multiple data points. This episode is specifically focused on data, AI and automation and I'd like to just differentiate those terms for our audience, and then give some examples of them in the real world. So, Deb, can you tell us what an algorithm is, the difference between AI and automation, and then give us a few examples of these systems in the U.S.?

DR: An algorithm means very different things to different communities. I'm coming from the machine learning community, which is a community that makes use of data and defines the rules of a program using data in order to build a model that then gets shipped out to make decisions on data that it has never seen before. That's really what an algorithm means to me: those instructions on how to make use of past data in order to set new rules for how you think about new data. 
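(As a concrete, editorial illustration of what Deb describes here: a minimal sketch in which rules are learned from past data and then applied to data the model has never seen. The loan-approval framing, column names, and numbers are invented for this example and are not from the episode.)

```python
# A minimal sketch of "learning rules from past data, then applying them to
# new data". The loan-approval framing and all numbers are invented purely
# for illustration.
from sklearn.tree import DecisionTreeClassifier

# Past data: each row is [income_in_thousands, years_employed];
# each label records whether the loan was repaid (1) or not (0).
past_features = [[35, 1], [80, 6], [50, 3], [20, 0], [95, 10], [40, 2]]
past_labels = [0, 1, 1, 0, 1, 0]

# "Training" distills those past examples into decision rules (a model).
model = DecisionTreeClassifier(max_depth=2).fit(past_features, past_labels)

# The model is then shipped out to make decisions about applicants it has
# never seen before.
new_applicants = [[60, 4], [25, 1]]
print(model.predict(new_applicants))  # e.g. [1 0]: approve, deny
```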

For me, AI is a very nebulous term. I think AI is connected to this sci-fi vision of a machine with the cognitive capacity of humans, where it is predicated on this assumption of, you know, humans are really able to do a lot of really interesting things. We have a lot of impressive cognitive abilities. How cool would it be if a machine was able to do some of these things? So, you know, have a conversation, be able to see in the way that we see, manipulate language and visual cues in the way that we do. 

The difference with automation, for me, is that automation is not really so much a vision. It is connected to much more pragmatic examples. It is not as connected to the sci-fi speculation of what it would mean to have a machine mind. There are practical applications that we want to achieve. There are certain things that we want to be able to do faster, or to do at scale. Can machines help us make these processes automatic so that we are not manually making some of these decisions and interacting with these systems as frequently as we do?

If you think of the press stories around Sophia the Robot, which was sort of this humanoid, robotics, cyborg thing, it was very human-like looking, lots of wires, I think that would be something that people would affiliate or associate with the vision of AI. Whereas, automation more often looks something like a Roomba, where a Roomba is much more widely deployed, you know, it affects a lot of people's lives theoretically, but it is a much simpler machine. It achieves the sort of practical application, you know, in this case vacuuming that people want to get done, that people don't want to do themselves. 

I will say that in the U.S., there are so many ways in which automation shows up and interferes with people's lives. In the criminal justice system, algorithms and different types of models are used to try to predict someone's ability to rehabilitate after a certain amount of time in jail. Recidivism scores estimate the probability that a person will come back to jail after they have been released, and those predictions are then used as leverage points to make decisions about people's real lives, such as how long they stay in jail, how much bail will be set, and a lot of other really important things.
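(To make the risk-score example concrete: a hedged, illustrative sketch of how a model's predicted probability becomes a high-stakes decision. The features, numbers, and threshold are invented; real recidivism tools use their own proprietary inputs and scoring.)

```python
# A toy sketch of how a predicted "risk score" becomes a consequential
# decision. Features, numbers, and the threshold are invented for
# illustration; real recidivism tools use proprietary inputs and scoring.
from sklearn.linear_model import LogisticRegression

# Past data: [age, number_of_prior_arrests] and whether the person was
# re-arrested within two years (1) or not (0).
X_past = [[19, 3], [45, 0], [23, 5], [52, 1], [30, 2], [27, 4]]
y_past = [1, 0, 1, 0, 0, 1]

model = LogisticRegression().fit(X_past, y_past)

# The model outputs a probability (the "risk score") for a new person...
risk_score = model.predict_proba([[21, 2]])[0][1]

# ...and an institution turns that score into a high-stakes decision.
decision = "deny bail" if risk_score > 0.7 else "set bail"
print(round(risk_score, 2), decision)
```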

We also see a lot of algorithms being used in healthcare, to prioritize different applicants for organ donations and to automatically predict who is most likely to be at risk, and thus who should get the hospital bed. I also mention education, because in Covid times that became a very big application of algorithms and AI in the U.S., along with hiring. In education, people have been trying to use these systems to automatically assign grades and automate some of the things that take a toll on teachers managing larger and larger classes. And with hiring, it is about automating some parts of the process a hiring manager would normally do in looking at individual applications: is there a way to whittle this down so that it is easier to manage when the pile is overwhelming to look through?

NK: Yeah, thank you for all of those examples. I feel like you made a really good point there: the way that we talk about AI is this very mystical future vision, and I think, for people who aren't really steeped in these subjects, it can mask the fact that there are all these automated systems making decisions about our lives right now.

DR: Yeah, exactly. Yeah.

NK: So Arthur, for you, I'm curious, what are some examples of AI and automated systems showing up in the African context? Because I can imagine they might be a little bit different than what we are seeing here.

ArG: Yeah, I think automation preceded the era of artificial intelligence: factories started using automatic equipment, maybe to improve efficiency on the conveyor belt. What artificial intelligence has done is disrupt automation, to enhance it or to augment it. In the past, it was more to do with the hardware and the equipment, but now we begin to see cognitive behavior in the equipment, where the equipment is duplicating or replicating human behavior, or acting independently of human agency.

When you are talking about Africa, most of the services that we are seeing leverage platforms from global technology companies like Amazon AI services, Microsoft Cognitive Services, and Google AI services. Then there are independent African startups and scale-ups; the startups are mostly being seen in fintech, education, and health. IBM, for example, is working with local startups like Hello Tractor, an open source mobile platform that enables farmers to access tractor services on demand. In Africa, we are also seeing machine learning communities, like the machine learning and data science in Africa forums. These communities are experimenting with different applications, especially natural language processing, to ensure that African languages are fully reflected not only in cyberspace but in AI applications. Most of these applications are focused on what we call AI for good: projects that promote or add value to public goods like the climate, agricultural production, health, and other non-profit sectors.

NK: Yeah, that is super interesting, and I think what came out in your response is that there is a geopolitics of AI in Africa: it is not just African startups working on AI interventions, but all these other countries bringing in their technologies as well. So, Arthur, I'd love for you to talk a little bit specifically about the relationship between China and Africa when it comes to exporting AI, and then I'm curious what your thoughts are on the consequences of having foreign control of these data-centric systems?

ArG: China has been increasing its investment in AI research and development in Africa as part of its foreign policy, but also as part of its domestic artificial intelligence practices. Let me just give you a few examples: Alibaba e-commerce, popular products like WeChat, and then ByteDance, which is behind TikTok. Some of these applications, what I would call benign applications, are already seen in Africa. But China is usually associated with the export of AI in the form of surveillance technology. Countries like Kenya, Rwanda, and Uganda are importing Chinese artificial intelligence, and the same technologies are being used to surveil or to exercise social control on the streets. They are used by the police to combat domestic crime, but also external threats like terrorism. And, as we have seen, the same technologies are also being used to control protest or any form of dissent.

In terms of consequences, the export of intrusive and covert AI technologies to African countries, especially those with poor human rights records, is likely to reinforce existing systemic repression and introduce new forms of repression at the same time. One of the observations we are making is that the harvesting of data by China, for example through facial recognition technologies, is going beyond conventional trading parameters. So there is some sort of colonization happening. But as I mentioned at the beginning, this is not only a problem with China, because European countries are deploying the same technologies in Africa. Of course, when it comes to Europe there are some safeguards, whereas the relationship between China and Africa is a free-for-all: it lacks transparency and it lacks any accountability mechanisms.

NK: Thank you, that was a really great explanation I think of the different players and I'm happy that you highlighted the difference between European exports as well. So Deb, I'd like for you to just give us a few examples, or maybe one example, of how companies in the U.S. or Canada might outsource tasks related to AI and automation to other countries. Data labeling is one that comes to mind?

DR: Yeah, I want to also note that the U.S. and Canada do import some AI capabilities as well, especially with respect to China: a lot of surveillance products used by government agencies in multiple capacities, specifically a lot of affect recognition tools, which try to detect emotions or filter through applications for some indication of personality or emotion. A lot of those are developed in China, as are a lot of surveillance tools. One of the companies we audited was the Face++ product from Megvii, which is a very big company in China. So I just wanted to point out that that dynamic also exists in the U.S., and the U.S. does make use of those surveillance tools on the ground to surveil its citizens as well.

There are a lot of civil rights violations happening here as well. There is a lot of exploitation happening with respect to how companies in the U.S. or Canada interact with developing nations as sources of data. So often we've heard cases of people going into those nations to collect citizen data because it is “easier,” there is less regulatory control around specific types of data like biometric data, but also like you mentioned, a lot of situations where a firm in the U.S. might collect a bunch of unlabeled data, so information that doesn't necessarily have categories assigned to it or isn't necessarily mapped onto a task that might be useful commercially. 

A company might collect that data and want to filter that information into different categories by putting a label on it, and that is a hugely tedious task: to look through images, for example, and try to say this image goes in the bucket of images about dogs versus this set of images is about cats. They usually outsource that work. India has a huge market for this, and in a lot of African nations, those that are disadvantaged economically, people have an opportunity to earn a couple of cents per hour by labeling these data sets. In certain contexts it actually becomes quite inhumane. This is something that came out with Facebook, which was trying to build a model to filter out incredibly violent content, nudity, and not-safe-for-work images. But the labelers, the people who were actually filtering through these images, were being exposed to horrific, graphic images day in and day out in order to assign them a label, and it was incredibly emotionally taxing and emotionally traumatic in many cases. It can be tragic to think about how that is a situation that disproportionately affects those who are likely the least privileged or have the least agency to escape it.

NK: Yeah, I feel like there are two viewpoints that really came out of that for me. One is that there is always a human behind it somewhere, right? And we have been trained to not think about the human. And then on top of that, I think you made a good point in saying that we have exports that come here as well, and my question was framed to ask you about the way that we outsource certain tasks, but it is a reminder that, yeah, some of these tasks happen here as well and that exploitation also happens here. 

DR: People forget that there are content moderators who are Americans, in certain parts of America especially. There is a lot of great work done by Lilly Irani on the labor dynamics of this kind of work, where some workers are trying to organize to get better pay and better working conditions. There are these horrific dynamics that exist on certain platforms where someone might spend an entire day labeling data, and then if the person who requested that labeling work decides to reject the labels they assigned to the images, that person doesn't get paid for that entire day of work. You can just imagine how frustrating and exhausting that labor condition is. So there has been a lot of really good work on empowering these communities to start organizing and advocating for themselves and for better conditions. Mary L. Gray and Siddharth Suri have a great book on this called Ghost Work, where they really dive into that.

But it was interesting for me, because when I first met Mechanical Turk workers doing this advocacy work, I was really surprised that a lot of them were American and were speaking to their experience as Americans: technically participating in the tech industry, technically participating in the machine learning economy, but as this underclass, this dismissed level of participants. To what extent can you recognize the contribution of the people who are doing this really, really difficult job? How does that affect how we interact with these people, respect these people, and the labor conditions they are entitled to have?

NK: Right, and expanding this notion of who is a tech worker, right?

DR: Exactly.

NK: As it encompasses all of these people. Definitely.

DR: Yeah.

NK: We have talked a little bit about some harms that fall on the workers who work on AI, whether it is labeling data or content moderation, but let's talk about some of the harms that arise for the people affected by these systems, the ones the systems are making decisions about. I'm curious to see how these harms differ across contexts, and also how they are similar. So Deb, can you give us a few examples of algorithmic harm and then explain how human bias often becomes baked into these systems?

DR: Yeah, you know, algorithmic harm is such an important term to begin using. When I first started doing this work, people were very keen on the specific harm of bias and having conversations about situations where you have a model that is deployed in the real world and it doesn't work for a particular segment of the population. That is a horrific outcome for that segment of the population, especially when it is an algorithm that is meant to make important decisions about healthcare or criminal justice or whatever it may be. In terms of accountability, it correlates very much with conversations around discrimination, which we have a language for in our law thanks to the Civil Rights Movement. Only recently have we begun to expand our vocabulary of harms to understand that there are many ways that humans can be negatively impacted by an algorithm, and the definition that I use for algorithmic harm is a situation where an individual or a community is negatively impacted by a system of any kind.

An algorithm is integrated into that system in a significant way, and the algorithm is part of the contributing factors that lead to the harm they experience. So this could be a situation where the danger of automating a system, or of integrating an algorithm, is not necessarily that the algorithm introduces new risk, but that it upholds a system that is inherently detrimental to a particular community. That also counts as a harmful situation for those who now have even less visibility and recourse to push back against the algorithm that has made decisions about them.

There are many harms that I can speak about. There was a case that we were very involved in at the Algorithmic Justice League with a group of tenants in Brooklyn. These were residents of a rent-controlled, rent-stabilized apartment building, and their landlord was keen on installing facial recognition. When they first came to us, we were like, oh, okay, facial recognition, the harms associated with that are privacy harms, you are probably worried about how your data is going to be manipulated and made use of, or bias. A lot of the tenants were people of color and the landlord was white, so we were like, okay, maybe you are worried it is not going to work for you, and that could lead to physical risks if someone calls the police, or exclusion if someone locks you out of your home. And they were like, yeah, we're worried about that, but that's not what we're really worried about. We're worried because we know our landlord has been trying for many years to kick us out and raise the rent, no one asked for this in our community, we had no say in this as a community, and we know that he is using this technology as another tool to manipulate the situation and potentially really oppress us as a community. And we want to be able to push back against it. I was just really blown away by that entire experience, because for me it was like, oh, when you talk to these communities, the harms that they identify and what they are worried about are not always what the research community is talking about.

Another example is really with security. A lot of conversations around security concerns with machine learning systems are about something we call adversarial attacks: if I change a couple of pixels in an image, the model will predict a completely different label, and that can become really dangerous. If I put a sticker on a stop sign, a self-driving car won't be able to identify it anymore as a stop sign. That seems really dangerous, and it is really scary, and that is a lot of what the conversation around security concerns in the machine learning and AI world has been about for a while. But then some people in communities where self-driving cars and some of these technologies are already being implemented were like, no, we're actually worried about other things. We are worried about the lack of control that we have, about not being able to stop these cars. We're worried about the fact that there is no way to turn things off, no way to indicate that we don't want this in our community. It is these governance issues that are so much broader than just the technical aspect of, does this thing get manipulated in this situation versus that situation?
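(An illustrative sketch of the adversarial-attack idea Deb mentions: a tiny perturbation to the input flips a model's prediction. The toy linear classifier and its numbers are invented for this example; attacks on real vision models, like the stop-sign sticker, rest on the same principle.)

```python
# A toy illustration of an adversarial perturbation: a tiny, almost
# imperceptible change to the input flips a model's prediction. The linear
# "classifier" and numbers are invented; attacks on real image models
# (e.g. the stop-sign sticker) exploit the same principle.
import numpy as np

w = np.array([0.9, -0.6, 0.4])     # weights of a fixed linear classifier
x = np.array([0.30, 0.55, 0.11])   # an input it classifies as class 0

def predict(v):
    return int(w @ v > 0)          # class 1 if the score is positive, else 0

print(predict(x))                  # 0: the original prediction

# Nudge each feature by at most 0.05 in the direction that raises the score
# (the fast-gradient-sign idea; for a linear model the gradient is just w).
x_adv = x + 0.05 * np.sign(w)

print(np.abs(x_adv - x).max())     # ~0.05: the change is tiny
print(predict(x_adv))              # 1: the prediction flips
```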

I think the communities that are most impacted by these situations are so articulate about what their concerns are and what they are worried about, and it is becoming increasingly clear that in order to get a good sense of harms, we need to actually be interacting with these communities and talking to them. A lot of community concerns are around this lack of agency, or the lack of ability to even find out whether an algorithm has been involved in a situation; there is not a lot of disclosure about that at all. But there is another case that I think about often, which is the A-levels situation in the UK. Because of coronavirus, a lot of students didn't sit for their final exam. Instead, their teachers gave them a grade based on the assignments they had done throughout the year, and that grade was then adjusted by an algorithm that calibrated their grades based on information like the typical grade point average for the school or the region, the history of the grades received by a particular community. You can probably guess how this resulted in a lot of frustration and outrage: if a community had a particular historical average, every student from that community got penalized because of this algorithm. And the conversation around these types of algorithms in the academic community had for a long time been about explainability, that people want to understand why they got a particular grade and that's why they are upset. But when you actually talk to the students, a lot of the protests were around a lack of appeals, a lack of access to appeals, a lack of information around the fact that this was happening, and a lack of participation in being able to determine whether or not this should happen.
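(A toy sketch of the kind of grade moderation described here: a teacher-assessed grade is pulled toward the school's historical average. The blend weight and numbers are invented; the actual 2020 Ofqual model was far more involved, but the effect on students from historically lower-scoring schools is the same.)

```python
# A toy sketch of "moderating" a teacher-assessed grade toward a school's
# historical average. The blend weight and numbers are invented; the real
# 2020 Ofqual model was far more complex, but the effect is the same:
# students at historically lower-scoring schools are pulled down regardless
# of their own work.

def moderated_grade(teacher_grade, school_historical_avg, weight=0.6):
    """Blend the teacher's grade with the school's past average."""
    return round((1 - weight) * teacher_grade + weight * school_historical_avg)

# Two students with the same teacher-assessed grade (out of 100)...
print(moderated_grade(85, school_historical_avg=88))  # 87: historically high-scoring school
print(moderated_grade(85, school_historical_avg=60))  # 70: historically low-scoring school
```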

NK: Definitely. A good reminder to yeah, not only think about the technical, but the social, and everyone's favorite term, the socio-technical, and I think also you make just a really good point in all of your examples that it is so important to center the impacted communities, right? Because they are -

DR: Yeah.

NK: - in the end whatever solution we come up with, has to serve them.

DR: Yeah, exactly.

NK: So, Arthur, I would be curious to hear from you about what kind of harms you are seeing in the African context as a result of AI and automated systems?

ArG: One harm that is happening in Africa relates to new business models like Uber and Airbnb. For example, Uber's taxi services are replacing traditional taxi hailing in countries like Kenya and South Africa, which impacts the future of work. It is leading to loss of jobs in communities, and I think some of those industries might actually be wiped out in the future, because the services are being outsourced to European countries or to the U.S. Another form of harm that we are experiencing in Africa is the risk of accentuating the negative impacts of globalization, because artificial intelligence means that decisions along the agriculture or manufacturing value chains are being globalized, made far away from where their consequences are felt. What this basically means is that some of those decisions may not actually protect the interests of Africans. And then finally, related to data again, there is the ongoing debate about genetic sequencing data and the future of agriculture. Agriculture is the backbone of African economies, and companies like Bayer, Monsanto, and John Deere have a monopoly on the seed and pesticide industries; they also have a monopoly over genetic sequencing data, which is the future of agriculture. This is what is called high-value data, and once high-value data is owned by the global north, it is going to disadvantage Africans in sectors like agriculture and health, and probably also in sectors like education and finance, because decisions relating to health and agriculture are being made in other countries.

NK: I think an interesting parallel between both of your answers is that Deb was talking about how, even in the research community, the harms that researchers assume aren't necessarily the harms, or the solutions, that the community itself is thinking about, and here you are talking about that on a major scale, right, where decisions are being made in the global north about people in the global south. The harms aren't necessarily visible to people in the global north because of that literal geographic divide.

I want to talk a little bit about harm reduction since you both gave us a good picture there of some of the harms that algorithmic systems produce. And so, Arthur, can you tell us about one major policy conversation going on right now in Africa around AI systems?

ArG: One policy response is framed around post-colonial solidarity, whereby African governments and pan-Africanists see artificial intelligence as a brainchild of the global north. When pan-Africanists and African governments look at the issue of artificial intelligence, they are looking at issues around solidarity, justice, fairness, equity, and equality. Then the other issue that is actually being debated at the moment is data ownership: who owns the data? Africa doesn't really have access to most of this data, or the capabilities to turn that data into computer-readable format, let alone to use it in artificial intelligence. Another policy response in Africa at the moment is to see how Africans can be included in the fourth industrial revolution, because Africa has been left behind in the previous industrial revolutions. So African governments, academics, and policymakers are looking at how Africa can use its data to advance economic development and be on par with countries like China.

NK: Do you think that African countries having ownership over their data will reduce the harms that Africans are experiencing right now?

ArG: Just having data is not going to be enough. Africa needs to do much more than that: improve governance structures, but also improve accountability and transparency in society, to make sure that once we have the data, it is not only going to benefit the elites, but it is going to benefit underrepresented communities and the weakest members of society.

NK: Yeah, definitely. So, Deb, I want you to talk a little bit about your project. You are helping people recognize the algorithmic harm that they have experienced, right? Can you talk a little bit about that?

DR: Yeah, so at the Algorithmic Justice League, we receive these inbound requests, or just cries for help, from people who feel like they have been impacted by a situation and are unsure, but they think that maybe an algorithm has been involved in some way, or they have been tipped off to the fact that there is an algorithm involved in some way. We wanted to think about what the current processes are for these people to actually report these harms, or even identify these harms and acknowledge what is going on. That is the work that we are doing, and we're hoping that it can lead to the design of a system that can actually serve as a portal or a resource for people who are being hurt and don't understand how to talk about it, how to report it, or how to get some kind of justice for the harm that they feel.

Education has played a very big role here, where the general public has a limited understanding, almost intentionally, of the way these algorithms intersect with their lives and how to talk about it. I say almost intentionally because, in certain cases, such as what we saw with the POST Act and the NYPD really resisting any policy requirements to be open and transparent about what surveillance tools they were using, it reveals that certain institutions that make use of these technologies don't want it to be public knowledge what they are using and how it is interfering with people's lives. But I think the most important thing is that we're really hoping that those who are affected become part of the conversation in defining which harms we talk about as a research community. It has been way too long for that not to be the case. I sometimes go into certain academic spaces and I see so much speculation happening, but we don't need to speculate in many of these situations: those who are affected are right there, and they can talk about what they are going through and really guide our work in a way that makes it meaningful for them in addition to meaningful for us.

NK: Yeah, I think that is a really beautiful, almost conclusion to this episode, but Arthur, I do want to ask you one quick question, and I know that you have a philosophy background, right? I'm curious: when we think about the future and when we think about automation and AI, we often think about progress and moving forward. So from your perspective, do you consider automation progress?

ArG: Yeah, I think it depends, really. If you were to ask someone from Tesla, for example, or maybe a capitalist who is running a huge industrial corporation, they will consider automation progress, because we are beginning to see AI systems that exhibit genuine agency; they can act independently of human intervention. But the drive to automate everything, even if it brings about efficiency, may do so at the cost of what it means to be human, threatening the values, realities, and aspirations that communities care about. What artificial intelligence is doing is fast-tracking Africa into the world of progress, to say, well, you can order your food from your house. But who is to say, did we ever say, that Africans want that sort of progress? It's not in line with our culture. We usually have these social gatherings during the harvest time, but if all of that is going to be disrupted because agriculture has become more efficient, that doesn't really count as progress in Africa.

Before automation or artificial intelligence is counted as progress, we need to look at the human rights, environmental, sustainability, and cultural impacts and make an assessment before deciding which parts of our lives should be automated. We need to think about human happiness and flourishing. What does it mean to be happy in Africa? What does it mean to flourish? What do we mean by a dignified life? What are our moral obligations with respect to other human persons?

For example, in Africa, we value Ubuntu: "I am because we are." We have a communal approach to life. If we are going to have rational machines that think like human beings, trying to replicate what we do but dividing us asunder, that cannot be regarded as progress. For any systems that are going to be deployed in Africa, if we are going to consider them moral progress, we need an approach that involves African philosophers, African elders, and underrepresented populations like women and children, so that they can contribute to the future deployment of artificial intelligence systems, but also to the policies that regulate those systems. Thank you.

NK: Thank you, yeah, that was just a beautiful way to end. 

And that’s our show! A huge thank you to Deb Raji and Arthur Gwagwa for sharing their thoughts about data, automation, and AI. You can find links to their work in the show notes to this episode. 

Next time on Becoming Data, I talk to Laura Forlano, a writer and design researcher who studies design, technology, and radical futures, and Ranjit Singh, a postdoctoral scholar at Data & Society who works on data infrastructures, global development, and public policy. Together, we investigate infrastructures of data: how these systems have infiltrated the most intimate corners of our lives, whether through medical technologies that live in our bodies or in the forms of identification that mark citizenship.

So I hope you’ll join us for our next episode, about data and infrastructure. 

This podcast is a production of Public Books, in partnership with the Columbia University Library’s Digital Scholarship Division. Thank you to Michelle Wilson at the library, for partnering with us on this project. This episode was produced and edited by Annie Galvin, with editorial input from Kelley Deane McKinney and Mona Sloane. Our theme music was composed by Jack Hamilton, and our logo was designed by Yichi Liu. Special thanks to Data & Society Director of Research Sareeta Amrute and Director of Creative Strategy Sam Hinds, and to the editorial staff of Public Books for their support for this project. Thank you for listening, and I hope to see you next time.