Humanity in the Machine Age
Amnesty International's Sherif Elsayed-Ali on the future of being human and human rights.
Robot rights aren’t something you’d imagine a human rights advocate would know much about. But Sherif Elsayed-Ali, Director of Global Issues and Research at Amnesty International, has given quite a bit of thought to the topic, which he says is a hot-button issue in the discussion around people and artificial intelligence, and, in turn, human rights.
“I have a very strong view that robots don't have rights,” says Elsayed-Ali, who founded Amnesty’s technology and human rights program in 2015, after leading Amnesty International’s global work on refugee and migrants’ rights. “There's no identifiable consciousness that we know of in robots or AI today. And consciousness is different from intelligence.”
It’s that consciousness, Elsayed-Ali says, that’s deeply connected to what it means to be human, and that allows humans to make choices based on more than just math: on emotions like forgiveness or the desire to help someone. No matter how good an AI system is, “no one,” Elsayed-Ali says, “makes those decisions better than humans.” And when those decisions become calculations about people’s lives (who gets a loan, for example, or whether someone gets parole), we need to be careful that design iterations and technology don’t start to dictate those very fundamental human decisions.
Elsayed-Ali, who grew up in Egypt and is now based at Amnesty’s London headquarters, says protecting the humanity of those kinds of decisions is his biggest concern when it comes to the future of people and tech. In addition to working to mitigate the dangers posed by new technologies, including increased surveillance and threats to freedom of expression, Elsayed-Ali also looks to leverage tech to enhance the protection of rights, which is critical to Amnesty’s research efforts around the world.
He chatted with dscout about the importance of authentic human stories to Amnesty’s work, and how human rights—and possibly what it means to be human itself—may be changing in the age of technology.
dscout: Amnesty is an enormous global operation, and obviously telling people's stories is central to what you do. What does research at Amnesty mean?
Sherif Elsayed-Ali: Amnesty is advocating for a world where everyone is able to enjoy their human rights, regardless of who they are or where they are. A big part of doing that is to tell the stories of people whose rights are being abused, where governments are violating people's freedom of expression, people are being tortured, or civilians are being killed in wars. It’s really important to have high-quality evidence to make the case in those situations, to show what's happening, to tell the story of the people who are suffering. That’s why qualitative research is so important at Amnesty. It's really about how human rights violations are impacting people's lives. It always goes back to the people affected, whether it's an individual, a family, or a community.
We use quantitative research as well, sometimes research we commission ourselves, sometimes supporting research from other credible organizations. What we always try to do is have strong evidence to show the problem, and to really tell the human story. We try to avoid saying things in the abstract, or in theoretical terms. Of course, sometimes you have to, especially when we’re advocating for stronger laws and policies or commenting on draft legislation, but the struggle is to always connect to something that is real, to how something affects people. Our research encompasses everything from prisoners of conscience (people whose freedom has been taken because of their opinions, their religion, their opposition to a government, or their sexual orientation) to conflict, to economic and social rights, things like the lack of decent housing. It’s very much about the day-to-day, specific experiences of people, often things that aren’t necessarily in the political sphere. In relation to technology, we’re looking at everything from the risk of discrimination that’s happening right now to potential future impacts, and what we can do to minimize risks for people down the road.
You mentioned how important specifics are in telling people's stories. There must be an added layer of difficulty to that when you're working in an organization like Amnesty, because some of the people that you're speaking to are in danger.
Yes. The safety of the people we work with and the people we interview, and of the stories they tell, is absolutely paramount. That's always the top concern.
So how do you go about telling people's stories while also making that the top consideration?
First of all, we make sure people know the risks if we are going to tell their stories, and understand how we might be using them. Sometimes people say, “I don't want you to use my name,” or “Don’t use my picture.” Sometimes, even when someone gives us permission to use their name or their story, if we feel it isn’t safe for them we won’t do it. It’s not about taking away people's agency, but sometimes we have a perspective that they may not, so even if someone has given consent we won’t use their information if we think it’s too much of a risk. But sometimes we can simply anonymize someone, or change their name or identifying details.
How are your researchers finding people to talk to? You mentioned there's a huge emphasis on qualitative research and storytelling, and you have a large field network, but it's not like you can advertise. It's pretty different from asking people their opinions on day-to-day issues. You’re talking to people about things that affect their lives, things that aren't always safe to discuss. How do you go about finding those stories?
It always depends on the context, but there are many places where we already have links in the community, with activists or community organizers. Sometimes we don't, and we have to try to establish those contacts, which means building trust bit by bit, especially in situations like a refugee crisis, where people are moving from one place to another. But even if you already have those established contacts, trust is the most important thing. You have to build that trust bit by bit.
Has the idea of telling people's stories, and educating others about the world around you, always been of interest?
Yes. I grew up in an environment where I disagreed with a lot of the accepted norms. Not all of them, but many of them. I grew up in a mostly conservative country, but I read a lot of liberal literature. It made me question why people think the way they do about those who believe certain things, how a culture and the actions that stem from it can shape people’s ideas, and how those ideas change over time. It's been a journey for me to accept where I'm from, especially since, in many cases, I disagree with the accepted norms of the place I’m from. Now, my work is about how technology impacts us and how it might be changing us, now and in the future. The other thing I'm very interested in at the moment is how technology is going to change humanity in the long term, and how we, as societies and as humans, perceive ourselves and our place on the planet, and what that means now and for the future.
How do you think technology is going to change humanity in the long-term?
As humans we have been augmenting our capabilities with tools for several thousand years, from the wheel up to the most sophisticated computer algorithms. That trajectory of augmentation is only going to continue, and actually accelerate, because of the convergence of a number of technologies like AI, robotics, and bio-engineering. There’s a very real possibility that at some point in the not-too-distant future, humans could be so augmented that we might start seeing a split within the human species, between people who are augmented and people who aren't. That would raise a lot of questions around equality, around what it even means to be human. And it’s something we need to think about now, because the things that are moving us in this direction are happening today.
Wow. That's pretty powerful and somewhat shocking. What things are in place now that are moving us toward that possible future?
A lot of it is a matter of when, not if: when we will be able to connect people's brains to machines, to computers that are extremely powerful and hold huge amounts of information. I think the possibility that this will happen in the next 50 or 100 years is very real. What does that mean for the capacity of the human intellect? In 50 or 100 years, could we start seeing genetic modification of people to make them smarter, or stronger? We're already seeing huge competitiveness in artificial intelligence. Eventually we may see a race between countries to enhance human capabilities and intelligence for competitive advantage. Or what if we eventually have colonies on Mars? For people to be able to live on Mars in the long term, and for humans to maintain colonies there, you might have to edit people's genes to make them more adaptable to conditions there. I think gene editing is inevitable in our short-term future as a species, and 50, 100, 200 years is nothing in biological terms. There could start being a divergence within humans.
Is there a point where that kind of manipulation, gene editing or the intersection between humans and machines, starts to become problematic when it comes to human rights? There's a pretty clear case that it could create an imbalance between humans.
It’s a challenge. Right now, though we have individual differences, we’re all the same species; that much is clear. What happens if that isn’t always the case? At the moment there’s a lot of talk about robot rights. I have a very strong view that robots don't have rights. There's no identifiable consciousness that we know of in robots or AI today. And consciousness is different from intelligence. We actually know very little about consciousness: how it arises, or how it works.
It actually goes back to a very philosophical argument about what it means to be human, and to the fact that all we know is that we, ourselves, are conscious. The other thing with deep human augmentation, or the idea of connecting our brains to machines, is that part of our consciousness has to do with our physical beings. We are not just our minds; we are also our bodies. That’s how we perceive the world. So what happens if there are people who become one with the digital realm? Or if you could somehow connect your senses to different parts of the world, so you could have access to information around the world whenever you needed it? What would that actually do to your own self-consciousness? I'm not sure it would be similar to what we experience now as consciousness, or to how we perceive ourselves. It would be a very different kind of existence, and it calls into question what human rights even become.
It's interesting, because, as you say, we know very little about consciousness. There was an article recently about a number of bots that were speaking with one another and essentially created their own language, which seemed nonsensical to the humans who were trying to make sense of it. It seems to go back to the idea that “they don't have a consciousness that we know of.”
The problem is the risk of losing control of the things we create, because of design decisions. Take, for example, something that today we think of as relatively non-futuristic, like a virus or a worm. We design those to do exactly that: replicate on their own. But even the creator of a worm can completely lose control of it, and it can end up infecting different computers and doing things it was never intended to do. If we’ve gotten to the point where we’ve created systems that are talking to each other and they end up communicating in a way that we don’t understand, that’s a problem. What happens when these systems are making calculations that are critical to our rights as humans? What happens when these systems are deciding who gets a loan or a mortgage, who gets convicted or who gets parole, or even, for example, controlling a nuclear arsenal? That’s not happening today, of course. But there’s a risk of letting loose things that we believe are very sophisticated, and therefore highly trusted, but that at the end of the day were developed by people, using processes that may or may not be flawed. That's the biggest worry I have.
It seems to go back to this idea that there are things humans bring to the decision-making process, like consciousness and empathy, that machines can't replicate. Whatever that critical thing is that separates people from AI, the quality at the center of the human condition: for you, what is that?
I think it's a combination of things. With AI, we think there’s a possibility that one day we’ll be able to create artificial consciousness. But even if it does happen, it would be a significantly different kind of experience than what a human has, in the same way a human’s experience is different from that of an animal with less powerful intelligence. And ultimately, humans can make choices based on more than just mathematical rules. We can make choices based on trying to help someone, or forgive someone. When we do that well, nothing beats humans at that kind of decision making.
On the other side of it, though, throughout history there’s been a lot of cruelty in humanity. We can be biased, we can discriminate, we can hurt people; there are people who still live in modern forms of slavery today. We're capable of all of that, too. So I think we should be open to the possibility that there are ways we can improve upon what we know, or upon the things humanity does poorly, by using machines or AI to overcome some of those biases and reduce harm to others. And of course, human rights are a guide for us to think about how we can maximize people’s wellbeing and rein in some of the worst impulses that humanity has.
Carrie Neill is a New York-based writer, editor, design advocate, bookworm, travel fiend, dessert enthusiast, and a fan of People Nerds everywhere.