Software is eating the world. We, developers and architects, are a major force influencing software, technology, and the world it creates. We don’t have the privilege of being unaware of our actions. If we really want to create a better world, we must understand the intersection of technology and humanity. We need to open our eyes to the link between ethics and software. In this session, we’ll look at some examples of ethical questions involving software and algorithms. We’ll discuss technology, sense of self, politics, truth, and try to understand what we can do about it.
“The basic problem is that web 2.0 tools are not supportive of democracy by design. They are tools designed to gather spy-agency-like data in a seductive way, first and foremost, but as a side effect they tend to provide software support for mob-like phenomena.”
Jaron Lanier
“Facebook has never been merely a social platform. Rather, it exploits our social interactions the way a Tupperware party does. Facebook does not exist to help us make friends, but to turn our network of connections, brand preferences, and activities over time — our ‘social graphs’ — into a commodity for others to exploit.”
Douglas Rushkoff
Years ago, at a highly acclaimed university, students in an engineering faculty were presented with a question.
“You need to design a pipe to conduct blood between London and Paris. What information do you need to collect in order to design this pipe?”
The students thought and came up with a lot of questions about the shape of the pipe, the topography, temperatures, things that can cause the blood to clot, and a lot of other technical questions.
But there was one question that nobody asked.
- Why? What is the purpose of transferring blood? Whose blood is it?
Ethics is not a new thing. Humanity has been contemplating, discussing, and arguing about ethics for centuries. Everything we do has an ethical aspect.
We will not go into a philosophical analysis of what ethics is; there is enough debate about that. But I think we all have a common-sense idea of what we mean when we talk about ethics. Basically, in a very crude way, it’s trying to think about the consequences of what we do. Is it good or is it bad? How does it affect other people? Does it make life better, or does it hurt someone? These are the kinds of questions we’re talking about here.
It was always important to ask these questions. But I think it’s even more important today, in the context of what we do, in the context of software development. There are two trends that make it so important.
Software is a technology that now underlies almost every aspect of our world. Moreover, a lot of the time its presence is hidden, unobservable. It’s becoming the fabric of our day-to-day existence, but one that is less and less noticed as we progress. So software engineering, as a practice, is becoming one of the most influential practices in our world. What we do shapes the world we live in. What we do as software developers has consequences that span society.
The second trend is the decline in liberal arts education. This is not only about the decline of liberal arts majors, but also about the tendency toward specialization in technical education, leaving liberal arts studies out of the curriculum. Philosophy, literature, history, the arts: these subjects are not considered important at all in software engineering faculties. When you study software engineering at college, it’s all about bits and bytes. No one discusses ethics, morality, social responsibility, or psychology. We’re working in one of the most influential sectors that exists today, yet our education ignores the social and cultural effects of what we do.
One more important remark about ethics: I’ve called this talk “ethical *questions* in software engineering” because I believe that when we discuss ethics there are no easy answers. That’s sometimes hard to accept, especially for those of us who are used to giving concrete, closed-ended solutions. But the point of ethics is to ask questions, to always think about the multiple aspects of what we do, and to understand that there might not be a single correct answer. We’re not looking for solutions; we’re looking to open our minds to multiple views that might give us a broader perspective for guiding our actions.
Let’s look at some examples of ethical questions surrounding technology and software. This is just an anecdotal list of examples, things I myself find interesting. There are a lot of areas I will not even touch here.
One area where ethics does get discussed a lot these days is autonomous cars. It’s a good place to start because the ethical questions here are very clear.
There’s a classical dilemma in ethics that relates to this: the trolley problem. It goes something like this. A runaway trolley is about to run over five people. You’re standing near a switch that can divert it to another track, where it will kill only one person. Should you throw that switch?
There’s no easy or single answer here, and there are different ethical considerations that may result in different answers. And that’s part of the point. It’s a situation that brings up questions and world views without a single finite solution. It’s not a computational problem.
But now, what if we place an autonomous car in this position? This is going to be quite a common scenario. The car is about to hit people; should it swerve and hit another person instead? If one is an adult and one is a child, whom should it save? What if it’s three adults vs. one child? What is the “algorithm” here? Who should decide on it? And who is responsible for the death that is going to happen? Is it the car manufacturer? Is it the programmer? Is there now no one to take responsibility for deaths in car accidents? What is the social impact of this?
A team of researchers tried to understand the social expectations for such automated decisions. They conducted a global online experiment, showing people across the globe these kinds of situations and gathering the different decisions people would make. The results show that there are different clusters of social ethics. Some preferences are more global, like sparing humans over animals, or sparing more lives. Some preferences are more culturally influenced, like sparing the young rather than the old. This interesting study brings up an additional layer of questions. Should we take these cultural preferences into consideration when designing the behavior of automated cars, or other automated systems? How can this even be enforced? Should a car behave differently in different countries, according to the local ethics? Or maybe, as others have suggested, we cannot make these kinds of decisions at all, and the car should randomly select its behavior in such situations, to preserve the principle of equality for all?
This is one of the many ethical problems we face with the current advancements in AI and machine learning. We’re building systems that make all sorts of decisions. That was always a large part of what we use computers for, but AI is changing the nature of those decisions and the questions we ask. It’s not just about straightforward computation anymore. We’re starting to ask more open-ended questions. “Who should the car hit” is one example. And we’re starting to rely on more and more of these decisions. Decisions about the information we’re exposed to, like what we see in our Facebook feed. Decisions about whom a company should hire, made by automated HR systems. Or systems used in law enforcement, in sentencing and parole, deciding the “risk” that a person will commit another crime.
These are becoming very concrete problems. Take, for example, a case in Wisconsin, where Eric Loomis was found guilty for his role in a drive-by shooting. He was given a long sentence, partially because he was marked with a “high risk” score by a risk assessment tool used by the court. Loomis challenged the sentence on the grounds that he was not allowed to assess the algorithm used by this tool, which was developed by a private company. The state supreme court ruled against Loomis (on the grounds that knowing the output of the algorithm was enough). But the questions raised by this case remain. How much can we rely on black-box algorithms for these kinds of decisions? Who is responsible for the algorithms behind these decisions? Who do they answer to? Are we OK with relying on privately developed algorithms that are kept secret by the company? And let’s say this becomes a standard tool in courts. We can assume the software’s rating is pretty much deterministic. Will this mean that a sentencing decision will be the same no matter who the judge is? We’re giving up on the diversity of human opinion; are we OK with that?
Here’s another example of relying on automation and machine learning. Last year a Palestinian man was arrested in Israel after publishing this post on Facebook. The post, in Arabic, says “good morning”. But Facebook’s automatic translation mistakenly rendered it in Hebrew as “hurt them”, which in Arabic is a similar word, differing by a single letter. A policeman saw the post with its translation, and since the post was also geo-located in an Israeli settlement, the man was arrested. Small features can have big implications.
This is a fun example. A team from MIT wanted to show the effect of the training data on the output of machine learning algorithms. So they took an AI for image captioning and trained it on images from one of the darker subreddits. Then they showed it images of a Rorschach test, and compared it to a second AI that was trained on a more conventional data set.
So there’s a tendency to think that algorithms are less biased than people, and that we will be better off if we let algorithms make more choices. And tech companies are happy to cultivate that image; it’s good for business.
But the reality is that a lot of the time AI can take on hidden biases buried in the data we feed into it, biases that we might not even notice ourselves, and actually amplify them.
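To make the amplification point concrete, here is a minimal, purely illustrative sketch. The data and the “model” are invented for this example, not taken from any real system: a naive model trained on historically biased hiring data turns a 75% skew in the data into an absolute rule.

```python
# Illustrative sketch only: a toy "hiring model" that learns, per group,
# the most common historical outcome. The data below is fabricated to
# contain a human bias: group "A" was mostly hired, group "B" mostly not.
from collections import Counter

training_data = [
    ("A", "hired"), ("A", "hired"), ("A", "hired"), ("A", "rejected"),
    ("B", "rejected"), ("B", "rejected"), ("B", "rejected"), ("B", "hired"),
]

def train(data):
    """Learn the most common historical outcome for each group."""
    outcomes = {}
    for group, label in data:
        outcomes.setdefault(group, Counter())[label] += 1
    return {g: c.most_common(1)[0][0] for g, c in outcomes.items()}

model = train(training_data)

# The model now rejects *every* candidate from group "B": a 75% skew in
# the past becomes a 100% automated rule. The bias is not just learned,
# it is amplified, and it is invisible unless someone audits the data.
print(model["A"])  # hired
print(model["B"])  # rejected
```

Real machine learning models are far more complex, but the failure mode is the same: whatever regularity sits in the data, including our prejudices, becomes the decision rule.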
This is a well-known example of image recognition gone wrong. A couple of years ago, Google Photos automatically labeled this guy’s friend as a gorilla. And there are other examples, like digital cameras that interpreted Asian eyes as blinking and marked those photos as flawed, and webcams that couldn’t identify faces with darker skin tones.
Now, we could just mark all these incidents as bugs. But they bring out questions about how we use and rely on machine learning algorithms, about the data sets we use for training them, and about our awareness of the implications they might create. And there’s also the issue of diversity. As many commented after this incident, it shows how things that seem to be OK in our controlled environment (which is, we have to admit, not very representative) can break down once they’re loose in the real world.
How far are we willing to go in trusting algorithms? Take a look at Rootclaim, a startup whose mission is to create the ultimate truth machine: a platform to assess real-world issues in a logical way, eliminating human bias and arriving at evidence-based conclusions. For now it deals with questions like “What caused the chemical calamity in Khan Sheikhoun?”, “Did Pakistan know that Osama Bin Laden was hiding in Abbottabad?”, and “What is the story behind Donald Trump’s hair?” (the conclusion to which, by the way, is a 60% likelihood that it’s the result of flap surgery). That’s quite a bold and utopian task, creating such a platform. But putting aside the technological challenges, are we asking ourselves enough questions about the consequences of a technology like this? What are the social implications of relying on an algorithmic platform to decide what’s true and what isn’t? What are the implications for the judicial system? For how public opinion is generated, or can be manipulated? What does it mean for human decision making? For how we perceive ourselves? How can it affect politics, social balances, cultural differences? This can have so many implications; are we asking ourselves enough questions? Do enough people ask enough questions about it? This represents a world view that sees technology as the answer to all problems, but do we also see the problems it creates?
We’ve talked about some big questions, but now let’s zoom in for a minute on the small things. What’s wrong with this picture?
This little thing: autoplay, and it’s on by default. A small and harmless feature. What does this UX feature actually mean? What are the ethics of this feature? It means you need to make a conscious and active choice *not* to watch the next video (assuming you are aware of how to disable autoplay, or fast enough to cancel the next play). We are tricked into spending more time watching videos. This is an example of an opt-in/opt-out dilemma. Every time we present an option to the user and need to choose whether it’s opt-in or opt-out, there might be an ethical dilemma hiding underneath.
In this case, it’s also part of the attention economy. It’s just one of multiple ways social platforms are designed to keep us engaged, but not always for our benefit. Of course it’s not all bad, and every feature has its benefits. But when we create features like this, are we asking ourselves the right questions? Is it good for the users? Do they *really* benefit from it? Are we being honest?
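The whole opt-in/opt-out dilemma often comes down to a single default value. This is a hypothetical sketch (the function and setting names are invented, not any real platform’s code), but it shows how one boolean chosen by a designer decides the experience of every user who never opens the settings:

```python
# Hypothetical sketch: the same feature under two defaults.
# Most users never change their settings, so the default *is* the product.

def next_video_plays(user_prefs: dict, autoplay_default: bool) -> bool:
    """Return whether the next video starts automatically for this user."""
    return user_prefs.get("autoplay", autoplay_default)

# A user who never touched the settings page:
untouched_prefs = {}

# Opt-out design: the user must act to stop watching.
print(next_video_plays(untouched_prefs, autoplay_default=True))   # True

# Opt-in design: the user must act to keep watching.
print(next_video_plays(untouched_prefs, autoplay_default=False))  # False
```

The code is trivial on purpose: the ethical weight is not in the implementation, it’s in the one-line decision of which default ships.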
Media has always been one of the important pillars of sustaining a healthy democracy. But traditional media is changing, a lot of it is dying. It’s being replaced by new media technology. By social media.
And this changes the ways we consume and create information. The way we design social media influences the way information is being consumed and created. The technology companies that a lot of us work for are responsible today for a larger part of how information is generated and consumed.
There’s a lot of good in today’s social media, but there is also a dark side. In historical perspective it’s a very new method of spreading information, and it’s rapidly changing, so we’re just starting to understand some of its impact on society. But we already see parts of it. We see that the short attention span on this type of media, together with virality effects, helps promote hate speech and racism, in ways which have concrete effects on people’s lives.
It creates a preference for spreading lies and misleading information. Information that generates intense emotions and plays on primal feelings like fear and anger moves faster than information that needs more thought and attention. We amplify mob action over thoughtful and cautious consumption of information.
These are not designed effects, but they are a consequence of the way we design and build our platforms and products. The features we design and program impact society much more than we are willing to admit.
It’s not that this issue is being ignored. There’s more and more conversation about the problems of information spreading on social media, and there are genuine attempts to deal with these issues. But there are questions regarding the foundations of these platforms. Do we understand the problems well enough? Are we asking the right questions?
I think it’s important to listen to people like Jaron Lanier and take in that perspective. What he’s saying is that the whole business model of social media is flawed at its essence. It’s an abusive model built on manipulation: social media is always manipulating interactions between people for the gain of a third party. So the technology that has become the main thing we use daily is one that, by design, is not intended for our benefit. He claims that the way to fix social media, and the damage it is doing to society, is to fix the business model it is built on. It should treat us, the users, directly as the customers, because only then can it actually serve the customers’ interest, meaning ours.
As I said in the beginning, I don’t know if this is THE answer. It’s more that there are really hard questions we must ask ourselves about what we’re building, because the effects on society and democracy are not something we can ignore.
We also need to consider the ethos of technology companies. Would we accept this approach from our construction companies? From medical equipment companies? We need to start thinking about companies in their real context. Companies like Uber, AirBnB, Facebook: the fact that they have digital technology at their core does not exempt them from responsibility for the business areas they operate in, namely taxi services, vacation rentals, and media. Technology is not a get-out-of-jail-free card, and we should not let companies use it as an excuse for avoiding responsibility.
And just because technology enables you to build something, doesn’t mean that it’s something that should be built.
This is an application called Parkking. It’s intended for trading parking places, not private ones but spots on the street. So if you’re about to drive off and free a parking spot, you can trade it in the app, and someone looking for a spot can buy it from you. This rides the current of the sharing economy, playing in the arena of applications like AirBnB. But it takes the concept, and the possibilities technology opens, to a new level of exploitation. What happens here is that they are trying to capitalize on public property. It’s the digital equivalent of opening a private parking garage on a public field that isn’t yours.
Back to social networks, and a different aspect. A couple of years ago Facebook published a post on its “People Insight” blog about research they did examining how the moment of a break-up influences people’s online behavior. They looked at the changes in interactions on Facebook after a break-up: what language is used, when people post about it. They also looked at online purchasing behavior, and found that people after a break-up are more likely to be interested in experiences, like travel, than in just buying stuff. The post concludes with recommendations to marketers about what signals they can track to reach people at the right time. The ethical questions here are quite obvious. Should a company like Facebook track user activity to identify a potential break-up? Should it use it to target ads? I guess the Facebook folks figured this was a tricky subject, as from what I could find they removed this post from their blog.
Here’s a more recent example. Last Christmas Netflix posted a tweet which annoyed the hell out of a lot of people. People were offended to discover that their viewing habits might be used to analyze their personality. Netflix later tried to explain that this was only aggregated, not personal, information, but that didn’t help much.
But there was also another interesting response to this Netflix tweet. A guy responded on reddit, telling his story. One summer he was going through an episode of depression. He spent a week doing nothing but watching Netflix. Then he got an email from Netflix asking if he was OK, as they had noticed the change in his viewing behavior and that his account was running non-stop. And, he says, that actually made him feel better.
So behavior tracking, is it good? Is it bad? These kind of questions are never simple. They are not binary. It’s just complicated.
So let’s assume we care about ethics. What can we do about it?
Well, first of all, we need to keep asking questions. Take time to think about what we do, what the companies we work in do. Don’t take things for granted, keep asking. We’re lucky in that we’re still in a position where technical talent is scarce. We have some level of influence over the directions this industry is taking.
We see cases where employees take a stand and affect what is happening in their workplace. It can be in terms of company culture and the way people are treated, and it can concern the things the company does. We can voice our opinion if we identify foul play. We can choose where we work. Pick companies that are really trying to do less evil. Or at least don’t pick the obviously evil ones: porn, gambling, and other intrusive and exploitative companies.
There are pledges like neveragain.tech that bring together people who are concerned about the values and morals of the tech industry.
There are initiatives by developers, like coed:ethics, with resources and even a conference about ethics in the software industry.
And we must understand the importance of including ethics and liberal arts studies as part of technical education. There are a few programs, like this one from Santa Clara University, that concentrate on the study of ethics in technology. But this needs to be a perspective we open up to anyone who is starting to work in tech. We need to make sure, as a society, that the people who are building the tools our society runs on can ask the right questions about them.
One last thing: we need to be more humble as an industry. We don’t have all the answers. Technology doesn’t have all the answers. We need to stop idealizing technology company leaders. They are smart people, but a lot of the time they also have a very narrow world view. And sometimes a problematic personality.
I think the bottom line is: let’s be good people, not just good developers.
So, back to our blood pipe. This story is told about the president of the Technion, Israel’s leading engineering institute. A new president was appointed in the 60s, and he wanted to introduce humanistic studies into the curriculum. The other professors didn’t see the point, and then, as the story goes, he ran this experiment in one of the classes. When the professors heard how the students responded to the question they had been presented with, they were shocked, and all agreed to add humanistic studies.