Ethics and Technology

January 31, 2017


What does it mean for technology to be ethical? To illustrate, let’s start with how it can be unethical.

Robert Moses, the master builder of public works in New York between the 1920s and 1970s, did something very unusual: he had the overpasses around Long Island designed and built as low as 9 feet tall, much shorter than most overpasses you might be used to. As it turns out, he did this because city buses were about 12 feet tall--and he was trying to “protect” Long Island, and specifically Jones Beach, from poor people and racial minorities, who were far more likely to ride the bus than the middle- and upper-class whites in the area. He also vetoed a proposal to extend the rail line to the area.1

In this case, the design in question was intentionally unethical, but often design solutions can have detrimental effects without the intention. For instance, when Facebook’s “year in review” feature was unveiled, it had the unintended consequence of showing people photos of their recently deceased loved ones, causing pain and suffering for their users.2

As industries are turned upside down--disrupted--and technology is pushed into every part of our lives, we have to evaluate the effects of our products like never before: not only the direct effects, but also the side effects. People are often willing to take the leap into a new technology to avoid FOMO (fear of missing out) or seeming like a Luddite, but the long-term and side effects typically get little thought.

For technologists, the more widely publicized ethical questions tend to center on privacy on sites like Facebook and Google, or on what liabilities and responsibilities companies like Uber and Airbnb have to safeguard their customers. Neither Uber nor Lyft is handicap accessible; if each is “just” a platform for other people to “share,” do they have to conform to the ADA (Americans with Disabilities Act)?

“In our times people are often willing to make drastic changes in the way they live to accord with technological innovation at the same time they would resist similar kinds of changes justified on political grounds.”1

Given that there are no formal licensing or education requirements to be a web developer, there are also no formal ethical guidelines by which developers must abide. Computer science courses may include some mention of ethics, but six-week code schools generally do not. Many tech companies are led by engineers, whose focus can easily become the technology rather than the people using it.

Designers, and anyone involved who can step back for a big-picture view, have to ask these questions. How will this product affect its users, both short term and long term? Can all people access it? Will it have any effect on the community, and can or should any negative effects be eliminated, mitigated, or otherwise addressed? Pokémon Go got people outside, exploring, and getting exercise, but it also caused people to walk blindly into traffic or even off cliffsides. Perhaps the benefits outweigh the problems, but perhaps there was also a way to design with those problems in mind.3

How can you know all the effects a new product might have once it’s out in the wild? It’s impossible to predict everything, but with some exploratory research, it is possible to uncover previously unforeseen impacts. One-on-one interviews with potential, current, and past customers, as well as usability studies, can offer a wealth of information. But you have to look beyond the obvious and ask the questions that may not come up in a typical development cycle.

When a large new construction project is proposed, the site will typically undergo an impact assessment--most commonly economic, but also environmental. What long-range effects will this project have on an area? Will it increase jobs? Drive up neighboring home prices? Is it being built on a watershed or a nesting ground?

Beyond construction, there are also social impact assessments:

"Social impact assessment includes the processes of analyzing, monitoring and managing the intended and unintended social consequences, both positive and negative, of planned interventions (policies, programs, plans, projects) and any social change processes invoked by those interventions. Its primary purpose is to bring about a more sustainable and equitable biophysical and human environment."4

We could draw on these ideas and apply them to technology. Should a company be required to do a social impact study for every new app or feature it wants to build? That seems like overkill, but it is true that companies are responsible for the effects their products have; certainly all of the companies mentioned in this article have seen their share of lawsuits.

One could hope that technologists will push the industry to consider ethics and impacts with a more holistic view. Europe seems to be leading in this area with the European Parliamentary Technology Assessment5 network, although its assessments tend to center on bioethics, public health, and energy. In the United States, advocates have pushed to ensure the accessibility of all federal websites and apps through Section 508.6 As technology takes over more and more of our everyday lives, technology companies will be forced to step up as the public and policymakers demand it.

  1. Langdon Winner, “Do Artifacts Have Politics?”, 1980
