Doug Rushkoff discusses the side effects of technological institutions on humans, and the mistake of becoming so entangled with digital media that we have steered away from sustainable outcomes.
Was Ted Kaczynski (the infamous Unabomber) right? Seriously, did he actually hit the nail on the head with his belief that technology is destroying what makes us human, our relationships, and our sense of self?
Yes and no. He was right on a certain level, but you could apply his argument to any medium developed by human beings. Let’s look at speech, for example. It came with the cost of people choking on their food. It allowed us to prevaricate in ways that we couldn’t before—we could bring something into being that’s not actually true. Speech allowed us to lie for the first time.
Text—the written word—seemed to be such a terrific invention in that it could allow us to communicate with people who were not actually in our immediate presence. It was used to bind people to contractual agreements. In the good sense, we got a covenant with God; in the bad sense, it enabled slavery. The first things we wrote down were people’s possessions (be it objects or people), or their indenture.
We could also look at the printing press and radio. Did radio humanize us by bringing voices into our home? Or did it enable Hitler to control his population? What about communist China putting loudspeakers all over the place to “program” the masses? And the internet as well. At the beginning, it had this tremendous potential to connect people in new ways, as opposed to television, which directed content at us. With the internet, we had the ability to write our own programs and the freedom to share them and connect across the world. Yet all of that has now morphed into an extension of isolating, alienating corporate capitalism.
In any of these institutions, the figure can become the ground and the original intent can get subverted. Kaczynski was right in that they tend to get subverted. Now, what does that mean, though? Do we go all the way back to hunter-gatherer grunting—when true love and rapport were present in our surroundings? Or can we retrieve the values that we had in prior media environments and then embed them in this digital future? I think that’s our only hope.
A side effect of these digital media platforms is not just the bombardment of unfettered consumerism—they’ve also helped trigger regressive behavior, rather than the forward-looking utopia imagined by Vint Cerf, Larry Brilliant, Stewart Brand, and many of the Internet’s other founders. You certainly talked about how these platforms helped foment nationalism and nativism. This makes me think of the growing anti-vax movement, or the growing movement of “Flat Earth” believers. I recently came across some statistics stating that one in three people between 18 and 24 weren’t sure if the earth was really round. How and why is this happening?
The mistake that Americans in particular make is that we think we are moving forward in time. There’s a whole history that still impacts us, all the time. When Marshall McLuhan would look at a medium, he would ask, “What does the medium amplify? And what does it obsolesce?” But also, “What does this medium retrieve from history?” Television retrieved globalism, a certain kind of empire. Giant chartered monopolies shared in a global revival. That led to neoliberalism, which served as an extension.
The internet retrieved medievalism, and all that had happened before the last renaissance. It retrieved peer-to-peer culture and local currencies; women’s rights and gender fluidity; and all sorts of things we see in Burning Man and the Grateful Dead—a 1960s ethos, and that “craft beer” ethos. That is all part of the digital media environment. But so, if anything, is the city state. So are all the things that got obsolesced in the big renaissance, when nation states came into the picture.
If we’re seeing more nationalism now, I think that has to do more with the boundary quality of digital media. Television was Reagan saying “Mr. Gorbachev, tear down this wall.” The internet is Donald Trump building a wall. Us versus them. This or that. Zero, one. Cuban or Mexican. That’s a kind of a dark style of distinction that people are trying to make.
One of the things that was great about the Internet was that you could find people who shared any interest. But there’s a reinforcing nature to it, which allows us to turn a blind eye to the fact that any interest might have some perversion to it. Not necessarily in a sexual way, but in a harmful, regressive way: you can find your tribe—those who won’t shame you and will shield you from criticism—thereby reinforcing your worst nature. Like the blind loyalty and dedication we see in southern college football, “our team versus that team.” And it’s really driven hard. There’s no acceptance of the flaws of your team, or the positive qualities of the opponent—or in this case, the opposing view.
Yeah, and that’s the main idea behind the theme of this book. Initially, it’s about finding the others who are on your team—the ones who are going to stand with you. Your Bernie Sanders supporters. Your fantasy roleplay gamers. Whatever “they” are. But ultimately, it means finding the “other,” and finding the human in the “other.” Imagine if we had social networks that, instead of teaching us how to see our friends as adversaries, taught us to see our adversaries as fellow humans.
We could actually see through their ideology to the human need underneath. That’s the hard part; but these platforms are intentionally engineered to do the reverse. They make you a more extreme version of yourself than you already are. Because then, you’re not thinking; you’re reacting instantaneously from the brain stem rather than from this coordinated part—the front-brain—the part that can handle paradox and contradiction.
People seem to be much more limited and simplistic today. They’ll say, “Oh, here you are, complaining about some of the vices of digital technology, but you’re talking to me on a computer.” Wow, that’s a contradiction that rocks my brain. Or, “Oh my god. You’re talking about being kind to people and animals, but you just ate a chicken. You ate a chicken. How could you do that?” You know what I’m saying? I may be a sinner, but I’m trying to point the way incrementally. And there are paradoxes in human existence. It doesn’t mean that you have to pedal-to-the-metal destroy the planet, or that we’re all going to become monks. But there are ways to try to steer human civilization towards sustainable outcomes.
There’s a parallel between the online world and the Baudrillardian concepts of simulacra, simulation, and hyper-reality. In Team Human, you pointed out that metadata doesn’t actually work the way we think it does. It’s not as simple as searching for one specific thing and then seeing ads for that object or service later in our streams on that social media platform. It goes beyond that. The profiling, and the depth of profiling, is shared across a much wider range of platforms, whether it be Facebook to Instagram to Amazon, or any app we use. How is that all brought together to manipulate and target us, directly and indirectly?
It sounds kind of like the Truman Show, doesn’t it? The internet is rendering itself in real time based on what it knows about us. So, as you know, your Google search is different than my Google search. Why? It’s like, “Oh, that’s Doug. Let’s show him one of these and one of these. Let’s give him a little of this. I bet he clicks on one of that. Oh, he didn’t click on it this time. Oh, share all that with all the others. Oh, Rushkoff didn’t click on that. He didn’t click on it? Let’s try something else. Oh, looking good. We got him. We got him. Good, good, good. Alright, maybe he uses this button, so show him that button. Alright, tell the others.”
What the internet is really doing, is trying to use past data about us to find what statistical bucket to put us in, and then do whatever it has to in order to get us to behave in a way that is true to our statistical profile. And that means whittling down the anomalous behavior—all the stuff we did that wasn’t what the internet expected us to do.
That’s where I find it’s kind of—I hate to use the word—dangerous. But that’s where it is, kind of dangerous. That’s where the anomalous behaviors and new ideas and the strange things that make us human and that lead to novel solutions to the world’s problems—all those end up being ironed out of the human experience.
Travis’ Law, a term recently coined by Travis Kalanick, the founder of Uber, says that a product’s superiority to the status quo should entitle it to be above the law. Essentially, a “superior” product should be “allowed” to keep breaking the law, because if people are just given the opportunity to try it, they will defend it and demand its right to exist. Any normal or decent person would be like, “Oh, I’m breaking the law; I should stop.” Enabling that kind of non-conforming behavior seems to be a flaw in the armor of this system. It promotes that almost sociopathic behavior of, “I’m just going to keep driving forward, no matter what they say.” The Kardashians do it too. Any normal person would be horrified to be called the names they’re called every minute in threads online.
It seems to favor those with antisocial, sociopathic, or narcissistic tendencies over those with neurosis, for example. Neurosis is when you’re constantly questioning yourself. You get one negative retweet, and you’re like, “Oh, oh. Should I stop? What should I do?” Whereas if you have tendencies that tilt toward grandiosity or a blatant disregard for what anyone else thinks, you’re just going to barrel on through—because you can do no wrong and you’re so great.
It’s kind of a perfect environment for those kinds of people, isn’t it? It’s certainly that kind of sociopathic behavior that works in extreme venture capitalism, because you’re going to be creating a company that’s going to hurt a lot of people and the environment. You need to be okay with externalizing that pain and suffering to everybody else for your own good.
I talk to a lot of “low-level” billionaires—they’re not quite that kind of sociopathic. They made money on a hedge fund, or they invested in real estate at the right time and they got a few hundred million or maybe a billion out of that. So, they don’t have enough to build a rocket ship to get off the planet, but they have enough to know that if the world turns, people are going to come kill them.
They’re in an odd place. It’s like a new upper-middle class of a sort. They “only” have between 100 million and two billion dollars. But, they’re the ones building bunkers and shelters, and buying space on protected eco-farms—just in case contemporary society collapses.
This all plays into wealth in a deep way. Tech and finance leaders are creating a world within our world, and they don’t want us to notice the huge disparity the one percent has created in the last 40-50 years. Just this past week, we had Howard Schultz not wanting to be called a billionaire and just wanting to be called “well off.” And in parallel, at the World Economic Forum in Davos, we saw the first panel in a decade to discuss taxing the rich, as opposed to philanthropy, as the solution for the disparity in wealth. The former CTO of Yahoo!—the same CTO who in 2012 was part of the group that oversaw the company’s destruction, all the while paying themselves millions—basically says, “Are we just going to talk about taxes the whole time? This is boring and one-sided.” Because he’s all for discussing solutions to the mess he helped create—as long as his lifestyle doesn’t have to change an iota. So how do we as a society address this?
It’s really hard to take their money after the fact; and it’s not even the cash that we need. The US Federal Reserve and the banks printed so much money anyway, so they give us some. It kind of doesn’t matter. They’re shoving their cash into these giant buildings that are going up in San Francisco and New York. It’s like, everybody is laundering money, whether they got their money through legal ill-gotten ways or illegal ill-gotten ways. They’re just stuffing it in there.
So, I think taxation could be a good thing, if we could go back to those radical Reagan days. My god, Reagan would be considered a socialist today, because the solutions that Bernie or Ocasio-Cortez suggest are nowhere near as radically socialist as Ronald Reagan’s policies. But while that would be a start, I think that the real object of the game shouldn’t be to redistribute the spoils of capitalism after the fact, or to take 99 percent of their money. It should be to pre-distribute the means of production, before the fact.
Perhaps, if there were a different tax code, the ultra-wealthy would be more willing to do that. It wouldn’t be to their advantage just to try to accumulate capital. But I think a more crucial tax code change would be to swap the capital gains tax with the dividends and payroll tax.
Right now, if you make money with money—make money on your capital—just by growing a company, you barely pay any tax on that money. Whereas, if you earn your money, by actually selling a product or creating value, you pay super high tax on that. So, our tax code is saying, “Don’t earn your money by doing something for someone. Earn your money by taking someone else’s money. That’s the only way we’re going to reward you.” That’s what’s led to this kleptocratic kind of capitalism that we’re living in today.
Let’s touch on the topic of managing behaviors as a way to influence outcomes. I often think about how we could influence and incentivize beneficial behaviors—Team Human stuff—in the tech and financial communities. I once brought this up in a talk that I gave, noting that many developers are pure problem solvers, and it’s no secret humans are considered flawed and problematic. So, the human race becomes a problem to be solved by technology. But, many developers—although not all developers—are not the most outgoing people. They tend to have a smaller group of human relations and many of them experience their most significant relationships online. Most are just out of school. They lack human interaction and diverse experiences and, as a result, the solutions they develop don’t exhibit a lot of empathy.
How can that change? Do we bring it into the engineering school level? Start teaching engineers to understand other aspects of humans when developing solutions—almost like ethics training? Ethics was only brought into law school after it turned out the people who had been indicted under Nixon were almost all lawyers. How do we retrain people to be part of Team Human?
A few years ago, Mayor Bloomberg sparked this big bidding war when he said he’d give half of Roosevelt Island, this little island right off Manhattan, as campus space to whatever university proposed the best engineering school. Cornell won, and they built this giant campus known as Cornell Tech. Tata and other corporate giants have buildings there. Billions of dollars went into building this giant tech campus. They’re essentially trying to make it Silicon Valley East, creating startups and all that.
I offered to come and teach the culture and anthropology of technology—the ethics of technology. To look at how we can build apps that don’t externalize their costs onto labor or onto land, that have a social justice component, so we can understand how it all fits together. Their perspective was basically: ethics slows development. They see ethics as anathema to technological progress.
The problem is, if you’re already thinking that ethics is slowing you down or distracting you from what you’re doing, then you’re in the wrong frame of mind from the beginning.
Ethics is fuel. When you see, “Oh my gosh, look how black people are getting shot by cops. What can we do with technology to enhance a police officer’s ability to distinguish friend from foe? What could we do?” There are so many real problems out there that could be solved, but instead the technologies are developed in a vacuum. Then they say, “Well, what could we do? How does this technology promote the sale of something that we already do? Or the establishment or reinforcement of a power dynamic that we already have?” These supposedly disruptive technologies are actually reactionary. They prevent change, because they are what keeps the entrenched systems of power in place.
Ethics, laws, regulations—they’re all viewed as ruinous to growth. Recently, one US city was trying to rein in the scooter companies and their parasitic behavior. The city said it would award licenses to companies to offer the service, but asked them to account for the needs of the poor and others who wouldn’t necessarily be able to afford or use these things easily. A very small ask. One of the companies that didn’t get a license to operate said in a legal filing, “Hey, that’s just slowing down our progress and it’s not our problem.” This is a company that pays no taxes, creates dangers on sidewalks and roads, puts a burden on the infrastructure, and yet feels it doesn’t owe its fellow citizens a thing—that we should all be happy they’re taking from us and making money on our backs.
Right, it’s also partly because these tech developers are still relatively uneducated. There are college freshmen and sophomores who drop out because they came up with some app-business-plan-thing, meanwhile their brains aren’t even fully developed. It’s not until around age 26 that the myelin sheath on your prefrontal cortex fully develops. So, impulse control hasn’t really taken hold yet. On top of that, these young tech developers likely didn’t take economics. Sociology. Anthropology. They assume that the rules of corporate capitalism are the given circumstance of nature.
They have their app and then they go to Goldman Sachs and say, “Daddy, what do I do?” Then, it’s basically like a restaurant owner going to the mob, except the mob is Goldman Sachs. The app becomes the excuse for the investment scheme and they have to keep pivoting that app as long as they’re trying to sell investment—which is essentially as long as they stay there. The kids don’t understand any other way. It’s not that they’re unethical, it’s that they’re a-ethical. They don’t know that there’s another option. They don’t know that you can devise a business plan in ways other than how Goldman wants you to.
What allows for your idea of Team Human to take root? Is it going to require something more along the lines of Naomi Klein’s Shock Doctrine? Is there going to have to be a systemic disruption like the last financial crisis, or can it be built layer by layer? Or does it matter?
Well, the more we can bring ourselves to build community ties, mutual aid, and mechanisms for teamwork now, the less of a shock we’ll have to undergo when the shit does hit the fan later. I’m part of the 99.9 percent of academics who believe that climate change is real and manmade. I realize it seems that 30 percent of America or more doesn’t, which is shocking in its own way. But I get it. The rules do mean things, and it’s not beyond neoliberals to fake global catastrophe in order to, I don’t know, get money for solar power companies. But I think it’s real. I’ve seen the scientific proof. I feel like there are such shocks coming between now and 2030, as the temperatures continue to rise, that there will be no question we understand there’s a major crisis underway.
It’s just the way that we seem to be priming our populations now for that crisis, building walls for example, or rejecting immigrants and contextualizing refugees as less than human. It seems to me that is a society preparing itself for the inevitable, inhumane treatment of the rest of the world.
What are some of the other things you think will bring this concept of Team Human and allow for the concept to start moving forward? I’m sure that’s also going to probably play out in your future writings, as well as in your podcast. But what are some of the pathways there that you think will unite us?
This might sound sappy to some people, but for one, it would be devoting time to establishing human relationships. Starting with kids, at the school level, it would mean creating time for them to actually just be in the same space with each other, not looking at screens, and teaching them how to establish rapport with one another. How to make eye contact.
Every year I have more students come to me the first day of class with a note from their doctor or their psychiatrist saying, “Please excuse Johnny from all class presentations and classroom discussion. He’ll learn, he’ll do all the readings, but don’t make him look at you or make eye contact.” And I understand it’s real. I feel bad, I understand. Social anxiety is real, and there may be lots of other environmental reasons for it, but one of the reasons is a lack of social practice. That’s the beginning.
We also have to stop trying to unite with people over ideology. That’s never going to work. We’re always going to disagree. But you can still unite with people over tasks. Whatever our ideologies, let’s clean up this park. Let’s fix this school; that wall over there. Let’s deal with these homeless people in our town. Let’s start a soup kitchen, or a food bank—those kinds of things you can do whether you’re right-wing or left-wing. You just do it. You engender a spirit of mutual aid. Don’t call it socialism. You just call it “people helping each other out.” Because that’s what we do. It begins a process.
In a way, there are parallels here, in the sense that we can take a lot of these technological tools in this landscape that we’ve created. We can take this sort of hyper-reality that’s been made and plug into it a much more humane way of existing.
But how do we exist within this kleptocratic hyper-reality and move towards being more “human”? We’re talking via Internet video right now. You probably wrote Team Human on a desktop computer. How do you find ways to be connected to technology while allowing yourself to open up and have a deeper, more personal connection?
The most important thing about how I use technology is that I still go online; I don’t live online. I know that’s such a quaint idea, but back in the day, you used to make a conscious choice. “I’m going to ‘go’ onto the internet now and do this.” So, my life is not wired or wireless. I’m not quite in the Internet of Things. Who knows, maybe some of the “things” I have will go obsolete and all become embedded, and I’ll be online that way.
But, I think it’s important to treat these technologies as drugs. I remember Timothy Leary’s advice about drugs was always, “Look into the eyes of someone who’s on the drug and decide if that’s the place you want to be.” You just ask yourself, “Do I want to be on Facebook right now? Do I want to be on Twitter right now?” It only takes an extra two or three seconds every couple of hours.
I think that’s just as true of this stuff. I mean, you go to a monastery and you’ll find a monk who’s on a vow of silence for a week or a month. That’s not just because of their ego and not wanting to hear their own voice; it’s because they understand that even language is a drug. English is a drug. It’s an operating system for your mind. So, if you’re “on English,” you’re going to think of things as subjects and verbs and nouns and predicates. It’s a way of understanding the world. So they want to go “off” English and experience whatever that is, that holism of a pre-linguistic sensibility. And it’s not that language is bad, it’s just a drug, an operating system.
When thinking about limiting our exposure to things like tech gadgets and social media, I still like the Jewish idea of the Sabbath as a great excuse to just not be accessible.
Are there any other points you’d like to drive home before we wrap it up? I’ve loved our talk—this has been fantastic, and a great addendum to your book and podcast.
Maybe it’s because of the way I named Team Human, but I’ve noticed one thing that happens with people who like the idea. Usually, instead of reading the book, they just want to “join” Team Human. You’re already on Team Human. What I’m basically trying to say—it’s almost sarcastic—is join the freaking human race. Humans are a collective. We are a team thing. So, people are acting as if it’s sort of incumbent upon me to create this organization that people can join. “Well, let’s join this ‘Team Human’ and organize and figure out how to save the planet and all that.” But there are already so many organizations out there.
I would rather think of Team Human as pollen. Take Team Human into the thing you’re doing. Find rapport, intimacy at your town hall meeting. At your school board. At your climate change meeting. At the biodiesel co-op you’re starting. One thing I am impatient about: I don’t think there’s time to form yet another group just to sit around and figure out what it is we want to do. They’re all out there. Everybody’s doing it. It’s active. Wherever you are, there are people already gathering to address real issues. I would say, even if those things are uncool and there are people there you don’t like, that’s the work—to do it with them. It’s not to find more fun progressive people who agree with us. It’s just to get the work done. Then, establish new sorts of relationships across the board.
Douglas Rushkoff was named one of the “world’s ten most influential intellectuals” by MIT, and has long been a favorite of ours. He is an author and documentarian who studies human autonomy in a digital age. His twenty books include the just-published Team Human, based on his podcast, as well as the bestsellers Present Shock, Throwing Rocks at the Google Bus, Program or Be Programmed, Life Inc, and Media Virus. He also made the PBS Frontline documentaries Generation Like, The Persuaders, and Merchants of Cool. His book Coercion won the Marshall McLuhan Award, and the Media Ecology Association honored him with the first Neil Postman Award for Career Achievement in Public Intellectual Activity.
Rushkoff’s work explores how different technological environments change our relationship to narrative, money, power, and one another. He coined such concepts as “viral media,” “screenagers,” and “social currency,” and has been a leading voice for applying digital media toward social and economic justice. He is a research fellow of the Institute for the Future, and founder of the Laboratory for Digital Humanism at CUNY/Queens, where he is a Professor of Media Theory and Digital Economics. He is a columnist for Medium, and his novels and comics, Ecstasy Club, A.D.D., and Aleister & Adolf, are all being developed for the screen.
Keep up with Doug on LinkedIn.
Illustration by Damian Didenko.