Cory Doctorow made a name for himself writing about the changing world of the internet and technology, both on the blog Boing Boing and in his science fiction. Through novels like Down and Out in the Magic Kingdom, Little Brother, Makers, Homeland, and most recently Walkaway, he has examined everything from the future of copyright to surveillance states, online communities, and more.
His next novel is Attack Surface, set in the same world as Little Brother and Homeland. Those books take place in a near-future dystopia in which a radicalized young man works to undermine the Department of Homeland Security after it establishes a police state in the wake of a terrorist attack, and they have been staunch arguments against the erosion of freedom and privacy that has taken place over the last two decades. The new book comes out on Oct. 13, and Tor will also reissue Little Brother and Homeland as an omnibus edition on Aug. 11.
I spoke with Doctorow about the upcoming novel, and about how he looks back on the way his work has changed since he started writing.
Polygon: You released Little Brother and Homeland in 2008 and 2013, respectively. Looking back on those two novels (and the short story, Lawful Interception), how does the world look, more than a decade on?
Cory Doctorow: There is a (largely false) narrative about the prevailing view of networked computers and their relationship to human thriving in the 2000s and early 2010s, namely that "techno-optimists" believed some kind of historical force, abetted by networks' resistance to control, would free the world from dictatorship.
While there were some people (the professional thought-leaders, perhaps) who felt that way, my colleagues, the people who gave material and moral support to (or directly participated in) pro-democracy movements, had something different in common: not "this will all be great," but rather "this could be so great … if we don't screw it up."
After all, you don't build a censorship-resistant server or a covert VPN because you think the other side isn't abusing technology! And you don't wade into policy fights over the state's power to search and surveil the internet if you think everything will work out in the end. The supposedly naive activist was (and remains) fueled as much by sheer terror of networked authoritarianism as by hope in the liberating power of networked technology.
But things have changed! The last decade has seen two developments:
First, the continued rise of networked authoritarianism, abetted by monopolistic tech companies, realizing tech activists' worst fears. Second (and paradoxically), a growing recognition that those activists were 100% right to worry about what a networked world would become without some kind of human rights framework, alongside a broader revisionist project that faults them for naively thinking such a framework would emerge spontaneously, without human intervention.
There are ways in which this pattern mirrors other policy fights, especially those where activists try to avert a harm that is still decades away. Think of climate change: for decades, climate activists were primarily occupied with convincing people that climate change was happening at all. The combination of long timescales (the disconnect between driving your car today and your grandchildren contending with wildfires) and well-funded disinformation campaigns promoting climate denial made convincing people that climate change was real a brutal, grinding, decades-long project.
Those activists had some successes, but mostly, the thing that convinced people climate change was real was climate change itself. The "policy debt" run up by decades of inaction means that floods, droughts, wildfires, epidemics, hurricanes, and other climate emergencies now arrive with regularity, and they have more persuasive power than even the best activists.
The problem now is not denial, it's nihilism. The climate emergency is so far advanced that it's easy to despair of ever averting it: to trade inaction driven by refusal to recognize the problem for inaction driven by the belief that nothing can be done to reverse it.
The same thing has happened with privacy, surveillance, monopoly, and the online economy. After years of inaction (driven by a combination of sincere doubt about whether the problems would ever arrive and well-funded campaigns suggesting that tech companies and governments should be left alone to build out surveillance and control systems), it is now obvious to everyone that something terrible has happened to our electronic nervous system. The problem now is convincing those same people that it's not too late to do something about it.
So much has happened in the world over the last decade. How has writing about rights and oppression changed for you in that time?
There have been two major changes, I think.
First, tech-centered talk about human rights and civil liberties was once dismissed as a parlor game. Tech users were assumed to be affluent, male, and white, so concerns about those users' rights never felt urgent: after all, these were people who already had plenty of rights and very little to worry about in those departments.
But the reality of who uses technology today is … everybody. A digital divide persists, but we're at a point where even homeless people tend to have smartphones, and those who don't rely on libraries for internet access. Broad access to technology across every age, gender, race, and geography means, first, that it's impossible to talk about tech and rights without talking about how those struggles intersect, and second, that any project to build a human rights framework into tech touches all those other struggles, too.
The other major change concerns the outlook of tech workers themselves. I've seen three distinct waves of tech workers during my time in the industry (and in the movement):
- The pre-dotcom-bubble wave: generally affluent, passionate hobbyists. Computers were expensive, so computer users mostly came from well-off backgrounds, but computers were not yet a path to riches, so the people who worked with them were (often) driven by love of the field, not (just) dreams of a fat payday. This group often combined a missionary zeal to get everyone, and all the world's information, online with a sense of stewardship over the networks and systems they ran, acting as volunteer caretakers of those systems and the people who depended on them.
- The post-dotcom gold rush: After the dotcom crash, tech became ascendant once again, with life-changing paydays for a few lucky founders and their earliest hires, and merely comfortable ones for many more. While the pre-dotcom workers were often self-taught, or had pursued a CS degree or graduate work out of love for the subject, a new cohort signed up because of the possibility of a big payout from an IPO or an acquisition. These were people who, a generation earlier, might have gone into an MBA program, or maybe law school. They wanted to get rich, and tech looked like a sound bet. The influx of people chasing money ahead of mission diluted that sense of stewardship over the network and its users, even as it amplified the missionary zeal. Many of these workers, even very talented ones, treated the tech giants as permanent fixtures: they might know, intellectually, that Facebook had displaced Myspace, that Google had vanquished Altavista and Yahoo, and that Apple had clobbered Nokia, but they couldn't truly believe that any of those companies would ever be unseated.
- The post-2008 financial crisis wave: Growing awareness of the unequal, discriminatory, and reckless conduct of the industry (from mishandled moderation to climate change to Big Pharma's role in the opioid epidemic) has workers flexing their muscles as labor. As a workforce in a tight labor market, at low risk of being fired and likely to find new work easily if they are, they can press their employers and regulators to reckon with the human rights consequences of their commercial activity, from drones to search to surveillance and harassment. Initiatives like Tech Won't Build It and the Googler Walkout (20,000 people!) are finding common cause with broader movements like Black Lives Matter and Extinction Rebellion, building solidarity with their less privileged brothers and sisters, and embracing the understanding that the cause of technological freedom is inseparable from the cause of human freedom.
What is your latest novel, Attack Surface, about? It sounds like your Little Brother protagonist, Marcus Yallow, is in for another rough time.
I often write as therapy: I had been watching the rise of increasingly effective surveillance technology, its entanglement with authoritarian projects, and the growth of the industry that supplies it (Palantir, for example), and it was getting more and more worrying. At hacker conferences like Defcon and HOPE and CCC, I met security researchers who cared about human rights but drew their paychecks from the companies undermining them.
Putting myself in the head of someone who knew everything Marcus knew, but came to a different conclusion about what to do with it (and who was conflicted about it, too), was a way of managing my own anxiety, of understanding how people who seemed good and honorable in person could do these terrible, ugly things.
They say no one is the villain of their own story, but Masha actually is: she knows she's doing wrong, but she tells herself that on balance it doesn't matter, and that if it does, she can offset it with the good deeds she uses to balance her moral books. Marcus, by contrast, lacks the self-awareness to understand how he might be the villain of someone else's story, so he never sees himself as one. That leads him to put other people in harm's way without meaning to. Masha knows exactly what she's doing.
The main character of the novel, Masha Maximow, works for a transnational security company, where she helps build the tools governments use to spy on their citizens. I'm curious how she rationalizes doing that work, even when it clearly weighs on her.
No one is the villain of their own story. We all make compromises, and we usually make them as a series of small steps, each of which seems reasonable. You have a code of ethics, but you deviate from it a little, and now that new position is your code of ethics. The next time you deviate, you're measuring not from where you started but from where you are now, and that step, too, seems reasonable. One inch at a time, you travel miles from where you began, and unless you look back over the journey, it's easy to feel like you're doing the best anyone could in an imperfect world. But if you do look back to where you started, it can force you to realize that you are doing horrible, terrible things.
Masha works alongside people trapped in that cycle, but she isn't one of them: she knows exactly how she got where she is and remembers each and every compromise. She does it anyway, and rather than rationalizing, she compartmentalizes. The part of her that wants to do a good job for her bosses installs spyware to catch and intimidate dissident movements. The part of her that cares about other people secretly trains those vulnerable dissidents to evade the very software she just installed. She understands her reasons for doing both and doesn't try to reconcile them: she lives in the contradiction.
There are many places around the world where we see repressive governments using technology as a tool of control. What are some examples that inspired this story?
Companies like Hacking Team, NSO Group, and Palantir have built mass surveillance systems for both "developed" nations and poor, post-colonial states, helping dictators and autocrats entrench their power. Ukrainian authorities used IMSI catchers (fake cell towers, AKA Stingrays) to identify everyone who attended anti-government protests and send them threatening SMS messages; NSO Group software was used to target Mexican anti-corruption activists and was implicated in the murder of Jamal Khashoggi. Palantir's policing tools have put entire racialized communities under continuous surveillance, with algorithmic risk scores that condemn people for little more than the color of their skin.
How do you balance these two strains of activist thinking, the naive idealists vs. the pragmatists? How do you see that playing out in the real world?
I think the better frame is "tactics" vs. "strategy." As Chinese iPhone users discovered, the tactical choice to use an iPhone (because you have one, because it works, because it's convenient) can run counter to the strategic goal of resisting mass surveillance. When the Chinese government ordered Apple to remove working VPN apps from the App Store (and Apple caved), those users were left exposed to a surveillance regime that was, at that very moment, rounding up a million Uyghurs and putting them in concentration camps where forced labor, involuntary medical procedures, and rape are routine, and murdering members of another religious minority, Falun Gong, to harvest their organs.
Tactics and strategy are always in tension: when the Koch brothers want to fund the decarceration movement, but also want to crush the labor movement whose defeat creates the economic conditions that lead to mass incarceration, do you work with them? They can mobilize huge sums of money and resources for your immediate campaign, but they are also working to undermine your cause, and you lend your credibility to an organization that is your ultimate enemy.
I think the answer is "yes." To some extent, simply staying vigilant (reminding yourself that the Kochs and Apple are not your friends) can keep you from getting too comfortable and give you the awareness you need to walk away when the mask slips, but that's a hard discipline to maintain. At the same time, only working with people you agree with 100% makes for a shrinking, sectarian movement. It's a constant balancing act, and if I knew how to get it right every time, I'd be a much more successful activist!
What responsibility do you feel tech and software companies bear for how their tools are used around the world, for good or ill?
There are two ways to think about this. On one hand, building dangerous, harmful products is an immoral act. If you design your system to accommodate censorship and surveillance, and then, say, Bahrain or Saudi Arabia orders you to turn those capabilities on your customers, you are 100% complicit in that foreseeable outcome.
On the other hand, there is a subtler kind of complicity, which turns on whether your products can be modified by your users (or by technologists working on their behalf) to protect themselves from the consequences of your design decisions. It's one thing to design (say) Twitter in a way that enables mass harassment campaigns, but it's far worse to compound that problem by throttling or shutting down APIs, and by using patents, terms of service, and other legal tools to block the people who would build anti-harassment tools to protect themselves from the abuse your design enabled.
Shipping an imperfect design doesn't make you evil or careless, but shipping an imperfect design and then preventing others from correcting your mistakes makes you complicit in the resulting abuse and a poor custodian of your users' trust.
There has been a lot of reporting in the US about government officials and agencies adopting new technologies like facial recognition, predictive policing software, and machine learning. What lessons do you hope people take away from reading a book like Attack Surface?
I think we spend a lot of energy thinking about what technology does and not enough about who it does it for and who it does it to. It's one thing to use predictive policing tools to decide whom to police (empiricism-washing current practice), but you could use the same tools to mine years of historical police data for the subtle biases in policing, and use what you find as a roadmap for reform.
A better technological future requires more than better technology: it requires technological self-determination (the right to decide which technologies you use, and how), and democratic accountability, so that decisions about how technologies are developed and deployed aren't left in the hands of a small technical elite or unaccountable state officials.
Are you optimistic about the future?
I think optimism and pessimism are two sides of the same coin, and it's a debased currency. Optimists believe things will get better irrespective of our actions, and pessimists believe they will get worse, no matter what we do.
I am hopeful. Hope is a belief in human agency: that we, together, can steer our way to a better future through hard work, solidarity, and principled action. Hope doesn't require that you can chart a course from today's world to a better one, only that you can identify a single step to take in that direction, because from that new vantage point you might spot the next step, and the next.
Optimism and pessimism hold that humans are corks tossed on the winds of history. Hope holds that people have agency, that they are able to steer their way to a better world.
Attack Surface is available for pre-order.