A bumpy ride. (AP/Elaine Thompson)
Chinese companies that helped fuel the global hoverboard craze are unraveling rapidly, after Western retailers like Amazon demanded new safety and legality standards earlier this month. Some are so upset by Amazon’s actions that they plan to protest outside Amazon’s Guangzhou office later today (Dec. 29).
“The whole industry has been wiped clean,” Lou Bin, a Hangzhou-based hoverboard re-seller whose business has suffered after Amazon introduced new requirements, told Quartz. “Small factories have exited the market, and everyone is paying more attention to safety and patents.”
In mid-December, Amazon, one of the last major Western retailers where US and UK consumers could buy no-name Chinese hoverboards, introduced strict new requirements that wiped out most of its online listings, in response to a string of fires linked to the boards (which don’t actually hover, but roll on the ground).
Amazon told vendors they must submit documents proving the boards met specific safety standards, along with a letter from a lawyer promising they would not implicate Amazon if they were sued for patent infringement by Razor, the American toy company that owns the US patent for the hoverboard (though the patent remains in legal limbo). After Amazon’s missive, e-commerce sites like Overstock stopped carrying them altogether (although Alibaba, China’s e-commerce giant, continues to offer them).
As a result, only seven hoverboard brands are currently listed on Amazon in the US, a fraction of the dozens of importers and exporters that sold hoverboards on the site just a month ago. Amazon’s UK site now only sells hoverboard accessories, not actual hoverboards.
Millions of the boards were exported from China’s electronics manufacturing center of Shenzhen in the past year, but production has now stalled.
Feng Jian of Shenzhen Bojulong Display Technology, a manufacturer, told Quartz he estimates more than half of Shenzhen’s hoverboard factories have stopped making the product completely. His company’s orders fell 50% after Amazon’s crackdown, he said, and he cut his staff from 500 to 100.
“Before we were making about 1,000 hoverboards a day. Now we’re doing a few hundred,” Feng Jian said. “As soon as Amazon sent out the notice, I went and laid off a few staff,” he added.
British importers who sold hoverboards from China on Amazon have been hit hard too. Amazon advised all of its hoverboard customers to chuck devices bought before the crackdown, and told UK customers they’d be eligible for refunds. This caused a surge in returns, which ended up coming out of the UK vendors’ pockets. Amazon has not made a similar promise about refunds in the US.
Meanwhile, many Chinese hoverboard distributors allege Amazon illegally froze their online accounts on the e-commerce site, cheating them out of millions of dollars of sales they made before the crackdown. Today, dozens of these Chinese companies are planning to protest outside of Amazon’s Guangzhou office, several told Quartz, demanding the cash they say Amazon owes them.
“These are not requirements, this is a violation of the rights of sellers,” one former Amazon merchant based in Shenzhen told Quartz. “Amazon never negotiated with us, and what they have done is illegal.”
Amazon has not responded to multiple requests for comment or an interview.
Another seller, Ma Ning, told Quartz he sold 600 hoverboards in October over the course of ten days, earning 1 million yuan (about $154,000) in sales, but still hasn’t received any payment from Amazon.
“The patent and safety requirements are fair,” he told Quartz, but Amazon should not be deducting refunds from sales they made before the crackdown. “They’re like kings, and we’re getting massacred.”
THE POWER IS YOURS
We’re the reason we can’t have nice things on the internet
Whitney Phillips, December 29, 2015
We’re all responsible for contributing to a toxic online culture. (Fanqiao Wang for Quartz)
In a recent New York Times piece, Farhad Manjoo laments the increasingly shrill tone of political discourse. “If you’ve logged on to Twitter and Facebook in the waning weeks of 2015, you’ve surely noticed that the internet now seems to be on constant boil,” he writes.
But has the online world really entered a phase of permanent froth?
Vitriolic content may be par for the course in certain political circles. But not every story of online sparring in 2015 ended badly. On Twitter, a Jewish man befriended a member of the notoriously rancorous Westboro Baptist Church because, in his experience, “relating to hateful people on a human level” is “the best way to deal with them.” Using the same platform, a digital activist reached out to an Islamic State sympathizer and, through pointed, thoughtful engagement, convinced him to think differently. Feminist writer Lindy West engaged with one of her most mean-spirited online antagonists, and in the process came to see his humanity as clearly as he came to see hers.
These are inspiring stories. On the surface, they seem to provide heartening counterexamples to Manjoo’s claims. When placed in context, however, they prove to be the exception rather than the rule. The majority of stories about online harassment resist happy endings. Their conclusions tend to be unsatisfying or upsetting, if they end at all.
Women in the gaming and tech industries–women generally, though queer women, trans women, women of color and disabled women are particularly at risk–are ruthlessly targeted on a variety of platforms. School campuses are threatened with yet another shooting spree. Internet users face identity-based harassment and libel and find themselves on the receiving end of unconscionably racist and vitriolic content. The unluckiest of the bunch are subjected to unwarranted police raids.
Sometimes the police identify the most extreme online harassers. (Several of the above cases resulted in arrests.) And sometimes online antagonists are open about their identities. But very frequently, the people subjected to online abuse don’t know who is responsible, or even why they’re being targeted.
Maybe they’re not being specifically targeted at all. Maybe the abusive behavior is more diffuse, directed at women or people of color generally. Maybe the behavior is targeted but doesn’t meet the legal threshold of harassment. As I braced myself for the response to the publication of my book on trolls, for example, I met with the chief of campus police to see what preemptive security measures we could take and who to call if something happened. I was told that an email containing the threat “I am going to rape you” was legally actionable—potentially, assuming that the person or IP address could be traced. An email that said “I hope you get raped” was not. Obviously both were, you know, bad. But as the chief explained, making a specific threat is different–legally speaking–than being vaguely threatening.
The fact that there are so many ways to be antagonized online, and so many different kinds of antagonizers, makes it difficult if not outright inadvisable to forward a universally applicable set of best practices in response to online harassment. What might be appropriate in one case (naming and shaming; counter-antagonizing; refusing to engage; minimizing impact; maximizing impact) might be counter-productive, ineffective, or dangerous in another.
In some cases, the preferred response is logistically impossible. We might, for example, wish that we could relate to a harasser on a human level. But what if there’s no hint of who the harasser might be, or what shred of humanity we might try to speak to? Where would we even start?
This is not to say that it’s impossible to take action against online antagonism. It’s critical to talk about what can be done to minimize or mitigate its impact. To this point, Anita Sarkeesian, Renee Bracey Sherman, and Jaclyn Friedman recently teamed up to create an online safety guide aimed at addressing and hopefully preventing the most damaging and persistent forms of harassment. These are necessary–if depressing–conversations to have.
That said, focusing only on individual instances of bad online behaviors, and only on the guilty parties, risks framing the issue of online harassment in terms of a “them” who harasses and an “us” who does not.
On the surface, the distinction between “us” and “them” is apparent. Certain behaviors are just gross; certain people are just mean. If only we could figure out how to deal with those specific individuals, and their awful behavior.
The issue is that they aren’t the only problem. Moreover, they are able to thrive in so many contexts, from politics to sports to entertainment, to say nothing of the online bullying and harassment of everyday people. As much as we might condemn these behaviors, online instigators have certainly gone forth and multiplied. If it really is the case that online harassers are fundamentally different than the mainstream “us,” then why are antagonistic behaviors so common, online and off? Why do we sometimes find ourselves slipping into more subtle versions of precisely the behaviors we condemn in them?
To use a gardening metaphor, it’s not just the specific weeds that are at issue here. It’s also the soil that nourishes those weeds. That soil nourishes everyone, as Amanda Hess emphasizes in her discussion of the gendered and raced–in other words, embodied, offline–history of internet culture.
In order to change the ugly tenor of online conversations, we need to think collectively about how we might make the soil less hospitable to invasive species. This process begins with the acknowledgment that none of us are above self-reflection—and that we all have a part to play in improving the health of the garden. With that in mind, here are the steps we need to take to tackle our hate-filled online culture.
1. Rethink the “trolling” umbrella
I began researching and writing about subcultural trolls–those who self-identify as such and who partake in highly stylized language and behaviors–in 2008. In early 2015, MIT Press published my book on the subject. Before the book was even at the proofreading stage, I had grown wary of the term “trolling,” at least when used as a vague behavioral catch-all.
By then, “trolling” had taken on more connotations and meanings than could reasonably be contained by a single term–everything from the kinds of ritualized trolling behaviors I’d been researching to mean tweets to holding an opinion someone else disagrees with to taunting the police to being a horrible roommate to outright harassment. The term had become so unwieldy that it was essentially meaningless. I wouldn’t know how to respond when journalists asked (and they always asked) what trolling was and why people did it.
But as I explain in this article, the imprecision of the term is the least of its sins. For one thing, the term trolling provides an all-too-convenient rhetorical out for aggressors: “I was just trolling, I didn’t really mean those racist or misogynist things I said.” In other words, “Stop being a baby or forcing me to be held responsible for my own actions, god.”
As it’s frequently used, “trolling” thus implies that participants are somehow playing, and that the antagonistic interaction is a game–one with rules dictated by the aggressor, and which only the aggressor can win. Both figuratively and literally, the aggressor is always the subject of the sentence. Everyone else is their object.
Furthermore, the implication that trolling is playful, disruptive for disruption’s sake, or fundamentally trivial (an attitude reflected in various year-end compilations of the “best trolls” of 2015) minimizes the experiences of those caught in the crosshairs of online harassers.
This problem is most conspicuous in the wake of the unmitigated shitshow that was GamerGate. Somehow, a year’s worth of cacophonous, horrific, violently misogynist attacks against women in the games and technology industries was “trolling,” a term also applied to silly comments posted in response to news articles. As Anita Sarkeesian, one of GamerGate’s most high-profile targets, notes, this framing obscures what was actually happening: toxic, abusive, violent misogyny. Harassment. Hell for the women involved.
We need to stop framing online harassment with the aggressors’ chosen terms, deferring to how aggressors prefer to be described and understood. We need to describe behaviors based on the impact they have. Highlight harm, not intent. Whistleblow, don’t whitewash. So: if a person is engaging in violently misogynist behaviors, then call it violent misogyny. I don’t care if the person responsible claims they were “just trolling.” If a person is so damn worried about being labeled a violent misogynist, then how about not engaging in violently misogynist behaviors, hmm?
This seemingly small rhetorical shift won’t undo harm. Whatever you call these behaviors, they can be devastating. But rethinking the trolling framework will help validate the experiences of people who are targeted by online harassers, preempt bogus victim-blaming logic, and empower individuals to tell their own stories–three important steps toward rewriting the rules of online discourse.
2. Stop incentivizing problematic online behavior
There is a symbiotic relationship between subcultural trolls–here I am using that term very specifically, referring to past research–and mainstream media outlets. During the “golden age” of subcultural trolling, which lasted from about 2008 to 2011, self-identifying trolls on and around 4chan’s /b/ board benefited from sensationalist, emotionally exploitative media coverage. Meanwhile, media outlets benefited from subcultural trolls’ sensationalist, emotionally exploitative behaviors. They were, in so many ways, perfect bedfellows.
Although subcultural trolling has since undergone a profound shift, the same basic argument holds. The primary reason that so many people engage in outrageous, exploitative, aggressive and damaging behaviors on the internet is that outrageous, exploitative, aggressive and damaging behaviors on the internet get the most attention. Attention means amplification. Which means more eyeballs glued to a story–and to that person’s hatefulness and delusions.
People engage in atrocious behavior, in other words, because it’s worth their time and energy to do so. It works. The reanimated corpse of P.T. Barnum that we refer to as Donald Trump knows, for example, that when he says something ugly and racist about Muslims (again), all anyone will talk about is the ugly, racist thing Donald Trump said about Muslims (again). Mass shooters know that before the body count is even confirmed, all anyone will talk about is every little goddamned thing they ever posted to social media, and that for the next week, month, year, they will be the subject of endless speculation and attention. A star is born, thus begetting future stars.
It should go without saying that thoughtless amplification of incendiary content can have a devastating impact on the people affected. It should also go without saying that journalists have a job to do; they can’t not cover the news, even when the subject is, in a word, disgusting. There is an inherent tension between these two principles–a tension this article also navigates. While there are no perfect or easy solutions, there is a difference between engaging with the facts of a story and sensationalizing a story, flattening its subjects into fetishized objects, and essentially converting bad (or tragic, or just plain gross) news into an opportunity to sell more ads.
Concerns about amplification shouldn’t be restricted to media professionals. Take online disaster humor, which is often created and spread by individual internet users, and then further amplified by journalists covering the story, all but ensuring the jokes’ long and healthy half-life.
“Funny” memes in response to shooting sprees might feel like harmless jokes to participants. But they can be profoundly re-traumatizing for survivors and victims’ friends and family. That’s true even if participants have no intentions of harming anyone. Online content, after all, is always just a hotlink away from reaching far more people than planned. The Facebook page of one of the people who died. Someone’s mother’s Twitter feed.
Ryan Milner, my Between Play and Hate co-author, makes a similar point in his exploration of racist and sexist expression on 4